\section{Introduction} \label{s:intro} \begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{sensors-at-jari.pdf} \caption{Multiple 3D LiDARs} \label{F:sensors} \vspace{-2em} \end{figure} LiDAR (\emph{Light Detection And Ranging}, sometimes \emph{Light Imaging Detection And Ranging} for the image-like resolution of modern 3D sensors) is one of the core perception technologies that has shaped the fields of Advanced Driver Assistance Systems (ADAS) and autonomous driving. While LiDARs are relative newcomers to the automotive industry compared with radars and cameras, 2D and especially 3D LiDARs have demonstrated high measurement accuracy and illumination-independent sensing capabilities for self-driving tasks\cite{thrun2006stanley}. Beyond automotive applications, LiDARs have also been deployed in wheeled autonomous robots, drones, humanoid robots, consumer-level applications, and at intersections in smart cities. The rapid development of research and industry relating to self-driving vehicles has created a large demand for such sensors. Depending on the individual perception application and operating domain, there are several key LiDAR performance attributes: measurement range, measurement accuracy, point density, scan speed and configurability, wavelength, robustness to environmental changes, form factor, and cost. As such, a large number of LiDAR manufacturers have emerged in recent years, introducing new technologies to address these needs\cite{yole2018}. With many different manufacturers and technologies becoming available, it is necessary to assess the perception characteristics of each device according to the intended application. 
In addition, while each LiDAR manufacturer subjects their products to quality tests (vibration and shock endurance, tolerance to electromagnetic interference (EMI), water and dust ingress protection (IP), operating temperature and pressure, measurement accuracy for different reflectors, etc.), LiDARs are meant for general use and not exclusively tested on vehicles. Furthermore, with LiDAR costs remaining high, it can be difficult to select the best LiDAR in terms of cost performance for a particular application. { \begin{table*}[t] \begin{center} \footnotesize \setlength{\tabcolsep}{5pt} \begin{tabular}[c]{p{3.3cm}p{5.2cm}lllp{4.5cm}} \hline\hline Dataset & LiDAR(s) & Image & Labels & Diversity & Other sensors, notes\\ \hline Stanford Track Collection\cite{stanford2011} & 1 (HDL-64S2) & - & 3D & E & GPS/IMU\\ KITTI\cite{kitti2013} & 1 (HDL-64) & Yes & 2D/3D & E & 3x Cam (Stereo), GPS/IMU \\ KAIST multispectral\cite{kaist2018} & 1 (HDL-32E) & Yes & 2D/3D & E/T & 2 Cameras, 1 Thermal (infrared) cam. 
IMU+GNSS \\ nuScenes\cite{nuscenes2019} & 1 (HDL-32E) & Yes & 2D/3D & E/T/W & 6 Cameras, 5 RADARs, GNSS, IMU\\ H3D\cite{h3d} & 1 (HDL-64S2) & Yes & 2D/3D & E & 3 Cameras, IMU+GNSS \\ ApolloScape\cite{apolloscape2018} & 2 (2x Riegl VUX-1HA) & Yes & 2D/3D & E/T/W & Depth Images, GPS/IMU, \\ LiVi-Set\cite{li-vi2018} & 2 (HDL-32E, VLP-16${}^{\bm{a}}$) & Yes & - & E & Dash-board camera, CAN (driving behavior dataset)\\ ArgoVerse\cite{argoverse} & 2 (2x VLP-32C) & Yes & 2D/3D & E & 7 Cameras ring, 2 Stereo cams, GNSS\\ FORD Campus\cite{ford2011} & 3 (HDL-64S2, 2x Riegl LMS-Q120) & Yes & - & E & Camera, $360\degree$ cam., IMU, INS\\ Oxford RobotCar\cite{RobotCarDatasetIJRR} & 3 (2x SICK LMS-151 (2D), SICK LD-MRS (3D)) & Yes & - & E/T/W & 3x Cameras, Stereo cam., GPS \\ Lyft\cite{lyft2019} & 3 (2 Pandar40 + 1 Pandar40 in Beta\_V0, and 2 Pandar40 + 1 Pandar64 in Beta\_Plus) & Yes & Yes & E & 6 Cameras, IMU, INS\\ Waymo\cite{waymo2019} & 5 (1 $360\degree$ $75m$ range, 4x ``HoneyComb'' $20m$ range${}^{\bm{b}}$) & Yes & 2D/3D & E/T/W & 5 Cameras \\ DENSE\cite{dense2019} & 2 (HDL-64S3, VLP-32C) & Yes & 2D/3D & E/T/W & Stereo Camera, Gated Camera, FIR Camera, Radar, laser illumination, weather station \\ A2D2\cite{geyer2020a2d2} & 5 (5x VLP-16) & Yes & 2D/3D & E/W & 6x Cameras \\ {\textbf{{LIBRE} (ours)}} & {10} (VLS-128\protect\footnotemark, HDL-64S2, HDL-32E, VLP-32C, VLP-16, Pandar64, Pandar40P, OS1-64, OS1-16, RS-Lidar32) & Yes & 2D/3D\protect\footnotemark & E/T/W & Camera, IMU, GNSS, CAN, $360\degree$ 4K cam., Event cam., Infrared cam., 3D pointcloud map, Vector map \\ \hline \end{tabular} \caption{Publicly available datasets featuring LiDARs (arranged chronologically and by number of LiDARs, names of LiDAR manufacturers are omitted for those models in this study). Diversity refers to changes in the data collected, as in types of environments (E), times of day (T), weather conditions (W). ${}^{\bm{a}}$The authors in \cite{li-vi2018} state they only used the HDL-32E. 
${}^{\bm{b}}$LiDARs proprietary and developed by Google/Waymo.} \label{tab:datasets} \vspace{-2em} \end{center} \end{table*} } In this study, we aim to collect data to enable the attribute analysis of several 3D LiDARs for applications in autonomous driving vehicles. We capture data to evaluate LiDARs in terms of: measurement range, accuracy, density, object detection, mapping and localization, and robustness to weather conditions and interference. During our study we collected a large dataset of vehicle-mounted LiDARs both in normal traffic scenes and in a controlled chamber for testing performance in adverse weather conditions. \addtocounter{footnote}{-1} \footnotetext{In addition to the VLS-128, the Velodyne Alpha Prime will also be added to the dataset.} \addtocounter{footnote}{1} \footnotetext{At the time of writing, 2D/3D data labeling is ongoing. Labels will be included for a subset of the dynamic traffic data.} Following data capture in the above environments, we released the {LIBRE} dataset covering multiple 3D LiDARs.\footnote{A teaser of the {LIBRE} dataset was released on January 28th, 2020 at \url{https://sites.google.com/g.sp.m.is.nagoya-u.ac.jp/libre-dataset}. The full set will be released during 2020. For additional details, please refer to the complementary video available at \url{https://youtu.be/rWyecoCtKcQ}.} It features {10} LiDARs, each a different model from diverse manufacturers. Fig.~\ref{F:sensors} shows some of the 3D LiDARs used in our evaluations. The {LIBRE} dataset includes data from three different environments and configurations: \begin{itemize} \item \emph{Dynamic traffic}: dynamic traffic objects (vehicles, pedestrians, bicycles, buildings, etc.) 
captured from a vehicle driving on public urban roads around Nagoya University \item \emph{Static targets}: static objects (reflective targets, a black car and a mannequin), placed at known controlled distances, and measured from a fixed position \item \emph{Adverse weather}: static objects placed at a fixed location and measured from a moving vehicle while exposing the LiDARs to adverse conditions (fog, rain, strong light) \end{itemize} The contributions of this work are summarized as follows. We introduce the {LIBRE} dataset including data from {10} different LiDARs in the above environments and configurations. We present a quantitative summary of the performance of the different LiDARs in terms of range and density for static targets, and a qualitative evaluation of their response to adverse weather conditions. While this paper offers some limited analysis of the large amount of data captured, the main contribution is the publication of a novel, openly available dataset which will allow many researchers to perform more detailed analyses and comparisons. This paper is structured as follows: Section~\ref{s:relworks} presents related datasets featuring LiDARs, while Section~\ref{s:dataset} describes our dataset. Section~\ref{s:env-dynamic} presents results on dynamic traffic scenes, Section~\ref{s:env-static} covers the static evaluations, and Section~\ref{s:env-weather} covers the weather chamber tests. Finally, the paper is concluded in Section~\ref{s:concl}. 
{ \begin{table*}[t] \begin{center} \footnotesize \setlength{\tabcolsep}{4pt} \begin{tabular}{p{0.09302\textwidth}|p{0.05632\textwidth}p{0.08058\textwidth}p{0.084395\textwidth}p{0.05414\textwidth}p{0.04832\textwidth}|p{0.0532\textwidth}p{0.06395\textwidth}|p{0.07958\textwidth}p{0.08558\textwidth}|p{0.06395\textwidth}} \hline\hline \T & \multicolumn{5}{c|}{Velodyne} & \multicolumn{2}{c|}{Hesai} & \multicolumn{2}{c|}{Ouster} & RoboSense \\ \T & \begin{minipage}{0.05632\paperwidth} \raggedright \includegraphics[width=0.041746\paperwidth]{vls128.pdf}\\ VLS-128$^{\bm{*}}$\\\cite{alphaprime} \end{minipage} & \begin{minipage}{0.08058\paperwidth} \raggedright \includegraphics[width=0.056376\paperwidth]{hdl64s2.pdf}\\ HDL-64S2\cite{hdl64s2} \end{minipage} & \begin{minipage}{0.081395\paperwidth} \raggedright \includegraphics[width=0.021516\paperwidth]{hdl32e.pdf}\\ HDL-32E\\\cite{hdl32e} \end{minipage} & \begin{minipage}{0.06395\paperwidth} \raggedright \includegraphics[width=0.02598\paperwidth]{vlp32c.pdf}\\ VLP-32C\\\cite{vlp32c} \end{minipage} & \begin{minipage}{0.05814\paperwidth} \raggedright \includegraphics[width=0.02606\paperwidth]{vlp16.pdf}\\ VLP-16\\\cite{vlp16} \end{minipage} & \begin{minipage}{0.04895\paperwidth} \raggedright \includegraphics[width=0.02926\paperwidth]{pandar64.pdf}\\ Pandar64\\\cite{pandar64} \end{minipage} & \begin{minipage}{0.06395\paperwidth} \raggedright \includegraphics[width=0.02926\paperwidth]{pandar40p.pdf}\\ Pandar40P\\\cite{pandar40p} \end{minipage} & \begin{minipage}{0.06395\paperwidth} \raggedright \includegraphics[width=0.02144\paperwidth]{os-1-64.pdf}\\ OS1-64\\\cite{os1} \end{minipage} & \begin{minipage}{0.06395\paperwidth} \raggedright \includegraphics[width=0.02144\paperwidth]{os-1-64.pdf}\\ OS1-16\\\cite{os1} \end{minipage} & \begin{minipage}{0.06395\paperwidth} \raggedright \includegraphics[width=0.02875\paperwidth]{rslidar-rs32.pdf}\\ RS-Lidar32\\\cite{rslidar32} \end{minipage} \\ \hline \T Channels & 128 & 64 & 32 & 32 & 16 & 
64 & 40 & 64 & 16 & 32 \\ FPS[Hz] & 5-20 & 5-20 & 5-20 & 5-20 & 5-20 & 10,20 & 10,20 & 10,20 & 10,20 & 5,10,20 \\ Precision[m] & $\pm0.03$ & $\pm0.02^{\bm{a}}$ & $\pm0.02$ & $\pm0.03$ & $\pm 0.03$ & $\pm0.02^{\bm{c}}$ & $\pm 0.02^{\bm{c}}$ & $\pm0.03^{\bm{d}}$ & $\pm 0.03^{\bm{d}}$ & $\pm0.03^{\bm{c}}$ \\ Max.Range[m] & $245$ & $120$ & $100$ & $200$ & $100$ & $200$ & $200$ & $120$ & $120$ & $200$ \\ Min.Range[m] & & $3$ & $2$ & $1$ & $1$ & $0.3$ & $0.3$ & $0.8$ & $0.8$ & $0.4$ \\ VFOV[deg] & $40$ & $26.8$ & $41.33$ & $40$ & $30$ & $40$ & $40$ & $33.2$ & $33.2$ & $40$ \\ VB[deg] & {\vfovbounds{15}{25}} & {\vfovbounds{2}{24.8}} & {\vfovbounds{10.67}{30.67}} & {\vfovbounds{15}{25}} & {\vfovbounds{15}{15}} & {\vfovbounds{15}{25}} & {\vfovbounds{15}{25}} & {\vfovbounds{16.6}{16.6}} & {\vfovbounds{16.6}{16.6}} & {\vfovbounds{15}{25}} \\ HRes[deg] & $0.1$-$0.4$ & $0.09$ & $0.08$-$0.33$ & $0.1$-$0.4$ & $0.1$-$0.4$ & $0.2$,$0.4$ & $0.2$,$0.4$ & $0.7$,$0.35$,$0.17$ & $0.7$,$0.35$,$0.17$ & $0.1$-$0.4$ \\ VRes[deg] & ${0.11}^{\bm{b}}$ & ${0.33}^{\bm{a}}$ & $1.33$ & ${0.33}^{\bm{b}}$ & $2.0$ & $0.167{}^{\bm{b}}$ & $0.33{}^{\bm{b}}$ & $0.53$ & $0.53$ & ${0.33}^{\bm{b}}$ \\ $\lambda$[nm] & 903 & 903 & 903 & 903 & 903 & 905 & 905 & 850 & 850 & 905 \\ $\phi$[mm] & 165.5 & 223.5 & 85.3 & 103 & 103.3 & 116 & 116 & 85 & 85 & 114 \\ Weight(kg) & 3.5 & 13.5 & 1.0 & 0.925 & 0.830 & 1.52 & 1.52 & 0.425 & 0.425 & 1.17 \\ Firmware ver. & ${}^{\bm{e}}$ & 4.07 & 2.1.7.1 & N/A & 3.0.29.0 & 5.10 & 4.29 & 1.12.0 & 1.12.0 & ${}^{\bm{f}}$ \\ \hline \end{tabular} \vspace{1pt} \caption{LiDARs tested in this study, by manufacturer and number of channels (rings).\protect\footnotemark Acronyms are frames per second (FPS), vertical field-of-view (VFOV), VFOV upper and lower bounds (VB), horizontal resolution (HRes), vertical resolution (VRes), laser wavelength ($\lambda$), and diameter $\phi$. $\,^{\bm{*}}$Velodyne VLS128 pre-production model (63-9480 Rev-3). 
$\,^{\bm{a}}$Velodyne states HDL-64S2 accuracy is $\pm2cm$ for 80\% of channels, and $\pm5cm$ for the remaining; VRes for $+2\degree\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak-8.33\degree$ is $1/3\degree$ and for $-8.83\degree\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak-24.33\degree$ is $1/2\degree$. $\,^{\bm{b}}$Minimum (or finest) resolution, as these sensors have variable angle difference between beams. $\,^{\bm{c}}$Hesai and RoboSense state that accuracy for $0.3m\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak 0.5m$ is $\pm0.05$m, then $\pm0.02$m from $0.5m\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak 200m$. $\,^{\bm{d}}$Ouster states accuracy for $0.8m\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak 2m$ is $\pm0.03m$, for $2m\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak 20m$ is $\pm0.015m$, for $20m\hspace{-3pt}\mathrel{\raisebox{1pt}{{.}{.}}\hspace{-3pt}}\nobreak 60m$ is $\pm0.03m$, and over $60m$ is $\pm0.10m$. ${}^{\mathbf{e}}$VLS-128 firmware is not stated here as it was not a production model. ${}^{\mathbf{f}}$RS-Lidar32 had top board firmware version T9R23Va\_Tb\_00 and bottom board firmware version B8R02Va\_T5\_A.} \label{tab:lidar-list} \vspace{-2.5em} \end{center} \end{table*} } \section{LiDAR datasets} \label{s:relworks} Table~\ref{tab:datasets} summarizes current datasets featuring LiDARs, and highlights the contributions made by our dataset. The Stanford Track Collection\cite{stanford2011} offers carefully recorded object tracks, while the FORD Campus vision and LiDAR dataset\cite{ford2011} includes several complete scenes captured by multiple LiDARs. The Oxford RobotCar Dataset\cite{RobotCarDatasetIJRR} has one 3D LiDAR and two 2D LiDARs, and the accumulation of 2D data as the vehicle moves allows the reconstruction of 3D scenes. 
ApolloScape\cite{apolloscape2018} features two 3D LiDARs, in several environments, times of day and varying weather. The KAIST dataset\cite{kaist2018} features a 3D LiDAR (905\,nm infrared) plus a normal vision camera and a thermal camera (long-wave infrared, 8\,$\mu$m to 15\,$\mu$m), and is therefore considered multispectral. The Lidar-video driving dataset\cite{li-vi2018} also collects data from one LiDAR, a camera and CAN bus data targeting driving behaviour. More recently, the ArgoVerse dataset\cite{argoverse} features two LiDARs, one on top of the other, plus a ring of cameras for 360$\degree$ annotation. Vector maps (HD maps) are also provided. The nuScenes dataset by Aptiv\cite{nuscenes2019} features one LiDAR, several cameras and other sensors, and is captured in a diverse range of environments, times of day and weather conditions. The Honda Research Institute 3D dataset (H3D)\cite{h3d} also features one LiDAR and multiple sensors, with labels provided at 2\,Hz and propagated at 10\,Hz so as to provide labels at the same rate as the LiDAR. Similarly, the Lyft dataset\cite{lyft2019} features 3 LiDARs and an array of cameras, and different versions of the dataset are available. The Waymo Open Dataset\cite{waymo2019} features 5 LiDARs created by Google/Waymo, one 360$\degree$ and 4 for lower FOV and proximity detection, in several different locations. The A2D2 dataset by Audi\cite{geyer2020a2d2} features 5 VLP-16 LiDARs tilted to cover the immediate surroundings of the vehicle in diverse environments. \footnotetext{Sensor images are not to scale and copyrights are owned by their respective manufacturers.} Unlike the above works, ours is the first dataset to collect data under similar conditions with many different LiDARs. Some of the above datasets feature more than one LiDAR, but only a limited set of models, while our work offers {10} different models. Also, as far as we know, no static tests of LiDARs are publicly available. 
Besides datasets featuring LiDARs, other related works have considered diverse LiDAR evaluations. Jokela\etal\cite{jokela2019testing} tested 5 different LiDARs in fog and rain conditions at Clermont-Ferrand's 31\,m long fog chamber\cite{colomb2008innovative}, including different perception targets and conditions; they also evaluated these LiDARs in low-temperature snowy environments. The EU project DENSE\cite{dense2019,ritter2019dense} tested 2 different LiDARs plus a gated camera, FIR camera and other devices under adverse weather conditions in urban environments, and also used the Clermont-Ferrand fog chamber. While our present study currently lacks evaluations under snowy conditions, we test a broader range of sensors in a wider variety of adverse weather experiments. { \begin{figure*}[!htb] \centering \subfloat[][]{ \includegraphics[width=0.38\textwidth]{around-nu-map-w-roads2.pdf} \label{F:m3ldmap-a} } \subfloat[][]{ \includegraphics[width=0.265\textwidth]{ptcld-map.pdf} \label{F:m3ldmap-b} } \subfloat[][]{ \begin{minipage}[b]{.295\textwidth} \includegraphics[width=\textwidth]{vtor-sample1.pdf} \vfill \includegraphics[width=\textwidth]{vtor-sample2.pdf} \end{minipage} \label{F:m3ldmap-c} } \caption[]% {Map of the dynamic environment included in the dataset: \subref{F:m3ldmap-a} location reference, the followed trajectory is shown in red (total length of 6.56\,km); {\color{red}{\faMapMarker}} and {\color{green}{\faMapMarker}} markers denote the starting and goal points, respectively, and {\color{yellow}{\faMapMarker}} corresponds to a vehicle gate in/out of the campus. 
\subref{F:m3ldmap-b} is the pointcloud map (grid cell size 10\,m) and \subref{F:m3ldmap-c} shows some scenes with the vector map.} \label{F:m3ldmap} \vspace{-1em} \end{figure*} } \begin{figure}[!htb] \centering \includegraphics[width=0.46\textwidth]{ginpuri.pdf} \caption{Instrumented vehicle used to capture static and dynamic data; sensors are mounted on a metal plate about 2\,m from the ground.} \label{F:ginpuri} \vspace{-1em} \end{figure} \section{{LIBRE} Dataset} \label{s:dataset} The {LIBRE} dataset features 5 LiDARs from Velodyne Lidar\footnote{\url{https://velodynelidar.com}}, two from Ouster Inc.\footnote{\url{https://ouster.com}}, two from Hesai Photonics Technology Co., Ltd\footnote{\url{https://www.hesaitech.com}}, and one from RoboSense--Suteng Innovation Technology Co., Ltd.\footnote{\url{http://www.robosense.ai}}. Table~\ref{tab:lidar-list} describes the general specifications of each tested device. Data from all LiDARs in all environments was collected from April to September 2019. All sensors tested in this study were off-the-shelf production models with the exception of the Velodyne VLS-128. This sensor was a pre-production model, and was tested to provide a preview of the production 128-line Alpha Prime sensor, which was unavailable at the time the experiments were carried out. The dataset will be extended with the Alpha Prime results when testing has been completed. All these sensors correspond to the multi-beam (multi-channel) mechanical scanning type: several pairs of laser diodes and photo-detectors (avalanche photodiodes (APD) or single-photon avalanche diodes (SPAD)), with their corresponding emitter and receiver optics and mirrors, are rotated 360$\degree$ by a motor, which defines azimuth, while the fixed vertical angle of each laser/photo-detector pair defines elevation. All sensors in this selection operate at near-infrared wavelengths of 850\,nm, 903\,nm, or 905\,nm. 
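As an illustration of this scanning geometry, a single return can be converted to Cartesian coordinates from its measured range, the motor's azimuth angle, and the channel's fixed elevation angle. The following is a minimal generic sketch; axis conventions and per-channel calibration corrections vary by manufacturer and are omitted here:

```python
import math

def beam_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one return of a spinning multi-beam LiDAR to Cartesian
    coordinates: azimuth comes from the motor encoder, elevation is the
    fixed vertical angle of the laser/photo-detector pair."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z
```

In practice each manufacturer's driver applies this conversion per channel, using the elevation table of the specific sensor model.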
While some support multiple returns (echoes), the data collected in our dataset always records only the strongest echo. \section{Dynamic data} \label{s:env-dynamic} \subsection{Data Collection} \label{ss:datacollection} Our goal was to collect data in a variety of traffic conditions, including different types of environments, varying density of traffic, and times of the day. We drove our instrumented vehicle, a Toyota Prius shown in Fig.~\ref{F:ginpuri}, three times per day around the trajectory shown in Fig.~\ref{F:m3ldmap}, and collected data for the following key time periods: \begin{itemize} \item Morning (9am-10am) \begin{itemize} \item Pedestrian traffic: medium-low \item Vehicle traffic: high \item Conditions: people commuting, students and staff arriving on the campus. Clear to overcast weather. \end{itemize} \item Noon (12pm-1pm) \begin{itemize} \item Pedestrian traffic: high \item Vehicle traffic: medium-low \item Conditions: large number of students and staff heading to and from cafeterias and restaurants. Clear to overcast weather. \end{itemize} \item Afternoon (2pm-4pm) \begin{itemize} \item Pedestrian traffic: low \item Vehicle traffic: medium-low \item Conditions: busy work and class period. Clear to overcast weather. \end{itemize} \end{itemize} Fig.~\ref{F:ginpuri} shows the vehicle used for data capture. The 3D LiDAR on top was replaced only after all three time periods were recorded, and only one LiDAR was used at a time to avoid noise due to mutual interference. Data from the other sensors (RGB camera, IR camera, 360$\degree$ camera, event camera, IMU, GNSS, CAN) was recorded together with the LiDAR data, all with corresponding timestamps, using ROS\cite{ros2009}. In addition, we collected calibration data for each new LiDAR setup to perform extrinsic LiDAR-to-camera calibration, using a checkerboard and various other points of interest. Clear lighting conditions were ensured when recording such data. 
The routes driven in this data capture also have a reference pointcloud map available, which was created by a professional mobile mapping system (MMS). This map includes RGB data, and vector map files (HD map) for the public roads outside of the Nagoya University campus, and is also provided as part of the dynamic traffic data. \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{nu-drive-vls128-ndt-yolo.pdf} \caption{Dynamic traffic scenes obtained by applying state-of-the-art algorithms to the pointcloud.} \label{F:dynamic} \vspace{-1em} \end{figure} \begin{figure*}[!htb] \centering \sbox{\arrangebox}{% \subfloat[][]{ \centering \includegraphics[width=.3\textwidth,height=.36\textwidth]{jari-experiments3.pdf} \label{F:jaritargets-a} }% } \setlength{\arrangeht}{\ht\arrangebox} \usebox{\arrangebox}\hspace{-1em} \begin{minipage}[b][\arrangeht][s]{0.7\textwidth} \begin{center} \centering \subfloat[][]{ \includegraphics[width=0.27\textwidth]{jari-measure.pdf} \label{F:jaritargets-b} }\hspace{-10pt} \subfloat[][]{ \includegraphics[width=0.4\textwidth]{jari-targets.pdf} \label{F:jaritargets-c} }\hspace{-10pt} \subfloat[][]{ \includegraphics[width=0.27\textwidth]{TS5-1.pdf} \label{F:jaritargets-g} }\\ \subfloat[][]{ \includegraphics[width=0.31\textwidth]{jari-fog2.pdf} \label{F:jaritargets-d} }\hspace{-10pt} \subfloat[][]{ \includegraphics[width=0.31\textwidth]{jari-rain2.pdf} \label{F:jaritargets-e} }\hspace{-10pt} \subfloat[][]{ \includegraphics[width=0.31\textwidth]{jari-sun3.pdf} \label{F:jaritargets-f} } \end{center} \end{minipage} \caption{Static targets and adverse weather experiments at JARI's weather chamber: \subref{F:jaritargets-a} configuration of the different scenarios, \subref{F:jaritargets-b} and \subref{F:jaritargets-c} measurement, \subref{F:jaritargets-d} to \subref{F:jaritargets-f} sample adverse weather scenes, \subref{F:jaritargets-g} setting up ground truth.} \label{F:jaritargets} \vspace{-1em} \end{figure*} \begin{figure*}[!htb] \centering 
\includegraphics[width=0.8\textwidth]{range_overall.pdf} \caption{Range RMSE on $x$-axis distance per LiDAR.} \label{F:range} \vspace{-1.5em} \end{figure*} \subsection{Evaluation in Autoware} \label{ss:evalinautoware} Fig.~\ref{F:dynamic} shows qualitative results of running state-of-the-art algorithms, implemented in the open-source self-driving platform Autoware\footnote{\url{https://gitlab.com/autowarefoundation/autoware.ai}} (see Kato\etal\cite{kato2018}), on LiDAR pointclouds: the VLS-128 pointcloud localized using the Normal Distributions Transform (NDT), together with LiDAR/camera fusion and CNN-based object detection. \section{Static targets} \label{s:env-static} For the static targets and the adverse weather conditions, we used the Japan Automobile Research Institute (JARI\footnote{\url{http://www.jari.or.jp}}) weather experimental facilities. Fig.~\ref{F:jaritargets}\subref{F:jaritargets-a} shows a cross-sectional view of these facilities during our experiments. The facility is a 200\,m long and 15\,m wide indoor weather chamber with 3 straight, marked lanes (each 3.5\,m wide as per Japanese regulations), a flat road surface, fences, traffic lights, controlled illumination and ventilation, and multiple sprinklers for fog and rain. A description of JARI's weather chamber equipment and conditions is given in Section~\ref{s:env-weather}. As shown in Fig.~\ref{F:jaritargets}\subref{F:jaritargets-c}, the static targets in this study include: A0 size (841\,mm x 1189\,mm) reflective targets (an Edmund Optics light-absorbing black-out black velvet sheet (10\% reflectance), a polyboard white sheet, and a 3M diamond-grade 4090 series sheet), a Toyota Esquire black mini-van, two mannequins wearing black clothes, and occasionally human participants when conditions were safe. The reflective targets were fixed on an aluminum frame, reinforced to prevent warping and with backing material to ensure the sheets remained flat. 
\begin{table} \centering \begin{tabular}{c l | c l} \hline\hline Distance & Target Ground & Distance & Target Ground\\ to Lidar [m] & Truth [m] & to Lidar [m] & Truth [m]\\ \hline 5 & 4.982 & 65 & 65.008\\ 10 & 9.998 & 85 & 85.005\\ 15 & 14.994 & 100 & 100.010\\ 20 & 20.001 & 120 & 120.006\\ 25 & 25.999 & 140 & 140.005\\ 35 & 35.007 & 160 & 160.007\\ 50 & 49.997 & 180 & 180.007\\ \hline \end{tabular} \caption{Target distances and LiDAR-to-target ground truth, as measured by the TS15.} \label{tab:truth} \vspace{-2.5em} \end{table} During this experiment, each LiDAR was warmed up for at least 30\,min to ensure stable detection accuracy of the photo-detectors. As shown in Fig.~\ref{F:jaritargets}\subref{F:jaritargets-g}, we used a Leica Geosystems Total Station Viva TS15\cite{leica-ts15} and reflector prisms to set up the ground truth for the target positions. Table~\ref{tab:truth} shows the target distances (along the LiDAR's $x$-axis) and the actual distances measured with the TS15. The reflective targets were carefully aligned at each measurement position, which we had previously marked on the road surface, while the mini-van and the mannequins were approximately aligned with these marks. Fig.~\ref{F:jaritargets}\subref{F:jaritargets-b} shows the 5\,m mark as an example. \begin{figure*}[!htb] \centering \subfloat[expected][Expected]{ \includegraphics[width=0.8\textwidth]{theoretical.pdf} \label{F:density-a} }\\ \vspace{-1em} \subfloat[actual][Measured]{ \includegraphics[width=0.8\textwidth]{density_overall_log.pdf} \label{F:density-b} } \caption{Expected vs. measured density (number of points) on the reflective targets per LiDAR: \subref{F:density-a} number of expected points, and \subref{F:density-b} average number of measured points.} \label{F:density} \vspace{-1.5em} \end{figure*} We used two metrics to compare the LiDARs' measurement performance: range accuracy and point density. We segmented the reflective targets as a whole and individually. 
We accumulated 40 frames of LiDAR data and rejected data with insufficient points (min. 9 points per target, or 3 points per reflective surface type). The RMSE between the measured points and the ground truth was calculated at every distance, and the results are shown in Fig.~\ref{F:range}. Generally, RMSE grows with distance, and some LiDARs struggle at very close distances. Upon closer investigation, some LiDARs specifically struggle with high-reflectivity targets at close range. Fig.~\ref{F:density} shows the expected vs. actual number of points detected on the reflective targets, averaged over 40 frames. The expected density is obtained from simulation, using each LiDAR's HRes, VRes and VFOV, to find the number of points falling inside the reflective targets at each range. In general, the VLS-128 had the best performance, with measured values matching the expected density very closely. The Pandar64 came in second place and also performed close to its expected density. The Pandar40P, RS-Lidar32 and VLP-32C closely followed the HDL-64S2. The OS1-64 drops very rapidly within the first 20\,m, and after 35\,m provides a density similar to the sensors with a lower number of channels. Finally, the OS1-16 comparatively had lower density than the VLP-16 with the same number of channels. As shown by the VFOV data in Table~\ref{tab:lidar-list}, LiDARs typically have their laser vertical layout designed to cover more of the ground than the sky (i.e., more laser beams pointing downwards at negative elevation angles than beams pointing upwards), the exceptions being the VLP-16, OS1-16 and OS1-64 with symmetric coverage. Having the A0 reflective targets from 0.6\,m up to 1.8\,m above the ground, while the LiDARs are mounted on the car slightly over 2\,m from the ground, means that sensors which favour the ground portion will detect more points from the targets than those with symmetric coverage. 
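To make the two metrics concrete, the sketch below approximates the expected point count on a flat, sensor-facing target and computes the range RMSE against a surveyed ground truth. It is a simplification of the simulation described above: it assumes uniform horizontal and vertical angular resolution, which does not hold exactly for sensors with variable beam spacing; the target dimensions correspond to the A0 reflective targets:

```python
import math

def expected_points(width_m, height_m, dist_m, hres_deg, vres_deg):
    """Approximate number of beam returns on a flat, sensor-facing target.

    Simplifying assumption: uniform HRes/VRes over the target's angular
    extent; real sensors with variable vertical beam spacing deviate."""
    h_extent = math.degrees(2 * math.atan(width_m / (2 * dist_m)))
    v_extent = math.degrees(2 * math.atan(height_m / (2 * dist_m)))
    return int(h_extent / hres_deg) * int(v_extent / vres_deg)

def range_rmse(measured_x, ground_truth_x):
    """RMSE of the x-axis ranges of segmented target points vs. the
    surveyed ground-truth distance."""
    n = len(measured_x)
    return math.sqrt(sum((x - ground_truth_x) ** 2 for x in measured_x) / n)

# A0 target (0.841 m x 1.189 m) at 5 m vs. 100 m, with illustrative
# 0.2 deg HRes and 0.33 deg VRes: the count drops sharply with range.
print(expected_points(0.841, 1.189, 5.0, 0.2, 0.33))
print(expected_points(0.841, 1.189, 100.0, 0.2, 0.33))
```

The quadratic fall-off of the point count with distance is what produces the steep curves in the expected-density plot.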
HDL-64S2 has the smallest sky portion coverage ($2\degree$) while HDL-32E has the largest ground portion coverage ($30.67\degree$). In addition, VLS-128, VLP-32C, Pandar64, Pandar40P and RS-Lidar32 have the same VFOV of $40\degree$ with equal \vfovboundsdegplus{15}{25} bounds (of course, the number of beams and VRes differ). These sensors maintain a rather high density up to their maximum range and are therefore suitable for object detection. A detailed study of the vertical layouts and vertical resolutions of these LiDARs, for diverse applications such as object detection, object classification, mapping, localization, etc., is left as future work. \begin{figure*}[!htb] \centering \subfloat[][VLS-128]{ \begin{minipage}[b]{.16\textwidth} \includegraphics[width=\textwidth]{vls128_fog.pdf} \vfill \includegraphics[width=\textwidth]{vls128_rain30.pdf} \vfill \includegraphics[width=\textwidth]{vls128_light.pdf} \end{minipage} \label{F:adverseweather-a} }\hspace{-1em} \subfloat[][HDL-64S2]{ \begin{minipage}[b]{.16\textwidth} \includegraphics[width=\textwidth]{hdl64_fog.pdf} \vfill \includegraphics[width=\textwidth]{hdl64_rain30.pdf} \vfill \includegraphics[width=\textwidth]{hdl64_light.pdf} \end{minipage} \label{F:adverseweather-b} }\hspace{-1em} \subfloat[][HDL-32E]{ \begin{minipage}[b]{.16\textwidth} \includegraphics[width=\textwidth]{hdl32_fog.pdf} \vfill \includegraphics[width=\textwidth]{hdl32_rain30.pdf} \vfill \includegraphics[width=\textwidth]{hdl32_light.pdf} \end{minipage} \label{F:adverseweather-c} }\hspace{-1em} \subfloat[][Pandar64]{ \begin{minipage}[b]{.16\textwidth} \includegraphics[width=\textwidth]{pandar64_fog.pdf} \vfill \includegraphics[width=\textwidth]{pandar64_rain30.pdf} \vfill \includegraphics[width=\textwidth]{pandar64_light.pdf} \end{minipage} \label{F:adverseweather-d} }\hspace{-1em} \subfloat[][RS-Lidar32]{ \begin{minipage}[b]{.16\textwidth} \includegraphics[width=\textwidth]{rs32_fog.pdf} \vfill 
\includegraphics[width=\textwidth]{rs32_rain30.pdf} \vfill \includegraphics[width=\textwidth]{rs32_light.pdf} \end{minipage} \label{F:adverseweather-e} }\hspace{-1em} \subfloat[][OS1-64]{ \begin{minipage}[b]{.16\textwidth} \includegraphics[width=\textwidth]{os64_fog.pdf} \vfill \includegraphics[width=\textwidth]{os64_rain30.pdf} \vfill \includegraphics[width=\textwidth]{os64_light.pdf} \end{minipage} \label{F:adverseweather-f} } \caption[]% {Adverse weather results, color represents intensity, top row: fog, middle row: rain, and bottom row: strong light.} \label{F:adverseweather} \vspace{-1.5em} \end{figure*} \section{Adverse weather} \label{s:env-weather} JARI's weather experimental facilities allowed us to test LiDARs in controlled weather conditions (refer to Fig.~\ref{F:jaritargets}\subref{F:jaritargets-a}). For fog emission, this weather chamber has 7.5\,$\mu$m particle size and controllable visibility of 10\,m up to 80\,m, with fog emitted over the complete 200\,m track. For rain emission, there are two different sprinklers with particle size of 640\,$\mu$m and 1400\,$\mu$m, and 3 precipitation levels: strong (30\,mm/h), intense (50\,mm/h), and very intense (80\,mm/h). In our study we used strong and very intense. Rain is emitted only for half of the track (100\,m). Strong ``sun'' light comes from a controlled mobile 6\,kW xenon light source with maximum luminous intensity of 350\,Mcd, and adjustable position, yaw and pitch angles. It has an optional yellow filter to approximate the color temperature of the sun; however, as it reduces illuminance, we tested without this filter for a maximum color temperature of 6000\,K (sample scene in Fig.~\ref{F:jaritargets}\subref{F:jaritargets-f}). In our experiment, the maximum illuminance at the LiDAR mount position on the car was set to 200\,klx (full sunlight illuminance at noon) at a distance of 40\,m from the light source. 
This means the illuminance gradually increases from the starting position, reaches its peak at 40\,m from the light source, and then decreases towards the stopping position. For safety reasons, during the adverse weather experiments we drove the vehicle between 15\,km/h and 25\,km/h. Due to poor visibility during the fog and light experiments, we also added small bumps on the road (see Fig.~\ref{F:jaritargets}\subref{F:jaritargets-d}) so the driver could identify the slow-down and stopping positions; as we drove forward and backwards, there were two such stopping areas, one at either end of the track. For all the weather experiments, a passenger was present to lend an extra pair of eyes to the driver. The driver, other team members and the JARI staff kept constant communication over push-to-talk radios to regulate the start and end of each run, and to ensure safety. For the fog experiment, we verified the fog density before each run. For the strong light experiment, the driver, the passenger and other people outside the vehicle wore special dark sunglasses. The strong light experiment was conducted right after the rain experiment, thus our data has the additional value of including specular reflections (Fig.~\ref{F:jaritargets}\subref{F:jaritargets-e}) due to the wet road surface over half the test track. We also recorded RGB camera and IR camera data during these experiments. The fog experiment started with a very dense 10\,m visibility: the vehicle drove forward towards the stop position, then backwards towards the start position, waited 30\,s for the fog to dissipate, and repeated until the perceived visibility exceeded 80\,m. It takes about 10\,min for the fog chamber to reach maximum density again, so during this time we changed LiDARs (we kept the other LiDARs warming up for at least 30\,min before any test) and repeated.
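The quoted light settings are consistent with a simple inverse-square model of the xenon source. The following sketch is our illustration only: it assumes an idealized point source and a uniform dimming factor, neither of which is stated in the experiment description.

```python
# Hedged sketch: inverse-square illuminance of an idealized point source.
# E [lx] = I [cd] / d^2 [m^2]. At full power (350 Mcd), a point source
# would give about 219 klx at 40 m; the experiment set 200 klx there,
# i.e. roughly 91% of the idealized maximum.

def illuminance_lx(intensity_cd: float, distance_m: float) -> float:
    """Illuminance of a point source at a given distance (inverse-square law)."""
    return intensity_cd / distance_m ** 2

I_MAX_CD = 350e6      # 350 Mcd maximum luminous intensity of the xenon lamp
E_TARGET_LX = 200e3   # 200 klx set at the LiDAR mount position, 40 m away

e_full = illuminance_lx(I_MAX_CD, 40.0)   # idealized full-power value at 40 m
dim_factor = E_TARGET_LX / e_full         # fraction of full power needed

# Illuminance profile along the approach (hypothetical sample distances):
profile = {d: dim_factor * illuminance_lx(I_MAX_CD, d) for d in (100, 40, 10)}
```

This also makes the qualitative statement in the text concrete: the illuminance grows as the vehicle approaches the lamp and falls off again past it, with the 40\,m point calibrated to full-noon sunlight.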
For the rain experiment, we started with a 30\,mm/h precipitation rate, waited about 1\,min for it to become steady, and drove the vehicle backwards towards the stop position and then forward to the start position only once; as rain falls only on the last half of the track, our vehicle made transitions from dry to rainy conditions and vice versa, with the targets inside the rainy area. We then set the 80\,mm/h precipitation rate and repeated the drive, returning to the start position to change LiDARs for the next test. Finally, the strong light experiment took place after the rain experiment; therefore, half the test track was wet, creating specular reflection conditions. From the start position we turned on the xenon light source, drove forward towards the stop position (passing through the maximum illuminance zone) and backwards towards the start position, turned off the light, changed the LiDAR, and repeated. Qualitative adverse weather results are shown in Fig.~\ref{F:adverseweather} for a selection of LiDARs. The top row shows the fog experiment when the vehicle was close to the targets, the middle row shows the rain experiment at a 30\,mm/h precipitation rate with the vehicle under the rainy area, and the bottom row shows the strong light experiment when the vehicle was close to the highest illuminance area. All LiDARs were affected by fog in a similar way: many low-intensity points tend to form a toroidal shape around the LiDAR, because the echo from the fog itself is stronger than the echoes from distant surfaces; the highly reflective walls are partially visible but with much lower intensity values; and only the highly reflective white road markings and the diamond-grade and white reflectors ahead remain partially visible, with diminished intensity. This means that much of the reflected light is scattered and attenuated by the fog.
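The attenuation just described can be roughly quantified with a two-way Beer--Lambert model, relating the extinction coefficient to the meteorological visibility through the widely used Koschmieder relation (5\% contrast threshold). This model is our illustration and was not part of the experimental procedure.

```python
import math

# Hedged sketch: two-way Beer-Lambert attenuation of a LiDAR return in fog.
# Koschmieder relation: alpha = 3.912 / visibility (both in meters).
# For a hard target, received power falls as exp(-2 * alpha * range);
# backscatter from the fog itself (the toroidal near-range echoes) is ignored.

def extinction_coeff(visibility_m: float) -> float:
    """Extinction coefficient [1/m] from meteorological visibility."""
    return 3.912 / visibility_m

def two_way_transmittance(visibility_m: float, range_m: float) -> float:
    """Fraction of signal power surviving the round trip to a hard target."""
    return math.exp(-2.0 * extinction_coeff(visibility_m) * range_m)

# In the densest chamber setting (10 m visibility), a target at 10 m returns
# well under 1% of its clear-weather power; at 80 m visibility the same
# target still returns a usable fraction.
t_dense = two_way_transmittance(10.0, 10.0)
t_light = two_way_transmittance(80.0, 10.0)
```

Under this model only strong retroreflectors (the diamond-grade targets and road markings) stay above the detection threshold in dense fog, matching the qualitative observations above.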
\begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{rain2.pdf} \caption{``Rain pillars'' as detected by a LiDAR.} \label{F:rain} \vspace{-1.5em} \end{figure} Rain also affects all the LiDARs: while it does not attenuate reflections, it creates fake obstacles, especially when the precipitation rate is high and non-uniform. This situation is clearly shown in Figs.~\ref{F:adverseweather}\subref{F:adverseweather-a}, \subref{F:adverseweather-d} and \subref{F:adverseweather-e}. The rain experiment was not encouraging, as most LiDARs detected the water showers from the sprinklers as vertical pillars, as shown in Fig.~\ref{F:rain}. This points to the need for better rain generation systems in weather chambers. Finally, during the strong light experiment, when the vehicle was approximately at the maximum illuminance area, we obtained almost no data from the experiment targets, road and wall in front of the LiDAR. These elements become visible again when the vehicle is in areas with much lower illuminance. While such strong illuminance is not expected at the horizon, certain LiDAR setups on the car, especially when LiDARs are mounted with large roll/pitch angles, will be affected by strong sunlight. \section{Conclusions} \label{s:concl} In this work we introduced a first-of-its-kind collection of data from multiple 3D LiDARs, made publicly available for research and industry, with the objective of improving our understanding of the capabilities and limitations of popular LiDARs for autonomous vehicles. This dataset will enable benchmarking of new LiDARs, better representations in vehicle simulation software, direct comparison of LiDAR capabilities before purchasing, and better perception algorithms. This study still lacks some important conditions such as low-temperature snowy environments, night-time scenes, direct interference, realistic rain, other wavelengths, and so on, which will be addressed in future extensions.
However, this work sheds light on existing issues with LiDARs which require further research: the serious noise induced by indirect interference and strong light, the almost null visibility during dense fog, and the need to adapt existing object detection algorithms to work with multiple LiDARs. Efforts to extend the {LIBRE} dataset have already started: we will add more sensors, including the Velodyne Alpha Prime; more environments, including direct and indirect interference; and other evaluations, including new perception algorithms, mapping and localization. We are also preparing a second phase which will include, among others, newer solid-state LiDARs (MEMS-based scanners), different wavelengths such as 1550\,nm, and other scanning techniques. \section*{Acknowledgments} This work was supported by the Core Research for Evolutional Science and Technology (CREST) project of the Japan Science and Technology Agency (JST). We would like to extend our gratitude to the Japan Automobile Research Institute (JARI) for all the support while using their JTOWN weather chamber and other facilities. We appreciate the help of the Autoware Foundation to realize this project. Finally, and as a matter of course, this work would not have been possible without the invaluable support of Velodyne Lidar Inc., Ouster Inc., Hesai Photonics Technology Co., Ltd., and RoboSense--Suteng Innovation Technology Co., Ltd. \bibliographystyle{IEEEtran}
\section*{Introduction} After more than fifty years since its birth, Anderson localization still remains in the focus of studies \cite{Evers2008,fifty}. During the last decade it became almost ubiquitous in experimental physics, being observed with electromagnetic \cite{Optics}, acoustic \cite{Hu08}, and matter waves \cite{Billy08,Roati08,Kondov11,Jendr2011}. In the theoretical domain, the generalized problem of localization in the presence of nonlinearity and interactions was brought to the forefront of the studies \cite{Shepelyansky_1993,Molina1998,Pikovsky2008,Fishman2008,Flach2009,Laptyeva2010,Johansson2010,Basko_2011,Michaely2012,csigsf13}. The predicted wave-packet delocalization and chaotic subdiffusion have already received impressive support in the pioneering experiments with interacting ultracold atoms expanding in effectively one-dimensional (1D) optical potentials \cite{nonlinear2,Deissler2010,Lucioni2011}. Most of the current activity in the field remains restricted to the dissipationless limit, when the dynamics of a system is fully specified by its Hamiltonian. Otherwise, since Anderson localization is a phenomenon relying on interference \cite{And58}, one expects a destructive effect of dissipation due to the rise of decoherence. Indeed, absorption of light in waveguide arrays (and, optionally, gain) and disorder have proved to produce an intricate interplay instead of pure Anderson localization, though still permitting strongly suppressed diffusion \cite{Frank2006,Yamilov2014}. Likewise, it has been demonstrated for quantum particles that the scattering \cite{Fyodorov} and spectral \cite{Huse} properties of localizing systems deteriorate, though they survive weak dissipation or coupling to a Hamiltonian bath, respectively. Notably, dissipation in ordered lattices has proved to be destructive for the originally ballistic transport.
Namely, it evokes a mobility transition towards diffusive light propagation when introduced homogeneously \cite{Eichelkraut_2013}, and exponential localization when randomized \cite{Basiri_2014}. Instructively, dissipation introduced at the boundaries of passive chains (or mimicked by semi-infinite propagating leads) organizes non-trivial transitions in the scaling of relaxation \cite{Kottos_2004}, transparency \cite{Tietsche_2008}, and the arising asymmetry of wave propagation \cite{Lepri_2011}, depending on the levels of disorder and nonlinearity. The first example of a constructive interplay was recently found in a random laser operating in the Anderson regime, where localization reduced the spatial overlap between lasing modes, preventing their competition and improving stability \cite{LiuJ.2014}. Importantly, distinct lasing thresholds in pumping strength were observed for the Anderson modes, enabling their sequential excitation and control. It was also argued that interactions between the modes get suppressed in the limit of strong localization and vanishing dissipation, although significant deviations were found beyond it \cite{Stano2013}. New room for dissipation effects was opened by the recent progress in experimental manipulations with exciton-polariton condensates \cite{Kasprzak,Balili,Deng2010,Carusotto2013,Byrnes2014}. A condensate can be considered as an active system balancing between excitation (by a pumping source) and decay (due to the continuous light emission). Further on, one can arrange 1D arrays of condensate centers by synthesizing spatial inhomogeneities \cite{Balili, Lai, Tanese, Bloch} or by rotating ring-shaped optical potentials and switching to the co-moving frame \cite{amico,berloff}. Spatial interaction appears due to polariton diffraction and diffusion and, importantly, includes both Josephson and dissipative terms (the former typically prevails).
The resulting collective dynamics is a blend of excitation and lasing effects and can be modeled with Ginzburg-Landau type equations (GLE) \cite{GLE}. In this framework, dissipative effects act as internal decay mechanisms and their influence on the center dynamics is accounted for by additional imaginary terms in the model equations \cite{Deng2010,Byrnes2014,GPE,Cristofolini2013}. Recent pioneering theoretical and experimental studies have already demonstrated a rich nonlinear dynamics of traveling and immobile gap solitons in periodic 1D condensate center arrays \cite{Tanese,Kivshar}, further extended to spatially quasiperiodic structures to uncover the fractal energy spectrum \cite{Bloch}. Altogether, these advances naturally lead to the question of Anderson localization in \textit{active} arrays, where pumping and dissipation join the old players, nonlinearity and disorder. Some collective phenomena in such systems are well studied, for example, synchronization \cite{Pik_book} and oscillation death \cite{Osipov1998,Rubchinsky2000,Rubchinsky2002}. However, most of the related studies address lattices that crumble into a set of uncoupled oscillators in the linear conservative limit. In this Report we demonstrate and study Anderson attractors in 1D active arrays, as described by a disordered \cite{And58} version of the discrete complex GLE \cite{dGLE}. We find that an increase of the pumping strength leads to the formation of a stationary multipeak pattern formed by a set of excited and interacting Anderson modes. We determine the transition from the regime of Anderson attractors to delocalized collective oscillations upon a further increase of pumping. Both the excitation and delocalization thresholds scale with the strength of the dissipative coupling and grow with increasing disorder. Finally, we show that increasing the pumping beyond the delocalization threshold leads to multi-mode chaos followed by cluster synchronization.
\section*{Results} We consider a one-dimensional disordered discrete Ginzburg-Landau equation, a generalization of the original Anderson lattice equations \cite{And58} that suitably accounts for non-equilibrium condensate dynamics \cite{Cristofolini2013} \begin{equation} \label{eq:1a} \begin{aligned} &i\dot{z}_l=\Delta_l z_l + i\left(\alpha-\sigma\left|z_l\right|^2\right)z_l+\left|z_l\right|^2z_l-\left(1-i\eta\right)(z_{l+1}-2z_l+z_{l-1}), \end{aligned} \end{equation} where $\Delta_l\in\left[-W/2, W/2\right]$ are independent uniformly distributed random numbers and $W$ is the disorder strength. Further on, $\alpha$ is the pumping rate, $\sigma$ is the nonlinear dissipation coefficient, and $\eta$ is the strength of the dissipative coupling between adjacent sites. Without loss of generality we set the conservative nonlinearity and coupling coefficients to one. In numerics, we study finite systems, and do not find appreciable finite-size effects for reasonably large array lengths, $N>100$. Zero boundary conditions are assumed for definiteness, $z_0=z_{N+1}=0$. In the linear dissipationless limit, $\alpha=\eta=0$ and $\left|z_l\right|^2 \rightarrow 0$, the stationary solutions $z_l=A_l {\rm e}^{ - i \lambda t}$ satisfy \begin{equation} \label{eq:1b} \lambda_\nu A_l^{(\nu)}=\Delta_l A^{(\nu)}_l-A^{(\nu)}_{l+1}+2A^{(\nu)}_l-A^{(\nu)}_{l-1}, \end{equation} which by $E_\nu \equiv \lambda_\nu-2$ reduces to the standard Anderson eigenvalue problem. All eigenstates $A_l^{(\nu)}$ are exponentially localized, $|A^{(\nu)}_l|\sim \exp\left[-|l-l_\nu|/\xi_\lambda \right]$, with $l_\nu$ and $\xi_\lambda$ denoting the center of mass and the localization length of the mode, respectively. The eigenvalues are restricted to a finite interval, $\lambda_\nu \in \left[-W/2, 4+W/2 \right]$.
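The linear eigenproblem above can be checked numerically by diagonalizing the tridiagonal operator $H=\mathrm{diag}(\Delta_l)+L$, with $L$ the discrete Laplacian under zero boundary conditions. A minimal sketch (parameter values are ours, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, W = 500, 2.0

# H = diag(Delta) + L, where L is the zero-BC discrete Laplacian
# (2 on the diagonal, -1 on the off-diagonals); its eigenpairs solve
# lambda A = Delta A - A_{l+1} + 2 A_l - A_{l-1}.
delta = rng.uniform(-W / 2, W / 2, N)
H = (np.diag(delta + 2.0)
     + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1))
lam, modes = np.linalg.eigh(H)          # columns of `modes` are A^(nu)

# The spectrum is confined to [-W/2, 4 + W/2] (Gershgorin bound):
assert lam.min() >= -W / 2 and lam.max() <= 4 + W / 2

# Exponential localization: the participation number of every normalized
# mode stays far below the system size for this disorder strength.
participation = 1.0 / np.sum(modes ** 4, axis=0)
print(participation.max() / N)          # well below 1 for W = 2
```

The same eigenvectors are reused below when the pumping thresholds of individual Anderson modes are discussed.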
In the limit of weak disorder, $W \ll 1$, and far from the band edges, $0<\lambda_\nu<4$, the localization length is approximated by \cite{Thouless_1979} \begin{equation} \xi_\lambda \approx \frac{24(4-E^2(\lambda))}{W^2}=\frac{24\lambda(4-\lambda)}{W^2}. \label{eq:1p6ppppp} \end{equation} Switching to the Anderson mode basis $z_l=\sum_\nu \psi_\nu(t) A_l^{(\nu)}$, we recast the original equation (\ref{eq:1a}) in the form: \begin{equation} \label{eq:3} \begin{aligned} &i\dot{\psi}_\nu=\lambda_\nu \psi_\nu + i\left(\alpha-\eta\lambda_\nu\right)\psi_\nu+i\eta\sum_{\nu_1} J_{\nu,\nu_1} \psi_{\nu_1}+(1-i\sigma)\sum \limits_{\nu_1,\nu_2,\nu_3}I_{\nu,\nu_1,\nu_2,\nu_3}\psi_{\nu_1}\psi_{\nu_2}^\ast\psi_{\nu_3}, \end{aligned} \end{equation} where $J_{\nu,\nu_1}=\sum_l\Delta_l A_{l}^{(\nu)}A_{l}^{(\nu_1)}$ and $I_{\nu,\nu_1,\nu_2,\nu_3}=\sum_l A_{l}^{(\nu)}A_{l}^{(\nu_1)}A_{l}^{(\nu_2)}A_{l}^{(\nu_3)}$. These equations contain both the linear and nonlinear terms that account for dissipation and pumping. Nonlinear terms are responsible for the mode interaction. However, due to the exponential localization of the eigenstates, interactions are confined to localization volume $V_{loc}(\lambda) \approx 3.3 \xi_\lambda$ \cite{Krim10}. We start the analysis of Eq. (\ref{eq:1a}) by considering the net norm $Z=\sum |z_l|^2$. The dynamics of the norm is given by \begin{equation} \label{eq:4} \dot{Z}=2\sum\left[(\alpha-\sigma|z_l|^2)|z_l|^2-\eta|z_{l+1}-z_l|^2\right]. \end{equation} It follows that the zero solution $z_l \equiv 0$ is globally stable for all $\alpha \le 0$. It also suggests that homogeneous in-phase solutions $z_{l+1}\approx z_l$ are more energetically favorable than anti-phase ones, $z_{l+1}\approx -z_l$. To study stability of the zero solution, we assign increments $p_\nu$ to the small-amplitude Anderson modes, $z_l(t)=\zeta A_l^{(\nu)}\exp[(p_\nu-i\lambda_\nu) t], \zeta\ll1$, and substitute them into Eq. (\ref{eq:1a}). 
Linearization gives \begin{equation} \label{eq:5} p_\nu=\alpha-\eta\sum\left|{A}^{(\nu)}_{l+1}-{A}_l^{(\nu)}\right|^2. \end{equation} The zero solution is stable when $\mbox{max} \ p_\nu <0$. This quantity depends only on the strength $W$ and particular realization $\{\Delta_l\}$ of the disorder, and also on the ratio between incoherent pumping rate and dissipative coupling, $\bar{\alpha}=\alpha/\eta$. Irrespective of the strength and particular realization of disorder, the scaled excitation threshold \begin{equation} \label{eq:6} \bar{\alpha}^*=\min\limits_{\nu}\bar{\alpha}^*_\nu=\min\limits_{\nu}\sum\left|{A}^{(\nu)}_{l+1}-{A}_l^{(\nu)}\right|^2 \end{equation} is bounded, $0\le\bar{\alpha}^*\le4$. As the Anderson modes have finite localization lengths for finite disorder strength $W$ and, hence, inside localization volume we have $|A_l^{(\nu)}|\sim 1/\sqrt{V_{loc}}$, there is a finite excitation threshold $\bar{\alpha}>0$ for finite $W$. Figure~\ref{fig:1} presents the results of numerical simulations for a particular realization of disorder. Profiles for different values of $\alpha$ were obtained as independent \textit{attractor} solutions, by setting the system into an initial random low-energy state $|z_l(0)| \ll 1$ and letting it evolve until the corresponding amplitude profile is stabilized. (We observed single-attractor regimes in all performed numerical tests, although multistability is not excluded, in principle.) The key feature of the attractor patterns is the multipeak structure, well pronounced above a certain threshold (e.g. $\alpha \approx 0.006$ for $W=1, \eta=0.1$, $\sigma=1$, $N=1000$, Fig.~\ref{fig:1}). The positions of the peaks remain unaffected by the further increase of the pumping strength. Zooming into a single peak, we find that it extends over many sites, top right panel of Fig.~\ref{fig:1}. 
By going into the reciprocal Anderson space, we find that the excitation is well-localized at a single Anderson mode, bottom right panel of Fig.~\ref{fig:1}. This observation supports the conjecture that the attractor peaks are produced through excitation of Anderson modes. Mode-specific excitation conditions can be further analyzed by using the linearized version of equations (\ref{eq:3}), \begin{equation} \label{eq:7} i\dot{\psi}_\nu=\lambda_\nu \psi_\nu + i\left(\alpha-\eta\lambda_\nu\right)\psi_\nu+i\eta\sum_{\nu_1} J_{\nu,\nu_1} \psi_{\nu_1}. \end{equation} In the weak disorder limit, $W\ll1$, the localization length of the modes that are far from the band edges is large, $\xi_\lambda\gg1$. Since within the localization volume $|A_l^{(\nu)}|\ll1$, the terms with $J_{\nu,\nu_1}$ can be neglected. It follows immediately that the rescaled excitation threshold of the $\nu$-th Anderson mode can be approximated well by its eigenvalue, \begin{equation} \label{eq:8} \bar{\alpha}^*_\nu\approx\lambda_\nu. \end{equation} This also means that the modes closer to the lower band edge will be excited first. However, the localization length of such modes can substantially decrease, potentially up to $\xi_\lambda \sim 1$, so that corrections to Eq.~(\ref{eq:7}) due to the $J_{\nu,\nu_1}$ terms might become significant. The instability threshold can be estimated more accurately by using Eq.~(\ref{eq:6}). Neglecting the exponentially decaying tails of the modes, $A_l=0$ for $l\notin[l_\nu-V_{loc}/2,l_\nu+V_{loc}/2]$, and minimizing $\bar{\alpha}^*_\nu$ under the normalization constraint $\sum A_l^2=1$, we obtain: \begin{equation} \label{eq:5b} \min\bar{\alpha}^*_\nu=4\sin^2\frac{\pi}{2(V_{loc}+1)}. \end{equation} Finally, by substituting the localization length $\xi_0\approx 8 W^{-2/3}$ for the modes with $\lambda_\nu\approx0$ \cite{Derrida} in $V_{loc}\approx3.3\xi_\lambda$, we arrive at: \begin{equation} \label{eq:5c} \bar{\alpha}^*\approx W^{4/3}/64.
\end{equation} Note that this approach is also valid in the strong disorder limit, $W\gg1$, when all Anderson modes are essentially single-site excitations: substituting $V_{loc}=1$ in (\ref{eq:5b}) one obtains $\bar{\alpha}^*_\nu\approx2$. Moreover, taking into account the strong decay of the mode amplitudes, $\left|A_{l_\nu\pm (l'+1)}^{(\nu)}/A_{l_\nu\pm l'}^{(\nu)}\right|\sim \exp\left(-\xi_{\lambda_\nu}^{-1}\right)\ll 1$, one finds that the mode-specific excitation thresholds (\ref{eq:6}) are approximated by \begin{equation} \label{eq:5d} \bar{\alpha}^*_\nu\approx2+\sum\limits_{l\neq l_\nu}\left(A_l^{(\nu)}\right)^2\approx2(1+e^{-2/\xi_\nu}). \end{equation} It follows that they tend to the limiting value $\bar{\alpha}^*=2$ as $W\rightarrow\infty$. To test the analytical results, we calculate the mode excitation thresholds $\bar{\alpha}^*_\nu$ according to (\ref{eq:6}) and plot them as a function of the numerically calculated eigenvalues $\lambda_\nu$, Fig.~\ref{fig:2}. The obtained statistical dependencies corroborate approximation (\ref{eq:8}) for the modes far from the band edges, especially well in the limit of weak disorder. The minimal excitation thresholds correspond to $\lambda_\nu\approx0$, and the estimate (\ref{eq:5c}) is in good agreement with the numerical results, see the inset of Fig.~\ref{fig:2}. By approximating the dependence around its dip by $|\bar{\alpha}-\bar{\alpha}^*|\propto|\Delta\lambda|^2$ and taking into account the finiteness of the density of Anderson states at $\lambda=0$, we find that the density of excited states scales as $\propto \sqrt{\bar{\alpha}-\bar{\alpha}^*}$. Crossing the excitation threshold $\bar{\alpha}^*$ does not immediately excite all modes near the band edge. These modes are well-localized and their interaction with other modes is exponentially weak. In addition, next-neighbor mode interaction remains significantly damped since the mode eigenvalues differ substantially due to level repulsion.
As a result, Anderson modes from the vicinity of the band edge arise one by one as the pumping rate exceeds their thresholds, $\bar{\alpha}>\bar{\alpha}_\nu^*$. The mode amplitudes saturate because of the nonlinear dissipation, and their asymptotic values can be estimated, by using Eq.~(\ref{eq:3}), as: \begin{equation} \label{eq:6a} |\psi_\nu|\approx\sqrt\frac{\alpha-\eta\lambda_\nu+\eta J_{\nu,\nu}}{\sigma I_{\nu,\nu,\nu,\nu}}. \end{equation} As the pumping strength increases further, the set of excited modes becomes dense and mode interaction starts contributing to the formation of the system attractor. Multi-mode nonlinear dynamics has two well-known trademarks: chaos and synchronization \cite{Pik_book}. Both appear in our model system, see Fig.~\ref{fig:4}. By gradually increasing the pumping strength, we first observe a transition from the Anderson attractors to the regime of delocalized oscillations, Fig.~\ref{fig:4} (middle panel). The delocalized regime is characterized by irregular spatio-temporal patterns. In terms of the localized modes, this is a well-developed mode chaos. When the pumping is increased further, we observe the formation of synchronization clusters with a typical size of the order of the Anderson localization length. We can estimate the transition to delocalized oscillations by assuming that it happens when the sum of the localization volumes of the excited modes becomes of the order of the system size, $\sum V_{loc}\sim\mathcal{O}(N)$. Equivalently, the average localization volume, which measures the fraction of effectively excited sites, becomes $\langle V_{loc}\rangle\sim\mathcal{O}(1)$, where the non-excited modes are formally assigned $V_{loc}=0$.
By using expression (\ref{eq:8}) for the mode excitation thresholds, neglecting contributions of the highly localized modes near the lower band edge, and approximating the density of states in the weak disorder limit as $\rho(\lambda)\approx (\pi\sqrt{\lambda(4-\lambda)})^{-1}$, we obtain \begin{equation} \label{eq:13} \langle V_{loc} \rangle\approx\int\limits^{\bar{\alpha}}_0 V_{loc}(\lambda)\rho(\lambda)d\lambda\approx\frac{105\bar{\alpha}^{3/2}}{\pi W^2} \end{equation} and get the transition value: \begin{equation} \label{eq:14} \bar{\alpha}^{**}\approx \left(\frac{\pi}{105}\right)^{2/3}W^{4/3}. \end{equation} In the strong disorder limit the mode excitation thresholds (\ref{eq:5d}) converge to $\bar{\alpha}^*=2$, which, therefore, also approximates the onset of delocalized oscillations, $\bar{\alpha}^{**}\approx2$. For a numerical test we average $|z_l|^2$ over the observation time and calculate the participation number (a quantity commonly used to estimate the number of effectively excited sites) normalized by the system size: \begin{equation} \label{eq:16} P=\frac{1}{N}\left(\sum \frac{|z_l|^4}{Z^2}\right)^{-1}. \end{equation} Since the maximally possible $P=1$ requires a uniform distribution of $|z_l|$, we use $P=1/2$ as the threshold value to indicate the localization-delocalization transition. The left panel of Fig.~\ref{fig:3} presents the results obtained by averaging over ten disorder realizations. For weak dissipative coupling, $\eta\ll1$, the scaled curves $P(\bar{\alpha})$ fall close to each other, in accord with the theoretical prediction, Eq.~(\ref{eq:14}). The latter also estimates the numerical thresholds reasonably well, e.g. compare $\bar{\alpha}^{**}\approx0.1$ for $W=1$, Eq.~(\ref{eq:14}), to $\bar{\alpha}^{**}\approx0.13\ldots0.15$, as read from Fig.~\ref{fig:3}.
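As a sanity check of the normalization (a uniform profile gives the maximal $P=1$, a single excited site gives $P=1/N$, consistent with the text), the normalized participation number can be computed as $P=Z^2/(N\sum_l|z_l|^4)$. A minimal sketch with synthetic profiles of our own choosing:

```python
import numpy as np

def participation(z: np.ndarray) -> float:
    """Normalized participation number: 1 for uniform |z|, 1/N for one site."""
    p2 = np.abs(z) ** 2
    Z = p2.sum()
    return Z ** 2 / (len(z) * np.sum(p2 ** 2))

N = 1000
uniform = np.ones(N)                        # fully delocalized profile
single = np.zeros(N)
single[N // 2] = 1.0                        # single excited site

assert np.isclose(participation(uniform), 1.0)
assert np.isclose(participation(single), 1.0 / N)

# An Anderson-attractor-like profile of a few well-separated localized
# peaks (localization length ~10 sites) yields a small P, far below the
# P = 1/2 criterion used for the delocalization transition.
l = np.arange(N)
peaks = sum(np.exp(-np.abs(l - c) / 10.0) for c in (100, 400, 800))
print(participation(peaks))
```

The same quantity applied to time-averaged $|z_l|^2$ profiles reproduces the localization-delocalization diagnostic used in the numerical experiments.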
When the dissipative coupling becomes of the order of the conservative one, $\eta=\mathcal{O}(1)$, the estimate (\ref{eq:14}) with the scaling $P(\alpha,\eta,W)=P(\bar\alpha,W)$ is no longer valid, and the actual delocalization threshold differs significantly from (\ref{eq:14}). In this limit one cannot neglect the last term in Eq.~(\ref{eq:7}), which is responsible for the dissipative interaction between the modes. In order to quantify the transition to the mode chaos regime, we calculate the largest Lyapunov exponent as a function of the pumping strength, Fig.~\ref{fig:3} (right panel). Comparing the exponents obtained for different values of the dissipative coupling constant $\eta$ with the results presented in Fig.~\ref{fig:3}, we confirm that the transition to delocalized oscillations is a precursor of the mode chaos. Remarkably, a further increase of the pumping above $\alpha\approx1$ leads to a drop of the largest Lyapunov exponent to zero, thus marking a transition back to regular dynamics. This transition depends only weakly on $\eta$ and corresponds to the emergence of synchronized clusters \cite{Pik_book}, see Fig.~\ref{fig:4} (bottom panel). \section*{Discussion} Anderson localization in active disordered systems is a combined effect produced by the energy pumping, dissipation and nonlinearity. It results in the formation of an Anderson attractor consisting of many localized weakly-interacting modes. We have found that the pumping excitation thresholds for the Anderson modes are mode-specific, and those with the lowest values correspond to the modes located near the lower band edge. Sequential excitation of Anderson modes by tuned pumping leads to the transition from Anderson attractors to the mode chaos and attractor patterns in the form of delocalized oscillations.
These results pose a broad range of theoretical challenges, such as studying Anderson attractors in higher dimensions, which allow for a mobility edge or criticality, in other types of localizing potentials, and their counterparts in open quantum systems. It would also be of interest to consider non-uniform dissipation, e.g. absorbing boundaries only. From the experimental perspective, lattices of exciton-polariton condensates and active waveguide arrays are the most promising candidates for the realization of Anderson attractors. The recent study of another localizing -- quasiperiodically modulated -- $1$D polariton condensate array has paved the way \cite{Bloch}, and the on-chip random lasing in the Anderson regime is probably the first already existing example \cite{LiuJ.2014}. Other candidates (although on the model level at the moment) are cavity-QED arrays with the cavities filled with two-level atoms or qubits, where the dynamics of the mean-field states in the adjacent cavities can be described by a GLE-type equation \cite{Sedov2012,Chen2012}, and plasmonic nanostructures \cite{Shi2014}. Finally, Anderson attractor regimes can be generalized to systems of coupled disordered Josephson junction arrays, marked by the recent rise of interest in dissipative response effects \cite{Basko}.
\section{Notation} \section{Introduction} \label{sec: introduction} Let $F$ be a local field of characteristic zero and $W_{F}$ be the Weil group; then the local Langlands group is defined as follows \[ L_{F} = \begin{cases} W_{F} & \text{if $F$ is archimedean}, \\ W_{F} \times SL(2, \mathbb{C}) & \text{if $F$ is nonarchimedean}. \end{cases} \] Let $G$ be a quasisplit connected reductive group over $F$ and $\D{G}$ be its complex dual group. The Langlands dual group $\L{G}$ is a semidirect product $\D{G} \rtimes W_{F}$, where the action of $W_{F}$ on $\D{G}$ factors through the absolute Galois group $\Gal{F} = \text{Gal}(\bar{F}/F)$. A local Langlands parameter $\phi$ is a $\D{G}$-conjugacy class of admissible homomorphisms from $L_{F}$ to $\L{G}$ (see \cite{Borel:1979}). In particular, it respects the projections on $W_{F}$ from both $L_{F}$ and $\L{G}$. We denote a representative of $\phi$ by $\underline{\phi}: L_{F} \rightarrow \L{G}$. Let $\P{G}$ be the set of local Langlands parameters and $\Pkt{}(G(F))$ be the set of isomorphism classes of irreducible admissible representations of $G(F)$. The local Langlands conjecture asserts a correspondence between $\P{G}$ and $\Pkt{}(G(F))$. The correspondence is not necessarily a bijection. In fact, it is conjectured that each $\phi \in\P{G}$ is associated with a finite subset $\Pkt{\phi}$ of $\Pkt{}(G(F))$, such that these subsets give a partition of $\Pkt{}(G(F))$ \begin{align*} \Pkt{}(G(F)) = \bigsqcup_{\phi \in \P{G}} \Pkt{\phi}. \end{align*} Such sets $\Pkt{\phi}$ are called L-packets. The local Langlands conjecture has been proved for $GL(N)$ by Harris-Taylor \cite{HarrisTaylor:2001}, Henniart \cite{Henniart:2000} and Scholze \cite{Scholze:2013}, in which case one does get a bijection. Arthur \cite{Arthur:2013} extended their results to $Sp(N)$ and $SO(N)$ through the theory of twisted endoscopy, and in his case the packets are not always singletons.
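As a standard orienting example (well known, though not spelled out in the text), the case $G = GL(1)$ reduces to local class field theory:

```latex
% For G = GL(1), \widehat{G} = \mathbb{C}^{\times} is abelian, so a parameter
% is simply a continuous character of W_F (the SL(2,\mathbb{C}) factor acts
% trivially). By the Artin reciprocity isomorphism of local class field
% theory, W_F^{ab} \simeq F^{\times}, parameters correspond bijectively to
% characters of F^{\times} = GL(1, F), and every L-packet is a singleton:
\[
  \phi\colon W_{F} \rightarrow \mathbb{C}^{\times}
  \quad\longleftrightarrow\quad
  \chi_{\phi}\colon F^{\times} \rightarrow \mathbb{C}^{\times},
  \qquad \Pi_{\phi} = \{\chi_{\phi}\}.
\]
```

This is the degenerate case in which the conjectural partition into L-packets is a genuine bijection.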
By the Langlands classification of irreducible admissible representations of $G(F)$, one can reduce this correspondence to the tempered case, namely one can replace $\Pkt{}(G(F))$ by the subset $\Pkt{temp}(G(F))$ of tempered representations, and $\P{G}$ by the subset $\Pbd{G}$ of bounded parameters (i.e., the closure of the image of $\underline{\phi}|_{W_{F}}$ is compact). The tempered L-packets can be characterized by ``stability". To explain this concept, we need to introduce the Harish-Chandra characters. For any $\r \in \Pkt{}(G(F))$, the associated Harish-Chandra character is a distribution on $G(F)$ defined by \[ f_{G}(\r) := trace \int_{G(F)} f(g)\r(g) dg \] for $f \in C^{\infty}_{c}(G(F))$. Harish-Chandra showed this distribution can be represented by a $G(F)$-conjugate invariant locally integrable function $\Theta_{\r}$ over $G(F)$. Moreover, $\Theta_{\r}$ is smooth over the strongly regular semisimple elements $G_{reg}(F)$. Later on, we will simply call them characters. We say a finite linear combination $\Theta$ of Harish-Chandra characters is {\bf stable} if it is $G(\bar{F})$-conjugate invariant over $G_{reg}(F)$, namely $\Theta(\gamma) = \Theta(\gamma')$ for any $\gamma, \gamma' \in G_{reg}(F)$ such that $\gamma = g^{-1} \gamma' g$ for some $g \in G(\bar{F})$. Then the tempered L-packets are conjectured to be the minimal subsets of irreducible tempered representations, within which some linear combination of the Harish-Chandra characters is stable (cf. Conjecture 9.2, \cite{Shahidi:1990}). Let $D$ be a torus and $\widetilde{G}$ be a quasisplit connected reductive group over $F$, which is an extension of $D$ by $G$ \begin{align*} \xymatrix{1 \ar[r] & G \ar[r] & \widetilde{G} \ar[r]^{\c} & D \ar[r] & 1. } \end{align*} Dual to this exact sequence, we have \begin{align*} \xymatrix{1 \ar[r] & \D{D} \ar[r] & \D{\widetilde{G}} \ar[r]^{\bold{p}} & \D{G} \ar[r] & 1. 
} \end{align*} The projection $\bold{p}: \D{\widetilde{G}} \rightarrow \D{G}$ can be extended to an L-homomorphism, so it induces a map $\Pbd{\widetilde{G}} \rightarrow \Pbd{G}$. Labesse (\cite{Labesse:1985}, Theorem 8.1) showed that this map is in fact surjective. For $\tilde{\phi} \in \Pbd{\widetilde{G}}$ and $\phi = \bold{p} \circ \tilde{\phi}$, it is believed that the restriction $\Pkt{\tilde{\phi}}|_{G} = \Pkt{\phi}$. Motivated by this, we want to construct the L-packets of $\widetilde{G}$ from those of $G$, when $G = Sp(2n)$ (resp. $SO(2n)$) and $\widetilde{G} = GSp(2n)$ (resp. $GSO(2n)$). In fact, one can also consider the case when $G = SO(2n+1)$ and $\widetilde{G} = GO(2n+1)$. Note that $GO(2n+1)$ is connected. Since $GO(2n+1) \cong SO(2n+1) \times \mathbb{G}_{m}$, this case would be trivial. To give the precise statement of our result, we need to first recall Arthur's results about $G$. We fix an outer automorphism $\theta_{0}$ of $G$, such that it is trivial when $G = Sp(2n)$, and it is induced from the conjugate action of $O(2n)$ when $G = SO(2n)$. Let $\Sigma_{0} = <\theta_{0}>$, then $\Sigma_{0}$ acts on $\Pkt{}(G(F))$. Note that $\theta_{0}$ induces a dual automorphism $\D{\theta}_{0}$ on $\D{G}$, so $\Sigma_{0}$ also acts on $\P{G}$ through the action of $\D{\theta}_{0}$ on $\D{G}$. We denote the set of $\Sigma_{0}$-orbits in $\Pkt{temp}(G(F))$ by $\cPkt{temp}(G(F))$ and the set of $\Sigma_{0}$-orbits in $\Pbd{G}$ by $\cPbd{G}$. The action of $\Sigma_{0}$ can be extended to $\widetilde{G}$, so we can also define the analogues of these sets for $\widetilde{G}$. \begin{theorem}[Arthur] \label{thm: L-packet for G} \begin{enumerate} \item There is a canonical way to associate any $\phi \in \cPbd{G}$ with a finite subset $\cPkt{\phi}$ of $\cPkt{temp}(G(F))$ such that \begin{align*} \cPkt{temp}(G(F)) = \bigsqcup_{\phi \in \cPbd{G}} \cPkt{\phi}. 
\end{align*} \item For $\phi \in \cPbd{G}$, \[ \bar{\Theta}_{\phi} := \frac{1}{2} \sum_{[\r] \in \cPkt{\phi}} (\Theta_{\r} + \Theta_{\r^{\theta_{0}}}) \] is stable. \end{enumerate} \end{theorem} When $G = SO(2n)$, we let $\Pkt{\phi}^{\Sigma_{0}}$ be the set of all irreducible representations of $O(2n)$ whose restrictions to $SO(2n)$ have irreducible constituents contained in $\cPkt{\phi}$, and we call $\Pkt{\phi}^{\Sigma_{0}}$ an L-packet of $O(2n)$. In this sense, the sets $\cPkt{\phi}$ really determine the L-packets of $Sp(2n)$ and $O(2n)$. But for simplicity, we will still call the sets $\cPkt{\phi}$ L-packets of $G$ in this paper. Suppose $\tilde{\phi} \in \cPbd{\widetilde{G}}$ and $\phi = \bold{p} \circ \tilde{\phi}$. Since $\cPkt{\phi}$ admits a stable linear combination of Harish-Chandra characters, $\widetilde{G}(F)$ acts on $\cPkt{\phi}$ by conjugation. We fix a character $\lif{\zeta}$ of the centre $Z_{\widetilde{G}}(F)$ of $\widetilde{G}(F)$, such that its restriction to $Z_{G}(F) = Z_{\widetilde{G}}(F) \cap G(F)$ is the central character of $\cPkt{\phi}$. Let $\clPkt{\phi, \lif{\zeta}}$ be the subset of representations of $\cPkt{temp}(\widetilde{G}(F))$ with central character $\lif{\zeta}$ whose restrictions to $G(F)$ have irreducible constituents contained in $\cPkt{\phi}$. Let $X = \text{Hom}(\widetilde{G}(F)/Z_{\widetilde{G}}(F)G(F), \mathbb{C}^{\times})$. Note that $X$ acts on $\clPkt{\phi, \lif{\zeta}}$ by twisting. In Corollary~\ref{cor: theta twisting character} we show that there exists a subgroup $\a(\S{\phi}^{\Sigma_{0}})$ of $X$ such that for any $[\tilde{\pi}] \in \clPkt{\phi, \lif{\zeta}}$, \( \tilde{\pi} \otimes \omega \cong \tilde{\pi}^{\theta} \text{ for some $\theta \in \Sigma_{0}$ if and only if } \omega \in \a(\S{\phi}^{\Sigma_{0}}). \) Now we can state our main result. 
\begin{theorem} \label{thm: main local theorem} Suppose $\phi \in \cPbd{G}$. Then there exists a subset $\cPkt{\tilde{\phi}}$ of $\clPkt{\phi, \lif{\zeta}}$, unique up to twisting by $X$, satisfying the following properties: \begin{enumerate} \item \[ \clPkt{\phi, \lif{\zeta}} = \bigsqcup_{\omega \in X / \a(\S{\phi}^{\Sigma_{0}})} \cPkt{\tilde{\phi}} \otimes \omega. \] \item \[ \bar{\Theta}_{\tilde{\phi}} := \frac{1}{2} \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} (\Theta_{\tilde{\pi}} + \Theta_{\tilde{\pi}^{\theta_{0}}}) \] is stable. \end{enumerate} \end{theorem} In this paper, we call the sets $\cPkt{\tilde{\phi}}$ in this theorem L-packets of $\widetilde{G}$, although they really determine the L-packets of $GSp(2n)$ and $GO(2n)$, for the same reason as we have discussed above. When $F$ is archimedean, this theorem is known due to Langlands \cite{Langlands:1989} and Shelstad \cite{Shelstad:1979}. In fact this case could also follow from Theorem~\ref{thm: L-packet for G} directly. So in this paper, we will focus on the case when $F$ is nonarchimedean. Note that if $\cPkt{\phi}$ is a singleton, then $\cPkt{\tilde{\phi}}$ is also a singleton by part (1) of the theorem, but it is still by no means clear that part (2) will hold for such $\cPkt{\tilde{\phi}}$. Our proof of this theorem is by global means, and it is certainly interesting to know if one can establish it by purely local methods. The main idea of the proof is to realize the L-packet as the local component of some global L-packet. To describe the global picture, we let $F$ be a number field and $\mathbb{A}_{F}$ be the adele ring of $F$. We define the automorphic representations of $G$ to be the irreducible constituents of the regular representation of $G(\mathbb{A}_{F})$ on $L^{2}(G(F) \backslash G(\mathbb{A}_{F}))$. 
If $\r$ is an irreducible admissible representation of $G(\mathbb{A}_{F})$, it can be decomposed as a restricted tensor product \[ \r = \otimes_{v}'\r_{v} \] of irreducible admissible representations $\r_{v}$ of $G(F_{v})$ over all the places $v$. These local representations $\r_{v}$ are unramified for almost all places, which is the necessary condition for forming the restricted tensor product. We assume the global Langlands group $L_{F}$ exists and is equipped with embeddings $L_{F_{v}} \rightarrow L_{F}$ for all places $v$. Then we can define the global Langlands parameters as in the local case. We denote the set of $\Sigma_{0}$-orbits of bounded global Langlands parameters by $\cP{G}$, for this is the set relevant in the classification of automorphic representations of $G$. For any $\phi \in \cP{G}$, we can associate a family of local Langlands parameters $\phi_{v} \in \cPbd{G_{v}}$ for all places $v$ by the following diagram \[ \xymatrix{L_{F_{v}} \ar[d] \ar[rr]^{\,\, \phi_{v}} && \L{G_{v}} \ar[d] \\ L_{F} \ar[rr]^{\,\, \phi} && \L{G}.} \] So one can define the global L-packet to be the restricted tensor product of the local L-packets \[ \cPkt{\phi} := \otimes'_{v} \cPkt{\phi_{v}}. \] \begin{theorem}[Arthur] \label{thm: global L-packet for G} There exist automorphic representations in $\cPkt{\phi}$. \end{theorem} For any irreducible admissible representation $\r$ of $G(\mathbb{A}_{F})$, one can associate a family of Satake parameters $c(\r) = \{c(\r_{v})\}$ for all unramified places of $\r$. If we define an equivalence relation on the families of Satake parameters attached to irreducible admissible representations of $G(\mathbb{A}_{F})$ by requiring $c(\r) \sim c(\r')$ if $c(\r_{v})$ is $\Sigma_{0}$-conjugate to $c(\r'_{v})$ for almost all places, then another way of characterizing $\cPkt{\phi}$ is through the equivalence class $c(\phi)$ of families of Satake parameters associated with the representations in $\cPkt{\phi}$. 
If we take the standard embedding $\xi:\L{G} \rightarrow GL(N, \mathbb{C})$, where $N = 2n + 1$ (resp. $2n$) if $G = Sp(2n)$ (resp. $G = SO(2n)$), then $\xi(c(\phi))$ defines a family of Satake parameters for irreducible admissible representations of $GL(N, \mathbb{A}_{F})$. By the conjectural Langlands principle of functoriality and strong multiplicity one for automorphic representations of $GL(N)$, $\xi(c(\phi))$ determines a unique automorphic representation of $GL(N)$. In practice, Arthur gets around the assumption on the global Langlands group by reversing our discussion here. To be more precise, he substituted for $\cP{G}$ the subset of self-dual automorphic representations of $GL(N)$ which are induced from cuspidal automorphic representations of the Levi subgroups of $GL(N)$. Then $\phi_{v}$ will correspond to a representation of $GL(N, F_{v})$. Since the generalized Ramanujan conjecture is not available, we can only conclude $\phi_{v} \in \cuP{G} \supseteq \cPbd{G}$ (see Proposition~\ref{prop: local constituents of cuspidal representations}). Nonetheless, the local packet $\cPkt{\phi_{v}}$ can still be defined in this case. In this way, Theorem~\ref{thm: global L-packet for G} should really be viewed as a statement about the Langlands principle of functoriality with respect to the embedding $\xi$. To summarize, the global L-packet $\cPkt{\phi}$ can be uniquely characterized by either an equivalence class of families of Satake parameters $c(\phi)$ or an automorphic representation of $GL(N)$ associated with $\xi(c(\phi))$. We call this the strong multiplicity one property for the global L-packets of $G$. The main tool in our proof is the stabilized twisted Arthur-Selberg trace formula. The ordinary stable trace formula has been established by Arthur in \cite{Arthur:2001}\cite{Arthur:2002}\cite{Arthur:2003}. The twisted case results from a long project of M{\oe}glin and Waldspurger \cite{MW:2016}, which has been finished recently. 
All of these also rest upon Ng{\^o}'s celebrated proof \cite{Ngo:2010} of the Fundamental Lemma. To give some idea of the proof of our theorem, we would like to briefly describe two typical kinds of trace formulas used in this paper. Let $\lif{\zeta}$ be a character of $Z_{\widetilde{G}}(F) \backslash Z_{\widetilde{G}}(\mathbb{A}_{F})$. The space of $\lif{\zeta}$-equivariant $L^{2}$-functions over $\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F})$ can be decomposed into a discrete part and a continuous part: \[ L^{2}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) = L^{2}_{disc}(\widetilde{G}, \lif{\zeta}) \+ L^{2}_{cont}(\widetilde{G}, \lif{\zeta}). \] If we take a $\lif{\zeta}^{-1}$-equivariant smooth compactly supported function $\tilde{f} = \otimes_{v} \tilde{f}_{v}$ over $\widetilde{G}(\mathbb{A}_{F})$, then we can define an operator on $L^{2}_{disc}(\widetilde{G}, \lif{\zeta})$ by \[ (R^{\widetilde{G}}_{disc}(\tilde{f}) \varphi)(x) = \int_{Z_{\widetilde{G}}(\mathbb{A}_{F}) \backslash \widetilde{G}(\mathbb{A}_{F})} \tilde{f}(y) \varphi(xy) dy, \,\,\,\,\, \varphi \in L^{2}_{disc}(\widetilde{G}, \lif{\zeta}). \] M{\"u}ller \cite{Muller:1989} showed that this operator $R^{\widetilde{G}}_{disc}(\tilde{f})$ is of trace class, so we can write \[ tr R^{\widetilde{G}}_{disc}(\tilde{f}) = \sum_{\tilde{\pi}} m(\tilde{\pi}) \tilde{f}_{\widetilde{G}}(\tilde{\pi}), \] where the sum is over all irreducible admissible representations of $\widetilde{G}(\mathbb{A}_{F})$ and $m(\tilde{\pi})$ is the multiplicity of $\tilde{\pi}$ in $L^{2}_{disc}(\widetilde{G}, \lif{\zeta})$. 
We define the discrete part of the trace formula to be the following distribution \[ I^{\widetilde{G}}_{disc}(\tilde{f}) = tr R^{\widetilde{G}}_{disc}(\tilde{f}) + ``\text{ symmetric part in } L^{2}_{cont}(\widetilde{G}, \lif{\zeta})", \] where the symmetry on the continuous spectrum is given by the action of the regular elements of the relative Weyl groups (see Section~\ref{subsec: stable trace formula}). The stable trace formula gives a stabilization of this distribution $I^{\widetilde{G}}_{disc}(\tilde{f})$, and it relates the ``error terms'' to the stable distributions on some smaller groups, i.e., elliptic endoscopic groups of $\widetilde{G}$. We state it in the following theorem. \begin{theorem}[Arthur] By induction, one can define a stable distribution \begin{align} \label{eq: stable trace formula} S^{\widetilde{G}}_{disc}(\tilde{f}) = I^{\widetilde{G}}_{disc}(\tilde{f}) - \sum_{\widetilde{G}'} \iota(\widetilde{G}, \widetilde{G}')S^{\widetilde{G}'}_{disc}(\tilde{f}^{\widetilde{G}'}), \end{align} where the sum is over elliptic endoscopic groups $\widetilde{G}' \neq \widetilde{G}$ of $\widetilde{G}$, $\iota(\widetilde{G}, \widetilde{G}')$ are certain constants (see \eqref{formula: endoscopic coefficient}), and $\tilde{f} \rightarrow \tilde{f}^{\widetilde{G}'}$ is the Langlands-Shelstad-Kottwitz transfer. \end{theorem} The relation between $S^{\widetilde{G}}_{disc}(\tilde{f})$ and L-packets can be described by the following conjecture. \begin{conjecture}[Stable Multiplicity Formula] \label{conj: global conjecture} \[ S^{\widetilde{G}}_{disc}(\tilde{f}) = \sum_{\lq \in \Q{\widetilde{G}}} a_{\lq} S^{\widetilde{G}}_{\lq}(\tilde{f}), \] and \[ S^{\widetilde{G}}_{\lq}(\tilde{f}) = \prod_{v} \tilde{f}_{v}(\lq_{v}), \] where $\tilde{f}_{v}(\lq_{v})$ is a linear combination of Harish-Chandra characters in some finite subset $\Pkt{\lq_{v}}$ of $\Pkt{}(\widetilde{G}(F_{v}))$, which defines a stable distribution on $\widetilde{G}(F_{v})$. 
Moreover, there is an explicit formula for the constants $a_{\lq}$. \end{conjecture} In this conjecture, $\Q{\widetilde{G}}$ is the set of so-called global Arthur parameters of $\widetilde{G}$, which generalizes the set $\P{\widetilde{G}}$ of bounded global Langlands parameters. The global packet $\Pkt{\lq} = \bigotimes'_{v}\Pkt{\lq_{v}}$ associated with the stable distribution $S^{\widetilde{G}}_{\lq}(\tilde{f})$ is called a global Arthur packet. One can view the global L-packets as a special case of global Arthur packets, and the local L-packets that we are looking for will be the local components of some global L-packets, which contribute to $S^{\widetilde{G}}_{disc}(\tilde{f})$. Before we can talk about how to isolate a global L-packet from $S^{\widetilde{G}}_{disc}(\tilde{f})$, we first want to introduce the twisted version of \eqref{eq: stable trace formula}. Let $\omega$ be a character of $\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(F)G(\mathbb{A}_{F})$ and $\theta \in \Sigma_{0}$. We define the discrete part of the $(\theta, \omega)$-twisted trace formula to be \[ I^{(\widetilde{G}^{\theta}, \omega)}_{disc}(\tilde{f}) = tr (R(\theta)^{-1} \circ R(\omega) \circ R^{\widetilde{G}}_{disc}(\tilde{f})) + ``\text{ $(\theta, \omega)$-twisted symmetric part in } L^{2}_{cont}(\widetilde{G}, \lif{\zeta})", \] where $R(\theta)$ is induced by the action of $\theta$ on $\widetilde{G}(\mathbb{A}_{F})$, and $R(\omega)$ is induced by multiplication by $\omega$ on $L^{2}_{disc}(\widetilde{G}, \lif{\zeta})$. Then the stabilization of $I^{(\widetilde{G}^{\theta}, \omega)}_{disc}(\tilde{f})$ is given by the following theorem. 
\begin{theorem}[M{\oe}glin and Waldspurger] \begin{align} \label{eq: twisted stable trace formula} I^{(\widetilde{G}^{\theta}, \omega)}_{disc}(\tilde{f}) = \sum_{\widetilde{G}'} \iota(\widetilde{G}, \widetilde{G}')S^{\widetilde{G}'}_{disc}(\tilde{f}^{\widetilde{G}'}), \end{align} where the sum is over $(\theta, \omega)$-twisted elliptic endoscopic groups $\widetilde{G}'$ of $\widetilde{G}$. \end{theorem} One application of \eqref{eq: twisted stable trace formula} is that it gives a multiplicity formula for the automorphic representations of $\widetilde{G}$. Let $X = \text{Hom}(\widetilde{G}(\mathbb{A}_{F}) / Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}), \mathbb{C}^{\times})$ and $Y = \text{Hom}(\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}), \mathbb{C}^{\times})$. If $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$, we write $Y(\tilde{\pi}) = \{\omega \in Y: \tilde{\pi} \cong \tilde{\pi} \otimes \omega\}$, which is finite. \begin{proposition} \label{prop: global result 1} \begin{enumerate} \item Suppose $\tilde{\pi}$ is a discrete automorphic representation of $\widetilde{G}$, and $\r$ is an irreducible constituent of the restriction of $\tilde{\pi}$ to $G(\mathbb{A}_{F})$. If $[\r] \in \cPkt{\phi}$ for $\phi \in \cP{G}$, then \begin{align} m(\tilde{\pi}) = m_{\tilde{\phi}} |Y(\tilde{\pi}) /\a(\S{\phi})|, \end{align} where $\a(\S{\phi})$ (see \eqref{eq: global twisted endoscopic sequence}) is a subgroup of $Y(\tilde{\pi})$ and $m_{\tilde{\phi}} = 1 \text{ or } 2$. Moreover, $m_{\tilde{\phi}} = 2$ only when $G$ is special even orthogonal, $\phi \notin \P{G^{\theta_{0}}}$ (see Section~\ref{subsec: substitute Langlands parameter}), and $\tilde{\pi} \otimes \omega \cong \tilde{\pi}^{\theta_{0}}$ for some $\omega \in Y$. 
\item Suppose $\tilde{\pi}$ and $\tilde{\pi}'$ are discrete automorphic representations of $\widetilde{G}$, and there exists $\omega \in X$ such that $\tilde{\pi}_{v}$ is $\Sigma_{0}$-conjugate to $\tilde{\pi}'_{v} \otimes \omega_{v}$ at all places. If $\r$ is an irreducible constituent of the restriction of $\tilde{\pi}$ to $G(\mathbb{A}_{F})$ and $[\r] \in \cPkt{\phi}$ for $\phi \in \cP{G}$, then there exist some $\omega' \in Y$ and $\theta \in \Sigma_{0}$ such that $\tilde{\pi}' \cong \tilde{\pi}^{\theta} \otimes \omega'$. \end{enumerate} \end{proposition} Returning to the proof of Theorem~\ref{thm: main local theorem}, a key step is to isolate the global L-packets from the stable distribution $S^{\widetilde{G}}_{disc}(\tilde{f})$ for $\tilde{f} = \otimes_{v}\tilde{f}_{v}$ such that $\tilde{f}_{v}$ is $\Sigma_{0}$-invariant. By the theory of multipliers, one can isolate the parts associated with different equivalence classes of families of Satake parameters. For $\phi \in \cP{G}$, the equivalence class $c(\phi)$ determines the packet $\cPkt{\phi}$ of $G$ uniquely by our previous discussion. But this may not be the case for $\widetilde{G}$. In view of part (2) of Proposition~\ref{prop: global result 1}, this means that if the global L-packet $\cPkt{\tilde{\phi}}$ exists for $\tilde{\phi} \in \cP{\widetilde{G}}$, there might exist $\omega \in Y$ such that $\cPkt{\tilde{\phi}} \neq \cPkt{\tilde{\phi}}\otimes \omega$, whereas $\cPkt{\tilde{\phi}_{v}} = \cPkt{\tilde{\phi}_{v}} \otimes \omega_{v}$ for almost all places. For our proof, we only need something weaker: we fix a nonarchimedean place $u$, and we require that if $\cPkt{\tilde{\phi}_{v}} = \cPkt{\tilde{\phi}_{v}} \otimes \omega_{v}$ for all places $v \neq u$, then $\cPkt{\tilde{\phi}} = \cPkt{\tilde{\phi}} \otimes \omega$. Such global parameters can be constructed using the result of Sug Woo Shin on automorphic Plancherel density \cite{Shin:2012}. 
At last, we prove the following global result, which is parallel to Theorem~\ref{thm: global L-packet for G}. \begin{theorem} \label{thm: global result 2} For $\phi \in \cP{G}$ satisfying $\S{\tilde{\phi}} = 1$ (see Section~\ref{subsec: Langlands parameters}), there exists a global L-packet \[ \cPkt{\tilde{\phi}} = \otimes'_{v} \cPkt{\tilde{\phi}_{v}} \] of $\widetilde{G}$, unique up to twisting by $Y$, such that if $\tilde{\pi}$ is an automorphic representation of $\widetilde{G}$ whose irreducible constituents in the restriction to $G(\mathbb{A}_{F})$ are contained in $\cPkt{\phi}$, then $[\tilde{\pi}]$ is contained in $\cPkt{\tilde{\phi}} \otimes \omega$ for some $\omega \in Y$. \end{theorem} The local and global results of this paper will be proved together by a complicated induction argument. For the purpose of giving a clear proof of the local results, we have minimized the global assumptions needed in our induction arguments by imposing very restrictive conditions on the global results (as in Theorem~\ref{thm: global result 2}). In a sequel to this paper, we will prove the global results of this paper in a more general setting. A full description of the discrete spectrum of $\widetilde{G}$ will also require the Arthur packets. Unfortunately, the techniques in this paper will not be sufficient for that. This is reflected by the fact that the Arthur packets of $G$ can have a more complicated structure than its L-packets. To be able to construct the Arthur packets of $\widetilde{G}$ in the nonarchimedean case, one will need to extend the works of M{\oe}glin \cite{Moeglin:2006} \cite{Moeglin:2009} on the explicit construction of the Arthur packets of $G$. The global case could be even more challenging, because it would require a certain description of the residue spectrum for both $G$ and $\widetilde{G}$. So we would like to keep that as a project for the future. This paper is organized as follows. 
In Section~\ref{sec: preliminary}, we discuss various group-theoretic properties of $G$ and $\widetilde{G}$. We introduce their Levi subgroups and twisted endoscopic groups. We also discuss the relation between $\cP{G}$ and $\cP{\widetilde{G}}$ in both the local and global cases. We recall some known results about restricting the local representations of $\widetilde{G}$ to $G$; in particular, restriction multiplicity one holds in this case. In Section~\ref{sec: Arthur's theory}, we review Arthur's endoscopic classification theory for $G$ in the tempered case. In the local theory, we describe the $\theta$-twisted endoscopic character identities (or character relations) for $G$ and $\theta \in \Sigma_{0}$. In the global theory, we give Arthur's multiplicity formula for automorphic representations of $G$. In Section~\ref{sec: coarse L-packet}, we state our main local theorem (Theorem~\ref{thm: refined L-packet}). In this theorem, we formulate the $(\theta, \omega)$-twisted endoscopic character identities for $\widetilde{G}$ and $\omega \in X$, which are natural extensions of the $\theta$-twisted endoscopic character identities for $G$. Similarly, we also formulate the natural extensions of the twisted local intertwining relations from $G$ to $\widetilde{G}$. In Section~\ref{sec: multiplicity formula}, we introduce the various stable trace formulas used in this paper. We prove Proposition~\ref{prop: global result 1} as an application of the twisted stable trace formula. We also state some global conjectures, special versions of which have to be proved together with our main local theorem. In particular, we give the precise statement of Conjecture~\ref{conj: global conjecture} in the tempered case. At the end of this section, we make a comparison of both sides of the twisted stable trace formulas for $\widetilde{G}$, which is analogous to what Arthur did for $G$. 
In the final section, we give the proofs of our main local theorem together with all the global theorems by an induction argument. In particular, we address the issue of the lack of strong multiplicity one mentioned above. {\bf Some standard notations}: If $G$ is a reductive group over a field $F$, let $G^{0}$ be the identity component, $G_{der}$ be the derived group of $G^{0}$, $G_{sc}$ be the simply connected cover of $G_{der}$, and $G_{ad}$ be the adjoint group of $G_{der}$. We denote the centre of $G$ by $Z_{G}$ or $Z(G)$, and the split connected component of $Z_{G}$ by $A_{G}$. If $G$ is connected, let $X^{*}(G)$ be the group of algebraic characters of $G$ over $F$ and $\mathfrak{a}_{G} = \text{Hom}_{\mathbb{Z}}(X^{*}(G), \mathbb{R})$. If $F$ is a local field, there is a homomorphism $H_{G}: G(F) \rightarrow \mathfrak{a}_{G}$ defined by $e^{<H_{G}(g), \chi>} = |\chi(g)|_{F}$ for $g \in G(F)$ and $\chi \in X^{*}(G)$. If $G$ is abelian and $\theta$ is an automorphism of $G$, let $G^{\theta}$ be the $\theta$-invariant subgroup of $G$, and $G_{\theta}$ be the $\theta$-coinvariant group of $G$, i.e., $G_{\theta} = G / (\theta - 1)G$. If $A$ is a locally compact abelian group, we denote its Pontryagin dual by $A^{*}$. {\bf Acknowledgements}: The author wants to thank his thesis advisor James Arthur for his generous support and constant encouragement while this work was carried out. He also thanks the Institute for Advanced Study for its hospitality while he finished writing up the current version. During his stay at IAS, he was supported by National Science Foundation grants DMS-1128155 and DMS-1252158. Finally, the author thanks the referee for many helpful comments and suggestions. \section{Preliminary} \label{sec: preliminary} \subsection{Groups} \label{subsec: groups} \subsubsection{Similitude groups} \label{subsubsec: notations} Let $F$ be a local or global field of characteristic zero and $\bar{F}$ be its algebraic closure. 
When $F$ is global, let us denote the adele ring over $F$ by $\mathbb{A}_{F}$, and the id\`ele group by $I_{F}$. The absolute Galois group over $F$ is written as $\Gal{F}$, or $\Gal{}$ for abbreviation. Let $G$ be a quasisplit connected reductive group over $F$ and $D$ be a torus. We denote by $\widetilde{G}$ an extension of $D$ by $G$ \begin{align} \label{eq: extension} \xymatrix{1 \ar[r] & G \ar[r] & \widetilde{G} \ar[r]^{\c} & D \ar[r] & 1. } \end{align} Let us denote the centres of $G$ and $\widetilde{G}$ by $Z_{G}$ and $Z_{\widetilde{G}}$ respectively. Sometimes we need to distinguish $\c$ for different groups, so we will also write $\c_{G} = \c$. The primary example that we are going to consider in this paper is when $G$ is a special even orthogonal group or a symplectic group, and $\widetilde{G}$ is the corresponding similitude group, in which case $\c$ is called the similitude character. The split general symplectic group (or symplectic similitude group) is defined as follows: $$GSp(2n) = \{ g \in GL(2n) : g \begin{pmatrix} 0 & -J_{n} \\ J_{n} & 0 \end{pmatrix} {}^tg = \c(g) \begin{pmatrix} 0 & -J_{n} \\ J_{n} & 0 \end{pmatrix} \},$$ where \( J_{n} = \begin{pmatrix} &&&1\\ &&1&\\ &\iddots&&\\ 1&&& \end{pmatrix} \) and $\c(g)$ is a scalar. It is connected as an algebraic group. The split general even orthogonal group (or orthogonal similitude group) is defined by $$GO(2n) = \{ g \in GL(2n) : g \begin{pmatrix} 0 & J_{n} \\ J_{n} & 0 \end{pmatrix} {}^tg = \c(g) \begin{pmatrix} 0 & J_{n} \\ J_{n} & 0 \end{pmatrix} \}.$$ Since $(\det g)^{2} = \c(g)^{2n}$, it has two connected components, depending on whether $\det g / \c(g)^{n}$ is $1$ or $-1$. Let us denote the identity component by $GSO(2n)$, and we call it the connected general even orthogonal group. 
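In the lowest-rank case these definitions reduce to a familiar group; the following verification is standard and is recorded here only for illustration. For $n = 1$ we have $J_{1} = (1)$, and for any $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in GL(2)$ a direct computation gives \[ g \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} {}^tg = (ad - bc) \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \] so $GSp(2) = GL(2)$ with similitude character $\c = \det$.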
Because $SO(2n)$ (resp. $GSO(2n)$) has an outer automorphism coming from the conjugate action of $O(2n)$ (resp. $GO(2n)$), let us denote an outer twist of $SO(2n)$ (resp. $GSO(2n)$) with respect to this outer automorphism and an arbitrary quadratic extension $E / F$ by $SO(2n, \eta)$ (resp. $GSO(2n, \eta)$), where $\eta$ is the quadratic (id\`ele class) character associated to $E / F$ by local (global) class field theory. We would like to allow $E = F$ and $\eta = 1$, in which case this is the split group. If $G = SO(2n, \eta)$, we define $\eta_{G} = \eta$. If $G = Sp(2n)$, we define $\eta_{G} = 1$. The groups that we have defined above give all the quasisplit general symplectic groups and quasisplit connected general even orthogonal groups. Another description of quasisplit general symplectic groups and quasisplit connected general even orthogonal groups is given by \[ GSp(2n) = (\mathbb{G}_{m} \times Sp(2n)) / (\mathbb{Z}/2\mathbb{Z}) \,\, \text{ and } \,\, GSO(2n, \eta) = (\mathbb{G}_{m} \times SO(2n, \eta)) / (\mathbb{Z}/2\mathbb{Z}), \] where $\mathbb{Z}/2\mathbb{Z}$ is embedded diagonally into the centre of each factor. The similitude character $\c$ is the squaring map on $\mathbb{G}_{m}$ and trivial on the other factor. More generally, we can define \begin{align} \label{eq: similitude} G(Sp(2n_{1}) \times \cdots \times Sp(2n_{s}) \times SO(2n_{s+1}, \eta_{1}) \times \cdots \times SO(2n_{s+t}, \eta_{t})) \end{align} to be \[ (\mathbb{G}_{m} \times Sp(2n_{1}) \times \cdots \times Sp(2n_{s}) \times SO(2n_{s+1}, \eta_{1}) \times \cdots \times SO(2n_{s+t}, \eta_{t})) / (\mathbb{Z}/2\mathbb{Z}), \] where $\mathbb{Z}/2\mathbb{Z}$ is again embedded diagonally. We can also generalize the similitude character $\c$ to these groups, so that it is the squaring map on $\mathbb{G}_{m}$ and trivial on all the other factors. Finally, let us write $GSp(0) = GSO(0) = \mathbb{G}_{m}$ and set $\c = id$ in this case. 
For any quasisplit connected reductive group $G$ defined over $F$, we denote by $\D{G}$ its complex dual group, by $Z(\D{G})$ the centre of $\D{G}$, and by $\L{G}$ its $L$-group, which is a semidirect product of $\D{G}$ with the Weil group $W_{F}$, i.e., $\D{G} \rtimes W_{F}$. Then, dual to the extension \eqref{eq: extension}, we have \begin{align*} \xymatrix{1 \ar[r] & \D{D} \ar[r] & \D{\widetilde{G}} \ar[r]^{\bold{p}} & \D{G} \ar[r] & 1,} \end{align*} where all the homomorphisms can be extended to $L$-homomorphisms of $L$-groups. If $\widetilde{G}$ is $GSp(2n)$ or $GSO(2n, \eta)$, then $\D{\widetilde{G}}$ is the general Spin group \[ GSpin(2n+1, \mathbb{C}) = (\mathbb{C}^{\times} \times Spin(2n+1, \mathbb{C})) / (\mathbb{Z}/2\mathbb{Z}) \,\, \text{ or } \,\, GSpin(2n, \mathbb{C}) = (\mathbb{C}^{\times} \times Spin(2n, \mathbb{C})) / (\mathbb{Z}/2\mathbb{Z}), \] where $\mathbb{Z}/2\mathbb{Z}$ is embedded diagonally into the centre of each factor. Here the embedding needs to be specified. Note that the Spin group is an extension of the special orthogonal group by $\mathbb{Z}/2\mathbb{Z}$. If we denote the generator of this $\mathbb{Z}/2\mathbb{Z}$ by $z$, then in defining the general Spin group we want $\mathbb{Z}/2\mathbb{Z}$ to be embedded as $<z>$ in the Spin factor. In fact, $Z(Spin(2n + 1, \mathbb{C})) = <z>$, and for $Z(Spin(2n, \mathbb{C}))$ there is an exact sequence \[ \xymatrix{1 \ar[r] & <z> \ar[r] & Z(Spin(2n, \mathbb{C})) \ar[r] & Z(SO(2n, \mathbb{C})) \ar[r] & 1}. \] We take a preimage of the generator of $Z(SO(2n, \mathbb{C})) \cong \mathbb{Z}/2\mathbb{Z}$ in $Z(Spin(2n, \mathbb{C}))$ and denote it by $w$; it is well known that $w^{2} = 1$ if $n$ is even and $w^{2} = z$ if $n$ is odd. On the other hand, $Z(GSpin(2n+1, \mathbb{C})) \cong \mathbb{C}^{\times}$, and $Z(GSpin(2n, \mathbb{C})) \cong \mathbb{C}^{\times} \times \mathbb{Z}/2\mathbb{Z}$. This is because when $n$ is even $u = (1, w)$ (resp. 
$u = (\sqrt{-1}, w)$ when $n$ is odd) splits the exact sequence \[ \xymatrix{1 \ar[r] & \mathbb{C}^{\times} \ar[r] & Z(GSpin(2n, \mathbb{C})) \ar[r] & Z(SO(2n, \mathbb{C})) \ar[r] & 1.} \] $\L{\widetilde{G}}$ is $GSpin(2n+1, \mathbb{C}) \times W_{F}$ or $GSpin(2n, \mathbb{C}) \rtimes W_{F}$, where the action of $W_{F}$ on $GSpin(2n, \mathbb{C})$ factors through the Galois group $\Gal{E/F}$ of the quadratic extension $E/F$ associated with $\eta$, and it acts trivially on $\mathbb{C}^{\times}$. It is interesting to see its action on the centre of $GSpin(2n, \mathbb{C})$. If $\tau$ is the nontrivial element in $\Gal{E/F}$, then $\tau$ is trivial on the factor $\mathbb{C}^{\times}$ and \begin{align} \label{eq: Galois action} \tau(u) = (-1) \cdot u, \text{ for } -1 \in \mathbb{C}^{\times}. \end{align} If $\widetilde{G}$ is of type \eqref{eq: similitude}, then $\D{\widetilde{G}}$ is \[ (\mathbb{C}^{\times} \times Spin(2n_{1}+1, \mathbb{C}) \times \cdots \times Spin(2n_{s}+1, \mathbb{C}) \times Spin(2n_{s+1}, \mathbb{C}) \times \cdots \times Spin(2n_{s+t}, \mathbb{C})) / (\mathbb{Z}/2\mathbb{Z})^{s+t}, \] where $(\mathbb{Z}/2\mathbb{Z})^{s+t}$ is embedded as the subgroup generated by \[ (-1, \overbrace{1, \cdots, 1}^{k-1}, z, \overbrace{1, \cdots, 1}^{s+t-k} \,) \] for $1 \leqslant k \leqslant s+t$. For $\L{\widetilde{G}}$, the action of $W_{F}$ on $\D{\widetilde{G}}$ factors through the Galois group $\Gal{E'/F}$, where $E'$ is the composite field $E_{1} E_{2} \cdots E_{t}$ of the quadratic extensions $E_{i}/F$ associated with $\eta_{i}$, and it acts on each factor as in the previous case. \begin{lemma} \label{lemma: similitude character} The image of $\c$ on $GSp(2n)(F)$ and $GSO(2n)(F)$ is $F^{\times}$, and its image on $GSO(2n, \eta)(F)$ is $Nm_{E/F}E^{\times}$, where $E/F$ is the quadratic extension associated to $\eta$. \end{lemma} \begin{proof} The cases of $GSp(2n)$ and $GSO(2n)$ are obvious, so we will only consider the case $\widetilde{G} = GSO(2n, \eta)$. 
If $n=1$, $GSO(2, \eta)$ can be embedded into $GL(2)$ and $\c$ is given by the determinant map. Since $GSO(2, \eta)(F) = E^{\times}$, it is easy to see that the determinant map becomes the norm map on $E^{\times}$, and the image is $Nm_{E/F}E^{\times}$. For general $n$, we can take a Borel subgroup $\lif{B}$ of $GSO(2n, \eta)$ with a maximal torus $\lif{T}$ and unipotent radical $\lif{N}$. By the Bruhat decomposition, \[ \widetilde{G}(F) = \bigsqcup_{w \in W(\lif{T}(F), \widetilde{G}(F))} \lif{B}(F) \dot{w} \lif{B}(F), \] where $\dot{w}$ are representatives of $w$ in $\widetilde{G}(F)$. Since $W(\lif{T}(F), \widetilde{G}(F)) \cong W(T(F), G(F))$ for $T = G \cap \lif{T}$, one can take $\dot{w}$ in $G(F)$. Moreover, $\lif{N} = N$ for $N = G \cap \lif{N}$. Therefore, $\c(\widetilde{G}(F)) = \c(\lif{B}(F)) = \c(\lif{T}(F))$. Let us write $\widetilde{G} = (\mathbb{G}_{m} \times G) / (\mathbb{Z}/2\mathbb{Z})$, and choose $\lif{T}(\bar{F})$ such that it consists of $(x, g)$ modulo $\mathbb{Z}/2\mathbb{Z}$, where $x \in \bar{F}^{\times}$ and \[ g = \text{diag}\{z_{1}, \cdots, z_{n-1}, y, y^{-1}, z_{n-1}^{-1}, \cdots, z_{1}^{-1} \} \in G(\bar{F}) \] with $z_{i}, y \in \bar{F}^{\times}$. If $(x, g) \in \lif{T}(F)$, then $(x, y) \in GSO(2, \eta)(F)$, and $\c(x, g) = \c_{GSO(2, \eta)}(x, y) = x^{2}$. On the other hand, if $(x, y) \in GSO(2, \eta)(F)$, then by letting $z_{i} = x$ for $1 \leqslant i \leqslant n-1$, we have $(x, g) \in \lif{T}(F)$. This shows $\c(\lif{T}(F)) = \c_{GSO(2, \eta)}(GSO(2, \eta)(F)) = Nm_{E/F}E^{\times}$. \end{proof} This lemma can be easily generalized to groups of type \eqref{eq: similitude}. \begin{lemma} \label{lemma: generalized similitude character} Suppose $\widetilde{G}$ is of type \eqref{eq: similitude}. Then the image of $\c$ on $\widetilde{G}(F)$ is \[ F^{\times} \cap Nm_{E_{1}/F}E_{1}^{\times} \cap \cdots \cap Nm_{E_{t}/F}E_{t}^{\times}, \] where $E_{i} / F$ is the quadratic extension associated to $\eta_{i}$ for $1 \leqslant i \leqslant t$. 
\end{lemma} \begin{proof} Let us denote by $\lif{\widetilde{G}}$ the product \begin{align}\label{eq: product group} GSp(2n_{1}) \times GSp(2n_{2}) \times \cdots \times GSp(2n_{s}) \times GSO(2n_{s+1}, \eta_{1}) \times \cdots \times GSO(2n_{s+t}, \eta_{t}). \end{align} Then $\widetilde{G}$ is the subgroup of $\lif{\widetilde{G}}$ characterized by $\c_{1}(g_{1}) = \cdots = \c_{s+t}(g_{s+t})$ for $(g_{1}, \cdots, g_{s+t}) \in \lif{\widetilde{G}}$. In particular, $\c(g) = \c_{1}(g_{1})$ for $g \in \widetilde{G} \subseteq \lif{\widetilde{G}}$. Then the lemma follows immediately from Lemma~\ref{lemma: similitude character}. \end{proof} When $F$ is global, we have the following corollary, whose proof is obvious. \begin{corollary}\label{cor: similitude character} Suppose $\widetilde{G}$ is of type \eqref{eq: similitude}. Then the image of $\c$ on $\widetilde{G}(\mathbb{A}_{F})$ is \[ I_{F} \cap Nm_{E_{1}/F}I_{E_{1}} \cap \cdots \cap Nm_{E_{t}/F}I_{E_{t}}, \] where $E_{i} / F$ is the quadratic extension associated to $\eta_{i}$ for $1 \leqslant i \leqslant t$. \end{corollary} \begin{corollary}\label{cor: relative Hasse principle} Suppose $\widetilde{G}$ is of type \eqref{eq: similitude}. Then $\c(\widetilde{G}(\mathbb{A}_{F})) \cap F^{\times} = \c(\widetilde{G}(F))$ and $\c(Z_{\widetilde{G}}(\mathbb{A}_{F})) \cap F^{\times} = \c(Z_{\widetilde{G}}(F))$. \end{corollary} \begin{proof} For the first equality, by Lemma~\ref{lemma: generalized similitude character} and Corollary~\ref{cor: similitude character} it suffices to show $Nm_{E_{i}/F}I_{E_{i}} \cap F^{\times} = Nm_{E_{i}/F} E_{i}^{\times}$ for all $1 \leqslant i \leqslant t$, and this is a consequence of the Hasse norm theorem (see \cite{Neukirch:1999}, Corollary VI.4.5). For the second equality, note $\c(Z_{\widetilde{G}}(\mathbb{A}_{F})) = I_{F}^{2}$ and $\c(Z_{\widetilde{G}}(F)) = F^{\times^{2}}$. So we need to show $F^{\times} \cap I_{F}^{2} = F^{\times^{2}}$, and this follows from the Grunwald-Wang theorem.
\end{proof} \subsubsection{Levi subgroups}\label{subsubsec: Levi subgroups} Let $\widetilde{G}$, $G$, $D$ and $\c$ be defined as in Section~\ref{subsubsec: notations}. If we restrict $\c$ to a Levi subgroup $\widetilde{M}$ of $\widetilde{G}$, then its kernel will be a Levi subgroup $M$ of $G$, and we have \[ \xymatrix{1 \ar[r] & M \ar[r] & \widetilde{M} \ar[r]^{\c} & D \ar[r] & 1.} \] It is easy to see that this induces a bijection between Levi subgroups of $\widetilde{G}$ and $G$. Suppose $\widetilde{G}$ is a general symplectic group or a connected general even orthogonal group of semisimple rank $n$. Then $\widetilde{M}$ is isomorphic to \begin{align}\label{eq: levi} GL(n_{1}) \times \cdots \times GL(n_{r}) \times \widetilde{G}_{-}, \end{align} where $\widetilde{G}_{-}$ is of the same type as $\widetilde{G}$ with semisimple rank $n_{-} \geqslant 0$ and $n = \sum_{i = 1}^{r} n_{i} + n_{-}$. Throughout this paper we fix a Borel subgroup $\lif{B}$ of $\widetilde{G}$ consisting of upper-triangular matrices and we choose $\widetilde{M}$ to be contained in the group \[ \begin{pmatrix} GL(n_{1})&&&&&&0 \\ &\ddots &&&&& \\ && GL(n_{r})&&&&\\ &&&\widetilde{G}_{-} &&&\\ &&&&GL(n_{r})&& \\ &&&&&\ddots&\\ 0&&&&&&GL(n_{1}) \end{pmatrix}. \] In fact this gives all the standard Levi subgroups if $\widetilde{G}$ is $GSp(2n)$ or $GSO(2n, \eta)$ ($\eta \neq 1$), and $GO(2n)$-conjugacy classes of standard Levi subgroups if $\widetilde{G}$ is $GSO(2n)$. We fix an isomorphism from \eqref{eq: levi} to $\widetilde{M}$ as follows \[ (g_{1}, \cdots, g_{r}, g) \longrightarrow \text{diag}\{g_{1}, \cdots, g_{r}, g, \c(g){}_tg^{-1}_{r}, \cdots, \c(g){}_tg^{-1}_{1}\} \] if $n_{-} > 0$, and \[ (g_{1}, \cdots, g_{r}, g) \longrightarrow \text{diag}\{g_{1}, \cdots, g_{r}, \c(g){}_tg^{-1}_{r}, \cdots, \c(g){}_tg^{-1}_{1}\} \] if $n_{-} = 0$. Here ${}_tg_{i} = J_{n_{i}}{}^tg_{i}J^{-1}_{n_{i}}$ for $1 \leqslant i \leqslant r$.
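To illustrate this parametrization in the smallest nontrivial case (purely as an illustration), take $\widetilde{G} = GSp(4)$ with $r = 1$, $n_{1} = 1$ and $n_{-} = 1$, so that \eqref{eq: levi} reads $\widetilde{M} \cong GL(1) \times GSp(2)$ (note $GSp(2) = GL(2)$ with $\c = \det$). Since ${}_tg_{1} = g_{1}$ for a $1 \times 1$ block, the embedding becomes \[ (g_{1}, g) \longrightarrow \text{diag}\{g_{1}, g, \c(g) g_{1}^{-1}\}, \qquad g_{1} \in GL(1), \ g \in GSp(2), \] and the corresponding Levi subgroup of $G = Sp(4)$ is $M \cong GL(1) \times Sp(2)$.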
Under this isomorphism, the Weyl group $W(\widetilde{M}) = \text{Norm}(A_{\widetilde{M}}, \widetilde{G})/\widetilde{M}$ acts on $\widetilde{M}$ by permuting the general linear factors and changing some $g_{i}$ to $\c(g){}_tg^{-1}_{i}$ (also compositions of these). Finally, note $M \cong GL(n_{1}) \times \cdots \times GL(n_{r}) \times G_{-}$ and $W(M) \cong W(\widetilde{M})$. \subsubsection{Twisted endoscopic groups}\label{subsubsec: twisted endoscopy} Let $G$ be a quasisplit connected reductive group over $F$. When $F$ is local, we have an isomorphism \begin{align}\label{eq: local characters} H^{1}(W_{F}, Z(\D{G})) \longrightarrow \text{Hom}(G(F), \mathbb{C}^{\times}). \end{align} When $F$ is global, we have a homomorphism \begin{align}\label{eq: global characters} H^{1}(W_{F}, Z(\D{G})) \longrightarrow \text{Hom}(G(\mathbb{A}_{F})/G(F), \mathbb{C}^{\times}), \end{align} where $\text{Hom}(G(\mathbb{A}_{F})/G(F), \mathbb{C}^{\times})$ denotes the group of quasicharacters of $G(\mathbb{A}_{F})$ trivial on $G(F)$. If we let $F_{v}$ be the localization of $F$ at a place $v$, then there is a commutative diagram \begin{align*} \xymatrix{ H^{1}(W_{F}, Z(\D{G})) \ar[d] \ar[r] & \text{Hom}(G(\mathbb{A}_{F})/ G(F), \mathbb{C}^{\times}) \ar[d] \\ H^{1}(W_{F_{v}}, Z(\D{G}_{v})) \ar[r] & \text{Hom}(G(F_{v}), \mathbb{C}^{\times}). } \end{align*} Let \[ \text{Ker}^{1}(W_{F}, Z(\D{G})) := \bigcap_{v} \text{Ker} \{H^{1}(W_{F}, Z(\D{G})) \rightarrow H^{1}(W_{F_{v}}, Z(\D{G}_{v}))\}. \] This group is finite and gives the kernel of \eqref{eq: global characters}. Suppose $\widetilde{G}$, $G$, $D$ and $\c$ are defined as in Section~\ref{subsubsec: notations}. Then we have the following fact. \begin{lemma} Suppose $\Gal{F}$ acts trivially on $Z(\D{G})$ and $D$ is split. Then $\text{Ker}^{1}(W_{F}, Z(\D{\widetilde{G}})) = 1$. \end{lemma} \begin{proof} It is a consequence of Chebotarev's density theorem (see \cite{Neukirch:1999}, Corollary VII.13.10) that \[ \text{Ker}^{1}(W_{F}, Z(\D{G})) = \text{Ker}^{1}(W_{F}, \D{D}) = 1.
\] Then we consider the exact sequence \[ \xymatrix{1 \ar[r] & \D{D} \ar[r] & Z(\D{\widetilde{G}}) \ar[r] & Z(\D{G}) \ar[r] &1,} \] which induces a commutative diagram \[ \xymatrix{\pi_{0}(Z(\D{G})^{\Gal{}}) \ar[d]^{\simeq} \ar[r] & H^{1}(W_{F}, \D{D}) \ar[d] \ar[r] & H^{1}(W_{F}, Z(\D{\widetilde{G}})) \ar[d] \ar[r] & H^{1}(W_{F}, Z(\D{G})) \ar[d] \\ \pi_{0}(Z(\D{G}_{v})^{\Gal{v}}) \ar[r] & H^{1}(W_{F_{v}}, \D{D}_{v}) \ar[r] & H^{1}(W_{F_{v}}, Z(\D{\widetilde{G}}_{v})) \ar[r] & H^{1}(W_{F_{v}}, Z(\D{G}_{v})),} \] with both the top and bottom rows being exact. Suppose $u \in \text{Ker}^{1}(W_{F}, Z(\D{\widetilde{G}}))$. Then by the commutativity of the right square and the fact that $\text{Ker}^{1}(W_{F}, Z(\D{G})) = 1$, $u$ has a preimage $w$ in $H^{1}(W_{F}, \D{D})$. Since $\text{Ker}^{1}(W_{F}, \D{D}) = 1$, the Langlands correspondence for tori allows us to identify $H^{1}(W_{F}, \D{D})$ with $\text{Hom}(D(\mathbb{A}_{F})/D(F), \mathbb{C}^{\times})$. Now without loss of generality, we can assume $w \neq 1$. By the commutativity of the left square and the fact that the left end vertical map is an isomorphism, the localization of $w$ is determined by the localizations of those in the image of $\pi_{0}(Z(\D{G})^{\Gal{}})$ in $H^{1}(W_{F}, \D{D})$. Finally, we use Chebotarev's density theorem again to conclude that $w$ has to lie in the image of $\pi_{0}(Z(\D{G})^{\Gal{}})$, and hence $u = 1$. \end{proof} \begin{corollary}\label{cor: ker1} Suppose $\widetilde{G}$ is of type \eqref{eq: similitude} and $\c$ is the generalized similitude character. Then \[ \text{Ker}^{1}(W_{F}, Z(\D{\widetilde{G}})) = \text{Ker}^{1}(W_{F}, Z(\D{G})) = 1. \] \end{corollary} \begin{proof} One just needs to observe that in this case $\Gal{F}$ acts trivially on $Z(\D{G})$, and $D = \mathbb{G}_{m}$. \end{proof} Let $\theta$ be an automorphism of $G$, and $\omega$ be a quasicharacter of $G(F)$ if $F$ is local, or a quasicharacter of $G(\mathbb{A}_{F})$ trivial on $G(F)$ if $F$ is global.
We define a twisted endoscopic datum for $(G, \theta, \omega)$ to be a triple $(H, s, \xi)$, where $H$ is a quasisplit connected reductive group over $F$, $s$ is a semisimple element in $\D{G} \rtimes \D{\theta}$, and $\xi$ is an $L$-embedding from $\L{H}$ to $\L{G}$ satisfying the following conditions: \begin{enumerate} \item \( \text{Int}(s) \circ \xi = {\bold a} \cdot \xi, \) for a $1$-cocycle ${\bold a}$ of $W_{F}$ in $Z(\D{G})$ which is mapped to $\omega$ under \eqref{eq: local characters} or \eqref{eq: global characters}; \item $\D{H} \cong \text{Cent}(s, \D{G})^{0}$ through $\xi$. \end{enumerate} Here $H$ is called a twisted endoscopic group of $G$, and for abbreviation we will denote $(H, s, \xi)$ by $H$. In this definition, we have required $\xi$ to be an $L$-embedding of $\L{H}$. But in general, $\xi$ can be an embedding of a certain extension of $\D{H}$ by $W_{F}$, which may not necessarily be isomorphic to $\L{H}$. In that case, one has to consider $z$-pairs (see \cite{KottwitzShelstad:1999}, 2.2). Since we do not need to deal with this general situation in this paper, we are content with the current definition. Two twisted endoscopic data $(H, s, \xi)$ and $(H', s', \xi')$ are called isomorphic if there exists an element $g \in \D{G}$ such that $g\xi(\L{H})g^{-1} = \xi'(\L{H'})$ and $gsg^{-1} \in s'Z(\D{G})$. Such a $g$ is called an isomorphism. We denote by $\End{}{G^{\theta}, \omega}$ the set of isomorphism classes of twisted endoscopic data for $(G, \theta, \omega)$. When $\theta = id$ and $\omega =1$, we get the ordinary endoscopic data, and we abbreviate $\End{}{G^{\theta}, \omega}$ to $\End{}{G}$. A twisted endoscopic datum $(H, s, \xi)$ is called {\bf elliptic} if $\xi(Z(\D{H})^{\Gal{F}})^{0} \subseteq Z(\D{G})$, and we denote by $\End{ell}{G^{\theta}, \omega}$ the set of isomorphism classes of twisted elliptic endoscopic data for $(G, \theta, \omega)$. When $G = GL(N)$, we write $\End{ell}{N^{\theta}}$ for $\End{ell}{GL(N)^{\theta}}$.
One can see from the definition that a twisted endoscopic group for $G$ can be viewed as an elliptic endoscopic group of some $\theta$-stable Levi subgroup $M$ (which also admits a $\theta$-stable parabolic subgroup $P \supseteq M$) of $G$. On the other hand, one can obtain all the twisted endoscopic groups of $G$ by taking the Levi subgroups of the twisted elliptic endoscopic groups of $G$. If $(H, s, \xi)$ is a twisted endoscopic datum for $(G, \theta, \omega)$, we denote the automorphism group of this twisted endoscopic datum by $\text{Aut}_{G}(H)$. By our definition, it is a subgroup of $\D{G}$. We define the inner automorphism group $\text{Int}_{G}(H)$ of this twisted endoscopic datum to be $\D{H}Z(\D{G})^{\Gal{F}}$, and the outer automorphism group to be \[ \text{Out}_{G}(H) = \text{Aut}_{G}(H) / \text{Int}_{G}(H). \] By fixing an $F$-splitting for $H$, we get a homomorphism from $\text{Out}_{G}(H)$ to the outer automorphism group $\text{Out}(H)$ of $H$. Let us denote the image by $\text{Out}(H, G)$, and define \[ C := \{z \in Z(\D{G}): \sigma (z) z^{-1} \in Z(\D{G}) \cap \D{H} \text{ for $\sigma \in \Gal{F}$ }\}. \] Then there is an exact sequence \begin{align}\label{eq: automorphism of endoscopic datum} \xymatrix{1 \ar[r] & C/C \cap \D{H}Z(\D{G})^{\Gal{F}} \ar[r] & \text{Out}_{G}(H) \ar[r] & \text{Out}(H, G) \ar[r] &1.} \end{align} When $F$ is local, there is an action of $\text{Out}_{G}(H)$ on $\H(H)$ (or equivalently $C^{\infty}_{c}(H(F))$). For $g \in \text{Out}_{G}(H)$, let us denote its image in $\text{Out}(H, G)$ by $\tau_{g}$ and choose a representative $\dot{g}$ in $\text{Aut}_{G}(H)$ such that $\text{Int}(\dot{g})$ preserves a $\Gal{F}$-splitting of $\D{H}$. Then $b_{g}(w) = \dot{g}^{-1} \xi(1 \rtimes w) \dot{g} \xi(1 \rtimes w)^{-1}$ defines a $1$-cocycle of $W_{F}$ in $Z(\D{H})$ and it induces a quasicharacter $\omega_{g}$ of $H(F)$ by \eqref{eq: local characters}.
So the action of $\text{Out}_{G}(H)$ on $\H(H)$ sends $f(h)$ to $^{g}f(h) = f(\tau_{g}(h)) \omega_{g}(h)^{-1}$. In all the cases that we will be considering in this paper, one can always split the exact sequence \eqref{eq: automorphism of endoscopic datum} and get $\text{Out}_{G}(H) \cong \text{Out}(H, G) \times (C/C \cap \D{H}Z(\D{G})^{\Gal{F}})$ such that $\text{Out}(H, G)$ acts on $\H(H)$ through its action on $H(F)$. When $G$ is a product of symplectic groups and special even orthogonal groups, $\text{Out}_{G}(H) \cong \text{Out}(H,G)$. When $G = GL(N)$, we write $\text{Out}_{N}(H)$ for $\text{Out}_{GL(N)}(H)$. Suppose $\widetilde{G}$, $G$, $D$, $\c$ are defined as in Section~\ref{subsec: groups}. Let $\theta$ be an automorphism of $\widetilde{G}$, and we assume {\bf $\c$ is $\theta$-invariant}; then $\theta$ also induces an automorphism of $G$. If $\omega_{\widetilde{G}}$ is a quasicharacter associated with $\widetilde{G}$ as in the setup of twisted endoscopic data, let us write $\omega_{G}$ for the restriction of $\omega_{\widetilde{G}}$ to $G(F)$ if $F$ is local or to $G(\mathbb{A}_{F})/G(F)$ if $F$ is global. The following proposition describes the relation for twisted endoscopic data between $\widetilde{G}$ and $G$. \begin{proposition}\label{prop: lifting endoscopic group} There is a one-to-one correspondence between $\End{}{G^{\theta}, \omega_{G}}$ and \[ \bigsqcup_{\omega_{\widetilde{G}}|_{G} = \omega_{G}} \End{}{\widetilde{G}^{\theta}, \omega_{\widetilde{G}}}, \] such that if $G'$ corresponds to $\widetilde{G}'$, then there exists an exact sequence \[ \xymatrix{1 \ar[r] & G' \ar[r] & \widetilde{G}' \ar[r]^{\c'} & D \ar[r] & 1 }. \] Moreover, the same is true for twisted elliptic endoscopic data.
\end{proposition} \begin{proof} We say a twisted endoscopic datum $(\widetilde{G}', \lif{s}, \lif{\xi})$ for $(\widetilde{G}, \theta, \omega_{\widetilde{G}})$ corresponds to a twisted endoscopic datum $(G', s, \xi)$ for $(G, \theta, \omega_{G})$ if $\bold{p}(\lif{s}) = s$ and the following diagram commutes \[ \xymatrix{ \L{\widetilde{G}'} \ar[r]^{\lif{\xi}} \ar[d]^{\bold{p}} & \L{\widetilde{G}} \ar[d]^{\bold{p}} \\ \L{G'} \ar[r]^{\xi} & \L{G}.} \] In \cite{Xu:2016} we have shown this gives a one-to-one correspondence between isomorphism classes of twisted endoscopic data when $F$ is local. In fact, our proof also works in the global case except that we need to use a global lifting result of Labesse for homomorphisms from the global Weil group to $L$-groups (see the paragraph after Theorem~\ref{thm: lifting parameter}). Moreover, it is easy to see $\lif{\xi}(Z(\D{\widetilde{G}'})^{\Gal{F}})^{0} \subseteq Z(\D{\widetilde{G}})$ if and only if $\xi(Z(\D{G}')^{\Gal{F}})^{0} \subseteq Z(\D{G})$, so $(\widetilde{G}', \lif{s}, \lif{\xi})$ is elliptic if and only if $(G', s, \xi)$ is elliptic. \end{proof} \begin{remark}\label{rk: lifting endoscopic group} The most important case for us is when $\omega_{G} = 1$. Then Proposition~\ref{prop: lifting endoscopic group} shows there is a one-to-one correspondence between the $\theta$-twisted endoscopic data $\End{}{G^{\theta}}$ and the $(\theta, \omega)$-twisted endoscopic data \[ \bigsqcup_{\omega} \End{}{\widetilde{G}^{\theta}, \omega}, \] where $\omega$ runs through quasicharacters of $\widetilde{G}(F)/G(F)$ if $F$ is local and quasicharacters of $\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(F)G(\mathbb{A}_{F})$ if $F$ is global. The same is true for elliptic endoscopic data. \end{remark} As our most important examples, let us consider the general symplectic groups and connected general even orthogonal groups with trivial automorphisms, and we have the following table (cf.
\cite{Arthur:2013}, 1.2 and \cite{Morel:2011}, 2.1): let $n = n_{1} + n_{2}$. \begin{spacing}{1.5} \begin{center} \begin{tabular}{| c | m{5cm} || c | m{5cm} | } \hline $G$ & $\End{ell}{G}$ & $\widetilde{G}$ & $\tEnd{ell}{\widetilde{G}}$ \\ \hline $Sp(2n)$ & $Sp(2n_1) \times SO(2n_2, \eta) $ & $GSp(2n)$ & $G(Sp(2n_1) \times SO(2n_2, \eta))$ \newline $ \omega = \eta \circ \c$ \\ \hline $SO(2n)$ & $SO(2n_1, \eta) \times SO(2n_2, \eta)$ & $GSO(2n)$ & $G(SO(2n_1, \eta) \times SO(2n_2, \eta))$ \newline $ \omega = \eta \circ \c$ \\ \hline $SO(2n, \eta')$ & $SO(2n_1, \eta) \times SO(2n_2, \eta\eta') $ & $GSO(2n, \eta')$ & $G(SO(2n_1, \eta) \times SO(2n_2, \eta \eta'))$ \newline $\omega = \eta \circ \c$ \\ \hline \end{tabular} \end{center} \end{spacing} Note in the cases above the isomorphism classes of twisted endoscopic data are completely determined by the twisted endoscopic groups. But that is not the case in general. For example, in the case of connected general even orthogonal groups, if we let $\theta_{0}$ be the outer automorphism induced by the conjugation action of the full orthogonal group, then the isomorphism classes of $\theta_{0}$-twisted elliptic endoscopic data of $SO(2n, \eta')$ are classified by $\theta_{0}$-twisted elliptic endoscopic groups $Sp(2n_1) \times Sp(2n_2)$ $(n = n_{1} + n_{2} +1)$ with a pair of quadratic characters $(\eta,\eta\eta')$. Correspondingly, the isomorphism classes of $(\theta_{0},\omega)$-twisted elliptic endoscopic data of $GSO(2n, \eta')$ are classified by $(\theta_{0}, \omega)$-twisted elliptic endoscopic groups $G(Sp(2n_1) \times Sp(2n_2))$ $(n = n_{1} + n_{2} +1)$ with a pair of quadratic characters $(\eta,\eta\eta')$ and $\omega =\eta \circ \c$. \subsection{Langlands parameters}\label{subsec: Langlands parameters} Suppose $G$ is a quasisplit connected reductive group over $F$. A Langlands parameter of $G$ is a $\D{G}$-conjugacy class of admissible homomorphisms from the Langlands group $L_{F}$ to the $L$-group of $G$ (cf.
\cite{Borel:1979}). We denote the set of Langlands parameters of $G$ by $\P{G}$, and for any $\phi \in \P{G}$ we denote a representative by $\underline{\phi}: L_{F} \rightarrow \L{G}$. If $F$ is local, the Langlands group is defined as follows, \[ L_{F} = \begin{cases} W_{F} & \text{if $F$ is archimedean}, \\ W_{F} \times SL(2, \mathbb{C}) & \text{if $F$ is nonarchimedean}. \end{cases} \] If $F$ is global, the existence of the Langlands group is still conjectural. Let $\widetilde{G}$, $G$, $D$ and $\c$ be defined as in Section~\ref{subsubsec: notations}. The following theorem shows the relation for local Langlands parameters between $G$ and $\widetilde{G}$. \begin{theorem}[Labesse]\label{thm: lifting parameter} Suppose $F$ is local. Then every Langlands parameter $\phi$ of $G$ can be lifted to a Langlands parameter $\tilde{\phi}$ of $\widetilde{G}$ in the sense that the following diagram commutes \begin{displaymath} \xymatrix{ L_{F} \ar[rr]^{\underline{\tilde{\phi}}} \ar[drr]_{\underline{\phi}} & & \L{\widetilde{G}} \ar[d]^{\bold{p}} \\ & & \L{G}.} \end{displaymath} \end{theorem} In fact the global analogue of this theorem is also true if one uses the global Weil group $W_{F}$ instead of the global Langlands group $L_{F}$. Both the local and global cases are proved in (\cite{Labesse:1985}, Theorem 8.1). In the global case, since we do not have the global Langlands group yet, this kind of lifting theorem for global Langlands parameters is unavailable. However, let us assume for the moment the existence of the global Langlands group and the same kind of lifting theorem, so that we can investigate the relation for both local and global Langlands parameters between $G$ and $\widetilde{G}$ in a uniform way. Moreover, the consequences of this investigation will serve as motivations for the later definitions (see Section~\ref{sec: Arthur's theory}) that complement the lack of a global Langlands group.
To further simplify our discussion, we are going to assume \begin{align}\label{eq: group assumption} \text{Ker}^{1}(W_{F}, Z(\D{\widetilde{G}})) = \text{Ker}^{1}(W_{F}, Z(\D{G})) = \text{Ker}^{1}(W_{F}, \D{D})= 1 \end{align} when $F$ is global. This assumption allows us to treat the local and global cases at the same time, and it also suffices for our purpose in view of Corollary~\ref{cor: ker1}. Let $\Sigma$ be a finite abelian group of automorphisms of $\widetilde{G}$ preserving an $F$-splitting of $\widetilde{G}$, and we assume $\c$ is {\bf $\Sigma$-invariant}, so $\Sigma$ also acts on $G$. We denote the dual automorphisms by $\D{\Sigma}$ and form the semidirect products $\D{\widetilde{G}}^{\Sigma} := \D{\widetilde{G}} \rtimes \D{\Sigma}$ and $\D{G}^{\Sigma} := \D{G} \rtimes \D{\Sigma}$. Let $\Sigma$ act on $\P{\widetilde{G}}$ and $\P{G}$ through the action of $\D{\Sigma}$ on $\D{\widetilde{G}}$ and $\D{G}$ respectively. For $\theta \in \Sigma$, we denote by $\P{G^{\theta}}$ the set of $\phi \in \P{G}$ such that $\phi^{\theta} = \phi$. Suppose $F$ is either local or global. For any $\phi \in \P{G}$ we choose a representative $\underline{\phi}$. Let $L_{F}$ act on $\D{D}$, $\D{G}^{\Sigma}$, and $\D{\widetilde{G}}^{\Sigma}$ by conjugation through $\underline{\phi}$. We denote the corresponding group cohomology by $H^{*}_{\underline{\phi}}(L_{F}, \cdot)$. Note $H^{0}_{\underline{\phi}}(L_{F}, \D{D}) = \D{D}^{\Gal{}}$, $H^{1}_{\underline{\phi}}(L_{F}, \D{D}) = H^{1}(W_{F}, \D{D})$, and \[ S^{\Sigma}_{\underline{\phi}} := \text{Cent}(\Im \underline{\phi}, \D{G}^{\Sigma}) = H^{0}_{\underline{\phi}}(L_{F}, \D{G}^{\Sigma}), \] \[ S_{\underline{\tilde{\phi}}}^{\Sigma} := \text{Cent}(\Im \underline{\tilde{\phi}}, \D{\widetilde{G}}^{\Sigma}) = H^{0}_{\underline{\phi}}(L_{F}, \D{\widetilde{G}}^{\Sigma}).
\] The short exact sequence \[ \xymatrix{1 \ar[r] & \D{D} \ar[r] & \D{\widetilde{G}}^{\Sigma} \ar[r] & \D{G}^{\Sigma} \ar[r] & 1} \] induces a long exact sequence \begin{align*} \xymatrix{1 \ar[r] & \D{D}^{\Gamma} \ar[r] & S_{\underline{\tilde{\phi}}}^{\Sigma} \ar[r] & S_{\underline{\phi}}^{\Sigma} \ar[r]^{\delta \quad \quad} & H^{1}(W_{F}, \D{D}),} \end{align*} and hence \begin{align}\label{eq: old twisted endoscopic sequence} \xymatrix{1 \ar[r] & S_{\underline{\tilde{\phi}}}^{\Sigma}/\D{D}^{\Gal{}} \ar[r]^{\quad \iota} & S_{\underline{\phi}}^{\Sigma} \ar[r]^{\delta \quad \quad} & H^{1}(W_{F}, \D{D}).} \end{align} To describe $\delta$, we can identify \[ S_{\underline{\phi}}^{\Sigma} = \{ \lif{s} \in \D{\widetilde{G}}^{\Sigma}: \lif{s} \underline{\tilde{\phi}}(u) \lif{s}^{-1}\underline{\tilde{\phi}}(u)^{-1} \in \D{D}, \text{ for all } u \in L_{F}\} / \D{D}. \] Then $\delta(s) : u \longmapsto \lif{s} \underline{\tilde{\phi}}(u) \lif{s}^{-1}\underline{\tilde{\phi}}(u)^{-1}$, where $\lif{s}$ is a preimage of $s$ in $\D{\widetilde{G}}^{\Sigma}$, and $\delta(s)$ factors through $W_{F}$. Concerning \eqref{eq: old twisted endoscopic sequence}, we have the following lemma. \begin{lemma}\label{lemma: centralizer} The image of $\delta$ consists of ${\bold a} \in H^{1}(W_{F}, \D{D})$ such that \[ \tilde{\phi}^{\theta} = \tilde{\phi} \otimes {\bold a} \] for some $\theta \in \Sigma$, and in particular it is finite. \end{lemma} \begin{proof} We have shown the lemma when $F$ is local in \cite{Xu:2016}. Moreover, the same argument applies to the global case except for the finiteness of $\Im \delta$.
When $F$ is global, we need to use the commutative diagram \[ \xymatrix{ 1 \ar[r] & S^{\Sigma}_{\tilde{\phi}} / \D{D}^{\Gal{}} \ar[r] \ar@{^{(}->}[d] & S^{\Sigma}_{\phi} \ar[r]^{\delta \quad \quad} \ar@{^{(}->}[d] & H^{1}(W_{F}, \D{D}) \ar[d] \\ 1 \ar[r] & S^{\Sigma}_{\tilde{\phi}_{v}} / \D{D}_{v}^{\Gal{v}} \ar[r] & S^{\Sigma}_{\phi_{v}} \ar[r]^{\delta_{v} \quad \quad} & H^{1}(W_{F_{v}}, \D{D}_{v}).} \] Since $\com[0]{S_{\phi}}$ is mapped into $\com[0]{S_{\phi_{v}}}$ and $\delta_{v}$ is trivial on $\com[0]{(S^{\Sigma}_{\tilde{\phi}_{v}}/ \D{D}^{\Gal{v}})} = \com[0]{S_{\phi_{v}}}$, the map $\delta = \prod_{v} \delta_{v}$ is trivial on $\com[0]{S_{\phi}}$. This implies the image of $\delta$ is finite. \end{proof} Consider the exact sequence \[ \xymatrix{1 \ar[r] & \D{D} \ar[r] & Z(\D{\widetilde{G}}) \ar[r] & Z(\D{G}) \ar[r] & 1.} \] It induces \[ \xymatrix{1 \ar[r] & \D{D}^{\Gamma} \ar[r] & Z(\D{\widetilde{G}})^{\Gal{}} \ar[r] & Z(\D{G})^{\Gal{}} \ar[r]^{\delta \quad} & H^{1}(W_{F}, \D{D}) \ar[r] & H^{1}(W_{F}, Z(\D{\widetilde{G}})). } \] So $\text{Ker} \delta|_{Z(\D{G})^{\Gal{}}} = Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}^{\Gamma}$. Let \( \bar{H}^{1}(W_{F}, \D{D}) := H^{1}(W_{F}, \D{D}) / \delta(Z(\D{G})^{\Gal{}}). \) Taking the quotient of \eqref{eq: old twisted endoscopic sequence} by $Z(\D{G})^{\Gal{}}$, we get \begin{align}\label{eq: twisted endoscopic sequence mod center} \xymatrix{1 \ar[r] & \cS{\underline{\tilde{\phi}}}^{\Sigma} \ar[r]^{\iota} & \cS{\underline{\phi}}^{\Sigma} \ar[r]^{\bar{\delta} \quad \quad} & \bar{H}^{1}(W_{F}, \D{D}),} \end{align} where $\cS{\underline{\tilde{\phi}}}^{\Sigma} = S_{\underline{\tilde{\phi}}}^{\Sigma}/Z(\D{\widetilde{G}})^{\Gal{}}$ and $\cS{\underline{\phi}}^{\Sigma} = S_{\underline{\phi}}^{\Sigma}/Z(\D{G})^{\Gal{}}$. Since $\Im \delta$ is finite, we have $\cS{\underline{\tilde{\phi}}}^{0} = \cS{\underline{\phi}}^{0}$.
After taking the quotient of \eqref{eq: twisted endoscopic sequence mod center} by the identity component, we get \begin{align}\label{eq: twisted endoscopic sequence} \xymatrix{1 \ar[r] & \S{\underline{\tilde{\phi}}}^{\Sigma} \ar[r]^{\iota} & \S{\underline{\phi}}^{\Sigma} \ar[r]^{\bar{\delta} \quad \quad} & \bar{H}^{1}(W_{F}, \D{D}),} \end{align} where $\S{\underline{\tilde{\phi}}}^{\Sigma} = \cS{\underline{\tilde{\phi}}}^{\Sigma} / \cS{\underline{\tilde{\phi}}}^{0}$ and $\S{\underline{\phi}}^{\Sigma} = \cS{\underline{\phi}}^{\Sigma} / \cS{\underline{\phi}}^{0}$. There are natural maps from $S^{\Sigma}_{\underline{\phi}}, \cS{\underline{\phi}}^{\Sigma}$, and $\S{\underline{\phi}}^{\Sigma}$ to $\D{\Sigma}$, and for $\theta \in \Sigma$, we denote the preimages of $\D{\theta} \in \D{\Sigma}$ by $S^{\theta}_{\underline{\phi}}, \cS{\underline{\phi}}^{\theta}$ and $\S{\underline{\phi}}^{\theta}$ respectively. By the Langlands correspondence for tori and the assumption $\text{Ker}^{1}(W_{F}, \D{D})= 1$, we can identify $H^{1}(W_{F}, \D{D})$ with $\text{Hom}(D(F), \mathbb{C}^{\times})$ if $F$ is local or $\text{Hom}(D(\mathbb{A}_{F})/D(F), \mathbb{C}^{\times})$ if $F$ is global. Then we can compose with $\c$ to get a homomorphism from $H^{1}(W_{F}, \D{D})$ to $\text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times})$ if $F$ is local or $\text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times})$ if $F$ is global. Since $\delta(Z(\D{G})^{\Gal{}})$ is trivial in $H^{1}(W_{F}, Z(\D{\widetilde{G}}))$, it induces the trivial character on $\widetilde{G}(F)$ if $F$ is local or $\widetilde{G}(\mathbb{A}_{F})$ if $F$ is global. So we have a homomorphism \[ r: \bar{H}^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times}) \] if $F$ is local, and \[ r: \bar{H}^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}) \] if $F$ is global.
In the local case, $r$ is an isomorphism due to the fact that \eqref{eq: local characters} is an isomorphism. For the global case, we have the following lemma. \begin{lemma}\label{lemma: identification} If $F$ is global and $\widetilde{G}$ is of type \eqref{eq: similitude}, then $r$ is an isomorphism. \end{lemma} \begin{proof} First we consider the following diagram \begin{align}\label{diagram: global Langlands correspondence for characters} \xymatrix{ \text{Hom}(\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F), \mathbb{C}^{\times}) & \text{Hom}(D(\mathbb{A}_{F})/D(F), \mathbb{C}^{\times}) \ar[l]_{\c^{*}} & \\ H^{1}(W_{F}, Z(\D{\widetilde{G}})) \ar[u] & H^{1}(W_{F}, \D{D}) \ar[u]_{\simeq} \ar[l] & \pi_{0}(Z(\D{G})^{\Gal{}}). \ar[l]_{\delta} } \end{align} By Corollary~\ref{cor: relative Hasse principle}, we have $\Im \c^{*} = \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times})$, and hence $r$ is surjective. On the other hand, the kernel of \begin{align*} H^{1}(W_{F}, Z(\D{\widetilde{G}})) \longrightarrow \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F), \mathbb{C}^{\times}) \end{align*} is \( \text{Ker}^{1}(W_{F},Z(\D{\widetilde{G}})) = 1 \) by Corollary~\ref{cor: ker1}. Therefore $r$ is also injective.
\end{proof} Let us denote the composition $r \circ \bar{\delta}$ by $\a$, then we can rewrite \eqref{eq: twisted endoscopic sequence} as \begin{align \label{eq: local twisted endoscopic sequence} \xymatrix{1 \ar[r] & \S{\underline{\tilde{\phi}}}^{\Sigma} \ar[r]^{\iota} & \S{\underline{\phi}}^{\Sigma} \ar[r]^{\a \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times})} \end{align} if $F$ is local, and \begin{align \label{eq: global twisted endoscopic sequence} \xymatrix{1 \ar[r] & \S{\underline{\tilde{\phi}}}^{\Sigma} \ar[r]^{\iota} & \S{\underline{\phi}}^{\Sigma} \ar[r]^{\a \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times})} \end{align} if $F$ is global. Note in the global case, we only know it is exact when $\widetilde{G}$ is of type \eqref{eq: similitude} according to the previous lemma. Sometimes, we want to distinguish the map $\a$ for different groups, so we will also write $\a^{G} = \a$. Next we want to discuss the relation between lifting Langlands parameters (see Theorem~\ref{thm: lifting parameter}) and lifting twisted endoscopic groups (see Proposition~\ref{prop: lifting endoscopic group}). Suppose $F$ is local or global and $\phi \in \P{G}$. For any semisimple element $s \in \cS{\underline{\phi}}^{\theta}$, let $\D{G}' := \text{Cent}(s, \D{G})^{0}$ and it can be equipped with a Galois action given by $\underline{\phi}$. This determines a quasisplit connected reductive group $G'$, and $\underline{\phi}$ will factor through $\L{G'}$ for some $\theta$-twisted endoscopic datum $(G', s, \xi)$ of $G$, and hence we get a parameter $\phi' \in \P{G'}$. In this way, we call $(G', \phi')$ corresponds to $(\phi, s)$, and we denote it by $(G', \phi') \rightarrow (\phi, s)$. 
By Proposition~\ref{prop: lifting endoscopic group}, $(G', s, \xi)$ can be lifted to a $(\theta, \omega)$-twisted endoscopic datum $(\widetilde{G}', \lif{s}, \lif{\xi})$ of $\widetilde{G}$ for some character $\omega$ of $\widetilde{G}(F)/G(F)$ if $F$ is local or $\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F})$ if $F$ is global. Then by Theorem~\ref{thm: lifting parameter}, together with the global assumption made after it, we obtain a lift $\tilde{\phi}'$ of $\phi'$ in $\P{\widetilde{G}'}$. All of this can be summarized in the diagram below \begin{displaymath} \xymatrix{L_{F} \ar[r]^{\underline{\tilde{\phi}}'} \ar[dr]_{\underline{\phi}'} & \L{\widetilde{G}'} \ar[r]^{\lif{\xi}} \ar[d] & \L{\widetilde{G}} \ar[d] \\ & \L{G'} \ar[r]^{\xi} & \L{G}.} \end{displaymath} Then we have the following lemma. \begin{lemma}\label{lemma: twisted character} $\a(s) = \omega$. \end{lemma} \begin{proof} It has been shown for the local case in \cite{Xu:2016}, and the proof for the global case is the same. \end{proof} \begin{remark}\label{rk: twisted character} From this lemma we see that the character $\omega$ associated with the twisted endoscopic datum $\widetilde{G}'$ only depends on the image of $s$ in $\S{\phi}^{\theta}$. In the global case, lifting of Langlands parameters is not available due to the lack of a global Langlands group. However, one can always lift twisted endoscopic groups in both the local and global cases, and this lemma underlies our later definition of the map $\a$ (see \eqref{eq: global twisted endoscopic sequence}) in the global case. \end{remark} \subsection{Representations}\label{subsec: representations} Let us assume $F$ is a local field, and $G$, $\widetilde{G}$, $D$, $\c$ are defined as in Section~\ref{subsubsec: notations}. In this section, we would like to recall some results about the restriction map $\Pkt{}(\widetilde{G}(F)) \rightarrow \Pkt{}(G(F))$ from (\cite{Xu:2016}, Section 6.1).
\begin{lemma}\label{lemma: finite restriction} If $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(F)$, then the restriction of $\tilde{\pi}$ to $G(F)$ is a direct sum of finitely many irreducible admissible representations. \end{lemma} \begin{theorem}[J.~D. Adler and D. Prasad \cite{AdlerPrasad:2006}]\label{thm: restriction multiplicity one} Suppose $\widetilde{G}$ is a quasisplit general symplectic group or connected general even orthogonal group, and $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(F)$, then the restriction of $\tilde{\pi}$ to $G(F)$ is multiplicity free. \end{theorem} \begin{remark}\label{rk: restriction multiplicity one} This theorem can be easily extended to the groups $\widetilde{G}$ of type \eqref{eq: similitude}. To do so, we can first extend a representation of $\widetilde{G}(F)$ to $\lif{\widetilde{G}}(F)$ (see \eqref{eq: product group}), and then restrict it to $G(F)$. \end{remark} \begin{lemma}\label{lemma: existence} If $\r$ is an irreducible admissible representation of $G(F)$, then there exists a unique irreducible admissible representation $\tilde{\pi}$ of $\widetilde{G}(F)$ up to twisting by $\text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times})$, such that it contains $\r$ in its restriction to $G(F)$. \end{lemma} If $\r$ is an irreducible admissible representation of $G(F)$, let us denote \[ \widetilde{G}(\r) = \{ g \in \widetilde{G}(F) : \r^g \cong \r \}. \] If $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(F)$, let us denote \[ X(\tilde{\pi}) = \{ \omega \in (\widetilde{G}(F)/Z_{\widetilde{G}}(F)G(F))^{*} : \tilde{\pi} \otimes \omega \cong \tilde{\pi} \}.
\] \begin{proposition} \label{prop: restriction multiplicity one} Suppose $\widetilde{G}$ is of type \eqref{eq: similitude}, $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(F)$, and $\r$ is contained in its restriction to $G(F)$; then for $\omega \in (\widetilde{G}(F)/Z_{\widetilde{G}}(F)G(F))^{*}$, $\omega$ is in $X(\tilde{\pi})$ if and only if $\omega$ is trivial on $\widetilde{G}(\r)$. Moreover, the restriction of $\tilde{\pi}$ contains $|X(\tilde{\pi})|$ irreducible admissible representations of $G(F)$. \end{proposition} \begin{lemma} \label{lemma: restrict tempered character} Suppose $\tilde{\pi}$ is an irreducible admissible unitary representation of $\widetilde{G}(F)$; then $\tilde{\pi}$ is an essentially discrete series representation of $\widetilde{G}(F)$ if and only if its restriction to $G(F)$ is an essentially discrete series representation. The same is true of tempered representations. \end{lemma} \subsection{Langlands-Shelstad-Kottwitz transfer} \label{subsec: the transfer map} Let $F$ be a local field of characteristic zero and $G$ be a quasisplit connected reductive group over $F$. Suppose $\theta$ is an automorphism of $G$ preserving an $F$-splitting and $\omega_{G}$ is a quasicharacter of $G(F)$. We choose a quasicharacter $\chi$ on a closed subgroup $Z_{F}$ of $Z_{G}(F)$, and define $\H(G, \chi)$ to be the space of $\chi^{-1}$-equivariant smooth compactly supported functions on $G(F)$ (i.e., the equivariant Hecke algebra of $G$). Let $\delta$ be a strongly $\theta$-regular $\theta$-semisimple element of $G(F)$ such that $\omega_{G}$ is trivial on the $\theta$-twisted centralizer group $G^{\theta}_{\delta}(F)$ of $\delta$. We choose Haar measures on $G(F)$ and $G^{\theta}_{\delta}(F)$, and they induce a $G(F)$-invariant measure on $G^{\theta}_{\delta}(F) \backslash G(F)$.
Then we can form the $(\theta, \omega_{G})$-twisted orbital integral of $f \in \H(G, \chi)$ over $\delta$ as \[ O^{\theta, \omega_{G}}_{G}(f, \delta) := \int_{G^{\theta}_{\delta}(F) \backslash G(F)} \omega_{G}(g)f(g^{-1}\delta \theta(g)) dg. \] We also form the $(\theta, \omega_{G})$-twisted stable orbital integral over $\delta$ as \[ SO^{\theta, \omega_{G}}_{G}(f, \delta) := \sum_{\{\delta'\}_{G(F)}^{\theta} \thicksim_{st} \{\delta\}_{G(F)}^{\theta}} O_{G}^{\theta, \omega_{G}}(f, \delta'), \] where the sum is over $\theta$-twisted conjugacy classes $\{\delta'\}^{\theta}_{G(F)}$ in the $\theta$-twisted stable conjugacy class of $\delta$ (i.e., $\delta' = g^{-1} \delta \theta(g)$ for some $g \in G(\bar{F})$), and the Haar measure on $G^{\theta}_{\delta'}(F)$ is translated from that on $G^{\theta}_{\delta}(F)$ by conjugation. Let $\mathcal{I}(G^{\theta, \omega_{G}}, \chi)$ ($\mathcal{SI}(G^{\theta, \omega_{G}}, \chi)$) be the space of $(\theta, \omega_{G})$-twisted (stable) orbital integrals of $\H(G, \chi)$ over the set $G^{\theta}_{reg}(F)$ of strongly $\theta$-regular $\theta$-semisimple elements of $G(F)$; then by definition we have projections \[ \xymatrix{\H(G, \chi) \ar@{->>}[r] & \mathcal{I}(G^{\theta, \omega_{G}}, \chi) \ar@{->>}[r] & \mathcal{SI}(G^{\theta, \omega_{G}}, \chi).} \] Suppose $\r$ is an irreducible admissible representation of $G(F)$ and $\chi_{\r}$ is the central character of $\r$. Let $\chi = \chi_{\r}|_{Z_{F}}$. Suppose $\r^{\theta} \cong \r \otimes \omega_{G}$, and let $A_{\r}(\theta, \omega_{G})$ be an intertwining operator between $\r \otimes \omega_{G}$ and $\r^{\theta}$ (it is uniquely determined up to a scalar). We then define the $(\theta, \omega_{G})$-twisted character of $\r$ to be the distribution \begin{align} \label{eq: twisted character} f_{G^{\theta}}(\r, \omega_{G}) := \mathrm{trace} \Big(\int_{Z_{F} \backslash G(F)} f(g)\r(g) dg \circ A_{\r}(\theta, \omega_{G})\Big), \end{align} for $f \in \H(G, \chi)$.
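For orientation, consider the untwisted specialization $\theta = 1$, $\omega_{G} = 1$ (a sketch of the familiar case, not a separate construction in the text): the intertwining operator $A_{\r}(1, 1)$ can be taken to be the identity, and \eqref{eq: twisted character} reduces to the ordinary Harish-Chandra character distribution
\begin{displaymath}
f_{G}(\r) = \mathrm{trace} \Big( \int_{Z_{F} \backslash G(F)} f(g)\r(g)\, dg \Big), \qquad f \in \H(G, \chi),
\end{displaymath}
while the twisted (stable) orbital integrals above collapse to the usual (stable) orbital integrals over ordinary conjugacy classes.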
By results of Harish-Chandra \cite{H-C:1963}, \cite{H-C:1999} in the non-twisted case, and Bouaziz \cite{Bouaziz:1987}, Clozel \cite{Clozel:1987} and Lemaire \cite{Lemaire:2016} in the twisted case, there exists a locally integrable function $\Theta^{G^{\theta}, \omega_{G}}_{\r}$ on $G(F)$ such that for $x \in G^{\theta}_{reg}(F)$ and $g \in G(F)$ \[ \Theta^{G^{\theta}, \omega_{G}}_{\r} (g^{-1} x \theta(g)) = \omega_{G}(g) \Theta^{G^{\theta}, \omega_{G}}_{\r}(x), \] and \[ f_{G^{\theta}}(\r, \omega_{G}) = \int_{Z_{F} \backslash G(F)} f(g)\Theta^{G^{\theta}, \omega_{G}}_{\r}(g) dg. \] By the twisted Weyl integration formula, one can show that this character defines a linear functional on $\mathcal{I}(G^{\theta, \omega_{G}}, \chi)$. A linear functional on $\mathcal{I}(G^{\theta, \omega_{G}}, \chi)$ is called {\bf stable} if it factors through $\mathcal{SI}(G^{\theta, \omega_{G}}, \chi)$. For a $(\theta, \omega_{G})$-twisted endoscopic datum $(H, s, \xi)$ of $G$, there is a map defined over $F$ from the semisimple conjugacy classes of $H(\bar{F})$ to the $\theta$-twisted conjugacy classes of $\theta$-semisimple elements in $G(\bar{F})$. We say a strongly regular semisimple element $\gamma \in H(\bar{F})$ is strongly $G$-regular if its associated $H(\bar{F})$-conjugacy class maps to a $\theta$-twisted $G(\bar{F})$-conjugacy class of strongly $\theta$-regular $\theta$-semisimple elements in $G(\bar{F})$. We denote the set of strongly $G$-regular semisimple elements of $H(F)$ by $H_{G-reg}(F)$. The transfer factor defined in \cite{KottwitzShelstad:1999} is a function \[ \Delta_{G, H}(\cdot, \cdot): H_{G-reg}(F) \times G^{\theta}_{reg}(F) \rightarrow \mathbb{C}, \] which is nonzero only when $\gamma \in H_{G-reg}(F)$ is a norm of $\delta \in G^{\theta}_{reg}(F)$, i.e., the $H(\bar{F})$-conjugacy class of $\gamma$ maps to the $\theta$-twisted $G(\bar{F})$-conjugacy class of $\delta$.
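To unpack the notion of norm in the simplest setting (a sketch for the untwisted case $\theta = 1$, $\omega_{G} = 1$, which is our added specialization): there $\D{H} = \text{Cent}(s, \D{G})^{0}$, any maximal torus $T_{H}$ of $H$ admits an admissible isomorphism $\iota: T_{H} \xrightarrow{\simeq} T$ to a maximal torus $T$ of $G$, and $\gamma$ is a norm of $\delta$ precisely when
\begin{displaymath}
\{\delta\}_{G(\bar{F})} = \{\iota(\gamma)\}_{G(\bar{F})},
\end{displaymath}
i.e., the stable conjugacy class of $\delta$ matches that of $\gamma$ under such an isomorphism; the twisted case replaces ordinary conjugacy by $\theta$-twisted conjugacy as above.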
Note that if $\delta \in G^{\theta}_{reg}(F)$ has a norm $\gamma \in H_{G-reg}(F)$, then $\omega_{G}$ is trivial on $G^{\theta}_{\delta}(F)$ (see Lemma 4.4.C, \cite{KottwitzShelstad:1999}). In this paper, we always normalize the transfer factor with respect to some fixed $\theta$-stable Whittaker datum $(B, \Lambda)$, and we also assume the Haar measure is preserved under any admissible embedding $T_{H} \xrightarrow{\simeq} T_{\theta}$, where $T_{H}$ is a maximal torus of $H$, $T$ is a $\theta$-stable maximal torus of $G$ and $T_{\theta} = T/(\theta - 1)T$. There is a canonical inclusion $(Z_{G})_{\theta} \hookrightarrow Z_{H}$. Let us denote the image of $Z_{F}$ in $Z_{H}(F)$ by $Z'_{F}$; then one can associate to it a quasicharacter $\chi'$ of $Z'_{F}$, depending only on $\chi$ and the twisted endoscopic embedding $\xi$. The Langlands-Shelstad-Kottwitz transfer map (or twisted endoscopic transfer) is a correspondence from $f \in \H(G, \chi)$ to $f^{H} \in \H(H, \chi')$ such that \begin{align} \label{eq: geometric transfer} SO_{H}(f^{H}, \gamma) = \sum_{\{\delta'\}_{G(F)}^{\theta} \thicksim_{st} \{\delta\}_{G(F)}^{\theta}} \Delta_{G, H}(\gamma, \delta') O_{G}^{\theta, \omega_{G}}(f, \delta') \end{align} where the sum is over $\theta$-twisted conjugacy classes $\{\delta'\}_{G(F)}^{\theta}$ in the $\theta$-twisted stable conjugacy class of $\delta$. In particular, it descends to a surjection \[ \mathcal{I}(G^{\theta, \omega_{G}}, \chi) \longrightarrow \mathcal{SI}(H, \chi')^{\text{Out}_{G}(H)}, \] where the action of $\text{Out}_{G}(H)$ on $\mathcal{SI}(H, \chi')$ is independent of the choice of $F$-splitting for $H$ (see Section~\ref{subsubsec: twisted endoscopy}). The existence of such a transfer was a long-standing problem. In the real case, it is now a theorem of Shelstad \cite{Shelstad:2012}. In the nonarchimedean case, the main obstacle was the Fundamental Lemma, which was finally resolved by Ng\^o \cite{Ngo:2010}.
The proof of the transfer conjecture in this case was then completed by Waldspurger \cite{Waldspurger:2008}. Now let us assume $G$, $\widetilde{G}$, $D$ and $\c$ are defined as in Section~\ref{subsubsec: notations}. Let $\theta$ be an automorphism of $\widetilde{G}$ preserving an $F$-splitting, and suppose $\c$ is $\theta$-invariant. Let $\lif{Z}_{F}$ be a closed subgroup of $Z_{\widetilde{G}}(F)$ such that $\lif{Z}_{F} \rightarrow (Z_{\widetilde{G}})_{\theta}(F)$ is injective and $D(F) / \c(\lif{Z}_{F})$ is finite (this is possible because we assume $\c$ is $\theta$-invariant). Let $Z_{F} = \lif{Z}_{F} \cap G(F)$. We choose Haar measures on $\lif{Z}_{F}$ and $Z_{F}$ such that the measure on $Z_{F} \backslash G(F)$ is the restriction of that on $\lif{Z}_{F} \backslash \widetilde{G}(F)$. Let $\lif{\chi}$ be a quasicharacter of $\lif{Z}_{F}$ and denote its restriction to $Z_{F}$ by $\chi$. Every $f \in \H(G, \chi)$ can be extended to ${\widetilde{G}}(F)$ through $\lif{Z}_{F}$ by $\lif{\chi}$, and the extension lies in $\H(\widetilde{G}, \lif{\chi})$, supported on $\lif{Z}_{F}G(F)$. Hence we get an inclusion map \begin{align} \label{map: inclusion of Hecke algebra} \xymatrix{\H(G, \chi) \, \ar@{^{(}->}[r] & \H(\widetilde{G}, \lif{\chi}) \\ f \ar@{|->}[r] & \lif{f} }, \end{align} and we can identify $\H(G, \chi)$ with its image. Let $\omega_{\widetilde{G}}$ be a quasicharacter of $\widetilde{G}(F)$ and $\omega_{G} = \omega_{\widetilde{G}}|_{G}$. For any strongly $\theta$-regular $\theta$-semisimple element $\delta$ of $G(F)$ such that $\omega_{G}$ is trivial on $G^{\theta}_{\delta}(F)$, we fix the Haar measure on $\widetilde{G}^{\theta}_{\delta}(F) \backslash \widetilde{G}(F)$, which determines the Haar measure on $G^{\theta}_{\delta}(F) \backslash G(F)$ by restriction.
Then for $f \in \H(G, \chi)$, and $\tilde{f} \in \H(\widetilde{G}, \lif{\chi})$ its extension, we have \[ SO_{\widetilde{G}}(\tilde{f}, \delta) = SO_{G}(f, \delta), \] and \[ O_{\widetilde{G}}^{\theta, \omega_{\widetilde{G}}}(\tilde{f}, \delta) = \sum_{\{\delta'\}_{G(F)}^{\theta} \thicksim_{\widetilde{G}(F)} \{\delta\}_{G(F)}^{\theta}} O_{G}^{\theta, \omega_{G}}(f, \delta')\omega_{\widetilde{G}}(g) \] where the sum is over $\theta$-twisted $G(F)$-conjugacy classes $\{\delta'\}_{G(F)}^{\theta}$ in the $\theta$-twisted $\widetilde{G}(F)$-conjugacy class $\{\delta\}_{\widetilde{G}(F)}^{\theta}$ with $\delta' = g^{-1} \delta \theta(g)$ for $g \in \widetilde{G}(F)$, and the Haar measure on $G^{\theta}_{\delta'}(F)$ is translated from that on $G^{\theta}_{\delta}(F)$ by conjugation. Because $\lif{Z}_{F}G(F)$ is $\theta$-conjugate invariant under $\widetilde{G}(F)$, the map \eqref{map: inclusion of Hecke algebra} induces a map from $\mathcal{I}(G^{\theta, \omega_{G}}, \chi)$ to $\mathcal{I}(\widetilde{G}^{\theta, \omega_{\widetilde{G}}}, \lif{\chi})$. Moreover $\lif{Z}_{F}G(\bar{F})$ is conjugate invariant under $\widetilde{G}(\bar{F})$, so it also induces a map from $\mathcal{SI}(G, \chi)$ to $\mathcal{SI}(\widetilde{G}, \lif{\chi})$. Suppose $\widetilde{G}' \in \End{}{\widetilde{G}^{\theta}, \omega_{\widetilde{G}}}$ and $G' \in \End{}{G^{\theta}, \omega_{G}}$ correspond to each other according to Proposition~\ref{prop: lifting endoscopic group}. The natural inclusion $(Z_{\widetilde{G}})_{\theta} \rightarrow Z_{\widetilde{G}'}$ induces an inclusion on $\lif{Z}_{F}$. So we can define $\lif{Z}'_{F} \subseteq Z_{\widetilde{G}'}(F)$ to be the image of $\lif{Z}_{F}$ and $Z'_{F} = \lif{Z}'_{F} \cap Z_{G'}(F)$. The twisted endoscopic transfer sends $\H(\widetilde{G}, \lif{\chi})$ to $\H(\widetilde{G}', \lif{\chi}')$, where $\lif{\chi}'$ is a quasicharacter of $\lif{Z}'_{F}$, depending only on $\lif{\chi}$ and the twisted endoscopic embedding.
Let $\chi'$ be the restriction of $\lif{\chi}'$ to $Z'_{F}$. Then we have \begin{align*} \xymatrix{\H(G', \chi') \, \ar@{^{(}->}[r] & \H(\widetilde{G}', \lif{\chi}') \\ f \ar@{|->}[r] & \lif{f} }. \end{align*} The following lemma shows these inclusion maps are compatible with the twisted endoscopic transfers. \begin{lemma}[\cite{Xu:2016}, Lemma 3.8] \label{lemma: twisted endoscopic transfer} Suppose $f \in \H(G, \chi)$; then the $(\theta, \omega_{\widetilde{G}})$-twisted endoscopic transfer of the extension $\tilde{f}$ of $f$ is equal to the extension of the $(\theta, \omega_{G})$-twisted endoscopic transfer $f^{G'}$ of $f$ as elements in $\mathcal{SI}({\widetilde{G}'}, \lif{\chi}')$, i.e., \begin{align} \tilde{f}^{\widetilde{G}'} = \lif{(f^{G'})}. \label{eq: twisted endoscopic transfer} \end{align} \end{lemma} \begin{remark} The inclusion map \eqref{map: inclusion of Hecke algebra} of Hecke algebras induces a restriction map of distributions in the opposite direction. Moreover, the restriction of an invariant distribution is again invariant, and the restriction of a stable distribution is again stable. In particular, the restriction of the character of a representation is compatible with the restriction of the representation in the usual sense. \end{remark} \begin{corollary} \label{cor: twisted endoscopic transfer} Suppose $S^{\widetilde{G}'}(\cdot)$ is a stable distribution on $\widetilde{G}'$; then the restriction of the pull-back of $S^{\widetilde{G}'}(\cdot)$ is equal to the pull-back of the restriction of $S^{\widetilde{G}'}(\cdot)$, i.e., \[ S^{\widetilde{G}'}(\lif{f}^{\widetilde{G}'}) = S^{\widetilde{G}'}(\lif{f^{G'}}). \] \end{corollary} \begin{proof} One just needs to substitute \eqref{eq: twisted endoscopic transfer} into $S^{\widetilde{G}'}(\cdot)$.
\end{proof} \section{Arthur's Classification Theory: tempered case} \label{sec: Arthur's theory} In this section we review Arthur's classification theory for the tempered representations of quasisplit symplectic groups and special even orthogonal groups (cf. \cite{Arthur:2013}). Throughout this section, unless specified otherwise, $G$ will be a quasisplit symplectic group or special even orthogonal group over $F$. We fix an outer automorphism $\theta_{0}$ of $G$ and a nontrivial automorphism $\theta_{N}$ of $GL(N)$, each preserving an $F$-splitting. When $G$ is symplectic, $\theta_{0}$ is trivial. When $G$ is special even orthogonal, we require $\theta_{0}$ to be the unique outer automorphism induced from conjugation by the full orthogonal group. We denote $\Sigma_{0} = <\theta_{0}>$. When $F$ is local, $\Sigma_{0}$ acts on $\Pkt{}(G(F))$ and we denote the set of $\Sigma_{0}$-orbits in $\Pkt{}(G(F))$ by $\cPkt{}(G(F))$. We denote by $\bar{\mathcal{H}}(G, \chi)$ the subspace of $\Sigma_{0}$-invariant functions in $\H(G, \chi)$, and we abbreviate $\H(GL(N))$ to $\H(N)$. We also denote the corresponding spaces of (stable) twisted orbital integrals by $\bar{\mathcal{I}}(G^{\theta}, \omega_{G})$ ($\bar{\mathcal{SI}}(G^{\theta}, \omega_{G})$) for $\theta \in \Sigma_{0}$ and $\omega_{G} \in \text{Hom}(G(F), \mathbb{C}^{\times})$. \subsection{Substitute Langlands parameter} \label{subsec: substitute Langlands parameter} First let $F$ be a global field; we define the sets of substitute global generic (or tempered) Langlands parameters as follows: \begin{align*} \Psm{N} & := \{ \text{ isomorphism classes of irreducible unitary cuspidal automorphic representations of } \, GL(N) \}, \\ \Psm{N^{\theta_{N}}} & := \{ \phi \in \Psm{N} : \phi = \phi^{\vee} \}, \\ \P{N^{\theta_{N}}} & := \{ \phi = l_{1}\phi_{1} \# \cdots \# l_{r}\phi_{r} : \phi = \phi^{\vee} , \phi_{i} \in \Psm{N_{i}}, \text{ with } \sum_{i=1}^{r} l_{i}N_{i} = N \}.
\end{align*} Here $\phi^{\vee}$ denotes the dual (or contragredient) of $\phi$ if $\phi \in \Psm{N}$, and \[ \phi^{\vee} := l_{1}\phi_{1}^{\vee} \# \cdots \# l_{r}\phi_{r}^{\vee} \] if $\phi \in \P{N^{\theta_{N}}}$. Note that $\P{N^{\theta_{N}}}$ is just a set of formal sums of irreducible unitary cuspidal automorphic representations, and to every parameter $\phi \in \P{N^{\theta_{N}}}$ we can assign a family of semisimple conjugacy classes in $GL(N, \mathbb{C})$ by \[ c(\phi_{v}) := \underbrace{c(\phi_{1, v}) \+ \cdots \+ c(\phi_{1, v})}_{l_{1}} \+ \cdots \+ \underbrace{c(\phi_{r, v}) \+ \cdots \+ c(\phi_{r, v})}_{l_{r}} \] for unramified places $v$ of $\phi$, where $c(\phi_{i, v})$ is the Satake parameter of the local component $\phi_{i, v}$. Inside $\Psm{N^{\theta_{N}}}$ there are two types of parameters: we say $\phi$ is of {\bf orthogonal type} if the symmetric square $L$-function $L(s, \phi, S^2)$ has a pole at $s = 1$; we say $\phi$ is of {\bf symplectic type} if the skew-symmetric square $L$-function $L(s, \phi, \wedge^2)$ has a pole at $s = 1$. In fact, every $\phi \in \Psm{N^{\theta_{N}}}$ is of exactly one of these two types, due to the fact that the Rankin-Selberg $L$-function \[ L(s, \phi \otimes \phi) = L(s, \phi, S^2)L(s, \phi, \wedge^2) \] has a simple pole at $s = 1$. Moreover, when $N$ is odd, $\phi$ is always of orthogonal type. The following theorem, proved in (\cite{Arthur:2013}, Theorem 1.4.1 and Theorem 1.5.3), shows how automorphic representations of $GL(N)$ are related to those of orthogonal groups and symplectic groups. If $\r$ is an automorphic representation of $G$, we denote by $c(\r) = \{c(\r_{v})\}$ the family of Satake parameters of $\r_{v}$ at the unramified places.
\begin{theorem} \label{thm: global functorial lifting} Suppose $\phi \in \Psm{N^{\theta_{N}}}$; then there is a unique class of elliptic endoscopic data $(G_{\phi}, s_{\phi}, \xi_{\phi})$ in $\End{ell}{N^{\theta_{N}}}$ such that \( c(\phi_{v}) = \xi_{\phi}(c(\r_{v})) \) at almost all places, for some discrete automorphic representation $\r$ of $G_{\phi}$. Moreover, if $\phi$ is of orthogonal type, $\D{G}_{\phi} = SO(2n+1, \mathbb{C})$ when $N = 2n+1$, or $SO(2n, \mathbb{C})$ when $N = 2n$; if $\phi$ is of symplectic type, $\D{G}_{\phi} = Sp(2n, \mathbb{C})$ with $N = 2n$. \end{theorem} For $\phi = l_{1}\phi_{1} \# \cdots \# l_{r}\phi_{r} \in \P{N^{\theta_{N}}}$, since $\phi = \phi^{\vee}$, one gets an involution on the indices by letting $\phi_{i^{\vee}} = \phi_{i}^{\vee}$, and consequently one has $l_{i} = l_{i^{\vee}}$. This gives a disjoint decomposition of these indices \[ I_{\phi} \sqcup J_{\phi} \sqcup J_{\phi}^{\vee}, \] where $I_{\phi}$ indexes the set of self-dual simple parameters. Let $K_{\phi} = I_{\phi} \sqcup J_{\phi}$, and let $I_{\phi, O}$ ($I_{\phi, S}$) index the self-dual simple parameters of orthogonal (symplectic) type. By Theorem~\ref{thm: global functorial lifting}, for each $\phi_{i}$ with $i \in I_{\phi}$, we have a twisted elliptic endoscopic group $G_{i}$ of $GL(N_{i})$ and we fix the twisted endoscopic embedding $\xi_{i} : \L{G_{i}} \longrightarrow \L{GL(N_{i})}$. For $\phi_{j}$ with $j \in J_{\phi}$, let us just take $G_{j}$ to be $GL(N_{j})$ and define an $L$-embedding $\xi_{j} : \L{G_{j}} \longrightarrow \L{GL(2N_{j})}$ by sending $g \rtimes w$ to $\text{diag}\{g, {}^tg^{-1}\} \times w$. Then Arthur defines a substitute global Langlands group by taking the fibre product \[ \mathcal{L}_{\phi} := \prod_{k \in K_{\phi}} \{ \L{G_{k}} \longrightarrow W_{F} \}, \] and he also defines an $L$-homomorphism $\phi^{\mathcal{E}} : \mathcal{L}_{\phi} \longrightarrow \L{GL(N)}$, where \[ \phi^{\mathcal{E}} := \bigoplus_{k \in K_{\phi}} l_{k}\xi_{k}.
\] By viewing $G$ as an element of $\End{ell}{N^{\theta_{N}}}$, we can define the set of substitute global parameters of $G$ as follows \[ \cP{G} := \{ \phi \in \P{N^{\theta_{N}}} : \phi^{\mathcal{E}} \text{ factors through } \L{G} \}. \] As a simple exercise, one can show that for $\phi = l_{1}\phi_{1} \# \cdots \# l_{r}\phi_{r} \in \P{N^{\theta_{N}}}$, $\phi$ is in $\cP{G}$ if and only if $l_{i}$ is even for all $i \in I_{\phi, S}$. Since $\text{Out}_{N}(G) \cong \Sigma_{0}$, the above set is really the analogue of the set of $\Sigma_{0}$-conjugacy classes of global Langlands parameters for $G$. For $\phi \in \cP{G}$ and $\Sigma \subseteq \Sigma_{0}$, one can define \begin{align*} S^{\Sigma}_{\phi} &= \text{Cent}(\Im \phi^{\mathcal{E}}, \D{G}^{\Sigma}), \\ \cS{\phi}^{\Sigma} &= S^{\Sigma}_{\phi} / Z(\D{G})^{\Gal{F}}, \\ \S{\phi}^{\Sigma} &= \cS{\phi}^{\Sigma} / \com[0]{\cS{\phi}}, \end{align*} and from here one can also define the following important subsets of $\cP{G}$ \begin{align*} \cPsm{G} &= \{ \phi \in \cP{G} : \cS{\phi}^{\Sigma_{0}} = 1\}, \\ \cPdt{G} &= \{ \phi \in \cP{G} : |\cS{\phi}| < \infty \}, \\ \cP{G^{\theta}} & = \{ \phi \in \cP{G}: S_{\phi}^{\theta} \neq \emptyset\}, \\ \cPel{G^{\theta}} &= \{ \phi \in \cP{G^{\theta}} : |\cS{\phi, s}^{0}| < \infty \, \text{for some semisimple} \, s \in \cS{\phi}^{\theta} \}, \end{align*} where $\theta \in \Sigma_{0}$. In fact, one can even compute $S_{\phi}$ very explicitly (see \cite[(1.4.8)]{Arthur:2013}) \begin{align} \label{formula: centralizer} S_{\phi} = (\prod_{i \in I_{\phi, O}} O(l_{i}, \mathbb{C}))_{\phi}^{+} \times (\prod_{i \in I_{\phi, S}} Sp(l_{i}, \mathbb{C})) \times (\prod_{j \in J_{\phi}} GL(l_{j}, \mathbb{C})), \end{align} where $(\prod_{i \in I_{\phi, O}} O(l_{i}, \mathbb{C}))_{\phi}^{+}$ is the kernel of the character \[ \varepsilon_{\phi}^{+} : \prod_{i} g_{i} \longrightarrow \prod_{i} (\det \, g_{i})^{N_{i}}, \,\,\,\,\, g_{i} \in O(l_{i}, \mathbb{C}), i \in I_{\phi, O}.
\] Note $G$ is symplectic or special even orthogonal here, so we have $I_{\phi, O} = I^{+}_{\phi}$ and $I_{\phi, S} = I^{-}_{\phi}$ in Arthur's original formula for $S_{\phi}$. When $G$ is special even orthogonal, \begin{align} \label{formula: plus centralizer} S^{\Sigma_{0}}_{\phi} = (\prod_{i \in I_{\phi, O}} O(l_{i}, \mathbb{C})) \times (\prod_{i \in I_{\phi, S}} Sp(l_{i}, \mathbb{C})) \times (\prod_{j \in J_{\phi}} GL(l_{j}, \mathbb{C})). \end{align} As a consequence, one has the following description of those subsets of $\cP{G}$. \begin{lemma} \label{lemma: discrete parameter} \begin{enumerate} \item $\cPsm{G} = \Psm{N} \cap \cP{G}$. \item Suppose $\phi \in \cP{G}$; then $\phi$ is in $\cPdt{G}$ if and only if $K_{\phi} = I_{\phi, O}$ and $l_{i} =1$ for all $i \in K_{\phi}$. \item Suppose $\phi$ is in $\cPel{G^{\theta}}$ for $\theta \in \Sigma_{0}$; then $K_{\phi} = I_{\phi, O}$ and $l_{i} \leqslant 2$ for all $i \in K_{\phi}$. \item Suppose $G$ is special even orthogonal and $\phi \in \cP{G}$; then $\phi$ is in $\cP{G^{\theta_{0}}}$ if and only if there exists $i \in I_{\phi, O}$ such that $N_{i}$ is odd. \end{enumerate} \end{lemma} The proof is a direct application of formulas~\eqref{formula: centralizer} and \eqref{formula: plus centralizer}. Now let $F$ be a local field; we define the substitute local generic (or tempered) Langlands parameters similarly, as follows: \begin{align*} \ePsm{N} & := \{ \text{ isomorphism classes of irreducible essentially discrete series representations of } \, GL(N) \}, \\ \ePsm{N^{\theta_{N}}} & := \{ \phi^{\mathcal{E}} \in \ePsm{N} : \phi^{\mathcal{E}} = ( \phi^{\mathcal{E}} ) ^{\vee} \}, \\ \ePbd{N^{\theta_{N}}} & := \{ \phi^{\mathcal{E}} = l_{1}\phi^{\mathcal{E}}_{1} \+ \cdots \+ l_{r}\phi^{\mathcal{E}}_{r} : \phi^{\mathcal{E}} = ( \phi^{\mathcal{E}} )^{\vee} , \phi^{\mathcal{E}}_{i} \in \ePsm{N_{i}} \text{ with } \sum_{i=1}^{r} l_{i}N_{i} = N \}.
\end{align*} Suppose $\phi^{\mathcal{E}} \in \ePsm{N^{\theta_{N}}}$. We say $\phi^{\mathcal{E}}$ is of {\bf orthogonal type} if the local symmetric square $L$-function $L(s, \phi^{\mathcal{E}}, S^2)$ has a pole at $s = 0$; we say $\phi^{\mathcal{E}}$ is of {\bf symplectic type} if the local skew-symmetric square $L$-function $L(s, \phi^{\mathcal{E}}, \wedge^2)$ has a pole at $s = 0$. As in the global case, every $\phi^{\mathcal{E}} \in \ePsm{N^{\theta_{N}}}$ is either of orthogonal type or of symplectic type. We would like to state a local version of Theorem~\ref{thm: global functorial lifting}, which is proved in \cite[Theorem 6.1.1 and Corollary 6.8.1]{Arthur:2013}. For $\phi^{\mathcal{E}} \in \ePsm{N^{\theta_{N}}}$, let $\r_{\phi^{\mathcal{E}}}$ be the self-dual essentially discrete series representation of $GL(N)$ defined by $\phi^{\mathcal{E}}$. We write \begin{align} \label{eq: twisted character of GL(N)} f_{N^{\theta_{N}}}(\phi^{\mathcal{E}}) := f_{GL(N)^{\theta_{N}}}(\r_{\phi^{\mathcal{E}}}) \,\,\,\,\,\,\,\,\,\,\, f \in \H(N), \end{align} with respect to some intertwining operator $A_{\r_{\phi^{\mathcal{E}}}}(\theta_{N})$. \begin{theorem} \label{thm: local functorial lifting} Suppose $\phi^{\mathcal{E}} \in \ePsm{N^{\theta_{N}}}$; then there is a unique class of elliptic endoscopic data $(G_{\phi^{\mathcal{E}}}, s_{\phi^{\mathcal{E}}}, \xi_{\phi^{\mathcal{E}}})$ in $\End{ell}{N^{\theta_{N}}}$ such that \[ f_{N^{\theta_{N}}}(\phi^{\mathcal{E}}) = f^{G_{\phi^{\mathcal{E}}}}(\phi^{\mathcal{E}}), \text{ for all } f \in \H(N) \] for some stable distribution $f(\phi^{\mathcal{E}})$ on $G_{\phi^{\mathcal{E}}}$, where $f^{G_{\phi^{\mathcal{E}}}}$ is the twisted endoscopic transfer of $f$. Moreover, if $\phi^{\mathcal{E}}$ is of orthogonal type, $\D{G}_{\phi^{\mathcal{E}}} = SO(2n+1, \mathbb{C})$ when $N = 2n+1$, or $SO(2n, \mathbb{C})$ when $N = 2n$; if $\phi^{\mathcal{E}}$ is of symplectic type, $\D{G}_{\phi^{\mathcal{E}}} = Sp(2n, \mathbb{C})$ with $N = 2n$.
\end{theorem} Note that $\text{Out}_{N}(G) \cong \Sigma_{0}$, so the twisted endoscopic transfer $f^{G_{\phi^{\mathcal{E}}}}$ lies in $\bar{\mathcal{H}}(G)$. As in the global case, one can define the substitute local Langlands group $\mathcal{L}_{\phi^{\mathcal{E}}}$ and the substitute local parameter $\phi^{\mathcal{E}}$. One can also define the set $\cePbd{G}$ of substitute parameters for $G$, various centralizer groups of the parameter $\phi^{\mathcal{E}}$ in $\D{G}$, and various subsets in $\cePbd{G}$. Moreover, formulas~\eqref{formula: centralizer} and \eqref{formula: plus centralizer} and Lemma~\ref{lemma: discrete parameter} still hold in the local case. The link between these substitute local parameters and the genuine local Langlands parameters is through the local Langlands correspondence for $GL(N)$, proved by Harris-Taylor \cite{HarrisTaylor:2001}, Henniart \cite{Henniart:2000} and Scholze \cite{Scholze:2013}. The local Langlands correspondence for $GL(N)$ gives a bijection between $\ePsm{N}$ and the set $\Pdt{N}$ of equivalence classes of $N$-dimensional irreducible unitary representations of $L_{F}$, which also induces a bijection for the self-dual ones. Later in the paper, we will identify them and use the notation $\Psm{N}$ as in the global case. The following theorem, proved in (\cite{Arthur:2013}, Corollary 6.8.1), shows that the substitute local parameters of $G$ correspond to the genuine local Langlands parameters of $G$ under this bijection. \begin{theorem} \label{thm: parameter identification} Suppose $\phi^{\mathcal{E}} \in \ePsm{N^{\theta_{N}}}$; then $\phi^{\mathcal{E}}$ is in $\cePsm{G}$ if and only if its corresponding Langlands parameter $\phi$ factors through $\L{G}$. \end{theorem} Since the elements in $\ePbd{N^{\theta_{N}}}$ correspond to self-dual tempered representations of $GL(N)$ by parabolic induction, the local Langlands correspondence also gives a bijection between $\Pbd{N^{\theta_{N}}}$ and $\ePbd{N^{\theta_{N}}}$.
We then have the following corollary. \begin{corollary} \label{cor: parameter identification} \begin{enumerate} \item Suppose $\phi^{\mathcal{E}} \in \ePbd{N^{\theta_{N}}}$; then $\phi^{\mathcal{E}}$ is in $\cePbd{G}$ if and only if its corresponding Langlands parameter $\phi$ factors through $\L{G}$.\\ \item Suppose $\phi^{\mathcal{E}} \in \cePbd{G}$ corresponds to $\phi \in \cPbd{G}$; then $S_{\phi^{\mathcal{E}}} \cong S_{\underline{\phi}}$ for any representative $\underline{\phi}$ of $\phi$. \end{enumerate} \end{corollary} For the proof, one just needs to notice that for $\phi \in \cPbd{G}$ there is a decomposition through the twisted endoscopic embedding to $GL(N, \mathbb{C})$ \[ \phi = l_{1}\phi_{1} \+ \cdots \+ l_{r} \phi_{r}, \] where $\phi_{i} \in \Pdt{N_{i}}$ is irreducible. As a consequence of Theorem~\ref{thm: parameter identification} and Corollary~\ref{cor: parameter identification}, one can identify $\cePbd{G}$ with $\cPbd{G}$ through the twisted endoscopic embedding $\xi: \L{G} \rightarrow \L{GL(N)}$. We also denote $S_{\phi^{\mathcal{E}}}$ by $S_{\phi}$. \subsection{Local theory} \label{subsec: local theory} Now we can state the main local result of Arthur's theory (\cite{Arthur:2013}, Theorem 1.5.1 and Theorem 2.2.1) for quasisplit symplectic groups and special even orthogonal groups in the tempered case. Let $F$ be local. We fix a $\theta_{N}$-stable Whittaker datum $(B_{N}, \Lambda)$ for $GL(N)$. We also fix the twisted endoscopic embedding $\xi: \L{G} \rightarrow \L{GL(N)}$. \begin{theorem} \label{thm: LLC} For every $\phi \in \cPbd{G}$, one can associate to it a finite subset $\cPkt{\phi}$ of $\cPkt{temp}(G(F))$ satisfying the following properties: \begin{enumerate} \item The distribution \begin{align} \label{eq: stable distribution} f(\phi) := \sum_{[\r] \in \cPkt{\phi}} f_{G}(\r), \, \, \, f \in \bar{\mathcal{H}}(G) \end{align} is stable.
\item If we normalize the intertwining operator $A_{\r_{\phi}}(\theta_{N})$ such that it preserves the Whittaker functional on $\r_{\phi}$, then \begin{align} \label{eq: twisted character identity GL(N)} f_{N^{\theta_{N}}}(\phi) = f^{G}(\phi) \end{align} for $f \in \H(N)$ and the twisted endoscopic transfer $f^{G} \in \bar{\mathcal{H}}(G)$. \item There is a disjoint decomposition \[ \cPkt{temp}(G(F)) = \bigsqcup_{\phi \in \cPbd{G}} \cPkt{\phi}. \] \end{enumerate} \end{theorem} Since the transfer map $\mathcal{I}(N^{\theta_{N}}) \rightarrow \mathcal{SI}(G)^{\text{Out}_{N}(G)}$ is surjective (see Section~\ref{subsec: the transfer map}), $\phi$ determines the stable distribution \eqref{eq: stable distribution} on $G(F)$ through \eqref{eq: twisted character identity GL(N)}. In this way, $\phi$ determines the L-packet $\cPkt{\phi}$. If $G$ is a product of symplectic groups and special even orthogonal groups, we define a group of automorphisms of $G$ by taking the product of $\Sigma_{0}$ on each factor, and we denote this group again by $\Sigma_{0}$. We denote the set of $\Sigma_{0}$-orbits in $\Pkt{temp}(G(F))$ by $\cPkt{temp}(G(F))$ and the set of $\Sigma_{0}$-orbits in $\Pbd{G}$ by $\cPbd{G}$. Let $\bar{\mathcal{H}}(G)$ be the space of $\Sigma_{0}$-invariant functions in $\H(G)$. Then parts (1) and (3) of this theorem can also be generalized to this case; in particular, the L-packets of $G$ are formed by taking tensor products of those of each factor. If $G' \in \End{}{G^{\theta}}$ for $\theta \in \Sigma_{0}$, then $G' \cong M_{l} \times G'_{-}$, where $M_{l}$ is a product of general linear groups, and $G'_{-}$ is again a product of symplectic groups and special even orthogonal groups. We can extend the action of $\Sigma_{0}$ to $G'$ by letting it act trivially on $M_{l}$. Then we can define $\cPbd{G'}$ and $\cPkt{temp}(G'(F))$ similarly. Parts (1) and (3) of this theorem can again be extended to this case.
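As an illustration of how these results interact with the centralizer formula~\eqref{formula: centralizer} (a sketch, under our added assumption that the simple factors are pairwise distinct): let $F$ be nonarchimedean, $G = Sp(2n)$, so $\D{G} = SO(2n+1, \mathbb{C})$ has trivial center, and let $\phi = \phi_{1} \# \cdots \# \phi_{r} \in \cPdt{G}$ with each $\phi_{i} \in \Psm{N_{i}}$ self-dual of orthogonal type, $l_{i} = 1$ and $\sum_{i} N_{i} = 2n+1$. Then
\begin{displaymath}
S_{\phi} = \Big( \prod_{i=1}^{r} O(1, \mathbb{C}) \Big)_{\phi}^{+} = \ker \Big( (g_{i})_{i} \mapsto \prod_{i=1}^{r} g_{i}^{N_{i}} \Big) \cong (\mathbb{Z}/2\mathbb{Z})^{r-1},
\end{displaymath}
since $\sum_{i} N_{i}$ being odd forces some $N_{i}$ to be odd, so $\varepsilon_{\phi}^{+}$ is surjective. As $Z(\D{G})^{\Gal{F}} = 1$ and $S_{\phi}$ is finite, $\S{\phi} = \cS{\phi} \cong (\mathbb{Z}/2\mathbb{Z})^{r-1}$, and the pairing of Theorem~\ref{thm: character identity} below then produces a tempered packet $\cPkt{\phi}$ with exactly $2^{r-1}$ elements.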
\begin{theorem} \label{thm: character identity} \begin{enumerate} \item For $\phi \in \cPbd{G}$, there is a canonical pairing between $\cPkt{\phi}$ and $\S{\phi}$, which induces an inclusion \begin{align} \label{eq: canonical pairing for G} [\r] \longrightarrow <\cdot, \r>, \,\,\,\, [\r] \in \cPkt{\phi}, \end{align} from $\cPkt{\phi}$ into the group $\D{\S{\phi}}$ of characters on $\S{\phi}$, such that $<\cdot, \r> = 1$ if $G$ and $\r$ are unramified. It becomes a bijection when $F$ is nonarchimedean. \item Suppose $s$ is a semisimple element in $\cS{\phi}$ and $(G', \phi') \longrightarrow (\phi, s)$ with $G' \in \End{}{G}$ and $\phi' \in \cPbd{G'}$. The packet $\cPkt{\phi'}$ can be defined by the generalization of the previous theorem. If $x$ is the image of $s$ in $\S{\phi}$, then \begin{align} \label{eq: character relation for G} f^{G'}(\phi') = \sum_{[\r] \in \cPkt{\phi}} <x, \r> f_{G}(\r) , \,\,\,\, f \in \bar{\mathcal{H}}(G). \end{align} \end{enumerate} \end{theorem} When $G$ is special even orthogonal, one can further characterize the $\theta_{0}$-stable tempered representations. This is a theorem proved in (\cite{Arthur:2013}, Theorem 2.2.3). \begin{theorem} \label{thm: twisted LLC} Suppose $G$ is a special even orthogonal group and $\phi \in \cPbd{\com{G}}$. \begin{enumerate} \item For any $[\r] \in \cPkt{\phi}$, $\r$ is a $\theta_{0}$-stable representation of $G(F)$ and hence has an extension $\r^{+}$ to $G^{+}(F) = G(F) \rtimes <\theta_{0}>$.
\\ \item Suppose $s$ is a semisimple element in $\com{\cS{\phi}}$ and $(G', \phi') \longrightarrow (\phi, s)$ with $G' \in \End{}{\com{G}}$ and $\phi' \in \cPbd{G'}$. Then \begin{align} \label{eq: theta twisted character relation for G} f'(\phi') = \sum_{[\r] \in \cPkt{\phi}} <x, \r^{+}>f_{G^{\theta_{0}}}(\r), \,\,\,\, f \in \bar{\mathcal{H}}(G), \end{align} where $x$ is the image of $s$ in $\com{\S{\phi}}$, $<\cdot, \r^{+}>$ is an extension of the character $<\cdot, \r>$ to $\S{\phi}^{+} = \S{\phi} \rtimes <\theta_{0}>$, and the intertwining operator $A_{\r}(\theta_{0}) = \r^{+}(\theta_{0})$. \end{enumerate} \end{theorem} \begin{remark} \label{rk: twisted LLC} In the $\theta_{0}$-twisted character relation \eqref{eq: theta twisted character relation for G}, neither the extension $\r^{+}$ of the representation nor the extension $<\cdot, \r^{+}>$ of the character is uniquely determined, but the product $<\cdot, \r^{+}>f_{G^{\theta_{0}}}(\r)$ is determined and depends only on $\r$. Moreover, \eqref{eq: twisted character identity GL(N)} is the analogue of \eqref{eq: theta twisted character relation for G} for general linear groups, where we have fixed the extension of $\r_{\phi}$ using the Whittaker datum and taken the extended character $<\cdot, \r^{+}>$ to be trivial. It is not hard to see how to generalize both Theorem~\ref{thm: LLC} and Theorem~\ref{thm: twisted LLC} to products of symplectic groups and special even orthogonal groups. \end{remark} Since it is not known whether all the local constituents of a unitary cuspidal automorphic representation of $GL(N)$ are tempered (this is the generalized Ramanujan conjecture), one has to deal with a more general set of parameters $\uP{N}$, which is defined as follows. Let $\nu^{a}$ denote the character $| \cdot |_{F}^{a}$ of $W_{F}$ for $a \in \mathbb{R}$.
Then, \begin{align*} \uP{N} := \{& \phi = \phi_{1} \+ \cdots \+ \phi_{r} \+ ( \nu^{a_{1}}\phi_{r+1} \+ \nu^{-a_{1}}\phi_{r+1} ) \+ \cdots \+ (\nu^{a_{s}}\phi_{r+s} \+ \nu^{-a_{s}}\phi_{r+s}) : \\ & \phi_{i} \in \Psm{N_{i}} \text{ for } 1 \leqslant i \leqslant r+s \text{ and } 0 < a_{j} < 1/2 \text{ for } 1 \leqslant j \leqslant s \}. \end{align*} From the classification of the unitary dual of $GL(N)$ (cf. \cite{Tadic:2009}, \cite{Vogan:1986} in the archimedean case and \cite{Tadic:1986} in the nonarchimedean case), we know that the irreducible admissible representation $\r_{\phi}$ associated with any $\phi \in \uP{N}$ is unitary, and we have the following fact. \begin{proposition} \label{prop: local constituents of cuspidal representations} If $F$ is global and $\phi \in \Psm{N}$, then $\phi_{v} \in \uP{N_{v}}$. \end{proposition} Correspondingly, we can define \[ \cuP{G} : = \cP{G} \cap \uP{N}. \] Theorem~\ref{thm: LLC}, Theorem~\ref{thm: character identity} and Theorem~\ref{thm: twisted LLC} can be extended to the case $\phi \in \cuP{G}$, except that the constituents of $\cPkt{\phi}$ may be non-tempered. In fact, for any $\phi \in \cuP{G^{\theta}}$ with $\theta \in \Sigma_{0}$, $\phi$ can be regarded as $\phi_{M, \lambda} := \phi_{M} \otimes (\lambda \circ |\cdot|_{F})$ for some $\theta$-stable Levi subgroup $M$ (which also admits a $\theta$-stable parabolic subgroup $P \supseteq M$), where $\phi_{M} \in \cPbd{M^{\theta}}$ and $\lambda \in \mathfrak{a}^{*}_{M}$ lies in the open chamber determined by $P$. Let $\cPkt{\phi_{M, \lambda}} := \cPkt{\phi_{M}} \otimes e^{<H_{M}(\cdot), \lambda>}$. Then one can simply define $\cPkt{\phi}$ to be the set of irreducible constituents of the parabolic induction $\mathcal{I}_{P}( \cPkt{\phi_{M, \lambda}} )$. Since the $\theta$-twisted endoscopic transfer is compatible with this parabolic induction and $\S{\phi_{M}} \cong \S{\phi}$, it is enough to know the following proposition.
\begin{proposition} \label{prop: irreducibility of non-unitary induced representation} Suppose $F$ is local, $\phi \in \cuP{G}$, and $\phi$ can be regarded as $\phi_{M, \lambda}$, where $\phi_{M} \in \cPbd{M}$ and $\lambda \in \mathfrak{a}^{*}_{M}$ lies in some open chamber determined by $P \supseteq M$. Then for any $[\r_{M}] \in \cPkt{\phi_{M}}$, the induced representation $\mathcal{I}_{P} (\r_{M, \lambda}) $ is irreducible. \end{proposition} Proposition~\ref{prop: local constituents of cuspidal representations} and Proposition~\ref{prop: irreducibility of non-unitary induced representation} are well known to experts, but for the convenience of the reader we will give their proofs in Appendix~\ref{sec: irreducibility}. \subsection{Global theory} \label{subsec: global theory} Now let us assume $F$ is global, and fix the twisted endoscopic embedding $\xi: \L{G} \rightarrow \L{GL(N)}$. The global parameters and local parameters are related by the following theorem. \begin{theorem} \label{thm: local-global compatibility} Suppose $\phi \in \cPsm{G}$. Then $\phi_{v}$ factors through $\L{G_{v}}$ for all places $v$, i.e. $\phi_{v} \in \cuP{G_{v}}$. \end{theorem} This theorem is proved in (\cite{Arthur:2013}, Theorem 1.4.2). So for $\phi \in \cP{G}$, one has a commutative diagram \[ \xymatrix{L_{F_{v}} \ar[r]^{\phi_{v}} \ar[d] & \L{G_{v}} \ar[d] \\ \mathcal{L}_{\phi} \ar[r]^{\phi^{\mathcal{E}}} & \L{G},} \] where $L_{F_{v}} \rightarrow \mathcal{L}_{\phi}$ is defined by $\phi_{v}$. This gives rise to an inclusion $S_{\phi} \hookrightarrow S_{\phi_{v}}$ for any place $v$, which induces a homomorphism $\S{\phi} \rightarrow \S{\phi_{v}}$. One can define the global $L$-packet by taking the restricted tensor product \[ \cPkt{\phi} := \sideset{}{'} \bigotimes_{v} \cPkt{\phi_{v}} \] and define the global pairing by \[ <x, \r> := \prod_{v} <x_{v}, \r_{v}>. \] Note that $<\cdot, \r_{v}> = 1$ for almost all places $v$ by Theorem~\ref{thm: LLC}, so this product is well defined.
The main global result of Arthur's theory is a description of the discrete spectrum of automorphic representations of $G$. Here we only state it for those discrete automorphic representations parametrized by $\cPdt{G}$, i.e. for $\phi \in \cPdt{G}$ we want to describe $L^2_{disc, \phi} (G(F) \backslash G(\mathbb{A}_{F}))$, which consists of the discrete automorphic representations $\r$ whose Satake parameters satisfy $\xi(c(\r_{v})) = c(\phi_{v})$ for almost all places. Let $\bar{\mathcal{H}}(G) = \otimes'_{v} \bar{\mathcal{H}}(G_{v})$. \begin{theorem} \label{thm: discrete spectrum} Suppose $\phi \in \cPdt{G}$. Then there is a decomposition as $\bar{\mathcal{H}}(G)$-modules \[ L^2_{disc, \phi} (G(F) \backslash G(\mathbb{A}_{F}))= m_{\phi} \sum_{\substack{[\r] \in \cPkt{\phi} \\ <\cdot, \r> = 1}} \r, \] where $m_{\phi} =1 \text{ or } 2$, and $m_{\phi} = 2$ only when $G$ is special even orthogonal and $\phi \notin \cP{\com{G}}$. Moreover, \[ L^2_{disc, \phi} (G(F) \backslash G(\mathbb{A}_{F})) = 0 \] for $\phi \in \cP{G} - \cPdt{G}$. \end{theorem} \begin{remark} \label{rk: discrete spectrum} This theorem is a special case of (\cite{Arthur:2013}, Theorem 1.5.2). By Arthur's complete description of the discrete spectrum for orthogonal and symplectic groups, one can see that only the parameters in $\cPdt{G}$ contribute to the discrete spectrum of $G$. It is not hard to extend this to products of symplectic and special even orthogonal groups. In fact, if $G = G_{1} \times G_{2} \times \cdots \times G_{q}$, then we can define $\cP{G}$ to consist of the parameters $\phi := \phi_{1} \times \phi_{2} \times \cdots \times \phi_{q}$ such that $\phi_{i} \in \cP{G_{i}}$ for $1 \leqslant i \leqslant q$. Moreover, we can define $\mathcal{L}_{\phi} := \prod_{i=1}^{q}\mathcal{L}_{\phi_{i}}$, so that $S_{\phi} = \prod_{i=1}^{q} S_{\phi_{i}}$, and we set $m_{\phi} = \prod_{i=1}^{q} m_{\phi_{i}}$.
\end{remark} For $\phi \in \cP{G}$ and any subgroup $\Sigma \subseteq \Sigma_{0}$, let $\mathcal{L}_{\phi}$ act on $\D{D}$, $\D{\widetilde{G}}^{\Sigma}$ and $\D{G}^{\Sigma}$ by conjugation through $\phi^{\mathcal{E}}$. We denote the corresponding group cohomology by $H^{*}_{\phi^{\mathcal{E}}}(\mathcal{L}_{\phi}, \cdot)$. Note that $H^{0}_{\phi^{\mathcal{E}}}(\mathcal{L}_{\phi}, \D{D}) = \D{D}^{\Gal{}}$, $H^{0}_{\phi^{\mathcal{E}}}(\mathcal{L}_{\phi}, \D{G}^{\Sigma}) = S^{\Sigma}_{\phi}$ and $H^{1}_{\phi^{\mathcal{E}}}(\mathcal{L}_{\phi}, \D{D}) = H^{1}(W_{F}, \D{D})$. We define $S_{\tilde{\phi}}^{\Sigma} := H^{0}_{\phi^{\mathcal{E}}}(\mathcal{L}_{\phi}, \D{\widetilde{G}}^{\Sigma})$. Then we have the following diagram: \[ \xymatrix{ 1 \ar[r] & S^{\Sigma}_{\tilde{\phi}} / \D{D}^{\Gal{}} \ar[r] \ar@{^{(}->}[d] & S^{\Sigma}_{\phi} \ar[r]^{\delta \quad \quad} \ar@{^{(}->}[d] & H^{1}(W_{F}, \D{D}) \ar[d] \\ 1 \ar[r] & S^{\Sigma}_{\tilde{\phi}_{v}} / \D{D}_{v}^{\Gal{v}} \ar[r] & S^{\Sigma}_{\phi_{v}} \ar[r]^{\delta_{v} \quad \quad} & H^{1}(W_{F_{v}}, \D{D}_{v}).} \] Then Lemma~\ref{lemma: centralizer} is still valid, and we again have the following exact sequence as in Section~\ref{subsec: Langlands parameters}: \begin{align*} \xymatrix{1 \ar[r] & \S{\tilde{\phi}}^{\Sigma} \ar[r]^{\iota} & \S{\phi}^{\Sigma} \ar[r]^{\a \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}).} \end{align*} \section{Coarse L-packet} \label{sec: coarse L-packet} \subsection{Statement of main local theorem} \label{subsec: statement of main local theorem} Now we assume $\widetilde{G}$ is of type \eqref{eq: similitude}, and $\c$ is the generalized similitude character. In this case $G$ is a product of symplectic groups and special even orthogonal groups. We also assume $\theta \in \Sigma_{0}$.
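To fix ideas, let us recall what these objects are in the basic similitude example; the following facts are standard and are recorded here only for orientation. For $\widetilde{G} = GSp(2n)$ we have $G = Sp(2n)$ and $D = \widetilde{G}/G \cong GL(1)$ via the similitude character $\c$, while on the dual side \[ \D{\widetilde{G}} = GSpin(2n+1, \mathbb{C}), \,\,\,\,\,\, \D{G} = Sp(2n, \mathbb{C}), \,\,\,\,\,\, \D{D} = \mathbb{C}^{\times}. \] In this case $\Sigma_{0}$ is trivial, and for $x \in \S{\phi}$ the character $\a(x)$ of $\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(F)G(\mathbb{A}_{F})$ appearing in the exact sequence above measures the failure of $x$ to come from $\S{\tilde{\phi}}$.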
\begin{lemma}[\cite{Xu:2016}, Lemma 3.13] \label{lemma: twisting character} Suppose $\phi \in \cPbd{G}$ and $[\r] \in \cPkt{\phi}$. Then \begin{align} <x, (\r^{+})^{g}> = \omega_{x}(g)<x, \r^{+}> \label{eq: theta twisting character} \end{align} for any $g \in \widetilde{G}(F)$ and $x \in \S{\phi}^{\theta}$, where $\omega_{x} = \a(x)$ and $\r^{+}$ is an extension of $\r$ to $G^{+}(F) = G(F) \rtimes <\theta>$. \end{lemma} \begin{corollary}[\cite{Xu:2016}, Proposition 6.28] \label{cor: theta twisting character} Suppose $\phi \in \cPbd{G}$ and $[\r] \in \cPkt{\phi}$. If $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(F)$ whose restriction to $G(F)$ contains $\r$, then $\tilde{\pi}^{\theta} \cong \tilde{\pi} \otimes \omega$ if and only if $\omega \in \a(\S{\phi}^{\theta})$. In particular, \[ X(\tilde{\pi}) = \a(\S{\phi}). \] \end{corollary} \begin{remark} In view of Proposition~\ref{prop: irreducibility of non-unitary induced representation}, Lemma~\ref{lemma: twisting character} and Corollary~\ref{cor: theta twisting character} also hold for parameters in $\cuP{G}$, and the proofs are the same. \end{remark} For $\phi \in \cPbd{G}$, let us fix a character $\lif{\zeta}$ of $Z_{\widetilde{G}}(F)$ such that $\lif{\zeta}|_{Z_{G}(F)}$ is the central character of $\cPkt{\phi}$. Then we define $\clPkt{\phi, \lif{\zeta}}$ to be the subset of representations in $\cPkt{temp}(\widetilde{G})$ with central character $\lif{\zeta}$ whose restrictions to $G(F)$ have irreducible constituents contained in $\cPkt{\phi}$. Let \( X = \text{Hom}(\widetilde{G}(F)/Z_{\widetilde{G}}(F)G(F), \mathbb{C}^{\times}), \) so that $X$ acts on $\clPkt{\phi, \lif{\zeta}}$ by twisting. We call $\clPkt{\phi, \lif{\zeta}}$ a coarse $L$-packet of $\widetilde{G}$, and its structure is described in the following proposition. \begin{proposition}[\cite{Xu:2016}, Proposition 6.29] \label{prop: coarse L-packet} Suppose $\phi \in \cPbd{G}$ and $\lif{\zeta}$ is chosen as above.
\begin{enumerate} \item The orbits in $\cPkt{\phi}$ under the conjugation action of $\widetilde{G}(F)$ all have size $|\S{\phi} / \S{\tilde{\phi}}|$. If $F$ is nonarchimedean, there are exactly $|\S{\tilde{\phi}}|$ orbits. \\ \item There is a natural fibration \[ \xymatrix{ X / \a(\S{\phi}^{\Sigma_{0}}) \ar[r] & \clPkt{\phi, \lif{\zeta}} \ar[r]^{Res \quad} & \cPkt{\phi} / \widetilde{G}(F).} \] \item There is a pairing \[ \tilde{\pi} \longrightarrow <\cdot, \tilde{\pi}> \] from $\clPkt{\phi, \lif{\zeta}} / X$ into $\D{\S{\tilde{\phi}}}$. It is uniquely characterized by \[ <x, \tilde{\pi}> = <\iota(x), \r> , \] where $\iota: \S{\tilde{\phi}} \hookrightarrow \S{\phi}$ and $\r$ is any irreducible representation of $G(F)$ in the restriction of $\tilde{\pi}$. If $\widetilde{G}$ and $\tilde{\pi}$ are unramified, then $<\cdot, \tilde{\pi}> = 1$. Moreover, this mapping from $\clPkt{\phi, \lif{\zeta}} / X$ to $\D{\S{\tilde{\phi}}}$ is injective, and when $F$ is nonarchimedean it is in fact a bijection. \end{enumerate} \end{proposition} This proposition is also true for parameters in $\cuP{G}$. Now it is natural to ask the following question. \begin{question} \label{que: refined L-packet} For any lift $\tilde{\phi}$ of $\phi \in \cPbd{G}$, can one assign a packet $\cPkt{\tilde{\phi}}$ of representations of $\widetilde{G}(F)$ which gives a section of $\text{Res}: \clPkt{\phi, \lif{\zeta}} \rightarrow \cPkt{\phi} / \widetilde{G}(F)$ and also gives a stable distribution? \end{question} The answer to this question is formulated in the following theorem, which is our main local result. \begin{theorem} \label{thm: refined L-packet} Suppose $\phi \in \cPbd{G}$, and $\lif{\zeta}$ is a character of $Z_{\widetilde{G}}(F)$ whose restriction to $Z_{G}(F)$ is the central character of $\cPkt{\phi}$. Let $\lif{\chi} = \lif{\zeta}|_{\lif{Z}_{F}}$.
Then there exists a subset $\cPkt{\tilde{\phi}}$ of $\clPkt{\phi, \lif{\zeta}}$, unique up to twisting by $X$, which is characterized by the following properties: \begin{enumerate} \item \[ \clPkt{\phi, \lif{\zeta}} = \bigsqcup_{\omega \in X / \a(\S{\phi}^{\Sigma_{0}})} \cPkt{\tilde{\phi}} \otimes \omega. \] \item For $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$, the distribution \[ \tilde{f}(\tilde{\phi}) := \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}}(\tilde{\pi}) \] is stable. \item Suppose $s$ is a semisimple element in $\cS{\phi}^{\theta}$ with $\omega = \a(s)$ and $(G', \phi') \longrightarrow (\phi, s)$. Fix a packet $\cPkt{\tilde{\phi}'}$ defined by part (1) and the local Langlands correspondence for $GL(n)$. Then we can choose $\cPkt{\tilde{\phi}}$ such that \begin{align} \label{eq: theta twisted character relation} \tilde{f}'(\tilde{\phi}') = \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega), \,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}), \end{align} where $\tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = tr (\tilde{\pi}(\tilde{f}) \circ A_{\tilde{\pi}}(\theta, \omega))$, and $A_{\tilde{\pi}}(\theta, \omega)$ is an intertwining operator between $\tilde{\pi} \otimes \omega$ and $\tilde{\pi}^{\theta}$, normalized so that if $f$ is the restriction of $\tilde{f}$ to $G(F)$, then \begin{align} \label{eq: theta twisted intertwining operator} (\tilde{f}|_{\lif{Z}_{F}G(F)})_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = \sum_{\r \subseteq \tilde{\pi}|_{G}} <x, \r^+> f_{G^{\theta}}(\r), \end{align} where $x$ is the image of $s$ in $\S{\phi}^{\theta}$ and $\r^{+}$ is an extension of $\r$ to $G^{+}(F) = G(F) \rtimes <\theta>$ such that $\r^{+}(\theta) = A_{\r}(\theta)$.
\end{enumerate} \end{theorem} \begin{remark} \label{rk: refined L-packet} \begin{enumerate} \item In the notation $\cPkt{\tilde{\phi}}$, one can think of $\tilde{\phi}$ as some parameter of $\widetilde{G}$ lifted from $\phi$. Since $\cPkt{\tilde{\phi}}$ is only defined up to twisting by $X$, one can also take $\tilde{\phi}$ as a formal symbol built into the notation $\cPkt{\tilde{\phi}}$. In this paper, we will take the second point of view. \item The normalization in \eqref{eq: theta twisted intertwining operator} is a consequence of \eqref{eq: theta twisting character}. When $\theta = id$, $\omega = 1$ and $x \in \S{\tilde{\phi}}$, $A_{\tilde{\pi}}(id, 1)$ becomes a scalar and is equal to $<x, \tilde{\pi}>$ by \eqref{eq: theta twisted intertwining operator}. So we obtain from \eqref{eq: theta twisted character relation} the character relation \begin{align} \label{eq: character relation} \tilde{f}'(\tilde{\phi}') = \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}}(\tilde{\pi}, 1) = \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} <x, \tilde{\pi}> \tilde{f}_{\widetilde{G}}(\tilde{\pi}). \end{align} \item If $F$ is archimedean, $\cPkt{\tilde{\phi}}$ is defined by Langlands \cite{Langlands:1989}. In fact, we have $\cPkt{\tilde{\phi}} = \clPkt{\phi, \lif{\zeta}}$ if $\cPkt{\phi}$ is not a singleton (see Proposition~\ref{prop: refined L-packet Archimedean case} and also \cite{H-C:1975}, Theorem 27.1). Moreover, parts (2) and (3) can be directly reduced to \eqref{eq: stable distribution}, \eqref{eq: character relation for G} and \eqref{eq: theta twisted character relation for G} (see \cite{Xu:2016}, Remark 6.32). \item For $\phi \in \cuP{G}$, since $\cPkt{\phi} = \mathcal{I}_{P}(\cPkt{\phi_{M, \lambda}})$, we can define $\cPkt{\tilde{\phi}} := \mathcal{I}_{\widetilde{P}}(\cPkt{\tilde{\phi}_{M, \lambda}})$, and this theorem can be easily extended to this case.
\end{enumerate} \end{remark} Let us call the subset $\cPkt{\tilde{\phi}}$ the refined $L$-packet of $\widetilde{G}$; it is the genuine $L$-packet that one would expect, modulo the action of $\Sigma_{0}$. As we can see from this theorem, the refined $L$-packet is uniquely determined by the character relation \eqref{eq: character relation} up to twisting by $X$. As a consequence, we can give another characterization of the refined $L$-packet. \begin{corollary} \label{cor: refined L-packet} In the setup of the previous theorem, any stable linear combination of characters in $\clPkt{\phi, \lif{\zeta}}$ is given by a linear combination of $\tilde{f}(\tilde{\phi} \otimes \omega) := (\tilde{f} \otimes \omega)(\tilde{\phi})$ for $\omega \in X$. \end{corollary} \begin{proof} In the archimedean case, one can deduce this from (\cite{Shelstad:1979}, Lemma 5.3). So we will assume $F$ is nonarchimedean, and we fix a refined $L$-packet $\cPkt{\tilde{\phi}} = \{ [\tilde{\pi}_{i}] \}_{i = 1}^{r}$. Suppose \[ \tilde{f}(\tilde{\phi}^{1}) := \sum_{i, j} a_{i j} \tilde{f}_{\widetilde{G}}(\tilde{\pi}_{i} \otimes \omega_{j}) \] is also stable for distinct $\omega_{j} \in X / \a(\S{\phi}^{\Sigma_{0}})$ and $a_{ij} \in \mathbb{C}$. Here $\tilde{\phi}^{1}$ is just a formal symbol denoting another stable distribution. Since the map $[\tilde{\pi}] \longrightarrow <\cdot, \tilde{\pi}>$ is a bijection in the nonarchimedean case, we have $r = |\S{\tilde{\phi}}|$ and the orthogonality relation \[ \sum_{x \in \S{\tilde{\phi}}} <x, \tilde{\pi}_{i}><x, \tilde{\pi}_{j}> = r \cdot \delta_{ij}.
\] By inverting the character relation \eqref{eq: character relation} we get \[ \tilde{f}_{\widetilde{G}}(\tilde{\pi}_{i}) = \sum_{x \in \S{\tilde{\phi}}} c(\tilde{\pi}_{i}, x) \tilde{f}' (\tilde{\phi}, x), \] where $\tilde{f}' (\tilde{\phi}, x) = \tilde{f}'(\tilde{\phi}')$ for some semisimple element $s \in \cS{\tilde{\phi}}$ whose image in $\S{\tilde{\phi}}$ is $x$ and some $\cPkt{\tilde{\phi}'}$ with $(G', \phi') \longrightarrow (\phi, s)$, and \[ c(\tilde{\pi}_{i}, x) = \frac{1}{r} <x, \tilde{\pi}_{i}>. \] Therefore \[ \tilde{f}^{\widetilde{G}}(\tilde{\phi}^{1}) = \sum_{i, j} \sum_{x \in \S{\tilde{\phi}}} a_{ij} c(\tilde{\pi}_{i}, x) \tilde{f}' (\tilde{\phi}\otimes \omega_{j}, x). \] If we separate the terms with $x = 1$ on the right hand side and move them to the left hand side, we get \begin{align} \label{eq: stable distribution identity} \tilde{f}^{\widetilde{G}}(\tilde{\phi}^{1}) - \frac{1}{r} \sum_{i, j} a_{ij}\tilde{f}(\tilde{\phi}\otimes \omega_{j}) = \sum_{i, j} \sum_{x \neq 1} a_{ij} c(\tilde{\pi}_{i}, x) \tilde{f}' (\tilde{\phi}\otimes \omega_{j}, x). \end{align} Now let us consider the endoscopic transfer map \[ \xymatrix{\mathcal{T}^{\varepsilon}: \bar{\mathcal{H}}(\widetilde{G}) \ar[r] & \bigoplus_{\widetilde{G}' \in \End{ell}{\widetilde{G}}} \bar{\mathcal{SI}}(\widetilde{G}') \\ \tilde{f} \ar@{|->}[r] &\bigoplus_{\widetilde{G}' \in \End{ell}{\widetilde{G}}} \tilde{f}^{\widetilde{G}'}. } \] The left hand side of \eqref{eq: stable distribution identity} can be viewed as the value of $\mathcal{T}^{\varepsilon}(\tilde{f})$ on some stable distribution of $\widetilde{G}$, and similarly the right hand side of \eqref{eq: stable distribution identity} can be viewed as values of $\mathcal{T}^{\varepsilon}(\tilde{f})$ on stable distributions of the elliptic endoscopic groups $\widetilde{G}' \in \End{ell}{\widetilde{G}} - \{\widetilde{G}\}$.
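For the reader's convenience, we indicate the routine verification of the inversion used above; it uses only the orthogonality relation stated before it and the fact that the characters of $\S{\tilde{\phi}}$ are real-valued, since $\S{\phi}$, and hence its subgroup $\S{\tilde{\phi}}$, is an elementary abelian $2$-group for the groups under consideration. Writing the character relation \eqref{eq: character relation} as \[ \tilde{f}'(\tilde{\phi}, x) = \sum_{j=1}^{r} <x, \tilde{\pi}_{j}> \tilde{f}_{\widetilde{G}}(\tilde{\pi}_{j}), \] we multiply both sides by $<x, \tilde{\pi}_{i}>$ and sum over $x \in \S{\tilde{\phi}}$: \[ \sum_{x \in \S{\tilde{\phi}}} <x, \tilde{\pi}_{i}> \tilde{f}'(\tilde{\phi}, x) = \sum_{j=1}^{r} \Big( \sum_{x \in \S{\tilde{\phi}}} <x, \tilde{\pi}_{i}> <x, \tilde{\pi}_{j}> \Big) \tilde{f}_{\widetilde{G}}(\tilde{\pi}_{j}) = r \cdot \tilde{f}_{\widetilde{G}}(\tilde{\pi}_{i}), \] which recovers the inversion formula with $c(\tilde{\pi}_{i}, x) = \frac{1}{r} <x, \tilde{\pi}_{i}>$.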
It is a consequence of the main results in \cite{Arthur:1996} that the image of $\mathcal{T}^{\varepsilon}$ can be characterized as the families of functions $(\tilde{f}^{\widetilde{G}'})_{\widetilde{G}' \in \End{ell}{\widetilde{G}}}$ such that for any $\widetilde{G}'_{1}, \widetilde{G}'_{2} \in \End{ell}{\widetilde{G}}$, the parabolic descents of $\tilde{f}^{\widetilde{G}'_{1}}$ and $\tilde{f}^{\widetilde{G}'_{2}}$ to any common Levi subgroup $\widetilde{M}'$ of $\widetilde{G}'_{1}$ and $\widetilde{G}'_{2}$ coincide. Since $\phi'$ does not factor through any proper Levi subgroup of $\L{G}$ for $x \neq 1$, the stable distribution associated to $\phi'$ is not supported on any proper Levi subgroup of $G$. The same is true for the stable distribution associated with $\cPkt{\tilde{\phi}'}$. So the right hand side of \eqref{eq: stable distribution identity} does not take values on any stable distributions supported on proper Levi subgroups of $\widetilde{G}$. Since \eqref{eq: stable distribution identity} holds for all functions in $\bar{\mathcal{H}}(\widetilde{G})$, both sides of \eqref{eq: stable distribution identity} must vanish. Therefore, \[ 0 = \tilde{f}^{\widetilde{G}}(\tilde{\phi}^{1}) - \frac{1}{r} \sum_{i, j} a_{ij}\tilde{f}(\tilde{\phi}\otimes \omega_{j}) = \sum_{k, j} (a_{kj} - \frac{1}{r} \sum_{i} a_{ij}) \tilde{f}_{\widetilde{G}}(\tilde{\pi}_{k} \otimes \omega_{j}). \] By the linear independence of characters, we have \[ a_{kj} - \frac{1}{r} \sum_{i} a_{ij} = 0 \] for all $k, j$. Fixing $j$ and varying $k$, we get a system of linear equations whose solutions are $a_{ij} = a_{1j}$ for $1 \leqslant i \leqslant r$. Since this is valid for every $j$, we conclude \[ \tilde{f}(\tilde{\phi}^{1}) = \sum_{ j} a_{1j} \tilde{f}(\tilde{\phi} \otimes \omega_{j}). \] \end{proof} This corollary is also valid for $\phi \in \cuP{G}$, and the proof is the same.
\subsection{Local twisted intertwining relation} \label{subsec: local twisted intertwining relation} The proof of our main local theorem (Theorem~\ref{thm: refined L-packet}) requires global methods, and the existence of the refined $L$-packet needs to be proved together with the character relations. Before we proceed to prove the theorem, let us first consider another form of the character relation, called the intertwining relation. The intertwining relation in its global form comes up naturally in the trace formula, and it plays an important role in Arthur's work \cite{Arthur:2013}. Here we need a twisted version of the intertwining relation, which in its local form is related to the twisted character relation \eqref{eq: theta twisted character relation}. In this section, we again assume $\widetilde{G}$ is of type \eqref{eq: similitude} and $\theta \in \Sigma_{0}$. Suppose $\phi \in \cPbd{G}$, and assume $\phi$ factors through $\phi_{M} \in \cPbd{M}$ for some Levi subgroup $M$ of $G$. Let us define \[ \cT{\phi}(G, M) = A_{\D{M}} Z(\D{G})^{\Gal{F}} / Z(\D{G})^{\Gal{F}}, \] where $A_{\D{M}}$ is the maximal split central torus in $\D{M}$. It is a torus in $\com[0]{\cS{\phi}}$. Then we can define its normalizer in $\cS{\phi}$, \[ \cN{\phi}(G, M) = \text{Norm}(\cT{\phi}(G, M), \cS{\phi}), \] and the group of its connected components \begin{align*} \N{\phi}(G, M) & = \cN{\phi}(G, M) / \cN{\phi}(G, M)^{0} \\ & = \text{Norm}(\cT{\phi}(G, M), \cS{\phi}) / \text{Cent}(\cT{\phi}(G, M), \com[0]{\cS{\phi}})^{0}. \end{align*} Notice that $\S{\phi}(M) := \S{\phi_{M}}$ is a normal subgroup of $\N{\phi}(G, M)$. The quotient $\N{\phi}(G, M) / \S{\phi}(M)$ is the Weyl group \[ W_{\phi}(G, M) = W(\cS{\phi}, \cT{\phi}(G, M)).
\] We write $\com[0]{W_{\phi}}(G, M)$ for the normal subgroup of automorphisms in $W_{\phi}(G, M)$ that are induced from the connected component $\com[0]{\cS{\phi}}$, and let \[ R_{\phi}(G, M) = W_{\phi}(G, M) / \com[0]{W_{\phi}}(G, M). \] Moreover, $\com[0]{W_{\phi}}(G, M)$ is a normal subgroup of $\N{\phi}(G, M)$, and we denote their quotient by $\S{\phi}(G, M)$, which is a subgroup of $\S{\phi}$. Suppose $\widetilde{M}$ is the Levi subgroup of $\widetilde{G}$ containing $M$. Then similarly we can define \[ \cT{\tilde{\phi}} (G, M) = A_{\D{\widetilde{M}}} Z(\D{\widetilde{G}})^{\Gal{F}} / Z(\D{\widetilde{G}})^{\Gal{F}}, \] which is a torus of $\com[0]{\cS{\tilde{\phi}}}$. Since $A_{\D{\widetilde{M}}} / \D{D} = A_{\D{M}}$, we have $\cT{\tilde{\phi}}(G, M) = \cT{\phi}(G, M)$. We can also define \[ \cN{\tilde{\phi}}(G, M) = \text{Norm}(\cT{\tilde{\phi}}(G, M), \cS{\tilde{\phi}}) \subseteq \cN{\phi}(G, M), \] and the group of its connected components \begin{align*} \N{\tilde{\phi}}(G, M) & = \cN{\tilde{\phi}}(G, M) / \cN{\tilde{\phi}}(G, M)^{0} \\ & = \text{Norm}(\cT{\tilde{\phi}}(G, M), \cS{\tilde{\phi}}) / \text{Cent}(\cT{\tilde{\phi}}(G, M), \com[0]{\cS{\tilde{\phi}}})^{0} \subseteq \N{\phi}(G, M). \end{align*} Again $\S{\tilde{\phi}}(M) := \S{\tilde{\phi}_{M}}$ is a normal subgroup of $\N{\tilde{\phi}}(G, M)$. The quotient $\N{\tilde{\phi}}(G, M) / \S{\tilde{\phi}}(M)$ is the Weyl group \[ W_{\tilde{\phi}}(G, M) = W(\cS{\tilde{\phi}}, \cT{\tilde{\phi}}(G, M)). \] Let us write $\com[0]{W_{\tilde{\phi}}}(G, M)$ for the normal subgroup of automorphisms in $W_{\tilde{\phi}}(G, M)$ that are induced from the connected component $\com[0]{\cS{\tilde{\phi}}}$. Since $\com[0]{\cS{\phi}} = \com[0]{\cS{\tilde{\phi}}}$, we have $\com[0]{W_{\tilde{\phi}}}(G, M) = \com[0]{W_{\phi}}(G, M)$. So \[ R_{\tilde{\phi}}(G, M) = W_{\tilde{\phi}}(G, M) / \com[0]{W_{\tilde{\phi}}}(G, M) \subseteq R_{\phi}(G, M).
\] At last, $\com[0]{W_{\tilde{\phi}}}(G, M)$ is a normal subgroup of $\N{\tilde{\phi}}(G, M)$, and we denote their quotient by $\S{\tilde{\phi}}(G, M)$, which is a subgroup of $\S{\tilde{\phi}}$. If $\phi_{M} \in \cPdt{M}$, then $\cT{\phi}(G, M) = \cT{\phi}$ is a maximal torus in $\com[0]{\cS{\phi}}$, and hence $\S{\phi}(G, M) = \S{\phi}$, $\S{\tilde{\phi}}(G, M) = \S{\tilde{\phi}}$. So in this case let us also write \[ \N{\phi}(G, M) = \N{\phi}, \,\,\,\,\,\,\,\,\,\,\, \N{\tilde{\phi}}(G, M) = \N{\tilde{\phi}}, \] \[ W_{\phi}(G, M) = W_{\phi}, \,\,\,\,\,\,\,\,\,\,\, W_{\tilde{\phi}}(G, M) = W_{\tilde{\phi}}, \] \[ \com[0]{W_{\phi}}(G, M) = \com[0]{W_{\phi}}, \,\,\,\,\,\,\,\,\,\,\, \com[0]{W_{\tilde{\phi}}}(G, M) = \com[0]{W_{\tilde{\phi}}}. \] To summarize all these relations, we have the following commutative diagram. \begin{align} \label{eq: twisted intertwining relation diagram} \xymatrix @C=0.5cm @R=0.5cm{&&&&& 1 \ar[dd] && 1 \ar[dd] && \\ &&&& 1 \ar[dd] && 1 \ar[dd] &&& \\ &&&&& \com[0]{W_{\tilde{\phi}}}(G, M) \ar[dd] \ar@{=}[rr] && \com[0]{W_{\tilde{\phi}}}(G, M) \ar[dd]&& \\ &&&& \com[0]{W_{\phi}}(G, M) \ar@{=}[ur] \ar[dd] \ar@{=}[rr] && \com[0]{W_{\phi}}(G, M) \ar@{=}[ur] \ar[dd] \\ & 1 \ar[rr] && \S{\tilde{\phi}}(M) \ar@{_{(}->}[dl] \ar@{=}[dd] \ar[rr] && \N{\tilde{\phi}}(G, M) \ar@{_{(}->}[dl] \ar[dd] \ar[rr] && W_{\tilde{\phi}}(G, M) \ar@{_{(}->}[dl] \ar [dd] \ar[rr] && 1\\ 1 \ar[rr] && \S{\phi}(M) \ar@{=}[dd] \ar[rr] && \N{\phi}(G, M) \ar[dd] \ar[rr] && W_{\phi}(G, M) \ar[dd] \ar[rr] && 1 & \\ & 1 \ar[rr] && \S{\tilde{\phi}}(M) \ar@{_{(}->}[dl] \ar[rr] && \S{\tilde{\phi}}(G, M) \ar[dd] \ar@{_{(}->}[dl] \ar[rr] && R_{\tilde{\phi}}(G, M) \ar[dd] \ar@{_{(}->}[dl] \ar[rr] && 1\\ 1 \ar[rr] && \S{\phi}(M) \ar[rr] && \S{\phi}(G, M) \ar[dd] \ar[rr] && R_{\phi}(G, M) \ar[dd] \ar[rr] && 1 & \\ &&&&& 1 && 1 && \\ &&&& 1 && 1 &&& } \end{align} Suppose $u \in \N{\phi}(G, M)$, we write $w_{u}$ for the image of $u$ in $W_{\phi}(G, M)$ and $x_{u}$ for the image of $u$ 
in $\S{\phi}(G, M)$. Since $w_{u}$ normalizes $A_{\D{M}}$, it also normalizes $\D{M}$, and therefore can be treated as an element of $W(\D{M})$. The standard parabolic subgroup $P$ containing $M$ allows us to identify $w_{u}$ with an element of $W(M) \cong W(\widetilde{M})$. We choose a representative $\theta_{u}$ of $w_{u}$ in $G(F)$ preserving the $F$-splitting of $M$. Then $\phi_{M} \in \cPbd{M^{\theta_{u}}}$ and $u$ defines an element of $\S{\phi}(M)^{\theta_{u}} := \S{\phi_{M}}^{\theta_{u}}$. Note that \[ M \cong GL(N_{1}) \times \cdots \times GL(N_{q}) \times G_{-} \] and \[ \widetilde{M} \cong GL(N_{1}) \times \cdots \times GL(N_{q}) \times \widetilde{G}_{-}, \] where $G_{-}$ (resp. $\widetilde{G}_{-}$) is of the same type as $G$ (resp. $\widetilde{G}$) with smaller rank. Suppose \[ \phi_{M} = \phi_{1} \times \cdots \times \phi_{q} \times \phi_{-}, \] where $\phi_{i} \in \Pbd{N_{i}}$ and $\phi_{-} \in \cPbd{G_{-}}$. Then we can define \[ \cPkt{\phi_{M}} = \r_{\phi_{1}} \otimes \cdots \otimes \r_{\phi_{q}} \otimes \cPkt{\phi_{-}}, \] where $\r_{\phi_{i}}$ is the representation associated to $\phi_{i}$. Any representation in this packet can be written as \begin{align*} \r_{M} & = \r_{\phi_{1}} \otimes \cdots \otimes \r_{\phi_{q}} \otimes \r_{-} \\ & = \r_{GL} \otimes \r_{-}. \end{align*} Since $\S{\phi_{M}} \cong \S{\phi_{-}}$, we can define a pairing between $\cPkt{\phi_{M}}$ and $\S{\phi_{M}}$ by \[ <\cdot, \r_{M}> : = <\cdot, \r_{GL}> <\cdot, \r_{-}>, \] where $<\cdot, \r_{GL}>$ is in fact trivial. By Theorem~\ref{thm: twisted LLC} and the local Langlands correspondence for $GL(n)$, we know $\r_{M}^{\theta_{u}} \cong \r_{M}$. As usual, we can take the intertwining operator $\r_{M}^{+}(\theta_{u}) = \r_{GL}^{+}(\theta_{u}) \otimes \r_{-}^{+}(\theta_{u})$ to preserve the Whittaker models on the general linear components, so that the extension $< \cdot, \r_{GL}^{+} >$ of $<\cdot, \r_{GL}>$ to $\S{\phi}(M)^{\theta_{u}}$ is trivial (see Theorem~\ref{thm: LLC}).
So the extension $<\cdot, \r_{-}^{+}>$ defined in Theorem~\ref{thm: twisted LLC} (see also Remark~\ref{rk: twisted LLC}) determines an extension $<\cdot, \r_{M}^{+}>$. Now we define \begin{align} \label{eq: induced character} f_{G}(\phi, u) = \sum_{[\r_{M}] \in \cPkt{\phi_{M}}} <u, \r_{M}^{+}>tr(R_{P}(\theta_{u}, \r_{M}^{+}, \phi)\mathcal{I}_{P}(\r_{M}, f)), \,\,\,\,\,\, f \in \bar{\mathcal{H}}(G, \chi), \end{align} where $R_{P}(\theta_{u}, \r_{M}^{+}, \phi)$ is the normalized self-intertwining operator on the space $\mathcal{H}_{P}(\r_{M})$ of the normalized induced representation $\mathcal{I}_{P}(\r_{M})$ (see \cite{Arthur:2013}, Section 2.4). If we assume the existence of the refined $L$-packet $\cPkt{\tilde{\phi}_{-}}$ for $\tilde{\phi}_{-}$ as defined in Theorem~\ref{thm: refined L-packet}, then $\cPkt{\tilde{\phi}_{M}}$ can be defined in the same way as $\cPkt{\phi_{M}}$, and we can also define \begin{align} \label{eq: induced twisted character} \lif{f}_{\widetilde{G}}(\tilde{\phi}, u) = \sum_{[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}}} tr(R_{\widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) \mathcal{I}^{\omega}_{P}(\tilde{\pi}_{M} \otimes \omega^{-1}, \lif{f})), \,\,\,\,\,\, \lif{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}). \end{align} Some explanations of this distribution are in order. Firstly, $\omega = \a^{M}(u) = \a^{G}(x_{u})$. If \[ \tilde{\pi}_{M} = \r_{\phi_{1}} \otimes \cdots \otimes \r_{\phi_{q}} \otimes \tilde{\pi}_{-} \] contains $\r_{M}$ in its restriction to $M(F)$, then it follows from Corollary~\ref{cor: theta twisting character} that $\tilde{\pi}_{M}^{\theta_{u}} \cong \tilde{\pi}_{M} \otimes \omega$, and we let $A_{\tilde{\pi}_{M}}(\theta_{u}, \omega)$ be the corresponding intertwining operator.
Secondly, the automorphism $\theta_{u}$ on $\widetilde{M}$ is a composition of permutations of the general linear factors and automorphisms sending $g_{i}$ to \( \theta_{N_{i}}(g_{i}) \cdot \c(g_{-}), \) where $g = g_{1} \times \cdots \times g_{q} \times g_{-} \in \widetilde{M}$. However, the effect on the general linear components of $\tilde{\pi}_{M}$ is the same as for $M$, so we can use the same intertwining operator for the general linear components of $\tilde{\pi}_{M}$. In view of \eqref{eq: theta twisted intertwining operator}, the pairing inside \eqref{eq: induced character} is built into $A_{\tilde{\pi}_{M}}(\theta_{u}, \omega)$ and hence into the operator $R_{\widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi})$. Thirdly, \[ \mathcal{I}^{\omega}_{P}(\tilde{\pi}_{M} \otimes \omega^{-1}, \lif{f}) = R(\omega) \circ \mathcal{I}_{P}(\tilde{\pi}_{M} \otimes \omega^{-1}, \lif{f}), \] where $R(\omega)$ is multiplication by $\omega$, and $R_{\widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi})$ is the normalized intertwining operator between $\mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M})$ and $\mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1})$. The last point concerns this normalization. Let us recall the formulation of the normalized intertwining operator \[ R_{P}(\theta_{u}, \r_{M}^{+}, \phi) := \r_{M}^{+}(\theta_{u}) \circ (r_{P}(w_{u}, \phi_{M})^{-1} J_{P}(\theta_{u}, \r_{M})). \] Here $r_{P}(w_{u}, \phi_{M})$ is the normalizing factor, and $J_{P}(\theta_{u}, \r_{M})$ is the unnormalized intertwining operator between $\mathcal{H}_{P}(\r_{M})$ and $\mathcal{H}_{P}(\r_{M}^{\theta_{u}^{-1}})$, which is defined by an integral over \[ N_{P} \cap w_{u} N_{P} w_{u}^{-1} \backslash N_{P}, \] where $N_{P}$ is the unipotent radical of $P$. The key point is to notice that \[ \text{Res}^{\widetilde{G}(F)}_{G(F)} \mathcal{I}_{\widetilde{P}} ( \tilde{\pi}_{M} ) \cong \mathcal{I}_{P} ( \text{Res}^{\widetilde{M}(F)}_{M(F)} \tilde{\pi}_{M} ).
\] So we obtain isomorphisms between the following spaces as $\bar{\mathcal{H}}(G, \chi)$-modules \begin{align*} \mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M}) & \cong \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} \mathcal{H}_{P}(\r_{M}) \cong \mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1}), \\ \mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M}^{\theta_{u}^{-1}}) & \cong \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} \mathcal{H}_{P}(\r_{M}^{\theta_{u}^{-1}}). \end{align*} Under these identifications, we can easily see from the definition of unnormalized intertwining operators that \[ J_{\widetilde{P}}(\theta_{u}, \tilde{\pi}_{M}) = \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} J_{P}(\theta_{u}, \r_{M}). \] Let $\r_{\phi_{M}} = \r_{\phi_{1}} \otimes \cdots \otimes \r_{\phi_{q}} \otimes \r_{\phi_{-}}$. The normalizing factor $r_{P}(w_{u}, \phi_{M})$ for $J_{P}(\theta_{u}, \r_{M})$ is equal to the product of the $\lambda$-factor $\lambda(w_{u})$ (see \cite{Arthur:2013}, (2.3.19)) and \begin{align}\label{formula: normalizing factor} L(0, \r_{\phi_{M}}, \rho^{\vee}_{w_{u}^{-1}P | P} ) \varepsilon(0, \r_{\phi_{M}}, \rho^{\vee}_{w_{u}^{-1}P | P}, \psi_{F} )^{-1} L(1, \r_{\phi_{M}}, \rho^{\vee}_{w_{u}^{-1}P | P})^{-1}, \end{align} where the $L$-functions involved here are either Rankin-Selberg $L$-functions or (skew-)symmetric square $L$-functions. We can set $r_{\widetilde{P}}(w_{u}, \tilde{\phi}_{M}) := r_{P}(w_{u}, \phi_{M})$ for $J_{\widetilde{P}}(\theta_{u}, \tilde{\pi}_{M})$. In fact, this is what one would expect according to Langlands' conjectural formula for the normalizing factors.
According to this conjecture, \eqref{formula: normalizing factor} could be replaced by \[ L(0, \rho^{\vee}_{w_{u}^{-1}P | P} \circ \phi_{M} ) \varepsilon(0, \rho^{\vee}_{w_{u}^{-1}P | P} \circ \phi_{M}, \psi_{F} )^{-1} L(1, \rho^{\vee}_{w_{u}^{-1}P | P} \circ \phi_{M} )^{-1}, \] where $\rho^{\vee}_{w_{u}^{-1}P | P}$ is the contragredient of the adjoint representation of $\L{M}$ on $\D{\mathfrak{n}}_{w_{u}^{-1}P} \cap \D{\mathfrak{n}}_{P} \backslash \D{\mathfrak{n}}_{w_{u}^{-1}P}$, and $\D{\mathfrak{n}}_{P}$ denotes the Lie algebra of $\D{N}_{P}$. Since \[ \rho^{\vee}_{w_{u}^{-1}\widetilde{P} | \widetilde{P}} \circ \tilde{\phi}_{M} = \rho^{\vee}_{w_{u}^{-1}P | P} \circ \phi_{M}, \] the conjectural formulas for $r_{\widetilde{P}}(w_{u}, \tilde{\phi}_{M})$ and $r_{P}(w_{u}, \phi_{M})$ are the same. Finally, we can normalize $A_{\tilde{\pi}_{M}}(\theta_{u}, \omega)$ according to \eqref{eq: theta twisted intertwining operator}, so after composing with this operator we get \begin{align}\label{eq: intertwining operator identity} R_{\widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) := A_{\tilde{\pi}_{M}}(\theta_{u}, \omega) \circ (r_{\widetilde{P}}(w_{u}, \tilde{\phi}_{M})^{-1} J_{\widetilde{P}}(\theta_{u}, \tilde{\pi}_{M})) = \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} <u, \r_{M}^{+}> R_{P}(\theta_{u}, \r_{M}^{+}, \phi). \end{align} As a result, if $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$ is supported on $\lif{Z}_{F}G(F)$ and $f$ is its restriction to $G(F)$, then \[ \tilde{f}_{\widetilde{G}}(\tilde{\phi}, u)= f_{G}(\phi, u). \] Suppose $s$ is a semisimple element in $\cS{\phi}$, and $(G', \phi') \longrightarrow (\phi, s)$. For any lift $\cPkt{\tilde{\phi}'}$, let us define \[ \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s) = \tilde{f}^{\widetilde{G}'}(\tilde{\phi}'), \,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}). \] Now we can state the local $\omega$-twisted intertwining relation.
\begin{theorem}\label{thm: twisted intertwining relation} Suppose $\phi \in \cPbd{G}$. For semisimple $s \in \cS{\phi}$ with $(G', \phi') \longrightarrow (\phi, s)$, the following identity holds for some lifts $\cPkt{\tilde{\phi}}$ and $\cPkt{\tilde{\phi}'}$: \begin{align}\label{eq: twisted intertwining relation} \tilde{f}_{\widetilde{G}}(\tilde{\phi}, u) = \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s), \,\,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}), \end{align} where $u \in \N{\phi}(G, M)$ and $s \in \cS{\phi}$ have the same image in $\S{\phi}$. \end{theorem} The next lemma shows that for $\widetilde{G}$, the $\omega$-twisted intertwining relation \eqref{eq: twisted intertwining relation} is equivalent to the $\omega$-twisted character relation (see \eqref{eq: theta twisted character relation} with $\theta$ = id), provided one has the local intertwining relation for $G$ (\cite{Arthur:2013}, Theorem 2.4.1). \begin{lemma}\label{lemma: twisted intertwining relation 1} For $\phi \in \cPbd{G}$, we assume $\phi$ factors through $\phi_{M} \in \cPdt{M}$ and that $\cPkt{\tilde{\phi}_{M}}$ exists. We define $\cPkt{\tilde{\phi}} := \mathcal{I}_{P}(\cPkt{\tilde{\phi}_{M}})$. Suppose $u \in \N{\phi}(G, M)$ and semisimple $s \in \cS{\phi}$ have the same image $x$ in $\S{\phi}$. Then \[ \tilde{f}_{\widetilde{G}}(\tilde{\phi}, u) = \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s) \] for some $\cPkt{\tilde{\phi}}$ and $\cPkt{\tilde{\phi}'}$ if and only if \[ \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s)= \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega), \] where $\omega = \a(x)$ and $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$. \end{lemma} \begin{proof} By Corollary~\ref{cor: theta twisting character}, we have $\cPkt{\tilde{\phi}} = \cPkt{\tilde{\phi}} \otimes \omega$.
Since \[ \tilde{f}_{\widetilde{G}}(\tilde{\phi}, u) = \sum_{[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}}} tr(R_{\widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi})\mathcal{I}^{\omega}_{P}(\tilde{\pi}_{M} \otimes \omega^{-1}, \lif{f})), \] we can assume \[ \tilde{f}_{\widetilde{G}}(\tilde{\phi}, u) = \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega)', \] where $\tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega)' = tr(\tilde{\pi}(\tilde{f}) \circ A_{\tilde{\pi}}(\omega)')$ for some operator $A_{\tilde{\pi}}(\omega)'$ intertwining $\tilde{\pi} \otimes \omega$ and $\tilde{\pi}$. Note that if $f$ is the restriction of $\tilde{f}$ to $G(F)$, then $(\tilde{f}|_{\lif{Z}_{F}G(F)})_{\widetilde{G}}(\tilde{\phi}, u) = f_{G}(\phi, u)$. By the local intertwining relation for $G$ (\cite{Arthur:2013}, Theorem 2.4.1), we have \[ (\tilde{f}|_{\lif{Z}_{F}G(F)})_{\widetilde{G}}(\tilde{\phi}, u) = f_{G}(\phi, u) = f'_{G}(\phi, s) = \sum_{[\r] \in \cPkt{\phi}} <x, \r>f_{G}(\r). \] So $\tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega)' = \tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega)$ for $\tilde{f}$ supported on $\lif{Z}_{F}G(F)$. This means that for $[\tilde{\pi}] \in \cPkt{\tilde{\phi}}$, $A_{\tilde{\pi}}(\omega)' = A_{\tilde{\pi}}(\omega)$ as defined by \eqref{eq: theta twisted intertwining operator}. Thus $\tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega)' = \tilde{f}_{\widetilde{G}}(\tilde{\pi}, \omega)$ for all $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$, and the lemma follows. \end{proof} As we can see from the proof of this lemma, $\tilde{f}_{\widetilde{G}}(\tilde{\phi}, u)$ only depends on the image of $u$ in $\S{\phi}$. One should expect that $\tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s)$ only depends on the image of $s$ in $\S{\phi}$ as well, by either the $\omega$-twisted character relation or the $\omega$-twisted intertwining relation.
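Indeed, the first dependence can be read off from the proof of Lemma~\ref{lemma: twisted intertwining relation 1}: for all $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$ one has
\[
\tilde{f}_{\widetilde{G}}(\tilde{\phi}, u) = \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} tr(\tilde{\pi}(\tilde{f}) \circ A_{\tilde{\pi}}(\omega)), \,\,\,\,\,\, \omega = \a(x_{u}),
\]
with $A_{\tilde{\pi}}(\omega)$ as in \eqref{eq: theta twisted intertwining operator}, and the right-hand side involves $u$ only through its image $x_{u}$ in $\S{\phi}$.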
However, there is a slight ambiguity here, since $\tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s)$ depends on the choice of the lift $\cPkt{\tilde{\phi}'}$. The next lemma resolves the ambiguity and establishes this property directly. \begin{lemma}\label{lemma: induced twisted character} For $\phi \in \cPbd{G}$ and $x \in \S{\phi}$, there is a natural way to obtain a family of lifts $\cPkt{\tilde{\phi}'}$ for all semisimple $s \in \cS{\phi}$ with image $x$ in $\S{\phi}$ and $(G', \phi') \rightarrow (\phi, s)$. For these lifts, the values $\tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s)$ all coincide. \end{lemma} \begin{proof} The proof is essentially the same as that for $f'_{G}(\phi, s)$ in (\cite{Arthur:2013}, Section 4.5), except that there is no such ambiguity in that case. Since it is important to clarify the ambiguity here, we will review the original proof and show how one can get rid of it. Suppose a semisimple element $s \in \cS{\phi}$ has image $x$ in $\S{\phi}$. If $s$ is replaced by an $\com[0]{\cS{\phi}}$-conjugate $s_{1}$, then the corresponding pair $(G'_{1}, \phi'_{1})$ is isomorphic to $(G', \phi')$ under $\com[0]{\cS{\phi}}$-conjugation. This extends to an isomorphism between $\widetilde{G}'_{1}$ and $\widetilde{G}'$, for $\com[0]{\cS{\tilde{\phi}}} = \com[0]{\cS{\phi}}$. So we can simply take the lifts $\cPkt{\tilde{\phi}'} \cong \cPkt{\tilde{\phi}'_{1}}$, and it is clear that $\tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s) = \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s_{1})$. Now if we fix a maximal torus $\bar{T}_{\phi}$ of $\com[0]{\cS{\phi}}$ and a Borel subgroup $\bar{B}_{\phi}$ containing it, any automorphism of the complex reductive group $\com[0]{\cS{\phi}}$ stabilizes a conjugate of $(\bar{T}_{\phi}, \bar{B}_{\phi})$. So we can choose a semisimple representative $s_{x}$ of $x$ in $\cS{\phi}$ such that $\text{Int}(s_{x})$ stabilizes $(\bar{T}_{\phi}, \bar{B}_{\phi})$, and such representatives are determined up to a $\bar{T}_{\phi}$-translate.
Moreover, the complex torus \[ \bar{T}_{\phi, x} = \text{Cent}(s_{x}, \bar{T}_{\phi})^{0} \] in $\bar{T}_{\phi}$ is uniquely determined by $x$. Note that $\bar{T}_{\phi, x}$ is the connected component of the kernel of the following morphism \[ \xymatrix{ \bar{T}_{\phi} \ar[r] & \bar{T}_{\phi} \\ t \ar@{|->}[r] & s_{x}^{-1}ts_{x}t^{-1}} \] So any point of $\bar{T}_{\phi}$ can be written as $(s_{x}^{-1}ts_{x}t^{-1}) t_{x}$ for $t \in \bar{T}_{\phi}$ and $t_{x} \in \bar{T}_{\phi, x}$ (see \cite{Springer:2009}, Corollary 5.4.5), and hence any point in $s_{x} \bar{T}_{\phi}$ can be written as \[ s_{x} (s_{x}^{-1}ts_{x}t^{-1}) t_{x} = t s_{x} t^{-1} t_{x} = t s_{x} t_{x} t^{-1}, \,\,\,\, t \in \bar{T}_{\phi}, \,\,\, t_{x} \in \bar{T}_{\phi, x}. \] This means that it is enough to consider the $\bar{T}_{\phi, x}$-translates of $s_{x}$. Finally, we can take the centralizer $\D{M}$ of $\bar{T}_{\phi, x}$ in $\D{G}$, which is a Levi subgroup of $\D{G}$, and it is dual to a Levi subgroup $M_{x}$ of $G$. So $(\phi, s_{x})$ is the image of a pair \[ (\phi_{M_{x}}, s_{M_{x}}), \,\,\,\, \phi_{M_{x}} \in \cPbd{M_{x}}, \,\, s_{M_{x}} \in S_{\phi_{M_{x}}}, \] attached to $M_{x}$ under the $L$-embedding $\L{M_{x}} \subseteq \L{G}$. This pair is in turn the image of an endoscopic pair $(M'_{x}, \phi'_{M_{x}})$, and in particular, $\phi'_{M_{x}} \in \cPdt{M'_{x}}$. Note that for all $\bar{T}_{\phi, x}$-translates $s_{x, 1}$ of $s_{x}$, the corresponding $\phi'$ also factors through $\L{M'_{x}}$. And we have \[ \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s_{x}) = \tilde{f}^{M'_{x}}(\tilde{\phi}'_{M_{x}}) = \tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s_{x, 1}). \] Now if we reverse our argument, we see that any lift $\cPkt{\tilde{\phi}'_{M_{x}}}$ will give rise to a family of lifts $\cPkt{\tilde{\phi}'}$ for all semisimple $s \in \cS{\phi}$ with image $x$ in $\S{\phi}$, such that the values $\tilde{f}'_{\widetilde{G}}(\tilde{\phi}, s)$ all coincide. This finishes the proof.
\end{proof} In fact, our discussion of the $\omega$-twisted intertwining relation for $\widetilde{G}$ can be extended to that for $\widetilde{G} \rtimes \theta$. For $\phi \in \cPbd{G^{\theta}}$, suppose it factors through $\phi_{M} \in \cPbd{M}$ for some Levi subgroup $M$ of $G$. Let us define \[ \xymatrix{\N{\phi}^{\theta}(G, M) = \text{Norm}(\cT{\phi}(G, M), \cS{\phi}^{\theta})/ \text{Cent}(\cT{\phi}(G, M), \com[0]{\cS{\phi}})^{0}, & W_{\phi}^{\theta}(G, M) = W(\cS{\phi}^{\theta}, \cT{\phi}(G, M)), } \] \[ \xymatrix{\N{\phi}^{+}(G, M) = \text{Norm}(\cT{\phi}(G, M), \cS{\phi}^{+})/ \text{Cent}(\cT{\phi}(G, M), \com[0]{\cS{\phi}})^{0}, & W_{\phi}^{+}(G, M) = W(\cS{\phi}^{+}, \cT{\phi}(G, M)), } \] and \[ \xymatrix{\N{\tilde{\phi}}^{+}(G, M) = \text{Norm}(\cT{\tilde{\phi}}(G, M), \cS{\tilde{\phi}}^{+})/ \text{Cent}(\cT{\tilde{\phi}}(G, M), \com[0]{\cS{\tilde{\phi}}})^{0}, & W_{\tilde{\phi}}^{+}(G, M) = W(\cS{\tilde{\phi}}^{+}, \cT{\tilde{\phi}}(G, M)).} \] Then we obtain a commutative diagram analogous to \eqref{eq: twisted intertwining relation diagram}.
\begin{align} \label{eq: theta twisted intertwining relation diagram} \xymatrix @C=0.5cm @R=0.5cm{&&&&& 1 \ar[dd] && 1 \ar[dd] && \\ &&&& 1 \ar[dd] && 1 \ar[dd] &&& \\ &&&&& \com[0]{W_{\tilde{\phi}}}(G, M) \ar[dd] \ar@{=}[rr] && \com[0]{W_{\tilde{\phi}}}(G, M) \ar[dd]&& \\ &&&& \com[0]{W_{\phi}}(G, M) \ar@{=}[ur] \ar[dd] \ar@{=}[rr] && \com[0]{W_{\phi}}(G, M) \ar@{=}[ur] \ar[dd] \\ & 1 \ar[rr] && \S{\tilde{\phi}}(M) \ar@{_{(}->}[dl] \ar@{=}[dd] \ar[rr] && \N{\tilde{\phi}}^{+}(G, M) \ar@{_{(}->}[dl] \ar[dd] \ar[rr] && W_{\tilde{\phi}}^{+}(G, M) \ar@{_{(}->}[dl] \ar [dd] \ar[rr] && 1\\ 1 \ar[rr] && \S{\phi}(M) \ar@{=}[dd] \ar[rr] && \N{\phi}^{+}(G, M) \ar[dd] \ar[rr] && W_{\phi}^{+}(G, M) \ar[dd] \ar[rr] && 1 & \\ & 1 \ar[rr] && \S{\tilde{\phi}}(M) \ar@{_{(}->}[dl] \ar[rr] && \S{\tilde{\phi}}^{+}(G, M) \ar[dd] \ar@{_{(}->}[dl] \ar[rr] && R_{\tilde{\phi}}^{+}(G, M) \ar[dd] \ar@{_{(}->}[dl] \ar[rr] && 1\\ 1 \ar[rr] && \S{\phi}(M) \ar[rr] && \S{\phi}^{+}(G, M) \ar[dd] \ar[rr] && R_{\phi}^{+}(G, M) \ar[dd] \ar[rr] && 1 & \\ &&&&& 1 && 1 && \\ &&&& 1 && 1 &&& } \end{align} For $u \in \N{\phi}^{\theta}(G, M)$, we again write $w_{u}$ for the image of $u$ in $W^{\theta}_{\phi}(G, M)$ and $x_{u}$ for the image of $u$ in $\S{\phi}^{\theta}(G, M)$. Since $w_{u}$ normalizes $A_{\D{M}}$, it also normalizes $\D{M}$, and therefore can be treated as an element in the Weyl set $W(\D{G} \rtimes \D{\theta}, \D{M})$. The standard parabolic subgroup $P$ containing $M$ allows us to identify $w_{u}$ with an element in the Weyl set $W(G \rtimes \theta, M) \cong W(\widetilde{G} \rtimes \theta, \widetilde{M})$. We choose a representative $\theta_{u}$ of $w_{u}$ in $G(F) \rtimes \theta$ preserving the $F$-splitting of $M$. Then $\phi_{M} \in \cPbd{M^{\theta_{u}}}$ and $u$ defines an element in $\S{\phi_{M}}^{\theta_{u}}$. 
As in the previous case, we define \begin{align}\label{eq: theta induced character} f_{G^{\theta}}(\phi, u) = \sum_{[\r_{M}] \in \cPkt{\phi_{M}}} <u, \r_{M}^{+}>tr(R_{P|\theta P}(\theta_{u}, \r_{M}^{+}, \phi) \mathcal{I}_{P}^{\theta}(\r_{M}, f)), \,\,\,\,\,\, f \in \bar{\mathcal{H}}(G, \chi). \end{align} Here $R_{P|\theta P}(\theta_{u}, \r_{M}^{+}, \phi)$ is the normalized intertwining operator between $\mathcal{H}_{\theta P}(\r^{\theta^{-1}}_{M})$ and $\mathcal{H}_{P}(\r_{M})$, and \[ \mathcal{I}_{P}^{\theta}(\r_{M}, f) = R(\theta)^{-1} \circ \mathcal{I}_{P}(\r_{M}, f), \] where $R(\theta)$ is induced from the action of $\theta$ on $G(F)$. We can also define \begin{align}\label{eq: theta induced twisted character} \lif{f}_{\widetilde{G}^{\theta}}(\tilde{\phi}, u) = \sum_{[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}}} tr(R_{\widetilde{P} | \theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) \mathcal{I}^{\theta, \omega}_{P}(\tilde{\pi}_{M} \otimes \omega^{-1}, \lif{f})), \,\,\,\,\,\, \lif{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}). \end{align} Here $\omega = \a^{M}(u) = \a^{G}(x_{u})$, $R_{\widetilde{P} | \theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi})$ is the normalized intertwining operator between $\mathcal{H}_{\theta \widetilde{P}}(\tilde{\pi}_{M}^{\theta^{-1}})$ and $\mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1})$, and \[ \mathcal{I}_{\widetilde{P}}^{\theta, \omega}(\tilde{\pi}_{M} \otimes \omega^{-1}, \tilde{f}) = R(\theta)^{-1} \circ \mathcal{I}^{\omega}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1}, \tilde{f}). \] As before, we can identify \[ \mathcal{H}_{\theta \widetilde{P}}(\tilde{\pi}_{M}^{\theta^{-1}}) \cong \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} \mathcal{H}_{\theta P}(\r_{M}^{\theta^{-1}}), \] and \[ \mathcal{H}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1}) \cong \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} \mathcal{H}_{P}(\r_{M}).
\] Then under these identifications, we have \[ R_{\widetilde{P} | \theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) = \bigoplus_{\r_{M} \subseteq \text{Res} \tilde{\pi}_{M}} <u, \r_{M}^{+}> R_{P|\theta P}(\theta_{u}, \r_{M}^{+}, \phi). \] Therefore, if $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$ is supported on $\lif{Z}_{F}G(F)$ and $f$ is the restriction of $\tilde{f}$ to $G(F)$, then \[ \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi}, u) = f_{G^{\theta}}(\phi, u). \] Suppose $s$ is a semisimple element in $\cS{\phi}^{\theta}$, and $(G', \phi') \longrightarrow (\phi, s)$. For any lift $\cPkt{\tilde{\phi}'}$, let us define \[ \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi}, s) = \tilde{f}^{\widetilde{G}'}(\tilde{\phi}'), \,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}). \] Now we can state the $(\theta, \omega)$-twisted intertwining relation. \begin{theorem}\label{thm: theta twisted intertwining relation} Suppose $\phi \in \cPbd{G}$. For semisimple $s \in \cS{\phi}^{\theta}$ with $(G', \phi') \longrightarrow (\phi, s)$, the following identity holds for some lifts $\cPkt{\tilde{\phi}}$ and $\cPkt{\tilde{\phi}'}$: \begin{align}\label{eq: theta twisted intertwining relation} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi}, u) = \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi}, s), \,\,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}), \end{align} where $u \in \N{\phi}^{\theta}(G, M)$ and $s \in \cS{\phi}^{\theta}$ have the same image in $\S{\phi}^{\theta}$. \end{theorem} Finally, it is easy to see that Lemma~\ref{lemma: twisted intertwining relation 1} and Lemma~\ref{lemma: induced twisted character} can be extended to this case too. Moreover, the discussion of this section can be carried out for $\phi \in \cuP{G}$ as well, and the corresponding twisted intertwining relation will follow from the tempered case.
\section{Stable Trace Formula and Multiplicity Formula} \label{sec: multiplicity formula} \subsection{General setup and stable trace formula} \label{subsec: stable trace formula} In this section, we will set up the means to prove the main local theorem (Theorem~\ref{thm: refined L-packet}). The method is global, and we are going to use various types of stabilized trace formulas. To be more precise, it is the discrete part of the trace formula and its stabilized form that we are going to use. A detailed discussion of this can be found in (\cite{Arthur:2013}, Chapter 3). The stabilization of the ordinary trace formula has been established by Arthur in \cite{Arthur:2001} \cite{Arthur:2002} \cite{Arthur:2003}, and it also rests upon Ngo's proof \cite{Ngo:2010} of the Fundamental Lemma. In the twisted case, the stabilization results from the long project of Moeglin and Waldspurger \cite{MW:2016}. Let us assume $F$ is global, and let $G$, $\widetilde{G}$, $D$ and $\c$ be defined as in Section~\ref{subsubsec: notations}. Let $\theta$ be an automorphism of $\widetilde{G}$ preserving an $F$-splitting, and assume $\c$ is $\theta$-invariant. Let $\lif{Z}_{\mathbb{A}_{F}} = \prod'_{v} \lif{Z}_{F_{v}}$ be a closed subgroup of $Z_{\widetilde{G}}(\mathbb{A}_{F})$, such that the $\lif{Z}_{F_{v}}$ satisfy the conditions in Section~\ref{subsec: the transfer map}. We also require that $\lif{Z}_{\mathbb{A}_{F}} Z_{G}(\mathbb{A}_{F}) = Z_{\widetilde{G}}(\mathbb{A}_{F})$ and that $\lif{Z}_{\mathbb{A}_{F}} Z_{\widetilde{G}}(F)$ be mapped to a closed and cocompact subgroup of $Z_{\widetilde{G}}(\mathbb{A}_{F})_{\theta}$. Let $\lif{Z}_{F} = \lif{Z}_{\mathbb{A}_{F}} \cap Z_{\widetilde{G}}(F)$, and let $\lif{\chi}$ be a character of $\lif{Z}_{\mathbb{A}_{F}} / \lif{Z}_{F}$. Let $Z_{{\mathbb{A}_{F}}} = \lif{Z}_{\mathbb{A}_{F}} \cap Z_{G}(\mathbb{A}_{F})$ and $Z_{F} = \lif{Z}_{F} \cap Z_{G}(F)$. We denote the restriction of $\lif{\chi}$ to $Z_{\mathbb{A}_{F}}$ by $\chi$.
First we consider the discrete part of the $\theta$-twisted trace formula for $G$. For any nonnegative real number $t$ and $f \in \H(G, \chi)$, it is a distribution defined as follows: \begin{align}\label{eq: spectral side} \Idt{G^{\theta}}{, t}(f) = \sum_{\{ M \}} |W(M)|^{-1} \sum_{w \in W^{\theta}(M)_{reg}} |\det(w-1)_{\mathfrak{a}^{G^{\theta}}_{M}}|^{-1} tr(M_{P|\theta P, t}(w, \chi)I^{\theta}_{P, t}(\chi, f)). \end{align} Some explanations of this formula are in order. The outer sum is taken over $G(F)$-conjugacy classes of Levi subgroups $M$ of $G$, and the inner sum is taken over elements $w$ in the Weyl set \[ W^{\theta}(M) = \text{Norm}(A_{M}, G \rtimes \theta) / M \] such that $\det(w-1)_{\mathfrak{a}^{G^{\theta}}_{M}} \neq 0$, where $\mathfrak{a}^{G^{\theta}}_{M}$ is the kernel of the canonical projection \[ \mathfrak{a}_{M} \rightarrow \mathfrak{a}_{G} \rightarrow \mathfrak{a}_{G^{\theta}}, \] and \[ \mathfrak{a}_{G^{\theta}} = \mathfrak{a}_{G} / \{ X - \theta(X): X \in \mathfrak{a}_{G}\}. \] For any Levi subgroup $M$ of $G$, we take the direct sum of the spaces \[ L^{2}_{disc, t}(M(F) \backslash M(\mathbb{A}_{F}), \zeta_{M}) \subseteq L^{2}_{disc}(M(F) \backslash M(\mathbb{A}_{F}), \zeta_{M}) \] over central characters $\zeta_{M}$ that extend $\chi$ and are invariant under some element of $W^{\theta}(M)_{reg}$, where the subscript $t$ indicates that the archimedean infinitesimal characters of the irreducible constituents have imaginary parts of norm $t$; then \[ I_{P, t}(\chi, f) = \int_{Z_{\mathbb{A}_{F}} \backslash G(\mathbb{A}_{F})} f(g) I_{P, t}(\chi, g) dg \] defines an operator on the space $\H_{P, t}(\chi)$ of the corresponding normalized induced representations. The operator $I^{\theta}_{P, t}(\chi, f)$ is the composition $R(\theta)^{-1} \circ I_{P, t}(\chi, f)$, and $M_{P|\theta P, t}(w, \chi)$ is the standard intertwining operator between $\H_{\theta P, t}(\chi)$ and $\H_{P, t}(\chi)$.
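Note that since $\theta$ preserves a splitting, its induced action on $\mathfrak{a}_{G}$ has finite order, say $n$, and the coinvariant space $\mathfrak{a}_{G^{\theta}}$ above may be identified with the invariant subspace $\mathfrak{a}_{G}^{\theta}$ by averaging:
\[
\mathfrak{a}_{G^{\theta}} \stackrel{\sim}{\longrightarrow} \mathfrak{a}_{G}^{\theta}, \,\,\,\,\,\, X + \{ Y - \theta(Y) : Y \in \mathfrak{a}_{G} \} \longmapsto \frac{1}{n} \sum_{i=0}^{n-1} \theta^{i}(X).
\]
Under this identification, the regularity condition defining $W^{\theta}(M)_{reg}$ says precisely that $w - 1$ is invertible on $\mathfrak{a}^{G^{\theta}}_{M}$.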
Note that when $\theta = id$, the term for $M = G$ in \eqref{eq: spectral side} is given by the trace of $R^{G}_{disc, t}(f) := I_{G, t}(\chi, f)$ on the corresponding part of the discrete spectrum of $G$. The discrete part of the trace formula \eqref{eq: spectral side} can be stabilized, and we get the following formula: \begin{align}\label{eq: endoscopic side} \Idt{G^{\theta}}{, t}(f) = \sum_{G' \in \End{ell}{G^{\theta}}} \iota(G, G') \Sdt{G'}{, t}(f^{G'}). \end{align} Here the $\Sdt{G'}{, t}(f')$ are stable distributions on $G'$, and they are defined by induction from the stabilized formula for $\Idt{G'}{, t}(f')$. If we denote the image of $Z_{\mathbb{A}_{F}}$ under the inclusion $(Z_{G})_{\theta} \rightarrow Z_{G'}$ by $Z'_{\mathbb{A}_{F}}$ and let $Z'_{F} = Z'_{\mathbb{A}_{F}} \cap Z_{G'}(F)$, then $\Idt{G'}{, t}(f')$ is defined with respect to a character $\chi'$ of $Z'_{\mathbb{A}_{F}}/Z'_{F}$ determined by $\chi$ and the twisted endoscopic embedding $\L{G}' \rightarrow \L{G}$, and $f' \in \H(G', \chi')$. The coefficients $\iota(G, G')$ are given by Kottwitz's formula \begin{align}\label{formula: endoscopic coefficient} \iota(G, G')= \frac{|\pi_{0}(Z(\D{G})^{\Gal{}})|}{|\pi_{0}(Z(\D{G'})^{\Gal{}})|} \cdot \frac{|\ker^{1}(F, Z(\D{G'}))|}{|\ker^{1}(F, Z(\D{G}))|} \cdot |\pi_{0}(\text{Aut}_{G}(G'))|^{-1} \cdot |\kappa_{G^{\theta}} / \kappa_{G^{\theta}} \cap Z(\D{G}')|^{-1}, \end{align} where $\kappa_{G^{\theta}} = A_{\D{G}}^{\D{\theta}}$. When $\theta = id$, the factor $|\kappa_{G^{\theta}} / \kappa_{G^{\theta}} \cap Z(\D{G}')|^{-1}$ equals $1$. We can also write down the same trace formulas for $\widetilde{G}$, but in this case we also need to consider the $\omega$-twisted versions of these trace formulas.
Let $\omega$ be a character of $\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(F)$ and let $\tilde{f} \in \H(\widetilde{G}, \lif{\chi})$. The discrete part of the $(\theta, \omega)$-twisted trace formula for $\widetilde{G}$ takes the form \begin{align}\label{eq: twisted spectral side} \tIdt{\widetilde{G}^{\theta}}{, t}(\tilde{f}) = \sum_{ \{ \widetilde{M} \} } |W(\widetilde{M})|^{-1} \sum_{w \in W^{\theta}(\widetilde{M})_{reg}} |\det(w-1)_{\mathfrak{a}^{\widetilde{G}^{\theta}}_{\widetilde{M}}}|^{-1} tr(M_{\widetilde{P}|\theta \widetilde{P}, t}(w, \lif{\chi}) I^{\theta, \omega}_{\widetilde{P}, t}(\lif{\chi}, \tilde{f})), \end{align} where the operator $I^{\theta, \omega}_{\widetilde{P}, t}(\lif{\chi}, \tilde{f})$ is the composition $R(\theta)^{-1} \circ R(\omega) \circ I_{\widetilde{P}, t}(\lif{\chi}, \tilde{f})$. For the term corresponding to $\widetilde{M} = \widetilde{G}$, we let $R^{\widetilde{G}}_{disc, t}(\tilde{f}) := I_{\widetilde{G}, t}(\lif{\chi}, \tilde{f})$ and denote $R(\theta)^{-1} \circ R(\omega) \circ R^{\widetilde{G}}_{disc, t}(\tilde{f})$ by $R^{(\widetilde{G}^{\theta}, \omega)}_{disc, t}(\tilde{f})$. After stabilization, \eqref{eq: twisted spectral side} becomes \begin{align}\label{eq: twisted endoscopic side} \tIdt{\widetilde{G}^{\theta}}{, t}(\tilde{f}) = \sum_{\widetilde{G}' \in \tEnd{ell}{\widetilde{G}^{\theta}}} \iota(\widetilde{G}, \widetilde{G}') \Sdt{\widetilde{G}'}{, t}(\tilde{f}^{\widetilde{G}'}), \end{align} where the coefficients $\iota(\widetilde{G}, \widetilde{G}')$ are given by the same kind of formula as in~\eqref{formula: endoscopic coefficient}. We denote by $\mathcal{A}(\widetilde{G})$ (resp. $\mathcal{A}_{2}(\widetilde{G})$) the set of (resp.
discrete) automorphic representations of $\widetilde{G}$, and by $\mathcal{C}_{aut}(\widetilde{G})$ the set of families of Satake parameters of automorphic representations of $\widetilde{G}$, modulo the equivalence relation that $c = c'$ in $\mathcal{C}_{aut}(\widetilde{G})$ if and only if $c_{v} = c'_{v}$ for almost all places $v$. More generally, we extend this notion to admissible representations of $\widetilde{G}(\mathbb{A}_{F})$, and we denote the corresponding set by $\mathcal{C}_{\mathbb{A}}(\widetilde{G})$. For $\lif{c} \in \mathcal{C}_{\mathbb{A}}(\widetilde{G})$ and its projection $c$ on $\L{G}$, we write $\Idt{G^{\theta}}{, t, c}(f)$ (resp. $\tIdt{\widetilde{G}^{\theta}}{, t, \lif{c}}(\tilde{f})$ and $R^{(\widetilde{G}^{\theta}, \omega)}_{disc, t, \lif{c}}(\tilde{f})$) for the part of $\Idt{G^{\theta}}{, t}(f)$ (resp. $\tIdt{\widetilde{G}^{\theta}}{, t}(\tilde{f})$ and $R^{(\widetilde{G}^{\theta}, \omega)}_{disc, t}(\tilde{f})$) contributed by automorphic representations $\r$ (resp. $\tilde{\pi}$) satisfying $c(\r) = c$ (resp. $c(\tilde{\pi}) = \lif{c}$). Then $\Sdt{G}{, t, c}(f)$ can be defined inductively using \eqref{eq: endoscopic side} for $\theta = id$. To be more precise, let \[ \Sdt{G'}{, t, c}(f') = \sum_{c' \rightarrow c} \Sdt{G'}{, t, c'}(f'), \] where the sum is over the preimages $c'$ of $c$ in $\mathcal{C}_{\mathbb{A}}(G')$ under the twisted endoscopic embedding $\L{G}' \rightarrow \L{G}$. Then we define \begin{align*} \Sdt{G}{, t, c}(f) = \Idt{G}{, t, c}(f) - \sum_{G' \in \End{ell}{G} - \{G\}} \iota(G, G') \Sdt{G'}{, t, c}(f^{G'}). \end{align*} Similarly, we can define $\Sdt{\widetilde{G}}{, t, \lif{c}}(\tilde{f})$. The next lemma shows that $\Sdt{G}{, t, c}(f)$ (resp. $\Sdt{\widetilde{G}}{, t, \lif{c}}(\tilde{f})$) is stable, and that we get a decomposition of \eqref{eq: endoscopic side} (resp. \eqref{eq: twisted endoscopic side}).
\begin{lemma}\label{lemma: trace formula component} \begin{enumerate} \item $\Sdt{G}{, t, c}(f)$ (resp. $\Sdt{\widetilde{G}}{, t, \lif{c}}(\tilde{f})$) is stable. \item The stabilization of the twisted trace formula \eqref{eq: endoscopic side} (resp. \eqref{eq: twisted endoscopic side}) can be decomposed according to $c \in \mathcal{C}_{\mathbb{A}}(G)$ (resp. $\lif{c} \in \mathcal{C}_{\mathbb{A}}(\widetilde{G})$), i.e., \[ \Idt{G^{\theta}}{, t, c}(f) = \sum_{G' \in \End{ell}{G^{\theta}}} \iota(G, G') \Sdt{G'}{, t, c}(f^{G'}), \] resp. \begin{align}\label{eq: trace formula component} \tIdt{\widetilde{G}^{\theta}}{, t, \lif{c}}(\tilde{f}) = \sum_{\widetilde{G}' \in \tEnd{ell}{\widetilde{G}^{\theta}}} \iota(\widetilde{G}, \widetilde{G}') \Sdt{\widetilde{G}'}{, t, \lif{c}}(\tilde{f}^{\widetilde{G}'}). \end{align} \end{enumerate} \end{lemma} The lemma is an application of the theory of multipliers, and the proof is the same as in (\cite{Arthur:2013}, Lemma 3.3.1). As in the local case, where we studied the relation between representations of $G$ and of $\widetilde{G}$, here we want to discuss the relation of $\Idt{G}{, t, c}(f)$ (resp. $\Sdt{G}{, t, c}(f)$) with $\Idt{\widetilde{G}}{, t, \lif{c}}(\tilde{f})$ (resp. $\Sdt{\widetilde{G}}{, t, \lif{c}}(\tilde{f})$). The next lemma is the first step in studying this relation. \begin{lemma}\label{lemma: Hecke eigenvalue correspondence} Suppose $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$ and $\r$ is an irreducible constituent of the restriction of $\tilde{\pi}$ to $G(\mathbb{A}_{F})$. Then the set of Satake parameters $c(\tilde{\pi})$ is mapped to $c(\r)$ under the projection $p : \L{\widetilde{G}} \longrightarrow \L{G}$.
\end{lemma} \begin{proof} This lemma is essentially local, and it suffices to show that for any place $v$ of $F$, if both $\r_{v}$ and $\tilde{\pi}_{v}$ are unramified and $\r_{v}$ is contained in the restriction of $\tilde{\pi}_{v}$ to $G(F_{v})$, then $c(\tilde{\pi}_{v})$ is mapped to $c(\r_{v})$. If $\widetilde{G}_{v}$ is a torus, this follows from the Langlands correspondence for tori. In general, $\tilde{\pi}_{v}$ is an irreducible constituent of \( \mathcal{I}_{\lif{B}_{v}} (\lif{\chi}_{v}) \) for some unramified character $\lif{\chi}_{v}$ of the maximal torus $\lif{T}_{v}$ with Borel subgroup $\lif{B}_{v} \supseteq \lif{T}_{v}$, and one has $c(\tilde{\pi}_{v}) = c(\lif{\chi}_{v})$. Since \[ \text{Res}^{\widetilde{G}_{v}}_{G_{v}} \, \, \mathcal{I}_{\lif{B}_{v}} \, (\lif{\chi}_{v}) \cong \mathcal{I}_{B_{v}} \, ( \text{Res}^{\lif{T}_{v}}_{T_{v}} \, \lif{\chi}_{v}), \] we have $c(\r_{v}) = c(\lif{\chi}_{v}|_{T_{v}})$. So again by the Langlands correspondence for tori, $c(\tilde{\pi}_{v})$ is mapped to $c(\r_{v})$. \end{proof} Now we assume $\widetilde{G}$ is of type \eqref{eq: similitude}. By Corollary~\ref{cor: relative Hasse principle}, $\c(Z_{\widetilde{G}}(\mathbb{A}_{F})) \cap D(F) = \c(Z_{\widetilde{G}}(F))$. So we have $\c(Z_{\widetilde{G}}(\mathbb{A}_{F})) \cap \c(\widetilde{G}(F)) = \c(Z_{\widetilde{G}}(F))$, which is equivalent to \[ G(\mathbb{A}_{F}) \cap \widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F}) = G(F) Z_{G}(\mathbb{A}_{F}). \] Therefore, \[ G(F)Z_{G}(\mathbb{A}_{F}) \backslash G(\mathbb{A}_{F}) = \widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F}) \backslash \widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F}) G(\mathbb{A}_{F}).
\] Let $\lif{\zeta}$ be a character of $Z_{\widetilde{G}}(\mathbb{A}_{F})/Z_{\widetilde{G}}(F)$ and let $\zeta$ be the restriction of $\lif{\zeta}$ to $Z_{G}(\mathbb{A}_{F})$; then we have \[ L^{2}_{disc}( G(F) \backslash G(\mathbb{A}_{F}), \zeta) = L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F}) G(\mathbb{A}_{F}) , \lif{\zeta}). \] Note that right multiplication by $\widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F}) G(\mathbb{A}_{F})$ on the right-hand side induces an action on the left-hand side. In fact, the action of $\widetilde{G}(F)$ on the left-hand side is given by conjugation on $G(F) \backslash G(\mathbb{A}_{F})$, and the action of $Z_{\widetilde{G}}(\mathbb{A}_{F})$ is through the central character $\lif{\zeta}$. The following lemma shows that the $L^{2}$-discrete spectrum of $\widetilde{G}(\mathbb{A}_{F})$ is essentially induced from the $L^{2}$-discrete spectrum of $G(\mathbb{A}_{F})$. \begin{lemma}\label{lemma: induced discrete spectrum} $\text{Ind}^{\, \widetilde{G}(\mathbb{A}_{F})}_{\, \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} L^{2}_{disc}( G(F) \backslash G(\mathbb{A}_{F}), \zeta) \cong L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta})$. \end{lemma} \begin{proof} First of all, there is a natural $\widetilde{G}(\mathbb{A}_{F})$-equivariant isomorphism \[ \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F}) \backslash \widetilde{G}(\mathbb{A}_{F}) \cong G(F)Z_{G}(\mathbb{A}_{F}) \backslash G(\mathbb{A}_{F}) \times_{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} \widetilde{G}(\mathbb{A}_{F}). \] Here we can view \( \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \backslash \widetilde{G}(\mathbb{A}_{F}) \) as a closed subgroup of $\c(Z_{\widetilde{G}}(\mathbb{A}_{F}))D(F) \backslash D(\mathbb{A}_{F})$.
Since \[ \c(Z_{\widetilde{G}}(\mathbb{A}_{F}))D(F) \backslash D(\mathbb{A}_{F}) \] is compact (see \cite{Neukirch:1999}, Theorem 6.1.6), $\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \backslash \widetilde{G}(\mathbb{A}_{F})$ is also compact. So one can define an inner product on the space of $\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$-equivariant continuous functions from $\widetilde{G}(\mathbb{A}_{F})$ to $L^{2}( G(F) \backslash G(\mathbb{A}_{F}), \zeta )$, i.e., \[ [L^{2}( G(F) \backslash G(\mathbb{A}_{F}), \zeta ) \otimes C (\widetilde{G}(\mathbb{A}_{F}))]^{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} \] by integrating over $\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \backslash \widetilde{G}(\mathbb{A}_{F})$. Moreover, one can normalize its Haar measure such that \[ L^{2}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) \cong \text{ completion of }[L^{2}( G(F) \backslash G(\mathbb{A}_{F}), \zeta ) \otimes C (\widetilde{G}(\mathbb{A}_{F}))]^{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})}, \] which is compatible with the $\widetilde{G}(\mathbb{A}_{F})$-action. Note that the right hand side is nothing but \[ \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} L^{2}( G(F) \backslash G(\mathbb{A}_{F}), \zeta). \] Finally, since $\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \backslash \widetilde{G}(\mathbb{A}_{F})$ is compact, one must have \[ \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} L^{2}_{disc}( G(F) \backslash G(\mathbb{A}_{F}), \zeta) \cong L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}).
\] \end{proof} Let $X$ be the set of characters of $\widetilde{G}(\mathbb{A}_{F}) / Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$, and let $Y$ be the set of characters of $\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F) Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$. If $\r$ is an irreducible admissible representation of $G(\mathbb{A}_{F})$, and $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$, let us define \begin{align*} \widetilde{G}(\r) &= \{g \in \widetilde{G}(\mathbb{A}_{F}): \r^{g} \cong \r\} \\ X(\tilde{\pi}) &= \{ \omega \in X: \tilde{\pi} \cong \tilde{\pi} \otimes \omega \} \\ Y(\tilde{\pi}) &= Y \cap X(\tilde{\pi}). \end{align*} By \cite[Lemma 4.11]{HiragaSaito:2012}, we know $Y(\tilde{\pi}) = (\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(\r)\widetilde{G}(F))^{*}$ is finite. The following lemma is inspired by \cite[Lemma 6.2]{LabesseLanglands:1979}. \begin{lemma \label{lemma: multiplicity relation} Suppose $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$, and $\r$ is an irreducible constituent of $\tilde{\pi}$ restricted to $G(\mathbb{A}_{F})$. Then the multiplicities of $\tilde{\pi}$ and $\r$ in the discrete spectrum are related by the following formula \begin{align \label{eq: multiplicity relation} \sum_{\omega \in X/ YX(\tilde{\pi})} m(\tilde{\pi} \otimes \omega) = \sum_{g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(\r)\widetilde{G}(F)} m(\r^{g}). \end{align} \end{lemma} \begin{proof} By Lemma~\ref{lemma: induced discrete spectrum}, \begin{align \label{eq: multiplicity relation 0} L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) \cong \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} L^{2}_{disc}( G(F) \backslash G(\mathbb{A}_{F}), \zeta) \end{align} and we would like to expand the right hand side. 
First, we need to decompose $L^{2}_{disc}( G(F) \backslash G(\mathbb{A}_{F}), \zeta)$ as a representation of $\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$. Recall $Z_{\widetilde{G}}(\mathbb{A}_{F})$ acts through $\lif{\zeta}$ and $\widetilde{G}(F)$ acts by conjugation on $G(F) \backslash G(\mathbb{A}_{F})$. Let $\r$ be any constituent in $L^{2}_{disc}(G(F) \backslash G(\mathbb{A}_{F}), \zeta )$ and \[ G_{1}(\r) = \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \cap \widetilde{G}(\r). \] Then $G_{1}(\r)$ will act on the $\r$-isotypic component $I(\r)$ and we get \[ I(\r) = \bigoplus^{m(\r)}_{\omega_{1}} \r_{1} \otimes \omega_{1}, \] where $\r_{1}$ is an extension of $\r$ to $G_{1}(\r)$ and the sum is over $m(\r)$ characters $\omega_{1}$ of $G_{1}(\r) / Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$, which depend on the extension $\r_{1}$ and can have multiplicities. Since $m(\r) = m(\r^{g})$ for $g \in \widetilde{G}(F)$, we have the following decomposition \[ L^{2}_{disc}( G(F) \backslash G(\mathbb{A}_{F}), \zeta) = \bigoplus_{ \{ \r \} } \text{Ind}^{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})}_{G_{1}(\r)} (\bigoplus^{m(\r)}_{\omega_{1}} \r_{1} \otimes \omega_{1}), \] where the outer sum is taken over equivalence classes $\{ \r \}$ of constituents in $L^{2}_{disc}(G(F) \backslash G(\mathbb{A}_{F}), \zeta )$ under the action by $\widetilde{G}(F)$. 
Substituting this expression into \eqref{eq: multiplicity relation 0}, we get \begin{align*} L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) & \cong \bigoplus_{ \{ \r \} } \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} \text{Ind}^{\widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})}_{G_{1}(\r)} (\bigoplus^{m(\r)}_{\omega_{1}} \r_{1} \otimes \omega_{1}) \\ & \cong \bigoplus_{ \{ \r \} } \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{G_{1}(\r)} (\bigoplus^{m(\r)}_{\omega_{1}} \r_{1} \otimes \omega_{1}). \end{align*} Moreover, \begin{align*} L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) & \cong \bigoplus_{ \{ \r \} } \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \text{Ind}^{\widetilde{G}(\r)}_{G_{1}(\r)} (\bigoplus^{m(\r)}_{\omega_{1}} \r_{1} \otimes \omega_{1}) \\ & \cong \bigoplus_{ \{ \r \} } \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \bigoplus^{m(\r)}_{\omega_{1}} ( \bigoplus_{\omega \in (\widetilde{G}(\r) / G_{1}(\r))^{*}} \tilde{\pi}_{1} \otimes \omega \,\,\,\,) \otimes \omega_{1} \\ & \cong \bigoplus_{ \{ \r \} } \bigoplus^{m(\r)}_{\omega_{1}} \bigoplus_{\omega \in (\widetilde{G}(\r) / G_{1}(\r))^{*}} \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \tilde{\pi}_{1} \otimes \omega \otimes \omega_{1}, \end{align*} where $\tilde{\pi}_{1}$ is an extension of $\r_{1}$ to $\widetilde{G}(\r)$ and $\omega_{1}$ is extended to $\widetilde{G}(\r)$. Suppose $\r' = \r^{g}$ for some $g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)\widetilde{G}(\r)$; then we have \begin{align*} G_{1}(\r') = G_{1}(\r^{g}) = \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \cap \widetilde{G}(\r^{g}) = \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \cap \widetilde{G}(\r)^{g}.
\end{align*} Since $\widetilde{G}(\r)^{g} \cong \widetilde{G}(\r)$, we get $G_{1}(\r') = G_{1}(\r)$. Hence $\r_{1}' \cong \r_{1}^{g} \otimes \omega_{g}$ for some character $\omega_{g}$ of $G_{1}(\r) / Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$. Similarly one can show $\tilde{\pi}_{1}' \cong \tilde{\pi}_{1}^{g} \otimes \omega_{g}$ for some extension of $\omega_{g}$ to $\widetilde{G}(\r)$. So \[ \text{Ind}^{\widetilde{G}(\r)}_{G_{1}(\r)} \r_{1}' \cong \bigoplus_{\omega \in (\widetilde{G}(\r) / G_{1}(\r))^{*}} \tilde{\pi}_{1}^{g} \otimes \omega \otimes \omega_{g}, \] and \begin{align*} \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \text{Ind}^{\widetilde{G}(\r)}_{G_{1}(\r)} \r_{1}' & \cong \bigoplus_{\omega \in (\widetilde{G}(\r) / G_{1}(\r))^{*}} \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \tilde{\pi}_{1}^{g} \otimes \omega \otimes \omega_{g} \\ & \cong \bigoplus_{\omega \in (\widetilde{G}(\r) / G_{1}(\r))^{*}} \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \tilde{\pi}_{1} \otimes \omega \otimes \omega_{g}. \end{align*} Therefore \[ L^{2}_{disc}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) \cong \bigoplus_{ \{ \r \}^{\sim} } \bigoplus_{g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)\widetilde{G}(\r)} \bigoplus^{m(\r^{g})}_{\omega_{1}} \bigoplus_{\omega \in (\widetilde{G}(\r) / G_{1}(\r))^{*}} \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \tilde{\pi}_{1} \otimes \omega \otimes \omega_{1} \otimes \omega_{g}, \] where the outer sum is taken over equivalence classes $\{ \r \}^{\sim}$ of constituents in $L^{2}(G(F) \backslash G(\mathbb{A}_{F}), \zeta )$ under the action by $\widetilde{G}(\mathbb{A}_{F})$. Note that the characters $\omega_{1}$ in this formula depend on $\r^{g}$.
By our definition of $G_{1}(\r)$, the characters of $\widetilde{G}(\r) / G_{1}(\r)$ can be extended to that of $\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})$. If we let \[ \tilde{\pi} = \text{Ind}^{\widetilde{G}(\mathbb{A}_{F})}_{\widetilde{G}(\r)} \tilde{\pi}_{1}, \] then from the above formula one easily sees that \[ \sum_{\omega \in X/ YX(\tilde{\pi})} m(\tilde{\pi} \otimes \omega) = \sum_{g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(\r)\widetilde{G}(F)} m(\r^{g}). \] \end{proof} \subsection{Multiplicity formula \label{subsec: multiplicity formula} It is natural to apply Arthur's multiplicity formula (cf. Theorem~\ref{thm: discrete spectrum}) to the right hand side of \eqref{eq: multiplicity relation} for those representations parametrized by $\phi \in \cPdt{G}$. But we cannot apply that formula directly, since it only gives the multiplicity of $\r$ as an $\bar{\mathcal{H}}(G)$-module for $[\r] \in \cPkt{\phi}$. So let us define \[ \bar{m}(\r) = \sum_{\r' \sim \r} m(\r'), \] where $\r' \cong \r$ as $\bar{\mathcal{H}}(G)$-modules. Then the multiplicity formula for $[\r] \in \cPkt{\phi}$ asserts that \begin{align \label{eq: multiplicity formula for classical group} \bar{m}(\r) = m_{\phi} |\S{\phi}|^{-1} \sum_{x \in \S{\phi}} <x , \r> , \end{align} where $m_{\phi}$ is defined in Theorem~\ref{thm: discrete spectrum} and Remark~\ref{rk: discrete spectrum}. For any irreducible admissible representation $\tilde{\pi}$ of $\widetilde{G}(\mathbb{A}_{F})$, whose restriction to $G(\mathbb{A}_{F})$ contains $\r$, let us also write \[ \bar{m}(\tilde{\pi}) = \sum_{\{\tilde{\pi}' \sim_{X} \tilde{\pi}\} / X} \quad \sum_{\omega \in X/ YX(\tilde{\pi}')} m(\tilde{\pi}' \otimes \omega), \] where $\tilde{\pi}' \cong \tilde{\pi} \otimes \omega'$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules for some $\omega' \in X$, and we take such $\tilde{\pi}'$ modulo twists by $X$ in the sum.
Then we can rewrite the formula \eqref{eq: multiplicity relation} as \begin{align \label{eq: multiplicity relation 1} \bar{m}(\tilde{\pi}) = \sum_{g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(\r)\widetilde{G}(F)} \bar{m}(\r^{g}). \end{align} Now we can apply Arthur's multiplicity formula \eqref{eq: multiplicity formula for classical group} to the right hand side of \eqref{eq: multiplicity relation 1} to get the following result. \begin{lemma \label{lemma: coarse multiplicity formula} Suppose $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$, and $\r$ is an irreducible constituent of $\tilde{\pi}$ restricted to $G(\mathbb{A}_{F})$. If $[\r] \in \cPkt{\phi}$ for $\phi \in \cPdt{G}$, then \begin{align} \label{eq: coarse multiplicity formula} \bar{m}(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} \cdot |\S{\tilde{\phi}}|^{-1} \sum_{x \in \S{\tilde{\phi}}}<x , \r>. \end{align} \end{lemma} \begin{proof} First we want to rewrite the right hand side of \eqref{eq: multiplicity relation 1} as an integral over $ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) $. Consider the integral \begin{align*} \int_{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) } \bar{m}(\r^{g}) \, \mathrm{d} g & = \sum_{g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)\widetilde{G}(\r)} \int_{\widetilde{G}(\r) / \widetilde{G}(\r) \cap \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F})} \bar{m}(\r^{hg}) \, \mathrm{d} h \\ & = \sum_{g \in \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)\widetilde{G}(\r)} \bar{m}(\r^{g}) \cdot \, \mathrm{vol}\{ \widetilde{G}(\r) / \widetilde{G}(\r) \cap \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \}. 
\end{align*} Since \begin{align*} \mathrm{vol}\{ \widetilde{G}(\r) / \widetilde{G}(\r) \cap \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \} = \frac{\mathrm{vol}\{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \}}{ |\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)\widetilde{G}(\r)|} \end{align*} and \[ |\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)\widetilde{G}(\r)| = |Y(\tilde{\pi})|, \] we obtain \begin{align} \bar{m}(\tilde{\pi}) = \frac{|Y(\tilde{\pi})|}{\mathrm{vol}\{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \}} \cdot \int_{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) } \bar{m}(\r^{g}) \, \mathrm{d} g. \label{eq: coarse multiplicity formula 1} \end{align} Combining the multiplicity formula \eqref{eq: multiplicity formula for classical group} with our local formula \eqref{eq: theta twisting character}, we can compute the integral on the right hand side of \eqref{eq: coarse multiplicity formula 1} as follows: \begin{align*} & \int_{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) } \bar{m}(\r^{g}) \, \mathrm{d} g = \int_{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) } m_{\phi} |\S{\phi}|^{-1} \sum_{x \in \S{\phi}} <x , \r^{g}> \, \mathrm{d} g \\ & = m_{\phi} |\S{\phi}|^{-1} \sum_{x \in \S{\phi}} <x , \r> \cdot \int_{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) } \omega_{x}(g) \, \mathrm{d} g \\ & = m_{\phi} |\S{\phi}|^{-1} \sum_{x \in \S{\tilde{\phi}}} <x, \r> \cdot \mathrm{vol}\{ \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \} \\ & = m_{\phi} |\S{\phi} / \S{\tilde{\phi}}|^{-1} |\S{\tilde{\phi}}|^{-1} \sum_{x \in \S{\tilde{\phi}}} <x , \r> \cdot \mathrm{vol}\{
\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)Z_{\widetilde{G}}(\mathbb{A}_{F})G(\mathbb{A}_{F}) \}. \\ \end{align*} Substituting this into \eqref{eq: coarse multiplicity formula 1}, one gets \[ \bar{m}(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} \cdot |\S{\tilde{\phi}}|^{-1} \sum_{x \in \S{\tilde{\phi}}} <x , \r>. \] \end{proof} Although this lemma does not give a multiplicity formula for $\widetilde{G}$, it has a very interesting consequence. \begin{corollary \label{cor: coarse multiplicity formula} Suppose $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$, and $\r$ is an irreducible constituent of $\tilde{\pi}$ restricted to $G(\mathbb{A}_{F})$. If $[\r] \in \cPkt{\phi}$ for $\phi \in \cPdt{G}$, then there exists $\omega \in X$ such that $\tilde{\pi} \otimes \omega$ is isomorphic to a discrete automorphic representation as $\bar{\mathcal{H}}(\widetilde{G})$-module if and only if $<\cdot, \tilde{\pi}> = 1$. In particular, if $\S{\tilde{\phi}} = 1$, such a character always exists. \end{corollary} \begin{proof} Since $<x, \tilde{\pi}> = <x, \r>$ for $x \in \S{\tilde{\phi}}$, it follows from the formula \eqref{eq: coarse multiplicity formula} that \[ \bar{m}(\tilde{\pi}) = \begin{cases} m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} &\text{ if } <\cdot, \tilde{\pi}> = 1, \\ 0 &\text{ otherwise }. \end{cases} \] So the first part of this corollary is clear. Next if $\S{\tilde{\phi}} = 1$, then we always have \[ \bar{m}(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}, \] and this shows the second part. \end{proof} In fact, we can refine the result of Lemma~\ref{lemma: coarse multiplicity formula} to get a multiplicity formula for $\widetilde{G}$ by applying the stabilized twisted trace formulas.
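The dichotomy in this proof amounts to character orthogonality on a finite abelian group; we record the elementary computation explicitly (assuming, as is implicit above, that $x \mapsto <x , \r>$ defines a character of $\S{\tilde{\phi}}$):

```latex
% Orthogonality relation underlying the dichotomy above.
% Only the finiteness and commutativity of the group are used.
Since $x \mapsto <x , \r>$ is a character of the finite abelian group
$\S{\tilde{\phi}}$, we have
\[
  |\S{\tilde{\phi}}|^{-1} \sum_{x \in \S{\tilde{\phi}}} <x , \r>
  =
  \begin{cases}
    1 & \text{if } <\cdot , \r> = 1, \\
    0 & \text{otherwise},
  \end{cases}
\]
and substituting this into \eqref{eq: coarse multiplicity formula}
gives the case distinction for $\bar{m}(\tilde{\pi})$.
```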
First, we need to define an equivalence relation on $\mathcal{C}_{\mathbb{A}}(G)$ such that $c \sim c' \in \mathcal{C}_{\mathbb{A}}(G)$ if and only if $c_{v}$ is $\Sigma_{0}$-conjugate to $c'_{v}$ for almost all places, and we denote the set of equivalence classes by $\bar{\mathcal{C}}_{\mathbb{A}}(G)$. Let $\bar{\mathcal{C}}_{aut}(G)$ be the subset of $\bar{\mathcal{C}}_{\mathbb{A}}(G)$ consisting of equivalence classes of $\mathcal{C}_{aut}(G)$. \begin{lemma \label{lemma: vanishing} Suppose $\lif{c} \in \mathcal{C}_{\mathbb{A}}(\widetilde{G})$, then \[ \Idt{\widetilde{G}}{, t, \lif{c}} (\tilde{f}) = \Sdt{\widetilde{G}}{, t, \lif{c}} (\tilde{f} ) = 0 \] for $\tilde{f} \in \H(\widetilde{G}, \lif{\chi})$, unless the projection of $\lif{c}$ under $\bold{p}: \L{\widetilde{G}} \rightarrow \L{G}$ belongs to the set $\bar{\mathcal{C}}_{aut}(G)$. \end{lemma} \begin{proof} It follows from Lemma~\ref{lemma: Hecke eigenvalue correspondence} and Lemma~\ref{lemma: induced discrete spectrum} that \begin{align \label{eq: vanishing} tr\Rdt{\widetilde{G}}{, t, \lif{c}} (\tilde{f} )= 0, \end{align} unless $\lif{c}$ projects to $c \in \bar{\mathcal{C}}_{aut}(G)$. Suppose the projection of $\lif{c}$ in $\bar{\mathcal{C}}_{\mathbb{A}}(G)$ does not belong to $\bar{\mathcal{C}}_{aut}(G)$, then by the principle of functoriality (which results from Arthur's theory \cite{Arthur:2013}), it neither belongs to $\bar{\mathcal{C}}_{aut}(M)$ for any Levi subgroup $M$ of $G$, nor to $\bar{\mathcal{C}}_{aut}(G')$ for any endoscopic group $G'$ of $G$. Then for the same reason as \eqref{eq: vanishing}, one gets $tr\Rdt{\widetilde{M}}{, t, \lif{c}} (\tilde{f}_{M}) = 0$. So it follows from the definition (see \eqref{eq: spectral side}) that \[ \Idt{\widetilde{G}}{, t, \lif{c}} (\tilde{f}) = 0. 
\] Since \[ \Sdt{\widetilde{G}}{, t, \lif{c}} (\tilde{f}) = \Idt{\widetilde{G}}{, t, \lif{c}} (\tilde{f}) - ( \sum_{\widetilde{G}' \in \End{ell}{\widetilde{G}} - \{\widetilde{G}\}} \iota(\widetilde{G}, \widetilde{G}') \Sdt{\widetilde{G}'}{, t, \lif{c}}(\tilde{f}^{\widetilde{G}'}) \,\,\, ), \] we can assume $\Sdt{\widetilde{G}'}{, t, \lif{c}}(\tilde{f}^{\widetilde{G}'}) = 0$ by induction, and hence \[ \Sdt{\widetilde{G}}{, t, \lif{c}} (\tilde{f}) = 0. \] \end{proof} For $\phi \in \cP{G}$, Arthur (cf. \cite{Arthur:2013}, Section 3.3) defines the $\phi$-component of the discrete part of the twisted trace formula for $G$ and its stabilized form. Note that $c(\phi)$ defines an element in $\bar{\mathcal{C}}_{\mathbb{A}}(G)$, and $\phi$ also determines the norm of the imaginary part of the archimedean infinitesimal character, which can be denoted by $t(\phi)$, so we can write \[ \Idt{G^{\theta}}{, \phi}(f) = \sum_{c \rightarrow c(\phi)} \Idt{G^{\theta}}{, t(\phi), c}(f), \] and \[ \Sdt{G}{, \phi}(f) = \sum_{c \rightarrow c(\phi)} \Sdt{G}{, t(\phi), c}(f), \] where these sums are all over preimages $c$ of $c(\phi)$ in $\mathcal{C}_{\mathbb{A}}(G)$. Then the stabilization of the $\phi$-component of the twisted trace formula for $G$ is \begin{align*} \Idt{G^{\theta}}{, \phi}(f) = \sum_{G' \in \End{ell}{G^{\theta}}} \iota(G, G') \Sdt{G'}{, \phi}(f^{G'}), \end{align*} where \[ \Sdt{G'}{, \phi}(f^{G'}) = \sum_{c' \rightarrow c(\phi)} \Sdt{G'}{, t(\phi), c'}(f^{G'}). \] Here we want to define the $\phi$-component of the discrete part of the twisted trace formula for $\widetilde{G}$. Let us write \begin{align*} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) &= \sum_{\lif{c} \rightarrow c(\phi)} \tIdt{\widetilde{G}^{\theta}}{, t(\phi), \lif{c}}(\tilde{f}), \end{align*} and \[ \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) = \sum_{\lif{c} \rightarrow c(\phi)} \Sdt{\widetilde{G}}{, t(\phi), \lif{c}}(\tilde{f}).
\] Then the stabilization of the $\phi$-component of the twisted stable trace formula for $\widetilde{G}$ is \begin{align \label{eq: endoscopic side component} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = \sum_{\widetilde{G}' \in \End{ell}{\widetilde{G}^{\theta}, \omega}} \iota(\widetilde{G}, \widetilde{G}') \Sdt{\widetilde{G}'}{, \phi}(\tilde{f}^{\widetilde{G}'}), \end{align} where \[ \Sdt{\widetilde{G}'}{, \phi}(\tilde{f}^{\widetilde{G}'}) = \sum_{\lif{c}' \rightarrow c(\phi)} \Sdt{\widetilde{G}'}{, t(\phi), \lif{c}'}(\tilde{f}^{\widetilde{G}'}). \] Similarly, we can also define $R_{disc, \phi}^{(\widetilde{G}^{\theta}, \omega)}(\tilde{f})$. For $\phi \in \cPdt{G}$, it only contributes to the discrete spectrum of $G$ (cf. Remark~\ref{rk: discrete spectrum}), and by Lemma~\ref{lemma: induced discrete spectrum} it also only contributes to the discrete spectrum of $\widetilde{G}$. So we have \begin{align \label{eq: discrete part vs discrete spectrum} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = tr R_{disc, \phi}^{(\widetilde{G}^{\theta}, \omega)}(\tilde{f}). \end{align} Now we can give our multiplicity formula for $\widetilde{G}$, and we will start with the simplest case, i.e., $\widetilde{G} = GSp(2n)$ or $GSO(2n, \eta)$. \begin{proposition \label{prop: multiplicity formula} Suppose $\widetilde{G} = GSp(2n)$ or $GSO(2n, \eta)$, $\tilde{\pi}$ is a discrete automorphic representation of $\widetilde{G}$, and $\r$ is an irreducible constituent of $\tilde{\pi}$ restricted to $G(\mathbb{A}_{F})$. If $[\r] \in \cPkt{\phi}$ for $\phi \in \cPdt{G}$, then \begin{align \label{eq: multiplicity formula} m(\tilde{\pi}) = m_{\tilde{\phi}} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}, \end{align} where $m_{\tilde{\phi}} = 1 \text { or } 2$, and $m_{\tilde{\phi}} = 2$ only when $G$ is special even orthogonal, $\phi \notin \cP{\com{G}}$, and $\tilde{\pi} \cong \tilde{\pi}^{\theta_{0}} \otimes \omega$ for some $\omega \in Y$. 
\end{proposition} \begin{proof} Since $\tilde{\pi}$ is automorphic, we can take $\r$ to be automorphic as well by Lemma~\ref{lemma: induced discrete spectrum}, and hence $<\cdot, \r> = 1$. It follows from Lemma~\ref{lemma: coarse multiplicity formula} that \begin{align \label{eq: multiplicity formula 0} \bar{m}(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}. \end{align} Since $\theta_{0}$ acts on $\{\tilde{\pi}' \sim_{X} \tilde{\pi}\}$, we can write \[ \bar{m}_{0}(\tilde{\pi}) = \sum_{\{\tilde{\pi}' \sim_{X} \tilde{\pi}\} / X, \theta_{0}} \quad \sum_{\omega \in X/YX(\tilde{\pi}')} m(\tilde{\pi}' \otimes \omega), \] where the sum is taken modulo twists by $X$ and $\theta_{0}$. If $\r \cong \r^{\theta_{0}}$, then \[ \bar{m}_{0}(\tilde{\pi}) = \bar{m}(\tilde{\pi}) = \sum_{\omega \in X/YX(\tilde{\pi})} m(\tilde{\pi} \otimes \omega). \] If $\r \ncong \r^{\theta_{0}}$, then $\bar{m}_{0}(\tilde{\pi}) = \frac{1}{2} \bar{m}(\tilde{\pi})$. Therefore, we have \begin{align \label{eq: multiplicity formula 1} \bar{m}_{0}(\tilde{\pi}) = \begin{cases} m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} & \text{ if } \r \cong \r^{\theta_{0}}, \\ \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} & \text{ if } \r \ncong \r^{\theta_{0}}. \end{cases} \end{align} Note that $\a(\S{\phi}) \subseteq Y(\tilde{\pi})$, so \begin{align*} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} = |Y(\tilde{\pi}) / \a(\S{\phi})|. \end{align*} In particular, $Y(\tilde{\pi}) / \a(\S{\phi})$ is a two-group. We can fix a subgroup of representatives in $Y(\tilde{\pi})$ and denote it again by $Y(\tilde{\pi}) / \a(\S{\phi})$. Let us first consider the case $\r \ncong \r^{\theta_{0}}$. If $Y(\tilde{\pi}) / \a(\S{\phi}) = 1$, then the formula is obvious.
So let us assume $1 \neq \omega \in Y(\tilde{\pi}) / \a(\S{\phi})$, and by the stabilized $\omega$-twisted trace formula \eqref{eq: endoscopic side component}, one gets \[ \tIdt{\widetilde{G}}{, \phi}(\tilde{f}) = \sum_{\widetilde{G}' \in \tEnd{ell}{\widetilde{G}}} \iota(\widetilde{G}, \widetilde{G}') \Sdt{\widetilde{G}'}{, \phi}(\tilde{f}^{\widetilde{G}'}), \] for $\tilde{f} \in \H(\widetilde{G}, \lif{\chi})$. Since $\omega$ is not in $\a(\S{\phi})$, $\phi$ cannot factor through $\L{G'}$ for any $G' \in \End{ell}{G}$ such that $\widetilde{G}' \in \tEnd{ell}{\widetilde{G}}$. Then by Lemma~\ref{lemma: vanishing}, $\Sdt{\widetilde{G}'}{, \phi}(\tilde{f}^{\widetilde{G}'}) = 0$ for all $\widetilde{G}' \in \tEnd{ell}{\widetilde{G}}$, and hence \[ \tIdt{\widetilde{G}}{, \phi}(\tilde{f}) = 0. \] In particular, \begin{align \label{eq: multiplicity formula 2} tr \tRdt{\widetilde{G}}{, \phi}(\tilde{f}) = \tIdt{\widetilde{G}}{, \phi}(\tilde{f}) = 0, \end{align} as $\phi \in \cPdt{G}$. This is true for all nontrivial $\omega \in Y(\tilde{\pi}) / \a(\S{\phi})$. Let $I(\tilde{\pi})$ be the $\tilde{\pi}$-isotypic component in $\Rdt{\widetilde{G}}{, \phi}$, and one observes that $Y(\tilde{\pi}) / \a(\S{\phi})$ acts on $I(\tilde{\pi})$ by multiplication. The action of $Y(\tilde{\pi}) / \a(\S{\phi})$ does not commute with that of $\widetilde{G}(\mathbb{A}_{F})$, but one can take \[ \iG{\mathbb{A}_{F}} = \{g \in \widetilde{G}(\mathbb{A}_{F}) : \omega(g) = 1 \text{ for all } \omega \in Y(\tilde{\pi}) / \a(\S{\phi}) \}, \] which is of finite index in $\widetilde{G}(\mathbb{A}_{F})$; the action of $Y(\tilde{\pi}) / \a(\S{\phi})$ then commutes with that of $\iG{\mathbb{A}_{F}}$. In fact, one has a decomposition \[ I(\tilde{\pi}) = \bigoplus_{g \in \widetilde{G}(\mathbb{A}_{F}) / \iG{\mathbb{A}_{F}}} I((\pi^{1})^{g}) \] by restricting to $\iG{\mathbb{A}_{F}}$, where $\pi^{1}$ is an irreducible constituent of the restriction of $\tilde{\pi}$ to $\iG{\mathbb{A}_{F}}$. The point is that each summand is invariant under $Y(\tilde{\pi}) / \a(\S{\phi})$ and has the same multiplicity as $\tilde{\pi}$.
By \eqref{eq: multiplicity formula 2}, one has \[ tr ( R(\tilde{f}) \circ R(\omega) )|_{I(\tilde{\pi})} = 0, \] where $R(\omega)$ denotes the multiplication by $\omega$. In particular, restricting to those $\tilde{f}$ supported on $\iG{\mathbb{A}_{F}}$, one has \[ tr (R(\tilde{f}) \circ R(\omega) ) |_{I((\pi^{1})^{g})} = 0 \] for all $g \in \widetilde{G}(\mathbb{A}_{F}) / \iG{\mathbb{A}_{F}}$. We can view $I(\pi^{1})$ as a representation of $\H(\iG{\mathbb{A}_{F}}) \times Y(\tilde{\pi}) / \a(\S{\phi})$ and write it as $\pi^{1} \otimes W$; then \[ tr (R(\tilde{f}) \circ R(\omega) )|_{I(\pi^{1})} = tr \pi^{1}(\tilde{f}) \cdot tr \pi^{1}_{W}(\omega) = 0, \] where $\pi^{1}_{W}$ is the corresponding representation of $Y(\tilde{\pi}) / \a(\S{\phi})$ on $W$. Therefore, \[ tr \pi^{1}_{W}(\omega) = 0 \] for $1 \neq \omega \in Y(\tilde{\pi}) / \a(\S{\phi})$. We claim \begin{align \label{eq: divisibility} |Y(\tilde{\pi}) / \a(\S{\phi})| \text{ divides } \dim (W). \end{align} If that is the case, then since $m(\tilde{\pi}) = \dim (W)$, comparing with \eqref{eq: multiplicity formula 1} one must have \[ |Y(\tilde{\pi}) / \a(\S{\phi})| = \dim (W), \] hence $m(\tilde{\pi}) = |Y(\tilde{\pi}) / \a(\S{\phi})|$. To prove the claim \eqref{eq: divisibility}, one just needs to show the following general statement: if $V$ is a finite-dimensional complex representation of a finite group $A$ such that the trace of each nontrivial element of $A$ is zero, then the order of $A$ must divide the dimension of $V$. To see this, let $\chi_{V}$ and $\chi_{\text{triv}}$ be the characters of $V$ and the trivial representation of $A$ respectively; then the multiplicity of the trivial representation in $V$ can be given by \[ m = \langle \chi_{V}, \chi_{\text{triv}} \rangle = \dim(V)/|A|, \] which is an integer. Hence $|A|$ divides $\dim(V)$. For the case $\r \cong \r^{\theta_{0}}$ and $m_{\phi} = 1$, the proof is the same.
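As a sanity check of this general statement (an illustrative aside, not part of the argument), one can test it on the regular representation, which is the extreme case in which the hypothesis holds:

```latex
% Sanity check: the regular representation V = C[A] of a finite group A.
% Its character vanishes on every nontrivial element, so the hypothesis
% of the general statement above is satisfied.
For $V = \mathbb{C}[A]$ one has
\[
  \chi_{V}(a) =
  \begin{cases}
    |A| & \text{if } a = 1, \\
    0   & \text{if } a \neq 1,
  \end{cases}
\]
so the multiplicity of the trivial representation is
\[
  \langle \chi_{V}, \chi_{\text{triv}} \rangle
  = \frac{1}{|A|} \sum_{a \in A} \chi_{V}(a)
  = \frac{|A|}{|A|} = 1,
\]
and indeed $|A|$ divides $\dim(V) = |A|$. Conversely, any $V$ satisfying
the hypothesis has character $\chi_{V} = \frac{\dim(V)}{|A|}\,
\chi_{\mathbb{C}[A]}$, so $V$ is a direct sum of $\dim(V)/|A|$ copies of
the regular representation.
```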
So we are left with the case $\r \cong \r^{\theta_{0}}$ and $m_{\phi} = 2$. In this case, we have $\tilde{\pi}^{\theta_{0}} \cong \tilde{\pi} \otimes \omega$ for some $\omega \in X$. Let \[ X_{0}(\tilde{\pi}) = \{ \omega \in X : \tilde{\pi} \cong \tilde{\pi} \otimes \omega \text{ or } \tilde{\pi}^{\theta_{0}} \cong \tilde{\pi} \otimes \omega \}. \] If $\tilde{\pi} \otimes \omega \ncong \tilde{\pi}^{\theta_{0}}$ for any $\omega \in Y$, then \[ \sum_{\omega \in X/ YX_{0}(\tilde{\pi})} m(\tilde{\pi} \otimes \omega) = \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|} \] and the rest of the proof is again the same. If $\tilde{\pi} \otimes \omega_{1} \cong \tilde{\pi}^{\theta_{0}}$ for some $\omega_{1} \in Y$, then we need to consider the action of the two-group \begin{align \label{eq: theta character group} <(\theta_{0}, \omega_{1})> \times ( Y(\tilde{\pi}) / \a(\S{\phi}) ) \end{align} on $I(\tilde{\pi})$, where $(\theta_{0}, \omega_{1})$ acts by $R(\theta_{0})^{-1} \circ R(\omega_{1})$. This action commutes with that of the $(\theta_{0}, \omega_{1})$-invariant functions in $\H(\iG{\mathbb{A}_{F}})$, i.e., those satisfying $\tilde{f}^{\theta_{0}} \otimes \omega_{1} = \tilde{f}$. As a module over this space of functions, we have \[ I(\tilde{\pi}) \cong ( \bigoplus_{g \in \widetilde{G}(\mathbb{A}_{F}) / \iG{\mathbb{A}_{F}}} I((\pi^{1}_{+})^{g}) ) \bigoplus ( \bigoplus_{g \in \widetilde{G}(\mathbb{A}_{F}) / \iG{\mathbb{A}_{F}}} I((\pi^{1}_{-})^{g}) ) \] where the sign is according to the eigenvalues $\{\pm 1\}$ of any fixed intertwining operator between $\tilde{\pi} \otimes \omega_{1}$ and $\tilde{\pi}^{\theta_{0}}$ after we identify $I(\tilde{\pi}) \cong m(\tilde{\pi}) \, \tilde{\pi}$.
Note that the multiplicity of $\tilde{\pi}$ is the same as that of the irreducible modules in $I(\pi^{1}_{+})$ and $I(\pi^{1}_{-})$ over the space of functions described above, and the rest of the argument proceeds in the same way as before by using the stabilized $(\theta, \omega)$-twisted trace formula \eqref{eq: endoscopic side component} for $(\theta_{0}, \omega)$ in \eqref{eq: theta character group}. \end{proof} \begin{corollary \label{cor: modular character} Suppose $\tilde{\pi}$ and $\tilde{\pi}'$ are discrete automorphic representations of $\widetilde{G}$, such that $\tilde{\pi} \cong \tilde{\pi}' \otimes \omega$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules for some $\omega \in X$. If $\r$ is an irreducible constituent in the restriction of $\tilde{\pi}$ to $G(\mathbb{A}_{F})$ and $[\r] \in \cPkt{\phi}$ for $\phi \in \cPdt{G}$, then there exists some $\omega' \in Y$ and $\theta \in \Sigma_{0}$ such that $\tilde{\pi}' \cong \tilde{\pi}^{\theta} \otimes \omega'$. \end{corollary} \begin{proof} If $\widetilde{G} = GSp(2n)$ or $GSO(2n, \eta)$, this can easily be seen by comparing \eqref{eq: multiplicity formula} with \eqref{eq: multiplicity formula 1}. In general, we can first go to the product group $\lif{\widetilde{G}}$ of general symplectic groups and connected general even orthogonal groups (see \eqref{eq: product group}), and it is clear that this corollary holds in that case. Then by restricting to $\widetilde{G}$ we get the result. \end{proof} To generalize Proposition~\ref{prop: multiplicity formula}, for $[\r] \in \cPkt{\phi}$ with $\phi \in \cPdt{G}$, we denote by $\Sigma_{0}(\r, Y)$ the subgroup of $\Sigma_{0}$ consisting of $\theta$ such that $\tilde{\pi} \otimes \omega \cong \tilde{\pi}^{\theta}$ for some $\omega \in Y$, where $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(\mathbb{A}_{F})$ containing $\r$ in its restriction to $G(\mathbb{A}_{F})$.
If we write $\Sigma_{Y}(\r)$ for the quotient of $\Sigma_{0}$ by $\Sigma_{0}(\r, Y)$, then we have an exact sequence \begin{align} \xymatrix{1 \ar[r] & \Sigma_{0}(\r, Y) \ar[r] & \Sigma_{0} \ar[r] & \Sigma_{Y}(\r) \ar[r] & 1}, \end{align} where all these groups are two-groups. We can also choose a splitting of this sequence and write $\Sigma_{0} \cong \Sigma_{0}(\r, Y) \times \Sigma_{Y}(\r)$. \begin{corollary \label{cor: multiplicity formula} Suppose $\tilde{\pi}$ is a discrete automorphic representation of $\widetilde{G}$, and $\r$ is an irreducible constituent of $\tilde{\pi}$ restricted to $G(\mathbb{A}_{F})$. If $[\r] \in \cPkt{\phi}$ for $\phi \in \cPdt{G}$, then \begin{align \label{eq: generalized multiplicity formula} m(\tilde{\pi}) = m_{\tilde{\phi}} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}, \end{align} where $m_{\tilde{\phi}} = m_{\phi} / |\Sigma_{Y}(\r)|$. \end{corollary} \begin{proof} We will use the formula~\eqref{eq: multiplicity formula 0} \[ \bar{m}(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}. \] It follows from Corollary~\ref{cor: modular character} that \[ \bar{m}(\tilde{\pi}) = \sum_{\theta \in \Sigma_{Y}(\r)} m(\tilde{\pi}^{\theta}). \] Since $m(\tilde{\pi}^{\theta}) = m(\tilde{\pi})$ for $\theta \in \Sigma_{0}$, we get \[ |\Sigma_{Y}(\r)| \cdot m(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}. \] So by writing $m_{\tilde{\phi}} = m_{\phi} / |\Sigma_{Y}(\r)|$, we have proved the formula \eqref{eq: generalized multiplicity formula}. \end{proof} Suppose $\phi \in \cPdt{G}$, and let $\lif{\zeta}$ be a character of $Z_{\widetilde{G}}(\mathbb{A}_{F}) / Z_{\widetilde{G}}(F)$ such that $\zeta = \lif{\zeta}|_{Z_{G}}$ is the central character of $\cPkt{\phi}$.
If we denote by $\clPkt{\phi, \lif{\zeta}}$ all equivalence classes of irreducible admissible representations of $\widetilde{G}(\mathbb{A}_{F})$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules with central character $\lif{\zeta}$, whose restrictions to $G(\mathbb{A}_{F})$ have irreducible constituents contained in $\cPkt{\phi}$, then by Corollary~\ref{cor: coarse multiplicity formula} we can always choose a representative for $[\tilde{\pi}] \in \clPkt{\phi, \lif{\zeta}} / X$ with $<\cdot, \tilde{\pi}> = 1$ in the discrete spectrum of $\widetilde{G}$. The following proposition gives a decomposition of the $\phi$-component of the discrete spectrum of $\widetilde{G}$. \begin{proposition}\label{prop: discrete spectrum} Suppose $\phi \in \cPdt{G}$. Then we have the following decomposition as $\bar{\mathcal{H}}(\widetilde{G})$-modules: \begin{align}\label{eq: discrete spectrum} L^2_{disc, \phi} (\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} \quad \sum_{\substack{[\tilde{\pi}] \in \clPkt{\phi, \lif{\zeta}} / X \\ <\cdot, \tilde{\pi}> = 1}} \tilde{\pi} \otimes \omega, \end{align} where the $\tilde{\pi}$ are taken to be the representatives of $\clPkt{\phi, \lif{\zeta}} / X$ in the discrete automorphic spectrum. Moreover, \[ L^2_{disc, \phi} (\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) = 0 \] for $\phi \in \cP{G} - \cPdt{G}$. \end{proposition} \begin{proof} For $\phi \in \cP{G} - \cPdt{G}$, we have $L^2_{disc, \phi} (G(F) \backslash G(\mathbb{A}_{F})) = 0$ (cf. Theorem~\ref{thm: discrete spectrum}). Then it follows from Lemma~\ref{lemma: Hecke eigenvalue correspondence} and Lemma~\ref{lemma: induced discrete spectrum} that \( L^2_{disc, \phi} (\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) = 0. \) Next we assume $\phi \in \cPdt{G}$.
By Lemma~\ref{lemma: induced discrete spectrum}, $L^2_{disc, \phi} (\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta})$ consists of discrete automorphic representations in $\clPkt{\phi, \lif{\zeta}}$. Then for any automorphic representation $\tilde{\pi}'$ in $L^2_{disc, \phi} (\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta})$, there exists a representative $\tilde{\pi}$ chosen in \eqref{eq: discrete spectrum} such that $\tilde{\pi} \cong \tilde{\pi}' \otimes \omega$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules for some $\omega \in X$. By Corollary~\ref{cor: modular character}, $\tilde{\pi}' \cong \tilde{\pi}^{\theta} \otimes \omega'$ for $\theta \in \Sigma_{0}$ and $\omega' \in Y$. In particular, $\tilde{\pi}' \cong \tilde{\pi} \otimes \omega'$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules. Therefore, it suffices to count the multiplicity of $\tilde{\pi}$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules in $L^2_{disc, \phi} (\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta})$. By Corollary~\ref{cor: modular character} again, \[ \sum_{\tilde{\pi}' \sim \tilde{\pi}} m(\tilde{\pi}') = \sum_{\theta \in \Sigma_{0}, \, \omega \in \bar{Y}(\tilde{\pi})} m(\tilde{\pi}^{\theta} \otimes \omega) = |\frac{\bar{Y}(\tilde{\pi})}{Y(\tilde{\pi})}| \cdot |\Sigma_{Y}(\r)| \cdot m(\tilde{\pi}), \] where $\tilde{\pi}' \cong \tilde{\pi}$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules and \[ \bar{Y}(\tilde{\pi}) = \{\omega \in Y : \tilde{\pi} \otimes \omega \cong \tilde{\pi} \text{ as $\bar{\mathcal{H}}(\widetilde{G})$-modules}\}. \] By Corollary~\ref{cor: multiplicity formula}, we have \[ |\Sigma_{Y}(\r)| \cdot m(\tilde{\pi}) = m_{\phi} \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}, \] so \[ \sum_{\tilde{\pi}' \sim \tilde{\pi}} m(\tilde{\pi}') = m_{\phi} \frac{|\bar{Y}(\tilde{\pi})|}{|\a(\S{\phi})|}. \] This is exactly the multiplicity we get from \eqref{eq: discrete spectrum}. 
\end{proof} Now let us get back to the multiplicity formula. Note that under the assumptions of Proposition~\ref{prop: multiplicity formula}, if $\widetilde{G}$ is general symplectic, then the multiplicity formula \eqref{eq: multiplicity formula} becomes \[ m(\tilde{\pi}) = \frac{|Y(\tilde{\pi})|}{|\a(\S{\phi})|}. \] It is an interesting question to ask when one can have multiplicity one, i.e. $|Y(\tilde{\pi})| = |\a(\S{\phi})|$. Since $\a(\S{\phi})$ is a subgroup of $Y(\tilde{\pi})$, this is the same as asking when \[ \a(\S{\phi}) = Y(\tilde{\pi}). \] By Corollary~\ref{cor: theta twisting character} we have the following description of $Y(\tilde{\pi})$. Let us define \begin{align*} \prod^{aut}_{v} \a(\S{\phi_{v}}) &:= \{ \omega \in Y : \omega_{v} \in \a(\S{\phi_{v}}) \text{ for all } \, v \}, \\ \prod^{aut}_{almost \, all \, v} \a(\S{\phi_{v}}) &:= \{ \omega \in Y : \omega_{v} \in \a(\S{\phi_{v}}) \text{ for almost all} \, v \}; \end{align*} then \[ Y(\tilde{\pi}) = \prod^{aut}_{v} \a(\S{\phi_{v}}). \] Moreover, we get a sequence of inclusions \[ \a(\S{\phi}) \subseteq \prod^{aut}_{v} \a(\S{\phi_{v}}) \subseteq \prod^{aut}_{almost \, all \, v} \a(\S{\phi_{v}}). \] Motivated by the case where $G$ is symplectic and $\phi \in \cPdt{G}$, we give the following definitions for both symplectic groups and special even orthogonal groups. \begin{definition} Suppose $\phi \in \cP{G}$. We say {\bf multiplicity one} holds for $\tilde{\phi}$ if \[ \a(\S{\phi}) = \prod^{aut}_{v} \a(\S{\phi_{v}}). \] \end{definition} \begin{definition} Suppose $\phi \in \cP{G}$. We say {\bf strong multiplicity one} holds for $\tilde{\phi}$ if \[ \prod^{aut}_{v} \a(\S{\phi_{v}}) = \prod^{aut}_{almost \, all \, v} \a(\S{\phi_{v}}). \] \end{definition} The motivation for the first definition is now clear, while the second definition needs some explanation. Before giving that explanation, we introduce two modified definitions of the same kind.
In view of Theorem~\ref{thm: discrete spectrum}, we need to deal with the group of characters $\omega_{v}$ such that \[ \tilde{f}_{v}(\tilde{\pi}_{v} \otimes \omega_{v}) = \tilde{f}_{v}(\tilde{\pi}_{v}), \,\,\,\, \tilde{f}_{v} \in \bar{\mathcal{H}}(\widetilde{G}_{v}) \] for $[\r _{v}] \in \cPkt{\phi_{v}}$. It follows from Corollary~\ref{cor: theta twisting character} that this group is isomorphic to $\a(\S{\phi_{v}}^{\Sigma_{0}})$. Then we can similarly define a sequence of inclusions \[ \a(\S{\phi}^{\Sigma_{0}}) \subseteq \prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}) \subseteq \prod^{aut}_{almost \, all \, v} \a(\S{\phi_{v}}^{\Sigma_{0}}), \] and define the concepts of multiplicity one and strong multiplicity one in the same way for these groups. \begin{definition} Suppose $\phi \in \cP{G}$. We say {\bf $\Sigma_{0}$-multiplicity one} holds for $\tilde{\phi}$ if \[ \a(\S{\phi}^{\Sigma_{0}}) = \prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}). \] \end{definition} \begin{definition} Suppose $\phi \in \cP{G}$. We say {\bf $\Sigma_{0}$-strong multiplicity one} holds for $\tilde{\phi}$ if \[ \prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}) = \prod^{aut}_{almost \, all \, v} \a(\S{\phi_{v}}^{\Sigma_{0}}). \] \end{definition} Recall that we can associate a global $L$-packet $\cPkt{\phi}$ to $\phi \in \cP{G}$, so we can talk about strong multiplicity one for the global $L$-packet $\cPkt{\phi}$: if $\r$ is automorphic and $[\r_{v}] \in \cPkt{\phi_{v}}$ for almost all places $v$, then $[\r]$ lies in $\cPkt{\phi}$. As in the local case (see Theorem~\ref{thm: refined L-packet}), we can expect to lift the global $L$-packet $\cPkt{\phi}$ to some global $L$-packet $\cPkt{\tilde{\phi}}$ for $\widetilde{G}$. Obviously the lift is not unique, but as one can see from Corollary~\ref{cor: modular character}, it should be unique up to twisting by id\`ele class characters.
Because we already have strong multiplicity one for $\cPkt{\phi}$, strong multiplicity one for $\cPkt{\tilde{\phi}}$ is equivalent to the property that for any $\omega \in Y$, if $\cPkt{\tilde{\phi}_{v}} = \cPkt{\tilde{\phi}_{v}} \otimes \omega_{v}$ for almost all places $v$, then $\cPkt{\tilde{\phi}} = \cPkt{\tilde{\phi}} \otimes \omega$. It can easily be seen that this property is equivalent to the condition of $\Sigma_{0}$-strong multiplicity one in our definition. \subsection{Statement of global theorem} \label{subsec: statement of main global theorems} After discussing the multiplicity question, we want to describe the $\phi$-component of the discrete spectrum for $\widetilde{G}$, which should be an analogue of Theorem~\ref{thm: discrete spectrum}. We again assume $\widetilde{G}$ is of type~\eqref{eq: similitude}. \begin{conjecture}\label{conj: global L-packet} \begin{enumerate} \item Suppose $\phi \in \cP{G}$. Then one can associate to it a global packet $\cPkt{\tilde{\phi}}$ of $\bar{\mathcal{H}}(\widetilde{G})$-modules of irreducible admissible representations for $\widetilde{G}(\mathbb{A}_{F})$ satisfying the following properties: \begin{enumerate} \item $\cPkt{\tilde{\phi}} = \bigotimes'_{v} \cPkt{\tilde{\phi}_{v}}$, where $\cPkt{\tilde{\phi}_{v}}$ is some lift of $\cPkt{\phi_{v}}$ defined in Theorem~\ref{thm: refined L-packet}. \item there exists $[\tilde{\pi}] \in \cPkt{\tilde{\phi}}$ such that $\tilde{\pi}$ is isomorphic to an automorphic representation as $\bar{\mathcal{H}}(\widetilde{G})$-modules. \end{enumerate} Moreover, $\cPkt{\tilde{\phi}}$ is unique up to twisting by characters of $\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)G(\mathbb{A}_{F})$, and we can define a global character of $\S{\tilde{\phi}}$ by \[ <x, \tilde{\pi}> := \prod_{v}<x_{v}, \tilde{\pi}_{v}> \,\,\,\,\, \text{ for } \, \, \tilde{\pi} \in \cPkt{\tilde{\phi}} \text{ and } \, \, x \in \S{\tilde{\phi}}.
\] \item Suppose $\phi \in \cPdt{G}$. Then the $\phi$-component of the discrete spectrum of $\widetilde{G}(\mathbb{A}_{F})$, as a $\bar{\mathcal{H}}(\widetilde{G})$-module, has the following decomposition: \begin{align}\label{formula: discrete spectrum} L^{2}_{disc, \phi}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) = m_{\phi} \bigoplus_{\omega \in Y / \a(\S{\phi})} \bigoplus_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega \\ <\cdot, \tilde{\pi}> = 1}} \tilde{\pi}, \end{align} where $m_{\phi}$ is defined as in Remark~\ref{rk: discrete spectrum}. \end{enumerate} \end{conjecture} Along with this conjecture, we need to prove some results about the stable multiplicity formula for $\widetilde{G}$ (see Conjecture~\ref{conj: global conjecture}). This formula was conjectured by Arthur \cite{Arthur:1990} for arbitrary quasisplit connected reductive groups, and he proved it for special orthogonal and symplectic groups in \cite{Arthur:2013}. To state the formula, we need some preparation. Suppose $S$ is a connected complex reductive group with an automorphism $\theta$. We denote $S^{\theta} = S \rtimes \theta$, which can be viewed as a connected component of the complex reductive group $S^{+} := S \rtimes <\theta>$. We fix a maximal torus $T$ of $S$, and define the Weyl set \[ W^{\theta}(S) = \text{Norm}(T, S^{\theta}) / T. \] Let $W^{\theta}_{reg}(S)$ be the set of Weyl elements $w$ such that \[ \det(w-1)|_{\mathfrak{a}_{T}} \neq 0. \] Moreover, let $s^{0}(w)$ denote the sign $(-1)^{n}$, where $n$ is the number of positive roots of $(S, T)$ mapped by $w$ to negative roots. Now we can assign to $S^{\theta}$ a real number \[ i^{\theta}(S) = |W(S)|^{-1} \sum_{w \in W^{\theta}_{reg}(S)} s^{0}(w) |\det(w - 1)|_{\mathfrak{a}_{T}}^{-1}, \] where $W(S)$ is the Weyl group of $S$. Next we want to define a constant $\sigma(S_{1})$ associated with any connected complex reductive group $S_{1}$. To define this we have to introduce some more notation.
Still for the original $S^{\theta}$, let us denote the set of semisimple elements of $S^{\theta}$ by $S^{\theta}_{ss}$. For any $s \in S^{\theta}_{ss}$, we write \begin{align*} S_{s} & = \text{Cent}(s, S). \end{align*} Let \[ S^{\theta}_{ell} = \{s \in S^{\theta}_{ss} : |Z(S_{s})| < \infty \}, \] and let $\mathcal{E}^{\theta}_{ell}(S)$ be the set of $S$-conjugacy classes in $S^{\theta}_{ell}$. Finally, the constant $\sigma(S_{1})$ can be characterized by the following proposition (\cite{Arthur:2013}, Proposition 4.1.1). \begin{proposition}\label{prop: endoscopy of complex group} There are unique constants $\sigma(S_{1})$, defined for all connected complex reductive groups $S_{1}$, such that for any connected component $S^{\theta}$ of a complex reductive group, the number \[ e^{\theta}(S) = \sum_{s \in \mathcal{E}^{\theta}_{ell}(S)} |\pi_{0}(S_{s})|^{-1} \sigma((S_{s})^{0}) \] equals $i^{\theta}(S)$, and furthermore \[ \sigma(S_{1}) = \sigma(S_{1} / Z_{1}) |Z_{1}|^{-1} \] for any central subgroup $Z_{1}$ of $S_{1}$. \end{proposition} Now we can state the stable multiplicity formula for $\widetilde{G}$ as follows. \begin{conjecture}\label{conj: stable multiplicity formula} Suppose $\phi \in \cP{G}$. Then \begin{align}\label{formula: stable multiplicity} \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} |\S{\tilde{\phi}}|^{-1} \sigma( \com[0]{\cS{\phi}}) \tilde{f}^{\widetilde{G}} (\tilde{\phi} \otimes \omega), \,\,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}), \end{align} where \[ \tilde{f}^{\widetilde{G}} (\tilde{\phi} \otimes \omega) := \prod_{v} \tilde{f}_{v}(\tilde{\phi}_{v} \otimes \omega_{v}), \] with respect to $\cPkt{\tilde{\phi}}$ defined in Conjecture~\ref{conj: global L-packet}. \end{conjecture} Finally, we need a twisted version of the decomposition \eqref{formula: discrete spectrum}, whose role will be clear in the next section.
\begin{conjecture}\label{conj: compatible normalization} Suppose $\phi \in \cPdt{G}$ and $x \in \S{\phi}^{\theta}$ with $\a(x) = \omega$ for $\theta \in \Sigma_{0}$ and some character $\omega$ of $\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(F)G(\mathbb{A}_{F})$. For $[\tilde{\pi}] \in \cPkt{\tilde{\phi}}$ with $<\cdot, \tilde{\pi}> =1$, the canonical intertwining operator \[ R(\theta)^{-1} \circ R(\omega) \] restricted to the $\tilde{\pi}$-isotypic component $I(\tilde{\pi})$ in the discrete spectrum is equal to the product of $m(\tilde{\pi})$ and the local intertwining operators $A_{\tilde{\pi}_{v}}(\theta, \omega_{v})$ normalized by $x_{v}$ (see \eqref{eq: theta twisted intertwining operator}), i.e. \begin{align}\label{formula: theta twisted discrete spectrum} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega' \in Y / \a(\S{\phi})} \sum_{\substack{[\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega), \,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}), \end{align} where $ \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = \prod_{v} \tilde{f}_{\widetilde{G}^{\theta}_{v}}(\tilde{\pi}_{v}, \omega_{v})$, and it does not depend on $x$. \end{conjecture} \begin{remark}\label{rk: compatible normalization} This kind of result has been proved in the cases of special even orthogonal groups (see \cite{Arthur:2013}, Theorem 4.2.2) and general linear groups (see \cite{Arthur:2013}, Lemma 4.2.3). \end{remark} In this paper, we will only establish these conjectures in a special case. \begin{theorem}\label{thm: main global} Suppose $G = G_{1} \times G_{2} \times \cdots \times G_{q}$, such that each $G_{i}$ is a symplectic group or a special even orthogonal group.
For $\phi = \phi_{1} \times \phi_{2} \times \cdots \times \phi_{q} \in \cP{G}$ with $\phi_{i} \in \cP{G_{i}}$, if $\S{\tilde{\phi}_{i}} = 1$ for all $i$, then Conjectures~\ref{conj: global L-packet} and~\ref{conj: compatible normalization} hold. If we further assume $\phi \in \cPdt{G}$, then Conjecture~\ref{conj: stable multiplicity formula} also holds. \end{theorem} \subsection{Comparison of trace formulas} \label{subsec: comparison of trace formulas} We assume $\widetilde{G}$ is of type \eqref{eq: similitude} and $\theta \in \Sigma_{0}$. Since we are going to prove all the theorems by induction, here we would like to make a temporary induction assumption: we assume that Conjectures~\ref{conj: global L-packet}, \ref{conj: stable multiplicity formula} and~\ref{conj: compatible normalization}, together with our main local theorem (Theorem~\ref{thm: refined L-packet}), hold for the proper Levi subgroups and twisted endoscopic groups of $\widetilde{G}$. Based on this assumption, we want to expand the $\phi$-component of \eqref{eq: twisted spectral side} and \eqref{eq: twisted endoscopic side} in terms of local objects. Before we do the expansion, let us write $\P{G, \phi}$ for the set of global Langlands parameters of $G$ giving rise to $\phi \in \cP{G}$. It is clear that $|\P{G, \phi}| = m_{\phi}$. So we can write formally the formulas \eqref{formula: discrete spectrum}, \eqref{formula: stable multiplicity} and \eqref{formula: theta twisted discrete spectrum} for $\phi_{G} \in \P{G, \phi}$ by simply setting $m_{\phi_{G}} = 1$. In fact these formal formulas do make sense when we associate to $\phi_{G}$ the refined global $L$-packet (see \cite{Arthur:2013}, Section 8.4). But we do not need this refinement here, for eventually we are going to sum over $\P{G, \phi}$. The benefit of working with these global Langlands parameters is that one can imitate the computation in (\cite{Arthur:1990}, Sections 5 and 7), where one does assume the global Langlands correspondence.
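To get a feel for the constants $i^{\theta}(S)$ and $\sigma(S_{1})$ that enter the expansions below, the following small computation may be helpful; it is only an illustration on our part, using nothing beyond the definitions above and Proposition~\ref{prop: endoscopy of complex group}, and is not needed in the sequel.

```latex
Take $S = PGL(2, \mathbb{C})$ with $\theta$ trivial, and drop $\theta$ from the
notation. Here $W(S) = \{1, w\}$, only $w$ is regular, with
$|\det(w - 1)_{\mathfrak{a}_{T}}| = 2$ and $s^{0}(w) = -1$, so
\[
i(S) = \frac{1}{2} \cdot (-1) \cdot \frac{1}{2} = -\frac{1}{4}.
\]
Next, $\sigma(T_{1}) = 0$ for any torus $T_{1}$: choosing a finite central
subgroup $Z_{1}$ of order $n$ gives $T_{1}/Z_{1} \cong T_{1}$, hence
$\sigma(T_{1}) = \sigma(T_{1}/Z_{1}) |Z_{1}|^{-1} = \sigma(T_{1})/n$ for all
$n$. The elliptic classes in $S$ are $s = 1$, with $S_{s} = S$, and the class
of $\mathrm{diag}(1,-1)$, whose centralizer has identity component a torus, so
\[
e(S) = \sigma(PGL(2, \mathbb{C})) + \frac{1}{2}\,\sigma(T)
     = \sigma(PGL(2, \mathbb{C})),
\]
and $e(S) = i(S)$ forces $\sigma(PGL(2, \mathbb{C})) = -\frac{1}{4}$. By the
last relation of the proposition, applied with $Z_{1} = \{\pm 1\}$,
\[
\sigma(SL(2, \mathbb{C})) = \frac{1}{2}\,\sigma(PGL(2, \mathbb{C}))
                          = -\frac{1}{8}.
\]
```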
\subsubsection{The spectral expansion} \label{subsubsec: spectral expansion} Let us write the $\phi$-component of \eqref{eq: twisted spectral side} as \begin{align*} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = \sum_{ \{ \widetilde{M} \} } |W(\widetilde{M})|^{-1} \sum_{w \in W^{\theta}(\widetilde{M})_{reg}} |\det(w-1)_{\mathfrak{a}^{\widetilde{G}^{\theta}}_{\widetilde{M}}}|^{-1} tr(M_{\widetilde{P}|\theta \widetilde{P}, \phi}(w, \lif{\chi}) I^{\theta, \omega}_{\widetilde{P}, \phi}(\lif{\chi}, \tilde{f})). \end{align*} So the key is to expand \begin{align} \label{eq: twisted spectral expansion 1} tr(M_{\widetilde{P}|\theta \widetilde{P}, \phi}(w, \lif{\chi}) I^{\theta, \omega}_{\widetilde{P}, \phi}(\lif{\chi}, \tilde{f})). \end{align} By definition, \eqref{eq: twisted spectral expansion 1} vanishes unless there exists $\phi_{M} \in \cPdt{M, \phi}$. Moreover, the $(G(F) \rtimes \Sigma_{0})$-conjugacy class of $M$ such that $\cPdt{M, \phi} \neq \emptyset$ is determined by $\phi$, and the choice of $\phi_{G} \in \P{G, \phi}$ determines the $G(F)$-conjugacy class of $M$ such that $\Pdt{M, \phi_{G}} \neq \emptyset$. So we can fix such a $G(F)$-conjugacy class of $M$ and assume $\phi_{M} \in \cPdt{M, \phi}$. To apply our induction assumption we also need to assume $M \neq G$, i.e., $\phi \notin \cPdt{G}$. Note that the diagram \eqref{eq: theta twisted intertwining relation diagram} in our discussion of the local $(\theta, \omega)$-twisted intertwining relation can also be defined in the global case, and the global analogues of the groups in the diagram map to their local counterparts. It is not hard to show, using an argument similar to that in Proposition~\ref{prop: multiplicity formula}, that \eqref{eq: twisted spectral expansion 1} vanishes unless there exists $u \in \N{\phi}^{\theta}$ such that $w_{u} = w$ and \( \omega = \a(x_{u}). \) So $\phi_{M} \in \cPdt{M^{\theta_{u}}}$. Now we can apply Conjecture~\ref{conj: global L-packet} to $\tilde{\phi}_{M}$.
For any $[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}}$, let us define \[ R_{\widetilde{P}|\theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) := \prod_{v} R_{\widetilde{P}_{v} |\theta \widetilde{P}_{v}}(u_{v}, \tilde{\pi}_{M_{v}}, \tilde{\phi}_{v}). \] In particular, if $\tilde{\pi}_{M} \in \mathcal{A}_{2}(\widetilde{M})$, we can write \[ R_{\widetilde{P}|\theta \widetilde{P}}(w, \tilde{\pi}_{M}, \tilde{\phi}) := r_{P}(w, \phi_{M})^{-1} M_{\widetilde{P}|\theta \widetilde{P}}(w, \tilde{\pi}_{M}), \] where $r_{P}(w, \phi_{M})$ is the global normalizing factor defined by \[ r_{P}(w, \phi_{M}) = \prod_{v} r_{P}(w_{v}, \phi_{M_{v}}). \] It follows from Conjecture~\ref{conj: compatible normalization} and the analogous result for $GL(N)$ (cf. Remark~\ref{rk: compatible normalization}) that \[ R_{\widetilde{P}|\theta \widetilde{P}}(w, \tilde{\pi}_{M}, \tilde{\phi}) = R_{\widetilde{P}|\theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) \] for any $u \in \N{\phi}^{\theta}(w, \omega)$. Here $\N{\phi}^{\theta}(w, \omega)$ consists of $u \in \N{\phi}^{\theta}$ such that $w_{u} = w$ and $\a(x_{u}) = \omega$. Applying Conjecture~\ref{conj: global L-packet} (2) to $\widetilde{M}$, we can write \eqref{eq: twisted spectral expansion 1} as a double sum over $\phi_{G} \in \P{G, \phi}$ and $\phi_{M} \in \Pdt{M^{\theta_{u}}, \phi_{G}}$ of \[ \sum_{\omega' \in Y / \a(\S{\phi_{M}})} \sum_{[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}} \otimes \omega'} \delta_{\tilde{\phi}_{M} }(\tilde{\pi}_{M}) r_{P}(w, \phi_{M}) tr ( R_{\widetilde{P}|\theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) I^{\theta, \omega}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1}, \tilde{f}) ), \] where \[ \delta_{\tilde{\phi}_{M}} (\tilde{\pi}_{M}) = |\S{\tilde{\phi}_{M}}|^{-1} \sum_{x \in \S{\tilde{\phi}_{M}}} <x, \tilde{\pi}_{M}>.
\] Moreover, we can write \[ \sum_{x \in \S{\tilde{\phi}_{M}}} <x, \tilde{\pi}_{M}> R_{\widetilde{P}|\theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) = \sum_{u \in \N{\phi}^{\theta}(w, \omega)} R_{\widetilde{P}|\theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}). \] If we switch the sum over $\tilde{\pi}_{M} \in \cPkt{\tilde{\phi}_{M}} \otimes \omega'$ with that over $u \in \N{\phi}^{\theta}(w, \omega)$, and define \[ \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', u) = \sum_{[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}} \otimes \omega'} tr (R_{\widetilde{P}|\theta \widetilde{P}}(u, \tilde{\pi}_{M}, \tilde{\phi}) I^{\theta, \omega}_{\widetilde{P}}(\tilde{\pi}_{M} \otimes \omega^{-1}, \tilde{f}) ), \] then \eqref{eq: twisted spectral expansion 1} becomes a double sum over $\phi_{G} \in \P{G, \phi}$ and $\phi_{M} \in \Pdt{M^{\theta_{u}}, \phi_{G}}$ of \begin{align} \sum_{\omega' \in Y / \a(\S{\phi_{M}})} |\S{\tilde{\phi}_{M}}|^{-1} \sum_{u \in \N{\phi}^{\theta}(w, \omega)} r_{P}(w, \phi_{M}) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', u). \label{eq: twisted spectral expansion 2} \end{align} Since we are taking $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$, the contributions of $\phi_{G} \in \P{G, \phi}$ to the $\phi$-component of \eqref{eq: twisted spectral side} are the same. So the $\phi$-component of \eqref{eq: twisted spectral side} can be written as a sum over $w \in W^{\theta}(\lif{M})_{reg}$ and $\phi_{M} \in \Pdt{M^{\theta_{u}}, \phi_{G}}$ of \[ m_{\phi} | W(\lif{M}) |^{-1} | \det (w - 1)_{\mathfrak{a}^{\widetilde{G}^{\theta}}_{\widetilde{M}}} |^{-1} \] multiplied by \eqref{eq: twisted spectral expansion 2}. Here we can identify $W(\widetilde{M})$ with $W(M)$, and it is easy to see that \[ | \det (w - 1)_{\mathfrak{a}^{\widetilde{G}^{\theta}}_{\widetilde{M}}} | = | \det (w - 1)_{\mathfrak{a}^{G^{\theta}}_{M}} |.
\] Next we want to switch the order of the double sum over $w \in W^{\theta}(\lif{M})_{reg}$ and $\phi_{M} \in \Pdt{M^{\theta_{u}}, \phi_{G}}$ to a double sum over $\phi_{M} \in \Pdt{M, \phi_{G}}$ and $w \in W^{\theta}_{\phi , reg}$, where $W^{\theta}_{\phi , reg} = W^{\theta}(\lif{M})_{reg} \cap W^{\theta}_{\phi}$. Since the nonzero contribution of each $\phi_{M} \in \Pdt{M, \phi_{G}}$ is the same and \[ | \Pdt{M, \phi_{G}} | = \frac{| W(M) |}{| W_{\phi} |}, \] we get a single sum over $w \in W^{\theta}_{\phi , reg}$ of \[ m_{\phi} |W_{\phi}|^{-1} |\S{\tilde{\phi}_{M}}|^{-1} | \det (w - 1)_{\mathfrak{a}^{G^{\theta}}_{M}} |^{-1} \] multiplied by \[ \sum_{\omega' \in Y / \a(\S{\phi_{M}})} \sum_{u \in \N{\phi}^{\theta}(w, \omega)} r_{P}(w, \phi_{M}) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', u). \] Note that the double sum over $ w \in W^{\theta}_{\phi , reg}$ and $u \in \N{\phi}^{\theta}(w, \omega)$ can be rearranged as a double sum over $x \in \S{\phi}^{\theta}(\omega)$ and $u \in \N{\phi, reg}^{\theta}(x)$, where \[ \S{\phi}^{\theta}(\omega) = \{x \in \S{\phi}^{\theta} : \a(x) = \omega\}, \] and \[ \N{\phi, reg}^{\theta}(x) = \{ u \in \N{\phi}^{\theta} : x_{u} = x, w_{u} \in W^{\theta}_{\phi, reg} \}. \] So we end up with a sum over $x \in \S{\phi}^{\theta}(\omega)$ of \[ m_{\phi} |W_{\phi}|^{-1} |\S{\tilde{\phi}_{M}}|^{-1} \] multiplied by \begin{align} \sum_{\omega' \in Y / \a(\S{\phi_{M}})} \sum_{u \in \N{\phi, reg}^{\theta}(x)} | \det (w_{u} - 1)_{\mathfrak{a}^{G^{\theta}}_{M}} |^{-1} r_{P}(w_{u}, \phi_{M}) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', u).
\label{eq: twisted spectral expansion 3} \end{align} If we write \[ r^{G}_{\phi}(w_{u}) = r_{P}(w_{u}, \phi_{M}), \] and define $s^{0}_{\phi}(w_{u})$ to be $(-1)^{n}$, where $n$ is the number of positive roots of $(\cS{\phi}^{0}, \bar{T}_{\phi})$ mapped to negative roots by $w_{u}$, then by Arthur's sign lemma (\cite{Arthur:2013}, Lemma 4.3.1) we have \[ r^{G}_{\phi}(w_{u}) = s^{0}_{\phi}(w_{u}). \] Moreover, by our comments after Lemma~\ref{lemma: twisted intertwining relation 1}, $\tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', u)$ only depends on the image of $u$ in $\S{\phi}^{\theta}$, so we can write \[ \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', u) = \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x). \] Therefore, the term \eqref{eq: twisted spectral expansion 3} becomes \[ \sum_{\omega' \in Y / \a(\S{\phi_{M}})} ( \sum_{u \in \N{\phi, reg}^{\theta}(x)} s^{0}_{\phi}(w_{u}) | \det (w_{u} - 1)_{\mathfrak{a}^{G^{\theta}}_{M}} |^{-1} ) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x). \] For $\omega' \in \a(\S{\phi})$, $\cPkt{\tilde{\phi}} \otimes \omega' = \cPkt{\tilde{\phi}}$ and \[ (\tilde{f}|_{\lif{Z}_{F}G(F)})_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x) = f_{G^{\theta}}(\phi, x) = (\tilde{f}|_{\lif{Z}_{F}G(F)})_{\widetilde{G}^{\theta}}(\tilde{\phi}, x), \] where $f$ is the restriction of $\tilde{f}$ to $G(F)$. So we get, for $\omega' \in \a(\S{\phi})$, \[ \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x) = \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi}, x). \] Therefore we only need to take the sum over $\omega' \in Y / \a( \S{\phi} )$ in \eqref{eq: twisted spectral expansion 3}, and then multiply by $|\a(\S{\phi}) / \a(\S{\phi_{M}})|$.
Since \[ |\a(\S{\phi}) / \a(\S{\phi_{M}}) | = \frac{|\S{\phi}|}{|\S{\tilde{\phi}}|} \cdot \frac{|\S{\tilde{\phi}_{M}}|}{|\S{\phi_{M}}|}, \] the resulting constant multiple is $m_{\phi}$ times \begin{align*} |W_{\phi}|^{-1} |\S{\tilde{\phi}_{M}}|^{-1} \frac{|\S{\phi}|}{|\S{\tilde{\phi}}|} \cdot \frac{|\S{\tilde{\phi}_{M}}|}{|\S{\phi_{M}}|} & = | W_{\phi} |^{-1} | \S{\phi_{M}} |^{-1} \frac{| \S{\phi} |}{| \S{\tilde{\phi}} |} \\ & = |\N{\phi}|^{-1} \frac{| \S{\phi} |}{| \S{\tilde{\phi}} |} \\ & = | W^{0}_{\phi} |^{-1} | \S{\phi} |^{-1} \frac{| \S{\phi} |}{| \S{\tilde{\phi}} |} \\ & = | W^{0}_{\phi} |^{-1}| \S{\tilde{\phi}} |^{-1}. \end{align*} Let \[ C_{\tilde{\phi}} = m_{\phi}| \S{\tilde{\phi}} |^{-1}, \] and define \[ i^{\theta}_{\phi}(x) = |W^{0}_{\phi}|^{-1} \sum_{w \in W^{\theta}_{\phi, reg}(x)} s^{0}_{\phi}(w) | \det (w - 1)_{\mathfrak{a}^{G^{\theta}}_{M}} |^{-1}, \] where $W^{\theta}_{\phi, reg}(x)$ is the image of $\N{\phi, reg}^{\theta}(x)$ in $W^{\theta}_{\phi, reg}$. Hence we have shown the following lemma. \begin{lemma}\label{lemma: twisted spectral expansion} Suppose $\phi \in \cP{G} - \cPdt{G}$, $\theta \in \Sigma_{0}$ and $\omega \in Y$. Then \begin{align}\label{eq: twisted spectral expansion} \tIdt{\widetilde{G}^{\theta}}{, \phi} (\tilde{f}) = C_{\tilde{\phi}} \sum_{\omega' \in Y / \a(\S{\phi})} \sum_{x \in \S{\phi}^{\theta}(\omega)} i^{\theta}_{\phi}(x) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x), \,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}). \end{align} \end{lemma} \subsubsection{The endoscopic expansion} \label{subsubsec: endoscopic expansion} Parallel to the $(\theta, \omega)$-twisted spectral expansion \eqref{eq: twisted spectral expansion}, we will now expand the $\phi$-component of \eqref{eq: twisted endoscopic side}.
Note that if $\theta = id$ and $\omega = 1$, we can only expand the right hand side of \[ \Idt{\widetilde{G}}{, \phi}(\tilde{f}) - \Sdt{\widetilde{G}}{, \phi} (\tilde{f}) = \sum_{\widetilde{G}' \in \End{ell}{\widetilde{G}} - \{\widetilde{G}\} } \iota( \widetilde{G}, \widetilde{G}' ) \Sdt{\widetilde{G}'}{, \phi}(\tilde{f}^{\widetilde{G}'}) \] based on our temporary induction assumption. By Corollary~\ref{cor: ker1} we know $\text{Ker}^{1}(F, Z(\D{\widetilde{G}})) = \text{Ker}^{1}(F, Z(\D{\widetilde{G}'})) = 1$, so the formula \eqref{formula: endoscopic coefficient} applied to $\iota(\widetilde{G}, \widetilde{G}')$ can be simplified to \begin{align}\label{formula: endoscopic coefficient 1} \iota(\widetilde{G}, \widetilde{G}') = | \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} |^{-1} | \text{Out}_{\widetilde{G}}(\widetilde{G}') |^{-1} |\pi_{0}(\kappa_{\widetilde{G}^{\theta}})|^{-1}, \end{align} where \[ \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} = Z(\D{\widetilde{G}'})^{\Gal{}} Z(\D{\widetilde{G}})^{\Gal{}} / Z(\D{\widetilde{G}})^{\Gal{}}, \] and \[ \text{Out}_{\widetilde{G}}(\widetilde{G}') = \text{Aut}_{\widetilde{G}}(\widetilde{G}') / \D{\widetilde{G}'}Z(\D{\widetilde{G}})^{\Gal{}}. \] Note that $|\pi_{0}(\kappa_{\widetilde{G}^{\theta}})| = 1$ here, and the formula \eqref{formula: endoscopic coefficient 1} is given in (\cite{Arthur:1990}, Lemma 3.2). By applying Conjecture~\ref{conj: stable multiplicity formula} to $\widetilde{G}'$, we get \[ \Sdt{\widetilde{G}'}{, \phi'}(\tilde{f}') = \sum_{\omega' \in Y' / \a^{G'}(\S{\phi'})} | \S{\tilde{\phi}'} |^{-1} \sigma(\com[0]{\cS{\phi'}}) \tilde{f}' (\tilde{\phi}' \otimes \omega'), \,\,\,\,\, \tilde{f}' \in \bar{\mathcal{H}}(\widetilde{G}', \lif{\chi}') \] for $\phi' \in \P{G'}$.
By Lemma~\ref{lemma: vanishing}, the $\phi$-component of \eqref{eq: twisted endoscopic side} is a sum over \( \phi_{G} \in \P{G, \phi} \) and \[ \{ (\widetilde{G}', \phi') : \widetilde{G}' \in \tEnd{ell}{\widetilde{G}^{\theta}} \text{ and } \phi' \in \P{G', \phi_{G}} \} \] of the distributions $\Sdt{\widetilde{G}'}{, \phi'}(\tilde{f}')$. Again, because we are taking $\tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$, the contributions of $\phi_{G} \in \P{G, \phi}$ to the $\phi$-component of \eqref{eq: twisted endoscopic side} are the same. If we fix a parameter $\phi_{G} \in \P{G, \phi}$, then the first sum collapses to the constant multiple \[ | \P{G, \phi} | = m_{\phi}. \] Now let us fix $\phi_{G} := \phi^{\mathcal{E}}$ as a homomorphism from $\mathcal{L}_{\phi}$ to $\L{G}$, instead of a $\D{G}$-conjugacy class, and let $\cS{\phi}^{\theta} = \text{Cent} ( \Im \phi_{G}, \D{G} \rtimes \D{\theta} ) / Z(\D{G})^{\Gal{}}$. We observe that $(\widetilde{G}' ,\phi')$ will correspond to $(\phi_{G}, s)$ for $s \in \cS{\phi, ss}^{\theta}(\omega)$, where \[ \cS{\phi, ss}^{\theta}(\omega) = \{ s \in \cS{\phi, ss}^{\theta}: \a(s) = \omega\}, \] after taking suitable $\D{G}$-conjugation, and $s$ is determined up to $\cS{\phi}$-conjugation. Let us denote the set of $\cS{\phi}$-conjugacy classes in $\cS{\phi, ss}^{\theta}(\omega)$ by \[ \cS{\phi} \backslash \cS{\phi, ss}^{\theta}(\omega). \] Then this correspondence gives us a map \[ \{ (\widetilde{G}', \phi') : \widetilde{G}' \in \tEnd{ell}{\widetilde{G}^{\theta}} \text{ and } \phi' \in \P{G', \phi_{G}} \} \longrightarrow \cS{\phi} \backslash \cS{\phi, ss}^{\theta}(\omega). \] The point is that the contribution of $(\widetilde{G}', \phi')$ depends only on its image under this map, so we want to write the double sum over $(\widetilde{G}', \phi')$ as a single sum over the image.
Characterizing the image is equivalent to finding $s \in \cS{\phi, ss}^{\theta}(\omega)$ with $(\widetilde{G}'_{s}, \phi') \longrightarrow (\phi_{G}, s)$ such that $\widetilde{G}'_{s}$ is elliptic. If we define \begin{align*} \cS{\phi, ell}^{\theta} & = \{ s \in \cS{\phi, ss}^{\theta} : | Z(\com[0]{\cS{\phi, s}}) | < \infty \}, \\ \cS{\phi, ell}^{\theta}(\omega) & = \{ s \in \cS{\phi, ss}^{\theta}(\omega) : | Z(\com[0]{\cS{\phi, s}}) | < \infty \}, \end{align*} then for $s \in \cS{\phi, ell}^{\theta}(\omega)$ it is easy to see that $\widetilde{G}'_{s}$ is elliptic. The converse is not true, but the contribution from pairs $(\widetilde{G}'_{s}, \phi')$ with $s \notin \cS{\phi, ell}^{\theta}(\omega)$ is zero by the stable multiplicity formula~\eqref{formula: stable multiplicity}. In fact \[ \com[0]{\cS{\phi'}} = (\cS{\phi, s})^{0} \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} / \bar{Z}(\D{\widetilde{G}'})^{\Gal{}}, \] and $\sigma(\com[0]{\cS{\phi'}}) = 0$ unless $| Z(\com[0]{\cS{\phi'}}) | < \infty$. If we write \begin{align*} \cS{\phi, ell}'^{\theta} = \begin{cases} \cS{\phi, ell} - \{1\} & \text{ if } \theta = id \\ \cS{\phi, ell}^{\theta} & \text{ otherwise } \end{cases} \end{align*} and \begin{align*} \cS{\phi, ell}'^{\theta}(\omega) = \begin{cases} \cS{\tilde{\phi}, ell} - \{1\} & \text{ if } \theta = id, \, \omega =1 \\ \cS{\phi, ell}^{\theta}(\omega) & \text{ otherwise, } \end{cases} \end{align*} then the effective image of this map is $\cS{\phi} \backslash \cS{\phi, ell}'^{\theta}(\omega)$. The next problem is to count the fibres of this map. Since $\widetilde{G}'$ is taken up to isomorphism of endoscopic data, every pair in the fibre containing $(\widetilde{G}', \phi')$ has its endoscopic datum isomorphic to $\widetilde{G}'$, and hence the fibre can be obtained from the action of $\text{Aut}_{G}(G')$.
Moreover, $\phi'$ is taken to be a $\D{G}'$-conjugacy class, so the fibre should be isomorphic to \[ \text{Aut}_{G}(G') / S_{\phi, s} \text{Int}_{G}(G') \cong \text{Out}_{G}(G') / ( S_{\phi, s} \text{Int}_{G}(G') / \text{Int}_{G}(G')), \] where $\text{Int}_{G}(G') = \D{G}' Z(\D{G})^{\Gal{}}$ and $S_{\phi, s}$ is the preimage of $\cS{\phi, s}$ in $S_{\phi}$. Moreover, let us write \[ S_{\phi, s} \text{Int}_{G}(G') / \text{Int}_{G}(G') \cong S_{\phi, s} / S_{\phi, s} \cap \D{G}' Z(\D{G})^{\Gal{}}. \] So we can turn the $\phi$-component of \eqref{eq: twisted endoscopic side} into a sum over \( s \in \cS{\phi} \backslash \cS{\phi, ell}'^{\theta}(\omega), \) multiplied by the size of each fibre \[ | \text{Out}_{G}(G') | | S_{\phi, s} / S_{\phi, s} \cap \D{G}' Z(\D{G})^{\Gal{}} |^{-1}. \] In fact, it is more convenient to sum over the conjugacy classes in $\cS{\phi, ell}'^{\theta}(\omega)$ under the group $\com[0]{\cS{\tilde{\phi}}}$. So let us define \begin{align*} \mathcal{E}_{\phi, ell}'^{\theta} & = \com[0]{\cS{\tilde{\phi}}} \backslash \cS{\phi, ell}'^{\theta}, \\ \mathcal{E}_{\phi, ell}'^{\theta}(\omega) & = \com[0]{\cS{\tilde{\phi}}} \backslash \cS{\phi, ell}'^{\theta}(\omega), \end{align*} then changing to a sum over $\mathcal{E}_{\phi, ell}'^{\theta}(\omega)$ amounts to multiplying by \[ |\com[0]{\cS{\tilde{\phi}}} / \cS{\tilde{\phi}, s}^{0}| |\cS{\phi} / \cS{\phi, s}|^{-1} = | \cS{\phi, s} / \cS{\tilde{\phi}, s}^{0} | |\cS{\phi} / \com[0]{\cS{\tilde{\phi}}} |^{-1}.
\] Finally, we get a sum over $s \in \mathcal{E}_{\phi, ell}'^{\theta}(\omega)$ of the product of the following three terms \[ |\text{Out}_{\widetilde{G}}(\widetilde{G}')|^{-1} |\text{Out}_{G}(G')|, \] \[ | S_{\phi, s} / S_{\phi, s} \cap \D{G}' Z(\D{G})^{\Gal{}} |^{-1} |\S{\tilde{\phi}'}|^{-1} | \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} |^{-1} | \cS{\phi, s} / \cS{\tilde{\phi}, s}^{0} |, \] and \[ m_{\phi} |\cS{\phi} / \com[0]{\cS{\tilde{\phi}}} |^{-1} \sum_{\omega' \in Y' / \a^{G'}(\S{\phi'})} \sigma(\com[0]{\cS{\phi'}}) \tilde{f}^{\widetilde{G}'} (\tilde{\phi}' \otimes \omega') \] where $(G', \phi') \rightarrow (\phi_{G}, s)$. Note that there is an exact sequence \[ \xymatrix{1 \ar[r] & \D{D} \ar[r] & \text{Aut}_{\widetilde{G}}(\widetilde{G}') \ar[r] & \text{Aut}_{G}(G') \ar[r] & 1, } \] so \begin{align*} |\text{Out}_{\widetilde{G}}(\widetilde{G}')|^{-1} |\text{Out}_{G}(G')| & = |\text{Int}_{G}(G') / (\text{Int}_{\widetilde{G}}(\widetilde{G}') / \D{D})|^{-1} \\ & = | \D{G}' Z(\D{G})^{\Gal{}} / (\D{\widetilde{G}'}Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})|^{-1} \\ & = |Z(\D{G})^{\Gal{}} / Z(\D{G})^{\Gal{}} \cap \D{G}' (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})|^{-1} \\ & = |Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) |^{-1}.
\end{align*} Moreover, we have $\cS{\tilde{\phi}, s}^{0} = \cS{\phi, s}^{0}$ and $\com[0]{\cS{\tilde{\phi}}} = \com[0]{\cS{\phi}}$, so we can rewrite the expansion of the $\phi$-component of \eqref{eq: twisted endoscopic side} as a sum over $s \in \mathcal{E}_{\phi, ell}'^{\theta}(\omega)$ of the product of the following two terms \begin{align} | S_{\phi, s} / S_{\phi, s} \cap \D{G}' Z(\D{G})^{\Gal{}} |^{-1} |\S{\phi'}|^{-1} | \bar{Z}(\D{G}')^{\Gal{}} |^{-1} | \cS{\phi, s} / \cS{\phi, s}^{0} | \sigma(\com[0]{\cS{\phi'}}) \label{eq: twisted endoscopic expansion 1} \end{align} and \begin{align} \sum_{\omega' \in Y' / \a^{G'}(\S{\phi'})} m_{\phi} |\S{\phi}|^{-1} |\S{\phi'}| |\S{\tilde{\phi}'}|^{-1} | \bar{Z}(\D{G}')^{\Gal{}} | | \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} |^{-1} |Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) |^{-1} \tilde{f}' (\tilde{\phi}' \otimes \omega'). \label{eq: twisted endoscopic expansion 2} \end{align} As one can see, \eqref{eq: twisted endoscopic expansion 1} involves only $G$, and it can be simplified as in (\cite{Arthur:2013}, Section 4.4). So we will simply repeat the simplification given there. First we note that \[ |\S{\phi'}| = |\pi_{0}(\cS{\phi'})| = |\cS{\phi, s} \cap \bar{\D{G}'} / \com[0]{(\cS{\phi, s})} \bar{Z}(\D{G}')^{\Gal{}}|, \] where $\bar{\D{G}'}$ denotes the quotient \( \D{G}' Z(\D{G})^{\Gal{}} / Z(\D{G})^{\Gal{}}. \) Consequently, \begin{align*} &| S_{\phi, s} / S_{\phi, s} \cap \D{G}' Z(\D{G})^{\Gal{}} |^{-1} \, |\S{\phi'}|^{-1} \\ = & | \cS{\phi, s} / \cS{\phi, s} \cap \bar{\D{G}'} |^{-1} \, | \cS{\phi, s} \cap{\bar{\D{G}'}} / \com[0]{(\cS{\phi, s})} \bar{Z}(\D{G}')^{\Gal{}} | \\ = & |\cS{\phi, s} / (\cS{\phi, s})^{0} \bar{Z}(\D{G}')^{\Gal{}}|^{-1}.
\end{align*} The product of the first four factors of \eqref{eq: twisted endoscopic expansion 1} therefore equals \begin{align*} & |\cS{\phi, s} / \cS{\phi, s}^{0} \bar{Z}(\D{G}')^{\Gal{}}|^{-1} \cdot |\cS{\phi, s}^{0} \bar{Z}(\D{G}')^{\Gal{}} / (\cS{\phi, s})^{0} \ \bar{Z}(\D{G}')^{\Gal{}}|^{-1} \cdot | \bar{Z}(\D{G}')^{\Gal{}} |^{-1} \cdot | \cS{\phi, s} / \cS{\phi, s}^{0} | \\ \\ = & |\cS{\phi, s}^{0} \bar{Z}(\D{G}')^{\Gal{}} / \cS{\phi, s}^{0}| \cdot |\cS{\phi, s}^{0} / (\cS{\phi, s})^{0}|^{-1} \cdot |\cS{\phi, s}^{0} \cap (\cS{\phi, s})^{0} \bar{Z}(\D{G}')^{\Gal{}} / (\cS{\phi, s})^{0}| \cdot | \bar{Z}(\D{G}')^{\Gal{}} |^{-1} \\ \\ = & |\pi_{0}(\cS{\phi, s}^{0} )|^{-1} \cdot |\bar{Z}(\D{G}')^{\Gal{}} / \cS{\phi, s}^{0} \cap \bar{Z}(\D{G}')^{\Gal{}}| \cdot |\cS{\phi, s}^{0} \cap \bar{Z}(\D{G}')^{\Gal{}} / (\cS{\phi, s})^{0} \cap \bar{Z}(\D{G}')^{\Gal{}} | \cdot | \bar{Z}(\D{G}')^{\Gal{}} |^{-1} \\ \\ = & |\pi_{0}(\cS{\phi, s}^{0})|^{-1} \cdot |(\cS{\phi, s})^{0} \cap \bar{Z}(\D{G}')^{\Gal{}} |^{-1}. \end{align*} Furthermore, we can write \begin{align*} \sigma(\com[0]{\cS{\phi'}}) = & \sigma( (\cS{\phi, s})^{0} / (\cS{\phi, s})^{0} \cap \bar{Z}(\D{G}')^{\Gal{}}) \\ = & \sigma((\cS{\phi, s})^{0}) |(\cS{\phi, s})^{0} \cap \bar{Z}(\D{G}')^{\Gal{}}|. \end{align*} Hence the first term \eqref{eq: twisted endoscopic expansion 1} is equal to \[ |\pi_{0}(\cS{\phi, s}^{0})|^{-1} \sigma((\cS{\phi, s})^{0}). \] For the second term \eqref{eq: twisted endoscopic expansion 2}, let us denote \[ \tilde{f}^{\widetilde{G}'}(\tilde{\phi}' \otimes \omega') = \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', s), \,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}). 
\] Also notice $|\a^{G'}(\S{\phi'})| = |\S{\phi'} / \S{\tilde{\phi}'}|$ and $m_{\phi} |\S{\tilde{\phi}}|^{-1} = C_{\tilde{\phi}}$, so we can write it as \[ \sum_{\omega' \in Y' / \a^{G'}(\S{\phi'})} C_{\tilde{\phi}} |\a(\S{\phi})|^{-1}|\a^{G'}(\S{\phi'})| | \bar{Z}(\D{G}')^{\Gal{}} | | \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} |^{-1} |Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) |^{-1} \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi}' \otimes \omega', s). \] In view of \eqref{eq: twisted spectral expansion}, we need to turn this into a sum over $Y / \a(\S{\phi})$ instead of $Y' / \a^{G'}(\S{\phi'})$. To do so we need the following two lemmas. \begin{lemma} \label{lemma: twisted endoscopic expansion 1} Suppose $\phi_{v} \in \cuP{G_{v}}$, and $s_{v}$ is a semisimple element of $\cS{\phi_{v}}$ with $(G'_{v}, \phi'_{v}) \rightarrow (\phi_{v}, s_{v})$. If we assume the main local theorem (Theorem~\ref{thm: refined L-packet}) for the lift $\tilde{\phi}'_{v}$ of $\phi'_{v}$, then for any $\omega'_{v} \in \a(\S{\phi_{v}})$ we have \[ \tilde{f}'_{\widetilde{G}^{\theta}_{v}}(\tilde{\phi}_{v} \otimes \omega'_{v}, s_{v}) = \tilde{f}_{\widetilde{G}^{\theta}_{v}}'(\tilde{\phi}_{v}, s_{v}), \,\,\,\,\,\, \tilde{f}_{v} \in \bar{\mathcal{H}}(\widetilde{G}_{v}, \lif{\chi}_{v}). \] \end{lemma} \begin{proof} Since $\tilde{f}'_{\widetilde{G}^{\theta}_{v}}(\tilde{\phi}_{v}, s_{v})$ only depends on the image of $s_{v}$ in $\S{\phi_{v}}^{\theta}$ (see Lemma~\ref{lemma: induced twisted character}), according to the formula \eqref{formula: centralizer} for $S_{\phi_{v}}$, we can assume $s_{v}$ commutes with some $t_{v} \in \cS{\phi_{v}}$ such that $\a(t_{v}) = \omega_{v}'$. Note that $t_{v} \in \text{Aut}_{G_{v}}(G'_{v}) / Z(\D{G}_{v})^{\Gal{v}}$. If $t_{v} \in \text{Int}_{G_{v}}(G'_{v}) / Z(\D{G}_{v})^{\Gal{v}}$, then it is easy to see that $\omega_{v}' \in \a^{G'_{v}}(\S{\phi'_{v}})$, so there is nothing to prove.
If $t_{v} \notin \text{Int}_{G_{v}}(G'_{v}) / Z(\D{G}_{v})^{\Gal{v}}$, we denote by $\theta'$ the automorphism of $G'_{v}$ induced by $t_{v}$, which can be extended to $\widetilde{G}'_{v}$. Then it follows from Corollary~\ref{cor: theta twisting character} that $\cPkt{\tilde{\phi}'_{v}}^{\theta'} = \cPkt{\tilde{\phi}'_{v}} \otimes \omega_{v}'$. Since $\tilde{f}_{v}^{\widetilde{G}'_{v}}$ is $\text{Out}_{\widetilde{G}_{v}}(\widetilde{G}'_{v})$-invariant, we have $\tilde{f}_{v}^{\widetilde{G}'_{v}}(\tilde{\phi}'_{v}) = \tilde{f}_{v}^{\widetilde{G}'_{v}}((\tilde{\phi}'_{v})^{\theta'}) = \tilde{f}_{v}^{\widetilde{G}'_{v}}(\tilde{\phi}'_{v} \otimes \omega_{v}')$. \end{proof} Since $Z_{\widetilde{G}}(\mathbb{A}_{F}) = A_{\widetilde{G}}(\mathbb{A}_{F}) \cdot Z_{G}(\mathbb{A}_{F})$, we can identify $Y$ with the quotient \[ \frac{\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow H^{1}(W_{F}, \D{A}_{\widetilde{G}})\}}{\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F), \mathbb{C}^{\times})\}}, \] where the involved homomorphisms come from the following diagram \begin{align*} \label{diagram: global Langlands correspondence for characters} \xymatrix{ H^{1}(W_{F}, \D{D}) \ar[r] \ar[d]^{\simeq} & H^{1}(W_{F}, Z(\D{\widetilde{G}})) \ar[r] \ar[d] & H^{1}(W_{F}, \D{A}_{\widetilde{G}}) \ar[d]^{\simeq} \\ \text{Hom}(D(\mathbb{A}_{F})/D(F), \mathbb{C}^{\times}) \ar[r] & \text{Hom}( \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F), \mathbb{C}^{\times}) \ar[r] & \text{Hom}(A_{\widetilde{G}}(\mathbb{A}_{F}) / A_{\widetilde{G}}(F), \mathbb{C}^{\times}). } \end{align*} In the same way, we can identify $Y'$ with \[ \frac{\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow H^{1}(W_{F}, \D{A}_{\widetilde{G}'})\}}{ \text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}'(\mathbb{A}_{F})/ \widetilde{G}'(F), \mathbb{C}^{\times})\}}.
\] Note that $A_{\widetilde{G}} \cong A_{\widetilde{G}'}$ under the inclusion of $Z_{\widetilde{G}} = (Z_{\widetilde{G}})_{\theta}$ into $Z_{\widetilde{G}'}$, and we also have \[ \xymatrix{ H^{1}(W_{F}, \D{D}) \ar[r] & H^{1}(W_{F}, Z(\D{\widetilde{G}})) \ar[r] & H^{1}(W_{F}, \D{A}_{\widetilde{G}} ) \\ H^{1}(W_{F}, \D{D}) \ar[r] \ar@{=}[u] & H^{1}(W_{F}, Z(\D{\widetilde{G}'})) \ar[r] & H^{1}(W_{F}, \D{A}_{\widetilde{G}'}) \ar[u]^{\simeq}. } \] So $\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow H^{1}(W_{F}, \D{A}_{\widetilde{G}})\} = \text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow H^{1}(W_{F}, \D{A}_{\widetilde{G}'})\}$, and we can first sum over this group. It then remains to determine the quotient \[ \frac{|\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}'(\mathbb{A}_{F})/ \widetilde{G}'(F), \mathbb{C}^{\times})\}|}{|\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F), \mathbb{C}^{\times})\}|}. \] \begin{lemma} \label{lemma: twisted endoscopic expansion 2} \begin{align*} & | \bar{Z}(\D{G}')^{\Gal{}} | | \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} |^{-1} |Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) |^{-1} \\ = \quad & \frac{|\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}'(\mathbb{A}_{F})/ \widetilde{G}'(F), \mathbb{C}^{\times})\}|}{|\text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F), \mathbb{C}^{\times})\}|}.
\end{align*} \end{lemma} \begin{proof} From the proof of Lemma~\ref{lemma: identification}, we see \[ \text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}'(\mathbb{A}_{F})/ \widetilde{G}'(F), \mathbb{C}^{\times})\} \cong Z(\D{G}')^{\Gal{}} / (Z(\D{\widetilde{G}'})^{\Gal{}} / \D{D}), \] and \[ \text{Ker} \{ H^{1}(W_{F}, \D{D}) \rightarrow \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F), \mathbb{C}^{\times})\} \cong Z(\D{G})^{\Gal{}} / (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}). \] Therefore it is enough to show \begin{align} \label{eq: twisted endoscopic expansion 3} | \bar{Z}(\D{G}')^{\Gal{}} | | \bar{Z}(\D{\widetilde{G}'})^{\Gal{}} |^{-1} |Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) |^{-1} = \frac{|Z(\D{G}')^{\Gal{}} / (Z(\D{\widetilde{G}'})^{\Gal{}} / \D{D})|}{|Z(\D{G})^{\Gal{}} / (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})|}. \end{align} We start by considering the following exact sequence \begin{align*} 1 \longrightarrow (\D{G}' \cap Z(\D{G})^{\Gal{}}) / (\D{G}' \cap (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})) \longrightarrow Z(\D{G})^{\Gal{}} / (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) \\ \longrightarrow Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}) \longrightarrow 1. \end{align*} It follows that \[ |Z(\D{G})^{\Gal{}} / (\D{G}' \cap Z(\D{G})^{\Gal{}})(Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})|^{-1} = \frac{|(\D{G}' \cap Z(\D{G})^{\Gal{}}) / (\D{G}' \cap (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D}))|}{|Z(\D{G})^{\Gal{}} / (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})|}.
\] If we write \[ (\D{G}' \cap Z(\D{G})^{\Gal{}}) / (\D{G}' \cap (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})) = (Z(\D{G}')^{\Gal{}} \cap Z(\D{G})^{\Gal{}}) / ((Z(\D{\widetilde{G}'}) \cap Z(\D{\widetilde{G}})^{\Gal{}})/\D{D}), \] then \begin{align*} &|\D{G}' \cap Z(\D{G})^{\Gal{}} / \D{G}' \cap (Z(\D{\widetilde{G}})^{\Gal{}} / \D{D})| |\bar{Z}(\D{G}')^{\Gal{}}| |\bar{Z}(\D{\widetilde{G}'})^{\Gal{}}|^{-1} \\ = \quad & |Z(\D{G}')^{\Gal{}} / ((Z(\D{\widetilde{G}'}) \cap Z(\D{\widetilde{G}})^{\Gal{}})/\D{D})| |\bar{Z}(\D{\widetilde{G}'})^{\Gal{}}|^{-1} \\ = \quad & |Z(\D{G}')^{\Gal{}} / (Z(\D{\widetilde{G}'})^{\Gal{}} / \D{D})|. \end{align*} Hence \eqref{eq: twisted endoscopic expansion 3} holds. \end{proof} As a consequence of Lemma~\ref{lemma: twisted endoscopic expansion 1} and Lemma~\ref{lemma: twisted endoscopic expansion 2}, we can replace the sum in \eqref{eq: twisted endoscopic expansion 2} by a sum over $Y / \a(\S{\phi})$ and get \[ \sum_{\omega' \in Y / \a(\S{\phi})} C_{\tilde{\phi}} \tilde{f}'_{\widetilde{G}^{\theta}} (\tilde{\phi} \otimes \omega', s). \] To sum up, we have shown that the $\phi$-component of \eqref{eq: twisted endoscopic side} has the expansion \begin{align*} \sum_{s \in \mathcal{E}'^{\theta}_{\phi, ell}(\omega)} |\pi_{0}(\cS{\phi, s}^{0})|^{-1} \sigma((\cS{\phi, s})^{0}) \sum_{\omega' \in Y / \a(\S{\phi})} C_{\tilde{\phi}} \tilde{f}'_{\widetilde{G}^{\theta}} (\tilde{\phi} \otimes \omega', s) \\ = \sum_{\omega' \in Y / \a(\S{\phi})} C_{\tilde{\phi}} \sum_{s \in \mathcal{E}'^{\theta}_{\phi, ell}(\omega)} |\pi_{0}(\cS{\phi, s}^{0})|^{-1} \sigma((\cS{\phi, s})^{0}) \tilde{f}'_{\widetilde{G}^{\theta}} (\tilde{\phi} \otimes \omega', s) .
\end{align*} Finally, by the same argument as in the proof of Lemma~\ref{lemma: induced twisted character}, there exists a family of global lifts $\cPkt{\tilde{\phi}'}$ for all $s \in \cS{\phi,ss}^{\theta}$ with image $x$ in $\S{\phi}^{\theta}$, such that the distributions $\tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', s)$ are the same. So we can write \[ \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', s) = \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x). \] Moreover, we can split the sum over $s \in \mathcal{E}'^{\theta}_{\phi, ell}(\omega)$ into a double sum over $x \in \S{\phi}^{\theta}(\omega)$ and $s \in \mathcal{E}'^{\theta}_{\phi, ell}(x)$, where $ \mathcal{E}'^{\theta}_{\phi, ell}(x)$ is the subset of $ \mathcal{E}'^{\theta}_{\phi, ell}$ that is mapped to $x$. If we define \[ e'^{\theta}_{\phi}(x) = \sum_{s \in \mathcal{E}'^{\theta}_{\phi, ell}(x)} |\pi_{0}(\cS{\phi, s}^{0})|^{-1} \sigma((\cS{\phi, s})^{0}), \] then we get the following lemma. \begin{lemma} \label{lemma: twisted endoscopic expansion} Suppose $\phi \in \cP{G}$, $\theta \in \Sigma_{0}$ and $\omega \in Y$. If $\theta = id$ and $\omega = 1$, then \begin{align} \label{eq: endoscopic expansion} \Idt{\widetilde{G}}{, \phi}(\tilde{f}) - \Sdt{\widetilde{G}}{, \phi} (\tilde{f}) = C_{\tilde{\phi}} \sum_{\omega' \in Y / \a(\S{\phi})} \sum_{x \in \S{\tilde{\phi}}} e'_{\phi}(x) \tilde{f}_{\widetilde{G}}' (\tilde{\phi} \otimes \omega', x). \end{align} Otherwise, \begin{align} \label{eq: twisted endoscopic expansion} \tIdt{\widetilde{G}^{\theta}}{, \phi} (\tilde{f}) = C_{\tilde{\phi}} \sum_{\omega' \in Y / \a(\S{\phi})} \sum_{x \in \S{\phi}^{\theta}(\omega)} e'^{\theta}_{\phi}(x) \tilde{f}_{\widetilde{G}^{\theta}}' (\tilde{\phi} \otimes \omega', x). \end{align} \end{lemma} \begin{corollary} \label{cor: stability} Suppose $\phi \in \cP{G}$ and $\S{\tilde{\phi}} = 1$. Then the distribution $\Idt{\widetilde{G}}{, \phi}(\tilde{f})$ is stable.
\end{corollary} \begin{proof} Since $\S{\tilde{\phi}} = 1$, it follows from \eqref{eq: endoscopic expansion} that \[ \Idt{\widetilde{G}}{, \phi}(\tilde{f}) = \Sdt{\widetilde{G}}{, \phi} (\tilde{f}) + C_{\tilde{\phi}} \sum_{\omega' \in Y / \a(\S{\phi})} e'_{\phi}(1) \tilde{f}_{\widetilde{G}}' (\tilde{\phi} \otimes \omega', 1). \] Note that $\tilde{f}_{\widetilde{G}}' (\tilde{\phi} \otimes \omega', 1)$ is defined by induction from global L-packets of Levi subgroups of $\widetilde{G}$, and hence is stable. Therefore $\Idt{\widetilde{G}}{, \phi}(\tilde{f})$ is stable. \end{proof} Later on, we will compare the formulas in Lemma~\ref{lemma: twisted spectral expansion} and Lemma~\ref{lemma: twisted endoscopic expansion}. Note that it follows from Proposition~\ref{prop: endoscopy of complex group} that for $x \in \S{\phi}^{\theta}$, \[ i^{\theta}_{\phi}(x) - e'^{\theta}_{\phi}(x) = \begin{cases} 0 & \text{ if } x \neq 1, \\ \sigma(\com[0]{\cS{\phi}}) & \text{ if } x = 1. \end{cases} \] \section{Refined L-packet} \label{sec: refined L-packet} \subsection{Beginning of proofs} \label{subsec: beginning of proofs} In the following sections, we are going to prove the main local theorem (Theorem~\ref{thm: refined L-packet}) along with the global theorem (Theorem~\ref{thm: main global}). First we need to impose our induction assumptions. They consist of a local part and a global part. Let $F$ be either local or global. We denote \begin{align*} G(n) & := Sp(2n), SO(2n+2, \eta), \\ \widetilde{G}(n) & := GSp(2n), GSO(2n+2, \eta). \end{align*} Let \[ G = G(n_{1}) \times G(n_{2}) \times \cdots \times G(n_{q}) \] and $\widetilde{G}$ be the corresponding similitude group (see \eqref{eq: similitude}); then our induction assumptions can be stated as follows. {\bf Local Induction Assumption:} The main local theorem (Theorem~\ref{thm: refined L-packet}) holds for $\widetilde{G}$, when $n_{i} < N$ for all $1 \leqslant i \leqslant q$.
{\bf Global Induction Assumption:} The global theorem (Theorem~\ref{thm: main global}) holds for $\widetilde{G}$, when $\sum_{i = 1}^{q} n_{i} < N$. \begin{remark} \label{rk: induction assumption} When $\widetilde{G} = GSp(2N)$, these assumptions imply that all the local and global theorems hold for the Levi subgroups and twisted endoscopic groups of $\widetilde{G}$. But this is not true when $\widetilde{G} = GSO(2N + 2, \eta)$, for it can have twisted endoscopic groups of the form $G(Sp(2N_{1}) \times Sp(2N_{2}))$ with $N = N_{1} + N_{2}$ and $N_{1}, N_{2} \geqslant 0$. To fix this, we will first prove the local and global theorems for $\widetilde{G}$ based on our induction assumptions, when $G$ does not contain any factor of $SO(2N+2, \eta)$. Then we can add those results to our induction assumptions and repeat the same arguments to prove the rest of the cases. \end{remark} We will first establish the main local theorem (Theorem~\ref{thm: refined L-packet}) for $\widetilde{G} = \widetilde{G}(N)$, which is the most important case. In view of Remark~\ref{rk: refined L-packet}, we can further assume $F$ is nonarchimedean. Under our local induction assumption, we can prove many cases of the main local theorem. The precise statement is formulated in the following lemma. \begin{lemma} \label{lemma: refined L-packet for non-discrete parameter} Suppose $\phi \in \cPbd{G} - \cPdt{G}$; then one can assign an L-packet $\cPkt{\tilde{\phi}}$ to any lift $\tilde{\phi}$ such that it satisfies (1) and (2) of the main local theorem (Theorem~\ref{thm: refined L-packet}). Furthermore, the $(\theta, \omega)$-twisted character relation \eqref{eq: theta twisted character relation} holds for $\theta \in \Sigma_{0}$ and semisimple $s \in \cS{\phi}^{\theta}$ such that $|\cS{\phi, s}^{0}| = \infty$. \end{lemma} \begin{proof} Suppose $\phi \in \cPbd{G} - \cPdt{G}$; then $\phi$ factors through $\phi_{M} \in \cPdt{M}$ for some proper Levi subgroup $M$ of $G$.
Since \[ M \cong G(m) \times \prod_{i} GL(n_{i}) \] with $m < N$, by our local induction assumption we can define a refined L-packet $\cPkt{\tilde{\phi}_{M}}$ associated to $\tilde{\phi}_{M}$. Then we can take the local packet $\cPkt{\tilde{\phi}}$ for $\tilde{\phi}$ to consist of the irreducible constituents of the representations induced from $\cPkt{\tilde{\phi}_{M}}$. Because $\cPkt{\phi}$ is also obtained by induction from $\cPkt{\phi_{M}}$, we can easily see that $\cPkt{\tilde{\phi}}$ will satisfy (1) and (2) of the main local theorem. The $(\theta, \omega)$-twisted character relation \eqref{eq: theta twisted character relation} follows from the usual descent argument. For $(G', \phi') \rightarrow (\phi, s)$, let $T_{\phi, s}$ be a maximal torus of $(S_{\phi, s})^{0}$, which is nontrivial by our assumption that $|\cS{\phi, s}^{0}| = \infty$. Then $\D{M}' = \text{Cent}(T_{\phi, s}, \D{G}')$ defines a proper Levi subgroup of $\D{G}'$ such that $\phi'$ factors through $\phi'_{M} \in \cPdt{M'}$. Moreover, $M' \in \End{ell}{M^{\theta}}$ for a proper $\theta$-stable Levi subgroup $M$ of $G$, which is determined by $\D{M} = \text{Cent}(T_{\phi, s}, \D{G})$. So \[ \tilde{f}'(\tilde{\phi}') = \tilde{f}^{\widetilde{M}'}(\tilde{\phi}'_{M}) = \sum_{[\tilde{\pi}_{M}] \in \cPkt{\tilde{\phi}_{M}}} \tilde{f}_{\widetilde{M}^{\theta}}(\tilde{\pi}_{M}, \omega) = \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega), \] where $\omega = \a(s)$. \end{proof} \begin{remark} We have not shown the uniqueness of $\cPkt{\tilde{\phi}}$ here. In fact, it will follow from the character relation (see Theorem~\ref{thm: twisted character relation for elliptic parameter}). \end{remark} The key issue in proving the main local theorem is to find a candidate for the stable distribution associated with any lift $\tilde{\phi}$ of $\phi \in \cPbd{G}$.
As one can see from Lemma~\ref{lemma: refined L-packet for non-discrete parameter}, the critical case is when $\phi \in \cPdt{G}$. The way to find such a distribution is to lift $\phi$ to a global parameter $\dot{\phi}$ and use the global stable distribution $\Sdt{\lif{\dot{G}}}{, \dot{\phi}}$ in the stabilized trace formula. Under some assumptions on this lifted global parameter $\dot{\phi}$, we can obtain the local stable distribution associated with $\tilde{\phi}$ using an argument based on stability (cf. Corollary~\ref{cor: refined L-packet}). Let us write $S_{\infty}$ for the set of archimedean places of a global field $\dot{F}$, and $S_{\infty}(u) = S_{\infty} \cup \{u\}$ for any nonarchimedean place $u$. Suppose $F = \dot{F}_{u}$ and $\dot{G}_{u} = G$. Let $\dot{X} = \text{Hom}(\lif{\dot{G}}(\mathbb{A}_{\dot{F}})/Z_{\lif{\dot{G}}}(\mathbb{A}_{\dot{F}})\dot{G}(\mathbb{A}_{\dot{F}}), \mathbb{C}^{\times})$. \begin{theorem} \label{thm: standard argument on stability} For $\phi \in \cPdt{G}$, suppose $\dot{\phi} \in \cPdt{\dot{G}}$ is a global lift of $\phi$ with $\dot{\phi}_{u} = \phi$, which also satisfies the following additional conditions: \begin{enumerate} \item $\dot{\phi}_{v} \in \cuP{\dot{G}_{v}} - \cPel{\dot{G}_{v}} \text{ for all } v \notin S_{\infty}(u)$; \item $\S{\lif{\dot{\phi}}} = 1$; \item $\Sigma_{0}$-strong multiplicity one holds for $\lif{\dot{\phi}}$. \end{enumerate} Then one can assign an L-packet $\cPkt{\tilde{\phi}}$ to any lift $\tilde{\phi}$ of $\phi$ satisfying (1) and (2) of the main local theorem (Theorem~\ref{thm: refined L-packet}). \end{theorem} \begin{proof} In view of Lemma~\ref{lemma: refined L-packet for non-discrete parameter}, the first condition on our global lift $\dot{\phi}$ just means that the main local theorem (except for the $(\theta_{0}, \omega)$-twisted character relation in the even orthogonal case) holds for all $\lif{\dot{\phi}}_{v}$ $(v \neq u)$.
The second condition means that \[ tr R^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) = I^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) = S^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) \neq 0 \] for $\lif{\dot{f}} \in \bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$, which follows from Corollary~\ref{cor: stability} and the fact that $\dot{\phi} \in \cPdt{\dot{G}}$ (cf. \eqref{eq: discrete part vs discrete spectrum}). It follows from Proposition~\ref{prop: discrete spectrum} that \begin{align} I^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) = m_{\dot{\phi}} \sum_{\dot{\omega} \in \dot{Y} / \a(\S{\dot{\phi}})} \sum_{\lif{\dot{\r}}} \lif{\dot{f}}_{\lif{\dot{G}}}(\lif{\dot{\r}} \otimes \dot{\omega}) \label{eq: standard argument on stability 1} \end{align} for $\lif{\dot{f}} \in \bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$, where the sum over $\lif{\dot{\r}}$ is taken over representatives of $\lif{\bar{\Pi}}_{\dot{\phi}, \lif{\dot{\zeta}}} / \dot{X}$ inside $\mathcal{A}_{2}(\lif{\dot{G}})$. Here we will always view representations of $\lif{\dot{G}}(\mathbb{A}_{\dot{F}})$ as $\bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$-modules. Since $I^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}})$ is stable, it is stable at every place. If we take $\lif{\dot{f}} = \bigotimes_{w}\lif{\dot{f}}_{w}$ and fix $\bigotimes_{w \neq v}\lif{\dot{f}}_{w}$ for $v \neq u$, then by Corollary~\ref{cor: refined L-packet} the coefficients of $\lif{\dot{f}}_{v}(\lif{\dot{\r}}_{v})$ in $I^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}})$ must be the same for all $[\lif{\dot{\r}}_{v}] \in \cPkt{\lif{\dot{\phi}}_{v}}$.
Moreover, if we fix a representation $\lif{\dot{\r}} \in \mathcal{A}_{2}(\lif{\dot{G}})$, by varying $\bigotimes_{w \neq v}\lif{\dot{f}}_{w}$ and using the linear independence of characters of $\bigotimes_{w \neq v}\bar{\mathcal{H}}(\lif{\dot{G}}_{w}, \lif{\dot{\chi}}_{w})$-modules, we observe that for $v \neq u$ \[ [\lif{\dot{\r}}_{v}] \bigotimes (\bigotimes_{w \neq v} [\lif{\dot{\r}}_{w}] ) \in \lif{\bar{\Pi}}_{\dot{\phi}, \lif{\dot{\zeta}}} \] contributes to \eqref{eq: standard argument on stability 1} if and only if \[ [\lif{\dot{\r}}'_{v}] \bigotimes (\bigotimes_{w \neq v} [\lif{\dot{\r}}_{w}] ) \in \lif{\bar{\Pi}}_{\dot{\phi}, \lif{\dot{\zeta}}} \] also contributes to \eqref{eq: standard argument on stability 1} for all $[\lif{\dot{\r}}'_{v}] \in \cPkt{\lif{\dot{\phi}}_{v}}$, where $\cPkt{\lif{\dot{\phi}}_{v}}$ contains $[\lif{\dot{\r}}_{v}]$. We still fix $\lif{\dot{\r}}$ and hence $\cPkt{\lif{\dot{\phi}}_{v}}$ for all $v \neq u$. Then \eqref{eq: standard argument on stability 1} will contain $\bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$-modules of the form \[ [\lif{\dot{\r}}_{u}] \bigotimes (\bigotimes_{v \neq u} [\lif{\dot{\r}}_{v}]) \] where $[\lif{\dot{\r}}_{v}]$ ranges over $\cPkt{\lif{\dot{\phi}}_{v}}$ for all $v \neq u$. Suppose there is a distinct $\bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$-module \[ [\lif{\dot{\r}}'_{u}] \bigotimes (\bigotimes_{v \neq u} [\lif{\dot{\r}}_{v}]) \] in \eqref{eq: standard argument on stability 1} such that $[\lif{\dot{\r}}_{v}] \in \cPkt{\lif{\dot{\phi}}_{v}}$ for all $v \neq u$; then $[\lif{\dot{\r}}'_{u}] \neq [\lif{\dot{\r}}_{u}] \otimes \omega$ for any character $\omega \in X$. Otherwise, there would exist $\dot{\omega} \in \dot{Y}$ such that $[\lif{\dot{\r}}_{u}] \otimes \dot{\omega}_{u} = [\lif{\dot{\r}}'_{u}] \neq [\lif{\dot{\r}}_{u}]$ and $\cPkt{\lif{\dot{\phi}}_{v}} = \cPkt{\lif{\dot{\phi}}_{v}} \otimes \dot{\omega}_{v}$ for all $v \neq u$. This is impossible because of the third condition, i.e.
$\Sigma_{0}$-strong multiplicity one holds for $\lif{\dot{\phi}}$. Therefore if we consider all $[\tilde{\pi}] \in \lif{\bar{\Pi}}_{\phi, \lif{\zeta}}$ such that \[ [\tilde{\pi}] \bigotimes (\bigotimes_{v \neq u} \cPkt{\lif{\dot{\phi}}_{v}}) \] is contained in \eqref{eq: standard argument on stability 1}, this gives a non-empty set $\cPkt{\tilde{\phi}}$ of representatives of \( \lif{\bar{\Pi}}_{\phi, \lif{\zeta}} / X \) in $\lif{\bar{\Pi}}_{\phi, \lif{\zeta}}$. To see why this gives all the representatives, one just needs to take the test function $\lif{\dot{f}} = \otimes_{v}\lif{\dot{f}}_{v}$ such that $\lif{\dot{f}}_{u}$ is supported on $\lif{\dot{Z}}_{\dot{F}_{u}}\dot{G}(\dot{F}_{u})$; then it amounts to considering representations of \[ \lif{\dot{Z}}_{\dot{F}_{u}} \dot{G}(\dot{F}_{u}) \times \prod_{v \neq u} \lif{\dot{G}}(\dot{F}_{v}). \] By the same reasoning using stability, one can conclude that \[ \cPkt{\phi} \bigotimes (\bigotimes_{v \neq u} \cPkt{\lif{\dot{\phi}}_{v}}) \] is contained in \eqref{eq: standard argument on stability 1}; therefore $\cPkt{\tilde{\phi}}$ must contain all representatives of $\lif{\bar{\Pi}}_{\phi, \lif{\zeta}} / X$ in $\lif{\bar{\Pi}}_{\phi, \lif{\zeta}}$. Moreover, it follows again from $\Sigma_{0}$-strong multiplicity one and stability of \eqref{eq: standard argument on stability 1} that \[ \tilde{f}(\tilde{\phi}) := \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}}} \tilde{f}_{\widetilde{G}}(\tilde{\pi}) \] is stable. This shows that the packet $\cPkt{\tilde{\phi}}$ satisfies properties (1) and (2) of Theorem~\ref{thm: refined L-packet}.
\end{proof} \begin{remark} \label{rk: standard argument on stability} \begin{enumerate} \item Following the proof, we can rewrite \eqref{eq: standard argument on stability 1} as \[ I^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) = m_{\dot{\phi}} \sum_{\dot{\omega} \in \dot{Y} / \a(\S{\dot{\phi}})} \sum_{[\lif{\dot{\r}}] \in \cPkt{\lif{\dot{\phi}}} \otimes \dot{\omega}} \lif{\dot{f}}_{\lif{\dot{G}}}(\lif{\dot{\r}} ) \] where \[ \cPkt{\lif{\dot{\phi}}} = \cPkt{\tilde{\phi}} \bigotimes (\bigotimes_{v \neq u} \cPkt{\lif{\dot{\phi}}_{v}} ) . \] If we define \[ \lif{\dot{f}}(\lif{\dot{\phi}}) := \prod_{v} \lif{\dot{f}}_{v}(\lif{\dot{\phi}}_{v}) , \] then we get the stable multiplicity formula for our lift $\lif{\dot{\phi}}$ \[ S^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) = I^{\lif{\dot{G}}}_{disc, \dot{\phi}}(\lif{\dot{f}}) = m_{\dot{\phi}} \sum_{\dot{\omega} \in \dot{Y} / \a(\S{\dot{\phi}})} \lif{\dot{f}}(\lif{\dot{\phi}} \otimes \dot{\omega}). \] This identity will be used in the proof of Theorem~\ref{thm: refined L-packet for discrete parameter}. \item The statement of this theorem indicates that we need a lifting result for the existence of such $\dot{\phi}$. In fact, there is a standard argument using the simple invariant trace formula which provides a global lift on which one is allowed to impose some local conditions. That argument is carried out in considerable detail in (\cite{Arthur:2013}, Sections 6.2 and 6.3), and the local conditions that Arthur imposes already take care of our first additional condition in most cases. Even though the global lift which Arthur uses does not necessarily satisfy the other two conditions, his argument is still flexible enough to leave us plenty of room to maneuver.
In fact, it is not hard to impose the second condition after we give a combinatorial description of the exact sequence \begin{align*} \xymatrix{1 \ar[r] & \S{\lif{\dot{\phi}}} \ar[r]^{\iota} & \S{\dot{\phi}} \ar[r]^{\a \quad \quad \quad \quad \quad \quad } & \text{Hom}(\lif{\dot{G}}(\mathbb{A}_{\dot{F}})/\lif{\dot{G}}(\dot{F})\dot{G}(\mathbb{A}_{\dot{F}}), \mathbb{C}^{\times}).} \end{align*} However, for technical reasons the third condition is not so easy to satisfy, and it seems that we have asked for something too strong. By tracking our proof carefully, one observes that it is enough to have \[ \prod^{aut}_{v} \a(\S{\dot{\phi}_{v}}^{\Sigma_{0}}) = \prod^{aut}_{v \neq u} \a(\S{\dot{\phi}_{v}}^{\Sigma_{0}}). \] This condition can also be interpreted in terms of strong multiplicity one: it means that for any $\dot{\tilde{\pi}} \in \mathcal{A}(\dot{\widetilde{G}})$ such that $[\dot{\tilde{\pi}}_{v}] \in \cPkt{\lif{\dot{\phi}}_{v}}$ for all $v \neq u$, $[\dot{\tilde{\pi}}]$ must also be in $\cPkt{\lif{\dot{\phi}}}$. If this condition is satisfied, we say $\Sigma_{0}$-strong multiplicity one holds for $\lif{\dot{\phi}}$ {\bf at the place} $u$. Establishing this property is the most technical part of this paper. \end{enumerate} \end{remark} \subsection{A combinatorial description of $\S{\phi}$} \label{sec: combinatorial description} Now we will give a combinatorial description of the exact sequences \eqref{eq: local twisted endoscopic sequence} and \eqref{eq: global twisted endoscopic sequence}. We assume $F$ is either local or global. Suppose $G = G(n)$, $\phi \in \cP{G}$ if $F$ is global or $\cPbd{G}$ if $F$ is local, and \[ \phi = l_{1} \phi_{1} \# \cdots \# l_{r} \phi_{r}, \] where $\phi_{i} \in \Psm{N_{i}}$ for $1 \leqslant i \leqslant r$.
From the discussion of Section~\ref{sec: Arthur's theory}, the set of indices can be written as a disjoint union of \[ I_{\phi, O} \sqcup I_{\phi, S} \sqcup J_{\phi} \sqcup J_{\phi}^{\vee} \] where $I_{\phi, O}$ ($I_{\phi, S}$) is the set of indices that index self-dual parameters of orthogonal (symplectic) type. In particular, since we are considering $G$ to be either a special even orthogonal group or a symplectic group, $\D{G}$ will always be orthogonal and hence the multiplicities $l_{i}$ must be even for $i \in I_{\phi, S}$. On the other hand, let us denote \begin{align*} I^{odd}_{\phi, O} & = \{ i \in I_{\phi, O} : l_{i} \text{ is odd }\}, \\ I^{even}_{\phi, O} & = \{ i \in I_{\phi, O} : l_{i} \text{ is even }\}. \end{align*} Moreover, let $S$ and $T$ be subsets of $I^{odd}_{\phi, O}$ and $I^{even}_{\phi, O}$ respectively, with the condition that \( \sum_{i \in S \cup T} N_{i} \) is even if $G$ is special even orthogonal. And we allow $S$ and $T$ to be empty sets. Then the pair of such sets modulo the following equivalence relation gives us the combinatorial object that we need to substitute for $\S{\phi}$, i.e. \[ \mathcal{P}_{\phi} = \{ (S, T) \} / (S, T) \sim (S^{c}, T) \] where $S^{c}$ is the complement of $S$ in $I^{odd}_{\phi, O}$. There is a natural map from $\mathcal{P}_{\phi}$ to $\text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times})$ if $F$ is local (resp. $\text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times})$ if $F$ is global), which sends \[ (S, T) \longmapsto ( \prod_{i \in S \cup T} \eta_{\phi_{i}} ) \circ \c \] where $\eta_{\phi_{i}}$ is the central character of $\r_{\phi_{i}}$. 
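To illustrate this definition, here is a small example (constructed for illustration only, and not one of the cases analyzed later). Suppose $G = Sp(2n)$ and \[ \phi = \phi_{1} \# 2 \phi_{2}, \] with $\phi_{1}, \phi_{2}$ both of orthogonal type and $N_{1}$ odd, so that $N_{1} + 2N_{2} = 2n + 1$, $I^{odd}_{\phi, O} = \{1\}$ and $I^{even}_{\phi, O} = \{2\}$. Since $G$ is symplectic, there is no parity condition on $\sum_{i \in S \cup T} N_{i}$, and the four pairs \[ (\emptyset, \emptyset), \quad (\{1\}, \emptyset), \quad (\emptyset, \{2\}), \quad (\{1\}, \{2\}) \] fall into exactly two classes under $(S, T) \sim (S^{c}, T)$, represented by $(\emptyset, \emptyset)$ and $(\emptyset, \{2\})$, so $|\mathcal{P}_{\phi}| = 2$. Moreover, since $\det \phi = \eta_{\phi_{1}} \eta_{\phi_{2}}^{2} = \eta_{\phi_{1}}$ is trivial here, the two representatives of each class have the same image under the map above, which sends the first class to the trivial character and the second to $\eta_{\phi_{2}} \circ \c$; in particular the kernel of this map is trivial if and only if $\eta_{\phi_{2}} \circ \c \neq 1$.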
Let us denote this map by $\a_{\mathcal{P}}$ and its kernel by $\mathcal{P}_{\tilde{\phi}}$; then we get a sequence \begin{align} \label{eq: local combinatorial sequence} \xymatrix{ 1 \ar[r] & \mathcal{P}_{\tilde{\phi}} \ar[r] & \mathcal{P}_{\phi} \ar[r]^{\a_{\mathcal{P}} \quad \quad \quad \quad } & \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times}) } \end{align} if $F$ is local, and \begin{align} \label{eq: global combinatorial sequence} \xymatrix{ 1 \ar[r] & \mathcal{P}_{\tilde{\phi}} \ar[r] & \mathcal{P}_{\phi} \ar[r]^{\a_{\mathcal{P}} \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}) } \end{align} if $F$ is global. To compare these sequences with \eqref{eq: local twisted endoscopic sequence} and \eqref{eq: global twisted endoscopic sequence}, we need a map connecting $\S{\phi}$ and $\mathcal{P}_{\phi}$. To define such a map, we consider semisimple $s \in \cS{\phi}$, and $(G_{s}', \phi') \rightarrow (\phi, s)$. In general, $G'_{s}$ may not be elliptic, but it will lie in $\End{ell}{M}$ for some Levi subgroup $M$ of $G$. Since $M$ is a product of general linear groups with a group $G_{-}$ of the same type as $G$ with smaller rank, $G'_{s}$ contains a factor $G'_{I} \times G'_{II} \in \End{ell}{G_{-}}$ and $\phi'$ will decompose accordingly. Suppose $\phi_{-}$ is the component of $\phi$ contributing to $G_{-}$, and \[ \phi_{-}' = \phi'_{I} \times \phi'_{II} \] if $G$ is special even orthogonal, or \[ \phi_{-}' = (\phi'_{I} \otimes \eta_{\phi'_{I}}) \times \phi'_{II} \] if $G$ and $G'_{I}$ are symplectic. In either case, $\phi'_{I}, \phi'_{II}$ give a partition of simple parameters in $\phi_{-}$. Let $S$ ($T$) be the subset of $I^{odd}_{\phi, O}$ ($I^{even}_{\phi, O}$) parametrizing simple parameters in $\phi'_{I}$ with odd multiplicities.
It is easy to see that $(S, T) \in \mathcal{P}_{\phi}$, so we get a map $\bold{c}^{G} = \bold{c}: \cS{\phi} \longrightarrow \mathcal{P}_{\phi}$ in this way. Moreover we have the following lemma. \begin{lemma} \label{lemma: combinatorial description} \begin{enumerate} \item The map $\bold{c}$ defined above will factor through $\S{\phi}$, and it gives a bijection between $\S{\phi}$ and $\mathcal{P}_{\phi}$. \item If we denote the bijection in (1) still by $\bold{c}$, then we have a commutative diagram. \[ \xymatrix{ 1 \ar[r] \ar@{=}[d] & \S{\tilde{\phi}} \ar[d] \ar[r] & \S{\phi} \ar[d]^{\bold{c}} \ar[r]^{\a \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times}) \ar@{=}[d] \\ 1 \ar[r] & \mathcal{P}_{\tilde{\phi}} \ar[r] & \mathcal{P}_{\phi} \ar[r]^{\a_{\mathcal{P}} \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times}) } \] if $F$ is local, or \[ \xymatrix{ 1 \ar[r] \ar@{=}[d] & \S{\tilde{\phi}} \ar[d] \ar[r] & \S{\phi} \ar[d]^{\bold{c}} \ar[r]^{\a \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}) \ar@{=}[d] \\ 1 \ar[r] & \mathcal{P}_{\tilde{\phi}} \ar[r] & \mathcal{P}_{\phi} \ar[r]^{\a_{\mathcal{P}} \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}) } \] if $F$ is global. \end{enumerate} \end{lemma} \begin{proof} First we would like to show $\bold{c}$ factors through $\S{\phi}$, i.e. for any $s \in \cS{\phi}$, $\bold{c}(s)$ only depends on the image $x$ of $s$ in $\S{\phi}$. Note that if $s$ is replaced by an $\com[0]{\cS{\phi}}$-conjugate $s_{1}$, then the corresponding pair $(G'_{1}, \phi'_{1})$ is isomorphic to $(G', \phi')$. So by our definition \[ \bold{c}(s_{1}) = \bold{c}(s).
\] Now if we fix a maximal torus $\bar{T}_{\phi}$ of $\com[0]{\cS{\phi}}$ and a Borel subgroup $\bar{B}_{\phi}$ containing it, any automorphism of the complex reductive group $\com[0]{\cS{\phi}}$ stabilizes a conjugate of $(\bar{T}_{\phi}, \bar{B}_{\phi})$. So we can choose a representative $s_{x}$ of $x$ in $\cS{\phi}$ so that $\text{Int}(s_{x})$ stabilizes $(\bar{T}_{\phi}, \bar{B}_{\phi})$, and such representatives are determined up to a $\bar{T}_{\phi}$-translate. Moreover the complex torus \[ \bar{T}_{\phi, x} = \text{Cent}(s_{x}, \bar{T}_{\phi})^{0} \] in $\bar{T}_{\phi}$ is uniquely determined by $x$. Note that $\bar{T}_{\phi, x}$ is the connected component of the kernel of the following morphism \[ \xymatrix{ \bar{T}_{\phi} \ar[r] & \bar{T}_{\phi} \\ t \ar@{|->}[r] & s_{x}^{-1}ts_{x}t^{-1}} \] So any point of $\bar{T}_{\phi}$ can be written as $(s_{x}^{-1}ts_{x}t^{-1}) t_{x}$ for $t \in \bar{T}_{\phi}$ and $t_{x} \in \bar{T}_{\phi, x}$ (see \cite{Springer:2009}, Corollary 5.4.5), and hence any point in $s_{x} \bar{T}_{\phi}$ can be written as \[ s_{x} (s_{x}^{-1}ts_{x}t^{-1}) t_{x} = t s_{x} t^{-1} t_{x} = t s_{x} t_{x} t^{-1}, \,\,\,\, t \in \bar{T}_{\phi}, \,\,\, t_{x} \in \bar{T}_{\phi, x}. \] Therefore it will be enough to show that \[ \bold{c}(s_{x}) = \bold{c}( s_{x} t_{x}) \] for any $t_{x} \in \bar{T}_{\phi, x}$. The centralizer $\D{M}_{x}$ of $\bar{T}_{\phi, x}$ in $\D{G}$ is a Levi subgroup of $\D{G}$, which is dual to a Levi subgroup $M_{x}$ of $G$. So $(\phi, s_{x})$ is the image of a pair \[ (\phi_{M_{x}}, s_{M_{x}}), \,\,\,\, \phi_{M_{x}} \in \cP{M_{x}}, s_{M_{x}} \in S_{\phi_{M_{x}}}, \] attached to $M_{x}$ under the $L$-embedding $\L{M_{x}} \subseteq \L{G}$. And this pair is in turn the image of an endoscopic pair $(M'_{x}, \phi'_{M_{x}})$. Note that $M_{x}$ has a component $G_{x}$ of the same type as $G$ and $\cS{\phi_{G_{x}}} \cong \cS{\phi_{M_{x}}}$. 
So we can define a map $\bold{c}^{M_{x}}$ on $\cS{\phi_{M_{x}}}$ by $\bold{c}^{G_{x}}$ with respect to the component $\phi_{G_{x}}$ of $\phi_{M_{x}}$. Since $M'_{x}$ can be identified with a Levi subgroup of $G'$, one can easily check that \[ \bold{c}^{M_{x}}(s_{M_{x}}) = \bold{c}(s_{x}). \] Note that $\bold{c}^{M_{x}}(s_{M_{x}})$ is invariant under the translation of $s_{M_{x}}$ by $\bar{T}_{\phi, x}$, so the same is true of $\bold{c}(s_{x})$. Secondly, we need to show that $\bold{c}$ is in fact a bijection between $\S{\phi}$ and $\mathcal{P}_{\phi}$. Note that we can actually compute $|\S{\phi}|$ and $|\mathcal{P}_{\phi}|$ explicitly. For $|\S{\phi}|$, we have the description from Section~\ref{sec: Arthur's theory} that \begin{align} S_{\phi} = (\prod_{i \in I_{\phi, O}} O(l_{i}, \mathbb{C}))_{\phi}^{+} \times (\prod_{i \in I_{\phi, S}} Sp(l_{i}, \mathbb{C})) \times (\prod_{j \in J_{\phi}} GL(l_{j}, \mathbb{C})), \label{eq: combinatorial description 1} \end{align} where $(\prod_{i \in I_{\phi, O}} O(l_{i}, \mathbb{C}))_{\phi}^{+}$ is the kernel of the character \[ \xi_{\phi}^{+} : \prod_{i} g_{i} \longrightarrow \prod_{i} (\det g_{i})^{N_{i}}, \,\,\,\,\, g_{i} \in O(l_{i}, \mathbb{C}), i \in I_{\phi, O}. \] If $G$ is symplectic or $G$ is special even orthogonal with $I^{odd}_{\phi, O}$ being empty, then \[ \S{\phi} = \begin{cases} (\mathbb{Z}/2\mathbb{Z})^{|I_{\phi, O}|} & \text{ if all $N_{i}$ are even for $i \in I_{\phi, O}$}, \\ (\mathbb{Z}/2\mathbb{Z})^{|I_{\phi, O}|-1} & \text{ otherwise. } \end{cases} \] If $G$ is special even orthogonal and $I^{odd}_{\phi, O}$ is not empty, then $Z(\D{G}) \notin \com[0]{\cS{\phi}}$ and thus \[ \S{\phi} = \begin{cases} (\mathbb{Z}/2\mathbb{Z})^{|I_{\phi, O}| - 1} & \text{ if all $N_{i}$ are even for $i \in I_{\phi, O}$}, \\ (\mathbb{Z}/2\mathbb{Z})^{|I_{\phi, O}| - 2} & \text{ otherwise. } \end{cases} \] For $|\mathcal{P}_{\phi}|$, it is just a combinatorial computation.
Suppose $G$ is symplectic; then there is no condition on the $N_{i}$, so $| \mathcal{P}_{\phi} | = 2^{|I_{\phi, O}|-1}$. Suppose $G$ is special even orthogonal; then it again divides into two cases. If all $N_{i}$ are even for $i \in I_{\phi, O}$, then the condition on $N_{i}$ is automatically satisfied and hence \[ |\mathcal{P}_{\phi}| = \begin{cases} 2^{|I_{\phi, O}|} & \text{ if $I^{odd}_{\phi, O}$ is empty}, \\ 2^{|I_{\phi, O}| - 1} & \text{ otherwise }. \\ \end{cases} \] And if there exists $i \in I_{\phi, O}$ such that $N_{i}$ is odd then \[ |\mathcal{P}_{\phi}| = \begin{cases} 2^{|I_{\phi, O}| - 1} & \text{ if $I^{odd}_{\phi, O}$ is empty}, \\ 2^{|I_{\phi, O}| - 2} & \text{ otherwise }. \\ \end{cases} \] Therefore one can conclude that $|\S{\phi}| = |\mathcal{P}_{\phi}|$. Now it suffices to show that $\bold{c}$ is surjective. In fact, for any pair $(S, T)$, one can choose an element $s = (s_{k})_{k \in K_{\phi}} \in S_{\phi}$ according to the decomposition \eqref{eq: combinatorial description 1} such that it has the form \[ s_{i} = \begin{pmatrix} -1 &&& \\ &1 && \\ && \ddots & \\ &&& 1 \end{pmatrix} \text{ for } i \in S \cup T \text{, and } s_{k} = I \text{ otherwise }. \] When $G$ is symplectic, we can assume $\sum_{i \in S \cup T} N_{i}$ is odd by possibly changing $(S, T)$ to $(S^{c}, T)$. If $(G', \phi') \rightarrow (\phi, s)$, then $G'$ is elliptic and $\phi' = \phi'_{I} \times \phi'_{II} \, (\text{or $\phi_{-}' = (\phi'_{I} \otimes \eta_{\phi'_{I}}) \times \phi'_{II} $ if $G'_{I}$ is symplectic})$ with the property that \[ \phi'_{I} = \boxplus_{i \in S \cup T} \phi_{i}. \] Hence by the definition $\bold{c}(s) = (S, T)$. For the second part of the lemma, it is enough to show that \[ \a(s) = \a_{\mathcal{P}}(\bold{c}(s)), \] for the representatives $s \in S_{\phi}$ chosen above. Let \[ \eta' = \eta_{\phi'_{I}} = \prod_{i \in S \cup T} \eta_{\phi_{i}}, \] then $\a_{\mathcal{P}}(\bold{c}(s)) = \eta' \circ \c $.
Moreover, $G'$ will be $Sp(2n_{1}) \times SO(2n_{2}, \eta')$ if $G = Sp(2n)$, and $SO(2n_{1}, \eta') \times SO(2n_{2}, \eta' \eta)$ if $G = SO(2n, \eta)$. As one can see from the table of twisted elliptic endoscopic groups in Section \ref{subsubsec: twisted endoscopy}, $G'$ can be lifted to $\widetilde{G}' \in \tEnd{ell}{\widetilde{G}}$ with $\omega = \eta' \circ \c$. So $\omega = \a_{\mathcal{P}}(\bold{c}(s))$. Finally we just need to notice that $\a(s) = \omega$ by Lemma~\ref{lemma: twisted character}, hence $\a(s) = \a_{\mathcal{P}}(\bold{c}(s))$. \end{proof} \begin{corollary} \label{cor: combinatorial description} Suppose $\phi = l_{1}\phi_{1} \# \cdots \# l_{r}\phi_{r} \in \cPbd{G}$ if $F$ is local (resp. $\cP{G}$ if $F$ is global), and $S$, $T$ are subsets of $I^{odd}_{\phi, O}$, $I^{even}_{\phi, O}$ respectively. Suppose that \[ ( \prod_{i \in S \cup T}\eta_{\phi_{i}} ) \circ \c \neq 1 \] unless $T$ is empty and $S$ is either empty or equal to $I^{odd}_{\phi, O}$. Then $\S{\tilde{\phi}} = 1$. \end{corollary} \begin{proof} It follows from the definition that $| \mathcal{P}_{\tilde{\phi}} | = 1$. By Lemma~\ref{lemma: combinatorial description}, one has $| \S{\tilde{\phi}} | = | \mathcal{P}_{\tilde{\phi}} | = 1$. Hence $\S{\tilde{\phi}} = 1$. \end{proof} The following proposition will be useful in our later proofs. \begin{proposition} \label{prop: consistency on induction} Suppose $\phi \in \cPel{G^{\theta}}$ for $\theta \in \Sigma_{0}$ and $\S{\tilde{\phi}} = 1$. We assume one of the following conditions is satisfied. \begin{enumerate} \item $G$ is symplectic, \item $G$ is special even orthogonal with $\eta_{G} \neq 1$, \item $G$ is special even orthogonal with $\eta_{G} = 1$, and $I_{\phi, O}^{odd}$ or $I_{\phi, O}^{even}$ is empty. \end{enumerate} If $\phi$ factors through $\phi' \in \cP{G'}$, where $G' = G_{I} \times G_{II}$ is a twisted elliptic endoscopic group of $G$, let $\phi' = \phi_{I} \times \phi_{II}$, where $\phi_{i} \in \cP{G_{i}}$ for $i = I, II$.
Then $S_{\tilde{\phi}_{I}} = S_{\tilde{\phi}_{II}} = 1$. \end{proposition} \begin{proof} Since $\phi \in \cPel{G^{\theta}}$, we can view \[ I_{\phi_{I}, O}^{odd} \subseteq I_{\phi, O} \text{ and } I_{\phi_{I}, O}^{even} \subseteq I_{\phi, O}^{even} \] after possibly twisting $\phi_{I}$ by $\eta_{\phi_{I}}$. Suppose $\S{\tilde{\phi}_{I}} \neq 1$; then we can represent any nontrivial element of $\S{\tilde{\phi}_{I}}$ by $(S', T')$ for \[ S' \subseteq I_{\phi_{I}, O}^{odd} \text{ and } T' \subseteq I_{\phi_{I}, O}^{even} \] such that $S' \cup T'$ is nonempty, $\sum_{i \in S' \cup T'} N_{i}$ is even and \[ \prod_{i \in S' \cup T'} \eta_{\phi_{i}} = 1. \] Then we want to show $\S{\tilde{\phi}} \neq 1$, which would lead to a contradiction. Let us define $(S, T)$ by \[ S = S' \cap I_{\phi, O}^{odd} \text{ and } T = (S' \cup T') \cap I_{\phi, O}^{even}. \] Then $(S, T)$ corresponds to a nontrivial element in $\S{\tilde{\phi}}$ unless $T$ is empty and $S = I_{\phi, O}^{odd}$. In the exceptional cases, we have \[ \text{$T'$ is empty and $S' = S = I_{\phi, O}^{odd}$}. \] By our conditions on $(S', T')$, we see that $G$ has to be special even orthogonal and $\eta_{G} = 1$. So we only need to consider condition (3). In particular, we can assume $I_{\phi, O}^{even}$ is empty, i.e. $I_{\phi, O} = I_{\phi, O}^{odd}$. It follows that $S' = I_{\phi_{I}, O}^{odd}$. But this is impossible, since $(S', T')$ should correspond to a nontrivial element in $\S{\tilde{\phi}_{I}}$ by our assumption. Therefore, we see $\S{\tilde{\phi}} \neq 1$. Similarly, if we assume $\S{\tilde{\phi}_{II}} \neq 1$, we can also deduce $\S{\tilde{\phi}} \neq 1$. This finishes the proof. \end{proof} In the case that $G$ is a special even orthogonal group and $\phi \in \Pbd{\com{G}}$ if $F$ is local (resp. $\P{\com{G}}$, if $F$ is global), we can give a similar combinatorial description of the map from $\S{\phi}^{\Sigma_{0}}$ to $\text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times})$ if $F$ is local (resp.
$\text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times})$ if $F$ is global). To do this we need to take a bigger set \[ \mathcal{P}_{\phi}^{\Sigma_{0}} = \{ (S, T) \} / S \sim S^{c} \] by dropping the condition that \( \sum_{i \in S \cup T} N_{i} \) must be even. It is easy to see that one can extend the map $\a_{\mathcal{P}}$ to $\mathcal{P}_{\phi}^{\Sigma_{0}}$ and the map $\bold{c}$ to $\cS{\phi}^{\Sigma_{0}}$ with its image in $\mathcal{P}_{\phi}^{\Sigma_{0}}$. As a result, we have the following lemma, which is an analogue of Lemma~\ref{lemma: combinatorial description}, and the proof is similar. \begin{lemma} \label{lemma: plus combinatorial description} \begin{enumerate} \item The extended map $\bold{c}$ will factor through $\S{\phi}^{\Sigma_{0}}$, and it gives a bijection between $\S{\phi}^{\Sigma_{0}}$ and $\mathcal{P}_{\phi}^{\Sigma_{0}}$. \item If we denote the bijection in (1) still by $\bold{c}$, then we have a commutative diagram. \[ \xymatrix{ 1 \ar[r] \ar@{=}[d] & \S{\tilde{\phi}}^{\Sigma_{0}} \ar[d] \ar[r] & \S{\phi}^{\Sigma_{0}} \ar[d]^{\bold{c}} \ar[r]^{\a \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times}) \ar@{=}[d] \\ 1 \ar[r] & \mathcal{P}^{\Sigma_{0}}_{\tilde{\phi}} \ar[r] & \mathcal{P}^{\Sigma_{0}}_{\phi} \ar[r]^{\a_{\mathcal{P}} \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(F)/G(F), \mathbb{C}^{\times}) } \] if $F$ is local, or \[ \xymatrix{ 1 \ar[r] \ar@{=}[d] & \S{\tilde{\phi}}^{\Sigma_{0}} \ar[d] \ar[r] & \S{\phi}^{\Sigma_{0}} \ar[d]^{\bold{c}} \ar[r]^{\a \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}) \ar@{=}[d] \\ 1 \ar[r] & \mathcal{P}^{\Sigma_{0}}_{\tilde{\phi}} \ar[r] & \mathcal{P}^{\Sigma_{0}}_{\phi} \ar[r]^{\a_{\mathcal{P}} \quad \quad \quad \quad \quad \quad} & \text{Hom}(\widetilde{G}(\mathbb{A}_{F})/ \widetilde{G}(F)G(\mathbb{A}_{F}), \mathbb{C}^{\times}) } \] if
$F$ is global. \end{enumerate} \end{lemma} \begin{remark} \label{rk: plus combinatorial description} It is a consequence of Lemma~\ref{lemma: combinatorial description} and \ref{lemma: plus combinatorial description} that \[ \a(\S{\phi}^{\Sigma_{0}}) = \a_{\mathcal{P}}(\mathcal{P}_{\phi}^{\Sigma_{0}}). \] \end{remark} Here we give two applications of these combinatorial descriptions. The first one gives the refined $L$-packet in the archimedean case (cf. Remark~\ref{rk: refined L-packet}). \begin{proposition} \label{prop: refined L-packet Archimedean case} Suppose $F$ is real, $\phi \in \cPbd{G}$ and $\tilde{\pi}$ is an irreducible admissible representation of $\widetilde{G}(F)$ whose restriction to $G(F)$ has irreducible constituents contained in $\cPkt{\phi}$. If $\S{\phi} \neq 1$, then $\tilde{\pi} \otimes \omega \cong \tilde{\pi}$ for all characters $\omega$ of $\widetilde{G}(F) / Z_{\widetilde{G}}(F)G(F)$. In particular, let $\lif{\zeta}$ be the central character of $\tilde{\pi}$; then we can define $\cPkt{\tilde{\phi}} = \clPkt{\phi, \lif{\zeta}} $ if $\cPkt{\phi}$ is not a singleton. \end{proposition} \begin{proof} Notice that $\widetilde{G}(\mathbb{R}) / Z_{\widetilde{G}}(\mathbb{R})G(\mathbb{R})$ is either $1$ or $\mathbb{R}^{\times} / \mathbb{R}_{>0}$, and the only nontrivial character $\varepsilon$ of $\mathbb{R}^{\times} / \mathbb{R}_{>0}$ is given by the sign character. Since $GL(n, \mathbb{R})$ has essentially discrete series only when $n \leqslant 2$, the set $I_{\phi, O}$ only parametrizes quadratic characters of $F^{\times}$ and discrete series of $GL(2, \mathbb{R})$ with central character $\varepsilon$. Now suppose $\tilde{\pi} \otimes \varepsilon \ncong \tilde{\pi}$; then it is clear from Lemma~\ref{lemma: combinatorial description} that this is only possible when $I_{\phi, O}$ parametrizes quadratic characters of $F^{\times}$, i.e. $\varepsilon$ and the trivial character $\varepsilon_{0}$.
Depending on which characters $I_{\phi, O}^{odd}$ and $I_{\phi, O}^{even}$ parametrize, we have eight cases and we can represent them as follows: $\varepsilon_{0}, 2\varepsilon_{0}; \varepsilon, 2\varepsilon; \varepsilon_{0} \+ 2\varepsilon, \varepsilon \+ 2\varepsilon_{0}; \varepsilon_{0} \+ \varepsilon, 2\varepsilon_{0} \+ 2\varepsilon$. One can see easily from Lemma~\ref{lemma: combinatorial description} that in these cases either $\S{\phi} = 1$ or $\varepsilon \in \a(\S{\phi})$. Therefore we get a contradiction. For the last point, one just needs to notice that $\S{\phi} \neq 1$ if $\cPkt{\phi}$ is not a singleton. \end{proof} \begin{remark} \label{rk: refined L-packet Archimedean case} The proof of Proposition~\ref{prop: refined L-packet Archimedean case} also shows that $X(\tilde{\pi}) = X$ for any discrete series representation $\tilde{\pi}$ of $\widetilde{G}(\mathbb{R})$. \end{remark} The second application concerns the multiplicity problem that we have discussed in Section~\ref{sec: multiplicity formula}. Now let us assume $F$ is global again; it turns out to be more convenient to ask when both $\Sigma_{0}$-multiplicity one and $\Sigma_{0}$-strong multiplicity one hold for $\tilde{\phi}$ together, i.e. \[ \a(\S{\phi}^{\Sigma_{0}}) = \prod^{aut}_{almost \, all \, v} \a(\S{\phi_{v}}^{\Sigma_{0}}). \] By Remark~\ref{rk: plus combinatorial description}, this is the same as \[ \a_{\mathcal{P}}(\mathcal{P}_{\phi}^{\Sigma_{0}}) = \prod^{aut}_{almost \, all \, v} \a_{\mathcal{P}}(\mathcal{P}_{\phi_{v}}^{\Sigma_{0}}). \] The next lemma gives an answer for the simplest type of parameters. \begin{lemma} \label{lemma: splitting parameters} Suppose \[ \phi = l_{1} \eta_{1} \# \cdots \# l_{r} \eta_{r} \in \cP{G}, \] where $\eta_{i}$ are quadratic id\`ele class characters for $1 \leqslant i \leqslant r$. Then both $\Sigma_{0}$-multiplicity one and $\Sigma_{0}$-strong multiplicity one hold for $\tilde{\phi}$.
\end{lemma} \begin{proof} From Lemma~\ref{lemma: similitude character}, we can view $\eta_{i} \circ \c$ as quadratic id\`ele class characters of $F'$, where $F'$ is the extension of $F$ associated to the character \[ \eta_{\phi} = \prod_{i=1}^{r} \eta_{\phi_{i}}^{l_{i}} \] by class field theory. Let us denote $\eta_{i} \circ \c$ by $\eta'_{i}$. We are going to prove this lemma by induction on the number of nontrivial characters $\eta'_{i}$ for $1 \leqslant i \leqslant r$. Suppose there exists some quadratic id\`ele class character $\omega$ of $F$ such that $\omega' = \omega \circ \c$ is contained in \[ \prod^{aut}_{almost \, all \, v} \a_{\mathcal{P}}(\mathcal{P}_{\phi_{v}}^{\Sigma_{0}}). \] If we assume $\eta'_{1}$ is nontrivial, and let $E$ be the quadratic extension of $F'$ associated to $\eta'_{1}$, then after a base change to $E$, we get \[ \phi_{E} = l_{1} \eta_{E, 1} \# \cdots \# l_{r} \eta_{E, r} \] where $\eta_{E, i} = \eta'_{i} \circ \text{Nm}_{E / F'}$. And we have \[ \omega_{E} = \omega' \circ \text{Nm}_{E / F'} \] contained in \[ \prod^{aut}_{almost \, all \, v} \a_{\mathcal{P}}(\mathcal{P}_{\phi_{E, v}}^{\Sigma_{0}}). \] Since $\eta_{E,1} = 1$, by induction the lemma is true for $\phi_{E}$. Hence \[ \omega_{E} = \prod^{m}_{k=1} \eta_{E, i_{k}}, \] for some $1 < i_{k} \leqslant r$ and $m < r$. This implies that \[ ( \omega' \cdot \prod^{m}_{ k=1} \eta'_{i_{k}} ) \circ \text{Nm}_{E / F'} = 1. \] Since \( |I_{F'} : \text{Nm}_{E/F'} I_{E} | = 2, \) we get \[ \omega' \cdot \prod^{m}_{ k=1 } \eta'_{i_{k}} = 1 \text{ or } \eta'_{1}. \] Hence $\omega' \in \a_{\mathcal{P}}(\mathcal{P}_{\phi}^{\Sigma_{0}})$. \end{proof} \subsection{Construction of global parameters} \label{subsec: construction of global parameters} In this section we are going to give the global lifting result needed in Theorem~\ref{thm: standard argument on stability}. Let us assume $F$ is a nonarchimedean local field and $\dot{F}$ is a totally real global field with $\dot{F}_{u} = F$ (cf.
\cite{Arthur:2013}, Lemma 6.2.1). Let $\dot{G}$ be a quasisplit special even orthogonal group or symplectic group over $\dot{F}$ such that $\dot{G}_{u} = G$ and $\dot{G}_{v}$ has discrete series for $v \in S_{\infty}$ (cf. \cite{Arthur:2013}, Lemma 6.2.2). For any finite set $S$ of nonarchimedean places of $\dot{F}$, we denote the unitary dual of $\dot{G}(\dot{F}_{S}) = \prod_{v \in S}\dot{G}(\dot{F}_{v})$ by $\widehat{\dot{G}(\dot{F}_{S})}$, and the Plancherel measure on $\widehat{\dot{G}(\dot{F}_{S})}$ by $\D{\mu}^{pl}_{S}$. \begin{lemma} \label{lemma: multiple global lifting} For $\phi \in \cPsm{G}$ and an open subset $\D{U}$ of tempered representations of $\dot{G}(\dot{F}_{S})$ such that $\D{\mu}^{pl}_{S}(\D{U}) > 0$, one can find $\dot{\phi} \in \cPsm{\dot{G}}$ with the following properties. \begin{enumerate} \item $\dot{\phi}_{u} = \phi$, and $\otimes_{v \in S} \cPkt{\dot{\phi}_{v}} \subseteq \D{U}$. \item If $v \notin S_{\infty}(u) \cup S$, then $\dot{\phi}_{v}$ is spherical. In particular, it can be written as a direct sum of quasicharacters of $\dot{F}_{v}^{\times}$ with at most one ramified quasicharacter. \item If $v \in S_{\infty}$, $\dot{\phi}_{v} \in \cPdt{\dot{G}_{v}}$. \end{enumerate} \end{lemma} \begin{proof} This lemma is a consequence of (\cite{Shin:2012}, Theorem 5.8) and (\cite{Arthur:2013}, Lemma 6.2.2 and Corollary 6.2.4). As one can see in the proof of (\cite{Arthur:2013}, Lemma 6.2.2), $\dot{\phi}_{v}$ has a ramified quasicharacter if and only if $\eta_{\dot{G}_{v}}$ is ramified. Also note that (\cite{Shin:2012}, Theorem 5.8) requires $G$ to have trivial centre; however, this condition can be removed by choosing suitable discrete series in the archimedean places as in the proof of (\cite{Arthur:2013}, Lemma 6.2.2). In fact, the main techniques in both proofs are the same, i.e., Arthur's simple invariant trace formula.
The new input in (\cite{Shin:2012}, Theorem 5.8) is Harish-Chandra's Plancherel formula and Sauvageot's principle of density result (cf. \cite{Sauvageot:1997}, Theorem 7.3). \end{proof} Lemma~\ref{lemma: multiple global lifting} provides the building blocks of our global lifting result, and because we want to impose the condition of $\Sigma_{0}$-strong multiplicity one at one place for any global lift, it is important to consider the case of simple parameters first. We will begin with another description of $\Sigma_{0}$-strong multiplicity one, which is in some sense dual to the original one. \subsubsection{$\Sigma_{0}$-strong multiplicity one at one place} Suppose $F$ is a global field, $G$ is a special even orthogonal or symplectic group over $F$ and $\phi \in \cP{G}$. We define \begin{align*} \iG{\mathbb{A}_{F}} & = Z_{\widetilde{G}}(F_{u})G(F_{u}) \times \prod_{v \neq u}\widetilde{G}(F_{v}), \\ \iG{F} & = \widetilde{G}(F) \cap \iG{\mathbb{A}_{F}}. \end{align*} Let \[ \widetilde{G}(\r_{v}^{\Sigma_{0}}) = \{ g \in \widetilde{G}(F_{v}) : \omega_{v}(g) = 1 \text{ for all } \omega_{v} \in \a(\S{\phi_{v}}^{\Sigma_{0}}) \}, \] for any $[\r_{v}] \in \cPkt{\phi_{v}}$, and \[ \widetilde{G}(\r^{\Sigma_{0}}) = \prod_{v} \widetilde{G}(\r_{v}^{\Sigma_{0}}), \] for any $[\r] \in \cPkt{\phi}$. We define $\iG{}(\r^{\Sigma_{0}}) = \widetilde{G}(\r^{\Sigma_{0}}) \cap \iG{\mathbb{A}_{F}}$. As a consequence we have the following identities \begin{align*} \prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}) & = ( \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F) \widetilde{G}(\r^{\Sigma_{0}}) )^{*} \\ \prod^{aut}_{v \neq u} \a(\S{\phi_{v}}^{\Sigma_{0}}) & = ( \iG{\mathbb{A}_{F}} / \iG{F} \iG{}(\r^{\Sigma_{0}}) )^{*} . \end{align*} By the approximation theorem for number fields (cf. \cite{Neukirch:1999}), we have $\widetilde{G}(\mathbb{A}_{F}) = \widetilde{G}(F) \iG{\mathbb{A}_{F}}$.
So \[ ( \widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F) \widetilde{G}(\r^{\Sigma_{0}}))^{*} = ( \iG{\mathbb{A}_{F}} / \iG{\mathbb{A}_{F}} \cap \widetilde{G}(F) \widetilde{G}(\r^{\Sigma_{0}}))^{*}. \] Therefore the condition of $\Sigma_{0}$-strong multiplicity one at the place $u$ is equivalent to \begin{align} \label{eq: strong multiplicity one at one place 1} |\iG{\mathbb{A}_{F}} \cap \widetilde{G}(F) \widetilde{G}(\r^{\Sigma_{0}}) : \iG{F} \iG{}(\r^{\Sigma_{0}})| = |\iG{\mathbb{A}_{F}} \cap \widetilde{G}(F) \widetilde{G}(\r^{\Sigma_{0}}) : \iG{F} ( \widetilde{G}(\r^{\Sigma_{0}}) \cap \iG{\mathbb{A}_{F}} ) | = 1. \end{align} Let \begin{align*} A &= \iG{\mathbb{A}_{F}} / G(\mathbb{A}_{F}), \quad A_{F} = \iG{F}G(\mathbb{A}_{F}) / G(\mathbb{A}_{F}) \\ B_{F} &= \widetilde{G}(F) / G(F), \quad \bar{B}(\r) = \widetilde{G}(\r^{\Sigma_{0}}) / G(\mathbb{A}_{F}) \end{align*} and $\bar{B}(\r_{v}) = \widetilde{G}(\r_{v}^{\Sigma_{0}}) / G(F_{v})$; then we can rewrite \eqref{eq: strong multiplicity one at one place 1} as \[ |A \cap \bar{B}(\r) B_{F} : ( A \cap \bar{B}(\r) ) A_{F}| = 1. \] In particular, \[ ( A \cap \bar{B}(\r) ) A_{F} = A \cap \bar{B}(\r) A_{F}, \] so we have proved the following lemma. \begin{lemma} \label{lemma: strong multiplicity one at one place 1} Suppose $\phi \in \cP{G}$ and $[\r] \in \cPkt{\phi}$. Then $\Sigma_{0}$-strong multiplicity one at the place $u$ holds for $\tilde{\phi}$ if and only if \begin{align} \label{eq: strong multiplicity one at one place 2} |A \cap \bar{B}(\r) B_{F} : A \cap \bar{B}(\r) A_{F}| = 1. \end{align} \end{lemma} Note that all these groups $A$, $\bar{B}(\r)$ and $A_{F}$, $B_{F}$ can be viewed as subgroups of $I_{F}$ and $F^{\times}$ respectively through the similitude character $\c$. Therefore we have the following equivalent statement for \eqref{eq: strong multiplicity one at one place 2}.
\begin{lemma} \label{lemma: strong multiplicity one at one place 2} $|A \cap \bar{B}(\r) B_{F} : A \cap \bar{B}(\r) A_{F}| = 1$ if and only if for any $x \in \bar{B}(\r)$, there exists $y \in \bar{B}(\r) \cap B_{F}$ such that $xy \in A$. (i.e., for any $x_{u} \in \bar{B}(\r_{u}) / (F^{\times}_{u})^{2}$, there exists $y \in \bar{B}(\r) \cap B_{F}$ such that $y_{u} = x_{u}^{-1}$ mod $(F^{\times}_{u})^{2}$.) \end{lemma} \begin{proof} Suppose $x \in \bar{B}(\r)$ and $z \in B_{F}$ such that $xz \in A$. First let us assume \eqref{eq: strong multiplicity one at one place 2}; then $xz \in A \cap \bar{B}(\r) A_{F}$ since $xz \in A \cap \bar{B}(\r)B_{F}$. So we can write $xz = u w$ where $u \in A_{F}$ and $w \in \bar{B}(\r) \cap A$. In particular $x w^{-1} = u z^{-1} \in \bar{B}(\r) \cap B_{F}$. Let us set $y = w x^{-1}$, which is also in $\bar{B}(\r) \cap B_{F}$; then one has $xy = w \in A$. Conversely, let us take $y \in \bar{B}(\r) \cap B_{F}$ such that $xy \in A$. Then we can write $xz = (xy)(y^{-1}z)$. Since both $xz$ and $xy$ lie in $A$, one has $y^{-1}z \in A$ and in particular, $y^{-1}z \in A\cap B_{F} = A_{F}$. Moreover, it is clear that $xy \in \bar{B}(\r)$. Hence $xz \in A \cap \bar{B}(\r) A_{F}$ and the rest of the lemma should be clear. \end{proof} \subsubsection{Global lift} Now we are going to use Lemma~\ref{lemma: multiple global lifting}, Lemma~\ref{lemma: strong multiplicity one at one place 1} and Lemma~\ref{lemma: strong multiplicity one at one place 2} to produce a global lift with the intended property of $\Sigma_{0}$-strong multiplicity one at one place. \begin{lemma} \label{lemma: strong multiplicity one at one place} Suppose $F$ is a nonarchimedean local field and $\phi \in \cPsm{G}$; then there exists a totally real global field $\dot{F}$ and a group $\dot{G}$ over $\dot{F}$ such that $\dot{F}_{u} = F$ and $\dot{G}_{u} = G$, and one can lift $\phi$ to a global simple parameter $\dot{\phi} \in \cPsm{\dot{G}}$ satisfying the following properties.
\begin{enumerate} \item $\dot{\phi}_{u} = \phi$, and $\dot{\phi}_{v} \in \cPdt{\dot{G}_{v}}$ for $v \in S_{\infty}$. \item If $v \notin S_{\infty}(u)$, $\dot{\phi}_{v}$ is a direct sum of quasicharacters of $\dot{F}_{v}^{\times}$ with at most one ramified quasicharacter. \item $\Sigma_{0}$-strong multiplicity one holds for $\lif{\dot{\phi}}$ at the place $u$. \end{enumerate} \begin{proof} Let $\dot{G}$ be a quasisplit special even orthogonal group or symplectic group over $\dot{F}$ such that $\dot{G}_{u} = G$ and $\dot{G}_{v}$ has discrete series for $v \in S_{\infty}$, and let $\dot{\eta}_{\phi} = \eta_{\dot{G}}$. In view of Lemma~\ref{lemma: multiple global lifting}, one would like to impose restrictions over a finite set $S$ of nonarchimedean places, described by an open subset $\D{U}$ of tempered representations with $\D{\mu}^{pl}_{S} (\D{U}) > 0$, so that the global lift $\dot{\phi}$ obtained from Lemma~\ref{lemma: multiple global lifting} has the property of $\Sigma_{0}$-strong multiplicity one at the place $u$. To determine $S$ and $\D{U}$, we need to use the equivalent characterization of the property of $\Sigma_{0}$-strong multiplicity one at one place given by Lemma~\ref{lemma: strong multiplicity one at one place 1} and Lemma~\ref{lemma: strong multiplicity one at one place 2}. First, let us take two distinct quadratic id\`ele class characters $\dot{\eta}_{i}$ ($i = 1, 2$), so that $\dot{\eta}_{i, u} = 1$ and $\dot{\eta}_{i, v} = \varepsilon_{v}$, the sign character of $\mathbb{R}^{\times}$, for $v \in S_{\infty}$. This is possible since one can construct a quadratic extension of any number field with arbitrarily prescribed localizations at finitely many places.
Then we can consider the following composite parameter \[ \dot{\phi}_{\eta} = \dot{\eta}_{\phi} \# \dot{\eta}_{1} \# \dot{\eta}_{2} \# \dot{\eta}_{1}\dot{\eta}_{2} \in \cP{\dot{G}_{\eta}}, \] and let \begin{align*} \bar{B}( \dot{\phi}_{\eta, v} ) = \{ z \in \lif{\dot{G}}_{\eta}(\dot{F}_{v}) / \dot{G}_{\eta}(\dot{F}_{v}): \omega_{v}(z) = 1 \text{ for all } \omega_{v} \in \a(\S{\dot{\phi}_{\eta, v}}^{\Sigma_{0}})\} \end{align*} for any place $v$. Note that $B_{\dot{F}} \cong \lif{\dot{G}}_{\eta}(\dot{F})/ \dot{G}_{\eta}(\dot{F})$ and $\lif{\dot{G}}_{\eta}(\dot{F}_{v}) / \dot{G}_{\eta}(\dot{F}_{v}) \cong \lif{\dot{G}}(\dot{F}_{v}) / \dot{G}(\dot{F}_{v})$. Moreover if $[\r] \in \cPkt{\phi}$, then $\bar{B}( \r ) = \bar{B}(\dot{\phi}_{\eta, u})$. And $\bar{B}( \dot{\phi}_{\eta, v} ) = \mathbb{R}_{> 0}$ if $v \in S_{\infty}$. If we apply Lemma~\ref{lemma: splitting parameters} and Lemma~\ref{lemma: strong multiplicity one at one place 2} to $\dot{\phi}_{\eta}$, we get for any $x \in \bar{B}( \r ) / (F^{\times})^2$, there exists $y \in B_{\dot{F}}$ such that $y_{u} = x^{-1}$ mod $( F^{\times} )^{2}$ and $y_{v} \in \bar{B}( \dot{\phi}_{\eta, v} )$ for $v \neq u$. In particular, $y_{v} \in \mathbb{R}_{>0}$ if $v \in S_{\infty}$. Since we are going to use Lemma~\ref{lemma: multiple global lifting} to lift $\phi$, by its properties we can conclude immediately that $y_{v} \in \bar{B}(\dot{\r}_{v})$ unless $v \neq u$ is nonarchimedean and $|y_{v}| \neq 1$. To emphasize the dependence of $y$ on $x$, we also write $y = y(x)$. Let $S_{y(x)}$ be those nonarchimedean places $v \neq u$ where $|y_{v}| \neq 1$, and since the group $\bar{B}( \r ) / (F^{\times})^2$ is finite, we can take the union of all such sets and get \[ S = \bigcup_{x \in \bar{B}( \r ) / (F^{\times})^2} S_{y(x)} \] which is still finite. Note that $S$ depends on the choice of $y$ for each $x$. Now we can describe $\D{U} = \prod_{v \in S} \D{U}_{v}$. 
In fact for any $v \in S$, one just needs to take $\D{U}_{v}$ to be the union of tempered packets $\cPkt{\phi_{v}}$ for spherical $\phi_{v} \in \cPbd{\dot{G}_{v}}$ such that $\a(\S{\phi_{v}}^{\Sigma_{0}}) = 1$. By Lemma~\ref{lemma: plus combinatorial description}, this condition is equivalent to requiring that no finite product of unramified quasicharacters in $\phi_{v}$ gives the nontrivial unramified quadratic character unless $\eta_{\dot{G}_{v}}$ is nontrivial and quadratic. Since this is an open condition, $\D{U}_{v}$ is open with positive Plancherel measure. Note that the condition $\a(\S{\phi_{v}}^{\Sigma_{0}}) = 1$ also guarantees $\bar{B}( \dot{\phi}_{\eta, v} ) \subseteq \bar{B}(\r_{v})$ for $[\r_{v}] \in \cPkt{\phi_{v}} \subseteq \D{U}_{v}$. Finally, we can use Lemma~\ref{lemma: multiple global lifting} to get a global lift $\dot{\phi}$ such that $\cPkt{\dot{\phi}_{v}} \subseteq \D{U}_{v}$ for $v \in S$. And it is clear that for any $x \in \bar{B}( \r ) / (F^{\times})^2$, the $y$ chosen above will lie in $\bar{B}(\dot{\r}) \cap B_{\dot{F}}$ for $[\dot{\r}] \in \cPkt{\dot{\phi}}$. This finishes the proof. \end{proof} This lemma is the first step to overcome the lack of strong multiplicity one; next we will generalize it to composite parameters. \begin{lemma}\label{lemma: global lifting} Suppose \[ \phi = \phi_{1} \+ \cdots \+ \phi_{q} \+ 2\phi_{q+1} \+ \cdots \+ 2\phi_{r} \in \cPel{G^{\theta}}, \] where $\phi_{i}$ is simple for $1 \leqslant i \leqslant r$ and $\theta \in \Sigma_{0}$. We assume $\phi$ factors through $\phi_{M} \in \cPdt{M}$. Then one can choose a lift $(\dot{G}, \dot{M}, \dot{F}, \dot{\phi})$ of $(G, M, F, \phi)$ with the following properties: \begin{enumerate} \item $\dot{F}$ is a totally real field and there exists a place $u$ such that $(\dot{G}_{u}, \dot{M}_{u}, \dot{F}_{u}, \dot{\phi}_{u}) = (G, M, F, \phi)$.
\item $\dot{\phi} = \dot{\phi}_{1} \# \cdots \# \dot{\phi}_{q} \# 2\dot{\phi}_{q+1} \# \cdots \# 2\dot{\phi}_{r} \in \cPel{\dot{G}^{\theta}}$. \item $\S{\lif{\dot{\phi}}} = 1$. Moreover, $(\dot{G}, \dot{\phi})$ satisfies the conditions in Proposition~\ref{prop: consistency on induction}. \item If $v \notin S_{\infty}(u)$, the local Langlands parameter \[ \dot{\phi}_{v} = \dot{\phi}_{1, v} \+ \cdots \+ \dot{\phi}_{q, v} \+ 2\dot{\phi}_{q+1, v} \+ \cdots \+ 2\dot{\phi}_{r, v} \] is a direct sum of quasicharacters of $\dot{F}_{v}^{\times}$, and it contains at most one ramified quasicharacter counted without multiplicities modulo the unramified quasicharacters. \item If $v \in S_{\infty}$, $\dot{\phi}_{i, v} \in \cPdt{\dot{G}_{\phi_{i, v}}}$. \item $\Sigma_{0}$-strong multiplicity one holds for $\lif{\dot{\phi}}$ at the place $u$. \end{enumerate} \end{lemma} \begin{proof} The idea is to apply Lemma~\ref{lemma: strong multiplicity one at one place} to each simple parameter. But one needs to be extra careful at the following points. The first point concerns properties (3) and (4): they require choosing those global characters $\dot{\eta}_{\phi_{i}}$ in a consistent way so that the condition of Corollary~\ref{cor: combinatorial description} is satisfied, and also so that there is at most one ramified character in $\dot{\eta}_{\phi_{1, v}} \+ \cdots \+ \dot{\eta}_{\phi_{r, v}}$ at each place $v \notin S_{\infty}(u)$, counted without multiplicities modulo the unramified quasicharacters. But this can be done easily. In fact, one can fix nonarchimedean places $\{v_{1}, v_{2}, \cdots, v_{r}\}$ distinct from $u$. If $q = 0$, we require for $1 \leqslant i \leqslant r$ and $1 \leqslant j \leqslant r$ that \begin{align}\label{eq: global lifting} \dot{\eta}_{\phi_{i, v_{j}}} = \begin{cases} \text{the unramified quadratic character of $\dot{F}^{\times}_{v_{j}}$, when } i = j, \\ 1, \text{ otherwise.
} \end{cases} \end{align} If $q \neq 0$, we only impose \eqref{eq: global lifting} for $2 \leqslant i \leqslant r$ and $1 \leqslant j \leqslant r$, and require for $2 \leqslant j \leqslant r$ that \[ \dot{\eta}_{\phi_{1, v_{j}}} = \begin{cases} \prod_{1 < i \leqslant q}\dot{\eta}_{\phi_{i, v_{j}}}, & \text{ if } q > 1, \\ 1, & \text{ if } q = 1. \end{cases} \] In this case, we also require $\dot{\eta}_{\phi_{1, v_{1}}} \neq 1$ when $G$ is special even orthogonal. It is easy to see that these conditions will guarantee (3). Next we can choose $\dot{\eta}_{\phi_{i}}$ satisfying these conditions. Moreover, we can choose them consecutively for $i$ decreasing from $r$ to $1$ such that $\dot{G}_{\phi_{i}}$ has discrete series and $\dot{\eta}_{\phi_{i}}$ is unramified over the ramified places of $\dot{\eta}_{\phi_{j}}$ for $j > i$, except in the case $i =1$ and $G$ is symplectic, where we would like to assume $G_{\phi_{1}}$ is also symplectic and take \[ \dot{\eta}_{\phi_{1}} = \begin{cases} \prod_{1 < i \leqslant q}\dot{\eta}_{\phi_{i}}, & \text{ if } q > 1, \\ 1, & \text{ if } q = 1. \end{cases} \] The second point is about choosing the set $S$ of nonarchimedean places as in the proof of Lemma~\ref{lemma: strong multiplicity one at one place}. It is tempting to think that it should be the union of all such sets defined in Lemma~\ref{lemma: strong multiplicity one at one place} for each simple parameter $\phi_{i}$. In fact, one should choose this set $S$ for $\phi$ as a whole. Let $\dot{\eta}_{1}$ and $\dot{\eta}_{2}$ again be two distinct quadratic id\`ele class characters defined as in Lemma~\ref{lemma: strong multiplicity one at one place}. 
And here we consider \[ \dot{\phi}_{\eta} = \dot{\eta}_{\phi_{1}} \# \cdots \# \dot{\eta}_{\phi_{q}} \# 2\dot{\eta}_{\phi_{q+1}} \# \cdots \# 2\dot{\eta}_{\phi_{r}} \# \dot{\eta}_{1} \# \dot{\eta}_{2} \# \dot{\eta}_{1} \dot{\eta}_{2} \in \cP{\dot{G}_{\eta}} \] when $q$ is odd; or \[ \dot{\phi}_{\eta} = \dot{\eta}_{\phi_{1}} \# \cdots \# \dot{\eta}_{\phi_{q}} \# 2\dot{\eta}_{\phi_{q+1}} \# \cdots \# 2\dot{\eta}_{\phi_{r}} \# 2\dot{\eta}_{1} \in \cP{\dot{G}_{\eta}} \] when $q$ is even. By applying Lemma~\ref{lemma: splitting parameters} and Lemma~\ref{lemma: strong multiplicity one at one place 2} to $\dot{\phi}_{\eta}$, we can get a set $S$ of nonarchimedean places using the same argument as in Lemma~\ref{lemma: strong multiplicity one at one place}. Finally, one can choose the open set of tempered representations $\D{U}_{i} = \prod_{v \in S} \D{U}_{i, v}$ for each simple parameter $\phi_{i}$ as in Lemma~\ref{lemma: strong multiplicity one at one place} and make them small enough so that $\a(\S{\dot{\phi}_{v}}^{\Sigma_{0}}) \subseteq \a(\S{(\dot{\phi}_{\eta})_{v}}^{\Sigma_{0}})$. This is possible again by Lemma~\ref{lemma: plus combinatorial description}. Note that if $\phi_{i}$ is a quadratic character, there is no need to impose any local conditions on $S$. This finishes the proof. \end{proof} \begin{remark} \begin{enumerate} \item In view of Lemma~\ref{lemma: plus combinatorial description}, the second property of this global lift $\dot{\phi}$ implies that the natural inclusion $S_{\dot{\phi}}^{\Sigma_{0}} \hookrightarrow S_{\phi}^{\Sigma_{0}}$ is an isomorphism here. So we can identify $S_{\dot{\phi}}^{\Sigma_{0}}$ with $S_{\phi}^{\Sigma_{0}}$. \item In later proofs, we would like to apply the discussions in Section~\ref{subsec: comparison of trace formulas} to such global parameters $\dot{\phi}$.
In fact, by our induction assumption and Proposition~\ref{prop: consistency on induction} we can replace Conjectures~\ref{conj: global L-packet}, \ref{conj: stable multiplicity formula} and \ref{conj: compatible normalization} by Theorem~\ref{thm: main global} for the proper Levi subgroups and twisted endoscopic groups of $\lif{\dot{G}}$. It is then clear that Lemma~\ref{lemma: twisted spectral expansion} is still valid for $\dot{\phi}$. Since we are only going to establish the stable multiplicity formula for discrete parameters in Theorem~\ref{thm: main global}, we need to require \[ \cS{\dot{\phi}, ell}^{\theta}(\dot{\omega}) = \cS{\dot{\phi}, ss}^{\theta}(\dot{\omega}) \] when $\dot{\phi}$ is not a discrete parameter in Lemma~\ref{lemma: twisted endoscopic expansion}. However, in our application this will always be satisfied by our choice of $\dot{\omega}$ and the fact that $\S{\lif{\dot{\phi}}} = 1$. \end{enumerate} \end{remark} \subsection{Proof of main local theorem}\label{subsec: proof of main local theorem} With all these refined lifting results, we can start to prove the main local theorem. Let $F$ be a nonarchimedean local field, \begin{align}\label{eq: elliptic parameter} \phi = \phi_{1} \+ \cdots \+ \phi_{q} \+ 2\phi_{q+1} \+ \cdots \+ 2\phi_{r} \in \cPel{G^{\theta}}, \end{align} where $\phi_{i}$ is simple for $1 \leqslant i \leqslant r$ and $\theta \in \Sigma_{0}$. The simplest cases are when the $\phi_{i}$ are quadratic characters $\eta_{i}$ for $1 \leqslant i \leqslant r$, and one can see that not all of these cases will follow from induction. So we have to treat the exceptional cases differently. In fact, regarding property (4) of Lemma~\ref{lemma: global lifting}, one only needs to consider the cases when $r \leqslant 4$, i.e., the trivial character $\varepsilon_{0}$, the unramified quadratic character $\varepsilon$, a ramified quadratic character $\eta$, and also $\eta \cdot \varepsilon$.
In fact, when $r = 4$ and $G$ is special even orthogonal, it can be further reduced to the case $r \leqslant 3$ by our induction argument, as one can see from the proof of Lemma~\ref{lemma: global lifting}. \begin{lemma}\label{lemma: character case} For $\phi$ shown in \eqref{eq: elliptic parameter}, if $\phi_{i} = \eta_{i}$, and $r \leqslant 3$ (or $r \leqslant 4$ when $G$ is symplectic), then the main local theorem (Theorem~\ref{thm: refined L-packet}) holds for $\tilde{\phi}$. \end{lemma} \begin{proof} There are two types of parameters which lead to nontrivial cases here. \\ Type I: \[ \phi = \eta_{1} \+ \eta_{2} \+ \eta_{3} \in \cPbd{G} \] Type II: \[ \phi \in \cPel{G^{\theta}} - \cPdt{G} \] For type I, $\widetilde{G} = GL(2)$ and $\S{\tilde{\phi}} = 1$ by Lemma~\ref{lemma: combinatorial description}, so the refined $L$-packet $\cPkt{\tilde{\phi}}$ is a singleton and hence determined by $\cPkt{\phi}$. Since the character of any irreducible admissible representation of $GL(2, F)$ is already stable, the packet $\cPkt{\tilde{\phi}}$ is then stable. Therefore the only thing we need to prove is the twisted character relation \eqref{eq: theta twisted character relation} for $\omega \in \a( \S{\phi} )$ and $\theta = \mathrm{id}$. To prove this, we use the stabilized $\dot{\omega}$-twisted trace formula for $\dot{\omega} \in \a( \S{\dot{\phi}} )$ and some global lift $\dot{\phi}$. Assume $\dot{\phi} = \dot{\eta}_{1} \# \dot{\eta}_{2} \# \dot{\eta}_{3}$ is a global lift of $\phi$.
Because the global $L$-packet for $GL(2)$ should also be a singleton and multiplicity one holds for $GL(2)$, the spectral side of the discrete part of the $\dot{\omega}$-twisted trace formula becomes \[ I^{(\lif{\dot{G}}, \dot{\omega})}_{disc, \dot{\phi}} ( \lif{\dot{f}} ) = tr R^{(\lif{\dot{G}}, \dot{\omega})}_{disc, \dot{\phi}} ( \lif{\dot{f}} ) = \sum_{\dot{\omega}' \in Y / \a(\S{\dot{\phi}})} m( \lif{\dot{\r}} \otimes \dot{\omega}', \dot{\omega} ) \lif{\dot{f}}_{\lif{\dot{G}}}( \lif{\dot{\r}} \otimes \dot{\omega}', \dot{\omega} ), \] where $\lif{\dot{\r}}$ is taken to be any representation in $\mathcal{A}_{2}(\lif{\dot{G}})$ whose restriction to $\dot{G}(\mathbb{A}_{\dot{F}})$ is contained in $\cPkt{\dot{\phi}}$, and \[ \lif{\dot{f}}_{\lif{\dot{G}}}( \lif{\dot{\r}} \otimes \dot{\omega}', \dot{\omega} ) = \prod_{v} \lif{\dot{f}}_{{\lif{\dot{G}}}_{v}}( \lif{\dot{\r}}_{v} \otimes \dot{\omega}_{v}', \dot{\omega}_{v} ), \] defined by \eqref{eq: theta twisted intertwining operator}. In particular, $m( \lif{\dot{\r}} \otimes \dot{\omega}', \dot{\omega} ) = \pm 1$. For the endoscopic side, we can apply Lemma~\ref{lemma: twisted endoscopic expansion} and get \[ I^{(\lif{\dot{G}}, \dot{\omega})}_{disc, \dot{\phi}} ( \lif{\dot{f}} ) = \sum_{\dot{\omega}' \in Y / \a(\S{\dot{\phi}})} \lif{\dot{f}}'_{\lif{\dot{G}}} (\lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x}), \] where $\a( \dot{x} ) = \dot{\omega}$ and $\S{\lif{\dot{\phi}}} = 1$. Therefore we have an identity \[ \sum_{\dot{\omega}' \in Y / \a(\S{\dot{\phi}})} m( \lif{\dot{\r}} \otimes \dot{\omega}', \dot{\omega} ) \lif{\dot{f}}_{\lif{\dot{G}}}( \lif{\dot{\r}} \otimes \dot{\omega}', \dot{\omega} ) = \sum_{\dot{\omega}' \in Y / \a(\S{\dot{\phi}})} \lif{\dot{f}}'_{\lif{\dot{G}}} (\lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x}). \] Note that strong multiplicity one also holds for $\lif{\dot{\phi}}$. This can be seen either from Lemma~\ref{lemma: splitting parameters} or from the fact that $\widetilde{G} = GL(2)$ here.
Then one can use the Satake parameters of representations of $\lif{\dot{G}}(\mathbb{A}_{\dot{F}})$ to distinguish the summands on both sides of the identity (cf. Lemma~\ref{lemma: trace formula component}). As a result, we get \[ m( \lif{\dot{\r}}, \dot{\omega} ) \lif{\dot{f}}_{\lif{\dot{G}}}( \lif{\dot{\r}}, \dot{\omega} ) = \lif{\dot{f}}'_{\lif{\dot{G}}} (\lif{\dot{\phi}}, \dot{x} ), \] where we may need to change $\lif{\dot{\r}}$ by twisting with some $\dot{\omega}' \in Y / \a(\S{\dot{\phi}})$ to get this equality. Therefore for any place $v$ one has \[ m( \lif{\dot{\r}}_{v}, \dot{\omega}_{v} ) \lif{\dot{f}}_{{\lif{\dot{G}}}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v} ) = \lif{\dot{f}}'_{\lif{\dot{G}}_{v}} (\lif{\dot{\phi}}_{v}, \dot{x}_{v} ), \] where the $m( \lif{\dot{\r}}_{v}, \dot{\omega}_{v} )$ are some constants in $\mathbb{C}^{\times}$. If we take $\lif{\dot{f}}_{v}$ supported on $\lif{Z}_{\dot{F}_{v}}\dot{G}(\dot{F}_{v})$, then $ \lif{\dot{f}}_{{\lif{\dot{G}}}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v} ) = \lif{\dot{f}}'_{\lif{\dot{G}}_{v}} (\lif{\dot{\phi}}_{v}, \dot{x}_{v} )$ by the character relation for $\dot{G}_{v}$, hence $m( \lif{\dot{\r}}_{v}, \dot{\omega}_{v} ) = 1$ and so is $m( \lif{\dot{\r}}, \dot{\omega} )$. In particular we have shown the twisted character relation for $\tilde{\phi}$ of type I. For type II, it also suffices to show the twisted character relation, in view of Lemma~\ref{lemma: refined L-packet for non-discrete parameter}. In fact, its proof is similar to the proof of the twisted character relation for a general parameter \[ \phi = \phi_{1} \+ \cdots \+ \phi_{q} \+ 2\phi_{q+1} \+ \cdots \+ 2\phi_{r} \in \cPel{G^{\theta}} - \cPdt{G}. \] So here we will carry out the general strategy. First we apply Lemma~\ref{lemma: global lifting} to $\phi$ and get a global lift $\dot{\phi}$ such that $\dot{\phi}_{u} = \phi$, $\S{\lif{\dot{\phi}}} = 1$, and $\Sigma_{0}$-strong multiplicity one holds for $\lif{\dot{\phi}}$ at the place $u$.
In particular, for the case of type II parameters considered here, it is true that both $\Sigma_{0}$-multiplicity one and $\Sigma_{0}$-strong multiplicity one hold according to Lemma~\ref{lemma: splitting parameters}. Next we would like to apply Lemma~\ref{lemma: twisted spectral expansion} and Lemma~\ref{lemma: twisted endoscopic expansion} to get an identity between the spectral expansion and the endoscopic expansion of the stabilized $(\theta, \dot{\omega})$-twisted trace formula. In view of Lemma~\ref{lemma: refined L-packet for non-discrete parameter}, we can assume that the semisimple element $s \in \cS{\phi}^{\theta}$ satisfies $|\cS{\phi, s}^{0}| < \infty$. This implies $s \in \cS{\phi, ell}^{\theta}$, and we denote its preimage in $\cS{\dot{\phi}, ell}^{\theta}$ by $\dot{s}$. Let $\dot{x}$ be the image of $\dot{s}$ in $\S{\dot{\phi}, ell}^{\theta}$. Since $\S{\lif{\dot{\phi}}} = 1$, we have $\dot{\omega} \neq 1$ and \[ C_{\lif{\dot{\phi}}} \sum_{ \dot{\omega}' \in Y / \a(\S{\dot{\phi}})} i^{\theta}_{\dot{\phi}}(\dot{x}) \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x} ) = C_{\lif{\dot{\phi}}} \sum_{ \dot{\omega}' \in Y / \a(\S{\dot{\phi}})} e'^{\theta}_{\dot{\phi}}(\dot{x}) \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x} ). \] It follows from Proposition~\ref{prop: endoscopy of complex group} and the fact that $\dot{s} \in \cS{\dot{\phi}, ell}^{\theta}$ that \[ i^{\theta}_{\dot{\phi}}(\dot{x}) = e'^{\theta}_{\dot{\phi}}(\dot{x}) \neq 0. \] Therefore \begin{align}\label{eq: character case} \sum_{ \dot{\omega}' \in Y / \a(\S{\dot{\phi}})} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x} ) = \sum_{ \dot{\omega}' \in Y / \a(\S{\dot{\phi}})} \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x} ).
\end{align} Since $\Sigma_{0}$-strong multiplicity one holds in this case, we can again use the Satake parameters of admissible representations of $\lif{\dot{G}}(\mathbb{A}_{\dot{F}})$ to distinguish the summands on both sides of the identity. As a result, we get \[ \label{eq: character case 2} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}}, \dot{x} ) = \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}}, \dot{x} ) \] and hence there exist constants $n_{v}$ for all places such that \[ \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\phi}}_{v}, \dot{x}_{v} ) = n_{v} \cdot \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\phi}}_{v}, \dot{x}_{v} ). \] If we take $\lif{\dot{f}}_{v}$ supported on $\lif{Z}_{\dot{F}_{v}}\dot{G}(\dot{F}_{v})$, then $\lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\phi}}_{v}, \dot{x}_{v} ) = \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\phi}}_{v}, \dot{x}_{v} )$ by the twisted local intertwining relation for $\dot{G}_{v}$. Therefore $n_{v} = 1$ and we have shown the twisted local intertwining relation for $\tilde{\phi}$. By Lemma~\ref{lemma: twisted intertwining relation 1}, this implies the twisted character relation for $\tilde{\phi}$. So we have finished the proof for the parameters of type II. \end{proof} \begin{theorem}\label{thm: twisted character relation for elliptic parameter} Suppose $F$ is a nonarchimedean local field and \( \phi \in \cPel{G^{\theta}} - \cPdt{G}, \) then the twisted character relation \eqref{eq: theta twisted character relation} holds for $\tilde{\phi}$. \end{theorem} \begin{proof} We first get \eqref{eq: character case} following the general strategy in the second part of the proof of Lemma~\ref{lemma: character case}. In this general case, we only know that $\Sigma_{0}$-strong multiplicity one holds at the place $u$ for $\lif{\dot{\phi}}$. But now we can assume the twisted character relation at all places except $u$.
This is because of the property of our lift $\dot{\phi}$ shown in Lemma~\ref{lemma: global lifting} and the fact that we have shown the twisted character relation for the exceptional cases considered in Lemma~\ref{lemma: character case}. Under these assumptions, we can conclude from the linear independence of twisted characters of $\otimes_{v \neq u} \bar{\mathcal{H}}(\lif{\dot{G}}_{v}, \lif{\dot{\chi}}_{v})$-modules (see \cite{Lemaire:2016}, A.4.1) that \[ (\lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{u}}( \lif{\dot{\phi}}_{u}, \dot{x}_{u} ) - \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{u}}( \lif{\dot{\phi}}_{u}, \dot{x}_{u} )) \prod_{v \neq u} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\phi}}_{v}, \dot{x}_{v} ) = 0, \] and hence \[ \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{u}}( \lif{\dot{\phi}}_{u}, \dot{x}_{u} ) = \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{u}}( \lif{\dot{\phi}}_{u}, \dot{x}_{u} ). \] This proves the twisted local intertwining relation, which implies the twisted character relation according to Lemma~\ref{lemma: twisted intertwining relation 1}. \end{proof} Now we can deal with the discrete parameters $\phi \in \cPdt{G}$. \begin{theorem}\label{thm: refined L-packet for discrete parameter} Suppose $F$ is a nonarchimedean local field and $\phi \in \cPdt{G}$, then the main local theorem (Theorem~\ref{thm: refined L-packet}) holds for $\tilde{\phi}$. \end{theorem} \begin{proof} We can apply Lemma~\ref{lemma: global lifting} to $\phi$, and because of Lemma~\ref{lemma: character case} and Theorem~\ref{thm: twisted character relation for elliptic parameter}, we can use the argument in Theorem~\ref{thm: standard argument on stability} to show parts (1) and (2) of the main local theorem. At the same time we can deduce the stable multiplicity formula for the global lift $\lif{\dot{\phi}}$ (cf. Remark~\ref{rk: standard argument on stability}), i.e.
\[ I^{\lif{\dot{G}}}_{disc, \dot{\phi}} (\lif{\dot{f}}) = S^{\lif{\dot{G}}}_{disc, \dot{\phi}} (\lif{\dot{f}}) = m_{\dot{\phi}} \sum_{\dot{\omega}' \in Y / \a(\S{\dot{\phi}})} \lif{\dot{f}}^{\lif{\dot{G}}}(\lif{\dot{\phi}} \otimes \dot{\omega}'). \] Hence the only thing left is to show the twisted character relation \eqref{eq: theta twisted character relation}. In order to deduce the $(\theta, \omega)$-twisted character relation we use the stabilized $(\theta, \dot{\omega})$-twisted trace formula. Note that \[ I^{(\lif{\dot{G}}^{\theta}, \dot{\omega})}_{disc, \dot{\phi}} (\lif{\dot{f}}) = tr R^{(\lif{\dot{G}}^{\theta}, \dot{\omega})}_{disc, \dot{\phi}} (\lif{\dot{f}}) = \sum_{\dot{\omega}'} \sum_{[\lif{\dot{\r}}] \in \cPkt{\lif{\dot{\phi}} \otimes \dot{\omega}' }} m(\lif{\dot{\r}}, \dot{\omega}) \prod_{v} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v}), \] where $\lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v})$ is normalized according to \eqref{eq: theta twisted intertwining operator}, the sum over $\dot{\omega}'$ is taken over \[ Y / \prod^{aut}_{v} \a(\S{\dot{\phi}_{v}}^{\Sigma_{0}}), \] and $|m(\lif{\dot{\r}}, \dot{\omega})|$ is some integer not larger than the multiplicity \[ m(\lif{\dot{\phi}}) := m_{\dot{\phi}} \, | \prod^{aut}_{v} \a(\S{\dot{\phi}_{v}}^{\Sigma_{0}}) | \, |\a(\S{\dot{\phi}}) |^{-1} \] of $\lif{\dot{\r}}$ as an $\bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$-module. By Lemma~\ref{lemma: twisted endoscopic expansion}, we get \[ I^{(\lif{\dot{G}}^{\theta}, \dot{\omega})}_{disc, \dot{\phi}} (\lif{\dot{f}}) = m_{\dot{\phi}} \sum_{\dot{\omega}' \in Y / \a(\S{\dot{\phi}})} \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}}( \lif{\dot{\phi}} \otimes \dot{\omega}', \dot{x}), \] where $\a(\dot{x}) = \dot{\omega}$. By Lemma~\ref{lemma: character case} and Theorem~\ref{thm: twisted character relation for elliptic parameter}, we can assume the twisted character relations for all places $v \neq u$.
Then it follows from the linear independence of twisted characters of $\otimes_{v \neq u} \bar{\mathcal{H}}(\lif{\dot{G}}_{v}, \lif{\dot{\chi}}_{v})$-modules and $\Sigma_{0}$-strong multiplicity one for $\lif{\dot{\phi}}$ at the place $u$ that \begin{align}\label{eq: refined L-packet for discrete parameter} \sum_{[\lif{\dot{\r}}] \in \cPkt{\lif{\dot{\phi}}}} m(\lif{\dot{\r}}, \dot{\omega}) \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{u}} ( \lif{\dot{\r}}_{u}, \dot{\omega}_{u}) \prod_{v \neq u} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v}) = m(\lif{\dot{\phi}}) \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{u}} (\lif{\dot{\phi}}_{u}, \dot{x}_{u}) \prod_{v \neq u} ( \sum_{[\lif{\dot{\r}}_{v}] \in \cPkt{\lif{\dot{\phi}}_{v}}} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v}) ). \end{align} Now we choose $\lif{\dot{f}} = \otimes_{v}\lif{\dot{f}}_{v}$ with $\lif{\dot{f}}_{u}$ supported on $\lif{Z}_{F}G(F)$, and thus \[ \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{u}} (\lif{\dot{\phi}}_{u}, \dot{x}_{u}) = \sum_{[\lif{\dot{\r}}_{u}] \in \cPkt{\lif{\dot{\phi}}_{u}}} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{u}} ( \lif{\dot{\r}}_{u}, \dot{\omega}_{u}). \] Substituting such test functions into the identity \eqref{eq: refined L-packet for discrete parameter}, one deduces that \[ m(\lif{\dot{\r}}, \dot{\omega}) = m(\lif{\dot{\phi}}). \] Therefore \[ m(\lif{\dot{\phi}}) ( \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{u}} (\lif{\dot{\phi}}_{u}, \dot{x}_{u}) - \sum_{[\lif{\dot{\r}}_{u}] \in \cPkt{\lif{\dot{\phi}}_{u}}} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{u}} ( \lif{\dot{\r}}_{u}, \dot{\omega}_{u})) \prod_{v \neq u} ( \sum_{[\lif{\dot{\r}}_{v}] \in \cPkt{\lif{\dot{\phi}}_{v}}} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{v}}( \lif{\dot{\r}}_{v}, \dot{\omega}_{v}) ) = 0 \] for all $\lif{\dot{f}} \in \bar{\mathcal{H}}(\lif{\dot{G}}, \lif{\dot{\chi}})$.
So we must have \[ \lif{\dot{f}}'_{\lif{\dot{G}}^{\theta}_{u}} (\lif{\dot{\phi}}_{u}, \dot{x}_{u}) = \sum_{[\lif{\dot{\r}}_{u}] \in \cPkt{\lif{\dot{\phi}}_{u}}} \lif{\dot{f}}_{\lif{\dot{G}}^{\theta}_{u}} ( \lif{\dot{\r}}_{u}, \dot{\omega}_{u}), \] and this finishes the proof of the $(\theta, \omega)$-twisted character relation. \end{proof} At this point, we have proved our main local theorem (Theorem~\ref{thm: refined L-packet}) for $\widetilde{G} = \widetilde{G}(N)$, and the general case is just a corollary of that. \begin{corollary}\label{cor: refined L-packet for general group} Suppose \[ G = G(n_{1}) \times G(n_{2}) \times \cdots \times G(n_{q}), \] with $n_{i} \leqslant N$ for $1 \leqslant i \leqslant q$ and $\phi \in \cPbd{G}$, then the main local theorem (Theorem~\ref{thm: refined L-packet}) holds for $\tilde{\phi}$. \end{corollary} \begin{proof} Let us write $\phi = \phi_{1} \times \phi_{2} \times \cdots \times \phi_{q}$ such that $\phi_{i} \in \cPbd{G(n_{i})}$ for $1 \leqslant i \leqslant q$. Note that \[ \widetilde{G} \subseteq \widetilde{G}(n_{1}) \times \widetilde{G}(n_{2}) \times \cdots \times \widetilde{G}(n_{q}) \] form a pair of groups satisfying the assumption in Section~\ref{subsubsec: notations}. Since $\cPkt{\tilde{\phi}_{i}}$ is well defined now, we can define $\cPkt{\tilde{\phi}}$ to be the restriction of $\bigotimes_{i = 1}^{q} \cPkt{\tilde{\phi}_{i}}$ to $\widetilde{G}(F)$. It is clear that $\cPkt{\tilde{\phi}}$ satisfies parts (1) and (2) of Theorem~\ref{thm: refined L-packet}. Moreover, the twisted endoscopic groups of $\widetilde{G}$ lift to twisted endoscopic groups of $\widetilde{G}(n_{1}) \times \widetilde{G}(n_{2}) \times \cdots \times \widetilde{G}(n_{q})$ by Proposition~\ref{prop: lifting endoscopic group}.
Then it is a consequence of Corollary~\ref{cor: twisted endoscopic transfer} that the twisted character relations of $\widetilde{G}$ follow from those of $\widetilde{G}(n_{1}) \times \widetilde{G}(n_{2}) \times \cdots \times \widetilde{G}(n_{q})$, again by restriction. This completes the proof in the general case. \end{proof} \subsection{Proof of global theorem} \label{subsec: proof of global theorems} In this section, we are going to prove the global theorem, i.e., Theorem~\ref{thm: main global}. We will keep the notation as in Section~\ref{subsec: beginning of proofs}. Suppose $F$ is global, \[ G = G(n_{1}) \times G(n_{2}) \times \cdots \times G(n_{q}), \] with $\sum_{i =1}^{q} n_{i} = N$. Now we can assume the main local theorem for $\widetilde{G}$ thanks to Section~\ref{subsec: proof of main local theorem}. We will first prove the corresponding statement of Conjecture~\ref{conj: global L-packet} for $\widetilde{G}$. \begin{theorem}\label{thm: global L-packet for discrete parameter} We assume $\phi \in \cP{G}$ satisfies the assumption in Theorem~\ref{thm: main global}. \begin{enumerate} \item One can associate a global packet $\cPkt{\tilde{\phi}}$ of irreducible admissible representations of $\widetilde{G}(\mathbb{A}_{F})$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules, satisfying the following properties: \begin{enumerate} \item $\cPkt{\tilde{\phi}} = \bigotimes'_{v} \cPkt{\tilde{\phi}_{v}}$, where $\cPkt{\tilde{\phi}_{v}}$ is some lift of $\cPkt{\phi_{v}}$ defined in Theorem~\ref{thm: refined L-packet}. \item there exists $[\tilde{\pi}] \in \cPkt{\tilde{\phi}}$ such that $\tilde{\pi}$ is isomorphic to an automorphic representation as $\bar{\mathcal{H}}(\widetilde{G})$-modules. \end{enumerate} Moreover, $\cPkt{\tilde{\phi}}$ is unique up to twisting by characters of $\widetilde{G}(\mathbb{A}_{F}) / \widetilde{G}(F)G(\mathbb{A}_{F})$.
And we can define a global character of $\S{\tilde{\phi}}$ by \[ <x, \tilde{\pi}> := \prod_{v} <x_{v}, \tilde{\pi}_{v}> \,\,\,\,\, \text{ for } \, [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \text{ and } \, x \in \S{\tilde{\phi}}. \] \item If $\phi \in \cPdt{G}$, the $\phi$-component of the $\lif{\zeta}$-equivariant discrete spectrum of $\widetilde{G}(\mathbb{A}_{F})$ as $\bar{\mathcal{H}}(\widetilde{G})$-modules can be decomposed as follows: \[ L^{2}_{disc, \phi}(\widetilde{G}(F) \backslash \widetilde{G}(\mathbb{A}_{F}), \lif{\zeta}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega \\ <\cdot, \tilde{\pi}> = 1}} \tilde{\pi}, \] where $m_{\phi}$ is defined as in Remark~\ref{rk: discrete spectrum}. \end{enumerate} \end{theorem} \begin{proof} If $\phi$ factors through $\phi_{M} \in \cPdt{M}$ for some proper Levi subgroup $M$ of $G$, then by our induction assumption, we have a global $L$-packet $\cPkt{\tilde{\phi}_{M}}$ for $\widetilde{M}$, and we can define $\cPkt{\tilde{\phi}}$ to consist of the irreducible constituents induced from $\cPkt{\tilde{\phi}_{M}}$. So it is enough to consider the case $\phi \in \cPdt{G}$. Note that one can always define a global packet $\cPkt{\tilde{\phi}}$ as follows. If $\tilde{\pi} \in \mathcal{A}_{2}(\widetilde{G})$ and its restriction to $G(\mathbb{A}_{F})$ has irreducible constituents contained in $\cPkt{\phi}$, then we can take the local lift $\cPkt{\tilde{\phi}_{v}}$ of $\cPkt{\phi_{v}}$ to be the one containing $[\tilde{\pi}_{v}]$. We form a global packet \[ \cPkt{\tilde{\phi}} := \prod_{v} \cPkt{\tilde{\phi}_{v}}, \] and the uniqueness property should follow from Corollary~\ref{cor: modular character} and the decomposition in Part (2).
To show Part (2), we can apply Lemma~\ref{lemma: twisted endoscopic expansion} to get \begin{align}\label{eq: global L-packet for discrete parameter 1} \Idt{\widetilde{G}}{, \phi}(\tilde{f}) = \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) + C_{\tilde{\phi}} \sum_{\omega \in Y / \a(\S{\phi})} \sum_{x \in \S{\tilde{\phi}} - \{1\}} \tilde{f}'_{\widetilde{G}}(\tilde{\phi} \otimes \omega, x). \end{align} By the local character relation, one can define a global packet $\cPkt{\tilde{\phi}_{x}}$ transferred from $\cPkt{\tilde{\phi}'}$ for any $x \in \S{\tilde{\phi}} - \{1\}$. Next we would like to add \begin{align}\label{eq: global L-packet for discrete parameter 2} 2 \cdot C_{\tilde{\phi}} \sum_{\omega \in Y / \a(\S{\phi})} \sum_{x \in \S{\tilde{\phi}} - \{1\}} \sum_{ \substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}_{x}} \\ <x, \tilde{\pi}> = -1}} \tilde{f}_{\widetilde{G}}(\tilde{\pi} \otimes \omega) \end{align} to both sides of \eqref{eq: global L-packet for discrete parameter 1}. Note that this sum does not include $x \in \S{\tilde{\phi}} - \{1\}$ such that $<x, \tilde{\pi}> = 1$ for all $[\tilde{\pi}] \in \cPkt{\tilde{\phi}_{x}}$. For those $x$ which do not contribute to \eqref{eq: global L-packet for discrete parameter 2}, we have $\tilde{f}'_{\widetilde{G}}(\tilde{\phi} \otimes \omega, x) =\tilde{f}^{\widetilde{G}}(\tilde{\phi}_{x} \otimes \omega)$, which is defined by $\cPkt{\tilde{\phi}_{x}}$ and is stable. Then the right hand side becomes \[ \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) + C_{\tilde{\phi}} \sum_{\omega \in Y / \a(\S{\phi})} \sum_{x \in \S{\tilde{\phi}} - \{1\}} \tilde{f}^{\widetilde{G}}(\tilde{\phi}_{x} \otimes \omega), \] which is again stable.
So the left-hand side \begin{align}\label{eq: global L-packet for discrete parameter} \Idt{\widetilde{G}}{, \phi}(\tilde{f}) + 2 \cdot C_{\tilde{\phi}} \sum_{\omega \in Y / \a(\S{\phi})} \sum_{x \in \S{\tilde{\phi}} - \{1\}} \sum_{ \substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}_{x}} \\ <x, \tilde{\pi}> = -1}} \tilde{f}_{\widetilde{G}}(\tilde{\pi} \otimes \omega) \end{align} is also stable. By \eqref{eq: discrete part vs discrete spectrum}, \[ \Idt{\widetilde{G}}{, \phi}(\tilde{f}) = tr R_{disc, \phi}^{\widetilde{G}}(\tilde{f}). \] Let $\cPkt{\tilde{\phi}}$ be the global packet defined in the beginning with respect to some fixed $\tilde{\pi}^{0} \in \mathcal{A}_{2}(\widetilde{G})$. Since \eqref{eq: global L-packet for discrete parameter} is stable, it is stable at every place. So we can take $\tilde{f} = \otimes_{w} \tilde{f}_{w}$ and fix $\otimes_{w \neq v}\tilde{f}_{w}$ for any place $v$, then by Corollary~\ref{cor: refined L-packet} the coefficient of $\tilde{f}_{v}(\tilde{\pi}_{v})$ in \eqref{eq: global L-packet for discrete parameter} must be the same for all $\tilde{\pi}_{v} \in \cPkt{\tilde{\phi}_{v}}$. By varying $\otimes_{w \neq v}\tilde{f}_{w}$ and the linear independence of characters of $\otimes_{w \neq v} \bar{\mathcal{H}}(\widetilde{G}_{w}, \lif{\chi}_{w})$-modules, we have that \[ [\tilde{\pi}^{0}] = [\tilde{\pi}^{0}_{v}] \otimes (\otimes_{w \neq v} [\tilde{\pi}^{0}_{w}]) \] contributes to \eqref{eq: global L-packet for discrete parameter} if and only if all elements in \[ \cPkt{\tilde{\phi}_{v}} \otimes (\otimes_{w \neq v} [\tilde{\pi}^{0}_{w}]) \] also contribute to \eqref{eq: global L-packet for discrete parameter}. By repeating this kind of argument, one can show that all elements in $\cPkt{\tilde{\phi}}$ contribute to \eqref{eq: global L-packet for discrete parameter}.
Note that for any $[\tilde{\pi}] \in \cPkt{\tilde{\phi}}$ such that $<\cdot, \tilde{\pi}> = 1$, it can only contribute to $\Idt{\widetilde{G}}{, \phi}(\tilde{f})$, which means it belongs to $\mathcal{A}_{2}(\widetilde{G})$. Then the decomposition in Part (2) will follow from Proposition~\ref{prop: discrete spectrum} immediately. \end{proof} \begin{remark}\label{rk: global L-packet for discrete parameter} Following the proof, we can also apply the same argument to elements in $\cPkt{\tilde{\phi}_{x}}$ which contribute to \eqref{eq: global L-packet for discrete parameter}. It follows that all elements in $\cPkt{\tilde{\phi}_{x}}$ contribute to \eqref{eq: global L-packet for discrete parameter}. For $[\tilde{\pi}] \in \cPkt{\tilde{\phi}_{x}}$ such that $<\cdot, \tilde{\pi}> = 1$, it can only come from $\Idt{\widetilde{G}}{, \phi}(\tilde{f})$. So $\cPkt{\tilde{\phi}_{x}} = \cPkt{\tilde{\phi}}$ up to twisting by some character in $Y$. Note that this is only true for $x \in \S{\tilde{\phi}} - \{1\}$ in the sum \eqref{eq: global L-packet for discrete parameter 2}. As a result, \eqref{eq: global L-packet for discrete parameter} becomes \[ m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} \tilde{f}^{\widetilde{G}}(\tilde{\phi} \otimes \omega), \] where \[ \tilde{f}^{\widetilde{G}} (\tilde{\phi} \otimes \omega) = \prod_{v} \tilde{f}_{v}(\tilde{\phi}_{v} \otimes \omega_{v}). \] Moreover, we have \[ \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} \tilde{f}^{\widetilde{G}}(\tilde{\phi} \otimes \omega) - C_{\tilde{\phi}} \sum_{\omega \in Y / \a(\S{\phi})} \sum_{x \in \S{\tilde{\phi}} - \{1\}} \tilde{f}^{\widetilde{G}}(\tilde{\phi}_{x} \otimes \omega). \] Suppose $\cPkt{\tilde{\phi}_{x}} = \cPkt{\tilde{\phi}}$ up to twisting by some character in $Y$ for all $x \in \S{\tilde{\phi}} - \{1\}$, then \[ \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} |\S{\tilde{\phi}}|^{-1} \tilde{f}^{\widetilde{G}}(\tilde{\phi} \otimes \omega).
\] This is the stable multiplicity formula in Conjecture~\ref{conj: stable multiplicity formula}. We will come back to this identity in Theorem~\ref{thm: stable multiplicity formula}. \end{remark} Next, we will prove the corresponding statement of Conjecture~\ref{conj: compatible normalization} for $\widetilde{G}$. \begin{theorem}\label{thm: compatible normalization 1} Suppose $\phi \in \cPdt{G}$ satisfies the assumption in Theorem~\ref{thm: main global} and $x \in \S{\phi}^{\theta}$ with $\a(x) = \omega$ for $\theta \in \Sigma_{0}$ and some character $\omega$ of $\widetilde{G}(\mathbb{A}_{F})/\widetilde{G}(F)G(\mathbb{A}_{F})$. For $[\tilde{\pi}] \in \cPkt{\tilde{\phi}}$ with $<\cdot, \tilde{\pi}> =1$, the canonical intertwining operator \[ R(\theta)^{-1} \circ R(\omega) \] restricted to the $\tilde{\pi}$-isotypic component $I(\tilde{\pi})$ is equal to the product of $m(\tilde{\pi})$ and the local intertwining operators $A_{\tilde{\pi}_{v}}(\theta, \omega_{v})$ determined by $x_{v}$ (see \eqref{eq: theta twisted intertwining operator}), i.e. \begin{align}\label{eq: compatible normalization 1} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega' \in Y / \a(\S{\phi})} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega), \,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}), \end{align} where $ \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = \prod_{v} \tilde{f}_{\widetilde{G}^{\theta}_{v}}(\tilde{\pi}_{v}, \omega_{v})$, and it does not depend on $x$. \end{theorem} \begin{proof} It follows from Theorem~\ref{thm: global L-packet for discrete parameter} that \[ \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = tr R^{(\widetilde{G}^{\theta},
\omega)}_{disc, \phi}(\tilde{f}) = \sum_{\omega'} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} m(\tilde{\pi}, \theta, \omega) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) \] where the sum of $\omega'$ is taken over \[ Y / \prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}), \] and $|m(\tilde{\pi}, \theta, \omega)|$ is some integer less than or equal to \[ m(\tilde{\phi}) := m_{\phi} \, |\prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}) | \, | \a(\S{\phi})|^{-1}. \] By Lemma~\ref{lemma: twisted endoscopic expansion}, we have \[ \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = |\S{\tilde{\phi}}|^{-1} \sum_{\omega'} \sum_{x' \in \S{\phi}^{\theta}(\omega)} m(\tilde{\phi}) \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x'), \] where the sum of $\omega'$ is again over \[ Y / \prod^{aut}_{v} \a(\S{\phi_{v}}^{\Sigma_{0}}). \] Therefore \begin{align*} \sum_{\omega'} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} m(\tilde{\pi}, \theta, \omega) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = |\S{\tilde{\phi}}|^{-1} \sum_{\omega'} \sum_{x' \in \S{\phi}^{\theta}(\omega)} m(\tilde{\phi}) \tilde{f}'_{\widetilde{G}^{\theta}}(\tilde{\phi} \otimes \omega', x'). \end{align*} By the twisted character relation, one can define a global packet $\cPkt{\tilde{\phi}_{x'}}$ transferred from $\cPkt{\tilde{\phi}'}$ for any $x' \in \S{\phi}^{\theta}(\omega)$. 
Note that $\S{\phi}^{\theta}(\omega) = x \cdot \S{\tilde{\phi}}$, then \begin{align}\label{eq: compatible normalization 2} \sum_{\omega'} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} m(\tilde{\pi}, \theta, \omega) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = |\S{\tilde{\phi}}|^{-1} \sum_{\omega'} \sum_{y \in \S{\tilde{\phi}}} m(\tilde{\phi}) \sum_{[\tilde{\pi}] \in \cPkt{\tilde{\phi}_{xy}} \otimes \omega'} <y, \tilde{\pi}> \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega), \end{align} where $\tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega)$ is normalized by $x$ (cf. \eqref{eq: theta twisted intertwining operator}). This implies \begin{align*} \sum_{\omega'} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} m(\tilde{\pi}, \theta, \omega) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = |\S{\tilde{\phi}}|^{-1} \sum_{\omega'} \sum_{y \in \S{\tilde{\phi}}} m(\tilde{\phi}) \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}_{xy}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega). \end{align*} It follows from the linear independence of twisted characters of $\bar{\mathcal{H}}(\widetilde{G}, \lif{\chi})$-modules that we can choose $\cPkt{\tilde{\phi}_{x'}} = \cPkt{\tilde{\phi}}$ for all $x' \in \S{\phi}^{\theta}(\omega)$. Then \begin{align*} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \\ <\cdot, \tilde{\pi}> = 1}} m(\tilde{\pi}, \theta, \omega) \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega) = m(\tilde{\phi}) \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \\ <\cdot, \tilde{\pi}> = 1}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega). \end{align*} So $m(\tilde{\pi}, \theta, \omega) = m(\tilde{\phi})$.
Hence \begin{align*} \tIdt{\widetilde{G}^{\theta}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega' \in Y / \a(\S{\phi})} \sum_{\substack{ [\tilde{\pi}] \in \cPkt{\tilde{\phi}} \otimes \omega' \\ <\cdot, \tilde{\pi}> = 1}} \tilde{f}_{\widetilde{G}^{\theta}}(\tilde{\pi}, \omega). \end{align*} \end{proof} Now we are only left with proving the corresponding statement of Conjecture~\ref{conj: stable multiplicity formula}. From Remark~\ref{rk: global L-packet for discrete parameter}, we see the key is to prove the functoriality of endoscopic transfer. Here we would like to consider a more general notion of that, i.e., {\bf functoriality of twisted endoscopic transfer}, and we formulate it in our context as follows. For $\phi \in \cP{G}$ and semisimple $s \in \cS{\phi}$, let $(G', \phi') \rightarrow (\phi, s)$ and $\widetilde{G}' \in \tEnd{}{\widetilde{G}}$ be the lift of $G'$; the functoriality of twisted endoscopic transfer means that the global $L$-packet $\cPkt{\tilde{\phi}'}$ transfers to a global $L$-packet $\cPkt{\tilde{\phi}}$ through the local twisted character relation \eqref{eq: theta twisted character relation}. By the same argument as in the proof of Lemma~\ref{lemma: induced twisted character}, we see that the transfer of $\cPkt{\tilde{\phi}'}$ only depends on the image $x$ of $s$ in $\S{\phi}$. So we can denote the transfer image by $\cPkt{\tilde{\phi}_{x}}$. \begin{lemma}\label{lemma: functoriality for simple group} Suppose $\widetilde{G} = \widetilde{G}(n)$ for $n \leqslant N$, $\phi \in \cP{G}$ such that $\S{\tilde{\phi}} = 1$, then $\cPkt{\tilde{\phi}_{x}} = \cPkt{\tilde{\phi}}$ up to twisting by some character in $Y$ for any $x \in \S{\phi}$. \end{lemma} \begin{proof} For semisimple $s \in \cS{\phi}$, let $(G', \phi') \rightarrow (\phi, s)$.
Suppose $|\cS{\phi, s}^{0}| = \infty$, let $T_{\phi, s}$ be a maximal torus of $(S_{\phi, s})^{0}$, then $\D{M}' = \text{Cent}(T_{\phi, s}, \D{G}')$ defines a proper Levi subgroup of $\D{G}'$ such that $\phi'$ factors through $\phi'_{M} \in \cPdt{M'}$. Moreover, $M' \in \End{ell}{M}$ for a proper Levi subgroup $M$ of $G$, which is determined by $\D{M} = \text{Cent}(T_{\phi, s}, \D{G})$. So $\phi$ factors through $\phi_{M} \in \cP{M}$, and we can reduce this case to $\widetilde{M}$. Next we assume $|\cS{\phi, s}^{0}| < \infty$, then $\phi \in \cPel{G}$. In particular, $s \in \cS{\phi, ell}$ and we let $x$ be its image in $\S{\phi}$. We can also assume $x \neq 1$, then $\S{\tilde{\phi}} = 1$ implies $\a(x) = \omega \neq 1$. By Lemma~\ref{lemma: twisted endoscopic expansion}, we have \[ \tIdt{\widetilde{G}}{, \phi} (\tilde{f}) = C_{\tilde{\phi}} \sum_{\omega' \in Y / \a(\S{\phi})} e'_{\phi}(x) \tilde{f}_{\widetilde{G}}' (\tilde{\phi} \otimes \omega', x). \] By Lemma~\ref{lemma: twisted spectral expansion} and Theorem~\ref{thm: compatible normalization 1}, we have \[ \tIdt{\widetilde{G}}{, \phi} (\tilde{f}) = C_{\tilde{\phi}} \sum_{\omega' \in Y / \a(\S{\phi})} i_{\phi}(x) \tilde{f}_{\widetilde{G}}(\tilde{\phi} \otimes \omega', x). \] Note that $e'_{\phi}(x) = i_{\phi}(x) \neq 0$. Then by the linear independence of twisted characters, we have $\cPkt{\tilde{\phi}_{x}} = \cPkt{\tilde{\phi}}$ up to twisting by some character in $Y$. \end{proof} It is not hard to extend this result to the general case. \begin{lemma}\label{lemma: functoriality} Suppose $G = G(n_{1}) \times G(n_{2}) \times \cdots \times G(n_{q})$, and $\phi = \phi_{1} \times \phi_{2} \times \cdots \times \phi_{q} \in \cP{G}$ with $\phi_{i} \in \cP{G(n_{i})}$ for $1 \leqslant i \leqslant q$. If $\S{\tilde{\phi}_{i}} = 1$ for all $i$, then $\cPkt{\tilde{\phi}_{x}} = \cPkt{\tilde{\phi}}$ up to twisting by some character in $Y$ for any $x \in \S{\phi}$.
\end{lemma} \begin{proof} If we write the image of $x$ in $\S{\phi_{i}}$ by $x_{i}$, then by Lemma~\ref{lemma: functoriality for simple group}, $\bigotimes_{i = 1}^{q} \cPkt{\tilde{\phi}_{x_{i}}}$ is a global $L$-packet of $\widetilde{G}(n_{1}) \times \widetilde{G}(n_{2}) \times \cdots \times \widetilde{G}(n_{q})$, and $\cPkt{\tilde{\phi}_{x}}$ is its restriction to $\widetilde{G}$. Since the restriction of a global $L$-packet is again a global $L$-packet, then $\cPkt{\tilde{\phi}_{x}} = \cPkt{\tilde{\phi}}$ up to twisting by some character in $Y$. \end{proof} \begin{remark} We would like to point out that in the case of this lemma, $\S{\tilde{\phi}}$ can be nontrivial even though $\S{\tilde{\phi}_{i}} = 1$ for all $i$. For example, let $G = Sp(2n) \times Sp(2n)$ and $\phi = (\phi_{1} \# \phi_{2}) \times (\phi_{1} \# \phi_{2}) \in \cPdt{G}$. We assume the central characters satisfy $\eta_{\phi_{1}} = \eta_{\phi_{2}} \neq 1$, then $\S{\tilde{\phi}} \cong \mathbb{Z}/2\mathbb{Z}$. \end{remark} Now we can prove the corresponding statement of Conjecture~\ref{conj: stable multiplicity formula}. \begin{theorem}\label{thm: stable multiplicity formula} Suppose $G = G(n_{1}) \times G(n_{2}) \times \cdots \times G(n_{q})$, and $\phi = \phi_{1} \times \phi_{2} \times \cdots \times \phi_{q} \in \cPdt{G}$ with $\phi_{i} \in \cP{G(n_{i})}$ for $1 \leqslant i \leqslant q$. If $\S{\tilde{\phi}_{i}} = 1$ for all $i$, then \[ \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} |\S{\tilde{\phi}}|^{-1} \sigma( \com[0]{\cS{\phi}}) \tilde{f}^{\widetilde{G}} (\tilde{\phi} \otimes \omega), \,\,\,\,\,\,\, \tilde{f} \in \bar{\mathcal{H}}(\widetilde{G}, \lif{\chi}).
\] \end{theorem} \begin{proof} It follows from Remark~\ref{rk: global L-packet for discrete parameter} and Lemma~\ref{lemma: functoriality} that \[ \Sdt{\widetilde{G}}{, \phi}(\tilde{f}) = m_{\phi} \sum_{\omega \in Y / \a(\S{\phi})} |\S{\tilde{\phi}}|^{-1} \tilde{f}^{\widetilde{G}} (\tilde{\phi} \otimes \omega). \] Note in this case $\sigma( \com[0]{\cS{\phi}}) = 1$. This finishes the proof. \end{proof} Up to now, we have proved the local and global theorems for $\widetilde{G}$ under our induction assumptions, when $G$ does not contain any factor of $SO(2N+2, \eta)$ (cf. Remark~\ref{rk: induction assumption}). By adding these results to our induction assumptions, we can prove the general case by repeating the previous arguments in Section~\ref{subsec: proof of main local theorem} and Section~\ref{subsec: proof of global theorems} without any change.
\section[Introduction]{Introduction} Undoubtedly, stability analysis is one of the most important topics in dynamical systems theory. Traditionally, the stability of {\it a particular solution} of a dynamical system is analyzed, most often the origin, for example when studying the global error dynamics in the state trajectory tracking problem \cite{Liu_Huang}, \cite{Mazenc}. Stability analysis in the global sense can also be beneficial in the study of the convergence to zero of {\it all solutions} in cases when the origin is not a solution of the perturbed system, for example, when the origin is a stable equilibrium of the nominal (unperturbed) system and we are interested in the effect of an external disturbance on the behavior of the system as part of a robustness analysis. Some new results in this field are direct consequences of the second part of Theorem~\ref{theorem_main} and are demonstrated in Example~\ref{example1}. \section{Notations and preliminaries} Our purpose here is to prove a new result regarding the global asymptotic and global uniform exponential stability of {\it all solutions} of the perturbed nonlinear system \begin{equation}\label{original_system} \dot x=f(x,t)+\delta(t),\quad x\in\mathbb{R}^n, \quad t\geq t_0, \end{equation} given that $x=0$ may not be a solution of the nominal system $\dot x=f(x,t)$ and that the nominal vector field $f$ and the perturbation $\delta$ satisfy certain conditions described in terms of the logarithmic norm. In other words, we focus here on systems whose trajectories converge to one another, in general without being attracted toward some equilibrium position. The underlying idea is simple: {\it if we have proved the asymptotic stability of all solutions at once, we do not have to deal with the stability properties of a particular solution}, especially if finding it is itself a difficult task \cite{Pavlov}.
Two similar, but not entirely equivalent \cite{Ruffer}, stability notions have been established: one is the long-established notion of convergent systems \cite{Demidovich}, \cite{Pavlov}; the other is the younger notion of incremental stability \cite{Angeli}. \begin{defi}[cf.~\cite{Ruffer}] \label{GIS} System (\ref{original_system}) is incrementally asymptotically stable in a positively invariant set $X\subset\mathbb{R}^n$ if there exists a function $\beta\in{\cal K}{\cal L}$ \cite{Khalil} such that for any two solutions $ x(t)$ and $x^*(t)$ with $x(t_0), x^*(t_0)\in X$ and $t\geq t_0,$ \[ \vert x(t)-x^*(t)\vert\leq\beta\big(\vert x(t_0)-x^*(t_0)\vert, t-t_0 \big). \] In the case $X = \mathbb{R}^n$ we say that system (\ref{original_system}) is globally incrementally stable (GIS). \end{defi} The aim of this paper is to provide an alternative approach for assessing GIS based on the logarithmic norm and the variation of constants formula applied to auxiliary linear time-varying systems. We obtain more general results than those achieved by using (quadratic) Lyapunov-like functions, which until now have been practically the only applicable method; see, e.g., \cite{Pavlov}, \cite{Ruffer} and the references therein. Moreover, the proposed approach turns out to be somewhat simpler than finding some implicit integral of motion as in Lyapunov theory. A completely new result appears to be the establishment of conditions for the convergence of all solutions of system (\ref{original_system}) to $0$ as $t\to\infty$ even if $x=0$ is not an equilibrium position of the nominal system $\dot x=f(x,t)$, in Theorem~\ref{theorem_main}; the context and novelty are explained in Remarks~\ref{Demidovich1} and~\ref{Demidovich2}.
From another point of view, if $f(0,t)=0$ for all $t\geq t_0,$ Theorem~\ref{theorem_main} gives sufficient conditions for robustness of the global asymptotic stability of the equilibrium point $x=0$ to the external perturbation $\delta(t),$ and we come to the surprising conclusion that the origin may remain \enquote{attractive} even for unbounded (and possibly unknown) perturbations $\delta(t),$ as is demonstrated in Example~\ref{example1}. This example shows at the same time that the conditions imposed on the system in Theorem~\ref{theorem_main} cannot be weakened too much. Thus the results achieved in this paper contradict the opinion formulated in the classic monograph on dynamical systems \cite[Chapter~9, p.~346]{Khalil}, where it is written: \enquote{{\it The origin $x = 0$ may not be an equilibrium point of the perturbed system. We can no longer study stability of the origin as an equilibrium point, nor should we expect the solution of the perturbed system to approach the origin as $t\to\infty.$ The best we can hope for is that $x(t)$ will be ultimately bounded by a small bound, if the perturbation term is small in some sense.}} \subsection{Notations} Let $\mathbb{R}^n$ denote an $n-$dimensional vector space endowed with a vector norm $\vert\cdot\vert,$ and let $\norm{\cdot}$ be the induced norm for matrices, $\norm{A}=\max\{\vert{Ax}\vert;$ $\vert{x}\vert=1\}.$ In the specific situation when the vector norm is derived from the weighted inner product $(x,y)_P\triangleq y^TPx$ on $\mathbb{R}^n$ and $\vert x\vert_P\triangleq(x,x)_P^{1/2},$ where $P$ is a symmetric and positive definite matrix, we use the notation with the subscript $P,$ $\vert\cdot\vert_P,$ $\norm{\cdot}_P,$ {\it etc.} Obviously, for $P=I$ (the unit matrix on $\mathbb{R}^n$) we obtain the Euclidean norm, $\vert\cdot\vert_I.$ Throughout the whole paper, the superscript \enquote{\,T\,} indicates the transpose operator.
We always assume that the function $f:$ $\mathbb{R}^n\times[t_0,\infty)\to\mathbb{R}^n$ is continuously differentiable in $x$ and continuous in $t$ and that the perturbation $\delta:$ $[t_0,\infty)\to\mathbb{R}^n$ is continuous. The perturbing term $\delta(t)$ aggregates all external disturbances which affect the nominal system $\dot x=f(x,t),$ where, as usual, the overdot represents the derivative of the state variable $x=x(t)$ with respect to time $t.$ Let us denote by $J_xf(y,t)$ the Jacobian matrix of $f$ with respect to the variable $x$ evaluated at $(y,t).$ We also assume that the solutions of (\ref{original_system}) are uniquely determined by $x(t_0)$ for all $t\geq t_0.$ For later reference, we introduce two useful relations from the calculus of vector functions. \begin{lem}\label{integral_eq} Let the function $f(x,t)$ from $\mathbb{R}^n\times[t_0,\infty)$ to $\mathbb{R}^n$ be continuously differentiable in $x$ and continuous in $t.$ Then \begin{itemize} \item[(I)] \[ \bigg[\int\limits_0^1 J_xf(\xi x,t)d\xi\bigg]x=f(x,t)-f(0,t); \] \end{itemize} or more generally, \begin{itemize} \item[(II)] \[ f(x,t)-f(x^*,t)=\bigg[\int\limits_0^1 J_xf(x^*+\xi (x-x^*),t)d\xi\bigg](x-x^*) \] \end{itemize} for all $x,x^*\in\mathbb{R}^n.$ \end{lem} The proof of Lemma~\ref{integral_eq}, for which we do not claim any originality, is postponed to the Appendix. \medskip \noindent The key role in our analysis is played by the logarithmic norm $\mu[A]$ of a matrix $A,$ which is in some sense analogous to a norm, albeit not actually a norm in the usual sense, but which in principle gives sharper estimates on the asymptotic behavior of the solutions than norms do, because $\mu[A]$ may also take on negative values. We define for any real $n\times n$ matrix $A$ the logarithmic norm by the relation \begin{equation}\label{lognorm_def} \mu[A]\triangleq\lim\limits_{\theta \to 0^+}\frac{\norm{I+\theta A}-1}{\theta}.
\end{equation} Specifically, for the Euclidean norm, by \cite{Afanasiev}, \cite{Coppel1}, \cite{Dekker_Verwer}, \cite{Desoer2}, \begin{equation}\label{lognorm_euclidean} \mu_I[A]=\frac12\lambda_{\max}\left({A+A^T}\right), \end{equation} where $\lambda_{\max}\left({A+A^T}\right)$ denotes the maximum eigenvalue of the matrix $A + A^T.$ For a general $\mu_P[\cdot]$ see, e.~g. \cite{Hu_Liu}, \[ \mu_P[A]=\frac12\lambda_{\max}\left({\hat A+\hat A^T}\right),\ \hat A=P_0AP_0^{-1}, \ P_0=\sqrt{P}. \] The logarithmic norm has the properties \cite{Desoer_Vidyasagar, Desoer2, Soderlind1, Soderlind2} that are useful in the stability analysis not only for linear systems as we will see later: \medskip For any given $n\times n$ real matrices $A, B$ \begin{itemize} \item[(P1)] the limit in (\ref{lognorm_def}) exists; \item[(P2)] $\mu[cA+(1-c)B]\leq c\mu[A]+(1-c)\mu[B]$ for all $c\in[0,1]$ (convexity); \item[(P3)] $\vert \mu[A]-\mu[B]\vert\leq\norm{A-B}$ ($\vert\cdot\vert$ on the left-hand side denotes the absolute value of real number); \end{itemize} \begin{itemize} \item[(P4)] let $\Phi(t)$ be a fundamental matrix solution for linear time-varying system $\dot x=A(t)x,$ where $A(\cdot):$ $[t_0,\infty)\to\mathbb{R}^{n\times n}$ is a continuous matrix function. Then \[ e^{-\int\limits_{\tau}^t \mu[-A(s)]ds}\leq\norm{\Phi(t)\Phi^{-1}(\tau)}\leq e^{\,\int\limits_{\tau}^t \mu[A(s)]ds} \] for all $t_0\leq\tau\leq t<\infty;$ \item[(P5)] \cite[p.~34]{Desoer_Vidyasagar} the solution of linear time-varying system $\dot x =A(t)x$ satisfies for all $t\geq t_0$ the inequalities \[ \vert{x(t_0)}\vert e^{-\int\limits_{t_0}^t \mu[-A(s)]ds}\leq\vert{x(t)}\vert\leq\vert{x(t_0)}\vert e^{\,\int\limits_{t_0}^t \mu[A(s)]ds}. \] By the assumption on $A(t)$ and Property~P3, the integrals above are well-defined because $\mu[A(\cdot)]$ is continuous. \end{itemize} \section{Main result} The main results of the paper are summarized in the following theorem. 
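Before turning to the main theorem, the definitions above can be checked numerically. The following Python sketch (NumPy/SciPy assumed; it is an illustration, not part of the paper's formal development) implements $\mu_I$ from (\ref{lognorm_euclidean}) and the weighted variant $\mu_P,$ and verifies Property~P3 on sample matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

def mu_euclidean(A):
    """mu_I[A] = (1/2) * lambda_max(A + A^T) for the Euclidean norm."""
    return 0.5 * np.linalg.eigvalsh(A + A.T).max()

def mu_weighted(A, P):
    """mu_P[A] = (1/2) * lambda_max(Ahat + Ahat^T), Ahat = P0 A P0^{-1}, P0 = sqrt(P)."""
    P0 = np.real(sqrtm(P))          # matrix square root of the SPD weight P
    Ahat = P0 @ A @ np.linalg.inv(P0)
    return 0.5 * np.linalg.eigvalsh(Ahat + Ahat.T).max()

A = np.array([[-3.0, 1.0], [0.0, -2.0]])
B = np.array([[-1.0, 0.5], [0.5, -1.0]])

# Unlike a norm, the logarithmic norm can be negative
assert mu_euclidean(A) < 0

# Property P3: |mu[A] - mu[B]| <= ||A - B|| (spectral norm in the Euclidean case)
assert abs(mu_euclidean(A) - mu_euclidean(B)) <= np.linalg.norm(A - B, 2) + 1e-12

# For P = I the weighted logarithmic norm reduces to the Euclidean one
assert abs(mu_weighted(A, np.eye(2)) - mu_euclidean(A)) < 1e-9
```

The matrices $A$ and $B$ here are arbitrary test data; any real square matrices could be used.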
\begin{thm}\label{theorem_main} Let us consider the system (\ref{original_system}), \[ \dot x=f(x,t)+\delta(t),\quad x\in\mathbb{R}^n, \quad t\geq t_0. \] Assume that for some vector norm on $\mathbb{R}^n$ \begin{itemize} \item[(A1)] there exists a continuous function $\alpha(t)$ and a real constant $\alpha_0>0$ such that \[ \mu[J_xf(x,t)]\leq-\alpha(t)\leq-\alpha_0<0\ \text{for all}\ (x,t)\in\mathbb{R}^n\times[t_0,\infty). \] \end{itemize} Then the difference between any two solutions $x(t)$ and $x^*(t)$ of system (\ref{original_system}) decreases exponentially (and uniformly), \begin{equation}\label{ineq_main} \vert x(t)-x^*(t)\vert\leq e^{-\alpha_0(t-t_0)}\vert x(t_0)-x^*(t_0)\vert,\quad t\geq t_0, \end{equation} that is, the system (\ref{original_system}) is GIS in the sense of Definition~\ref{GIS}. \medskip In addition, \begin{itemize} \item[(A2)] if the ratio $\frac{\vert f(0,t)+\delta(t)\vert}{\alpha(t)}\to 0$ as $t\to\infty,$ \end{itemize} then all solutions of (\ref{original_system}) converge to $0$ as $t\to\infty.$ \end{thm} \begin{pf} First we prove that the inequality (\ref{ineq_main}) holds. Let us denote by $z$ the difference $z(t)=x(t)-x^*(t),$ $t\geq t_0.$ Observe that $z$ is equal to the solution of the linear time-varying system \[ \dot z=A(t)z,\ z(t_0)=x(t_0)-x^*(t_0), \] where, by Lemma~\ref{integral_eq}, \[ A(t)\triangleq \int\limits_0^1 J_xf(x^*(t)+\xi (x(t)-x^*(t)),t)d\xi. \] Due to the convexity of the logarithmic norm, Jensen's inequality and Assumption~A1, we obtain \[ \mu[A(t)]=\mu\bigg[\int\limits_0^1 J_xf(x^*(t)+\xi (x(t)-x^*(t)),t)d\xi\bigg] \] \[ \leq\int\limits_0^1 \mu\bigg[J_xf(x^*(t)+\xi (x(t)-x^*(t)),t)\bigg]d\xi\leq-\int\limits_0^1 \alpha(t)d\xi=-\alpha(t)\leq-\alpha_0. \] Applying the consistency of the operator norm with the vector norm that induces it and Property~P4 of the logarithmic norm to $z(t)=\Phi(t)\Phi^{-1}(t_0)z(t_0),$ we get (\ref{ineq_main}).
\medskip Now we prove the second part of Theorem~\ref{theorem_main}, the eventual convergence of all solutions to $0$ as $t\to\infty.$ Observe that the solution $x(\cdot)$ of (\ref{original_system}) is equal to the solution of the linear time-varying system \[ \dot x = \tilde A(t)x+f(0,t)+\delta(t), \] where \[ \tilde A(t)=\int\limits_0^1 J_xf(\xi x(t),t)d\xi. \] By a similar argument as above, \[ \mu[\tilde A(t)]=\mu\bigg[\int\limits_0^1 J_xf(\xi x(t),t)d\xi\bigg] \] \begin{equation}\label{est_tildeA} \leq\int\limits_0^1 \mu\bigg[J_xf(\xi x(t),t)\bigg]d\xi\leq-\int\limits_0^1 \alpha(t)d\xi=-\alpha(t)\leq-\alpha_0. \end{equation} Using the variation of constants formula, we get \begin{equation*} x(t)=\tilde\Phi(t)\bigg[\tilde\Phi^{-1}(t_0)x(t_0)+\int\limits_{t_0}^t \tilde\Phi^{-1}(\tau)[f(0,\tau)+\delta(\tau)]d\tau\bigg], \end{equation*} that is, \begin{equation*} \vert{x(t)}\vert\leq \vert{x(t_0)}\vert e^{\,\int\limits_{t_0}^t\mu[\tilde A(s)]ds}+\int\limits_{t_0}^t e^{\,\int\limits_{\tau}^t\mu[\tilde A(s)]ds}\vert f(0,\tau)+\delta(\tau)\vert d\tau. \end{equation*} Obviously, by Assumption~A1, $\vert{x(t_0)}\vert e^{\,\int\limits_{t_0}^t\mu[\tilde A(s)]ds}\to 0$ (exponentially) as $t\to\infty,$ and so it remains to analyze the second term on the right-hand side of the above inequality.
We have, \[ \int\limits_{t_0}^t e^{\,\int\limits_{\tau}^t\mu[\tilde A(s)]ds}\vert f(0,\tau)+\delta(\tau)\vert d\tau= e^{\,\int\limits_{t_0}^t\mu[\tilde A(s)]ds}\int\limits_{t_0}^t e^{-\int\limits_{t_0}^{\tau}\mu[\tilde A(s)]ds}\vert f(0,\tau)+\delta(\tau)\vert d\tau \] \begin{equation}\label{estimate_for_limit} =\frac{\int\limits_{t_0}^t e^{-\int\limits_{t_0}^{\tau}\mu[\tilde A(s)]ds}\vert f(0,\tau)+\delta(\tau)\vert d\tau}{e^{-\int\limits_{t_0}^t\mu[\tilde A(s)]ds}}, \end{equation} and the L'Hospital rule yields \[ \lim\limits_{t\to\infty}\frac{\frac{d}{dt}\int\limits_{t_0}^t e^{-\int\limits_{t_0}^{\tau}\mu[\tilde A(s)]ds}\vert f(0,\tau)+\delta(\tau)\vert d\tau}{\frac{d}{dt}e^{-\int\limits_{t_0}^t\mu[\tilde A(s)]ds}} \] \[ =\lim\limits_{t\to\infty}\frac{e^{-\int\limits_{t_0}^t\mu[\tilde A(s)]ds}\vert f(0,t)+\delta(t)\vert }{e^{-\int\limits_{t_0}^t\mu[\tilde A(s)]ds}(-\mu[\tilde A(t)])}=\lim\limits_{t\to\infty}\frac{\vert f(0,t)+\delta(t)\vert }{-\mu[\tilde A(t)]}. \] Now, from (\ref{est_tildeA}), $-\mu[\tilde A(t)]\geq\alpha(t)$ which, together with Assumption~A2, gives the statement of the second part of Theorem~\ref{theorem_main}. \end{pf} \begin{rmk}\label{Demidovich1} A great Russian mathematician and one of the pioneers in the area of stability of dynamical systems, B.P.~Demidovich showed, see e.~g. 
\cite{Pavlov} or the original source in Russian \cite{Demidovich}, that if, for some positive definite matrix $P =P^T>0$, the matrix \begin{equation}\label{Demidovich} J(x,t)=\frac12\left[PJ_xF(x,t)+J^T_xF(x,t)P\right] \end{equation} is negative definite uniformly in $(x,t)\in\mathbb{R}^n\times\mathbb{R}$, then any two solutions $x(t)$ and $x^*(t)$ of the dynamical system $\dot x=F(x,t)$ satisfy \[ \vert x(t)-x^*(t)\vert_I\leq K e^{-\alpha(t-t_0)}\vert x(t_0)-x^*(t_0)\vert_I \] for all $t\geq t_0$ and some constants $K,\alpha>0$ independent of $x$ and $x^*.$ However, this condition is not well suited for reasoning about the convergence of all solutions to $0$ as $t\to\infty$ if \[ (F(0,t)=)\,f(0,t)+\delta(t)\neq0 \] because we cannot set $x^*(t)\equiv0.$ In the context of the logarithmic norm, the Demidovich condition (\ref{Demidovich}) is equivalent to the existence of a positive definite symmetric matrix $P$ such that $\mu_P[J_xF(x,t)]\leq-\alpha<0.$ In fact, as follows from \cite{Dekker_Verwer} and \cite{Hu_Liu}, \[ \mu_P[A]=\max\limits_{x\neq 0}\frac{(Ax,x)_P}{\vert x\vert^2_P}=\max\limits_{x\neq 0}\frac{x^T(PA+A^TP)x}{2\vert x\vert^2_P},\ A=J_xF(x,t). \] But, taking into account that not every norm comes from an inner product, our result strengthens Demidovich's results. \end{rmk} \begin{rmk}\label{Demidovich2} The condition in Assumption~A1 might be relaxed to \[ \forall (x,t)\in\mathbb{R}^n\times[t_0,\infty):\ \mu[J_xf(x,t)]\leq-\alpha(t), \quad \int\limits_{t_0}^{\infty}\alpha(\tau)d\tau=\infty \] to obtain only asymptotic stability of solutions (not uniform and not exponential, in general), \[ \vert x(t)-x^*(t)\vert\leq e^{-\int\limits_{t_0}^{t}\alpha(\tau)d\tau} \vert x(t_0)-x^*(t_0)\vert, \quad t\geq t_0.
\] Recall that the proof by L'Hospital's rule requires $\alpha(t)>0$ in some left neighborhood of $t=\infty.$ Notice also that, although under these circumstances the system may not satisfy the conditions for GIS from Definition~\ref{GIS}, all solutions still converge to one another as $t\to\infty.$ Thus we have extended the results presented in \cite{Lohmiller} to a more general type of convergence and also to a potentially unbounded perturbation $\delta(t)$ of the nominal system $\dot x=f(x,t).$ \end{rmk} \section{Simulation experiments} \begin{ex}\label{example1} As an academic example, let us consider the planar nonlinear system $\dot x =f(x,t)+\delta(t),$ $t\geq t_0$ with \begin{equation}\label{eq:example1} f(x,t)=\big(\phi(t)x_1 +\sin\left(x_1\right),\ bx_1+ [2+\phi(t)]x_2+\sin\left(x_2\right)\big)^T, \end{equation} where $\phi(t)$ is an arbitrary scalar continuous function on $[t_0,\infty)$ and $b$ is a real constant. By (\ref{lognorm_euclidean}) and with the help of MATLAB code, we have \[ \mu_I\big[J_x f(x,t)\big]=\phi(t)+\frac12\big[\cos\left(x_{1}\right)+\cos\left(x_{2}\right)+\sqrt{\vartheta }\big]+1, \] where \[ \vartheta = b^2+[{\cos\left(x_{1}\right)}-{\cos\left(x_{2}\right)}]^2-4\,\cos\left(x_{1}\right)+4\,\cos\left(x_{2}\right)+4 \] \[ =b^2+ [{\cos\left(x_{1}\right)}-{\cos\left(x_{2}\right)}-2]^2\geq 0. \] For example, if we choose $b=5$ and $\phi(t)=-6-t^3,$ Assumptions~A1 and A2 of Theorem~\ref{theorem_main} hold for $\alpha(t)=0.5+t^3,$ $t\geq 0(=t_0),$ and so the vanishing of all solutions as $t\to\infty$ is ensured for perturbations satisfying $\vert\delta(t)\vert_I=o(t^3)$ in Landau's little-o notation.
This means that the system is GIS in the sense of Definition~\ref{GIS} and, in addition, all solutions converge to $0$ as long as the perturbing term $\delta(t)$ (more precisely, its $\vert\cdot\vert_I$-norm) is of order less than $t^3$ as $t\to\infty,$ demonstrating the global robust stability of the equilibrium point $x=0$ of the nominal system ($\delta=0$) even for unbounded perturbations $\delta.$ The results of simulation experiments are shown in Fig.~\ref{solution_example1} and Fig.~\ref{solution_example1b}: for the first experiment we selected one representative of the class of admissible perturbations (Fig.~\ref{solution_example1}), and for the second the borderline case (Fig.~\ref{solution_example1b}). The dynamics of the system in Fig.~\ref{solution_example1b} indicates that Assumption~A2, ensuring the convergence to zero of all solutions as $t\to\infty,$ cannot be weakened too much. In this particular example the limiting value $(0,\, 4)^T$ can be calculated explicitly as follows. Analyzing the first equation separately by using Theorem~\ref{theorem_main} for $\delta=(\delta_1,\, \delta_2)^T=(5\sin^2\left(t\right),\, 4t^3)^T$ and $\phi(t)=-6-t^3,$ we obtain that $x_1(t)\to0$ as $t\to\infty.$ In the second equation, transforming the second component of the state vector by $x_2=\tilde x_2+4$ and identifying $(bx_1(t)-16)$ as an inhomogeneous term $\tilde\delta_2(t),$ we get the scalar differential equation \[ \dot{\tilde x}_2=\tilde f_2(\tilde x_2,t)+\tilde\delta_2(t), \] where \[ \tilde f_2(\tilde x_2,t)\triangleq f_2(x_1,\tilde x_2+4,t)-bx_1+\delta_2(t)+16=-(4+t^3)\tilde x_2+\sin(\tilde x_2+4). \] Then Theorem~\ref{theorem_main} implies that $\tilde x_2(t)\to0$ as $t\to\infty$ because \[ \frac{\vert\tilde f_2(0,t)+\tilde\delta_2(t)\vert}{\tilde\alpha(t)}=\frac{\vert \sin(4)+bx_1(t)-16\vert}{3+t^3} \to 0\ \mathrm{as}\ t\to\infty, \] which is what we wanted to prove.
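For completeness: the closed form of $\mu_I\big[J_x f(x,t)\big]$ used above, obtained with MATLAB in the original derivation, can be cross-checked numerically. A sketch in Python, relying only on the fact that the Euclidean logarithmic norm of a matrix $A$ equals the largest eigenvalue of its symmetric part $(A+A^T)/2$:

```python
import numpy as np

def mu_I(A):
    """Euclidean logarithmic norm: largest eigenvalue of the symmetric part of A."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

def jacobian(x1, x2, phi, b):
    """Jacobian J_x f of the right-hand side from Example 1."""
    return np.array([[phi + np.cos(x1), 0.0],
                     [b, 2.0 + phi + np.cos(x2)]])

def mu_closed_form(x1, x2, phi, b):
    """Closed-form expression for mu_I[J_x f] quoted in the text."""
    theta = b**2 + (np.cos(x1) - np.cos(x2) - 2.0)**2
    return phi + 0.5 * (np.cos(x1) + np.cos(x2) + np.sqrt(theta)) + 1.0

# Compare the two expressions at randomly sampled points
rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2, phi, b = rng.uniform(-5.0, 5.0, size=4)
    assert np.isclose(mu_I(jacobian(x1, x2, phi, b)),
                      mu_closed_form(x1, x2, phi, b))
```

The agreement is exact because, for a $2\times2$ matrix, the largest eigenvalue of the symmetric part is half the trace plus $\sqrt{\big(\tfrac{a_{11}-a_{22}}{2}\big)^2+\big(\tfrac{b}{2}\big)^2}$, which reproduces the $\sqrt{\vartheta}$ term above.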
\end{ex} \begin{figure}[H] \captionsetup{singlelinecheck=off} \centerline{ \hbox{ \psfig{file=example1_x_1,width=5.0cm, clip=} \hspace{1.cm} \psfig{file=example1_x_2,width=5.0cm,clip=} } } \caption{The numerical solution $x(t)=\left(x_1(t), x_2(t)\right)^T$ of the system $\dot x =f(x,t)+\delta(t),$ where $f(x,t)$ is given by (\ref{eq:example1}) with $b=5,$ $\phi(t)=-6-t^3,$ the admissible (unbounded) perturbation $\delta(t)=\big(5\sin^2\left(t\right),\, t\big)^T$ and the initial state $x(0)=(-2, \ 5)^T.$ } \label{solution_example1} \end{figure} \begin{figure}[H] \captionsetup{singlelinecheck=off} \centerline{ \hbox{ \psfig{file=example1_x_1b,width=5.0cm, clip=} \hspace{1.cm} \psfig{file=example1_x_2b,width=5.0cm,clip=} } } \caption{The numerical solution $x(t)=\left(x_1(t), x_2(t)\right)^T$ of the system $\dot x =f(x,t)+\delta(t),$ where $f(x,t)$ is given by (\ref{eq:example1}) with $b=5,$ $\phi(t)=-6-t^3,$ the borderline perturbation $\delta(t)=\big(5\sin^2\left(t\right),\, 4t^3\big)^T$ and the initial state $x(0)=(-2, \ 5)^T.$ } \label{solution_example1b} \end{figure} \section*{Conclusions} In this paper, a new result for assessing the global incremental stability of nonlinear systems $\dot x=f(x,t)+\delta(t)$ is derived. Roughly speaking, we have established a sufficient condition for the convergence of any two solutions of a system to each other, and another condition for the convergence of all solutions to the origin $x=0,$ which may or may not be an equilibrium position of the nominal system $\dot x=f(x,t).$ The fundamental advantage of the adopted approach, based on the logarithmic norm, is that to estimate the norm of the transition matrix of the auxiliary linear time-varying system associated with the original nonlinear one, we do not need to know the fundamental matrix solution: all necessary estimates rely purely on the entries of the linear system's matrix.
\section*{Appendix.} \noindent For completeness, we provide the proof of Lemma~\ref{integral_eq}. \begin{proofoflemma2} We prove Part~I only; the second statement is proved analogously. Let $f_i,$ $i=1,\dots,n$ denote the components of $f(x,t)$ and define $g_i: [0,1]\to\mathbb{R}$ by $g_i(\xi)=f_i(\xi x,t).$ Then we have \[ f_i(x,t)-f_i(0,t)=g_i(1)-g_i(0)=\int\limits_0^1 g'_i(\xi)d\xi \] \[ =\int\limits_0^1\bigg(\sum\limits_{j=1}^n\frac{\partial f_i}{\partial x_j}(\xi x,t)x_j\bigg)d\xi=\sum\limits_{j=1}^n\bigg(\int\limits_0^1 \frac{\partial f_i}{\partial x_j}(\xi x,t) d\xi\bigg)x_j, \ i=1,\dots,n. \] Now the statement of the lemma follows immediately. \end{proofoflemma2}
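The integral representation established in the lemma is also easy to verify numerically. A sketch in Python (the function $f$ is borrowed from Example~\ref{example1} with illustrative values $\phi=-2,$ $b=5$ at a frozen time instant; SciPy quadrature computes the $\xi$-integral entry by entry):

```python
import numpy as np
from scipy.integrate import quad

# f from Example 1 with phi = -2, b = 5, at a frozen time instant
def f(x):
    x1, x2 = x
    return np.array([-2.0 * x1 + np.sin(x1),
                     5.0 * x1 + np.sin(x2)])

def jac(x):
    """Jacobian J_x f at the point x."""
    x1, x2 = x
    return np.array([[-2.0 + np.cos(x1), 0.0],
                     [5.0, np.cos(x2)]])

x = np.array([1.3, -0.7])
# J(x) = int_0^1 J_x f(xi * x) d(xi), computed entrywise by quadrature
J = np.array([[quad(lambda xi: jac(xi * x)[i, j], 0.0, 1.0)[0]
               for j in range(2)] for i in range(2)])

# Lemma, Part I: f(x) - f(0) = J(x) x
assert np.allclose(f(x) - f(np.zeros(2)), J @ x)
```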
\section{Introduction}\label{sec:introduction} A planar photonic crystal waveguide (PCW) is a rich system finding applications in diverse areas of optical physics. Among those are slow light \cite{Baba2008}, topological photonics \cite{Wen2022}, chiral photonics \cite{Lodahl2017}, cavity quantum electrodynamics \cite{Vuckovic2003} and many others. An attractive feature of the planar photonic crystal is its flexibility for tailoring the dispersion properties of light. In particular, the dispersion curve of a PCW mode inside the crystal bandgap can be engineered to reach extremely low values of the group velocity at a target wavelength. This feature allows one to use a PCW as a platform for travelling-wave cavity quantum electrodynamics. A single emitter generating photons at a wavelength $\lambda$ matched to the low group velocity range of the PCW mode dispersion curve is strongly coupled to the PCW mode, and thus its emission exhibits a significant Purcell enhancement. This effect enabled the development of an on-demand single photon source compatible with planar photonic integrated circuits \cite{Lodahl2020}. Furthermore, the ability to strongly couple an emitter to a cavity mode while still being able to efficiently excite the emitter and read out photons from the cavity boosted the research in nonlinear light-matter interaction at the single-photon level \cite{Lodahl2015}. Semiconductor quantum dots (QDs) are the most common type of emitters which can be coupled to a PCW to create a single photon source. Despite being well studied, QDs with predefined parameters are still notoriously hard to fabricate deterministically. The most widespread Stranski--Krastanov growth process produces QDs with randomly distributed spectral characteristics. The emission wavelength of QDs typically falls within a range of a few nanometers around the designed center wavelength.
The first derivative $d\omega/dk$ of the PCW dispersion curve gets close to zero only in a very narrow wavelength range $\lambda_{0} \pm \Delta \lambda/2$, and efficient Purcell enhancement is not guaranteed for most of the fabricated QDs whose emission wavelengths miss the $\Delta\lambda$ region. Furthermore, the fabrication process introduces defects into the PCW structure which affect the dispersion properties of the PCW mode. The workaround for this issue is straightforward -- an array of structures is fabricated and only those which meet particular experimental requirements are selected. Although this method may be satisfactory for research purposes, the lack of reproducibility in single-photon source fabrication is one of the major bottlenecks in contemporary quantum optical experiments \cite{Pan2021}. At the same time, current trends in optical quantum computing demand the development of hybrid integration methods to place single emitters onto a photonic platform of choice \cite{Zwiller2020}. In this paper we address design approaches which mitigate the effect of fabrication imperfections on the Purcell factor at the source wavelength. We start with developing a heuristic PCW design approach which significantly simplifies the selection of a PCW geometric configuration. The theory behind this approach is based on simple optical phenomena -- interference and diffraction of light scattered inside the PCW membrane and leaking out of the membrane. The derived equations provide clear guidelines on how to choose PCW geometric parameters in order to set the maximal Purcell enhancement at the required wavelength and completely eliminate the necessity to evaluate multiple time-consuming 3D FDTD simulations. After the description of the heuristic PCW theory we address the problem of PCW robustness to fabrication imperfections. The question of the robustness of a PCW dispersion curve against fabrication defects has been previously highlighted in a series of works.
These include studies of the influence of fabrication defects on the quality factors of photonic crystal microcavities \cite{Painter2004, Li2016} and automated design methods to optimize the photonic crystal microcavity structure \cite{Savona2014, Johnson2014}. We focus on the development of a design approach which increases the robustness of the coupling between an emitter and a photonic crystal waveguide mode. We propose two design approaches increasing the robustness of the coupling to fabrication errors and test them using numerical simulations. \section{\label{sec:photonic_crystal}Photonic crystal} \begin{figure*}[!htp] \centering \includegraphics[width = \textwidth]{Figure-1.pdf} \caption{a) An overview of a photonic crystal waveguide structure. b) Typical dispersion curves of PCW eigenmodes inside the photonic bandgap. The example illustrates the existence of 3 modes (red dashed lines). Blue dashed lines indicate the PC bandgap prior to hole removal. Green areas indicate PCW bulk modes and the yellow area corresponds to light waves with a non-zero wavevector component orthogonal to the PCW membrane surface. Images c), d) and e) illustrate the geometric parameters used throughout the paper.} \label{fig:pcw_illustration} \end{figure*} A typical two-dimensional photonic crystal is a periodic arrangement of circular holes etched in a thin film of a material with a high refractive index. A deleted row of holes forms a photonic crystal waveguide (see illustration in Fig.~\ref{fig:pcw_illustration}(a)). A characteristic feature of a PCW is the existence of a frequency range where the group velocity of light decreases significantly. This fact makes a PCW structure an extremely appealing system for mediating the interaction between light and an isolated dipole. A PCW effectively serves as a microresonator with a small mode volume and a high quality factor.
These systems were demonstrated to suit the purpose of integrating $A_{3}B_{5}$ quantum dot single photon sources in a planar photonic structure \cite{Lodahl2020}. A quantum dot can be considered as a dipole oriented perpendicular to the waveguide axis in the PC plane. A PCW microresonator forms an open cavity which can be smoothly interfaced with other integrated photonic waveguides. In this paper we study methods to increase the robustness of PCW features to fabrication defects. We focus our attention on a PCW created by deleting a row of air holes from a 2D triangular array. The host material is chosen to be gallium arsenide (GaAs) because the target application is a planar semiconductor quantum dot single photon source. We start with the description of PCW characteristics and the development of its heuristic model. Manga Rao and Hughes \cite{Manga_Rao} derived an expression for the Purcell factor $F_{p}$ in terms of PCW parameters: \begin{equation}\label{eq:purcell_factor_pcw} F_{p} = \frac{3\pi c^{3}a}{V_{eff}\omega_{d}^{2} \epsilon^{3/2}v_{g}}, \end{equation} where $a$ is the distance between air holes (for a triangular lattice $a=a_x$; since we focus on this lattice type, from now on we write $a$ instead of $a_x$, although for other lattice angles $a_x$ is the correct quantity), $V_{eff}$ is the effective mode volume and $v_{g}$ is the group velocity at the resonant frequency of the dipole $\omega_{d}$. The formula indicates that the largest $F_{p}$ is achieved when the wavepacket group velocity approaches zero. Thus the design of a PCW efficiently coupled with a single emitter resonant at $\omega_{d}$ is equivalent to engineering a PCW dispersion law to meet the requirement $d\omega/dk(\omega_{d})=0$. Numerical methods for the calculation of the dispersion structure of a PCW are well known and straightforward \cite{Lodahl} and can be easily applied to a PCW with a defined geometry.
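To get a feel for the scalings in Eq.~(\ref{eq:purcell_factor_pcw}), the formula can be evaluated directly. A short Python sketch; the effective mode volume and the slow-down factors below are illustrative assumptions, not fitted values from this work:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def purcell_pcw(a, v_eff, v_g, eps, lam):
    """Purcell factor F_p = 3*pi*c^3*a / (V_eff * w_d^2 * eps^(3/2) * v_g)."""
    w_d = 2.0 * np.pi * C / lam          # dipole angular frequency
    return 3.0 * np.pi * C**3 * a / (v_eff * w_d**2 * eps**1.5 * v_g)

lam = 925e-9                    # target wavelength, m
a = 238e-9                      # lattice period, m
eps = 3.46**2                   # GaAs permittivity near 925 nm
v_eff = 0.3 * (lam / 3.46)**3   # assumed mode volume, a fraction of (lam/n)^3
for slowdown in (10, 100, 1000):
    v_g = C / slowdown          # group velocity in the slow-light region
    print(f"c/v_g = {slowdown:5d}  ->  F_p = {purcell_pcw(a, v_eff, v_g, eps, lam):.1f}")
```

$F_{p}$ grows linearly with the slow-down factor $c/v_{g}$, which is why the design goal is to push $d\omega/dk$ toward zero at the emitter wavelength.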
However, there exists no recipe for estimating the geometrical parameters of a PCW exhibiting a high Purcell factor at a wavelength of interest. We devise heuristic expressions linking the target wavelength and the parameters of a hexagonal PCW which rest upon simple optical effects taking place inside a photonic crystal. Based on these results we introduce methods to increase the robustness of a PCW structure to fabrication defects. \section{\label{sec:purcell_peak_fit}A PCW Purcell factor heuristics} Figure~\ref{fig:pcw_illustration}(b) illustrates a typical dispersion structure of a PCW. The geometrical parameters for this example are as follows: PC hole pattern angle $\theta_{gr}=60^{\circ}$, period $a=0.238$~$\mu$m, hole radius $r=0.08$~$\mu$m and membrane thickness $h=0.16$~$\mu$m. These values were chosen to put the Purcell factor $F_{PCW}$ peak at $925$~nm. A natural question arises whether this configuration is unique. It turns out that the answer is negative. We performed an extensive numerical analysis of the Purcell enhancement occurring in different PCW configurations; the results are presented in Fig.~\ref{fig:purcell_peak}. The 3D FDTD simulation was carried out in the Lumerical FDTD package, and the details of the simulation are specified in Appendix~\ref{app:simulation}. We observed a continuous set of configurations of a triangular PCW with the same lattice angle corresponding to a peak value of $F_{PCW}$ at a target wavelength. The red line in Fig.~\ref{fig:purcell_peak} illustrates the numerically computed set of $(a,r)$ configurations corresponding to the most efficient coupling of a PCW mode to dipole radiation at $925$~nm. The curve in $(a,r)$ space closely follows the function $a=c_{1}+c_{2}r$, where the coefficients $c_{1}$ and $c_{2}$ are weakly dependent on $a$ and $r$. The $a(r)$ dependence is well approximated by a linear function in the region where $F_{PCW}$ reaches its highest levels.
The yellow curve represents the values of $a$ and $r$ corresponding to the $F_{PCW}$ peak which are provided by the proposed theoretical description. In the following subsections we provide a heuristic theoretical description of the origins of such a dependence and of the values of the $c_{1}$ and $c_{2}$ coefficients. \subsection{The slope coefficient $c_{2}$} The Purcell factor $F_{PCW}$ defines the probability \begin{equation}\label{eq:pcw_coupling_probability} \beta =F_{PCW}/(1+F_{PCW}) \end{equation} of emitting a photon into a PCW mode. The existence of a PCW mode is a purely interferometric effect, hence the probability $\beta$ should be related to the geometry of the photonic crystal. We aim to derive a relation between the geometric parameters corresponding to a PCW configuration that reaches the maximal Purcell factor at the required wavelength. We roughly split the emitter radiation into three categories: light exiting the PCW plane, light propagating inside the PCW structure, and light coupled to the PCW mode. For the light exiting the PCW plane we define the notion of a vertical Fabry--Perot resonator and the corresponding Purcell factor $F_{FP}$, which is used to evaluate the portion of light leaking from the PCW membrane. Then the fraction of light emitted into the PCW itself out of the total amount of radiation which remains inside the crystal can be estimated using the effective angle $\theta_{wg}$ (see Fig.~\ref{fig:pcw_illustration}(d)). Under such assumptions we can derive the following equation: \begin{equation}\label{eq:detailed_main_relation} \left(1-\frac{F_{FP}}{1+F_{FP}}\right)\frac{\int_{\pi/2-\theta_{wg}}^{\pi/2+\theta_{wg}}\sin^3\theta{d}\theta}{\int_0^\pi\sin^3\theta{d}\theta}=\frac{F_{PCW}(a,r)}{1+F_{PCW}(a,r)}, \end{equation} where the first term on the left-hand side of the equation denotes the probability of the photon staying inside the crystal and the second term denotes the fraction of the photons emitted into the waveguide mode.
Here we assumed that if the total internal reflection angle is relatively small (approximately $16^{\circ}$ in the case of a GaAs-to-air transition), then the majority of the photons which leak out of the crystal can be attributed to emission into the Fabry--Perot resonator mode. The right-hand side accounts for the probability of emitting the photon exactly into the waveguide mode using the $F_{PCW}$ value. Here there are three quantities we need to calculate: $F_{FP}$, $F_{PCW}$ and $\theta_{wg}$. We calculate the values $F_{FP}$ and $F_{PCW}$ and use Eq.~\ref{eq:detailed_main_relation} to determine the value of $\theta_{wg}$. The angle $\theta_{wg}$ is different for each individual crystal configuration. The expression connecting the geometrical parameters of a crystal to the $\theta_{wg}$ value is defined by the crystal configuration. For the $60^{\circ}$ triangular hole pattern the $\theta_{wg}$ definition is illustrated in Fig.~\ref{fig:pcw_illustration}(d). For a triangular lattice, the angle $\theta_{wg}$ is implicitly related to the period $a$ and the hole radius $r$ (see Appendix A): \begin{equation}\label{eq:a_to_r_ratio} \frac{a}{r}=\frac{\cos(\pi/4-\theta_{wg}/2)-\tan\theta_{wg}\sin(\pi/4-\theta_{wg}/2)}{\tan\theta_{gr}/2-\tan\theta_{wg}/2}, \end{equation} where $\theta_{gr}$ is the lattice angle. Once we have estimated the $F_{FP}$, $F_{PCW}$, and $\theta_{wg}$ values, we can substitute them into Eq.~\ref{eq:a_to_r_ratio} and evaluate the $c_{2} = a/r$ coefficient. \begin{figure} \centering \includegraphics[scale=0.4]{Figure-2-v2.pdf} \caption{The set of configurations in $(a,r)$ parameter space corresponding to a peak $F_{PCW}$ at 925 nm. The 2D heatmap represents the $F_{PCW}$ values experienced by the emitter in a PCW with given $a$ and $r$. The $a$ and $r$ parameter values are unevenly distributed for time-saving purposes. We added extra points in the maximal $F_{PCW}$ region. The white tiles represent the points which were not computed.
The red curve connects the $F_{PCW}$ maximal values calculated numerically using 3D FDTD simulation. The yellow curve contains the values estimated by the proposed heuristic theoretical approach.} \label{fig:purcell_peak} \end{figure} To calculate $F_{FP}$ we note that light emitted by a dipole at a frequency falling into the photonic crystal bandgap propagates through a low-quality Fabry--Perot resonator formed between the top and the bottom surfaces of the PCW membrane. The resonance frequencies $\omega_{m}=(\pi c/nh)m$ (so that $\omega_{1}=\pi c/nh$ for $m=1$) and the linewidth $d\omega=c(1-R)/(n\sqrt{R}h)$ of this Fabry--Perot resonator are expressed through the membrane thickness $h$ and the refractive index of the material $n$. The Fresnel formula yields the reflection coefficient $R=(n-1)^2/(n+1)^2$ for light incident normally on the membrane surfaces. The fraction of light emitted by the dipole into the vertical Fabry--Perot mode can be estimated using the Purcell factor \begin{equation}\label{eq:purcell_factor_vertical_fp} F_{FP} = \frac{3Q (\lambda/n)^3}{4\pi^{2}V_{eff}} \cdot \frac{d\omega^{2}}{4(\omega - \omega_{1})^{2}+d\omega^{2}}, \end{equation} where $Q=\omega_{1}/d\omega$ is the quality factor. The effective mode volume is given by $V_{eff}=\frac{\int_{V}{\epsilon{E^2}}dV}{\underset{V}{\max}(\epsilon{E^2})}$ \cite{Hughes2004}, where the integration is carried out over a single unit cell for periodic structures (for example PCWs) and over all space for non-periodic structures (the period being infinite). For such a configuration the effective mode volume can be estimated as (see Appendix B) $V_{eff}=\frac{2}{3}h^3$. The $F_{FP}$ of the vertical mode resonator then equals $F_{FP}\approx 0.64$ if we set $\lambda=925$~nm and $n_{GaAs}(\lambda = 925\,\mathrm{nm})=3.46$. Next we need to express $F_{PCW}(a,r)$.
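Before doing so, the $F_{FP}$ estimate above can be reproduced step by step. A numerical sketch (it lands near the quoted $F_{FP}\approx0.64$; the exact figure depends slightly on the constants used):

```python
import numpy as np

C = 299792458.0                        # speed of light, m/s
n, h, lam = 3.46, 0.16e-6, 925e-9      # GaAs index, membrane thickness, wavelength

R = (n - 1)**2 / (n + 1)**2            # normal-incidence Fresnel reflectivity
w1 = np.pi * C / (n * h)               # m = 1 Fabry-Perot resonance frequency
dw = C * (1.0 - R) / (n * np.sqrt(R) * h)  # resonance linewidth
Q = w1 / dw                            # quality factor
v_eff = 2.0 / 3.0 * h**3               # effective mode volume (Appendix B estimate)

w = 2.0 * np.pi * C / lam              # emitter frequency
lorentz = dw**2 / (4.0 * (w - w1)**2 + dw**2)
f_fp = 3.0 * Q * (lam / n)**3 / (4.0 * np.pi**2 * v_eff) * lorentz
print(f"F_FP = {f_fp:.2f}")            # roughly 0.6-0.7
```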
The effective volume of the PCW mode $V_{eff}$ from Eq.~\ref{eq:purcell_factor_pcw} is the last term which is not yet related to $a$ and $r$. We consider a unit cell as a rectangle with 4 quarter circles (see Fig.~\ref{fig:pcw_illustration}(d)). The effective mode volume for the light modes with the free-space dispersion relation is defined by \begin{equation}\label{eq:free-space_mode_volume} V_{eff}=\frac{\int_V I dV}{\underset{V}{\max I}}=\frac{\langle I\rangle V}{I_{0}}, \end{equation} where $I$ denotes the radiation intensity inside the unit cell. We consider a PCW dispersion curve with $d\omega/dk(\omega_{d})=0$ and focus on the system behaviour at a frequency $\omega$ slightly smaller than $\omega_{d}$. We assume that the mode volume of the light states at frequencies $\omega < \omega_{d}$ is roughly the same as at the resonance frequency $\omega_{d}$, because a slight change of the light frequency should not drastically affect the mode volume. We also assume that the emitter is preferentially coupled to the primary mode (mode 1 in Fig.~\ref{fig:pcw_illustration}b). Light at frequency $\omega$ can only populate the states satisfying the free space dispersion relation $\omega = ck/n(\omega)$. The dipole intensity is proportional to $\frac{1}{r^2}$, and the intensity at the 'entrance' of the unit cell is $I_0$. We can thus write $I(x)=\frac{const}{(x+1)^2}$, where $x$ is the dimensionless coordinate along the axis of the waveguide, the constant is numerically equal to $I_0$, and $x=0$ at the 'entrance' of the unit cell. Using $I(x)$ we can easily calculate the average intensity in the unit cell \begin{equation}\label{eq:average_intens_unit_cell} \langle I(x)\rangle =\frac{1}{a}\int_{0}^{a}\frac{const}{(x+1)^2}dx=\frac{const}{(\langle x \rangle +1)^{2}}, \end{equation} where $a$ is the lattice period along the propagation axis and the last equality defines the effective coordinate $\langle x \rangle$.
This result implies that we need to consider $\langle I \rangle$ as the intensity at the coordinate $\langle x \rangle$, taking into account diffraction effects at the unit cell 'entrance'. The estimated number of Fresnel zones (we assume that the unit cell is located far from the dipole, so the light propagates almost parallel to the waveguide axis) is \begin{equation}\label{eq:fresnel_zones} m=\frac{(a\tan\theta_{gr}-2r)^2}{4\lambda\langle x \rangle}\approx1.2 , \end{equation} which means that the Fresnel approximation is applicable when accounting for the diffraction effects. Since the unit cell entrance cross-section is rectangular, we calculate the parameters of the Cornu spiral as \begin{equation}\label{eq:cornu_parameters} \begin{array}{c} u=\sqrt{2m},\\ c(u)=\int_{0}^u \cos(\frac{\pi}{2}\tau^2)d\tau,\\ s(u)=\int_{0}^u \sin(\frac{\pi}{2}\tau^2)d\tau. \end{array} \end{equation} The coefficients $c(u)$ and $s(u)$ allow us to evaluate the required ratio \begin{equation}\label{eq:intensity_ratio} \frac{\langle I\rangle}{I_0}=\frac{c(u)^2+s(u)^2}{c(\infty)^2+s(\infty)^2}. \end{equation} Then the effective mode volume equals $V_{eff}=\frac{\langle I\rangle}{I_0}V$, where $V$ is the geometrical volume of the unit cell. Now we have all the ingredients to specify the explicit relation $a(r)$. The $F_{PCW}$ and $F_{FP}$ are both expressed as functions of $a$ and $r$; after substituting them into Eq.~\ref{eq:detailed_main_relation} we get $\theta_{wg}(a,r)$, and Eq.~\ref{eq:a_to_r_ratio} gives $\frac{a}{r}=c_{2}$. The explicit formulas and a numerical calculation algorithm are provided in Appendix B. \subsection{The constant coefficient $c_1$} When the hole radius $r$ is close to zero, the PC becomes analogous to a Bragg grating (see Fig.~\ref{fig:pcw_illustration}). When $F_{PCW}$ reaches its peak, the crystal blocks all the photons which are emitted outside the waveguide direction.
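Returning for a moment to the slope coefficient: the Cornu-spiral step in Eqs.~(\ref{eq:cornu_parameters}) and (\ref{eq:intensity_ratio}) maps directly onto SciPy's Fresnel integrals, which use the same $\frac{\pi}{2}\tau^2$ convention. A sketch:

```python
import numpy as np
from scipy.special import fresnel  # fresnel(u) returns the pair (S(u), C(u))

def intensity_ratio(m):
    """<I>/I0 from the Cornu-spiral construction for m Fresnel zones."""
    u = np.sqrt(2.0 * m)
    s_u, c_u = fresnel(u)
    # c(inf) = s(inf) = 1/2 are the limiting values of the Fresnel integrals
    return (c_u**2 + s_u**2) / (0.5**2 + 0.5**2)

print(intensity_ratio(1.2))  # the m ~ 1.2 case estimated in the text
```

For large $m$ the ratio tends to $1$, recovering the geometric-optics limit $V_{eff}\to V$.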
The Bragg equations define destructive interference criteria for the photons that are not emitted into the PCW mode. We only take into account two types of reflective surfaces: the parallel and the sloped, as shown in Fig.~\ref{fig:pcw_illustration}(e). These two sets of surfaces share similar properties: the distance between the circles along these surfaces is minimal over all other possible surfaces and equals the crystal parameter $a$. We do not take into account the interference effects happening on every other possible set of surfaces because of the increasing distance between the circles and thus the necessity to take diffraction effects into consideration. For the sloped surfaces the Bragg condition is \begin{equation}\label{eq:sloped_bragg_condition} \begin{array}{c} 2dn\cos(\alpha)=\lambda,\\ \alpha=\theta_{gr}-\alpha_{\mathrm{avg}},\\ d=a_{slp}\sin(\theta_{gr}), \end{array} \end{equation} where $a_{slp}$ is the lattice period found using the Bragg condition for the sloped surfaces, and $\alpha_{\mathrm{avg}}$ is the average angle of emission, which can be determined using the following formula: \begin{equation}\label{eq:average_angle} \alpha_{\mathrm{avg}}=\frac{2}{\pi}\int_0^{\pi/2}\sin^3\theta\cos\theta{d}\theta. \end{equation} To obtain this formula we take into account that we need to calculate the angle corresponding to the average direction of power emission in any quadrant. The term $\sin^2\theta$ describes the dipole radiation pattern. Another $\sin\theta$ term arises from the transition to a spherical coordinate system. Lastly, the $\cos\theta$ term reflects that we are interested in the projection onto the $y$-axis, because we look for the destructive interference condition for light waves propagating perpendicular to the waveguide axis.
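The average emission angle and the resulting sloped-surface period can be evaluated numerically. A sketch with the same $\lambda=925$~nm and $n=3.46$ as above (the integral in Eq.~(\ref{eq:average_angle}) has the closed form $1/4$, so $\alpha_{\mathrm{avg}}=1/(2\pi)\approx9.1^{\circ}$):

```python
import numpy as np
from scipy.integrate import quad

n, lam = 3.46, 925e-9
theta_gr = np.deg2rad(60.0)      # lattice angle of the triangular hole pattern

# Average emission angle, weighted by the dipole radiation pattern
alpha_avg = 2.0 / np.pi * quad(lambda t: np.sin(t)**3 * np.cos(t), 0.0, np.pi / 2.0)[0]
assert np.isclose(alpha_avg, 1.0 / (2.0 * np.pi))  # closed form of the integral is 1/4

# Bragg condition for the sloped surfaces, solved for the period a_slp
alpha = theta_gr - alpha_avg
a_slp = lam / (2.0 * n * np.sin(theta_gr) * np.cos(alpha))
print(f"alpha_avg = {np.rad2deg(alpha_avg):.1f} deg, a_slp = {a_slp * 1e9:.0f} nm")
```

The resulting period comes out of the same order as the $a=238$~nm used in the examples above, as the heuristic suggests.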
The Bragg condition for the parallel surfaces is \begin{equation}\label{eq:parallel_bragg} 2\frac{a_{par}\cdot\tan{\theta_{gr}}}{2}n\cos\alpha_{\mathrm{avg}}=\lambda, \end{equation} where $a_{par}$ is the lattice period found using the Bragg condition for the parallel surfaces. Finally, we need to estimate the fractions of radiation which preferentially interfere in the sloped and the parallel Bragg gratings. These fractions can be obtained using the dipole radiation pattern: \begin{equation}\label{eq:sloped_and_parallel_fractions} \begin{array}{c} \eta_{\mathrm{slp}}=\frac{\int_{\pi/2-\theta_{gr}}^{\pi/2}\sin^3\theta{d}\theta}{\int_{0}^{\pi/2}\sin^3\theta{d}\theta},\\ \eta_{\mathrm{par}}=\frac{\int_{0}^{\pi/2-\theta_{gr}}\sin^3\theta{d}\theta}{\int_{0}^{\pi/2}\sin^3\theta{d}\theta}.\\ \end{array} \end{equation} Taking these fractions into account, we can now express the total constant coefficient: \begin{equation}\label{eq:constant_coefficient_value} c_{1}=\eta_{par}a_{par}+\eta_{slp}a_{slp}. \end{equation} \section{Improvement of the PCW robustness to fabrication imperfections using composite PCW structures}\label{sec:robustness} The first method is based on Eq.~\ref{eq:a_to_r_ratio}. We assume that $\theta_{wg}$ remains the same for all quarters of the PCW unit cell. However, it turns out that this angle need not be realized by the same lattice period $a$ and radius $r$ in each quarter. We state that if two PCWs with different lattice periods $a_{1}, a_{2}$ and hole radii $r_{1}, r_{2}$, respectively, reveal the peak of $F_{PCW}$ at the same wavelength, then the compound crystal will also reveal the peak at the same wavelength (see Fig.~\ref{fig:compound_crystal}). \begin{figure}[t!]
\centering \includegraphics[width = \linewidth]{Figure-3.pdf} \caption{Compound PCW composed of two half-PCWs designed to exhibit the peak value of $F_{PCW}$ at $\lambda_{1}$ and $\lambda_{2}$, which satisfy the condition $\lambda_{1}+\lambda_{2}=2\lambda_{0}$. } \label{fig:compound_crystal} \end{figure} We suggest that the PCW in the center of Figure~\ref{fig:compound_crystal} is more robust to fabrication imperfections than the ones on the left and on the right. If we introduce slight random deviations to the values of the radius $r$, the $F_{PCW}$ spectral curve shifts away from the target wavelength and the coupling of the dipole radiation to the PCW mode decreases substantially. We do not take into account random deviations of the PCW period $a$ because its value is several times larger than the deviation due to manufacturing imprecision introduced in state-of-the-art fabrication lines \cite{Fan2010}. It is also worth noting that a systematic bias of $a$ and $r$ is easily accounted for by our theoretical description. If the bias in each of the parameters can be determined experimentally, the other parameter can be adjusted accordingly using the formulas for the $c_{1}$ and $c_{2}$ coefficients. The idea behind the increased robustness to fabrication errors relies on simple reasoning. The $\beta$ factor exceeds $0.9$ already at moderate $F_{PCW} \approx 10$, which can be considered good coupling of the emitter radiation to the PCW mode. The width of the $F_{PCW}$ spectral dependence of an ideal PCW is extremely narrow due to the mode dispersion curve (see Fig.~\ref{fig:pcw_illustration}). Our goal is to 'spoil' the PCW structure in order to make the $F_{PCW}$ spectrally wider at the expense of the $F_{PCW}$ peak value becoming lower but still sufficient for good coupling. \begin{figure}[t!]
\centering \includegraphics[width=\linewidth]{Figure-4-v2.pdf} \caption{Summarized results of the simulations of the compound crystal $F_{PCW}$ spectral curves. Each panel depicts the $F_{PCW}$ peak values and FWHMs for each of the 100 curves computed for compound PCWs with an added randomized hole radius error. The left column shows the results for compound structures composed of two halves of a PCW optimized for the identical peak $F_{PCW}$ wavelength $\lambda_{0}=925$~nm. The right column shows the results for compound structures composed of two halves of PCWs optimized for different $\lambda_{1}$ and $\lambda_{2}$ satisfying $\lambda_{1}+\lambda_{2} = 2\lambda_{0}$. The insets in each figure illustrate the used set of variable parameters.} \label{fig:compound_crystal_simulation_results} \end{figure} We test this claim by performing numerical simulations using the Lumerical software package. The model describes a composite PCW structure composed of two half-crystals with different periods and hole radii. Both halves correspond to PCWs delivering the optimal $\beta$ at the same wavelength $\lambda = 925$~nm. We add a random additive $\delta r_{i}$ sampled from the range $[-10,10]$~nm to the radius of each hole in the numerical PCW model and perform 100 simulation runs. Figure~\ref{fig:compound_crystal_simulation_results} illustrates the results of the simulation. The best combination corresponds to the compound crystal with periods $a_{1}=233$~nm and $a_{2}=238$~nm. The geometry of both parts of the compound crystal was estimated using our model, and each one reveals the highest value of $F_{PCW}$ at $925$~nm. This compound crystal configuration demonstrates better performance (Fig.~\ref{fig:compound_crystal_simulation_results}, left column, central panel) with simulated fabrication defects compared to the standard PCW with identical halves (Fig.~\ref{fig:compound_crystal_simulation_results}, top row, left panel).
To quantify the performance we introduce an average Purcell enhancement factor $\tilde{F}_{PCW}$ and a probability of Purcell enhancement $p$. The probability $p$ is defined as the fraction of simulation runs in which the target wavelength lies within the full width at half maximum of the Purcell enhancement curve; the average $\tilde{F}_{PCW}$ is the mean value of $F_{PCW}$ over these runs. In the case of the compound crystal with randomized radii the probability of Purcell enhancement at $925$~nm equals $p=0.35$ and the average value is $\tilde{F}_{PCW} = 8.69$, whereas for the standard crystal the values are $p=0.24$ (24 out of 100 runs) and $\tilde{F}_{PCW}=7.62$, respectively, confirming the robustness of the compound PCW in comparison with the non-compound one. Another option is to consider a compound PCW made of two parts designed to exhibit the maximal $F_{PCW}$ value at different wavelengths $\lambda_{1}$ and $\lambda_{2}$. Here the question arises: at which wavelength will the maximum $F_{PCW}$ of a compound crystal be observed? The answer turns out to be simple: if the parts of the PCW reveal the peak $F_{PCW}$ value at wavelengths $\lambda_1$ and $\lambda_2$, respectively, then the compound PCW reveals the peak at $\frac{\lambda_1+\lambda_2}{2}$. One can easily prove this statement by taking into account that there are two different reflective surface systems (under the approximation of a small hole radius $r$, see Fig.~\ref{fig:pcw_illustration}e), and in order to prevent light from propagating in a direction perpendicular to the waveguide axis, the effective distance between each surface must be equal to $\lambda/2$, meaning that the total effective distance between the surfaces on both sides of the emitter should be $\frac{\lambda_1+\lambda_2}{2}$.
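The two robustness metrics just introduced can be stated compactly in code. A sketch over hypothetical per-run outputs (the arrays of peak wavelength, FWHM and Purcell factor at the target wavelength below stand in for the actual FDTD results):

```python
import numpy as np

def robustness_metrics(peak_lam, fwhm, f_at_target, target=925.0):
    """p: fraction of runs whose FWHM window covers the target wavelength;
    F_tilde: mean Purcell factor at the target over exactly those runs."""
    peak_lam, fwhm, f_at_target = map(np.asarray, (peak_lam, fwhm, f_at_target))
    hit = np.abs(peak_lam - target) <= fwhm / 2.0
    p = float(hit.mean())
    f_tilde = float(f_at_target[hit].mean()) if hit.any() else 0.0
    return p, f_tilde

# Hypothetical per-run data (nm, nm, dimensionless)
peak_lam = [924.0, 926.5, 923.0, 928.0]
fwhm = [4.0, 2.0, 3.0, 4.0]
f_at_target = [18.0, 5.0, 10.0, 3.0]
print(robustness_metrics(peak_lam, fwhm, f_at_target))  # -> (0.25, 18.0)
```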
\begin{table*} \begin{tabular}{c|c|c|c|c} Configuration & $\tilde{F}_{PCW}$ @ 925 nm & $p$ & mean FWHM, nm & mean $F_{PCW}$ \\ \hline $\lambda_{1}=\lambda_{2}=925$~nm and $a_{1}=a_{2}=238$~nm & 7.6 & 0.24 & 3.0 & 24.9 \\ $\lambda_{1}=\lambda_{2}=925$~nm and $a_{1}=233$~nm, $a_{2}=238$~nm & 8.6 & 0.35 & 4.0 & 21.4 \\ $\lambda_{1}=\lambda_{2}=925$~nm and $a_{1}=223$~nm, $a_{2}=238$~nm & 7.0 & 0.57 & 7.4 & 10.2 \\ $\lambda_{1}=920$~nm, $\lambda_{2}=930$~nm and $a_{1}=a_{2}=238$~nm & 6.9 & 0.26 & 3.5 & 23.8 \\ $\lambda_{1}=915$~nm, $\lambda_{2}=935$~nm and $a_{1}=a_{2}=238$~nm & 5.4 & 0.18 & 4.4 & 20.6 \\ $\lambda_{1}=905$~nm, $\lambda_{2}=945$~nm and $a_{1}=a_{2}=238$~nm & 4.6 & 0.44 & 11.1 & 12.5 \\ \end{tabular} \caption{Summary of the combined crystal performance according to the numerical simulations. The mean FWHM and mean $F_{PCW}$ values indicate the Purcell factor FWHM and maximal value averaged over 100 simulation runs.} \label{tab:result_summary} \end{table*} The numerical results for this composition method are also obtained using 100 FDTD simulations similar to those used previously. The results are shown in the right column of Fig.~\ref{fig:compound_crystal_simulation_results}. The conclusion for this case is the following: the greater the difference between the wavelengths of the two parts of a compound PCW, the lower the maximal value and the larger the FWHM of the $F_{PCW}$ spectral curve. The periods of the two parts can be equal or different, but the radius must be set according to our model so that each part of the PCW shows a peak at the required wavelength. The results of simulating the Purcell factor spectral curve in the combined crystals with a randomized radius error $\delta r \in [-10,10]$~nm are summarized in Table~\ref{tab:result_summary}. We observe that the probability $p$ tends to grow when the two halves of the PCW are designed with substantially different parameters.
\section{Discussion and conclusion}\label{sec:discussion} We have established a mathematical connection between the geometrical parameters of a PCW structure and the Purcell enhancement factor at a specific wavelength. Compound PCW structures which are predicted to provide maximal enhancement at the target wavelength exhibit stronger robustness to random hole radius deviations than standard PCW structures. We attribute the observed effect to the broken symmetry of the crystal. The improvement manifests itself in a higher probability for a dipole emitter to be efficiently coupled to a PCW mode and in a higher average Purcell enhancement factor $\tilde{F}_{PCW}$. The $\tilde{F}_{PCW}$ values are around $10$, which corresponds to a $\beta$ factor of $\approx 91\%$. Although such coupling efficiency values cannot be considered satisfactory for the most demanding applications like fault-tolerant linear optical quantum computing \cite{Rudolph2021, Jeong2022}, they can nevertheless enable near-term experiments with multiple single-photon sources on an integrated platform. We would like to draw the readers' attention to a few interesting features which were uncovered in the course of the simulations with randomized radii deviations. The configurations designed for identical target wavelengths $\lambda_{1}=\lambda_{2}=925$~nm with unequal periods $a_{1} \neq a_{2}$ (see Fig.~\ref{fig:compound_crystal_simulation_results}b and c) both show clustering of points. The configurations in Fig.~\ref{fig:compound_crystal_simulation_results}c and Fig.~\ref{fig:compound_crystal_simulation_results}f have the most pronounced clustering along four vertical lines. We were unable to explain the origin of this behaviour, but we speculate that this effect might be related to the emergence of a topologically protected mode inside the PCW bandgap \cite{Proctor2020}.
Another peculiar observation is the tendency to cluster along vertical lines separated by an almost equal distance $\Delta \lambda$. This means that a discrete set of wavelengths exhibits a highly robust Purcell enhancement in the presence of the randomized hole radius error. Our results provide a clear understanding of the PCW parameter interplay and thus significantly simplify the initial structure design procedure. They can also serve to augment sophisticated automated optimization design routines by narrowing down the parameter space, or as a quick sanity check that avoids the necessity of running a 3D FDTD simulation. Our heuristic model describes a triangular PCW structure only, but we believe that similar reasoning and mathematical analysis apply to any other photonic crystal layout. \section{Acknowledgements} A. S., I. D. and S. S. acknowledge support by Rosatom in the framework of the Roadmap for Quantum computing (Contract No. 868-1.3-15/15-2021 dated October 5, 2021 and Contract No. P2154 dated November 24, 2021). S. K. is supported by the Ministry of Science and Higher Education of the Russian Federation on the basis of the FSAEIHE SUSU (NRU) (Agreement No. 075-15-2022-1116).
\section{Introduction} The Schwinger effect~\citep{sauter,schwinger} refers to the instability of a spatially homogeneous, purely electric field with respect to the decay into a state with pairs, e.g. electrons ($e^-$) and positrons ($e^+$), and a screened electric field, symbolically $\vert \vec E \rangle \to \vert \vec E^\prime e^+ e^- \rangle$ (cf.~\citep{gelis_schwinger_2015} for a recent review). The pair creation rate $w \propto \exp\{ - \pi E_c / \vert \vec E \vert \}$ is exceedingly small for fields presently attainable in mesoscopic laboratory installations, since the Sauter-Schwinger (critical) field strength $E_c=m^2/\vert e\vert=\SI{1.3e18}{V\per m}$ for electrons/positrons with mass $m$ and charge $\pm e$ is so large (we employ here natural units with $c = \hbar = 1$). The notion of the dynamical Schwinger process refers to a situation where the spatially homogeneous electric field has a time dependence, $\vec E(t)$. The particular case of a periodic field is dealt with in~\citep{brezin_pair_1970}, with the motivation that tightly focused laser beams can provide high field strengths, e.g.\ in the anti-nodes of pair-wise counter-propagating, linearly polarized beams. The superposition of many laser beams, as considered, e.g.\ in~\citep{narozhny_pair_2004}, can enlarge the pair yield noticeably. A particular variant is the superposition of strong laser beams and weaker but high-frequency beams, which may be idealized as a common classical background field $\vec E(t) = \vec E_1(\omega t) + \vec E_2 (N \omega t)$. If the frequency of the second field, $N \omega$, is sufficiently large, the tunneling path through the positron-electron gap is shortened by the assistance of the multi-photon effect~\citep{schutzhold_dynamically_2008,dunne_catalysis_2009} and, as a consequence, pair production is enhanced.
This dynamically assisted Schwinger process requires a Keldysh parameter $\gamma_1 = (E_c / E_1) (\omega/m) \ll 1$ to stay in the tunneling regime\footnote[2]{Similar to ionization in atomic physics, one can also for pair production distinguish between a tunneling ($\gamma\ll1$) and a multi-photon regime ($\gamma\gg1$), depending on the value of the Keldysh parameter $\gamma$.}. The combination $\gamma_1 < 1$ and $\gamma_2 = (E_c / E_2 ) (N\omega/m) > 1$ is dubbed the assisted dynamical Schwinger effect since the field ``1'' with parameters $E_1$, $\omega$ refers to the dynamical Schwinger effect in the nomenclature of~\citep{brezin_pair_1970}, and the field ``2'' with parameters $E_2$, $N\omega$ is assisting. Various pulse shapes for $E_{1,2}$ have been studied with the goal of finding optimal combinations~\citep{hebenstreit_optimization_2014,kohlfurst_optimizing_2013,akal_electron-positron_2014}. Current lasers reach intensities of $\SI{2e22}{W\per cm^2}$ (cf.~\citep{di_piazza_extremely_2012} for an overview) corresponding to an inverse Keldysh parameter of $\gamma^{-1}=10$. Planned facilities are, for example, ELI-NP~\citep{eli} and Apollon~\citep{zou_design_2015} ($\SI{10}{PW}$, $\SI{1e22}{W\per cm^2}$) or HiPER~\citep{hiper} ($\SI{100}{PW}$, $\SI{1e26}{W\per cm^2}$). (The Sauter-Schwinger field strength requires an intensity of $\SI{4e29}{W\per cm^2}$.) All these investigations aim at verifying the decay of the vacuum. Besides the mentioned strong (but presently not strong enough) fields, the Coulomb fields accompanying heavy and super-heavy atomic nuclei have also been considered as an option to study vacuum breakdown~\citep{greiner_3,rafelski_superheavy_1971,muller_solution_1972,muller_solution_1973,bialynicki-birula_phase-space_1991}. Previous experiments, however, have not been conclusive~\citep{heinz_positron_2000}. Another avenue for pair creation is the conversion of light into matter in the collision of photon beams.
The Breit-Wheeler process~\citep{breit_wheeler} refers to the reaction $\gamma^\prime + \gamma \to e^+ + e^-$, which is a crossing channel of the Compton process or the time-reversed annihilation. The famous experiment E-144 at SLAC~\citep{burke_positron_1997} can be interpreted as a two-step process with Compton backscattering of a laser beam and the subsequent reaction of the Compton backscattered photons with the laser beam in non-linear Breit-Wheeler pair production~\citep{burke_positron_1997,bamber_studies_1999}. The notion of the non-linear Breit-Wheeler process refers to the instantaneous reaction with a multiple of laser beam photons, i.e.\ $\gamma^\prime + n \omega_L \to e^+ + e^-$. Also here one can ask whether the laser assisted non-linear Breit-Wheeler process $\gamma^\prime + \omega_{XFEL} + n \omega_L \to e^+ + e^-$ shows peculiarities due to the superposition of the co-propagating XFEL and laser beams. Other field combinations, such as the nuclear Coulomb field and XFEL/laser beams, are also conceivable~\citep{augustin_nonlinear_2014,di_piazza_effect_2010} (cf.~\citep{di_piazza_extremely_2012} for a recent review and further references), but will not be addressed here. Our paper is organized as follows. In section 2 we consider the reasoning for the formation of resonance-type structures in the phase space distribution of pairs created in the assisted dynamical Schwinger process. The considered classical background field configuration has been characterized above: the superposition of two spatially homogeneous fields of different strengths and frequencies with a common envelope, as investigated in~\citep{otto_lifting_2015, otto_dynamical_2015,panferov_assisted_2015}. Examples are given for the mutual amplification, and some glimpses on the time evolution in simple pulses are provided too. Section~3 deals with the laser assisted Breit-Wheeler process, where spectral caustics have already been identified in~\citep{nousch_spectral_2016}.
Specifically, we show here the sensitivity of the spectral caustics to the laser beam intensity, which is important for multi-shot experiments with a not perfectly tuneable intensity parameter. Our approach here utilizes the common XFEL + laser field again as a classical background field to be dealt with in the Furry picture, while the probe photon $\gamma'$ refers to a quantized radiation field. We briefly summarize in Section~4. \section{Assisted dynamical Schwinger process} In this section we consider pair production in the spirit of the Schwinger process, i.e.\ the creation of $e^\pm$ pairs by a purely electric background field which is assumed to be spatially homogeneous. In the following, we use the notation and formalism as introduced in~\citep{otto_lifting_2015}. The quantum kinetic equation~\citep{schmidt_quantum_1998} \begin{equation}\label{QKE} \dot f(\vec p ,t) = \frac{\lambda(\vec p, t)}{2} \int\limits^t_{- \infty} \mathrm d t^{\prime} \lambda(\vec p, t^{\prime}) (1 - 2 f(\vec p, t^{\prime})) \cos\theta(\vec p, t, t^{\prime}) \end{equation} determines the time $(t)$ evolution of the dimensionless phase space distribution function per spin projection degree of freedom\footnote[5]{ In~\citep{otto_lifting_2015,otto_dynamical_2015} we employ a different convention with a sum over spin degrees of freedom, i.e.\ $f\to\sum_sf$ which removes factors $2$ in front of $f$.} $f(\vec p, t) = \mathrm d N(\vec p, t) / \mathrm d^3p \, \mathrm d^3x$, where $N$ refers to the particle number and $\mathrm d^3p$ and $\mathrm d^3x$ are the three-dimensional volume elements in momentum ($p$) and configuration ($x$) spaces. We emphasize that only $f(\vec p, t \to +\infty)$ can be considered as a single-particle distribution which may represent the source term of a subsequent time evolution of the emerging $e^+e^-$ plasma. The initial condition for solving (\ref{QKE}) is $f(\vec p, t \to -\infty) = 0$.
Screening and backreaction are not included by virtue of the small values of $f$ in subcritical fields (cf.~\citep{gelis_formulation_2013} for recent work on that issue). Above, the quantity $\lambda(\vec p, t) = \frac{eE(t)\, \varepsilon_\perp(p_\perp)}{\varepsilon^2(\vec p, t)}$ stands for the amplitude of the vacuum transition, and $\theta(\vec p, t, t') = 2\int^t_{t'}\mathrm d\tau\, \varepsilon(\vec p, \tau)$ for the dynamical phase, describing the vacuum oscillations modulated by the external field; the quasi-energy $\varepsilon$, the transverse energy $\varepsilon_\perp$ and the longitudinal quasi-momentum $P$ are defined as $\varepsilon(\vec p, t) = \sqrt{\varepsilon_\perp^2 (p_\perp) + P^2(p_\parallel, t)}$ and $\varepsilon_\perp (p_\perp)= \sqrt{m^2 + p^2_\perp}, $ $ P(p_\parallel, t) = p_\parallel -eA(t), $ where $p_\perp=|\vec p_\perp|$ is the modulus of the kinetic momentum ($\vec p$) component of positrons (electrons) perpendicular to the electric field, and $p_\parallel$ denotes the $E$-parallel kinetic momentum component. The electric field follows from the potential \begin{equation} A = K(\omega t) \left( \frac{E_1}{\omega} \cos (\omega t) + \frac{E_2}{N \omega} \cos (N \omega t) \right) \label{A_AO} \end{equation} by $E = - \dot A$ in Coulomb gauge. Equation (\ref{A_AO}) describes a bi-frequent field with frequency ratio $N$ (integer) and field strengths $E_1$ -- the strong field ``1'' -- and $E_2$ -- the weak field ``2''. The quantity $K$ is the common envelope function with the properties (i) absolutely flat in the flat-top time interval $-t_\text{f.t.}/2 < t < + t_\text{f.t.}/2$, (ii) absolutely zero for $t < - t_\text{f.t.}/2 - t_\text{ramp}$ and $t > t_\text{f.t.}/2 + t_\text{ramp}$, and (iii) absolutely smooth everywhere, i.e.\ $K$ belongs to the $C^\infty$ class; $t_\text{ramp}$ is the ramping duration characterizing the switching on/off time intervals.
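As an illustration of these definitions, the following sketch evaluates the bi-frequent potential (\ref{A_AO}) inside the flat-top region (where $K=1$), the corresponding field $E=-\dot A$, and the quasi-energy $\varepsilon(\vec p, t)$ in natural units with $m=1$ and $e=-1$ for the electron, with field strengths in units of $E_c$. This is a schematic aid, not part of an actual QKE solver.

```python
import math

# Sketch of the bi-frequent background field of Eq. (A_AO) in the flat-top
# region (envelope K = 1), natural units m = |e| = 1 so that E_c = 1.
# Parameters follow Fig. 1: E1 = 0.1 E_c, E2 = 0.05 E_c, omega = 0.02 m, N = 25.
E1, E2, omega, N = 0.1, 0.05, 0.02, 25

def A(t):                      # potential with K = 1
    return (E1 / omega) * math.cos(omega * t) \
         + (E2 / (N * omega)) * math.cos(N * omega * t)

def E_field(t):                # E = -dA/dt
    return E1 * math.sin(omega * t) + E2 * math.sin(N * omega * t)

def quasi_energy(p_perp, p_par, t):
    """epsilon(p, t) = sqrt(eps_perp^2 + P^2) with P = p_par - e A(t), e = -1."""
    eps_perp = math.hypot(1.0, p_perp)     # sqrt(m^2 + p_perp^2), m = 1
    P = p_par + A(t)                       # longitudinal quasi-momentum
    return math.hypot(eps_perp, P)

print(A(0.0), quasi_energy(0.0, 0.0, 0.0))
```

At $t=0$ the potential takes its maximal value $E_1/\omega + E_2/(N\omega) = 5.1$, and the quasi-energy is bounded from below by the mass, $\varepsilon \ge m$, as it must be.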
\begin{figure} \centering \includegraphics[width=\textwidth]{Plots/cut.png} \caption{Top row: Asymptotic transverse momentum ($p_\perp$) spectrum at $p_\parallel=0$ for the bi-frequent field (\ref{A_AO}) (middle panel) and the field components ``1'' (left panel, $E_1=0.1\,E_c$, $\omega=0.02\,m$) and ``2'' (right panel, $E_2=0.05\,E_c$, $N=25$) alone. Bottom row: Fourier zero-modes $2\Omega(p_\perp, p_\parallel=0)$ scaled by $\omega$ (left and middle panels) and $N\omega$ (right panel) for the fields in the top row with resonance conditions (horizontal dashed lines for $\ell=341$ and $343$ (left; higher-$\ell$ resonances are not depicted since the peaks are underneath the scale displayed in the top panel), $\ell=341, \dots,373$ (middle) and $\ell=5$ (right); vertical dashed lines are for the resonance positions; peaks for even $\ell$ appear only for $p_\parallel\ne0$ but get a zero amplitude at $p_\parallel=0$, and thus their positions are not depicted).} \label{fig:1AO} \end{figure} Figure \ref{fig:1AO} (top row) exhibits three examples for the transverse phase space distribution $f(p_\perp,p_\parallel=0,t\to\infty)$ for $E_1=0.1\,E_c$, $E_2=0.05\,E_c$, $\omega=0.02\,m$, $N=25$, $t_\text{ramp}=5\,\omega^{-1}$ and $t_\text{f.t.}=25\,\omega^{-1}$ obtained by numerically solving Eq.~\eqref{QKE}. The chosen parameters are by far not yet within reach of present and near-future facilities. Due to the periodicity of the involved fields and their finite duration a pronounced peak structure emerges (the peaks become sharp, elliptically bent ridges with deep notches when continuing the spectrum to finite values of $p_\parallel$). The peak heights scale with $t_\text{f.t.}^2$ for not too long pulse durations.
The peak positions are determined by the resonance condition~\citep{otto_lifting_2015} \begin{equation} 2 \Omega (p_\perp, p_\parallel) - \ell \omega = 0, \label{resonance} \end{equation} where $\Omega = \frac{m}{2\pi} \int_0^{2 \pi} \mathrm d x \sqrt{1 + (p_\perp / m)^2 + [(p_\parallel / m) - \gamma_1^{-1} \cos x - \gamma_2^{-1} \cos N x ]^2}$ is the Fourier zero-mode of $\varepsilon$. The values of $\ell$ (integer) for which the resonance condition (\ref{resonance}) is fulfilled can be used to label the peaks. $\Omega (p_\perp = p_\parallel = 0)$ may be interpreted as an effective mass $m^*$~\citep{kohlfurst_effective_2014} which determines $\ell_\text{min} = \operatorname{int}(1+2m^*/\omega)$. The Fourier zero-modes as functions of $p_\perp$ at $p_\parallel=0$ are displayed in the bottom row of Fig.~\ref{fig:1AO} together with the resonance positions. For the field ``1'' alone (left bottom panel) one has to take the limit $\gamma_2\to\infty$ in the Fourier zero-mode, while field ``2'' alone (right bottom panel) corresponds to $\gamma_1\to\infty$ and the replacement $\omega\to N\omega$ in~\eqref{resonance}. The striking feature in Fig.~\ref{fig:1AO} (cf.~\citep{otto_lifting_2015, otto_dynamical_2015} for other examples with different parameters, in particular $t_\text{f.t.}$, and~\citep{haehnel_bachelor_2015} for a wider range of field strengths) is the lifting of the spectrum related to field ``1'' by the assistance of field ``2''. While the amplification of the created pair distribution by the assisting field can be huge, for sub-critical fields the frequency $N \omega$ must be ${\cal O} (m)$ to overcome the exponential suppression. This implies that intensities envisaged in ELI pillar IV~\citep{eli} must be at our disposal in conjunction with much higher frequencies to arrive at measurable pair numbers enhanced further by an assisting field~\citep{otto_dynamical_2015}.
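The Fourier zero-mode and the resulting resonance labels can be checked with elementary numerics. The sketch below (natural units, $m=1$) evaluates $\Omega$ by direct quadrature for the parameters of Fig.~\ref{fig:1AO}, i.e.\ $\gamma_1^{-1}=5$ and $\gamma_2^{-1}=0.1$; the step count is an ad hoc choice.

```python
import math

# Sketch: Fourier zero-mode Omega(p_perp, p_par) of the quasi-energy and the
# lowest resonance label l_min = int(1 + 2 m*/omega), natural units m = 1.
# Parameters of Fig. 1: gamma_1^-1 = 5, gamma_2^-1 = 0.1, omega = 0.02, N = 25.
omega, N = 0.02, 25
inv_g1, inv_g2 = 5.0, 0.1

def Omega(p_perp, p_par, steps=20000):
    """(m/2pi) * int_0^{2pi} sqrt(1 + p_perp^2 + [p_par - g1^{-1} cos x - g2^{-1} cos Nx]^2) dx,
    evaluated by the rectangle rule (exponentially accurate for periodic integrands)."""
    s = 0.0
    for k in range(steps):
        x = 2.0 * math.pi * k / steps
        long_part = p_par - inv_g1 * math.cos(x) - inv_g2 * math.cos(N * x)
        s += math.sqrt(1.0 + p_perp ** 2 + long_part ** 2)
    return s / steps

m_eff = Omega(0.0, 0.0)              # effective mass m* = Omega(0, 0)
l_min = int(1 + 2 * m_eff / omega)   # lowest peak label from the resonance condition
print(m_eff, l_min)
```

With these parameters the effective mass comes out at roughly $3.4\,m$, placing $\ell_\text{min}$ in the low 340s, consistent with the resonance labels quoted in the caption of Fig.~\ref{fig:1AO}.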
\begin{figure} \centering \includegraphics[width=\textwidth]{Plots/time_evo.png} \caption{Time evolution of $f(p_\perp = p_\parallel = 0, t)$ in the adiabatic basis for the Sauter pulse~\eqref{sauter} for $\tau=1\,m^{-1}$ (blue), $\tau=2\,m^{-1}$ (green), $\tau=5\,m^{-1}$ (red), $\tau=10\,m^{-1}$ (cyan), $\tau=20\,m^{-1}$ (purple), $\tau=50\,m^{-1}$ (yellow) and $E_0=0.2\,E_c$ (left panel), $E_0=0.15\,E_c$ (right panel). The dashed black curves depict the Schwinger case as the limit of large values of $\tau$. Note the vast drop of the residual phase space occupancy for larger values of $\tau$ when changing $E_0$ from $0.2\,E_c$ to $0.15\,E_c$.} \label{fig:2AO} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Plots/terms.png} \caption{Time evolution of the components defined in~\eqref{components} of the analytical solution~\eqref{f_components} of the Schwinger case, depicted for $E_0=0.2\,E_c$. Cyan dashed curve: $\vert X \vert^2$, green curve: $\vert Y \vert^2$, blue curve: interference term $X Y^* + X^* Y$, red curve: $\vert X + Y \vert^2$. } \label{fig:3AO} \end{figure} Even with a low pair creation probability, a once produced pair may seed a further avalanche evolution~\citep{bell_possibility_2008,king_photon_2013, elkina_qed_2011} toward an electron-positron plasma. In this respect one may ask for the time scales to approach the asymptotic out-state. A unique answer seems not to be achievable within the present framework due to the unavoidable ambiguity of the particle definition (see, e.g.~\citep{dabrowski_super-adiabatic_2014} for examples of changing the time evolution of $f$ at intermediate times when changing the basis). Having this disclaimer in mind, one can nevertheless inspect graphs of $f(t)$. Figure \ref{fig:2AO} exhibits the time evolution in the adiabatic basis for the Sauter pulse \begin{align} E(t) = \frac{E_0}{\cosh^2 (t/\tau)}\:, \label{sauter} \end{align} which is fairly different from~\eqref{A_AO}.
The analytical solution~\citep{narozhny_the_1970,hebenstreit_diss_2011} of equation (\ref{QKE}) is useful for checking numerical codes, which are challenged by dealing with rapidly changing functions over many orders of magnitude. For large values of the pulse duration parameter $\tau$ the Schwinger case is recovered, see~\citep{hebenstreit_diss_2011}: \begin{equation} f = \frac{1}{8}\left(1+\frac{u}{\sqrt{2\hat\eta+u^2}}\right) \mathrm e^{-\frac{\pi\hat\eta}{4}} \vert X + Y \vert^2 \label{f_components} \end{equation} with \begin{align} X = \left(\sqrt{2\hat\eta+u^2}-u\right)D_{-1+\frac{i\hat\eta}{2}} \left(-u\mathrm e^{-\frac{i\pi}{4}}\right)\:,\quad Y = -2\mathrm e^{\frac{i\pi}{4}}D_{\frac{i\hat\eta}{2}} \left(-u\mathrm e^{-\frac{i\pi}{4}}\right)\:, \label{components} \end{align} where $D$ is the parabolic cylinder function, $u=\sqrt{\frac{2}{|e|E_0}} (p_\parallel+eE_0t)$ and $\hat\eta=\frac{m^2+p_\perp^2}{|e|E_0}$. While for $E_0 = 0.2 E_c$ the net function $\propto \vert X + Y \vert^2$ reaches its asymptotic value already at $t m \approx 20$ (see Fig.~\ref{fig:3AO}), the individual components $\vert X \vert^2$, $\vert Y \vert^2$ and $X Y^* + X^* Y$ display a violent time dependence on much longer time scales. Note also the subtle cancellations. In the case of the Sauter pulse, see Fig.~\ref{fig:2AO}, the asymptotic values of $f$ are reached at shorter times with decreasing values of $\tau$. The relatively large values of $f(t\approx 0)$ have sometimes tempted researchers to relate them to particular effects caused by the transient state. Clearly, only observables at asymptotic times, e.g.\ as provided by probe beams, are reliable. It is questionable, however, whether such probes can disentangle transient state contributions and asymptotic state contributions in a unique manner.
\section{Laser assisted Breit-Wheeler process} \begin{figure} \centering \includegraphics[width=\textwidth]{Plots/Abb_1.pdf} \caption{Spectra for the laser assisted Breit-Wheeler process for a probe photon of energy $\SI{60}{MeV}$ colliding head-on with an XFEL photon (energy $\SI{6}{keV}$) and a co-propagating laser beam (frequency $\SI{10}{eV}$). Further parameters are $\eta=1/600$, $\gamma_X=10^5$, $\tau_X=7\tau/(4\pi\eta)$, $\gamma_L=2$ and $\tau_L=8\pi$ in the field~\eqref{A_Tobias}. These parameters translate into intensities of $\SI{6.2e15}{W\per cm^2}$ and $\SI{4.3e19}{W\per cm^2}$ for XFEL and laser, respectively. Upper panel: $\mathrm d\sigma/\mathrm d\ell\mathrm d z\mathrm d\varphi$ at $z=0$ and $\varphi=\pi$ as a function of $\ell$ (lower axis; the corresponding values of $p_\perp$ are given at the upper axis). The calculated spectrum is smoothed by a Gaussian window function with width $\delta=1.3$ to get the red curve. Middle panel: smoothed spectrum separately. Lower panel: phase $\phi$ as a function of $\ell$ (see~\citep {nousch_spectral_2016} for details). The vertical dotted lines depict the positions of diverging $\mathrm d\phi/\mathrm d\ell$, where two branches of $\phi(\ell)$ merge.} \label{fig:1TN} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Plots/Var_a0.pdf} \caption{As middle panel in Fig.~\ref{fig:1TN} but for $\gamma_L=10$, laser intensity $\SI{1.7e18}{W\per cm^2}$ (top panel) and $\gamma_L=1$, laser intensity $\SI{1.7e20}{W\per cm^2}$ (bottom panel).} \label{fig:3TN} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Plots/Var_a0Box_6P.pdf} \caption{As middle panel in Fig.~\ref{fig:1TN} but variation of $\gamma_L$ around $\gamma_L=2$. 
Upper panel: $\gamma_L=2.22$, middle panel: $\gamma_L=1.82$, lower panel: superposition of smoothed spectra for $\gamma_L=1.88\dots2.12$ corresponding to the laser intensity parameter $a_0=\gamma_L^{-1}=0.5\pm0.03$.} \label{fig:2TN} \end{figure} The laser assisted, non-linear Breit-Wheeler process (cf.~\citep{jansen_strongly_2013,jansen_strong-field_2015,wu_nonlinear_2014, krajewska_breit-wheeler_2014,meuren_polarization-operator_2015}) is dealt with within strong-field QED (the Furry picture) as the reaction $\gamma^\prime \to e^+_A + e^-_A$, where $e^\pm_A$ denote dressed electron/positron states as Volkov solutions of the Dirac equation in a plane wave model with the vector potential of the common classical background field \begin{equation} A^\mu (\phi) = \gamma_X^{-1} f_X (\phi) \varepsilon^\mu_X \cos \phi + \gamma_L^{-1} f_L (\eta \phi) \varepsilon^\mu_L \cos \eta \phi, \label{A_Tobias} \end{equation} where the polarization four-vectors are $\varepsilon^\mu_{X,L}$ and the above-defined Keldysh parameters $\gamma_{1,2}$ have been transposed to $\gamma_{X,L}$; $\gamma'$ denotes the high-energy probe photon traversing the field~\eqref{A_Tobias}. The XFEL (frequency $\omega$) and laser (frequency $\eta\omega$, we assume in the following $\eta\ll1$) beams are co-propagating, and their linear polarizations are set perpendicular to each other to simplify the cumbersome numerical evaluation. Both are pulsed, described for the sake of computational convenience by the envelope functions $f_X = \exp\{ - \phi^2 /(2 \tau_X^2)\}$ and $f_L = \cos^2 \left(\pi \phi /(2 \tau_L) \right)$ for $-\tau_L \le \phi \le + \tau_L$ and zero elsewhere for the latter pulse shape. In contrast to~\eqref{A_AO} we treat here a somewhat more realistic case with different pulse durations $\tau_X$ and $\tau_L$. The invariant phase is $\phi = k \cdot x$, with the dot indicating the scalar product of the four-wave vector $k$ and the space-time coordinate $x$.
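The two envelope functions and the scalar amplitudes entering Eq.~\eqref{A_Tobias} are simple enough to sketch directly. Below, the polarization vectors $\varepsilon^\mu_{X,L}$ are kept implicit, the parameters follow the caption of Fig.~\ref{fig:1TN}, and the pulse-duration parameter $\tau$ in $\tau_X=7\tau/(4\pi\eta)$ is read as $\tau_L$ (an assumption made for this illustration).

```python
import math

# Sketch of the plane-wave background of Eq. (A_Tobias): scalar amplitudes
# along the two (implicit, mutually perpendicular) polarization directions.
# Parameters as in Fig. 4: eta = 1/600, gamma_X = 1e5, gamma_L = 2,
# tau_L = 8*pi, and tau_X = 7*tau_L/(4*pi*eta) (reading tau as tau_L).
eta = 1.0 / 600.0
gX, gL = 1.0e5, 2.0
tauL = 8.0 * math.pi
tauX = 7.0 * tauL / (4.0 * math.pi * eta)

def f_X(phi):            # Gaussian XFEL envelope
    return math.exp(-phi ** 2 / (2.0 * tauX ** 2))

def f_L(phi):            # cos^2 laser envelope with compact support |phi| <= tau_L
    return math.cos(math.pi * phi / (2.0 * tauL)) ** 2 if abs(phi) <= tauL else 0.0

def A_X(phi):            # amplitude along epsilon_X
    return f_X(phi) * math.cos(phi) / gX

def A_L(phi):            # amplitude along epsilon_L (envelope argument is eta*phi)
    return f_L(eta * phi) * math.cos(eta * phi) / gL

print(A_X(0.0), A_L(0.0))
```

At the pulse center the amplitudes reduce to $\gamma_X^{-1}=10^{-5}$ and $\gamma_L^{-1}=0.5$, and the laser amplitude vanishes identically once $|\eta\phi|$ exceeds $\tau_L$.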
It is convenient to parametrize the produced positron's phase space by the following three variables: (i) the momentum exchange parameter $\ell$, (ii) the azimuthal angle $\varphi$ with respect to the polarization direction of the assisting laser field and (iii) the shifted rapidity $z = \frac12 \log (p_+^+/p_+^-) + \frac 12 \log\left( (1 + \eta \ell) \omega_X / \omega_{X^\prime} \right)$. The energy-momentum balance for laser assisted pair production can be put into the form $k_{X^\prime}^\mu + k_X^\mu + \ell k_L^\mu = p_+^\mu + p_-^\mu$ ($\mu$ is a Lorentz index, as above), where $\ell$ represents here a hitherto unspecified momentum exchange between the assisting laser field $L$ and the produced pair. We define light-front coordinates, e.g. $x^\pm = x^0 \pm x^3$ and $\vec x_\perp = (x_1, x_2)$, and analogously the light-front components of the four-momenta of the probe photon $X^\prime$, the XFEL photon $X$, the laser beam photons $L$, and the positron (subscript $+$) and electron (subscript $-$). These become handy because the laser four-momentum vectors only have one non-vanishing light-front component, $k_{X,L}^- = 2 \omega_{X,L}$. In particular, the energy-momentum balance contains the three conservation equations in light-front coordinates $k_{X^\prime}^+ = p_+^+ + p_-^+$ and $\vec p_+^\perp = -\vec p_-^\perp$. Moreover, the knowledge of all particle momenta allows one to calculate $\ell$ via the fourth equation $\ell = \left( (p_+^- + p_-^- - k_{X^\prime}^-) / k_X^- - 1 \right) / \eta$. Treating $(\ell, z, \varphi)$ as independent variables, the positron's four-momenta are completely determined by the above energy-momentum balance equations; see~\citep{nousch_spectral_2016} for details, in particular for expressing the positron and electron momenta $p_\pm$ by $(\ell, z, \varphi)$. The theoretical basis for formulating and evaluating the cross section is outlined in~\citep{nousch_spectral_2016}.
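The light-front bookkeeping above can be made concrete with a small numerical example. The sketch below (natural units, $m=1$) builds an on-shell pair from chosen plus- and perpendicular components and recovers $\ell$ from the fourth balance equation; all numerical values are illustrative choices and are not taken from the paper's figures.

```python
# Sketch of the light-front kinematics (natural units, m = 1). The XFEL and
# laser beams co-propagate, so only k^- = 2*omega is non-zero for them; the
# probe photon X' comes head-on, carrying only k^+.

def lf_minus(p_plus, p_perp2, m2=1.0):
    """On-shell minus component: p^- = (m^2 + p_perp^2) / p^+."""
    return (m2 + p_perp2) / p_plus

omega_X, eta = 0.01, 1.0 / 600.0           # laser frequency is eta * omega_X
kX_minus = 2.0 * omega_X
kL_minus = 2.0 * eta * omega_X
kXp_plus, kXp_minus = 120.0, 0.0           # head-on probe photon: only k^+

# Choose the positron; the electron follows from the conservation equations
# k_X'^+ = p_+^+ + p_-^+ and p_+^perp = -p_-^perp:
pp_plus, p_perp = 70.0, 0.4
pm_plus = kXp_plus - pp_plus
minus_total = lf_minus(pp_plus, p_perp ** 2) + lf_minus(pm_plus, p_perp ** 2)

# Fourth balance equation for the momentum exchange parameter:
ell = ((minus_total - kXp_minus) / kX_minus - 1.0) / eta
# Check: k_X'^- + k_X^- + ell * k_L^- must equal p_+^- + p_-^-
balance = kXp_minus + kX_minus + ell * kL_minus - minus_total
print(ell, balance)
```

By construction the minus-component balance closes identically up to rounding, which is a quick way to validate any implementation of the $(\ell, z, \varphi)$ parametrization.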
An example is displayed in the top panel of Fig.~\ref{fig:1TN} for $\eta=1/600$, $\gamma_X=10^5$, $\tau_X=7\tau/(4\pi\eta)$, $\gamma_L=2$, and $\tau_L=8\pi$ (examples for other parameters are exhibited in~\citep{nousch_spectral_2016}) for kinematical conditions where the linear Breit-Wheeler effect for $X'+X$ is just above the threshold. The spectral distribution is involved (note that without the laser assistance only the Breit-Wheeler peak centered at $\ell=0$, corresponding to $p_\perp=0.62\,m$, would appear, with a finite width as a consequence of the finite x-ray pulse duration; cf.~\citep{titov_enhanced_2012,titov_breit-wheeler_2013,nousch_pair_2012} for an enhancement of pair production in short laser pulses). The spectrum can be smoothed by a window function with a resolution scale of $\delta=1.3$ (an ad hoc choice to better show the strength distribution, which may be considered as a simple account of a finite energy, respectively $p_\perp$, resolution), resulting in the red curve which is exhibited separately in the middle panel. In line with the interpretation in~\citep{nousch_spectral_2016,seipt_caustic_2015}, the prominent peaks are caustics related to stationary phase points determined by the turning points of the invariant phase $\phi$ as a function of the variable $\ell$, see bottom panel. This interpretation implies that the total cross section may be approximately factorized into a plain Breit-Wheeler production part and a final-state interaction part, where the latter means the redistribution of the produced particles by the impact of the laser field. An analogous interpretation of particle production in the constant crossed field approximation in very strong fields has been put forward in~\citep{meuren_semiclassical_2015}. Figure~\ref{fig:3TN} demonstrates the strong impact of the laser field intensity. For smaller values of $\gamma_L$, the transverse momentum spectrum becomes more stretched and its shape is changed.
This challenges the observability of the peaks related to caustics in multi-shot experiments with fluctuating laser intensities. In fact, for the unfavorable case of equally weighted deviations, a window of less than $20\,\%$ is required to keep the peak structures, see Fig.~\ref{fig:2TN}. A truncated Gaussian distribution with $1\sigma$ width in the same interval is, of course, much more favorable for keeping the peaks, in particular for larger $p_\perp$. We consider here only one particular case of the laser assisted, linear Breit-Wheeler process which turns into the textbook Breit-Wheeler process upon switching off the laser. Non-linearities w.r.t.\ the XFEL beam, subthreshold (w.r.t.\ the $X'$ + XFEL kinematics) effects combined with larger laser intensities, carrier envelope phase effects, and a wider range of kinematical parameters (e.g.\ $\omega_L = \mathcal O(\SI{1}{eV})$) need to be explored as well to arrive at a complete picture. Further issues to be analyzed with respect to an experimental proposal are non-monochromaticity and misalignment disturbances. \section{Summary} In summary, we have supplied further important details of (i) the amplification effect of the assisted dynamical Schwinger effect and (ii) the phase space redistribution in the laser assisted Breit-Wheeler process. Both topics are motivated by the availability of x rays from XFELs and upcoming ultra-high intensity laser beams. We consider the perspectives offered by the combination of both beam types resulting in bi-frequent fields. Concerning the Schwinger related investigations, we find that significant pair production by the dynamical assistance requires much higher frequencies than those provided by XFEL beams in conjunction with future ELI-IV field intensities.
The crucial challenge for the laser assisted Breit-Wheeler process and an access to the predicted caustic structures is the high-energy probe photon beam in combination with dedicated phase space selective detector set-ups. The bi-frequent fields are dealt with as a classical background. An avenue for further work is the proper account of quantum fluctuations and a unifying description of counter- and co-propagating fields. \bigskip \noindent\textbf{Acknowledgements}~~R. Sauerbrey, T. E. Cowan and H. Takabe are thanked for the collaboration within the HIBEF project~\citep{hibef}. D.B. and S.A.S. acknowledge support by NCN under grant number UMO-2014/15/B/ST2/03752.\bigskip \noindent Dedicated to the memory of Nikolay Borisovich Narozhny, who pioneered this field of research. \bibliographystyle{jpp}
\section{Introduction} Let $A$ be a bounded linear operator acting on a Hilbert space $\mathcal H$. In case $\dim\mathcal H=n<\infty$ we will identify $\mathcal H$ with $\mathbb C^n$ and $A$ with its $n$-by-$n$ matrix representation in the standard basis $\{e_1,\ldots,e_n\}$ of $\mathbb C^n$. We will write $A\in B(\mathcal H)$ when the dimension of $\mathcal H$ is irrelevant and $A\in\mathbb C^{n\times n}$ to emphasize that it is finite. We will denote the norm, the spectrum, and the numerical range of $A$ as $\norm{A}$, $\sigma(A)$ and $W(A)$, respectively. Recall that the latter (a.k.a. the {\em field of values}, or the {\em Hausdorff set} of $A$) is defined as \eq{nr} W(A)=\{ \scal{Ax,x}\colon x\in\mathcal H,\ \norm{x}=1\}. \en This notion goes back to classical papers by Toeplitz \cite{Toe18} and Hausdorff \cite{Hau}; \cite{GusRa} is a more recent standard reference for the properties of $W(A)$. It is known in particular that $W(A)$ is convex (Toeplitz-Hausdorff theorem), its closure $\operatorname{cl}{W(A)}$ contains $\sigma(A)$, and thus the convex hull of it: \eq{nrs} \operatorname{cl}{W(A)}\supseteq \operatorname{conv} \sigma(A). \en Operators for which the equality in \eqref{nrs} holds are called {\em convexoid}; this class includes in particular all normal (and even hyponormal) operators. On the other hand, already for non-normal $A\in\mathbb C^{2\times 2}$ the inclusion in \eqref{nrs} is strict: $\operatorname{conv}\sigma(A)$ is then the line segment connecting the eigenvalues $\lambda_1,\lambda_2$ of $A$ while $W(A)$ is an elliptical disk with the foci at $\lambda_1,\lambda_2$ (the Elliptical Range Theorem). A variation of \eqref{nr} is the so called {\em maximal} numerical range $W_0(A)$ consisting of the limits of all convergent sequences $\scal{Ax_n,x_n}$ with unit vectors $x_n\in\mathcal H$ such that $ \norm{Ax_n}\to\norm{A}$. 
This notion was introduced by J.~Stampfli in \cite{Sta70}, where it was also observed that $W_0(A)$ is a closed convex subset of $\operatorname{cl}{W(A)}$. For $A\in\mathbb C^{n\times n}$, $W_0(A)=W(B)$, where $B$ is the compression of $A$ onto the eigenspace of $A^*A$ corresponding to its maximal eigenvalue. In the same paper \cite{Sta70}, J.~Stampfli introduced the {\em center of mass} of $A$ as the (unique) value of $\lambda\in\mathbb C$ at which the minimum of $\norm{A-\lambda I}$ is attained. Since this term is overused, and to give credit where it is due, we will call this value of $\lambda$ the {\em Stampfli point} of $A$ and denote it $\operatorname{St}(A)$. In this notation, according to \cite[Corollary]{Sta70}: \eq{stw0} \operatorname{St}(A)=\lambda \text{ if and only if } 0\in W_0(A-\lambda I). \en Note that the statements in \eqref{stw0} are {\bf not} equivalent to $\lambda\in W_0(A)$, since the maximal numerical range does not behave nicely under shifts. Several other observations made in \cite{Sta70} are as follows: If $A$ is normal (or even hyponormal), then $\operatorname{St}(A)$ is the center of the smallest circle circumscribing $\sigma(A)$. In general, $\operatorname{St}(A)$ lies in the closure of the numerical range of $A$ but not necessarily in the convex hull of its spectrum. It is also mentioned in passing that the respective examples exist already when $A$ is nilpotent and $\dim\mathcal H=3$ but no specifics were provided. In this paper we further explore properties of the Stampfli point. Section~\ref{s:sdb} provides an explicit formula for $\operatorname{St}(A)$ for $A$ unitarily similar to 2-by-2 block operator matrices with the diagonal blocks being scalar multiples of the identity operator $I$. This covers in particular quadratic operators, as well as tridiagonal matrices with constant main diagonal. 
In Section~\ref{s:ano} the property $\operatorname{St}(A)\in\operatorname{conv}\sigma(A)$ is extended from normal to so-called almost normal operators. Sections~\ref{s:tri}--\ref{s:tps} are devoted to 3-by-3 matrices. An explicit procedure for computing $\operatorname{St}(A)$ when $\sigma(A)$ is a singleton $\{\lambda\}$ is outlined in Section~\ref{s:trim}, based on some auxiliary results established in Section~\ref{s:tri}. One of these results is the criterion for $\operatorname{St}(A)$ to coincide with $\lambda$. As a generalization of the latter, in Section~\ref{s:tps} we characterize matrices $A\in\mathbb C^{3\times 3}$ with a doubleton spectrum and $\operatorname{St}(A)$ coinciding with the multiple eigenvalue. Section~\ref{s:rorth} contains some observations on the relation between the Stampfli point of $A$ and the Roberts orthogonality of $A$ to $I$. Finally, several figures illustrating results of Sections~\ref{s:ano}--\ref{s:trim} are presented in the Appendix. \section{Operators with scalar diagonal blocks} \label{s:sdb} \iffalse Every $A\in\mathbb C^{2\times 2}$ is almost normal, and therefore satisfies $\operatorname{St}(A)\in[\lambda_1,\lambda_2]$ (here $\lambda_1,\lambda_2$ are the eigenvalues of $A$, constituting in this case the entirety of its spectrum). However, this simple case can be handled directly, and the answer is more definite.\fi Let us start with the simplest possible case, in which the answer is explicit and can be obtained directly by a straightforward computation. \begin{prop} \label{th:2by2} Let $A\in\mathbb C^{2\times 2}$. Then $\operatorname{St}(A)=\operatorname{trace} A/2$. \end{prop} \begin{proof} Using a unitary similarity, put $A$ in an upper-triangular form: \[ A =\begin{bmatrix} \lambda_1 & c \\ 0 & \lambda_2 \end{bmatrix}, \] where $\{\lambda_1,\lambda_2\}=\sigma(A)$, $c\geq 0$.
Then, for any $\lambda\in\mathbb C$: \[ (A-\lambda I)^* (A-\lambda I)=\begin{bmatrix} \abs{\lambda_1-\lambda}^2 & c(\overline{\lambda_1}-\overline{\lambda}) \\ c(\lambda_1-\lambda) & c^2+\abs{\lambda_2-\lambda}^2 \end{bmatrix},\] and so \begin{multline}\label{2max} 2\norm{A-\lambda I}^2= \abs{\lambda_1-\lambda}^2+\abs{\lambda_2-\lambda}^2+c^2\\ + \sqrt{(\abs{\lambda_1-\lambda}^2-\abs{\lambda_2-\lambda}^2)^2+c^4+ 2c^2(\abs{\lambda_1-\lambda}^2+\abs{\lambda_2-\lambda}^2)}.\end{multline} Elementary planar geometry shows that the minimum with respect to $\lambda$ in the right hand side of \eqref{2max} is attained when $\lambda$ is the midpoint of $[\lambda_1,\lambda_2]$. \end{proof} Note that Proposition~\ref{th:2by2} can also be proved based on \eqref{stw0} and the explicit formula from \cite{HamS} for $W_0(A)$ in case of $2$-by-$2$ matrices. \begin{thm}\label{th:brs}Let $A\in \mathbb C^{n\times n}$ be unitarily similar to a matrix of the form \eq{brs} \begin{bmatrix} a_1 I_{n_1} & X \\ Y^* & a_2 I_{n_2} \end{bmatrix}, \en with $XY^*\in \mathbb C^{n_1\times n_1}$, $Y^*X\in \mathbb C^{n_2\times n_2}$ being normal. Then $\operatorname{St}(A)=(a_1+a_2)/2$. \end{thm} Proposition~\ref{th:2by2} is of course a particular case of Theorem~\ref{th:brs} (corresponding to $n_1=n_2=1$), but at the same time also the main ingredient of its proof. \begin{proof} As was shown in \cite{BS04} (see the proof of Theorem~2.1 there), matrices under consideration are unitarily similar to direct sums of $\min\{n_1,n_2\}$ two-dimensional blocks $A_k$, all having $a_1,a_2$ as their diagonal entries, with $\abs{n_1-n_2}$ one-dimensional blocks, equal $a_1$ or $a_2$. According to Proposition~\ref{th:2by2}, $\operatorname{St}(A_k)=(a_1+a_2)/2$ does not depend on $k=1,\ldots,\min\{n_1,n_2\}$. Since for any $\lambda\in\mathbb C$, $\norm{A_k-\lambda I}\geq\abs{a_j-\lambda}$ ($j=1,2$), the value of $\operatorname{St}(A)$ for the whole matrix $A$ coincides with that of its blocks $A_k$.
\end{proof} The normality of $XY^*, Y^*X$ holds in a trivial way if $Y=0$, i.e., $A$ is unitarily similar to \eq{mquad} \begin{bmatrix} \lambda_1 I & Z \\ 0 & \lambda_2 I \end{bmatrix}, \en with $\{\lambda_1,\lambda_2\}=\sigma(A)$. This happens if and only if $A$ satisfies the equation \eq{quad} A^2+pA+qI=0 \en with $p=-(\lambda_1+\lambda_2), \ q=\lambda_1\lambda_2$. Here is an infinite-dimensional analogue of this situation. \begin{thm} Let $A\in B(\mathcal H)$ be a quadratic operator, i.e., \eqref{quad} holds for some $p,q\in\mathbb C$. Then $\operatorname{St}(A)=-p/2$.\label{th:quad} \end{thm} \begin{proof}As was observed in \cite{TsoWu}, for an operator $A$ satisfying \eqref{quad} there exists a partition $\mathcal H=\mathcal H_1\oplus\mathcal H_2$ with respect to which $A$ takes the form \eqref{mquad}. \iffalse Here $\lambda_1,\lambda_2$ are the roots of $\lambda^2+p\lambda+q$, and \[ A_0=\begin{bmatrix} \lambda_1 & \norm{Z} \\ 0 & \lambda_2 \end{bmatrix}\in \mathbb C^{2\times 2}.\] \fi But then for any $\lambda\in\mathbb C$ \[ \norm{A-\lambda I}=\norm{A_0-\lambda I}, \text{ where } A_0=\begin{bmatrix} \lambda_1 & \norm{Z} \\ 0 & \lambda_2 \end{bmatrix}\in \mathbb C^{2\times 2}, \] and so $\operatorname{St}(A)=\operatorname{St}(A_0)$. It remains to invoke Proposition~\ref{th:2by2}. \end{proof} Note that in the setting of Theorem~\ref{th:quad} $\sigma(A)=\{\lambda_1,\lambda_2\}$, and according to \cite[Theorem 2.1]{TsoWu} $W(A)$ is an elliptical disk with the foci $\lambda_1,\lambda_2$ (possibly degenerating into the line segment $[\lambda_1,\lambda_2]$). So, for quadratic $A$ the position of $\operatorname{St}(A)$ is defined by $\sigma(A)$ uniquely, and is indeed at the center of both $\sigma(A)$ and $W(A)$. This justifies to some extent the ``center of mass'' term for $\operatorname{St}(A)$ coined by Stampfli.
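As a quick numerical sanity check (an illustration added here, not part of the original argument; the sample eigenvalues, the off-diagonal entry and the search grid are arbitrary choices), one can evaluate $\norm{A-\lambda I}$ for a $2$-by-$2$ upper-triangular matrix via formula \eqref{2max} and minimize over real $\lambda$ by a grid search; the minimizer lands at $\operatorname{trace} A/2$, as Proposition~\ref{th:2by2} asserts. Since the proof of Theorem~\ref{th:quad} reduces the quadratic case to exactly this $2$-by-$2$ computation, the same check covers quadratic operators as well.

```python
import math

def norm_2x2(m):
    # Spectral norm of a 2x2 complex matrix m = [[a, b], [c, d]]:
    # square root of the largest eigenvalue of m^* m, i.e. formula (2max).
    f = sum(abs(e) ** 2 for row in m for e in row)        # Frobenius norm squared
    d2 = abs(m[0][0] * m[1][1] - m[0][1] * m[1][0]) ** 2  # |det m|^2
    return math.sqrt((f + math.sqrt(max(f * f - 4 * d2, 0.0))) / 2)

# Upper-triangular sample with sigma(A) = {1, 5} and c = 3,
# so trace(A)/2 = 3 should be the Stampfli point.
l1, l2, c = 1.0, 5.0, 3.0

def shifted_norm(lam):
    return norm_2x2([[l1 - lam, c], [0.0, l2 - lam]])

# A has real entries, so the (unique) minimizer of ||A - lambda I|| is real.
grid = [k * 0.01 for k in range(-200, 801)]
best = min(grid, key=shifted_norm)
print(best)  # grid minimizer, close to 3.0 = trace(A)/2
```

The grid search exploits convexity of $\lambda\mapsto\norm{A-\lambda I}$, so the unique minimum cannot be missed.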
Finally, if in \eqref{brs} $a_1=a_2$, then no conditions on $X,Y$ are needed and, moreover, the formula for $\operatorname{St}(A)$ holds in the infinite-dimensional setting. \begin{thm} \label{th:xy} Let $A\in B(\mathcal H)$. If there exists a subspace $\mathcal L$ of $\mathcal H$ such that compressions of $A$ onto $\mathcal L$ and its orthogonal complement $\mathcal L^\perp$ are both multiples of the identity by the same scalar $a$, then $\operatorname{St}(A)=a$. \end{thm} \begin{proof}It suffices to consider the case $a=0$. The operator $A$ then can be represented as $\begin{bmatrix} 0 & X \\ Y^* & 0 \end{bmatrix}$ with respect to the decomposition $\mathcal H =\mathcal L\oplus\mathcal L^\perp$. We need to show that the norm of $\begin{bmatrix} \lambda I & X \\ Y^* & \lambda I \end{bmatrix}$ attains its minimum with respect to $\lambda$ at $\lambda=0$. Equivalently, the rightmost point of the spectrum (or, which is the same in this case, the numerical range) of \[ \begin{bmatrix} \overline{\lambda} I & Y \\ X^* & \overline{\lambda} I \end{bmatrix} \begin{bmatrix} \lambda I & X \\ Y^* & \lambda I \end{bmatrix}= \abs{\lambda}^2I+\begin{bmatrix} YY^* & \overline{\lambda}X+\lambda Y \\ \lambda X^*+\overline{\lambda}Y^* & X^*X \end{bmatrix} \] should be the smallest when $\lambda=0$. But this is indeed the case, simply because the norm of any block operator matrix $\begin{bmatrix} H_1 & Z \\ Z^* & H_2\end{bmatrix}$ with fixed positive semi-definite diagonal blocks is minimal when its off-diagonal blocks are equal to zero. \end{proof} \begin{cor}\label{co:tri} Let $A\in B(\mathcal H)$ be such that the entries $a_{ij}$ of its matrix in some orthonormal basis $\{f_j\}$ have the property: $a_{ij}=0$ if $i-j$ is even and different from zero; $a_{jj}:=a$ is independent of $j$. Then $\operatorname{St}(A)=a$. \end{cor} Indeed, such $A$ meets conditions of Theorem~\ref{th:xy} with $\mathcal L =\operatorname{Span}\{f_j\colon j \text{ even}\}$.
This corollary covers in particular tridiagonal matrices $A$ with constant main diagonal. \section{Almost normal operators} \label{s:ano} We adopt the definition of almost normality which for $A\in\mathbb C^{n\times n}$ was introduced in \cite{Ikra11} as having at least $n-1$ pairwise orthogonal eigenvectors. We will therefore say that $A\in B(\mathcal H)$ is {\em almost normal} if it has an invariant subspace $\mathcal L$ of codimension one such that $A|\mathcal L$ is normal. \begin{thm}\label{th:an} Let $A\in B(\mathcal H)$ be an almost normal operator. Then \eq{stan} \operatorname{St}(A)\in\operatorname{conv}\sigma(A).\en \end{thm} \begin{proof}According to the definition of almost normality, $A$ can be represented in the matrix form \eq{an} A=\begin{bmatrix} N & b \\ 0 & \mu\end{bmatrix} \en with respect to the partition $\mathcal H =\mathcal L\oplus\mathbb C$. Here $N\in B(\mathcal L)$ is a normal operator, $b$ can be identified with a vector in $\mathcal L$, and $\mu\in\mathbb C$. Observe that $\sigma(A)=\sigma(N)\cup\{\mu\}$. Suppose \eqref{stan} does not hold. By shifting, rotating and scaling $A$ we may then without loss of generality assume that $\operatorname{St}(A)=0$ while $\sigma(A)\subset\{z\colon \operatorname{Re} z\geq 1\}$. Any unit vector $x\in\mathcal H$ can, up to an inconsequential unimodular scalar multiple, be written as $x=[\sqrt{t}\xi,\sqrt{1-t}]^T$. Here $\xi$ is a unit vector in $\mathcal L$ and $t\in[0,1]$. Then \[ Ax=[\sqrt{t}N\xi+\sqrt{1-t}b,\sqrt{1-t}\mu]^T, \] and \eq{nAx} \norm{Ax}^2=t\scal{N^*N\xi,\xi}+(1-t)(\norm{b}^2+\abs{\mu}^2)+2\sqrt{t(1-t)}\operatorname{Re}\scal{N\xi,b}. \en Using the symbolic calculus for normal operators (see e.g.
\cite[Chapter 12]{Ru91}), \[ N=\int_{\sigma(N)}\zeta\,dE(\zeta), \quad N^*N=\int_{\sigma(N)}\abs{\zeta}^2\,dE(\zeta),\] where $E$ is the spectral decomposition of $N$, so \eqref{nAx} can be rewritten as \begin{multline*} \norm{Ax}^2=t\int_{\sigma(N)}\abs{\zeta}^2\,d\scal{E(\zeta)\xi,\xi}+(1-t)(\norm{b}^2+\abs{\mu}^2)\\ +2\sqrt{t(1-t)}\operatorname{Re}\scal{\int_{\sigma(N)}\zeta\,dE(\zeta)\xi,b}.\end{multline*} Considering vectors of the form $U\xi$ along with $\xi$, where \[ U=\int_{\sigma(N)}\phi(\zeta)\,dE(\zeta), \quad \abs{\phi}=1 \text{ on } \sigma(N), \] are unitary operators commuting with $N$, we conclude that $\norm{A}^2$ is the supremum of \eq{noa} t\int\displaylimits_{\sigma(N)}\abs{\zeta}^2d\scal{E(\zeta)\xi,\xi}+(1-t)(\norm{b}^2+\abs{\mu}^2) +2\sqrt{t(1-t)}\int\displaylimits_{\sigma(N)}\abs{\zeta}\abs{\scal{dE(\zeta)\xi,b}}\en taken over $t\in [0,1]$ and unit vectors $\xi\in\mathcal L$. A similar reasoning applied to $A-I$ in place of $A$ yields the conclusion that $\norm{A-I}^2$ is the supremum of \begin{multline} \label{noa1} t\int_{\sigma(N)}\abs{\zeta-1}^2d\scal{E(\zeta)\xi,\xi}+(1-t)(\norm{b}^2+\abs{\mu-1}^2)\\ +2\sqrt{t(1-t)}\int_{\sigma(N)}\abs{\zeta-1}\abs{\scal{dE(\zeta)\xi,b}}\end{multline} over the same set. Since $\operatorname{Re}\zeta,\operatorname{Re}\mu\geq 1$, we have \[ \abs{\zeta}^2-\abs{\zeta-1}^2\geq 1, \quad \abs{\mu}^2-\abs{\mu-1}^2\geq 1, \quad \abs{\zeta}\geq\abs{\zeta-1}, \] and so the difference between \eqref{noa} and \eqref{noa1} is not smaller than \[ t\int_{\sigma(N)}\, d\scal{E(\zeta)\xi,\xi}+1-t =1. \] But then $\norm{A}>\norm{A-I}$, in contradiction with $\operatorname{St}(A)=0$. \end{proof} For convenience of readers interested in finite-dimensional setting only, let us provide a (shorter and more elementary) adaptation of this proof to the case of almost normal $A\in\mathbb C^{n\times n}$. 
\smallskip {\em Proof of Theorem~\ref{th:an} in the finite dimensional setting.} Via an appropriate unitary similarity, for an almost normal $A\in\mathbb C^{n\times n}$ its representation \eqref{an} can be further specified to \iffalse $N=\operatorname{diag}[\lambda_1,\ldots,\lambda_{n-1}]$ and the entries of the column $b=[b_1,\ldots,b_{n-1}]^T$ are non-negative. \fi \eq{anf} A=\begin{bmatrix} \lambda_1 & 0 & \ldots & 0 & b_1 \\ 0 & \lambda_2 & \ldots & 0 & b_2 \\ \vdots & & \ddots & & \vdots \\ 0 & \ldots & 0 & \lambda_{n-1} & b_{n-1}\\ 0 & \ldots & \ldots & 0 & \mu \end{bmatrix}, \en with $b_j\geq 0$, $j=1,\ldots,n-1$. For $\xi=[\xi_1,\ldots,\xi_n]^T\in\mathbb C^n$, denote $\abs{\xi_j}=t_j$, and without loss of generality set $\xi_n=t_n$. Since the $j$-th entry of $A\xi$ is $\lambda_j\xi_j+b_jt_n$, its maximal absolute value is attained when \eq{argf} \arg\xi_j=-\arg\lambda_j \text{ if } \lambda_j b_jt_n\neq 0, \quad j=1,\ldots,n-1.\en So, condition \eqref{argf} holds for $\xi$ maximizing $\norm{A\xi}/\norm{\xi}$. Therefore, for such $\xi$ \eq{scalalm} \scal{A\xi,\xi}=\sum_{j=1}^{n-1}\left(\lambda_jt_j^2+\overline{\xi_j}b_jt_n\right)+\mu t_n^2 \en is a linear combination of the points in $\sigma(A)$ with non-negative coefficients not all equal to zero. Suppose now that \eqref{stan} does not hold. Shifting and rotating $A$ (as was done in the proof for the general setting) we may without loss of generality assume that $\operatorname{St}(A)=0$ while $\lambda_1,\ldots, \lambda_{n-1},\mu$ all have positive real parts. According to \eqref{scalalm}, $\scal{A\xi,\xi}$ then also has a positive real part and is therefore different from zero. This is in contradiction with \eqref{stw0}. \qed Recall that an almost normal matrix is {\em pure} if it is unitarily irreducible. According to \cite[Theorem 2.1]{MoSp} this is the case if and only if in \eqref{anf} all $\lambda_j$ are distinct and $b_j\neq 0$, $j=1,\ldots,n-1$.
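Theorem~\ref{th:an} can also be illustrated numerically. The sketch below (an ad hoc illustration: the matrix, the grid and the power-iteration parameters are chosen here for convenience, and are not part of the original text) takes the pure almost normal matrix of the form \eqref{anf} with $\lambda_1=0$, $\lambda_2=2$, $b_1=b_2=1$, $\mu=1$, so that $\operatorname{conv}\sigma(A)=[0,2]$, and locates the minimizer of $\norm{A-\lambda I}$ by a grid search over real $\lambda$. For this real matrix the minimizer is real; in fact, the symmetries $\lambda\mapsto\overline{\lambda}$ and $\lambda\mapsto 2-\lambda$ of $\norm{A-\lambda I}$ force $\operatorname{St}(A)=1$, comfortably inside $\operatorname{conv}\sigma(A)$.

```python
import math

def spectral_norm(a, iters=200):
    # Largest singular value of a small square real matrix via power
    # iteration on m = a^T a (adequate for these tiny examples).
    n = len(a)
    m = [[sum(a[k][i] * a[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    v = [1.0 + 0.1 * (i + 1) for i in range(n)]   # generic start vector
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = math.sqrt(sum(x * x for x in w))
        v = [x / s for x in w]
    top = sum(sum(m[i][j] * v[j] for j in range(n)) * v[i] for i in range(n))
    return math.sqrt(top)

# Pure almost normal matrix (anf): lambda_1 = 0, lambda_2 = 2, b = (1, 1),
# mu = 1, hence sigma(A) = {0, 1, 2} and conv sigma(A) = [0, 2].
A = [[0.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [0.0, 0.0, 1.0]]

def shifted_norm(lam):
    return spectral_norm([[A[i][j] - (lam if i == j else 0.0)
                           for j in range(3)] for i in range(3)])

grid = [k * 0.01 for k in range(-100, 301)]
best = min(grid, key=shifted_norm)
print(best)  # lies inside [0, 2], as Theorem th:an predicts
```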
The following refinement of the inclusion \eqref{stan} in the finite dimensional case is of some interest. \begin{cor}\label{co:stanf} Let $A$ be as in \eqref{anf} with all $b_j$ different from zero. Then \iffalse \\ {\em (i)} the vector $\xi$ maximizing $\norm{Ax}/\norm{x}$ has all non-zero entries, and \\ {\em (ii)}\fi $\operatorname{St}(A)$ lies in the relative interior of $\operatorname{conv}\{\lambda_1,\ldots,\lambda_{n-1},\mu\}$.\end{cor} \begin{proof} We will continue using the notation $\abs{\xi_j}=t_j$ along with the convention $\xi_n=t_n$. Since $\norm{A}\geq\sqrt{\abs{\lambda_j}^2+b_j^2}$, under the condition imposed on $A$ we have $\norm{A}>\abs{\lambda_j}$ for all $j=1,\ldots,n-1$. So, $\xi$ cannot be orthogonal to $e_n$, i.e., $t_n>0$. Moreover, since \[ A^*A=\begin{bmatrix} \abs{\lambda_1}^2 & 0 & \ldots & 0 & \overline{\lambda_1}b_1 \\ 0 & \abs{\lambda_2}^2 & \ldots & 0 & \overline{\lambda_2}b_2 \\ \vdots & & \ddots & & \vdots \\ 0 & \ldots & 0 & \abs{\lambda_{n-1}}^2 & \overline{\lambda_{n-1}}b_{n-1}\\ \lambda_1 b_1 & \lambda_2 b_2 & \ldots & \lambda_{n-1}b_{n-1} & \mu \end{bmatrix}, \] the $j$-th entry of $A^*A\xi$ is $\abs{\lambda_j}^2\xi_j+\overline{\lambda_j}b_jt_n$ ($j=1,\ldots,n-1$). So, $t_j$ can only equal zero if $\lambda_j=0$. With these observations at hand, suppose that $\operatorname{St}(A)$ is not in the relative interior of $\operatorname{conv}\{\lambda_1,\ldots,\lambda_{n-1},\mu\}$. As in the proof above, we may assume that $\operatorname{St}(A)=0$, but instead of $\operatorname{Re}\lambda_j, \operatorname{Re} \mu$ being positive we have a weaker condition $\operatorname{Re}\lambda_j\geq 0, \operatorname{Re}\mu\geq 0$, with at least one of the inequalities strict. Since now all $t_j$ and $b_j$ are non-zero, we still may conclude from \eqref{scalalm} that $\operatorname{Re}\scal{A\xi,\xi}>0$, in contradiction with \eqref{stw0}. 
\end{proof} Note that all $A\in\mathbb C^{2\times 2}$ are either normal or pure almost normal, while Proposition~\ref{th:2by2} shows that for such matrices the statement of Corollary~\ref{co:stanf} holds in both cases. However, already for $n=3$ there exist unitarily reducible and almost normal (or even normal) matrices $A$ with $\operatorname{St}(A)$ lying on the relative boundary of $\operatorname{conv}\sigma(A)$. \iffalse Note also that formulas \eqref{argf}, \eqref{scalalm} hold for any almost normal $A\in\mathbb C^{n\times n}$, independent of the $\operatorname{St}(A)$ value. {\color{red} [I don't think we are using this. Should we still keep it?] } \fi \section{3-by-3 matrices with singleton spectra. Auxiliary statements}\label{s:tri} Let $A\in\mathbb C^{3\times 3}$ be such that its spectrum is a singleton: $\sigma(A)=\{\lambda\}$. Then, up to a unitary similarity, \eq{3by3} A=\begin{bmatrix} \lambda & x & y\\ 0 & \lambda & z \\0 & 0 &\lambda\end{bmatrix}. \en \begin{prop}\label{th:3by30}The matrix \eqref{3by3} has $\operatorname{St}(A)=\lambda$ if and only if $xyz=0$. \end{prop} Note that for matrices \eqref{3by3} $\operatorname{conv}\sigma(A)=\{\lambda\}$. So, condition $xyz=0$ is actually the criterion for $\operatorname{St}(A)\in\operatorname{conv}\sigma(A)$ to hold in this case. \iffalse Perhaps, this is what Stampfli had in mind by his observation in \cite[p. 739]{Sta70}.\fi \begin{proof} To make use of \eqref{stw0}, compute \[ (A-\lambda I)^* (A-\lambda I)= \begin{bmatrix} 0 & 0 & 0 \\ 0 & \abs{x}^2 & \overline{x}y \\ 0 & x\overline{y} & \abs{y}^2+\abs{z}^2\end{bmatrix}. \] If $xy\neq 0$, then the maximal eigenvalue of $ (A-\lambda I)^* (A-\lambda I)$ is simple, and the respective eigenvector is $\xi=[0,\xi_2,\xi_3]^T$ with $\xi_2,\xi_3\neq 0$. But then $\scal{(A-\lambda I)\xi,\xi}=z\overline{\xi_2}\xi_3$. So, condition $W_0(A-\lambda I)\ni 0$ holds in this case if and only if $z=0$.
On the other hand, if $xy=0$, then one of the standard basis vectors $e_2$ or $e_3$ maximizes the norm of $A-\lambda I$ while $\scal{(A-\lambda I)e_j,e_j}=0$. \end{proof} Our goal in the next Section~\ref{s:trim} is to compute $\operatorname{St}(A)$ for matrices \eqref{3by3} with $xyz\neq 0$. An intermediate step in this direction is a characterization of such matrices with $\lambda=1$ and $\operatorname{St}(A)=0$. By an additional (in this case, diagonal) unitary similarity we can arrange that $x,z>0$. \begin{prop}\label{th:crit10} Let the matrix $A$ be given by \eqref{3by3} in which $\lambda=1$ and $x,z>0$. Let also $\operatorname{St}(A)=0$. Then $y<0$ and \eq{res} \det \begin{bmatrix} a_1 & 0 & b_1 & 0 & 0 \\ a_2 & a_1 & b_2 & b_1 & 0 \\ a_3 & a_2 & b_3 & b_2 & b_1 \\ a_4 & a_3 & 0 & b_3 & b_2 \\ 0 & a_4 & 0 & 0 & b_3 \end{bmatrix}=0, \en where \eq{ab} \begin{aligned} & a_1 = 2x,\ a_2 = -3xz, \ a_3 = xz^2,\ a_4 = 2y - xz, \ b_1 = 4x^2 + y^2 + z^2 - xyz, \\ & b_2 = -(z^3 + x^2z + y^2z + 6xy - xyz^2), \ b_3 = x^2 + 4y^2 + z^2 - xyz.\end{aligned} \en \end{prop} \begin{proof}According to \eqref{stw0}, there exists a unit vector $\xi=[\xi_1,\xi_2,\xi_3]^T$ such that $\norm{A\xi}=\norm{A}$ and $\scal{A\xi,\xi}=0$. Let us denote $\abs{\xi_j}=t_j$; without loss of generality $\xi_3\geq 0$ and therefore $\xi_3=t_3$. A direct computation shows that \begin{align*} A\xi = \begin{bmatrix} \xi_1 + x \xi_2 + y \xi_3 \\ \xi_2 + z\xi_3 \\ \xi_3 \end{bmatrix} \end{align*} and so \eq{norm} \norm{A \xi}^2 = t_3^2 + \abs{\xi_1 + x \xi_2 + y \xi_3}^2 + \abs{\xi_2 + z \xi_3}^2. \en With $\xi_2,\xi_3$ and $\abs{\xi_1}$ being fixed, the right hand side of \eqref{norm} is maximal when \eq{arg} \arg{\xi_1} = \arg{(x \xi_2 + y \xi_3)}, \en and thus can be rewritten as \begin{align*} t_1^2 + t_3^2 + \abs{x \xi_2 + y \xi_3}^2 + \abs{\xi_2 + z \xi_3}^2 + 2t_1 \abs{x \xi_2 + y \xi_3}. 
\end{align*} At the same time \begin{align*} \scal{A\xi,\xi} = 1 + \bar{\xi_1}\left(x \xi_2 + y\xi_3 \right) + z\bar{\xi_2}\xi_3= 1 + \bar{\xi_1}\left(x \xi_2 + yt_3 \right) + z\bar{\xi_2}t_3, \end{align*} which due to \eqref{arg} implies \eq{sc0} \scal{A\xi,\xi} = 1 + t_1\abs{x\xi_2 + yt_3} + z\overline{\xi_2}t_3. \en Since $z>0$ and $t_3\geq 0$, condition $\scal{A\xi,\xi}=0$ can hold only if $t_3>0$, $\xi_2<0$ (and thus $\xi_2=-t_2$). Furthermore, $\xi=[\xi_1,-t_2,t_3]^T$ is an eigenvector of $A^*A$ corresponding to its eigenvalue $\sigma:=\norm{A}^2$. Since \eq{a*a} A^*A= \begin{bmatrix} 1 & x & y \\ x & x^2+1 & xy+z \\ \overline{y} & x\overline{y}+z & \abs{y}^2+z^2+1 \end{bmatrix}, \en equating the second entries of $A^*A\xi$ and $\sigma\xi$ we have in particular \eq{y} x\xi_1-t_2(x^2+1)+t_3(xy+z)=-\sigma t_2. \en Due to \eqref{arg}, $\xi_1=\mu(-xt_2+yt_3)$ with some $\mu\geq 0$. Plugging this into \eqref{y} and taking the imaginary parts: \[ \operatorname{Im} \left(xyt_3(\mu+1)\right)= xt_3(\mu+1)\operatorname{Im} y=0. \] So, $y\in\mathbb R$. According to \eqref{arg}, then also $\xi_1\in\mathbb R$. Suppose for a moment that $y>0$. Then \eqref{a*a} shows that the matrix $A^*A$ is entry-wise positive. By the Perron theorem, its maximal eigenvalue is simple and the respective eigenvectors have entries with coinciding arguments. This is in contradiction with $\xi_2,\xi_3$ having opposite signs. This proves the first assertion of the proposition: $y<0$. Returning to \eqref{arg} again, we see that $\xi_1<0$. So finally $\xi=[-t_1,-t_2,t_3]^T$ with all $t_j$ positive. Plugging this into \eqref{sc0} and recalling that $t_1^2+t_2^2+t_3^2=1$: \eq{s1} t_1^2+t_2^2+t_3^2+xt_1t_2-yt_1t_3-zt_2t_3=0. 
\en The collinearity of $A^*A\xi$ and $\xi$ yields two more homogeneous second-degree equations in $t_j$: \eq{s2} xt_1^2-xt_2^2+x^2t_1t_2-(xy+z)t_1t_3+yt_2t_3=0 \en and \eq{s3} yt_1^2-yt_3^2+(xy+z)t_1t_2-(y^2+z^2)t_1t_3+xt_2t_3=0.\en Dividing \eqref{s1}--\eqref{s3} by $t_3^2$ and introducing new variables $u=t_1/t_3$, $v=t_2/t_3$: \eq{sys} \begin{aligned} u^2+v^2+xuv-yu-zv+1 & = 0, \\ xu^2-xv^2+x^2uv-(xy+z)u+yv & =0, \\ yu^2+(xy+z)uv-(y^2+z^2)u+xv-y & =0. \end{aligned} \en From the first two equations of \eqref{sys} it follows that \eq{u} u= \frac{-2xv^2+(xz+y)v-x}{z}, \en while the first, multiplied by $y$ and subtracted from the third, becomes \eq{s4} yv^2-zuv+z^2u-(x+yz)v+2y=0. \en Plugging \eqref{u} into \eqref{s4} and the first equation of \eqref{sys} yields the following system of two equations in one variable $v$: \eq{sys1} \begin{aligned} 2xv^3 - 3xzv^2 + xz^2v + 2y -xz &= 0, \\ 4x^2v^4 - \left(4xy + 6x^2z \right)v^3 + \left(4x^2 + y^2 + z^2 + 5xyz + 2x^2z^2 \right)v^2 & \\ -\left(3x^2z + y^2z + z^3 + xyz^2 + 2xy \right)v + (x^2 + z^2 + xyz) & = 0. \end{aligned} \en By some additional equivalent transformations aimed at lowering the degrees of polynomials involved, the system \eqref{sys1} can be reduced to \eq{sys2} \begin{aligned} 2xv^3 - 3xzv^2 + xz^2v + 2y -xz & = 0, \\ \left(4x^2 + y^2 + z^2 - xyz \right)v^2 - \left(z^3 + x^2z + y^2z + 6xy - xyz^2 \right)v & \\ + \left(x^2 + 4y^2 + z^2 - xyz \right) & = 0. \end{aligned} \en It remains to observe that the determinant in \eqref{res} is the resultant of the left hand sides of \eqref{sys2} and thus its equality to zero is equivalent to the system \eqref{sys2} being consistent. \end{proof} \section{3-by-3 matrices with singleton spectra. Main result} \label{s:trim} Admittedly, Proposition~\ref{th:crit10} does not look constructive. Nevertheless, it serves as the main ingredient in the construction of a procedure allowing one to compute $\operatorname{St}(A)$ for all matrices of the form \eqref{3by3}.
Due to Proposition~\ref{th:3by30}, only the case $xyz\neq 0$ needs to be considered. \begin{thm}\label{th:comp}Let $A$ be given by \eqref{3by3} with $xyz\neq 0$. Then $\operatorname{St}(A)=\lambda+\zeta$, where \eq{arg1} \arg\zeta=\arg(x\overline{y}z) \en and $\abs{\zeta}$ is a positive root of the 5-th degree polynomial \begin{multline}\label{s} P_A(s)= \ 4 s^5 (u^2+v^2)(u^6 + 3 u^4 (v^2 + w^2) + (v^2 + w^2)^3 + 3 u^2 (v^4 - 7 v^2 w^2 + w^4)) \\ + 4 s^4 uvw (4 u^6 + 6 u^4 (2 v^2 - 3 w^2) + (v^2 + w^2)^2 (4 v^2 + w^2) + 6 u^2 (2 v^4 - 6 v^2 w^2 + w^4)) \\ + 3 s^3 u^2 w^2 (2 u^6 + 7 v^6 + 13 v^4 w^2 + 6 v^2 w^4 + u^4 (11 v^2 - 5 w^2) + 2 u^2 (8 v^4 - 14 v^2 w^2 + w^4)) \\ + s^2 u^3 v w^3 (9 u^4 + 7 v^4 + 18 v^2 w^2 + 6 w^4 + 4 u^2 (4 v^2 - 3 w^2))\\ + s u^4 v^2 w^4 (-5 v^2 + 3 w^2) - 3u^5 v^3 w^5. \end{multline} \end{thm} Here for the notational convenience we have set \eq{uvw} u=\abs{x},\ v=\abs{y}, \text{ and } w=\abs{z}.\en \begin{proof}Denoting $\operatorname{St}(A)-\lambda :=\zeta$, from Proposition~\ref{th:3by30} we have $\zeta\neq 0$. So, we may consider the matrix \eq{StD} B=-\zeta^{-1}(A-(\lambda+\zeta)I)= \begin{bmatrix}1 & -x/\zeta & -y/\zeta \\ 0 & 1 & -z/\zeta \\ 0 & 0 & 1\end{bmatrix}, \en for which $\operatorname{St}(B)=0$. A diagonal unitary similarity can be applied to put $B$ in the form \eq{B1} \begin{bmatrix}1 & u/\abs{\zeta} & -ye^{-i(\arg x+\arg z-2\arg\zeta)}/\zeta \\ 0 & 1 & w/\abs{\zeta} \\ 0 & 0 & 1\end{bmatrix}, \en thus making Proposition~\ref{th:crit10} applicable. Consequently, the right upper entry of the matrix \eqref{B1} is negative. This is equivalent to \eqref{arg1} and also allows to rewrite this entry as $-v/\abs{\zeta}$. Replacing $x,y,z$ in \eqref{ab} by $u/s, -v/s$ and $w/s$, respectively, and expanding the determinant in \eqref{res}, we arrive at \eqref{s}. \end{proof} Note that the polynomial \eqref{s} has an odd number of positive roots, and so there is at least one. 
If such a root is unique, it of course delivers the correct value of $\abs{\zeta}$. In case of several such roots, the inclusion $W_0(B)\ni 0$ should be checked in order to choose the ``right'' one. In all our numerical experiments this was the smallest positive root, but we do not have a proof that this is always the case. Numerical examples are shown below, one with a unique root for \eqref{s} and one where there are several positive roots. Direct computations of $\operatorname{St}(X)$ for various matrices $X$ throughout the paper were carried out by the Mathematica function \textit{NMinimize}. \smallskip {\bf Example 1.} Consider $A$ as follows: \begin{align*} A = \begin{bmatrix} \lambda & 8 & - 1 \\ 0 & \lambda & 7 \\ 0 & 0 & \lambda \end{bmatrix}. \end{align*} Then \eqref{StD} takes the form \begin{align*} B = \begin{bmatrix} 1 & 8/ \abs{\zeta} & - 1/ \abs{\zeta} \\ 0 & 1 & 7/ \abs{\zeta} \\ 0 & 0 & 1 \end{bmatrix}. \end{align*} The corresponding polynomial $P_B$ has only one positive root, $s \approx 0.7003$, and indeed for this value of $\abs{\zeta}$, $\operatorname{St}(B) = 0$. Since $\arg(x\overline{y}z) = \pi$, we then have that $\zeta \approx -0.7003$, implying that $\operatorname{St}(A) \approx \lambda - 0.7003$, which is consistent with the value computed directly. \smallskip {\bf Example 2.} Now let \begin{align*} A = \begin{bmatrix} \lambda & 8 & - 1 \\ 0 & \lambda & 7.5 \\ 0 & 0 & \lambda \end{bmatrix}. \end{align*} Then \eqref{StD} takes the form \begin{align*} B = \begin{bmatrix} 1 & 8/ \abs{\zeta} & - 1/ \abs{\zeta} \\ 0 & 1 & 7.5/ \abs{\zeta} \\ 0 & 0 & 1 \end{bmatrix}. \end{align*} The respective polynomial $P_B$ has three positive roots, approximately equal to 0.833, 1.367, and 2.101. Computations show that $\operatorname{St}(B)$ equals zero only for the matrix $B$ corresponding to the minimal value of $s$. So, $\abs{\zeta} \approx 0.833$. Since $\arg(x\overline{y}z) = \pi$, it follows that $\zeta \approx -0.833$, implying that $\operatorname{St}(A) \approx \lambda - 0.833$.
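The values in Examples~1 and~2 can be cross-checked without the polynomial \eqref{s}, by minimizing $\norm{A-\lambda I}$ directly. The pure-Python sketch below (an illustration added here; the grid range, step and iteration count are ad hoc choices) does this for both matrices with $\lambda=0$, so that $\operatorname{St}(A)=\zeta$, and should recover $\zeta\approx-0.7003$ and $\zeta\approx-0.833$, respectively; both matrices are real, so the minimizer is real and a one-dimensional grid suffices.

```python
import math

def spectral_norm(a, iters=120):
    # Largest singular value via power iteration on m = a^T a
    # (the matrices below are real, so no conjugation is needed).
    n = len(a)
    m = [[sum(a[k][i] * a[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    v = [1.0 + 0.1 * (i + 1) for i in range(n)]
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = math.sqrt(sum(x * x for x in w))
        v = [x / s for x in w]
    return math.sqrt(sum(sum(m[i][j] * v[j] for j in range(n)) * v[i]
                         for i in range(n)))

def stampfli_on_real_line(A, lo=-1.5, hi=0.3, step=0.01):
    # ||A - lambda I|| is convex in lambda, so a grid scan finds its minimum.
    def f(lam):
        return spectral_norm([[A[i][j] - (lam if i == j else 0.0)
                               for j in range(3)] for i in range(3)])
    grid = [lo + k * step for k in range(int(round((hi - lo) / step)) + 1)]
    return min(grid, key=f)

# Example 1 (with lambda = 0): the paper gives St(A) ~ -0.7003.
z1 = stampfli_on_real_line([[0.0, 8.0, -1.0], [0.0, 0.0, 7.0], [0.0, 0.0, 0.0]])
# Example 2 (with lambda = 0): the paper gives St(A) ~ -0.833.
z2 = stampfli_on_real_line([[0.0, 8.0, -1.0], [0.0, 0.0, 7.5], [0.0, 0.0, 0.0]])
print(z1, z2)
```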
As in Example~1, this is consistent with the value of $\operatorname{St}(A)$ computed directly. Computation of the displacement $\zeta$ between the spectrum $\{\lambda\}$ of the matrix \eqref{3by3} and its Stampfli point becomes much simpler under an additional condition $\abs{x}=\abs{z}$. This covers in particular the case $x=z$ in which $A$ is a triangular Toeplitz matrix. \begin{thm}\label{th:toe}Let in \eqref{3by3} $0\neq \abs{x}=\abs{z}:=u$. Then $\operatorname{St}(A)=\lambda+\zeta$, where $\arg\zeta$ is determined by \eqref{arg1}, while \eq{abs} \abs{\zeta}= \begin{cases} \frac{u^2v}{u^2-v^2} & \text{ if } \frac{v}{u}\leq\sqrt{7-4\sqrt{3}},\\ \frac{u^2(2\sqrt{6u^2+v^2}-v)}{2(8u^2+v^2)} & \text{ otherwise}, \end{cases} \en where $\abs{y}:=v$. In particular, \eq{abs1} \abs{\zeta}=\frac{2\sqrt{7}-1}{18}u \text{ if }v=u.\en \end{thm} \begin{proof}Observe first of all that \eqref{abs} means that $\zeta=0$ if $y=0$, which is in agreement with Proposition~\ref{th:3by30}. So, only the case $y\neq 0$ is of interest. Further, under the requirement $\abs{x}=\abs{z}$ in the notation \eqref{uvw} we have $u=w$. Polynomial \eqref{s} then simplifies greatly, and actually factors into the product of $(s v^2 - (s-v) u^2)^2$ and the cubic \begin{align*} 4 s^3 v^4 + 4 s^2 (9 s + 2 v) u^2 v^2 + s (32 s^2 + 36 s v + v^2) u^4 -3 (s -v) u^6. \end{align*} So, it has a double root $u^2v/(u^2-v^2)$ (negative if $u<v$ and disappearing if $u=v$), and three simple roots \[ -\frac{u^2v}{u^2+v^2},\quad \frac{u^2(-v\pm 2\sqrt{6u^2+v^2})}{2(8u^2+v^2)}, \] exactly one of which is positive. This justifies the second line of \eqref{abs} when $u\leq v$, and its particular case \eqref{abs1} corresponding to $u=v$.
It remains to consider the case $u>v$ and show that the matrix \[ C= \begin{bmatrix} 1 & \frac{u^2-v^2}{uv} & -\frac{u^2-v^2}{u^2}\\ 0 & 1 & \frac{u^2-v^2}{uv}\\ 0 & 0 & 1 \end{bmatrix}, \] obtained from \eqref{B1} by plugging in $\zeta$ from the first line of \eqref{abs} and replacing $w$ with $u$, satisfies $W_0(C)\ni 0$ if and only if $\frac{v^2}{u^2}\leq 7-4\sqrt{3}$. Direct computations show that $C^*C$ has the maximal eigenvalue $u^2/v^2$ of multiplicity two, and the respective eigenspace $\mathcal L$ is the span of \[ \xi_1=\begin{bmatrix} -v^2/u^2 \\ 0 \\ 1\end{bmatrix} \text{ and } \xi_2=\begin{bmatrix} v/u \\ 1 \\ 0\end{bmatrix}. \] Since $\scal{C\xi_2,\xi_2}=2\neq 0$, we need to figure out whether \eq{scC} \scal{C(\xi_1+t\xi_2),\xi_1+t\xi_2}=0\en for some $t\in\mathbb C$. But \[ \scal{C(\xi_1+t\xi_2),\xi_1+t\xi_2}=2\abs{t}^2+1+\frac{v^2}{u^2}-2\operatorname{Re} t\frac{v}{u}+\overline{t}\frac{u^2-v^2}{uv}. \] So, in order for \eqref{scC} to hold $t$ must be real and satisfy \eq{t} 2t^2+t\left(\frac{u}{v}-3\frac{v}{u}\right)+\left(1+\frac{v^2}{u^2}\right)=0. \en Such $t$ exists if and only if the discriminant of the quadratic equation \eqref{t} is non-negative, which amounts to \[ \frac{v^2}{u^2}-14+\frac{u^2}{v^2}\geq 0, \] or, equivalently, \[ \frac{v^2}{u^2}\in [0,7-4\sqrt{3}]\cup [7+4\sqrt{3},+\infty). \] Since $u>v>0$, only the first interval in the right hand side is of relevance. \end{proof} It is worth mentioning that according to \eqref{abs} $\abs{\zeta}$ is indeed the smallest positive root of the polynomial \eqref{s} in the case $\abs{x}=\abs{z}$. Here is a numerical example. \smallskip {\bf Example 3.} Consider the matrix $A$ as follows, \begin{align*} A = \begin{bmatrix} \lambda & 4 & - 2 \\ 0 & \lambda & 4 \\ 0 & 0 & \lambda \end{bmatrix}.
\end{align*} Since $\frac{v}{u} = \frac{1}{2} > \sqrt{7 - 4 \sqrt{3}}$, by \eqref{abs} we have that \begin{align*} \abs{\zeta} = \frac{4^2(2\sqrt{6 \times 4^2 + 2^2} - 2)}{2(8 \times 4^2 + 2^2)} = \frac{12}{11} \end{align*} and $\arg\zeta = \arg(-2) = \pi$, which implies that $\zeta = -\frac{12}{11}$. Hence, $\operatorname{St}(A) = \lambda - \frac{12}{11}$. \section{On 3-by-3 matrices with a two-point spectrum} \label{s:tps} A 3-by-3 matrix $A$ with a multiple eigenvalue is unitarily similar to \eq{me} \begin{bmatrix} \mu & x & y \\ 0 & \mu & z \\ 0 & 0 & \lambda \end{bmatrix}. \en This class of matrices is intermediate between \eqref{3by3} and the whole $\mathbb C^{3\times 3}$. As such, it does not admit (to the best of our knowledge) an explicit procedure for computing $\operatorname{St}(A)$ similar to Theorem~\ref{th:comp}, but still allows for an extension of Proposition~\ref{th:3by30}, delivering the criterion for $\operatorname{St}(A)$ to coincide with the multiple eigenvalue of $A$. We will use the same notation \eqref{uvw} as in Theorem~\ref{th:comp}, in addition abbreviating $\abs{\lambda-\mu}$ to $\rho$. \begin{thm}\label{th:lmm}A matrix $A$ unitarily similar to \eqref{me} has $\operatorname{St}(A)$ coinciding with its repeated eigenvalue $\mu$ if and only if the following three conditions hold: \eq{arglm} \overline{x}y\overline{z}(\lambda-\mu)\leq 0, \en \iffalse \eq{inelm} \max\{0,\ uw-\sqrt{(u^2-\rho^2)(w^2+\rho^2)} \}\leq \rho v\leq uw, \en \fi \eq{inelm} \abs{\rho v-uw}\leq\sqrt{(u^2-\rho^2)(w^2+\rho^2)}, \en and \eq{eqlm} vw\rho^3+uv^2\rho^2+vw(v^2+w^2-u^2)\rho-uv^2w^2=0. \en \end{thm} Note that condition \eqref{inelm} implicitly contains the requirement $u\geq\rho$. \begin{proof} If $\lambda=\mu$, then $\rho=0$ and conditions \eqref{arglm}, \eqref{inelm} hold automatically, while \eqref{eqlm} boils down to $xyz=0$. This is in full agreement with Proposition~\ref{th:3by30}, so we only need to deal with $\lambda\neq\mu$.
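Returning briefly to Example~3, its conclusion can be confirmed numerically (an informal numpy check, with $\lambda=0$): the matrix is real, so $\norm{A-\overline{c}I}=\norm{A-cI}$ and the unique minimizer of $c\mapsto\norm{A-cI}$, which is $\operatorname{St}(A)$, lies on the real axis; a one-dimensional grid search locates it near $-12/11$.

```python
import numpy as np

# Example 3 with lambda = 0: the theorem predicts St(A) = -12/11.
A = np.array([[0.0, 4.0, -2.0],
              [0.0, 0.0, 4.0],
              [0.0, 0.0, 0.0]])

def dist(c):
    """Spectral norm of A - c*I."""
    return np.linalg.norm(A - c * np.eye(3), 2)

# A is real, so the unique minimizer of c -> ||A - c I|| is real;
# a grid search over the real axis therefore suffices.
grid = np.arange(-2.0, 0.5, 0.002)
c_star = grid[np.argmin([dist(c) for c in grid])]
```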
Replacing $A$ of the form \eqref{me} by \[ \frac{1}{\lambda-\mu}A-\frac{\mu}{\lambda-\mu}I=\begin{bmatrix} 0 & x/(\lambda-\mu) & y/(\lambda-\mu) \\ 0 & 0 & z/(\lambda-\mu) \\ 0 & 0 & 1 \end{bmatrix},\] we may without loss of generality suppose that in \eqref{me} $\lambda=1$ and $\mu=0$. In other words, it suffices to prove the statement for matrices \eq{me1} A= \begin{bmatrix} 0 & x & y \\ 0 & 0 & z \\ 0 & 0 & 1 \end{bmatrix}, \en where in addition we may (as was done earlier) assume $x,z\geq 0$. Respectively, conditions \eqref{arglm}--\eqref{eqlm} for the purpose of this proof should be replaced by \iffalse \eq{inelm1} x\geq 1,\quad \max\{ 0,\ xz-\sqrt{(x^2-1)(z^2+1)}\} \leq -y \leq xz \en \fi \eq{inelm1} x\geq 1, \ y\leq 0, \ \abs{xz+y}\leq\sqrt{(x^2-1)(z^2+1)} \en and \eq{eqlm1} yz-xy^2+yz(y^2+z^2-x^2)+xy^2z^2=0. \en According to \eqref{me1}, \eq{a*a1} A^*A=[0]\oplus \begin{bmatrix} x^2 & xy \\ x\overline{y} & \abs{y}^2+z^2+1 \end{bmatrix}. \en Eigenvectors $\xi$ corresponding to the maximal eigenvalue $\sigma$ of $A^*A$ therefore have the first coordinate equal to zero. Consequently, \eq{sca} A\xi = [x\xi_2+y\xi_3, z\xi_3, \xi_3]^T, \text{ and } \scal{A\xi,\xi}=\xi_3(z\overline{\xi_2}+\overline{\xi_3}). \en {\sl Case 1.} $x=0$. From \eqref{a*a1} we see that $\xi$ is collinear with $e_3$, and \eqref{sca} implies $\scal{A\xi,\xi}=\abs{\xi_3}^2\neq 0$. So, $\operatorname{St}(A)\neq 0$. Since condition \eqref{inelm1} fails, the statement holds. {\sl Case 2.} $y=0$. Then condition \eqref{eqlm1} holds while \eqref{inelm1} simplifies to $x^2\geq 1+z^2$. On the other hand, in this case $A^*A=\operatorname{diag}[0,x^2,z^2+1]$. So, $\xi$ is collinear to $e_3$ if $x^2 < 1+z^2$ while $\xi=e_2$ is admissible otherwise. By \eqref{sca}, $W_0(A)\ni 0$ is equivalent to $x^2\geq 1+z^2$. Once again, the statement holds. It remains to consider {\sl Case 3.} $x,y\neq 0$. 
The eigenvalue $\sigma$ of $A^*A$ is then simple, and the entries $\xi_2,\xi_3$ of the respective eigenvector $\xi$ are different from zero. By scaling, without loss of generality let $\xi_2=1$. According to \eqref{sca}, condition $\scal{A\xi,\xi}=0$ then holds if and only if $\xi_3=-z$. Rewritten entry-wise, $A^*A\xi=\sigma\xi$ takes the form \eq{ev} x^2-xyz=\sigma, \quad x\overline{y}-(\abs{y}^2+z^2+1)z=-\sigma z. \en Since $\sigma>\abs{y}^2+z^2+1$, the second equality in \eqref{ev} implies that $y<0$, and \eqref{eqlm1} follows. Finally, $\tau=\operatorname{trace}(A^*A)-\sigma= 1+y^2+z^2+xyz$ is the second non-zero eigenvalue of $A^*A$, and the upper bound on $\abs{xz+y}$ in \eqref{inelm1} is necessary and sufficient for $\tau$ not to exceed $\sigma$. \iffalse This shows the necessity of \eqref{inelm1}, \eqref{eqlm1} in Case 3. \eq{ev} \xi_1+x\xi_2+y=\sigma\xi_1,\quad x\xi_1=\xi_2,\quad \overline{y}(\xi_1+x\xi_2+y)+z^2=\sigma. \en In particular, $\xi_1\neq 0$ (because otherwise $\xi=e_3$ which is not an eigenvector of $A^*A$) and \eq{arg2} \arg(x\xi_2+y)=\arg\xi_1. \en Suppose now that $\operatorname{St}(A)=0$. Then \[ 0=\scal{A\xi,\xi}= \abs{\xi_1}^2+(x\xi_2+y)\overline{\xi_1}+z\overline{\xi_2}, \] which due to \eqref{arg2} can be rewritten as \eq{sc1} \abs{\xi_1}^2+\abs{x\xi_2+y}\abs{\xi_1}+z\overline{\xi_2}=0. \en Since $z>0$, we conclude from here that $\xi_2<0$, and \eqref{ev} takes the form \eq{ev1} -(1+x^2)t+y=-\sigma t, \quad -\overline{y}(1+x^2)t+\abs{y}^2+z^2=\sigma, \en where we relabeled $t:=-\xi_1\ (>0)$. Since $\sigma>z^2+\abs{y}^2$, this is only possible if $y<0$. Conditions \eqref{ev1} thus simplify further to \eq{ev2} -(1+x^2)t+y=-\sigma t, \quad -y(1+x^2)t+y^2+z^2=\sigma . \en Moreover, $x\xi_2+y=-x^2t+y<0$, and so \eqref{sc1} implies \eq{t1} t=\frac{xz+y}{1+x^2}. 
\en A direct computation shows that \eq{xi} \xi=[-t,-xt,1]^T \en with the value $t$ given by \eqref{t1} is an eigenvector of $A^*A$ corresponding to the eigenvalue $\sigma$ if and only if \eq{sigma} \sigma=z^2-xyz \en and \[ xz+x^3= xz^3-x^2yz^2+yz^2-xy^2z. \] The latter equality is equivalent to \eqref{eqlm1} since $y,z\neq 0$. The upper bound on $-y$ in \eqref{inelm1} follows from \eqref{t1} and positivity of $t$. Finally, with one of the eigenvalues of $A^*A$ given by \eqref{sigma} its other non-zero eigenvalue is $1+x^2+y^2+xyz$. The lower bound on $-y$ in \eqref{inelm1} stems from the inequality \[ 1+x^2+y^2+xyz\leq z^2-xyz. \] So, in Case~3 conditions \eqref{inelm1}, \eqref{eqlm1} follow from $\operatorname{St}(A)=0$. Let now conditions \eqref{inelm1}, \eqref{eqlm1} hold. Suppose for a moment that $-y=xz$. From \eqref{eqlm1} it would then follow $x=0$, and so $y=0$ which is in contradiction with the case we are considering. So, the rightmost inequality in \eqref{inelm1} is actually strict. According to the reasoning above, formulas \eqref{xi}, \eqref{t1} deliver an eigenvector $\xi$ of $A^*A$ corresponding to its maximal eigenvalue and satisfying $\scal{A\xi,\xi}=0$. So, indeed $\operatorname{St}(A)=0$, and these conditions are also sufficient. \fi \end{proof} \section{Roberts orthogonality} \label{s:rorth} Two vectors $x$ and $y$ of a normed linear space $X$ are called {\em Roberts orthogonal} (denoted: $x\perp_R y$) if for all scalars $\nu$: \[ \norm{x+\nu y}=\norm{x-\nu y}. \] If $X$ is an inner product space, Roberts orthogonality and usual orthogonality coincide: $x\perp_R y$ if and only if $\scal{x,y}=0$. While this notion goes back to \cite{Rob34}, a more recent treatment of the case when $X$ is a unital $C^*$-algebra can be found in \cite{ArBeRa}. In particular, \cite[Theorem 2.4]{ArBeRa} delivers the criterion for $A\in B(\mathcal H)$ to be Roberts orthogonal to the identity operator $I$. 
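In an inner product space the stated equivalence is elementary, since $\norm{x+\nu y}^2-\norm{x-\nu y}^2=4\operatorname{Re}\scal{\nu y,x}$, which vanishes for all scalars $\nu$ exactly when $\scal{x,y}=0$. A quick numerical illustration (numpy, random data):

```python
import numpy as np

rng = np.random.default_rng(0)

def roberts_gap(x, y, trials=200):
    """max over random scalars nu of | ||x + nu y|| - ||x - nu y|| |."""
    gap = 0.0
    for _ in range(trials):
        nu = rng.normal() + 1j * rng.normal()
        gap = max(gap, abs(np.linalg.norm(x + nu * y) - np.linalg.norm(x - nu * y)))
    return gap

x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)
# remove from y its component along x, so that <x, y_perp> = 0
y_perp = y - (np.vdot(x, y) / np.vdot(x, x)) * x
```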
To describe this result, recall that the {\em Davis-Wielandt shell} of $A$ is the subset of $\mathbb C\times\mathbb R$ (identified with $\mathbb R^3$) defined by \[ DW(A)=\{ (\scal{Ax,x}, \norm{Ax}^2)\colon x\in\mathcal H, \norm{x}=1\}.\] This set is a (possibly degenerate) ellipsoid if $\dim\mathcal H=2$, and a convex body otherwise, closed if $\dim\mathcal H<\infty$, and located between the horizontal planes $\Pi_0=\mathbb C\times\{0\}$ and $\Pi_1=\mathbb C\times\{\norm{A}^2\}$. \iffalse Note that $W(A)$ and $W_0(A)$ are the projections onto $\Pi_0$ of $DW(A)$ and the intersection of its closure $\operatorname{cl}{DW(A)}$ with $\Pi_1$, respectively. \fi For each vertical line having a non-empty intersection with $\operatorname{cl}({DW(A)})$, choose the uppermost point of this intersection, and call the union $DW_{ub}(A)$ of all such points the {\em upper boundary} of $DW(A)$. It was shown in \cite{ArBeRa} that $A\perp_R I$ if and only if $DW_{ub}(A)=DW_{ub}(-A)$; in other words, if and only if $DW_{ub}(A)$ is symmetric with respect to the vertical coordinate axis. Note however that $\operatorname{cl}{W(A)}$ and $W_0(A)$ are the projections onto $\Pi_0$ of $DW_{ub}(A)$ and its intersection with $\Pi_1$, respectively. So, from $DW_{ub}(A)=DW_{ub}(-A)$ it immediately follows that both $\operatorname{cl}{W(A)}$ and $W_0(A)$ are centrally symmetric. The necessity of the former condition for $A$ and $I$ to be Roberts orthogonal (along with its sufficiency when $\dim\mathcal H=2$) was observed in \cite{ArBeRa}. From the necessity of the latter and \eqref{stw0} we obtain \begin{prop}\label{pr:rost}Let $A\in B(\mathcal H)$ be Roberts orthogonal to $I$. Then $\operatorname{St}(A)=0$. \end{prop} Indeed, the set $W_0(A)$ is convex, and its central symmetry therefore implies that it contains zero. \begin{thm}\label{th:rq}A quadratic operator $A$ is Roberts orthogonal to the identity if and only if it is nilpotent or a scalar multiple of an involution.
\end{thm} \begin{proof}{\sl Necessity.} Combining Proposition~\ref{pr:rost} with Theorem~\ref{th:quad} we see that $A$ satisfies \eqref{quad} with $p=0$. If in addition $q=0$, then $A$ is nilpotent; otherwise $(-q)^{-1/2}A$ is an involution. {\sl Sufficiency.} The operator in question is unitarily similar to \eqref{mquad} with $\lambda_1=-\lambda_2:=a$. The observation $\norm{A-\lambda I}=\norm{A_0-\lambda I}$ from the proof of Theorem~\ref{th:quad}, followed by the application of \eqref{2max} to $A_0$ in place of $A$, shows that in our case \[ \norm{A-\lambda I}^2=\abs{a}^2+\abs{\lambda}^2+\norm{Z}^2/2 +\sqrt{(\abs{a}^2+\abs{\lambda}^2+\norm{Z}^2/2)^2-\abs{a^2-\lambda^2}^2}. \] The right-hand side is indeed invariant under the change $\lambda\mapsto -\lambda$, and so $A\perp_R I$. \end{proof} So, for quadratic operators $A$ the condition $\operatorname{St}(A)=0$ is not only necessary but also sufficient for $A\perp_R I$. As a by-product we see that the central symmetry of $W(A)$ is yet another equivalent condition, a fact observed for $n=2$ in \cite[Proposition~2.7]{ArBeRa}. Moving to $A\in\mathbb C^{3\times 3}$, we restrict our attention to matrices with circular numerical ranges. \begin{thm}\label{th:r3} A matrix $A\in\mathbb C^{3\times 3}$ with a circular numerical range is Roberts orthogonal to the identity if and only if it is nilpotent or unitarily similar to a direct sum of a $1$-by-$1$ and a nilpotent $2$-by-$2$ block. \end{thm} \begin{proof} {\sl Necessity.} Let $A\perp_R I$ while $W(A)$ is a circular disk. The central symmetry of $W(A)$ means that this disk is centered at the origin. According to \cite[Corollary 2.5]{KRS} (see also \cite{CT94}), such $A$ is unitarily similar to \eq{Acd} \begin{bmatrix} 0 & x & y \\ 0 & 0 & z \\ 0 & 0 & \lambda \end{bmatrix}, \en where \eq{c1} x\overline{y}z=-\lambda(\abs{y}^2+\abs{z}^2)\en and \eq{c2} \abs{x}^2+\abs{y}^2+\abs{z}^2\geq 4\abs{\lambda}^2.
\en In the notation of Theorem~\ref{th:lmm}, condition \eqref{c1} implies $uvw=\rho(v^2+w^2)$. Plugging this into \eqref{eqlm} (which holds, since $\operatorname{St}(A)=\mu=0$) we conclude that $\lambda=0$ or $z=0$. If $\lambda=0$, the matrix \eqref{Acd} is nilpotent. Otherwise, \eqref{c1} implies that $y=0$ along with $z$, and $A$ is therefore unitarily similar to \eq{Ar} [\lambda]\oplus B, \text{ where } B=\begin{bmatrix} 0 & x \\ 0 & 0\end{bmatrix}. \en {\sl Sufficiency.} If $A$ is nilpotent, the circularity of its numerical range implies that in \eqref{Acd} not only $\lambda=0$, but also $xyz=0$, due to \eqref{c1}. For any $\nu\in\mathbb C$, the matrix $A-\nu I$ can be reduced, by a rotation through $-\arg\nu$ and an appropriate diagonal unitary similarity, to \[ \begin{bmatrix} \abs{\nu} & \abs{x} & \abs{y} \\ 0 & \abs{\nu} & \abs{z}\\ 0 & 0 & \abs{\nu} \end{bmatrix}. \] So, $\norm{A-\nu I}$ in this case does not depend on the argument of $\nu$, only on its absolute value. In particular, $A\perp_R I$. Finally, if $A$ is unitarily similar to \eqref{Ar}, the circularity of $W(A)$ implies that $2\abs{\lambda}\leq\abs{x}$. From here, \[ \norm{B-\nu I}^2=\abs{\nu}^2+\frac{\abs{x}^2}{2}+\sqrt{\abs{x\nu}^2+\frac{\abs{x}^4}{4}}\geq \abs{\nu}^2+2\abs{\lambda}^2+2\abs{\lambda}\sqrt{\abs{\nu}^2+\frac{\abs{\lambda}^2}{4}}, \] so $\abs{\lambda-\nu}\leq\norm{B-\nu I}$ for all $\nu\in\mathbb C$. Consequently, \[ \norm{A-\nu I}=\max\{\abs{\lambda-\nu}, \norm{B-\nu I}\}= \norm{B-\nu I},\] and the relation $A\perp_R I$ follows from $B\perp_R I$. \end{proof} \begin{cor}\label{co:cst}Let $A\in\mathbb C^{3\times 3}$ have a circular numerical range. Then $A\perp_RI$ if and only if the disk $W(A)$ is centered at zero and $\operatorname{St}(A)=0$. \end{cor} A particular example \[ A= \begin{bmatrix} 0 & 1 & 1 \\ 0 & 0 & 1\\ 0 & 0 & -\frac{1}{2} \end{bmatrix} \] was considered in \cite{ArBeRa}.
It was shown there that $A\not\perp_R I$ by computing \[ \norm{A+I}\approx 2.1617\neq 2.1366\approx \norm{A-I}, \] while $W(A)$ is a circular disk centered at the origin. This agrees with our Theorem~\ref{th:r3} since this matrix is neither unitarily reducible nor nilpotent. Note also that $\operatorname{St}(A)\approx 0.0203\neq 0$, in agreement with Corollary~\ref{co:cst}. \section*{Appendix} \label{s:ap} The figures below represent graphs of numerical ranges (bounded by green curves), spectra (blue dots) and Stampfli points (black dots) of several matrices, to illustrate some statements of the paper. The matrices in Fig.~1, 3 and 4 are nilpotent and unitarily irreducible, with the numerical range having each of the three possible shapes (circular, with a flat portion on the boundary, or ovular, as per \cite{KRS}). The one with the circular numerical range (Fig.~1) satisfies the conditions of Proposition~\ref{th:3by30}, and indeed has the Stampfli point located at the origin. In Fig.~3 and~4, the Stampfli point differs from zero, and is positioned in agreement with Theorem~\ref{th:comp}. Finally, Fig.~2 illustrates that for almost normal matrices, $\operatorname{St}(A)$ lies in the interior of $\operatorname{conv}\sigma(A)$ (bounded by the red triangle), in agreement with Corollary~\ref{co:stanf}.
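The norms quoted from \cite{ArBeRa}, and the nonzero Stampfli point of that matrix, are easy to reproduce numerically (an informal numpy check; since the matrix is real, the minimizer of $c\mapsto\norm{A-cI}$ is real, so a one-dimensional grid search suffices):

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, -0.5]])
I3 = np.eye(3)

n_plus = np.linalg.norm(A + I3, 2)   # reported as approximately 2.1617
n_minus = np.linalg.norm(A - I3, 2)  # reported as approximately 2.1366

# crude grid search for St(A) on the real axis
grid = np.arange(-0.3, 0.3, 0.002)
c_star = grid[np.argmin([np.linalg.norm(A - c * I3, 2) for c in grid])]
```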
\begin{figure}[htbp] \centerline{\includegraphics[scale= 0.42]{Disk.png}} \caption{$A =\protect\begin{bmatrix} 0 & 2 - i & 0 \protect\\ 0 & 0 & 2i \protect\\ 0 & 0 & 0 \protect\end{bmatrix}$; $St(A) = 0$.} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[scale= 0.42]{8_ANM_Interior.png}} \caption{$A =\protect\begin{bmatrix} 2 + i & 0 & 2 - 2i \protect\\ 0 & i & 2 \protect\\ 0 & 0 & -5 \protect\end{bmatrix}$; $St(A) = -1.008 + 0.0237i$} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[scale= 0.42]{4_FlatPortion2.png}} \caption{$A =\protect\begin{bmatrix} 0 & 3 - 4i & -5 \protect\\ 0 & 0 & -4+3i \protect\\ 0 & 0 & 0 \protect\end{bmatrix}$; $St(A) = -0.0145 -1.2143i$} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[scale= 0.42]{7_Oval3.png}} \caption{$A =\protect\begin{bmatrix} 0 & 1 - 4i & -3 - 2i \protect\\ 0 & 0 & 1 + 5i \protect\\ 0 & 0 & 0 \protect\end{bmatrix}$; $St(A) = -0.9363 + 0.5225i$} \end{figure} \clearpage \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{INTRODUCTION} \label{sec:Introduction} Measurements of time-dependent \ensuremath{C\!P}\xspace\ asymmetries in \ensuremath{B^0}\xspace\ meson decays through a Cabibbo-Kobaya\-shi-Maskawa (CKM) favored $b \ensuremath{\rightarrow}\xspace c \bar{c} s$ amplitude \cite{s2b,belles2b} have firmly established that \ensuremath{C\!P}\xspace\ symmetry is not conserved in the neutral $B$ meson system. The effect, arising from the interference between mixing and decay proportional to the \ensuremath{C\!P}\xspace-violating phase $\beta = \arg{(-V_{cd} V^*_{cb}/ V_{td} V^*_{tb})}$ of the CKM mixing matrix \cite{SM}, manifests itself as an asymmetry in the time evolution of the $\ensuremath{B^0}\xspace\ensuremath{\Bbar^0}\xspace$ pair. In the Standard Model, decays of $B^0$ mesons to charmless hadronic final states such as $\omega\ensuremath{K^0}\xspace$ proceed mostly via a single loop (penguin) amplitude with the same weak phase as the $b \ensuremath{\rightarrow}\xspace c \bar{c} s$ transition \cite{Penguin}, but CKM-suppressed amplitudes and multiple particles in the loop introduce additional weak phases whose contribution may not be negligible; see Refs. \cite{Gross,BN} for early quantitative work in addressing the size of these effects. We define \ensuremath{{\rm \Delta}S}\xspace\ as the difference between the magnitude of the time-dependent \ensuremath{C\!P}\xspace-violating parameter $S$ (given in detail below) measured in these decays and $S=\ensuremath{\sin\! 2 \beta }\xspace$ measured in decays to charmonium and a neutral kaon. For the decay \ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fomegaKz}, these additional contributions are expected to give $\ensuremath{{\rm \Delta}S}\xspace\sim$ 0.1 \cite{beneke,CCS}, although this increase may be nullified when final-state interactions are included \cite{CCS}. A value of \ensuremath{{\rm \Delta}S}\xspace\ inconsistent with this expectation could be an indication of new physics \cite{lonsoni}. 
We present an improved preliminary measurement of the time-dependent \ensuremath{C\!P}\xspace-violating asymmetry in the decay \ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fomegaKs}, previously reported by the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ and Belle Collaborations~\cite{BABAR, BELLE}. Charge-conjugate decay modes are implied throughout. \section{THE \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ DETECTOR AND DATASET} \label{sec:babar} The data were collected with the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector~\cite{BABARNIM} at the PEP-II asymmetric-energy $e^+e^-$ collider. An integrated luminosity of 316~fb$^{-1}$, corresponding to 347 million \BB\ pairs, was recorded at the $\Upsilon (4S)$ resonance (center-of-mass energy $\sqrt{s}=10.58\ \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$). Charged particles are detected and their momenta measured by the combination of a silicon vertex tracker (SVT), consisting of five layers of double-sided detectors, and a 40-layer central drift chamber, both operating in a 1.5 T axial magnetic field. Charged-particle identification is provided by the energy loss in the tracking devices and by the measured Cherenkov angle from an internally reflecting ring-imaging Cherenkov detector covering the central region. Photons and electrons are detected by a CsI(Tl) electromagnetic calorimeter. The instrumented flux return of the magnet allows discrimination of muons from pions. \section{ANALYSIS METHOD} \label{sec:Analysis} From a $\ensuremath{B^0}\xspace\ensuremath{\Bbar^0}\xspace$ pair produced in an \ensuremath{\Upsilon(4S)}\ decay, we reconstruct one of the $B$ mesons in the final state $f = \ensuremath{\omega\KS}$, a \ensuremath{C\!P}\xspace\ eigenstate with eigenvalue $-1$. 
For the time evolution measurement, we also identify (tag) the flavor (\ensuremath{B^0}\xspace\ or \ensuremath{\Bbar^0}\xspace) and reconstruct the decay vertex of the other $B$. The asymmetric beam configuration provides a boost of $\beta\gamma = 0.56$ to the \ensuremath{\Upsilon(4S)}\ center-of-mass frame in the laboratory, which allows the determination of the proper decay time difference $\dt \equiv t_f-\ensuremath{t_{\rm tag}}$ from the vertex separation of the two $B$ meson candidates. Ignoring the \dt\ resolution (about 0.5 ps), the distribution of \dt\ is \begin{equation} \label{eq:FCPdef} F(\dt) = \frac{e^{-\left|\ensuremath{{\rm \Delta}t}\xspace\right|/\tau}}{4\tau} [1 \mp\Delta w \pm (1-2w)\left( S\sin(\ensuremath{{\rm \Delta}m_d}\xspace\ensuremath{{\rm \Delta}t}\xspace) - C\cos(\ensuremath{{\rm \Delta}m_d}\xspace\ensuremath{{\rm \Delta}t}\xspace)\right)]. \end{equation} The upper (lower) sign denotes a decay accompanied by a \ensuremath{B^0}\xspace (\ensuremath{\Bbar^0}\xspace) tag, $\tau$ is the mean $\ensuremath{B^0}\xspace$ lifetime, $\ensuremath{{\rm \Delta}m_d}\xspace$ is the mixing frequency, and the mistag parameters $w$ and $\Delta w$ are the average and difference, respectively, of the probabilities that a true $\ensuremath{B^0}\xspace$\,($\ensuremath{\Bbar^0}\xspace$) meson is mistagged as a $\ensuremath{\Bbar^0}\xspace$\,($\ensuremath{B^0}\xspace$). The parameter $C$ measures direct \ensuremath{C\!P}\xspace\ violation. The flavor-tagging algorithm \cite{s2b} has seven mutually exclusive tagging categories of differing purities, including one for untagged events that we retain for yield determinations. The measured analyzing power, defined as efficiency times $(1-2w)^2$ summed over all categories, is $(30.4\pm 0.3)\%$, as determined from a large sample of $B$ decays to fully reconstructed flavor eigenstates (\ensuremath{B_{\rm flav}}).
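One property of Eq.~(\ref{eq:FCPdef}) worth noting is that the two tagged distributions sum to a properly normalized PDF, since the tag-dependent terms enter with opposite signs. A numerical sketch (the parameter values below are illustrative, not the analysis fit):

```python
import numpy as np

tau, dmd = 1.53, 0.507   # B0 lifetime [ps] and mixing frequency [1/ps]; illustrative
S, C = 0.62, -0.43       # CP parameters; illustrative values
w, dw = 0.15, 0.01       # illustrative mistag parameters

def F(dt, tag):
    """Eq. (1); tag = +1 for a B0 tag (upper signs), -1 for a B0bar tag."""
    osc = S * np.sin(dmd * dt) - C * np.cos(dmd * dt)
    return np.exp(-np.abs(dt) / tau) / (4 * tau) * (1 - tag * dw + tag * (1 - 2 * w) * osc)

dt = np.linspace(-60.0, 60.0, 1_200_001)
f = F(dt, +1) + F(dt, -1)
total = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(dt))  # trapezoidal integral; ~1
```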
We reconstruct a $B$ meson candidate by combining a \KS\ with an \ensuremath{{\omega\ensuremath{\rightarrow}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^0}\xspace}}\ candidate. We select $\KS\ensuremath{\rightarrow}\xspace\pi^+\pi^-$ decays by requiring the $\pi^+\pi^-$ invariant mass to be within 12 \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace\ ($\sim$4$\sigma$) of the nominal \ensuremath{K^0}\xspace\ mass and by requiring a flight length greater than three times its error. We require the $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^0}\xspace$ invariant mass (\ensuremath{m_{\rm res}}) to be between 735 and 825 \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace. Distributions from the data and from Monte Carlo (MC) simulations \cite{geant}\ guide the choice of these selection criteria. We retain regions adequate to characterize the background as well as the signal for those quantities taken subsequently as observables for fitting. We also use the angle $\theta_H$, defined in the $\omega$ rest frame as the angle of the direction of the boost from the $B$ rest frame with respect to the normal to the $\omega$ decay plane. The quantity $\ensuremath{{\cal H}}\equiv|\cos{\theta_H}|$ is approximately flat for background decays and distributed as $\cos^2{\theta_H}$ for signal decays. A $B$ meson candidate is characterized kinematically by the energy-substituted mass $\mbox{$m_{\rm ES}$}\xspace \equiv \sqrt{(\ensuremath{{1\over2}} s + {\bf p}_0\cdot {\bf p}_B)^2/E_0^2 - {\bf p}_B^2}$ and the energy difference $\ensuremath{\Delta E} \equiv E_B^*-\sqrt{s}/2$, where $(E_0,{\bf p}_0)$ and $(E_B,{\bf p}_B)$ are four-momenta of the \ensuremath{\Upsilon(4S)}\ and the $B$ candidate, respectively, and the asterisk denotes the center-of-mass rest frame. We require $|\ensuremath{\Delta E}|\le0.2$ GeV, $5.25\le\mbox{$m_{\rm ES}$}\xspace\le5.29\ \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$, $|\dt|<20$ ps and $\sigdt<2.5$ ps. 
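The energy-substituted mass replaces the measured $B$ energy by the value implied by the beam kinematics, so for a correctly reconstructed $B$ from the $\ensuremath{\Upsilon(4S)}$ it evaluates to the $B$ mass in any frame. A small sketch (the boost value matches the text; the other numbers are illustrative):

```python
import numpy as np

sqrt_s = 10.58           # CM energy [GeV]
m_B = 5.279              # B mass [GeV]; illustrative
bg = 0.56                # beta*gamma of the CM boost, as in the text
gamma = np.hypot(1.0, bg)
beta = bg / gamma

# Upsilon(4S) four-momentum in the lab (boost along z)
E0 = gamma * sqrt_s
p0z = gamma * beta * sqrt_s

# a signal B in the CM frame: E* = sqrt(s)/2, |p*| fixed by the B mass
p_star = np.sqrt(sqrt_s**2 / 4 - m_B**2)
theta = 0.73             # arbitrary CM polar angle
pB = p_star * np.array([np.sin(theta), 0.0, np.cos(theta)])
pB[2] = gamma * (pB[2] + beta * sqrt_s / 2)  # boost the momentum to the lab

s = sqrt_s**2
mES = np.sqrt((0.5 * s + p0z * pB[2]) ** 2 / E0**2 - pB @ pB)  # evaluates to ~m_B
```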
To help reject the dominant background from continuum $\ensuremath{e^+e^-}\xspace\ensuremath{\rightarrow}\xspace\ensuremath{q\overline q}\xspace$ events ($q=u,d,s,c$), we use the angle $\theta_T$ between the thrust axis of the $B$ candidate and that of the rest of the tracks and neutral clusters in the event, calculated in the \ensuremath{\Upsilon(4S)}\ rest frame. The distribution of $\cos{\theta_T}$ is sharply peaked near $\pm1$ for jet-like $q\bar q$ pairs and is nearly uniform for the isotropic $B$ decays; we require $|\cos{\theta_T}|<0.9$. From MC simulations of \ensuremath{\Bz {\kern -0.16em \Bzb}}\xspace\ and \ensuremath{\Bu {\kern -0.16em \Bub}}\xspace\ events, we find evidence for a small (0.3\% of the total sample) \BB\ background contribution. We have therefore added a \BB\ component to the fit described below. We use an unbinned, multivariate maximum-likelihood fit to extract signal yields and \ensuremath{C\!P}\xspace-violation parameters. We use the discriminating variables \mbox{$m_{\rm ES}$}\xspace, \ensuremath{\Delta E}, \ensuremath{m_{\rm res}}, \ensuremath{{\cal H}}, and a Fisher discriminant \ensuremath{{\cal F}}\ \cite{PRD}. The Fisher discriminant combines five variables: the polar angles with respect to the beam axis in the \ensuremath{\Upsilon(4S)}\ frame of the $B$ candidate momentum and of the $B$ thrust axis; the tagging category; and the zeroth and second angular moments of the energy flow, excluding the $B$ candidate, about the $B$ thrust axis \cite{PRD}. We use \dt\ to extract the \ensuremath{C\!P}\xspace-violation parameters, $S$ and $C$.
We define the probability density function (PDF) for each event $i$, hypothesis $j$ (signal, \BB\ background and \ensuremath{q\overline q}\xspace\ background), and tagging category $c$: \begin{equation} \ensuremath{{\cal P}}_{j,c}^i \equiv \ensuremath{{\cal P}}_j (\mbox{$m_{\rm ES}$}\xspace^i) \ensuremath{{\cal P}}_j (\ensuremath{\Delta E}^i) \ensuremath{{\cal P}}_j(\ensuremath{{\cal F}}^i, c) \ensuremath{{\cal P}}_j (\ensuremath{m_{\rm res}}^i) \ensuremath{{\cal P}}_j(\ensuremath{{\cal H}}^i) \ensuremath{{\cal P}}_j (\dt^i, \sigdt^i, c)\,, \end{equation} where $\sigdt^i$ is the error on \dt\ for event $i$. We write the extended likelihood function as \begin{equation} {\cal L} = \prod_{c} \exp{(-\sum_j Y_{j} f_{j,c})} \prod_i^{N_c}\left[\sum_j Y_j f_{j,c} {\cal P}^i_{j,c}\right]\,, \end{equation} where $Y_j$ is the fitted yield of events of species $j$, $f_{j,c}$ is the fraction of events of species $j$ for each category $c$, and $N_c$ is the number of events of category $c$ in the sample. We fix $f_{{\rm sig},c}$ and $f_{\BB,c}$ to $f_{\ensuremath{B_{\rm flav}},c}$, the values measured with the large \ensuremath{B_{\rm flav}}\ sample \cite{s2b}. The PDF $\ensuremath{{\cal P}}_{\rm sig}(\dt,\, \sigdt, c)$ is given by $F(\dt)$ (Eq.\ \ref{eq:FCPdef}) with tag category ($c$) dependent mistag parameters convolved with the signal resolution function (a sum of three Gaussians) determined from the \ensuremath{B_{\rm flav}}\ sample. 
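A minimal illustration of the extended term in the likelihood above (a toy with one category and one species whose PDF is flat, not the analysis fit): the Poisson factor $e^{-Y}Y^{N}$ drives the fitted yield toward the observed event count.

```python
import numpy as np

# With one species and a flat PDF, L(Y) is proportional to exp(-Y) * Y^N,
# so the maximum-likelihood yield is Y = N.
N = 142  # toy sample size (of the order of the fitted signal yield)

def nll(Y):
    """-log L(Y) up to a Y-independent constant."""
    return Y - N * np.log(Y)

Ys = np.arange(1.0, 400.0, 0.01)
Y_hat = Ys[np.argmin(nll(Ys))]
```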
The other PDF forms are: the sum of two Gaussians for all signal shapes except \ensuremath{{\cal H}}, and for the peaking component of the \ensuremath{m_{\rm res}}\ background; the sum of three Gaussians for $\ensuremath{{\cal P}}_{\ensuremath{q\overline q}\xspace}(\dt, c)$ and $\ensuremath{{\cal P}}_{\BB}(\dt, c)$; an asymmetric Gaussian with different widths below and above the peak for $\ensuremath{{\cal P}}_j(\ensuremath{{\cal F}})$ (a small ``tail'' Gaussian is added for $\ensuremath{{\cal P}}_{\ensuremath{q\overline q}\xspace}(\ensuremath{{\cal F}})$); Chebyshev functions of second to fourth order for the \ensuremath{{\cal H}}\ distribution for signal and the slowly-varying shapes of the \ensuremath{\Delta E}, \ensuremath{m_{\rm res}}, and \ensuremath{{\cal H}}\ distributions for backgrounds; and, for $\ensuremath{{\cal P}}_{\ensuremath{q\overline q}\xspace}(\mbox{$m_{\rm ES}$}\xspace)$, a phase-space-motivated empirical function \cite{argus}, with a small Gaussian added for $\ensuremath{{\cal P}}_{\BB}(\mbox{$m_{\rm ES}$}\xspace)$. We determine the PDF parameters from simulation for the signal and \BB\ background components. We study large control samples of $B\ensuremath{\rightarrow}\xspace D\pi$ decays of similar topology to verify the simulated resolutions in \ensuremath{\Delta E}\ and \mbox{$m_{\rm ES}$}\xspace, adjusting the PDFs to account for any differences found. For the \ensuremath{q\overline q}\xspace\ background we use (\mbox{$m_{\rm ES}$}\xspace,\,\ensuremath{\Delta E}) sideband data to obtain initial PDF-parameter values, but ultimately leave many of them free to vary in the final fit.
\section{RESULTS} \label{sec:Physics} The free parameters in the fit are the following: the signal, \BB\ background, and \ensuremath{q\overline q}\xspace\ background yields; the three shape parameters of ${\cal P}_{\ensuremath{q\overline q}\xspace}(\ensuremath{{\cal F}})$; the slopes of ${\cal P}_{\ensuremath{q\overline q}\xspace}(\ensuremath{\Delta E})$ and ${\cal P}_{\ensuremath{q\overline q}\xspace}(\ensuremath{m_{\rm res}})$; the fraction of the peaking component of ${\cal P}_{\ensuremath{q\overline q}\xspace}(\ensuremath{m_{\rm res}})$; the \mbox{$m_{\rm ES}$}\xspace\ background shape parameter $\xi$ \cite{argus}; $S$; $C$; the fraction of background events in each tagging category; and the six primary parameters describing the \dt\ background shape. The parameters $\tau$ and $\ensuremath{{\rm \Delta}m_d}\xspace$ are fixed to world-average values \cite{PDG2006}. Table \ref{tab:results} shows the results of the fit. The errors have been scaled by $\sim$1.10 to account for a slight underestimate of the fit errors predicted by our simulations when the number of signal events is small. \begin{table}[ht] \caption{ Total sample size, detection efficiency, signal yield, \BB\ background yield and $CP$-asymmetry parameters $S$ and $C$ from the fit.} \label{tab:results} \begin{center} \begin{tabular}{lccc} \dbline Quantity & \ensuremath{\omega\KS}\ \\ \sgline Total fit sample & 12636 \\ Eff. 
(\%) & 23.0 \\ Fit signal yield & $142^{+17}_{-16}$ \\ \BB\ yield & $38^{+25}_{-22}$ \\ $S$ &\ensuremath{\phantom{-}}$0.62^{+0.25}_{-0.30}$ \\ $C$ & $-0.43^{+0.25}_{-0.23}$ \\ \dbline \end{tabular} \end{center} \vspace*{-0.5cm} \end{table} \begin{figure}[!htb] \begin{center} \includegraphics[angle=0,scale=0.7]{projPlot_omks.eps} \vspace{-.4cm} \caption{\label{fig:projMbDE} $B$ candidate projections for \ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fomegaKs}\ of (a) \ensuremath{m_{\rm ES}}, (b) \ensuremath{\Delta E}, (c) \ensuremath{{\cal F}}, (d) \ensuremath{{\cal H}}, and (e) \ensuremath{m_{\rm res}}, shown for a signal-enhanced subset of the data (points with error bars), with the fit function (solid line), and the background components (dashed line) overlaid.} \vspace{-.4cm} \end{center} \end{figure} Fig.\ \ref{fig:projMbDE}\ shows projections onto the fit variables for a subset of the data (including 45--65\% of signal events) for which the signal likelihood (computed without the variable plotted) exceeds a threshold that optimizes the sensitivity. Fig.~\ref{fig:dtproj} shows the $\Delta t$ projections and asymmetry of the time-dependent fit applying the same event selection criteria as for Fig.~\ref{fig:projMbDE}. Based on explicit variation of $C$ with $S$ allowed to float, we find the correlation between $S$ and $C$ to be negligible. \begin{figure}[!tbp] \begin{center} \includegraphics[scale=0.7]{projPlot_omks_dt.eps} \end{center} \vspace{-.3cm} \caption{Projections onto \ensuremath{{\rm \Delta}t}\xspace\ for \ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fomegaKs}, where $t_{CP}$ is the decay time for the signal $B$ meson. 
Data (points with errors), the fit function (solid line), background component (dashed line), and signal component (dotted line) are shown for events in which the tag meson is (a) \ensuremath{B^0}\xspace\ and (b) \ensuremath{\Bbar^0}\xspace, and the asymmetry $(N_{B^0}-N_{\kern 0.18em\overline{\kern -0.18em B}{}\xspace^0})/(N_{B^0}+N_{\kern 0.18em\overline{\kern -0.18em B}{}\xspace^0})$ is shown in (c), where $N$ indicates the total number of events passing the same cuts as for Fig.~\ref{fig:projMbDE}.} \label{fig:dtproj} \vspace{-0.4cm} \end{figure} \section{SYSTEMATIC UNCERTAINTIES} We estimate systematic uncertainties in $S$ and $C$ from the following sources: potential dilution due to \BB\ background (0.01); variation of the PDF shapes used in the fit (0.01); knowledge of the parameters used to model the signal \dt\ distribution (0.02); and interference between the CKM-suppressed $\bar{b}\ensuremath{\rightarrow}\xspace\bar{u} c\bar{d}$ amplitude and the favored $b\ensuremath{\rightarrow}\xspace c\bar{u}d$ amplitude for some tag-side $B$ decays \cite{dcsd} (0.02 for $C$, negligible for $S$), where the value in parentheses is the size of the estimated systematic uncertainty. By applying distortions to MC samples and refitting all tracks, we find that the uncertainties due to possible SVT misalignment and to the position and size of the beam spot are negligible. The uncertainties in the parameters of fits to the \ensuremath{B_{\rm flav}}\ sample are used for the uncertainties in the signal PDF parameters: \dt\ resolutions, tagging efficiencies, and mistag rates. Published measurements \cite{PDG2006} are used for $\tau_B$ and \ensuremath{{\rm \Delta}m_d}\xspace. Summing all systematic uncertainties in quadrature, we obtain 0.02 for $S$ and 0.03 for $C$.
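The quadrature sums quoted above follow directly from the per-source values listed in the text:

```python
import numpy as np

# per-source systematic uncertainties from the text:
# BBbar background, PDF shapes, signal dt model, tag-side interference
syst_S = [0.01, 0.01, 0.02]        # tag-side interference negligible for S
syst_C = [0.01, 0.01, 0.02, 0.02]

tot_S = float(np.sqrt(np.sum(np.square(syst_S))))
tot_C = float(np.sqrt(np.sum(np.square(syst_C))))
```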
\section{SUMMARY} \label{sec:Summary} In conclusion, we have presented preliminary results for the time-dependent asymmetry parameters for the decay \ensuremath{\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace\fomegaKs}, $\skz = 0.62^{+0.25}_{-0.30}\pm 0.02$ and $\ckz = -0.43^{+0.25}_{-0.23}\pm 0.03$, where the first uncertainty is statistical and the second systematic. If we fix $C=0$, we find $S=0.63^{+0.28}_{-0.33}$, where the uncertainty is statistical only. This value of $\skz$ and the world-average value of \ensuremath{\sin\! 2 \beta }\xspace\ \cite{s2b,belles2b} yield a value of $\ensuremath{{\rm \Delta}S}\xspace=-0.09\pm0.31$, in good agreement with the Standard Model expectation near zero. \section{ACKNOWLEDGMENTS} \label{sec:Acknowledgments} \input pubboard/acknowledgements
\section{Introduction} \label{sec:introduction} The Traveling Thief Problem (TTP)~\cite{bonyadi2013travelling} is a well-studied multi-component problem that combines the Traveling Salesman Problem (TSP) and the Knapsack Problem (KP). The TTP has been designed in order to provide an academic abstraction of multi-component problems for the scientific community. In brief, in the TTP, a single thief has to visit all cities (TSP component) and can make a profit by stealing items and storing them in a rented knapsack (KP component). As stolen items are stored in the knapsack, it becomes heavier, and the thief travels more slowly, with a velocity that decreases linearly with the knapsack weight. The thief's objective is to maximize the total profit of the stolen items while considering the price to pay for the knapsack, which is proportional to the rent time. The TTP has been rapidly gaining attention due to its challenging interconnected multi-component structure. Thus far, many approaches have been proposed for solving it, including iterative heuristics~\cite{polyakovskiy2014comprehensive}, metaheuristic approaches~\cite{el2015cosolver2b, faulkner2015approximate, el2016population, wagner2016stealing}, and exact approaches to study the quality of solutions for small instances~\cite{wu2017exact}. Some studies have investigated the structure and properties of the TTP~\cite{mei2016investigation,yafrani2018fitness}. We refer to~\cite{wagner2018case} for a comparison of 21 algorithms that provides a TTP portfolio. The Thief Orienteering Problem (ThOP)~\cite{santos2018thief} has been designed as an academic multi-component problem with different interactions and constraints in mind: it combines the Orienteering Problem (OP) and the Knapsack Problem (KP).
The OP is a well-studied problem in operational research (see, e.g., \cite{GoLeVo87, chao1996fast, vansteenwegen2011orienteering,GUNAWAN2016315}), where a participant starts at a given point, travels through a region visiting checkpoints, and has to arrive at a control point within a given time. Each checkpoint has a score, and the objective of the participant is to find the route that maximizes the total score, i.e., whose sum of scores of the checkpoints visited is maximal. Recent real-world examples of the OP include tourists planning their sight-seeing trips~\cite{Fang2014travelrouting}, rescue teams planning visits to safe places in case of emergencies~\cite{Baffo2017emergencyrouting}, and politicians or music bands planning their tours~\cite{AksenS2016politicianrouting,Freeman2018bandrouting}. Santos and Chagas~\cite{santos2018thief} have proposed a Mixed Integer Non-Linear Programming formulation for the ThOP, but no computational results have been presented due to the formulation's complexity. Instead, two simple heuristic algorithms have been proposed, i.e., one based on Iterated Local Search (ILS)~\cite{lourencco2003iterated} and one based on a Biased Random-Key Genetic Algorithm (BRKGA)~\cite{gonccalves2011biased}. The BRKGA outperformed ILS on large instances, and the authors have attributed this to the diversification introduced by the mutant individuals. In this work, we propose the use of a two-phase swarm intelligence approach based on Ant Colony Optimization (ACO) and a new greedy heuristic, to construct, respectively, the route and the packing plan (stolen items) of the thief. We investigate the importance of the components via automated algorithm configuration and then evaluate our approach on a broad set of instances. The remainder of this paper is structured as follows.
In Section~\ref{sec:problem_description}, we formally describe the ThOP and present detailed solution examples to demonstrate the interwoven nature of the problem's components. In Section~\ref{sec:max_min_ant_algorithm}, we present our solution approach for the ThOP. Section~\ref{sec:computational_experiments} reports the experiments and analyzes the performance of the proposed solution approach against previous approaches from the literature. We conclude in Section~\ref{sec:conclusions} with a summary and outline possible future work. \section{Problem description} \label{sec:problem_description} As stated by Santos and Chagas~\cite{santos2018thief}, in the ThOP, there is a set of $n$ cities, labeled from $1$ to $n$, where the cities $1$ and $n$ are, respectively, the cities where the thief starts and ends their journey. A set of $m$ items is scattered among the other cities $(2, \ldots, n-1)$; each of these cities holds one or more items. Each item $i \in \{1, \ldots, m\}$ has an associated profit $p_i$ and weight $w_i$. For any pair of cities $i$ and $j$, the distance $d_{ij}$ between them is known. In the ThOP, a single thief steals items scattered among the cities. The thief has a knapsack with a limited capacity $W$ to carry the items. Moreover, the thief has a maximum time $T$ to complete their whole robbery plan. The speed of the thief decreases linearly with the knapsack weight. When the knapsack is empty, the thief can move with their maximum speed $v_{max}$. However, when the knapsack is full, the thief moves with the minimum speed $v_{min}$. The speed $v$ of the thief when the knapsack weight is $w \leq W$ is given by $v = v_{max} - w \times (v_{max} - v_{min})\,/\, W$.
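As a quick sanity check, the speed model and the resulting travel-time accounting can be sketched in a few lines of Python, instantiated with the parameter values of the worked example later in this section ($v_{min}=0.1$, $v_{max}=1.0$, $W=3$); the function names are illustrative and not part of any ThOP code:

```python
def thief_speed(w, W=3.0, v_min=0.1, v_max=1.0):
    """Speed for current knapsack weight w (0 <= w <= W), per the formula above."""
    return v_max - w * (v_max - v_min) / W

def travel_time(legs):
    """Sum d / v over the route legs, each leg given as (distance, knapsack weight)."""
    return sum(d / thief_speed(w) for d, w in legs)

# Route <1,2,3,4> with items 1 and 4 stolen (first example solution below):
# leg 1->2 with empty knapsack, 2->3 carrying weight 2, 3->4 carrying weight 3.
t = travel_time([(5, 0), (8, 2), (5, 3)])
assert abs(t - 75.0) < 1e-9  # exactly the time limit T = 75
```

The speed is affine in the carried weight, so each added item slows down every remaining leg of the journey, which is what couples the packing decisions to the routing decisions.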
The objective of the ThOP is to provide a path from the start city $1$ to the end city $n$, as well as a set of items chosen from the cities visited throughout the route, so as to maximize the total profit stolen, ensuring that the capacity of the knapsack $W$ is not surpassed and the total traveling time of the thief is within the time limit $T$. The thief does not need to visit all cities. Note that, while the ThOP and the TTP appear to be similar due to the KP as a component, it has been argued that the ThOP is more practical due to two key differences: in the ThOP there is (A) no need to visit all the cities, and (B) the interaction is not through a time-dependent rent for the knapsack, but through a constraint that imposes on the thief a time limit to complete the tour -- at the very least for the aforementioned real-world examples, speed typically remains constant, but time constraints have to be fulfilled, and only worthwhile places have to be visited. While the relaxation of difference (A) might appear trivial, the consideration of this constraint, i.e., to visit all cities, is typically reflected in the design of heuristic~\cite{wagner2018case} and exact~\cite{wu2017exact,Neumann2019fptasPWT} approaches to the TTP, with Chand and Wagner~\cite{chand2016fast}'s Multiple Traveling Thieves Problem (MTTP) being the only exception known to us. Regarding difference (B) and the ThOP in general, applications can arise when there is not enough time or capacity to visit all possible cities. For an overview of time-dependent routing problems, we refer the interested reader to Gendreau et al.~\cite{gendreau2015time}. In order to clarify the characteristics of the ThOP, we depict in Figure~\ref{fig:example} a small worked example of a ThOP instance that involves 4 cities and 5 items. Note that there are no items in the start (1) and end (4) cities, whereas there are some items of different weights and profits distributed in the other cities (2 and 3).
The distance between each pair of cities is given on the corresponding edge. In the following, we present in detail some solutions for this instance. For this purpose, let us consider $v_{min} = 0.1$, $v_{max} = 1.0$, $W = 3$, and $T = 75$. \begin{figure}[!ht] \centering \includegraphics[scale=0.40]{thop_instance.pdf}\vspace{-3mm} \caption{A ThOP instance involving 4 cities and 5 items (from \cite{santos2018thief}).} \label{fig:example} \end{figure} We may represent a ThOP solution in two parts $(\pi, z)$. The first one consists of the route $\pi = \langle 1, \ldots, n \rangle$, a vector containing the ordered list of visited cities. Note that the first and last cities are fixed for any feasible solution. The second part is the packing plan $z = \langle z_1, z_2, \ldots, z_m \rangle$, a binary vector representing the states of items ($z_i = 1$ if item $i$ is stolen, and $0$ otherwise). According to this representation, let us consider the following ThOP solutions for the instance previously described: \begin{itemize}[leftmargin=5mm] \item { $(\langle 1, 2, 3, 4 \rangle, \langle 1, 0, 0, 1, 0 \rangle)$: it is a feasible solution with a total profit of $20 + 40 = 60$. The total weight of stolen items is $3$ and the total traveling time is $75$, which satisfies both limits $W$ and $T$. The total traveling time is calculated as: \begin{itemize} \setlength\itemsep{0mm} \item travel from the start city to city $2$ at maximum speed: time is computed as $d_{12}/v_{max} = 5/1.0 = 5$; \item at city $2$ the thief steals item $1$: the speed decreases to $v = 1.0 - 2 \times (1.0 - 0.1)\,/\,3 = 0.4$; \item travel from city $2$ to city $3$: total traveling time is $5 + d_{23}/v = 5 + 8/0.4 = 5 + 20 = 25$; \item at city $3$ item $4$ is collected: the speed drops to $v = 1.0 - 3 \times (1.0 - 0.1)\,/\,3 = 0.1$; \item travel from city $3$ to the end city: total traveling time is $5 + 20 + d_{34}/v = 5 + 20 + 5/0.1 = 5 + 20 + 50 = 75$.
\end{itemize} } \item { $(\langle 1, 3, 2, 4 \rangle, \langle 1, 0, 0, 1, 0 \rangle)$: it is an infeasible solution. Although the stolen items are the same as in the previous solution, the total traveling time $(77.43)$ exceeds the time limit: \begin{itemize} \setlength\itemsep{0mm} \item travel from the start city to city $3$ at maximum speed: time is computed as $d_{13}/v_{max} = 6/1.0 = 6$; \item at city $3$ the thief steals item $4$: the speed decreases to $v = 1.0 - 1 \times (1.0 - 0.1)\,/\,3 = 0.7$; \item travel from city $3$ to city $2$: total traveling time is $6 + d_{32}/v = 6 + 8/0.7 = 6 + 11.43 = 17.43$; \item at city $2$ item $1$ is collected: the speed drops to $v = 1.0 - 3 \times (1.0 - 0.1)\,/\,3 = 0.1$; \item travel from city $2$ to the end city: total traveling time is $6 + 11.43 + d_{24}/v = 6 + 11.43 + 6/0.1 = 6 + 11.43 + 60 = 77.43$. \end{itemize} } \item { $(\langle 1, 3, 4 \rangle, \langle 0, 0, 1, 0, 0 \rangle)$: it is the optimal solution for this instance with a total profit of $100$. The total weight is $3 \leq W$ and the total traveling time is $56 \leq T$: \begin{itemize} \setlength\itemsep{0mm} \item travel from the start city to city $3$ at maximum speed: time is computed as $d_{13}/v_{max} = 6/1.0 = 6$; \item at city $3$ the thief steals item $3$: the speed decreases to $v = 1.0 - 3 \times (1.0 - 0.1)\,/\,3 = 0.1$; \item travel from city $3$ to the end city: total traveling time is $6 + d_{34}/v = 6 + 5/0.1 = 6 + 50 = 56$. \end{itemize} } \end{itemize} Note that the packing plan of the optimal ThOP solution for the example instance happens to be the same as the optimal solution for the knapsack problem. However, the thief cannot always steal the best knapsack configuration within the time limit $T$. To exemplify this, let us now consider a tighter time limit equal to 20 for the previous instance.
For this case, the optimal ThOP solution would be $(\langle 1, 3, 4 \rangle, \langle 0, 0, 0, 1, 1 \rangle)$, which has a total profit of $80$ and total traveling time of $18.5$. \section{Stealing items with ants} \label{sec:max_min_ant_algorithm} In the following, we describe our heuristic approach for the ThOP. It is loosely based on Wagner's TTP study~\cite{wagner2016stealing}. As in~\cite{wagner2016stealing}, we propose in this work the use of swarm intelligence based on Ant Colony Optimization (ACO)~\cite{dorigo1999ant} to solve the ThOP's tour part, while a novel heuristic will be responsible for solving the ThOP's packing part, i.e., to select the set of stolen items. ACO algorithms form an important class of probabilistic search techniques that are inspired by the behavior of real ants. These algorithms have proven to be efficient in solving a range of combinatorial problems \cite{dorigo2005ant}. The basic idea behind ACOs is that ants construct solutions for a given problem by carrying out walks on a so-called construction graph. These walks are influenced by the pheromone values that are stored along the edges of the graph. During the optimization process, the pheromone values are updated according to good solutions found during the optimization, which should then lead the ants to better solutions in further iterations of the algorithm. We refer the interested reader to the book by Dorigo and Birattari~\cite{dorigo2010ant} for a comprehensive introduction. In order to define the thief's route, we use Stützle's ACOTSP 1.0.3 framework\footnote{Publicly available online at \href{http://www.aco-metaheuristic.org/aco-code}{\textcolor{blue}{http://www.aco-metaheuristic.org/aco-code}}}. This framework implements several ACO algorithms for the symmetric TSP, i.e., the solutions it finds are tours that visit all cities.
While this may not be efficient or even feasible for the thief due to the time limit to conclude their journey, we will adapt the output according to the solution found by the packing algorithm in order to determine efficient solutions for the ThOP. We note that ACOTSP builds complete TSP tours, not OP tours, hence possibly affecting the algorithm performance. We decided against the OP-tour approach: assuming that we have an OP tour and then consider the packing, we may (based on our two-phase approach) end up skipping cities if there are no interesting items to pick up; hence, a further dropping of cities may be required anyhow. Of course, this would be different if we had an algorithm to solve the OP and KP parts simultaneously. \subsection{ACO framework and adjustments} The ACOTSP framework allows us to choose which ant colony optimization approach is used. As in~\cite{wagner2016stealing}, we use the standard MAX-MIN ant system by Stützle and Hoos~\cite{stutzle2000max}, which restricts all pheromones to a bounded interval in order to prevent pheromones from dropping to arbitrarily small values. In Algorithm~\ref{alg:acothop}, we show a simplified overview of the proposed swarm intelligence approach, combined with the packing heuristic algorithm.
\begin{algorithm}[!ht] \makeatletter \newcommand{\algorithmfootnote}[2][\footnotesize]{% \let\old@algocf@finish\@algocf@finish \def\@algocf@finish{\old@algocf@finish \leavevmode\rlap{\begin{minipage}{\linewidth} #1#2 \end{minipage}}% }% } \footnotesize \DontPrintSemicolon \SetKwData{Left}{left} \SetKwData{Up}{up} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} $\pi^{best} \gets \varnothing, z^{best} \gets \varnothing$ \label{alg:best_sol_init} \\ \Repeat{\upshape stopping condition is fulfilled} { \label{alg:stopping_criterion_begin} $\Pi \gets$ construct TSP tours using ants \label{alg:construct_tsp_tours} \\ \ForEach{\upshape TSP tour $\pi \in \Pi$} { \label{alg:for_each_tour_begin} $z \gets$ construct a packing plan from $\pi$ using \textsc{Pack($\pi$,~$ptries$)} \label{alg:packing_plan} \\ \If{\upshape profit of $z$ is higher than profit of $z^{best}$} { \label{alg:second_replace_begin} $\pi^{best} \gets \zeta(\pi)$, $z^{best} \gets z$ \label{alg:clear_tsp_tour} } \label{alg:second_replace_end} } \label{alg:for_each_tour_end} update ACO statistics and pheromone trail \label{alg:update_pheromone} \\ } \label{alg:stopping_criterion_end} \Return $\pi^{best}$, $z^{best}$ \caption{ACO for the ThOP} \label{alg:acothop} \algorithmfootnote{$\zeta(\pi)$ returns a tour for the thief from the TSP tour $\pi$ by removing all cities where no item is stolen according to the packing plan $z$.} \end{algorithm}% Initially (line~\ref{alg:best_sol_init}), the best ThOP solution (tour and packing plan) found by the algorithm is initialized as an empty solution. The algorithm performs its iterative cycle (lines~\ref{alg:stopping_criterion_begin} to \ref{alg:stopping_criterion_end}) as long as the stopping criterion is not fulfilled. At line~\ref{alg:construct_tsp_tours}, each ant constructs a TSP tour.
For each TSP tour $\pi$ (line~\ref{alg:for_each_tour_begin}), we apply our heuristic algorithm for defining a packing plan (line~\ref{alg:packing_plan}, Algorithm~\ref{alg:packing_algorithm}), thus defining a feasible ThOP solution $(\pi, z)$. At lines~\ref{alg:second_replace_begin} to \ref{alg:second_replace_end}, the best solution found is possibly updated according to the solution $(\pi, z)$ previously found. Note that we remove from $\pi$ all cities where no items have been stolen according to the packing plan $z$ (line \ref{alg:clear_tsp_tour}) in order to get a more efficient ThOP tour (all ThOP instances use Euclidean distances rounded up). After all tours have been considered, ACO statistics and the pheromone values are updated according to the quality of the ThOP solutions found (line \ref{alg:update_pheromone}). At the end of the algorithm, the best solution found is returned. \paragraph{Implementation Notes} The overall logic of the ACOTSP framework remains unchanged in our proposed algorithm. Some minimal modifications have been performed to adapt it to the ThOP specifications. To construct the TSP tours, we only require that the first and last cities are those where the thief begins and ends their robbery journey. In the ACOTSP framework, the pheromone trail update is performed based on the quality of the TSP tours found by the ants. Since the objective of the TSP is to find the shortest possible tour visiting each city, the fitness of a given tour is inversely proportional to its total distance. On the other hand, in our ACOTSP adaptation, the fitness of each tour is set in terms of the quality of the stolen items throughout the tour, which are defined by the heuristic packing plan.
As the ACOTSP framework is developed explicitly for the TSP, a minimization problem where its solutions have positive objective values, we consider that the fitness of a ThOP tour $\pi$ is inversely proportional to $UB + 1 - p(z)$, where $UB$ is an upper bound for the ThOP and $p(z)$ is the total profit of packing plan $z$. Note that in this way we can maintain the same fitness behavior as for the TSP solutions, without modifying the ACO framework structure. The upper bound $UB$ is defined as the optimal solution for the KP version that allows selecting fractions of items. This KP version can be solved in $O(m\log_2 m)$ time by greedily considering the items sorted by profit-to-weight ratio. \subsection{ThOP packing heuristic} In Algorithm \ref{alg:packing_algorithm}, we describe our heuristic strategy for constructing a packing plan from a fixed tour. Note that even when the tour of the thief is kept fixed, finding the optimal packing configuration is NP-hard~\cite{polyakovskiy2015packing}. \begin{algorithm}[!ht] \footnotesize \DontPrintSemicolon \SetKwData{Left}{left} \SetKwData{Up}{up} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} $z \gets \varnothing$, $try \gets 1$ \\ \Repeat{$try > ptries$} { \label{alg:repeat_begin} choose a real number for each parameter $\theta$, $\delta$, and $\gamma$ from a uniform distribution in the range [0, 1], so that $\theta + \delta + \gamma = 1$ \label{alg:random_values} \\ \ForEach{\upshape $i \gets 1$ \textbf{to} $m$} { \label{alg:compute_scores_begin} compute score $s_i$ for item $i$ \tcp*{Eq.
\ref{eq:score}} } \label{alg:compute_scores_end} $z' \gets \varnothing$ \\ \For{\upshape $j \gets 1$ \textbf{to} $m$} { \label{alg:packing_begin} $i \gets $ get item with the $j$-th highest score \\ $z' \gets z' \cup \{i\}$ \\ \lIf{\upshape weight of $z'$ is higher than $W$} { \label{alg:weight_constraint} $z' \gets z' \setminus \{i\}$ } \Else{ $t \gets $ compute the required time to steal $z'$ by visiting only cities with selected items following the order of the TSP tour $\pi$ \label{alg:calculate_time} \\ \lIf{\upshape $t$ is longer than $T$} { \label{alg:time_constraint} $z' \gets z' \setminus \{i\}$ } } } \label{alg:packing_end} \lIf{\upshape profit of $z'$ is higher than profit of $z$} { \label{alg:best_packing_plan} $z \gets z'$ \label{alg:update_best_packing_plan} } $try \gets try + 1$ \\ } \label{alg:repeat_end} \Return $z$ \label{alg:return_best_packing_plan} \caption{Packing Algorithm: \textsc{Pack($\pi$, $ptries$)}} \label{alg:packing_algorithm} \end{algorithm}% Our packing heuristic algorithm seeks to find a good packing plan $z$ from multiple attempts for the same tour $\pi$. The number of attempts is defined by $ptries$. Each attempt is described between lines~\ref{alg:repeat_begin} to \ref{alg:repeat_end}. At the beginning of each attempt (line~\ref{alg:random_values}), we uniformly select three random values ($\theta$, $\delta$, and $\gamma$) between 0 and 1, and then normalize them so that their sum is equal to 1. These values are used to compute a score $s_i$ for each item $i \in \{1, \ldots, m\}$ (lines~\ref{alg:compute_scores_begin} to \ref{alg:compute_scores_end}), where $\theta$, $\delta$, and $\gamma$ define, respectively, exponents applied to profit $p_{i}$, weight $w_i$, and distance $d_{i}$ in order to manage their impact. The distance $d_{i}$ is calculated according to the tour $\pi$ by summing all distances along $\pi$ from the city where item $i$ is located to the end city. Equation~\ref{eq:score} shows how the score of item $i$ is calculated.
\begin{equation} \label{eq:score} s_{i} = \frac{{p_{i}}^{\theta}}{{w_{i}}^{\delta} \times {d_{i}}^{\gamma}} \end{equation} Note that each score $s_i$ incorporates a trade-off between the distance over which item $i$ has to be carried, its weight, and its profit. Equation \ref{eq:score} is based on the heuristic \textsc{PackIterative} that has been developed for the TTP~\cite{faulkner2015approximate}. However, unlike in~\cite{faulkner2015approximate}, we consider an exponent for the distance term to vary the importance of its influence. Furthermore, the values of all exponents are randomly drawn between 0 and 1 for each attempt (and then normalized) to explore the space of greedy packing plans. After computing scores for all items, we use their values to define the priority of each item in the packing strategy. The higher the score of an item, the higher its priority. Between lines~\ref{alg:packing_begin} and \ref{alg:packing_end}, we create the packing plan for the current attempt by considering the items according to their priorities. If an item violates the constraints of the ThOP (lines~\ref{alg:weight_constraint} and \ref{alg:time_constraint}), it is not selected. Note that we calculate travel time (line~\ref{alg:calculate_time}) from the cities listed on tour $\pi$, but we ignore those cities where no items are selected. After completing the current attempt's packing plan, its quality is compared to the best packing plan so far (line~\ref{alg:best_packing_plan}), which is then possibly updated (line~\ref{alg:update_best_packing_plan}). At the end of all attempts, the best packing plan found is returned (line~\ref{alg:return_best_packing_plan}). Note that our packing algorithm is non-deterministic (in contrast to the deterministic \textsc{PackIterative}~\cite{faulkner2015approximate}), as it has randomized components.
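A single attempt of this randomized greedy packing can be sketched as follows. The function names are our own, and the travel-time feasibility check is abstracted into a callback rather than computed from the fixed tour $\pi$ as in the actual algorithm:

```python
import random

def score(p, w, d, theta, delta, gamma):
    # Eq. (1): trade profit against weight and distance still to be carried.
    return p ** theta / (w ** delta * d ** gamma)

def one_packing_attempt(items, W, feasible_time):
    """One attempt of the randomized greedy packing (illustrative sketch).

    items: list of (profit, weight, distance-to-end) tuples.
    feasible_time: callback returning True iff the selected items can be
    stolen within the time limit T; in the paper this is computed from the
    fixed tour pi, skipping cities without selected items.
    """
    # Draw theta, delta, gamma uniformly from [0, 1] and normalize to sum 1.
    r = [random.random() for _ in range(3)]
    s = sum(r)
    theta, delta, gamma = (x / s for x in r)
    # Consider items in order of decreasing score.
    order = sorted(range(len(items)),
                   key=lambda i: score(*items[i], theta, delta, gamma),
                   reverse=True)
    selected, weight = [], 0.0
    for i in order:
        p, w, d = items[i]
        if weight + w > W:
            continue  # knapsack capacity would be exceeded
        if not feasible_time(selected + [i]):
            continue  # time limit T would be exceeded
        selected.append(i)
        weight += w
    return selected
```

With $ptries$ such attempts, the packing heuristic simply keeps the plan of highest total profit across calls.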
In our preliminary experiments, we have observed that ants find identical or very similar routes throughout the iterations of the ACO algorithm. For this reason, we decided to design our packing algorithm in a non-deterministic way in order to explore the packing plan space more broadly. Moreover, via the parameter $ptries$, we can control the number of attempts needed to reach good packing plans. \section{Computational experiments} \label{sec:computational_experiments} We now present the experiments performed to study the performance of the proposed framework concerning the quality of its solutions. We have rerun Santos and Chagas~\cite{santos2018thief}'s ThOP code to enable a fair comparison as the computational budget is based on wallclock time. Our framework has been implemented based on Thomas Stützle's ACOTSP 1.0.3 framework, which has been implemented in the C programming language. In our experiments, each run of the proposed algorithm has been performed sequentially (non-parallel) on an Intel(R) Xeon(R) E5-2660 (2.20GHz), running under CentOS Linux 7 (Core). Our code, as well as all results and solutions, can be found at \href{https://github.com/jonatasbcchagas/aco_thop}{\textcolor{blue}{https://github.com/jonatasbcchagas/aco\_thop}}. \subsection{Benchmarking instances} To assess the quality of the proposed algorithm, we have used all ThOP instances defined by Santos and Chagas~\cite{santos2018thief}. As stated by the authors, these instances have been created from a benchmark of TTP instances~\cite{polyakovskiy2014comprehensive} by removing the items on city $n$ and by adding a maximum travel time.
They have created 432 instances with the following characteristics: \begin{itemize} \item { numbers of cities: 51, 107, 280, and 1000 (TSP instances ({\tt XXX}): {\it eil51}, {\it pr107}, {\it a280}, {\it dsj1000}); } \item { numbers of items per city ({\tt YY}): {\it 01}, {\it 03}, {\it 05}, and {\it 10}; } \item { types of knapsacks ({\tt ZZZ}): weights and values of the items are bounded and strongly correlated ({\it bsc}), uncorrelated ({\it unc}), or uncorrelated with similar weights ({\it usw}); } \item { sizes of knapsacks ({\tt WW}): {\it 01}, {\it 05} and {\it 10} times the size of the smallest knapsack; } \item { maximum travel times ({\tt TT}): {\it 01}, {\it 02}, and {\it 03} classes. These values refer to 50\%, 75\%, and 100\% of instance-specific reference times defined in the original ThOP paper~\cite{santos2018thief}. } \end{itemize} All 432 ThOP instances can be obtained by combining the different characteristics described above. Each instance is identified as {\tt XXX\_YY\_ZZZ\_WW\_TT.thop}. \subsection{Parameter tuning to gain insights} \label{sec:parameter_tuning} Our first study analyzes the influence of the values of the main parameters of our algorithm. As in the previous work~\cite{santos2018thief} on the ThOP, we have defined as the stopping criterion an execution time of $\lceil\frac{m}{10}\rceil$ seconds, which is given in terms of the number of items $m$ of each particular instance. The ACOTSP framework allows setting a large number of parameters. We consider the following: \textit{ants} defines the number of ants used; \textit{alpha} controls the relative importance of pheromone trails in the construction of tours; \textit{beta} defines the influence of distances between cities for constructing the tours; and \textit{rho} sets the evaporation rate of the pheromone trail.
In addition, we analyze the influence of our parameter \textit{ptries}, which is used for deciding how many attempts our packing algorithm performs to determine the set of stolen items. Table \ref{table:parameter_values} shows the parameter values we have considered in our analysis. The ranges have been selected following preliminary experiments. \begin{table}[!ht] \centering \footnotesize \caption{Parameter values considered during the tuning experiments.} \setlength{\tabcolsep}{15pt} \begin{tabular}{cc} \toprule \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{Investigated values} \\ \midrule ants & $\{10, 20, 50, 100, 200, 500, 1000\}$ \\ alpha & $\{0.00, 0.01, 0.02, \ldots, 10.00\}$ \\ beta & $\{0.00, 0.01, 0.02, \ldots, 10.00\}$ \\ rho & $\{0.00, 0.01, 0.02, \ldots, 1.00\}$ \\ ptries & $\{1, 2, 3, 4, 5\}$ \\ \bottomrule \end{tabular} \label{table:parameter_values} \end{table} In order to find a suitable configuration of parameters among all possible ones, we use the Irace package \cite{lopez2016irace}, which is an implementation of the method I/F-Race \cite{birattari2010f}. The Irace package implements an iterated racing framework for the automatic configuration of algorithms. In our experiments, we have used all Irace default settings, except for the parameter \textit{maxExperiments}, which has been set to 5000. This parameter defines the stopping criterion of the tuning process. We refer the readers to \cite{lopez2016iraceguide} for a complete user guide of the Irace package. To analyze the influence of parameter values across the different types of instances, we divide all 432 instances into 48 groups and then execute Irace on each of them. Each group is identified as {\tt XXX\_YY\_ZZZ}, where {\tt XXX} indicates the TSP base group, {\tt YY} the number of items per city and {\tt ZZZ} the type of knapsack. Each group {\tt XXX\_YY\_ZZZ} contains all nine instances defined with different knapsack sizes and maximum travel times.
\begin{figure*}[!ht] \centering \setcounter{subfigure}{0} \subfloat { \includegraphics[width=0.38\linewidth]{parallel_coord_eil51.pdf} }% \qquad \qquad \subfloat { \includegraphics[width=0.38\linewidth]{parallel_coord_pr107.pdf} }% \subfloat { \includegraphics[width=0.38\linewidth]{parallel_coord_a280.pdf} }% \qquad \qquad \subfloat { \includegraphics[width=0.38\linewidth]{parallel_coord_dsj1000.pdf} }% \caption{Best parameter configurations for the 48 groups of instances.} \label{fig:irace_results} \end{figure*} In Figure \ref{fig:irace_results}, we plot for each group all configurations returned by Irace at the end of its run. Each parallel coordinate plot lists for each of the 48 groups (shown in the left-most column) the configurations returned by Irace (shown in the other columns). As Irace can return more than one configuration, multiple configurations are sometimes shown. Each axis indicates a parameter and its range of values, and each configuration of parameters is described by a line that cuts each parallel axis at its corresponding value. Through the concentration of the lines, we can see which parameter values have been most frequently selected among all tuning experiments. We can make several observations. For example, the number of ants has a higher concentration between 50 and 200, with a higher frequency between 100 and 200 for the groups of instances that consider the TSP bases {\it pr107}, {\it a280}, and {\it dsj1000}. The importance of the pheromone trail has remained close to 1 for all groups of instances. This is generally compensated by the values of \textit{beta}, which vary based on the underlying TSP instance. This is not too surprising, as the underlying TSP instances are different in nature and not normalized, hence requiring different values of beta.
We can also observe that only a few packing attempts (as exhibited by the low \textit{ptries} values) are needed to reach good results, which is especially true for larger instances. To furnish a single parameter configuration that generalizes all tuning results and can also provide an appropriate configuration for new, unseen instances, we average the numerical values and take the mode of the categorical parameter \textit{ptries}. This results in the following configuration: \textit{ants}~=~196, \textit{alpha}~=~1.24, \textit{beta}~=~5.46, \textit{rho}~=~0.51, and \textit{ptries}~=~1. \subsection{Results} \label{sec:results} In order to analyze the efficiency of the proposed algorithm on all ThOP instances, we run our ACO algorithm 10 independent times on each instance, and then use the average value of the objective function and the best one found in these runs in our analysis. Our experiments analyze two versions of our ACO algorithm. In the first one, we consider the algorithm set with the best parameter values found by the Irace package (collectively called ACOThOP*). In contrast, the second version uses the general configuration of parameters derived from the Irace results of all tuning experiments (called ACOThOP). In Figure \ref{fig:santos_vs_aco}, we assess the quality of our algorithm, in its two versions, by comparing the solutions found by it with the best results reached by the algorithms proposed by Santos and Chagas~\cite{santos2018thief}. For each instance, we consider the best-known solution to be a lower bound on the achievable objective value. Then, we take the average results produced by each approach and compute the ratio between that average and the best objective value found, which gives us the approximation ratio. Note that the higher this metric, the higher the average efficiency of that particular solution method. In the figure, we show the results for the 48 previously defined groups of instances.
We report the average approximation ratio obtained for the instances belonging to each group of instances. \begin{figure*}[!ht]% \centering \includegraphics[width=\linewidth]{santos_vs_aco_std.pdf} \caption{Approximation ratio of the solution approaches across different groups of instances -- whiskers show the standard deviation in the groups.} \label{fig:santos_vs_aco} \end{figure*} We can see in Figure \ref{fig:santos_vs_aco} that our algorithm has performed significantly better on all groups of instances, especially on those with larger instances. Note that the algorithms proposed by Santos and Chagas~\cite{santos2018thief} are highly affected by the number of items contained in each city, while our framework appears to do better everywhere, and especially well (relatively speaking) on larger instances, where it finds many new best solutions. Our approaches ACOThOP* and ACOThOP outperform the results achieved in~\cite{santos2018thief} on 419 and 410 out of 432 instances, respectively, based on the average solution quality. Regarding the best results found, our approaches have been able to find better solutions for 410 and 402 instances, respectively. On average, considering the best results obtained for all instances, our approaches ACOThOP* and ACOThOP have been, respectively, 320\% and 313\% better than the best solutions found in~\cite{santos2018thief}. In addition, our results show lower standard deviation values, which indicates a better convergence of our algorithm. To statistically compare the quality of the solutions, we use the Wilcoxon signed-rank test on the results achieved in the 10 independent runs of each solution method. With a significance level of 5\% (\mbox{$p$-value}~$<~0.05$), the performance compared to~\cite{santos2018thief} is as follows on the 432 instances: \begin{itemize} \item ACOThOP* is worse in only 2 cases, there is no difference in 21 cases, and it is better in 409 cases (95\%).
\item ACOThOP is worse in only 18 cases, there is no difference in 12 cases, and it is better in 401 cases (93\%). \end{itemize} Table~\ref{table:solution_structures} summarizes a closer analysis of the solutions found. For each TSP base instance (each of which yields 108 ThOP instances), we show averaged information concerning all the best solutions achieved by each approach. Column $\mathcal{D}$ shows the ratio between the total distance traveled and the number of cities visited by the thief, while columns \textit{\%T} and \textit{\%W} report the percentage of the time limit spent and the percentage of the knapsack capacity used. Values close to 100\% in these last two columns indicate limiting factors. Furthermore, by comparing the values in column $\mathcal{D}$ for the same TSP base instance, we can see which approach found more spread-out routes and/or routes with more edge crossings. As an example, we show this in Figure \ref{fig:pr107_solutions} for the instance \textit{pr107\_10\_usw\_10\_03.thop}; this is an instance with a large performance difference between the two approaches shown. The graphical representation of the solutions plots the cities at their respective coordinates. The initial and final cities are represented by a green triangle and a red square, respectively, while black points represent the other cities. The diameter of the point representing a city is proportional to the profit available in that city. The continuous lines connecting pairs of cities represent the route performed by the thief. The line thickness increases with the total weight picked up by the thief. Our solution follows a significantly more efficient route: it travels a shorter distance and, in doing so, collects a higher profit.
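The three per-solution quantities summarized in Table~\ref{table:solution_structures} are straightforward to compute from a solution; a minimal sketch (function and argument names are ours, not taken from the original implementation):

```python
def solution_metrics(distance, cities_visited, time_used, time_limit,
                     weight_picked, capacity):
    """Return (D, %T, %W) for one ThOP solution:
    D  -- total distance travelled per city visited,
    %T -- percentage of the travel-time limit spent,
    %W -- percentage of the knapsack capacity used.
    Values of %T or %W close to 100 indicate a limiting factor."""
    return (distance / cities_visited,
            100.0 * time_used / time_limit,
            100.0 * weight_picked / capacity)
```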
\begin{table}[!ht] \centering \footnotesize \caption{Information on the structure of the best solutions found.} \setlength{\tabcolsep}{0mm} \renewcommand{\arraystretch}{0.9} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}llrrrlrrrlrrr@{}} \toprule \multirow{2}{*}{TSP base} & & \multicolumn{ 3}{c}{Santos and } & & \multicolumn{ 3}{c}{\multirow{2}{*}{ACOThOP*}} & & \multicolumn{ 3}{c}{\multirow{2}{*}{ACOThOP}} \\ \multicolumn{1}{c}{\multirow{2}{*}{(\tt XXX)}} & & \multicolumn{ 3}{c}{Chagas (2018)} & & \multicolumn{ 3}{c}{} & & \multicolumn{ 3}{c}{} \\ \cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13} & & \multicolumn{1}{c}{$\mathcal{D}$} & \multicolumn{1}{c}{\%T} & \multicolumn{1}{c}{\%W} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$\mathcal{D}$} & \multicolumn{1}{c}{\%T} & \multicolumn{1}{c}{\%W} & & \multicolumn{1}{c}{$\mathcal{D}$} & \multicolumn{1}{c}{\%T} & \multicolumn{1}{c}{\%W} \\ \midrule eil51 & & 12.6 & 92.9 & 73.5 & & 9.7 & 93.1 & 82.6 & & 9.8 & 93.0 & 81.9 \\ pr107 & & 925.2 & 96.0 & 64.7 & & 537.3 & 99.7 & 81.8 & & 543.8 & 99.7 & 81.0 \\ a280 & & 35.3 & 97.9 & 50.5 & & 13.2 & 98.9 & 81.4 & & 13.7 & 98.9 & 80.3 \\ dsj1000 & & 178520.2 & 98.6 & 33.7 & & 30290.7 & 93.5 & 82.1 & & 31204.3 & 93.6 & 81.1 \\ \bottomrule \end{tabular*} \label{table:solution_structures} \centering \end{table} \begin{figure}[!ht] \centering \scriptsize \captionsetup[subfigure]{labelformat=empty, justification=centering} \centering \setcounter{subfigure}{0} \subfloat[][{\scriptsize Santos and Chagas (2018) Profit = 133925 Distance traveled = 54183}] { \fbox{% \includegraphics[width=0.46\linewidth]{{pr107_10_usw_10_03.thop.santos.sol}.pdf} }% }% % % \subfloat[][{\scriptsize ACOThOP* Profit = 474464 Distance traveled = 40427}] { \fbox{ \includegraphics[width=0.46\linewidth]{{pr107_10_usw_10_03.thop.acothopstar.sol}.pdf} }% }% \caption{Graphical representation of the best solution found in \cite{santos2018thief} (left) and the best solution found by our approach ACOThOP* (right) for
the instance \textit{pr107\_10\_usw\_10\_03.thop}.} \label{fig:pr107_solutions} \centering \end{figure} We can observe in Table~\ref{table:solution_structures} that the routes found by our ACO algorithm, in its two versions, are more efficient than those found by Santos and Chagas~\cite{santos2018thief}. In particular, the ratio between the total distance traveled and the number of cities visited is higher for the best solutions found in~\cite{santos2018thief}, especially for instances with more cities. This directly impacts the solutions, which are quickly constrained by the travel-time limit, as can be seen in the corresponding columns for the solutions found in~\cite{santos2018thief}. As our ACO algorithm -- together with our packing routine that fills the knapsack more -- has been able to find more efficient routes, a better balance between the limiting factors has been obtained, which has resulted in significantly better solutions (see again Figure~\ref{fig:santos_vs_aco}). \section{Concluding remarks} \label{sec:conclusions} In this work, we have approached the Thief Orienteering Problem (ThOP), a recent academic multi-component problem that combines two classic combinatorial optimization problems: the Orienteering Problem and the Knapsack Problem. We have proposed a two-phase heuristic algorithm based on Ant Colony Optimization, and we have studied the effect of its components using automated algorithm configuration. Our experiments have shown that both the group-specific best configurations and the averaged configuration outperform the best solutions in the literature on over 90\% of the 432 instances, with an average fitness improvement of over 300\%; the largest improvements occur on the largest instances. Based on our analysis, this is due to the efficiency of the ant colony optimization used to determine the thief's route together with our novel, randomized packing routine.
As future work, we will investigate exact algorithms to solve small and mid-sized ThOP instances in order to establish global optima. Another interesting direction is a version of the problem that considers multiple thieves, which would yield a more generic problem and, for example, capture more faithfully the above-mentioned scenarios of politicians campaigning or rescue teams checking safe places. \vspace{2mm}\noindent\textbf{Acknowledgments.} This study has been financed in part by Coordena\c{c}\~{a}o de A\-per\-fei\-\c{c}o\-a\-men\-to de Pessoal de N\'{i}vel Superior - Brazil (CAPES) - Finance code 001. The authors would also like to thank Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de Minas Gerais (FAPEMIG), Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq), Universidade Federal de Ouro Preto (UFOP) and Universidade Federal de Vi\c{c}osa (UFV) for supporting this research. \bibliographystyle{splncs04}
\section{Introduction} The lipid bilayer of the plasma membrane, which provides the basic structure of eukaryotic cells, is sometimes termed ``Nature's preferred liquid crystal'' \cite{mouritsenchapter}, as it forms a lyotropic smectic liquid crystal phase. Extensive experimental studies of the interactions between the components of the bilayered plasma membrane have revealed that it is comprised of laterally segregated and morphologically distinct domains known as `lipid rafts'. Lipid rafts are considered to be relatively small domains, which are depleted of phospholipids with unsaturated acyl chains but enriched in cholesterol and lipids with saturated acyl chains. Cholesterol-lipid interactions have long been believed to be an important factor behind the formation of cholesterol-rich rafts \cite{silvius}. Cholesterol, which is present at high concentration in the plasma membrane, has crucial effects on the phase behaviour of such phospholipid membranes: introducing cholesterol into a pure phospholipid bilayer shifts the membrane properties to a state intermediate between the liquid-disordered (ld) and liquid-ordered (lo) phases \cite{brown2000}. Further work has suggested that a cholesterol-dependent lateral phase separation \cite{lingwood} between these two phases can occur in binary mixtures of phospholipids with cholesterol. Thus, the co-existence of ld domains rich in unsaturated phospholipids and lo domains rich in cholesterol has been demonstrated as the structure of plasma membranes by extensive model membrane studies \cite{london2002, kane, dietrich, honigmann}. Cholesterol, when dispersed in aqueous media, forms crystals instead of bilayers but, when mixed with other lipids, can adopt complex patterns of lateral organization \cite{silvius, yeagle, yeaglebook, mouritsen95} like the one found in the plasma membrane.
Despite significant advances in theoretical and experimental studies of lipid-cholesterol mixtures, the origin of the microscopic organization of lipid-cholesterol bilayers is still not completely understood. From the structural point of view, a cholesterol molecule has an amphipathic character, just like a phospholipid, due to the presence of a polar head group in addition to a hydrophobic chiral tail \cite{yeagle}, but the dipole moment of the polar head group of cholesterol is much smaller than that of a phospholipid \cite{gopalakrishna}. Cell-biological and biochemical experiments have not only recognized cholesterol as an important constituent of lipid `rafts' in mammalian cell membranes but also found that the membrane cholesterol level is a key factor in regulating raft stability and organization \cite{silvius}. The concentration of cholesterol in the lipid bilayer is also an important controlling factor in many other physiological phenomena. As a major component of the cell membrane, cholesterol can be present over a wide concentration range, $10\%-45\%$. Though the role of the cholesterol concentration in domain formation was not well understood earlier, more recent studies of mixtures of cholesterol with saturated and unsaturated lipids have revealed that the presence of cholesterol at a typical concentration of $33$ mol$\%$ can create lo domains or rafts \cite{london2002}. The self-assembly of amphiphilic molecules like lipids and cholesterol, which involves a number of complex phenomena, is governed by a fine competition between different forces of physical origin \cite{cates}. Due to the considerable complexity of the plasma membrane and of model membranes, computer simulation is an essential technique for studying the self-organization and phase properties of the membrane components \cite{mouritsenchapter}.
Several lattice-model simulation studies \cite{nielsen96, nielsen99, banerjee} have explored the phase equilibria of two-dimensional systems of particles with model interactions that qualitatively imitate the different interactions between lipids and cholesterols. The effect of the cholesterol concentration on domain formation has been studied by varying the percentage of cholesterol molecules in a model multi-component lipid bilayer using random lattice-model simulations allowing both translational and internal degrees of freedom \cite{banerjee}. Various computer simulation investigations \cite{saiz, brannigan, cooke, arnarez, marrink, farago, sodt, allen, whitehead, ayton, sun} of large-scale lipid membrane properties have also been carried out using coarse-grained generic models instead of all-atom models, as the latter require enormous computation time. In this Molecular Dynamics (MD) simulation study of a lipid-cholesterol mixture, the lipid and cholesterol molecules have been modelled in a coarse-grained manner to study the phase equilibria and the dependence of `raft' formation on the presence of cholesterol. As both the lipid and cholesterol molecules have a polar head-group as well as a long apolar tail, they have been modelled as ellipsoidal particles interacting via a van der Waals-type interaction known as the Gay-Berne (GB) potential \cite{GB}. In the field of liquid crystals, this GB force field has been widely used to investigate various phases \cite{allenreview}. In some previous MD simulation studies of lipid bilayers \cite{whitehead, ayton, sun}, GB ellipsoids have likewise been used to model large length-scale properties. In our model, the polar interaction between two lipid molecules, two cholesterol molecules, or one lipid and one cholesterol molecule has been taken as a simple dipole-dipole interaction.
Moreover, as cholesterol molecules are chiral in nature, two cholesterol molecules are additionally considered to interact via a model chiral interaction \cite{paul19}. Thus a lipid molecule has been modelled as an achiral ellipsoidal molecule with a length-to-breadth ratio of $3:1$, and a cholesterol molecule as a polar chiral molecule of ellipsoidal shape and of the same size; in both cases a point dipole is embedded at a distance of $0.5\sigma_0$ from one end of the molecule, $\sigma_0$ being the molecular breadth. The direction of the dipole is fixed at $90\degree$ with respect to the molecular long axis, since the dipolar head-groups of the lipid particles constituting a bilayered membrane preferentially lie in the plane of the bilayer \cite{paul17}. Thus two lipid molecules, or one lipid and one cholesterol molecule, interact via the pair potential given by, \begin{eqnarray} U_{1}(\vec{r}_{ij},\hat{u}_{i},\hat{u}_{j})&=& 4\epsilon(\hat{r}_{ij}, \hat{u}_{i}, \hat{u}_{j})(\rho_{ij}^{-12}-\rho_{ij}^{-6}) \nonumber \\ & & + \frac{1}{r^3_{d}}[\vec{\mu}_{d_{i}}\cdot\vec{\mu}_{d_{j}}-\frac{3}{r^{2}_{d}}(\vec{\mu}_{d_{i}}\cdot\vec{r}_{d})(\vec{\mu}_{d_{j}}\cdot\vec{r}_{d})] \nonumber \\ &=& U_{GB}(\vec{r}_{ij},\hat{u}_{i},\hat{u}_{j}) + U_{dd}(\vec{r}_{d},\hat{u}_{d_{i}},\hat{u}_{d_{j}}) \label{eq:U1} \end{eqnarray} whereas the pair interaction between two cholesterol molecules is taken as, \begin{eqnarray} U_{2}(\vec{r}_{ij},\hat{u}_{i},\hat{u}_{j})&=& 4\epsilon(\hat{r}_{ij}, \hat{u}_{i}, \hat{u}_{j})(\rho_{ij}^{-12}-\rho_{ij}^{-6}) \nonumber \\ & & -c\ 4\epsilon(\hat{r}_{ij}, \hat{u}_{i}, \hat{u}_{j})\rho_{ij}^{-7}\{(\hat{u}_{i}\times\hat{u}_{j})\cdot\hat{r}_{ij}\}(\hat{u}_{i}\cdot\hat{u}_{j}) \nonumber \\ & & +\frac{1}{r^3_{d}}[\vec{\mu}_{d_{i}}\cdot\vec{\mu}_{d_{j}}-\frac{3}{r^{2}_{d}}(\vec{\mu}_{d_{i}}\cdot\vec{r}_{d})(\vec{\mu}_{d_{j}}\cdot\vec{r}_{d})] \nonumber \\ &=& U_{GB}(\vec{r}_{ij},\hat{u}_{i},\hat{u}_{j})+\
U_{c}(\vec{r}_{ij},\hat{u}_{i},\hat{u}_{j}) \nonumber \\ & & +U_{dd}(\vec{r}_{d},\hat{u}_{d_{i}},\hat{u}_{d_{j}}) \label{eq:U2} \end{eqnarray} Here, $U_{GB}$, the well-known GB potential \cite{GB, luckhurst}, is the position- and orientation-dependent achiral potential of the van der Waals type. $U_{c}$ is the chiral interaction potential \cite{memmer}, which induces a twist angle between two side-by-side ellipsoidal molecules and energetically favours a parallel arrangement for two end-to-end molecules, thus giving rise to chiral phases. The sign of the chirality strength parameter $c$ determines the handedness. In the expressions of both $U_{GB}$ and $U_{c}$, the term $\rho_{ij}$ is given by $\rho_{ij} = [r_{ij} - \sigma(\hat{r}_{ij}, \hat{u}_{i}, \hat{u}_{j}) + \sigma_{0}] / \sigma_{0}$. Here, $\sigma(\hat{r}_{ij}, \hat{u}_{i}, \hat{u}_{j})$ is the orientation-dependent separation term and $\epsilon(\hat{r}_{ij}, \hat{u}_{i}, \hat{u}_{j})$ is the orientation-dependent well depth between two molecules $i$ and $j$. The vector $\vec{r}_{ij}=r_{ij}\hat{r}_{ij}$ is the separation between the $i$-th and $j$-th molecules, $\hat{u}_{i}$ and $\hat{u}_{j}$ being the unit vectors along the long axes of the respective molecules. The values of the GB parameters used in our study are $\kappa=3.0$, $\kappa'=1/5$, $\mu=1$, $\nu=2$ for both types of molecules. Lastly, $U_{dd}$ is simply the interaction potential of two point dipoles separated by the vector $\vec{r}_{d}$. The dipole moment vector of the point dipole embedded in the $i$-th molecule is given by $\vec{\mu}_{d_{i}}\equiv \mu^*\hat{u}_{d_{i}}$, where $\mu^*= (\mu^2/\varepsilon_{s}\sigma_{0}^3)^{1/2}$ is the scaled dipole moment strength, the value of which has been fixed to $1.4$ for a lipid molecule, while different values of $\mu^*$ have been considered for the model cholesterol molecule.
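For concreteness, the dipole--dipole term $U_{dd}$ appearing in both pair potentials can be evaluated directly from its definition; a minimal sketch in reduced units (the function and vector names are ours, not from the simulation code):

```python
import math


def dot(a, b):
    """Euclidean dot product of two 3-vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))


def u_dd(mu_i, mu_j, r_d):
    """Dipole-dipole energy in reduced units:
    U_dd = (1/r^3) [ mu_i.mu_j - (3/r^2)(mu_i.r_d)(mu_j.r_d) ]
    for point dipoles mu_i, mu_j separated by the vector r_d."""
    r2 = dot(r_d, r_d)
    r = math.sqrt(r2)
    return (dot(mu_i, mu_j)
            - 3.0 * dot(mu_i, r_d) * dot(mu_j, r_d) / r2) / r ** 3
```

For example, two parallel dipoles side by side (perpendicular to $\vec{r}_d$) repel with $U_{dd}=\mu^2/r_d^3$, while parallel dipoles arranged head to tail along $\vec{r}_d$ attract with $U_{dd}=-2\mu^2/r_d^3$.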
To account for the long-range nature of the dipolar interaction, the conventional reaction-field technique has been used within a sphere of cut-off radius $r_{RF}=0.5/\sigma_0$, taking the dielectric constant of the continuum outside it as $\epsilon_{RF}=1.5$ \cite{berardi99}. To reduce computation time, explicit water interactions have not been considered in this simulation study. In this NVT-MD simulation study, the scaled density $\rho^*$ has been fixed at an optimum value of $0.30$. For each system, a well-equilibrated isotropic phase containing both types of molecules has been used as the initial configuration. The scaled temperature $T^*(=k_{B}T/\epsilon_{s}\text{, }k_{B}\text{ being the Boltzmann constant})$ has been decreased gradually to obtain the equilibrium stable phases at lower temperatures, and at each temperature stage the simulation run has been started from a higher-temperature equilibrium phase. At each temperature stage, to reach the equilibrium phase, a simulation run of nearly $10^6$ MD steps has been performed, and it has been ensured that the rms fluctuation of the average energy of the system remains within $2\%$ of a mean value at equilibrium. A production run of another $5\times 10^4$ steps has been performed to calculate the required averages. The concentration of cholesterol, i.e., of chiral polar molecules, has been taken as $30\%$ of the total number of molecules ($N$), and in the initial isotropic phase both types of molecules are distributed randomly over the whole simulation box. Simulation results for $N=864$ \& $1372$ molecules have been compared and show qualitatively the same behaviour. The strength of the chiral interaction, i.e. the value of the chirality strength parameter $c$, is an important factor which can control the phase properties of an assembly of polar chiral molecules \cite{paul19}.
In this study of a mixture of polar chiral and achiral molecules, the effect of the chirality strength parameter $c$ has been examined by performing separate simulation studies for two different values of $c$. The results have been checked for two separate values of the scaled dipole moment of the chiral molecules ($\mu^*_c$, say), $0.1$ and $0.5$, whereas that of the achiral polar molecules ($\mu^*_l$, say), representing lipid molecules, has been set to a relatively larger value, $1.4$. When $c$ is set to $1.0$, for both systems ($\mu^*_c=0.1$ and $0.5$), nematic ordering has formed as the scaled temperature $T^*$ has been decreased gradually starting from an isotropic phase. The orientational correlation function, $g_2(r^*)=\frac{3}{2}\langle (\hat{u}_{i}\cdot \hat{u}_{j})^2\rangle_{r^*} -\frac{1}{2}$, i.e. the average of the second Legendre polynomial over pairs of molecules at scaled separation $r^*$, has been checked (figure: \ref{fig:g2ofr_ch1.0}) and shows strong orientational correlation over the whole system. The plots of the radial distribution function, or pair distribution function, $g(r^*)=V\langle \sum_i \sum_{j\neq i}\delta(r^*-r^*_{ij})\rangle /N^2$ \cite{allenbook}, show short-range positional order but little order at long range in this phase (figure: \ref{fig:gofr_ch1.0}). On further decrease in the temperature, smectic layers have been formed from the nematic phase. In this case, however, both types of molecules are mixed in such a way that there exist tiny domains of polar chiral molecules inside the matrix of polar non-chiral molecules. Such domains, though weakly formed, can be found by observing the structures of the stable phases obtained in this case for different $\mu^*$ values. Snapshots of these phases obtained with $\mu^*_c=0.1$ and $0.5$ have been presented in figures \ref{qmga_ch1.0_1.4_0.1} and \ref{qmga_ch1.0_1.4_0.5} respectively.
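The orientational correlation can be accumulated pair by pair as the second Legendre polynomial of $\hat{u}_i\cdot\hat{u}_j$, binned over the pair separation; a minimal stdlib sketch of such a histogram (the binning scheme and names are ours, and periodic boundaries are ignored for brevity):

```python
import math
from collections import defaultdict


def p2(x):
    """Second Legendre polynomial P2(x) = (3x^2 - 1)/2."""
    return 0.5 * (3.0 * x * x - 1.0)


def g2_histogram(positions, orientations, dr=0.1):
    """Accumulate g2(r) = <P2(u_i . u_j)> over all molecule pairs,
    binned by the centre-of-mass separation with bin width dr."""
    sums, counts = defaultdict(float), defaultdict(int)
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = math.dist(positions[i], positions[j])
            cos_t = sum(a * b for a, b in
                        zip(orientations[i], orientations[j]))
            k = int(rij / dr)          # bin index
            sums[k] += p2(cos_t)
            counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

For a perfectly aligned configuration every bin evaluates to 1, while an isotropic phase decays towards 0 with increasing separation.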
\begin{figure}[h] \begin{center} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{g2ofr_ch1.0.eps} \caption{\label{fig:g2ofr_ch1.0}} \end{subfigure}\hfill \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{gofr_ch1.0.eps} \caption{\label{fig:gofr_ch1.0}} \end{subfigure}\\ \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{g2ofr_ch1.5.eps} \caption{\label{fig:g2ofr_ch1.5}} \end{subfigure}\hfill \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{gofr_ch1.5.eps} \caption{\label{fig:gofr_ch1.5}} \end{subfigure} \caption{Plots of (a) $g_{2}(r^*)$ and (b) $g(r^*)$ for $c=1.0$; (c) $g_{2}(r^*)$ and (d) $g(r^*)$ for $c=1.5$.\label{fig:g2ofr_gofr}} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{subfigure}{0.21\textwidth} \includegraphics[width=\textwidth]{qmgaconf_t1.6_ch1.0pl1.0transd0.30_1.4_0.1.eps} \caption{\label{qmga_ch1.0_1.4_0.1}} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\textwidth]{qmgaconf_t1.6_ch1.0pl1.0transd0.30_1.4_0.5.eps} \caption{\label{qmga_ch1.0_1.4_0.5}} \end{subfigure}\\ \begin{subfigure}{0.23\textwidth} \includegraphics[width=\textwidth]{qmgaconf_t1.8_ch1.0pl1.0transd0.30_1.4_1.4.eps} \caption{\label{qmga_ch1.0_1.4_1.4}} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\textwidth]{qmgaconf_t1.6_ch1.5pl1.0transd0.30_1.4_0.1.eps} \caption{\label{qmga_ch1.5_1.4_0.1}} \end{subfigure}\\ \begin{subfigure}{0.23\textwidth} \includegraphics[width=\textwidth]{qmgaconf_t1.6_ch1.5pl1.0transd0.30_1.4_0.5.eps} \caption{\label{qmga_ch1.5_1.4_0.5}} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\textwidth]{qmgaconf_t1.8_ch1.5pl1.0transd0.30_1.4_1.4.eps} \caption{\label{qmga_ch1.5_1.4_1.4}} \end{subfigure} \caption{Snapshots of the configurations for $c=1.0$, $\mu^*_l=1.4$, (a) $\mu^*_c=0.1$, (b) $\mu^*_c=0.5$, (c) $\mu^*_c=1.4$ and for $c=1.5$, $\mu^*_l=1.4$, (d) $\mu^*_c=0.1$, (e) $\mu^*_c=0.5$, 
(f) $\mu^*_c=1.4$. Polar achiral molecules are shown in red and polar chiral molecules in blue. Positions of the dipoles are shown as black dots.} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \begin{subfigure}{0.48\textwidth} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{layer1_t1.6_ch1.5pl1.0transd0.30_1.4_0.1.eps} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{layer2_t1.6_ch1.5pl1.0transd0.30_1.4_0.1.eps} \end{subfigure}\\ \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{layer3_t1.6_ch1.5pl1.0transd0.30_1.4_0.1.eps} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{layer5_t1.6_ch1.5pl1.0transd0.30_1.4_0.1.eps} \end{subfigure} \caption{$c=1.5$, $\mu^*_c=0.1$, $\mu^*_l=1.4$.\label{fig:layer_c1.5_1.4_0.1}} \end{subfigure}\\ \begin{subfigure}{0.48\textwidth} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer1_t1.6_ch1.5pl1.0transd0.30_1.4_0.5.eps} \end{subfigure} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer2_t1.6_ch1.5pl1.0transd0.30_1.4_0.5.eps} \end{subfigure}\\ \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer3_t1.6_ch1.5pl1.0transd0.30_1.4_0.5.eps} \end{subfigure} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer6_t1.6_ch1.5pl1.0transd0.30_1.4_0.5.eps} \end{subfigure} \caption{$c=1.5$, $\mu^*_c=0.5$, $\mu^*_l=1.4$.\label{fig:layer_c1.5_1.4_0.5}} \end{subfigure} \caption{Snapshots of the configurations of some of the smectic layers obtained with $c=1.5$, $\mu^*_l=1.4$, (a) $\mu^*_c=0.1$, and (b) $\mu^*_c=0.5$, for $N=864$. A view from the top of the layers, i.e. along the direction of the director axis, has been presented to show the particle positions only.
Particles in red represent polar achiral molecules and particles in blue are polar chiral molecules.\label{fig:layers_1}} \end{center} \end{figure} When $c$ has been set to $1.5$, with a relative concentration of $30\%$ chiral polar molecules, similar nematic ordering has been found to occur on decreasing the temperature from an isotropic phase while keeping the density of the system fixed. Plots of the orientational distribution function $g_2(r^*)$ in this case, too, show strong long-range orientational order (figure: \ref{fig:g2ofr_ch1.5}), and the plots of the pair distribution function $g(r^*)$ (figure: \ref{fig:gofr_ch1.5}) show qualitatively similar behaviour to the case $c=1.0$. Interestingly, in this case the chiral polar molecules have aggregated to form small domains of chiral molecules as the system has been allowed to evolve further at fixed temperature. Similar phases have been formed for both values $\mu^*_c=0.1$ and $0.5$. As the temperature has been decreased further, starting from such a phase, smectic layers have been formed. Snapshots of the configurations of some of the smectic layers of considerable size, obtained in both cases, have been presented in figure \ref{fig:layers_1}, where the small domains of polar chiral molecules can clearly be identified. A comparison of the results obtained with $c=1.0$ and $c=1.5$ shows that the formation of domains enriched with polar chiral molecules is relatively stronger for $c=1.5$.
\begin{figure}[h] \begin{center} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth, height=0.35\textwidth]{gofz_c1.5_1.4_0.1.eps} \caption{$c=1.5$, $\mu^*_c=0.1$, $\mu^*_l=1.4$.\label{gofz_c1.5_1.4_0.1}} \end{subfigure}\\ \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth, height=0.35\textwidth]{gofz_c1.5_1.4_0.5.eps} \caption{$c=1.5$, $\mu^*_c=0.5$, $\mu^*_l=1.4$.\label{gofz_c1.5_1.4_0.5}} \end{subfigure} \caption{Plots of $g(z^*)$ and $g_d(z^*)$ for different systems with $c=1.5$.\label{gofz_c1.5_0.1and0.5}} \end{center} \end{figure} In these cases, with $\mu^*_c=0.1$ and $0.5$, partial bilayer ordering has been found between the smectic layers. The polar achiral particles with dipole moment $\mu^*_l=1.4$, which are present at the higher concentration ($70\%$) in the system, form these bilayered smectic domains, whereas the small smectic domains of chiral polar particles do not form bilayers. The plots of the pair correlation functions of the molecular centres of mass, $g(z^*)$, and of the dipolar positions, $g_d(z^*)$, along the director axis (figure: \ref{gofz_c1.5_0.1and0.5}), computed in a fashion similar to $g(r^*)$, support the smectic layer formation. However, if the functions are plotted taking both types of particles into account, the plots do not show clear evidence of bilayered domains (see the plots labelled `mixture' in figures \ref{gofz_c1.5_1.4_0.1} and \ref{gofz_c1.5_1.4_0.5}).
When the two types of molecules are considered separately, however, the picture changes. For the polar achiral molecules (the plots labelled `lipids' in figures \ref{gofz_c1.5_1.4_0.1} and \ref{gofz_c1.5_1.4_0.5}), alternate peaks of both functions have comparable heights, indicating the partial presence of a bilayer in which the dipolar parts of the polar achiral molecules (the model lipid molecules) of two adjacent smectic layers have gathered together; for the polar chiral molecules (the model cholesterol molecules), all peaks of both functions have comparable heights (the plots labelled `cholesterols' in figures \ref{gofz_c1.5_1.4_0.1} and \ref{gofz_c1.5_1.4_0.5}). Clearly, this partial bilayer formation is more prominent in the system with $\mu^*_c=0.5$ than with $\mu^*_c=0.1$. Snapshots of the configurations obtained in these cases are shown in figures \ref{qmga_ch1.5_1.4_0.1} and \ref{qmga_ch1.5_1.4_0.5}. \begin{figure}[h] \begin{center} \includegraphics[width=0.48\textwidth]{gofz_c1.5_1.4_1.4.eps} \caption{Plots of $g(z^*)$ and $g_d(z^*)$ for the system with $c=1.5$, $\mu^*_c=1.4$, $\mu^*_l=1.4$.\label{gofz_c1.5_1.4_1.4}} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{subfigure}{0.48\textwidth} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer1_t1.8_ch1.5pl1.0transd0.30_1.4_1.4.eps} \end{subfigure} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer4_t1.8_ch1.5pl1.0transd0.30_1.4_1.4.eps} \end{subfigure}\\ \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer5_t1.8_ch1.5pl1.0transd0.30_1.4_1.4.eps} \end{subfigure} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{layer6_t1.8_ch1.5pl1.0transd0.30_1.4_1.4.eps} \end{subfigure} \end{subfigure} \caption{Snapshots of the configurations of some of the smectic layers obtained with $c=1.5$, $\mu^*_l=1.4$ and $\mu^*_c=1.4$, for $N=864$. A view from the top of the layers, i.e.
along the direction of the director axis, has been presented to show the particle positions only. Particles in red represent polar achiral molecules and particles in blue are polar chiral molecules.\label{fig:layers_2}} \end{center} \end{figure} To check the effect of the dipole moment, a simulation run has been performed with the value of $\mu^*_c$ equal to that of $\mu^*_l$, i.e., both equal to $1.4$. For both values $c=1.0$ and $1.5$, in these systems too, nematic phases have been formed (figures: \ref{fig:g2ofr_ch1.0} and \ref{fig:g2ofr_ch1.5}) on decreasing the temperature from the respective higher-temperature isotropic phases. On further decrease in the temperature, smectic layers have been formed but, most interestingly, complete bilayered ordering has been found here in the smectic layers, i.e., the dipolar parts of both types of molecules of adjacent smectic layers have gathered together to form a uniform bilayered structure. Snapshots of the configurations obtained with these $\mu^*_l$ and $\mu^*_c$ values have been presented in figure \ref{qmga_ch1.0_1.4_1.4} for a system having $c=1.0$ and in figure \ref{qmga_ch1.5_1.4_1.4} for $c=1.5$. The plots of $g(z^*)$ and $g_d(z^*)$ (figure: \ref{gofz_c1.5_1.4_1.4} with $c=1.5$) for the mixture of both types of molecules and separately for each type of molecule show similar variations. In these cases, alternate peaks of comparable heights for both functions occur at the same positions, indicating the presence of completely bilayered smectic layers. Small domains of chiral molecules have also been found to occur in this case. Snapshots of some smectic layers in the configuration obtained in a system with $c=1.5$ have been shown in figure \ref{fig:layers_2}, which show the presence of small domains rich in polar chiral molecules, i.e., the model cholesterol molecules. These domains are also bilayered in this case, as supported by the plots of $g(z^*)$ and $g_d(z^*)$ (figure: \ref{gofz_c1.5_1.4_1.4}).
The highest scaled temperature at which this completely bilayered smectic phase forms, $T^*=1.8$, is greater than that at which the partially bilayered smectic phases form with lower values of $\mu^*_c$: for $\mu^*_c=0.1$ and $0.5$, the partial bilayers start forming at a scaled temperature $T^*=1.6$. In this NVT Molecular Dynamics simulation study, it has been found that the strength of the chiral interaction is an important factor controlling domain formation in a mixture of polar achiral molecules and polar chiral molecules of the same size. Here, the polar achiral molecules have been used as a coarse-grained model of lipid molecules and the polar chiral molecules as a model of cholesterol molecules. Formation of stable bilayers has been achieved by controlling the dipole moment of the chiral molecules. Despite the simple coarse-grained modelling of a mixture of lipids and cholesterols, this simulation study successfully generates small cholesterol-rich smectic domains in a matrix of lipid multi-bilayers from molecular-level physical interactions. Thus, this study represents a computational model for the `rafts' found in lipid membranes, whose study is important not only for their complex structure and phase behaviour but also for their biological role. Further studies focusing on molecular-level coarse-grained modelling of the specific affinities between different kinds of lipids (e.g. phospholipids, sphingolipids), cholesterols and proteins are needed to develop a complete understanding of the complex phase behaviour of the plasma membrane. The phase separation between cholesterol and lipid molecules obtained in the present simulation may be considered a first step towards simulating the organization of rafts.
Considering all these points, it is our aim to study a multicomponent bilayer, consisting of phospholipids, sphingolipids, cholesterols and proteins in suitable proportions in an appropriate environment, to understand the mechanism behind raft formation.
\section{Introduction} \label{sec:introduction} Tracking extended objects from image sequences in the presence of noise is required in many different fields. Within astronomy it is used in \ac{AO}, both for granular images of the sun during the day \citep{Scharmer2003,Rimmelea1998,Michau1993}, and for elongated laser guide stars at night \citep{Thomas2008}. Image shift measurement is also used in other fields, for tracking biological samples \citep{Hand2009}, and video motion tracking applications. In solar \ac{AO}, Shack-Hartmann wave-front sensors \citep{shack1971} with large fields of view are typically employed. The cameras used in these sensors have large full well depths, as the instruments are photon-noise limited. In this work, we investigate tracking extended objects using data acquired from the Swedish Solar Telescope on-line gallery \citep{Scharmer1999a} as our wave-front sensor images (Fig.~\ref{fig:granules}), which have an rms contrast of $10\%$. We assume a photon-noise limited camera with a signal defined as the peak intensity above the background, and a noise level defined by the photon-noise. For camera pixels with a typical full well depth of 40000 electrons, the signal would be 4000 electrons, with photon-noise from 40000 electrons, giving a \ac{SNR} of 20. Shift measurements of extended objects in solar \ac{AO} are calculated in a two-step process \citep{Michau2006}. Initially, an integer shift measurement is performed by locating the peak of a cross-correlation of the image with a reference image \citep{Miura2009}. Secondly, the sub-pixel shift is estimated. The determination of the peak location to sub-pixel accuracy limits the accuracy to which the shift measurement can be performed. \begin{figure} \begin{center} \includegraphics{granules.pdf} \end{center} \caption{Image of solar granulation used as the input image in the simulations. The full image is $75 \times 75$ arc-seconds.
Small regions of the image are taken and then shifted with respect to each other to artificially generate shifts similar to the effect of atmospheric turbulence. One such region is shown to the right of the full image. It has been re-sampled to the resolution used in the simulations, $0.4 \mathrm{arc-seconds}/\mathrm{pixel}$. Data obtained from the Swedish Solar Telescope On-line Gallery \citep{sst_gallery}.} \label{fig:granules} \end{figure} We concern ourselves with how to best estimate the peak location to a sub-pixel accuracy for an arbitrarily shaped correlation function derived from cross-correlating the object with a reference image. For point sources and the resultant Airy functions there are analytical methods to determine optimal parameters for peak location at a given \ac{SNR} \citep{Pan2008}. However, no such analytical treatment exists for images of arbitrary content, such as the results of a correlating wave-front sensor. Motivated by this, we developed a method to optimise the parameters for a windowed, thresholded center of mass measurement for a given \ac{SNR}. We compare this technique with an analytic 2D parabolic fit to the central $3 \times 3$ pixel region around the correlation peak, as described in \citet{Lofdahl2010}. This method was chosen as a comparison as its performance is similar to the 2D quadratic interpolation method, and significantly better than the 1D techniques \citep{Lofdahl2010} and Gaussian fitting algorithms \citep{Waldmann2007}. \section{Correlation Image Generation} \label{sec:simulation} Simulations for this paper were run on images containing solar granulation of a size, field of view and contrast typical for solar \ac{AO}, taken from the large image shown in Fig.~\ref{fig:granules} \citep{Scharmer1999a}. The image was shifted and binned in order to generate images containing sub-pixel shifts using the Python language, and numpy routines \citep{Vanderwalt2011}.
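The shift-and-bin procedure can be sketched as follows; the array sizes, the synthetic test pattern and the scaling of the noise model are illustrative assumptions, not the actual pipeline used with the granulation data:

```python
import numpy as np

def make_subaperture(image, shift, bin_factor=10, flux=40000, rng=None):
    """Apply an integer shift at full resolution, bin down, and add
    shot noise -- producing a sub-aperture image with a known
    sub-pixel shift of shift / bin_factor binned pixels."""
    if rng is None:
        rng = np.random.default_rng()
    dy, dx = shift
    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    h, w = shifted.shape
    binned = shifted[: h - h % bin_factor, : w - w % bin_factor]
    binned = binned.reshape(h // bin_factor, bin_factor,
                            w // bin_factor, bin_factor).mean(axis=(1, 3))
    # Scale to a mean level of `flux` electrons and apply photon noise.
    scaled = binned / binned.mean() * flux
    return rng.poisson(scaled).astype(float)

# Example with a synthetic "granulation-like" pattern of 10% contrast.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:240, 0:240]
pattern = 1.0 + 0.1 * np.sin(2 * np.pi * x / 40) * np.sin(2 * np.pi * y / 40)
sub = make_subaperture(pattern, shift=(3, -2), rng=rng)
```

A full-resolution shift of 3 pixels followed by binning by 10 corresponds to a 0.3-pixel shift in the $24 \times 24$ sub-aperture image.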
The solar granulation case used here is an example to demonstrate the use of the centroiding technique, though in general it should be applicable to any extended image. Most of the computational load in centroiding extended objects lies in cross-correlating images. Since the variable parameters are applied to the centroiding of the correlation images, the same correlation images can be reused while the parameters are varied. Regions of $240 \times 240$ pixels were taken from Fig.~\ref{fig:granules}, corresponding to a $9.6$ arc-second field of view. Integer shifts were performed on the full resolution image, with a Gaussian distribution of mean 0 and standard deviation of 1 pixel, then the resultant images were binned by a factor of 10 and had shot noise applied, creating typical sub-aperture images of $24 \times 24$ pixels with well-defined shifts in steps of 0.1 pixels. These values were chosen to be indicative of residuals in a closed-loop \ac{AO} system. The images, along with the known applied shifts, were used to compare the windowed 2D parabolic fit \citep{Lofdahl2010} and the windowed, thresholded center of mass methods. \section{Peak Location on a Correlation Image} \label{sec:centroiding} \subsection{Windowed Parabolic Fit} \label{sec:parabolic} A small $3 \times 3$ region around the peak of the correlation image can be fitted by a 2D parabola, as described in \citet{Lofdahl2010}. The parabola takes the form: \begin{equation} f(x, y) = a_1 + a_2x + a_3x^2 + a_4y + a_5y^2 + a_6xy, \end{equation} where the location of the extremum, in $x$ and $y$ respectively, is given analytically by: \begin{align} x_{\text{min}} &= i_{\text{min}} + (2a_2a_5 - a_4a_6)/(a^2_6 - 4a_3a_5) \\ y_{\text{min}} &= j_{\text{min}} + (2a_3a_4 - a_2a_6)/(a^2_6 - 4a_3a_5).
\end{align} where $i_{\text{min}}$ and $j_{\text{min}}$ are the integer positions of the peak of the correlation in $x$ and $y$ respectively, and the solution to a least squares fit can be found analytically: \begin{equation} \begin{array}{l l l l l} a_2 &= \left( \left< s_{1,j} \right>_j - \left< s_{-1,j} \right> _j \right) /2 \\ a_3 &= \left( \left< s_{1,j} \right> _j - 2\left< s_{0,j} \right> _j + \left< s_{-1,j} \right> _j\right)/2 \\ a_4 &= \left( \left< s_{i,1} \right> _i - \left< s_{i,-1} \right> _i \right)/2 \\ a_5 &= \left( \left< s_{i,1} \right> _i - 2 \left< s_{i,0} \right> _i + \left< s_{i, -1} \right> _i \right) /2 \\ a_6 &= \left(s_{1,1} - s_{-1,1} - s_{1,-1} + s_{-1,-1} \right) /4 \end{array} . \label{eqn:2d_quad} \end{equation} where $s$ describes the $3 \times 3$ windowed region around the correlation peak, $s_{i,j}$ describes the $i^{\text{th}}$ and $j^{\text{th}}$ element of $s$, and $i$,$j$ can take values from $-1$ to $1$ around the center of the peak (located at $s_{0,0}$). At high \ac{SNR} the limiting error in this technique arises from the biased sampling of the core of the correlation peak, illustrated in Fig.~\ref{fig:aliasing}. The sampling of the correlation peak results in a systematic rounding effect which biases the shift estimates towards integer values. The cause of this error is apparent in Fig.~\ref{fig:bar_alias}. Here we see the regions windowed for use in the centroid highlighted in red. This is a good mask for Fig.~\ref{fig:bar_alias_a}; however, centering on the brightest pixel in Fig.~\ref{fig:bar_alias_b} shows that the peak is being under-sampled, so the full shape of the peak is not taken into account, giving an incorrect estimate of the peak location. \begin{figure} \centering \subfigure[]{\label{fig:aliasing_a} \includegraphics{aliasing_a.pdf} } \\ \subfigure[]{\label{fig:aliasing_b} \includegraphics{aliasing_b.pdf} } \caption{Measured image shift plotted against the actual shift applied to images.
The negative y shifts are plotted in \subref{fig:aliasing_a} to make them easier to distinguish. There is a ``wobble'' apparent in the two lines, which is more clearly visible as a systematic effect in \subref{fig:aliasing_b}, where the residuals are plotted and take a ``sawtooth''-like pattern. This aliasing effect arises from under-sampling the correlation peak.} \label{fig:aliasing} \end{figure} \begin{figure} \centering \subfigure[]{\label{fig:bar_alias_a} \includegraphics{demo_a.pdf} } \\ \subfigure[]{\label{fig:bar_alias_b} \includegraphics{demo_b.pdf} } \caption{An illustrative 1D cut through a correlation peak, with the region used by the parabolic fit highlighted in red. Using only 3 pixels around the correlation peak, the shift estimate can be unavoidably biased away from the true location of the peak. While \subref{fig:bar_alias_a} shows the ideal case for using this method, there are some cases where the shift differs from the measured position due to the limited size of the region used, as demonstrated in \subref{fig:bar_alias_b}. This is shown by the arrows above the plots: the green arrow indicates where the parabolic centroid estimates the correlation peak, while the blue arrow shows the true location of the peak.} \label{fig:bar_alias} \end{figure} \subsection{Windowed, adaptive thresholding Center of Mass} \label{sec:adaptive} The simplest way to avoid under-sampling the correlation peak is to use a larger window; however, this allows more noise into the shift estimate. The noise can be removed to some extent by using a threshold to reject contributions from parts of the signal of similar strength to the noise. For a given autocorrelation shape and noise level there will be an optimal window size and threshold value, which gives the best estimate of the image shift. Our proposed method is a two-step process. Initially, a window is placed around the correlation peak, then a thresholded center of mass is taken of the windowed region.
The size of the window function and the threshold value are variable for each set of images. The threshold value is taken as a fraction of the relative peak intensity (max-min of the whole correlation image). Re-normalising the intensity for every image is sympathetic to the shape and size of the correlation peak, and ensures that proportionally the same amount of the core of the peak is used in every measurement of the image shift, reducing bias effects. The correlation image initially has a threshold applied, where pixels are rejected if their intensity falls below the threshold level $I_{\mathit{thresh}}$, defined by: \begin{equation} I_{\mathit{thresh}} = \left(I_{\mathit{max}} - I_{\mathit{min}}\right) \times pct, \label{eqn:threshold} \end{equation} where $I_{\mathit{thresh}}$ is the threshold intensity, $I_{\mathit{max}}$ is the maximum intensity in the correlation image, $I_{\mathit{min}}$ is the minimum intensity of the correlation image and $pct$ is the fractional threshold value. The thresholded correlation image is then masked to the chosen window size and background subtracted, where the background value is the threshold intensity. The centroid estimate of image $i$, using a reference image $r$, can be described as a vector $\mathbf{R}^{i, r}$: \begin{equation} \mathbf{R}^{i, r} = \left[ \begin{array}{l} x_0 \\ y_0 \end{array} \right] \end{equation} where $x_0$ and $y_0$ are the $x$ and $y$ components of the centroid estimate $\mathbf{R}^{i, r}$. $\mathbf{R}^{i, r}$ is calculated using: \begin{equation} \mathbf{R}^{i, r} = \frac{1}{I} \sum^{y_{\mathrm{max}}}_{y=1} \sum^{x_{\mathrm{max}}}_{x=1} I_{x,y} \mathbf{R}^{i, r}_{x,y} \label{eqn:com} \end{equation} where $I$ is the total intensity of the thresholded correlation image, $I_{x,y}$ is the intensity of pixel $x, y$ in the correlation image with the threshold applied, and $\mathbf{R}^{i,r}_{x,y}$ is the vector position of $[x, y]$ in the correlation image.
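A minimal sketch of the windowed, thresholded center of mass described by these equations follows; the edge handling and the choice of measuring the threshold above the image minimum are our own assumptions:

```python
import numpy as np

def thresholded_com(corr, window, pct):
    """Windowed, thresholded centre-of-mass peak location on a
    correlation image.  Returns (y0, x0) in image coordinates."""
    # Threshold as a fraction of the peak-to-trough range,
    # measured above the image minimum (an assumed convention).
    thresh = corr.min() + (corr.max() - corr.min()) * pct
    jpk, ipk = np.unravel_index(np.argmax(corr), corr.shape)
    half = window // 2
    ylo, yhi = max(jpk - half, 0), min(jpk + half + 1, corr.shape[0])
    xlo, xhi = max(ipk - half, 0), min(ipk + half + 1, corr.shape[1])
    # Background-subtract by the threshold and reject pixels below it.
    region = corr[ylo:yhi, xlo:xhi] - thresh
    region[region < 0] = 0.0
    total = region.sum()
    ys, xs = np.mgrid[ylo:yhi, xlo:xhi]
    return (ys * region).sum() / total, (xs * region).sum() / total

# A noiseless Gaussian peak centred at (12.3, 10.7) is recovered
# to sub-pixel accuracy.
y, x = np.mgrid[0:24, 0:24]
corr = np.exp(-((y - 12.3) ** 2 + (x - 10.7) ** 2) / 8.0)
y0, x0 = thresholded_com(corr, window=7, pct=0.1)
```

With a larger window and lower threshold more of the peak is used, at the price of admitting more noise, which is exactly the trade-off explored below.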
The size of the window is a relatively small parameter space to explore, going from a single pixel around the core (corresponding to an integer shift measurement), to the wings of the correlation peak succumbing to background noise. If any larger boxes are used a drop off in performance is seen as more noise is included in the centroid estimate, without any extra useful information being added. The outer threshold for this parameter needs to be set arbitrarily. If too small a window is used, a similar effect to the windowed parabolic fit is seen, in that the measurements are biased toward integer shifts. The optimum window size is chosen as a trade off between including as much of the correlation peak as possible, but also minimizing the number of pixels which only contribute noise to the measurement. The centroid threshold value is normalised such that a value of 1 uses only pixels with the maximum flux and a threshold of 0 uses all available pixels. This parameter behaves similarly to the window size, in that using more pixels increases the noise contribution, reducing the accuracy of the shift estimate. Using high thresholds gives rise to a bias towards integer shift measurements, similar to that seen in the parabolic fit. The optimum threshold value lies somewhere between these two regimes, and is liable to change depending on the window size. This means the whole parameter space needs to be explored for all window sizes to identify the best combination of parameters for the centroids. We optimise the threshold and window size for a set of images, which all use a common reference image. The parameters then only need to be updated when the reference image is changed to take into account slow changing effects, such as the evolving granulation pattern on the solar surface. 
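The exhaustive exploration of the (window size, threshold) plane described above can be sketched as a simple grid search; the score function here is a stand-in for the estimated centroid error at each parameter pair, with a toy bowl-shaped surface used for illustration:

```python
import itertools

import numpy as np

def grid_search(score, windows, thresholds):
    """Exhaustively search the (window size, threshold) plane and
    return the pair minimising `score`."""
    return min(itertools.product(windows, thresholds),
               key=lambda wt: score(*wt))

def toy_score(window, threshold):
    # A bowl-shaped error surface with a single minimum at
    # window = 7, threshold = 0.3, mimicking the behaviour of the
    # real error surface described in the text.
    return (window - 7) ** 2 + 10.0 * (threshold - 0.3) ** 2

best = grid_search(toy_score,
                   windows=range(3, 16, 2),
                   thresholds=np.arange(0.0, 1.0, 0.05))
```

In practice `score` would be the multi-reference error estimate evaluated on a set of correlation images, so the search only needs to be re-run when the reference image is updated.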
The optimum set of parameters will depend on a number of factors, including the image content, the shape of the resulting correlation function, and the \ac{SNR} of the images. For an arbitrary, unknown correlation shape there is no obvious analytic way to determine the best parameters for a given set of images or circumstances, hence we explore the parameter space to find the optimal solution. However, once the optimum set of parameters is found for a given object, at a set \ac{SNR} level, it should remain constant until one of these factors changes. In solar \ac{AO} the regions used for wavefront sensing are constantly evolving, causing the reference image used to be updated on a frame-by-frame basis. This also means that over time the optimum parameters are subject to change and need to be updated. As the parameters chosen are based on normalised intensity, they are insensitive to changes in flux for a given \ac{SNR}, such as scintillation effects. \subsection{Error estimation} \label{sec:estimation} Given a set of shifted images of matching content and \ac{SNR}, it is possible to make multiple independent estimates of the image shift. By comparing the spread of the shift estimates we can get an estimate of the error on the shift measurement. Using different reference images allows us to estimate the shift in each image multiple times; we can then use the standard deviation of the shift estimates as an indicator of the error on the shift estimate. As this is a statistical process, the estimated error will not be accurate for the shift estimate of a single image in the set. However, when averaged over the set of images, we can estimate the magnitude of the shift error on the set. This set of images may be drawn from a single temporal wave-front sensor frame in \ac{AO}, guaranteeing spatial similarity of the images. Alternatively the set could be drawn from a time sequence in correlation video tracking.
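This multi-reference error estimate can be sketched as follows; since each reference image carries its own unknown shift, the per-reference mean is subtracted before comparing estimates (a simplified, one-component sketch with synthetic numbers):

```python
import numpy as np

def multi_reference_error(shifts):
    """shifts[r, i]: shift estimate of image i using reference r
    (one component).  Remove the per-reference mean (which absorbs
    each reference's unknown offset), then use the spread across
    references, averaged over the image set, as the error estimate."""
    shifts = np.asarray(shifts, dtype=float)
    detrended = shifts - shifts.mean(axis=1, keepdims=True)
    # Standard deviation over references, averaged over the images.
    return detrended.std(axis=0).mean()

# Example: identical per-image shifts seen through references with
# different (unknown) offsets plus centroiding noise of std 0.05.
rng = np.random.default_rng(2)
true = rng.normal(0, 1, size=20)            # true shifts of 20 images
offsets = rng.normal(0, 5, size=(10, 1))    # per-reference offsets
noise = rng.normal(0, 0.05, size=(10, 20))  # measurement noise
est = multi_reference_error(true[None, :] + offsets + noise)
```

The recovered estimate is close to the injected measurement noise level, while being insensitive to both the true shifts and the reference offsets.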
Care must be taken that the object does not change its spatial characteristics significantly over the duration of the set. We take multiple different reference images, \emph{e.g.} the first 10 sub-aperture images in the wave-front sensor frame, and use each of them as a reference to estimate image shifts. The global tip/tilt terms are then removed to compensate for the systematic error in shift estimation due to the unknown shift applied to the reference image. This is a common practice in \ac{AO} systems to negate effects like wind shake from measurements. The subtraction of the global tip/tilt term can be described with: \begin{equation} \mathbf{R}^{i, r}_{\mathit{t/t}} = \mathbf{R}^{i, r} - \left<{\mathbf{R}^{i, r}}\right>_{i}, \label{eqn:average} \end{equation} for a given reference image, where $\mathbf{R}^{i, r}_{\mathit{t/t}}$ is the center of mass estimate of a set of images with tip/tilt removed, and $\left<{\mathbf{R}^{i, r}}\right>_{i}$ is the tip/tilt term averaged over all of the images using a given reference. This removes the shift due to each of the reference images, making the centroid estimates from different reference images directly comparable. The standard deviation of the resultant shifts estimates the error, $\sigma_{\mathbf{R}^{i, r}_{\mathit{t/t}}}$. This method of estimating centroiding errors allows for the parameter space to be explored on real data, where the actual shifts are unknown, and not just on simulated data. \section{Results} \label{sec:results} The full parameter space was explored in simulation for a range of threshold values and window sizes applied to the correlation images. Fig.~\ref{fig:paramspace_a} shows the magnitude of residual errors for different sets of parameters. Fig.~\ref{fig:paramspace_b} shows the standard deviation of the centroid measurements using ten different reference images.
This has the same characteristics as the real error values, showing it can be used to estimate the location of the optimum parameters for the centroiding algorithm. The optimal parameters from each of the methods are highlighted with a white marker. \begin{figure} \centering \subfigure[]{\label{fig:paramspace_a} \includegraphics{centroiding_a.pdf} } \\ \subfigure[]{\label{fig:paramspace_b} \includegraphics{centroiding_b.pdf} } \caption{Full parameter space for the box size and threshold value in the center of mass algorithm. \subref{fig:paramspace_a} shows the real error associated with the parameters used in the center of mass technique, and \subref{fig:paramspace_b} gives the error estimate taken from the standard deviation on centroids using multiple reference images. The shape of the two plots is similar, indicating that the multiple-reference approach is a suitable estimator of the error. The white spots on the plot show where the optimum parameters lie for the respective methods. The estimated error position does not directly overlap with the location of the real minima, but it can be seen that the difference in error is minimal.} \label{fig:paramspace} \end{figure} In the thresholding axis ($x$) of Fig.~\ref{fig:paramspace} it is possible to see the effects of aliasing towards the large thresholding values on the right of the plots. This effect is similar to the aliasing in the parabolic fit, and in all cases the error approaches that of integer-pixel estimation, as at the largest threshold value only the brightest pixel is considered, equivalent to an integer-pixel shift estimation. In the window size axis ($y$) of Fig.~\ref{fig:paramspace} the structure is more complicated. Initially the aliasing is apparent for small window sizes, similar to the parabolic fit. This problem decreases as the window size increases, until the optimal region is reached. However, the performance begins to degrade again for large windows at low thresholding values.
This happens where the region is so large that, as well as including all of the peak of the correlation, it includes an increasing amount of noise, which is not filtered out by the thresholding. The centroid optimisation was performed on a range of different noise levels (using photon-noise) to demonstrate how noise affects the centroid estimates. The parameters' dependence on \ac{SNR} is demonstrated in Fig.~\ref{fig:SNR_detail}, with Fig.~\ref{fig:SNR_detail_a} showing how the threshold level affects the accuracy of the centroid estimates, and Fig.~\ref{fig:SNR_detail_b} illustrating how changing the window size affects the accuracy of the centroid estimates. The estimation was performed on ten different regions of the granule image, with the errors taken to be the standard error. \begin{figure} \centering \subfigure[]{\label{fig:SNR_detail_a} \includegraphics{pair_a.pdf} } \\ \subfigure[]{\label{fig:SNR_detail_b} \includegraphics{pair_b.pdf} } \caption{\subref{fig:SNR_detail_a} shows how the optimum threshold value is affected by different \ac{SNR} levels. \subref{fig:SNR_detail_b} shows how the window size affects the error on the centroid estimate for different \ac{SNR} levels. Above a \ac{SNR} of 5 the curves no longer change, staying at their high-\ac{SNR} shapes.} \label{fig:SNR_detail} \end{figure} The optimal values for the parameters varied with \ac{SNR}, as can be seen in Fig.~\ref{fig:best_params}. Fig.~\ref{fig:best_params_a} shows the optimal thresholding values for the various \ac{SNR} levels, both best performing and best estimated. The estimated threshold levels differ from the true value up to a \ac{SNR} of 2, above which the estimated threshold value is consistent, and in a region where small variations have little effect on the accuracy of the shift estimate. This trend is also seen in Fig.~\ref{fig:best_params_b}: at low \ac{SNR} levels the estimated box size is larger than the actual optimal value, but at higher \ac{SNR} levels they agree more.
\begin{figure} \centering \subfigure[]{\label{fig:best_params_a} \includegraphics{params_SNR_a_new.pdf} } \\ \subfigure[]{\label{fig:best_params_b} \includegraphics{params_SNR_b_new.pdf} } \caption{\subref{fig:best_params_a} shows the optimal thresholding value for the different \ac{SNR}s of the images used in the centroiding. Initially the thresholding is high, to remove as much noise as possible from the correlation image, then the thresholding drops to its optimum value for images which have low noise. \subref{fig:best_params_b} shows the box size for the different \ac{SNR} levels. This shows a similar trend of increasing window size at high \ac{SNR}, using more pixels when the noise is reduced. At low \ac{SNR} the estimated parameters disagree with the true optimal parameters, but this disagreement decreases at higher \ac{SNR}.} \label{fig:best_params} \end{figure} The optimal parameters for thresholding and window size generally reduce the number of pixels used in the centroid in low \ac{SNR} conditions by using small thresholds and small window sizes to reduce the amount of noise in the centroid. At higher \ac{SNR} values the parameters stabilise for a given set of images to give the most accurate shift estimate. Our technique fails in the low \ac{SNR} regime. This is due to the different sources of noise in the correlation image, and our sampling of them. The simplest way to see this is to write the images with their noise terms separated, as in equation~\ref{eqn:noise}: \begin{equation} \begin{array}{ll} \mathrm{Im} &= \mathrm{Im}_{signal} + \mathrm{Im}_{noise} \\ \mathrm{Ref} &= \mathrm{Ref}_{signal} + \mathrm{Ref}_{noise} \end{array} \label{eqn:noise} \end{equation} where $\mathrm{Im}$ represents the overall image being centroided, $\mathrm{Im}_{signal}$ describes the signal in the image, and $\mathrm{Im}_{noise}$ describes the noise associated with the image, in our case shot noise. $\mathrm{Ref}$ follows similar definitions for the reference image.
When combined, assuming a linear regime, the correlation image has four terms: \begin{multline} \mathrm{Corr} = \mathrm{Corr}\left[\mathrm{Im}_{signal}, \mathrm{Ref}_{signal}\right] + \mathrm{Corr}\left[\mathrm{Im}_{signal}, \mathrm{Ref}_{noise}\right] \\ + \mathrm{Corr}\left[\mathrm{Im}_{noise}, \mathrm{Ref}_{signal}\right] + \mathrm{Corr}\left[\mathrm{Im}_{noise}, \mathrm{Ref}_{noise}\right] \end{multline} where $\mathrm{Corr}$ is the total signal in the correlation image, with the contributing factors all described to the right. If we assume that the contribution of $\mathrm{Corr}\left[\mathrm{Im}_{noise}, \mathrm{Ref}_{noise}\right]$ is negligible, then there are two remaining error terms which affect our estimate of the centroid. However, by taking an average over different references in our estimate of the error, we are in effect averaging out the $\mathrm{Corr}\left[\mathrm{Im}_{signal}, \mathrm{Ref}_{noise}\right]$ term. This term becomes more dominant at lower \ac{SNR} levels, hindering the performance of our technique. There are other methods of estimating the error of a centroid on an extended object, such as \citet{Saunter2010}, which do not have this problem, but this requires an oversampling of the correlation peak, something avoided in \ac{AO} to reduce data rates and computation time. The overall performance of the centroiding techniques for the different \ac{SNR}s is shown in Fig.~\ref{fig:SNRs}. For high \ac{SNR}, the best performance is given by the thresholded, windowed center of mass measurements, with little difference between the theoretical best performance and the performance derived from error estimation. The overall boost in accuracy is $3 \times$ at high \ac{SNR}. For \ac{SNR} below 1, the windowed parabolic fit outperforms the thresholded, windowed center of mass method.
This could be due to the crude error estimator implemented here, and it may be possible to improve this using other error estimation techniques \citep{Saunter2010}. However, this could at best only bring the performance back to the level of the 2D parabolic fit. Our technique is best suited to high \ac{SNR} regimes. \begin{figure} \begin{center} \includegraphics{SNR_scan.pdf} \end{center} \caption{This plot shows the performance of the center of mass algorithms and the 2D parabolic fit for a range of different \ac{SNR}s. It can be seen that above a \ac{SNR} of 1 the windowed, thresholded center of mass outperforms a 2D parabolic fit. The 2D parabolic fit tapers off in performance at 0.05 pixel error, whereas the windowed center of mass has a much lower performance threshold. The vertical line on the plot shows the expected \ac{SNR} for a solar granule image with a contrast of 10\%, and a camera with a full well depth of 40000 electrons, which represents typical conditions in solar \ac{AO}. It can also be seen that the performance from estimating the errors on the center of mass is worse than the optimal case, but does still reach close to peak performance.} \label{fig:SNRs} \end{figure} \section{Conclusions} \label{sec:conclusions} We have demonstrated that for tracking extended sources, a method of error estimation allows different centroiding parameters to be explored on real data, allowing the optimum parameters to be chosen. While this does take extra computation, the correlation images only need to be generated once for each reference, minimizing the increase in computational effort required. Also, once the optimum set of parameters has been found, they should hold as the best parameters until something in the system changes, i.e. a change of target or reference image. The parameters for the centroiding algorithm should be updated regularly to keep it optimal. Exploring the parameter space is a parallelisable process, so can be performed quickly.
With the use of SIMD \citep{furht2008} and more advanced optimisation algorithms, rather than the brute-force method exploring the full parameter space implemented here, the method should be viable for use in a real-time system. The method of noise estimation used here is crude, though good enough for our purposes, and could be used for different parameters in other techniques, such as \citet{Li2008}. There are more efficient algorithms for estimating noise on centroiding of extended objects, such as \citet{Saunter2010}, which could also be implemented to give quantitative estimators of centroiding accuracy, as well as being computationally less intensive than the multiple reference approach. Overall, for the solar case, with high \ac{SNR}, the use of an optimised, thresholded, windowed center of mass algorithm offers a factor of $3 \times$ improvement in centroiding accuracy over the windowed parabolic fit. This could be used in real time in solar \ac{AO} for better wave-front estimations, and also with post-processing techniques, such as measuring more accurate atmospheric profiles. Further investigation should be performed in the low \ac{SNR} regime, where both the center of mass and 2D parabolic fit methods give poor performance, to see if more accurate centroids can be extracted. There is also more work to be done in implementing the technique into a real system which performs centroiding on extended objects, to see how it affects system performance. \section*{Acknowledgments} M. J. T. gratefully acknowledges support from the Science and Technology Facilities Council (STFC) in the form of a Ph.D studentship (ST/K501979/1). The authors would like to thank the Institute of Solar Physics, Sweden, Mats Carlsson, Viggo Hansteen, Luc Rouppe van der Voort, Astrid Fossum, and Elin Marthinussen for taking the raw image used in this paper, and Mats L\"{o}fdahl, for performing the image reconstruction to produce the final image. M. J. T. would like to thank Prof.
Gordon Love for all of his advice and guidance, and the referee for their insightful advice and feedback. Data used is available from the author on request. \bibliographystyle{mn2e}
\section{Introduction}\label{S:Intro} The notion of Lagrangian multiforms was introduced in \cite{LobbNij2009} to provide a variational formalism for systems integrable in the sense of multidimensional consistency (MDC). This novel variational approach to integrable systems allows for the derivation of an entire system (called a \textit{hierarchy}) of simultaneous compatible equations from a single variational framework, in which the conventional Lagrange function is replaced by a Lagrangian $d$-form integrated over arbitrary hyper-surfaces in a space of independent variables of arbitrary dimension. Lagrangian multiform theory has undergone a significant development in the last decade (cf. e.g. \cite{Sleigh-thesis}, or \cite{HJN16} and references therein). It has become evident that Lagrangian multiforms (alternatively referred to as \textit{pluri-Lagrangian systems}) form a universal variational aspect of integrability. It distinguishes itself from the conventional least-action principle in that, where the latter produces through the standard Euler-Lagrange (EL) equations only one equation per component of the field variable, the multiform EL equations comprise a multitude of compatible equations for every component of the fields. Furthermore, the Lagrangian components themselves have to be very special (they have to be 'admissible', which implies 'integrable'), and in a precise sense the Lagrangians themselves can be considered as solutions of the systems of generalised EL equations.
In this note I will focus on the Darboux system of equations, \cite{Darboux}, which in the original notation of Darboux reads \begin{equation}\label{eq:Darboux} \frac{\partial \beta_{kk'}}{\partial \rho_{k''}}= \beta_{kk''}\beta_{k''k'}\ , \quad \frac{\partial \beta_{k'k}}{\partial \rho_{k''}}= \beta_{k'k''}\beta_{k''k}\ , \end{equation} where the indices $k,k',k''$ run over a set of integers, and the quantities $\beta_{kk'}$, etc., are functions of a set of coordinates $\rho_1,\cdots,\rho_n$. These equations describe conjugate nets for a system of curvilinear orthogonal coordinates, following on from earlier work by Lam\'e, \cite{Lame}. It is well-known that the set of equations \eqref{eq:Darboux}, or generalisations thereof, are closely related to integrable three-dimensional equations, cf. e.g. \cite{Doliwa-etal,Doliwa}, in particular the $N$-component wave equation. In fact, in \cite{MartinezKonop} it was shown that they form a realisation of the KP hierarchy in terms of so-called \textit{Miwa variables}, \cite{Miw82}, which are variables depending on a continuous parameter associated with an underlying lattice structure. Here, I will show that this set of equations possesses a Lagrangian 3-form structure, in the sense of \cite{LNQ,SNC21}. Whereas our previous treatment of the Lagrangian multiform structure of the continuous KP hierarchy used a representation in terms of pseudo-differential operators, going back to \cite{DickeyKP, Dickey-book}, the multiform structure of the Darboux system is more compact, and can be viewed as a generating system for the KP hierarchy, encoding the latter in a more covariant way. In the next section I will present this 3-form structure and demonstrate the salient multiform features, while in the ensuing sections I will discuss the connection to the KP hierarchy, and further generalisations in the remainder. Some speculative applications are discussed in the Conclusion section.
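As a quick illustration (added here, and not part of the original argument), the compatibility of \eqref{eq:Darboux} that underlies the multidimensional consistency discussed below can be checked symbolically: applying the rule $\partial\beta_{kk'}/\partial\rho_{k''}=\beta_{kk''}\beta_{k''k'}$ twice in two different directions gives the same result. A minimal sketch using the sympy library:

```python
import sympy as sp

# beta_{kk'} for distinct indices; four directions suffice for the cross-derivative check
beta = {(a, b): sp.Symbol(f"beta_{a}{b}")
        for a in range(1, 5) for b in range((1), 5) if a != b}

def d(expr, m):
    """Derivative w.r.t. rho_m, applying d beta_{ab}/d rho_m = beta_{am} beta_{mb}."""
    out = sp.S.Zero
    for (a, b), s in beta.items():
        coeff = sp.diff(expr, s)
        if coeff == 0:
            continue
        if m in (a, b):
            # the system only prescribes derivatives for m distinct from a, b
            raise ValueError("derivative not prescribed by the system")
        out += coeff * beta[(a, m)] * beta[(m, b)]
    return sp.expand(out)

# cross-derivatives of beta_{12} in the directions rho_3 and rho_4 agree
lhs = d(d(beta[(1, 2)], 3), 4)
rhs = d(d(beta[(1, 2)], 4), 3)
assert sp.simplify(lhs - rhs) == 0
```

The same computation, with $\beta$ replaced by the quantities $B$ below, is the content of the consistency theorem in the next section.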
\section{Lagrangian 3-form structure for the generalised Darboux system} The generalised Darboux system reads \begin{subequations}\label{eq:Beqs}\begin{align} &\frac{\partial B_{qr}}{\partial\xi_p}=B_{qp}B_{pr}\ , \quad \frac{\partial B_{rq}}{\partial\xi_p}=B_{rp}B_{pq}\ ,\label{eq:Beqsa} \\ &\frac{\partial B_{pr}}{\partial\xi_q}=B_{pq}B_{qr}\ , \quad \frac{\partial B_{rp}}{\partial\xi_q}=B_{rq}B_{qp}\ , \label{eq:Beqsb}\\ &\frac{\partial B_{pq}}{\partial\xi_r}=B_{pr}B_{rq}\ , \quad \frac{\partial B_{qp}}{\partial\xi_r}=B_{qr}B_{rp}\ , \label{eq:Beqsc} \end{align} \end{subequations} where the $B_{pq}$, etc., are scalar functions (but can be readily generalised to matrices) of the independent variables $\xi_p$, $\xi_q$ and $\xi_r$, which are continuous variables labelled by parameters $p$, $q$ and $r$ respectively, which themselves are in principle continuous variables taking values in a subset of the real or complex numbers (hence, the term 'generalised'). We assume that these parameters are distinct, and we will not consider for now quantities $B$ for which they coincide (quantities of the type $B_{pp}$). A main property of the system \eqref{eq:Beqs} is that it can be extended in a consistent way to an arbitrarily large set of copies of these equations in terms of additional variables $\xi_s$, etc., similarly labelled by values of the parameters. This compatibility is expressed as follows. \begin{theorem} The PDE system \eqref{eq:Beqs} for the quantities $B_{\cdot\cdot}$ is multidimensionally consistent. \end{theorem} \begin{proof} The proof is by direct computation, introducing a fourth variable $\xi_s$ and an associated direction with parameter $s$, such that the set of dependent variables is extended to include $B_{ps},B_{qs},B_{rs}$ and $B_{sp},B_{sq},B_{sr}$ obeying relations of the form \[ \frac{\partial B_{ps}}{\partial\xi_q}=B_{pq}B_{qs}\ , \quad \] etc.
and where the other variables depend also on $\xi_s$ such that \[ \frac{\partial B_{pq}}{\partial\xi_s}=B_{ps}B_{sq}\ , \quad \] etc. We then establish by direct computation, from the extended system of equations comprising \eqref{eq:Beqs} and the PDEs w.r.t. $\xi_s$, the relation \[ \frac{\partial}{\partial\xi_s}\left(\frac{\partial}{\partial\xi_p}B_{qr} \right) = \frac{\partial}{\partial\xi_p}\left(\frac{\partial}{\partial\xi_s}B_{qr} \right)\ . \] Similarly all relations obtained from cross-differentiation hold by the same token. \end{proof} The system \eqref{eq:Beqs} possesses a \textit{Lax multiplet}, cf. e.g. \cite{Doliwa}, in the following sense. \begin{proposition} The system \eqref{eq:Beqs} arises as the compatibility conditions for the linear overdetermined system of the form \begin{equation}\label{eq:Lax} \frac{\partial \Phi_{q}}{\partial\xi_p}=B_{qp}\Phi_{p}\ , \quad \frac{\partial \Psi_{r}}{\partial\xi_p}=\Psi_{p}B_{pr}\ , \end{equation} and similar relations for all variables $\xi_\cdot$. \end{proposition} \begin{proof} Again, this is by direct computation. Cross-differentiating two copies of the first equation of \eqref{eq:Lax}, we get the equality \begin{align*} &\frac{\partial}{\partial\xi_r}\left(\frac{\partial \Phi_{q}}{\partial\xi_p}\right)= \left(\frac{\partial}{\partial\xi_r}B_{qp}\right)\Phi_{p}+ B_{qp}B_{pr}\Phi_r\ \\ & = \frac{\partial}{\partial\xi_p}\left(\frac{\partial \Phi_{q}}{\partial\xi_r}\right)= \left(\frac{\partial}{\partial\xi_p}B_{qr}\right)\Phi_{r}+ B_{qr}B_{rp}\Phi_p \ , \end{align*} and hence the equality of the coefficients of $\Phi_r$ and $\Phi_p$ gives us the desired differential equations for $B_{qp}$ and $B_{qr}$ respectively. The same holds true for the 'adjoint' Lax multiplet in terms of the functions $\Psi_\cdot$.
\end{proof} We note that the Lax multiplets \eqref{eq:Lax} can be obtained from the Darboux system itself, relying on the multidimensional consistency, by identifying the Lax wave functions $\Phi$ and $\Psi$ through fixing two, possibly separate, directions in the space of independent variables, $\xi_k$ and $\xi_l$, say (where $k$ and $l$ play the role of spectral parameters), such that $\Phi_p=B_{pk}$ and $\Psi_p=B_{lp}$. Furthermore, the quantities $\Phi$ and $\Psi$ obey a linear homogeneous set of equations of the form \begin{subequations}\label{eq:homLax} \begin{align} & \partial_p\partial_q\Phi_r= (\partial_p\ln \Phi_q)\partial_q\Phi_r + (\partial_q\ln \Phi_p)\partial_p\Phi_r\ , \\ & \partial_p\partial_q\Psi_r= (\partial_p\ln \Psi_q)\partial_q\Psi_r + (\partial_q\ln \Psi_p)\partial_p\Psi_r\ , \end{align}\end{subequations} so that both obey an identical equation. We now introduce the Lagrangian structure. Let us consider the following Lagrangian components \begin{align}\label{eq:Lagr} \mathcal{L}_{pqr} = & \tfrac{1}{2}\left(B_{rq}\partial_{\xi_p}B_{qr}-B_{qr}\partial_{\xi_p}B_{rq} \right) +\tfrac{1}{2}\left(B_{qp}\partial_{\xi_r}B_{pq}-B_{pq}\partial_{\xi_r}B_{qp} \right) \nonumber \\ & +\tfrac{1}{2}\left(B_{pr}\partial_{\xi_q}B_{rp}-B_{rp}\partial_{\xi_q}B_{pr} \right) + B_{rp} B_{pq} B_{qr} - B_{rq}B_{qp} B_{pr} \ . \end{align} \noindent Then we have the following main statement \begin{theorem} The differential of the Lagrangian 3-form \begin{align}\label{eq:Lagr3form} {\sf L}:= &\mathcal{L}_{pqr}\,{\rm d}\xi_p\wedge{\rm d}\xi_q\wedge{\rm d}\xi_r+ \mathcal{L}_{qrs}\,{\rm d}\xi_q\wedge{\rm d}\xi_r\wedge{\rm d}\xi_s+ \nonumber \\ & \quad +\mathcal{L}_{rsp}\,{\rm d}\xi_r\wedge{\rm d}\xi_s\wedge{\rm d}\xi_p+ \mathcal{L}_{spq}\,{\rm d}\xi_s\wedge{\rm d}\xi_p\wedge{\rm d}\xi_q\ , \end{align} has a ``double zero'' on the solutions of the set of generalised Darboux equations \eqref{eq:Beqs}, i.e.
${\rm d}{\sf L}$ can be written as \begin{equation} {\rm d}\mathsf{L}= \,\mathcal{A}_{pqrs}\,{\rm d}\xi_p\wedge{\rm d}\xi_q\wedge{\rm d}\xi_r\wedge{\rm d}\xi_s \end{equation} with the coefficient $\mathcal{A}_{pqrs}$ being a sum of products of factors which vanish on solutions of the EL equations. \end{theorem} \begin{proof} Computing the components of the differential ${\rm d}\mathsf{L}$ we obtain \begin{align*} & \partial_{\xi_s}\mathcal{L}_{pqr}-\partial_{\xi_p}\mathcal{L}_{qrs} +\partial_{\xi_q}\mathcal{L}_{rsp} -\partial_{\xi_r}\mathcal{L}_{spq}= \\ & \quad \Gamma_{s;rq}\Gamma_{p;qr} - \Gamma_{p;rq}\Gamma_{s;qr} + \Gamma_{s;qp}\Gamma_{r;pq} - \Gamma_{r;qp}\Gamma_{s;pq} \\ &\quad +\Gamma_{s;pr}\Gamma_{q;rp} - \Gamma_{q;pr}\Gamma_{s;rp} +\Gamma_{q;sr}\Gamma_{p;rs} - \Gamma_{p;sr}\Gamma_{q;rs} \\ &\quad +\Gamma_{p;sq}\Gamma_{r;qs} - \Gamma_{r;sq}\Gamma_{p;qs} +\Gamma_{q;ps}\Gamma_{r;sp} - \Gamma_{r;ps}\Gamma_{q;sp}\ , \end{align*} where \[ \Gamma_{p;qs}=\partial_{\xi_p}B_{qs}-B_{qp}B_{ps}\ , \] and similarly for the other indices. The set of generalised EL equations in this case is obtained from $\delta\mathcal{A}_{pqrs}=0$, repeating the general argument, cf. e.g. \cite{SurVerm,SNC20,Sleigh-thesis}, for deriving the EL equations from the differential of the Lagrangian multiform. Thus, since all the variations $\delta B_{pq}$, etc., and their first derivatives are independent, the coefficients are precisely all the combinations $\Gamma_{r;pq}$, etc., which will have to vanish at the critical point of the action \begin{equation}\label{eq:action} {\sf S}[\mathbf{B}(\boldsymbol{\xi});\mathcal{V}]=\int_{\mathcal{V}} {\sf L}\ , \end{equation} integrated over arbitrary 3-dimensional closed hypersurfaces $\mathcal{V}$ in the multivariable space of all the $\xi_p$'s (by Stokes' theorem, when $\mathcal{V}=\partial\Omega$ for a 4-dimensional region $\Omega$, this action equals $\int_{\Omega}{\rm d}{\sf L}$).
\end{proof} As a corollary, the statement of Theorem 2.2 holds more generally for Lagrangian 3-forms embedded in a higher-dimensional space of independent variables, namely \begin{corollary} In a space of variables $\{ \boldsymbol{p}=(p_j)_{j\in I} \}$, where the $p_j$ denote complex valued continuous variables labelled by an index set $I$, consider the Lagrangian 3-form \[ \mathsf{L}= \sum_{i,j,k\in I} \mathcal{L}_{p_i,p_j,p_k}\,{\rm d}\xi_{p_i}\wedge {\rm d}\xi_{p_j} \wedge{\rm d}\xi_{p_k}\ , \] with $\mathcal{L}_{p_i,p_j,p_k}$ as given in \eqref{eq:Lagr}. Then its differential \[ {\rm d}\mathsf{L}= \sum_{i,j,k,l\in I} \mathcal{A}_{p_i,p_j,p_k,p_l}\, {\rm d}\xi_{p_i}\wedge {\rm d}\xi_{p_j}\wedge{\rm d}\xi_{p_k}\wedge{\rm d}\xi_{p_l}\ , \] has a double zero on solutions of the system \eqref{eq:Beqs} written in the relevant variables labelled by $p_i,p_j,p_k,p_l$. \end{corollary} \noindent The proof is an obvious extension of the one for Theorem 2.2, assuming that all labels $p_{i_\nu}$ are distinct. The generalised (multiform) Euler-Lagrange equations for the Lagrangian multiform \eqref{eq:Lagr3form}, obtained from $\delta{\rm d}{\sf L}=0$, imply the vanishing of all the factors $\Gamma$ in the above computation, which in turn implies the set of generalised Darboux equations. As a consequence of the double-zero form for ${\rm d}{\sf L}$, the Lagrangian multiform ${\sf L}$ is closed on solutions of the Darboux system \eqref{eq:Beqs} (but not trivially closed, only 'on-shell'), which implies that for the critical fields obeying the Darboux system the action is invariant under smooth deformations of the hypersurface $\mathcal{V}$. This is precisely the phenomenon of multidimensional consistency: the Darboux system is compatible on any hypersurface in the multidimensional space of Miwa variables. As another corollary of the multiform structure, we also obtain a variational description of the Lax system \eqref{eq:Lax}.
We just need to extend the set of Miwa variables to include the variables $\xi_k$ associated with a spectral parameter. Thus, we are led to the following statement. \begin{corollary} The Lagrangian 3-form \begin{align}\label{eq:LaxLagr3form} {\sf L}_{(k)}:= &\mathcal{L}_{pq(k)}\,{\rm d}\xi_p\wedge{\rm d}\xi_q\wedge{\rm d}\xi_k+ \mathcal{L}_{qr(k)}\,{\rm d}\xi_q\wedge{\rm d}\xi_r\wedge{\rm d}\xi_k+ \nonumber \\ & \quad +\mathcal{L}_{rp(k)}\,{\rm d}\xi_r\wedge{\rm d}\xi_p\wedge{\rm d}\xi_k+ \mathcal{L}_{pqr}\,{\rm d}\xi_p\wedge{\rm d}\xi_q\wedge{\rm d}\xi_r\ , \end{align} with components \begin{align}\label{eq:LaxLagr} \mathcal{L}_{pq(k)} = & \tfrac{1}{2}\left(\Psi_{q}\partial_{\xi_p}\Phi_{q}-(\partial_{\xi_p}\Psi_{q})\Phi_q \right) -\tfrac{1}{2}\left(\Psi_{p}\partial_{\xi_q}\Phi_{p}-(\partial_{\xi_q}\Psi_{p})\Phi_p \right) \nonumber \\ & +\tfrac{1}{2}\left(B_{qp}\partial_{\xi_k}B_{pq}-B_{pq}\partial_{\xi_k}B_{qp} \right) + \Psi_{p} B_{pq} \Phi_{q} - \Psi_{q}B_{qp} \Phi_{p} \ , \end{align} constitutes, through the EL equations $\delta{\rm d}\mathsf{L}_{(k)}=0$, a variational description of the Lax multiplet \eqref{eq:Lax}. \end{corollary} A similar variational description of the Lax system was obtained in \cite{SNC19} for the 1+1-dimensional Lax system associated with the so-called Zakharov-Mikhailov action. We note that the Lagrangian 3-form \eqref{eq:LaxLagr3form} should really be considered as a Lagrangian 2-form when integrating out the direction $\xi_k$ associated with the (fixed) spectral variable. \section{Discrete Darboux system} A discrete analogue of the Darboux system of orthogonal coordinate systems was found in \cite{DS97}, where its integrability was established. In fact, an interesting connection with integrable quadrilateral lattices was discovered, as well as with multidimensional circular lattices, cf. \cite{CDS97,MDS97}.
The corresponding discrete analogue of the generalised Darboux system \eqref{eq:Beqs} reads \begin{subequations}\label{eq:dBeqs}\begin{align} & \Delta_p B_{qr}=B_{qp}T_pB_{pr}\ , \quad \Delta_p B_{rq}=B_{rp}T_pB_{pq}\ , \label{eq:dBeqsa} \\ &\Delta_q B_{rp}=B_{rq}T_qB_{qp}\ , \quad \Delta_q B_{pr}=B_{pq}T_qB_{qr}\ , \label{eq:dBeqsb}\\ &\Delta_r B_{pq}=B_{pr}T_rB_{rq}\ , \quad \Delta_r B_{qp}=B_{qr}T_rB_{rp}\ , \label{eq:dBeqsc} \end{align} \end{subequations} where $\Delta_p$ denotes the difference operator $\Delta_p=T_p-{\rm id}$. This system is related to other multidimensional lattice systems that were formulated in \cite{Nij85LMP}. \begin{theorem} The system of difference equations \eqref{eq:dBeqs} is multidimensionally consistent, and furthermore, it is consistent with the differential system \eqref{eq:Beqs}. \end{theorem} \begin{proof} The consistency of the set of difference equations is by direct computation. For instance, rewriting the difference equation as follows \begin{align*} & B_{qr}= T_pB_{qr}-B_{qp}T_pB_{pr}= \\ & = T_p\left(T_sB_{qr}-B_{qs}T_sB_{sr}\right)- B_{qp}T_p\left(T_sB_{pr}-B_{ps}T_sB_{sr}\right) \\ & =T_pT_sB_{qr}-(T_pB_{qs})T_pT_sB_{sr}-B_{qp}T_pT_sB_{pr} +B_{qp}(T_pB_{ps})T_pT_sB_{sr} \end{align*} which is equal to the same expression with the labels $p$ and $s$ interchanged. Thus, the latter is equal to \[ =T_sT_pB_{qr}-(T_sB_{qp})T_sT_pB_{pr}-B_{qs}T_sT_pB_{sr} +B_{qs}(T_sB_{sp})T_sT_pB_{pr}\ . \] Assuming that the shifts $T_p$ and $T_s$ commute, and collecting the factors with $T_pT_sB_{pr}$ and the ones with $T_pT_sB_{sr}$, regarding the latter as independent, we obtain the relations \[ T_pB_{qs}-B_{qs}-B_{qp}T_pB_{ps}=0 \ , \quad {\rm and}\quad T_sB_{qp}-B_{qp}-B_{qs}T_sB_{sp}=0\ , \] which are two of the discrete Darboux equations. Thus, the relations are consistent under mutual shifts. The compatibility with the continuous Darboux system \eqref{eq:Beqs} follows from a similar computation.
Abbreviating $\partial/\partial\xi_p$ by $\partial_p$ we get \begin{align*} & \partial_p (T_sB_{qr})= \partial_p\left(B_{qr}+B_{qs}T_sB_{sr} \right)=\partial_pB_{qr}+(\partial_pB_{qs})T_sB_{sr}+ B_{qs}\partial_pT_sB_{sr}\\ & {\rm whereas} \\ & T_s\partial_p B_{qr} = T_s(B_{qp}B_{pr}) =(T_sB_{qp})T_sB_{pr}=(B_{qp}+B_{qs}T_sB_{sp})T_sB_{pr} \\ &\Rightarrow \quad B_{qp}B_{pr}+B_{qp}B_{ps}T_sB_{sr}+\cancel{B_{qs}T_s\left(B_{sp}B_{pr}\right)} \\ &\qquad\quad =B_{qp} T_sB_{pr}+\cancel{B_{qs}(T_sB_{sp})T_sB_{pr}}\ , \end{align*} and the remaining terms cancel as well due to the discrete Darboux relation. \end{proof} Similarly to the continuous case we have a Lax system, and its adjoint, given by \begin{equation} \Delta_p\Phi_q=B_{qp}T_p\Phi_p\ , \quad \Delta_p\Psi_q=\Psi_p T_pB_{pq}\ , \end{equation} and the homogeneous linear difference system for the eigenfunctions $\Phi_r$, $\Psi_r$ respectively, \begin{align} & \Delta_p\Delta_q \Phi_r =\frac{\Delta_p(T_q\Phi_q)}{T_q\Phi_q}\,\Delta_q\Phi_r +\frac{\Delta_q(T_p\Phi_p)}{T_p\Phi_p}\, \Delta_p\Phi_r\ , \\ & \Delta_p\Delta_q \Psi_r =\frac{\Delta_p\Psi_q}{T_p\Psi_q}\,\Delta_q(T_p\Psi_r) +\frac{\Delta_q\Psi_p}{T_q\Psi_p}\,\Delta_p(T_q\Psi_r)\ . \end{align} Note that in the discrete case the equations for the eigenfunction and its adjoint are no longer the same. It is natural to assume that the discrete Darboux system \eqref{eq:dBeqs}, like its continuous counterpart \eqref{eq:Beqs}, admits a Lagrangian 3-form structure. I intend to settle this question in a future publication \cite{Nijtbp}. \section{Connection with the (scalar) KP system} The KP system of equations is often introduced as the set of Lax equations arising from a Lax operator in a ring of pseudo-differential operators with respect to a singled-out variable $x$, cf. \cite{Sato}.
This has the disadvantage that the inherent covariant structure of the KP system is broken, and not all independent variables (the higher time variables) appear on the same footing as the variable $x$. A more covariant approach is provided by the 'direct linearisation' set-up, cf. e.g. \cite{FN17} and references therein, where there is no need to single out a particular variable to describe the KP hierarchy. It can be argued that the generalised Darboux system \eqref{eq:Beqs} also provides a covariant description, but in the sense of encoding the hierarchy through Miwa variables, cf. \cite{MartinezKonop}. In that sense the generalised Darboux system is similar in spirit to the 'hierarchy generating PDEs' of \cite{NHJ,TN}, but for a 3-dimensional system of PDEs instead of the KdV or Boussinesq hierarchies respectively. Solutions of the discrete KP system were considered in \cite{NCWQ84,NC90,FN16} using the \textit{direct linearisation} (DL) approach, cf. also \cite{SAF1984}. The dynamics is governed by \textit{plane-wave factors} which take the form \begin{subequations}\label{eq:freeref}\begin{align} & \rho_k= \left[\prod_\nu (p_\nu-k)^{n_\nu}\right]\,\exp\left\{k\xi-\sum_\nu \frac{\xi_{p_\nu}}{p_\nu-k} \right\}\ , \\ & \sigma_{k'}= \left[\prod_\nu (p_\nu-k')^{-n_\nu}\right]\,\exp\left\{-k'\xi+\sum_\nu \frac{\xi_{p_\nu}}{p_\nu-k'} \right\}\ . \end{align} \end{subequations} Here the $\xi_{p_\nu}$ are the independent variables of the generalised Darboux system, and the $n_\nu$ are associated discrete variables, in terms of which the KP $\tau$ function obeys the compatible set of Hirota bilinear equations\footnote{In the literature, cf. e.g. \cite{Doliwa-etal,MartinezKonop}, the dependence on discrete shifts in what is essentially the \emph{scalar} KP system is confusingly often referred to as the `multi-component KP hierarchy' (because lattice-shifted variables are considered as components). This should not be confused with the \emph{matrix KP system}, cf. e.g.
\cite{Konop82,Nij85LMP}, which in my opinion more rightfully deserves the name 'multicomponent', and which is related to the system in section 5. The difference between the two resides in that the scalar KP is governed essentially by a scalar integral measure in the underlying DL framework, while the matrix KP is governed by a matrix measure, and hence has a much richer solution structure.} \begin{equation}\label{eq:Hirota} (p-q)(T_pT_q\tau)T_r\tau+ (q-r)(T_qT_r\tau)T_p\tau+(r-p)(T_rT_p\tau)T_q\tau=0\ , \end{equation} where $T_{p_\nu}$ ($p,q,r$ being any three of the $p_\nu$) denotes the elementary shift in the variable $n_\nu$ associated with $p_\nu$, which in this context has the interpretation of a \textit{lattice parameter} measuring the grid width in the discrete direction labelled by $n_\nu$. The interplay between discrete and continuous variables turns out to be an essential feature of the structure. In fact, the $\tau$-function obeys the relations \begin{equation}\label{eq:taurel} \frac{\partial\tau}{\partial\xi_p}=- \left(T_p^{-1}\frac{{\rm d}}{{\rm d}p}T_p\right)\tau:= \lim_{\varepsilon\to 0} \frac{T_p^{-1}T_{p-\varepsilon}\tau-\tau}{\varepsilon}\ , \end{equation} for any of the parameters $p_\nu=p$, where we should think of the $p-\varepsilon$ as the lattice parameter associated with yet another lattice direction. Using the identification between lattice shifts and derivatives as in \eqref{eq:taurel}, we can perform a limit $r\to p$ on \eqref{eq:Hirota} and thus obtain the following differential-difference equation for $\tau$ \begin{equation}\label{eq:diffdifftau} (p-q) \left(\tau\,T_q\frac{\partial\tau}{\partial \xi_p}-(T_q\tau)\,\frac{\partial\tau}{\partial \xi_p}\right) =\tau\,T_q\tau - (T_p\tau)\,T_qT_p^{-1}\tau\ .
\end{equation} Furthermore, the $\tau$-function also obeys the differential-difference equation \begin{equation}\label{eq:taudiff} 1+(p-q)^2 \frac{\partial^2\ln\tau}{\partial \xi_p\,\partial \xi_q}=\frac{(T_pT_q^{-1}\tau)T_qT_p^{-1}\tau}{\tau^2}\ , \end{equation} which can be readily cast into bilinear form. In fact, eq. \eqref{eq:taudiff} is the bilinear form of the 2D Toda equation (with the discrete variable along the skew-diagonal lattice direction in the lattice generated by the $T_p$ and $T_q$ shifts). It turns out that the Darboux variables of the system \eqref{eq:Beqs} can be expressed in terms of the KP $\tau$-function exploiting the underlying discrete structure\footnote{In \cite{MartinezKonop,Doliwa-etal} a similar connection was exhibited, which differs from the present one in that my presentation is based on results from the DL approach, whereas the Sato or fermionic type approach seems to cover a more restricted solution sector of the theory.}. To do so, consider the quantities \begin{equation}\label{eq:S} S_{a,b}= \frac{T_a^{-1}T_b\tau}{\tau} \ , \end{equation} which, as a consequence of \eqref{eq:Hirota} and \eqref{eq:diffdifftau}, obey the following relations \begin{subequations}\label{eq:Srels} \begin{align} & (p-b)T_pS_{a,b}-(p-a)S_{a,b}=(a-b) S_{a,p}T_pS_{p,b}\ , \label{eq:Sdrels} \\ &(p-a)(p-b)\frac{\partial S_{a,b}}{\partial\xi_p}= (a-b)\left(S_{a,p}S_{p,b}-S_{a,b}\right)\ . \label{eq:Screls} \end{align} \end{subequations} Similar relations appeared in \cite{MartinezKonop,Doliwa-etal}, derived from a perspective different from that of \cite{FN16}. These relations are compatible for all parameters $p$ and corresponding shifts and derivatives w.r.t. the corresponding Miwa variables $\xi_p$. They form the basis for the generalised Darboux system and a discrete analogue thereof, where the latter can be obtained by a gauge transformation with factors $\rho_{-a}\sigma_{b}$ of the form \eqref{eq:freeref}.
Furthermore, the quantity $S=S_{a,b}$ obeys the following 3-dimensional partial difference equation, \cite{NCWQ84}, \begin{align} & \frac{\left[(p-b)T_pT_qS-(p-a)T_qS\right]\, \left[(q-b)T_qT_rS-(q-a)T_rS\right]} {\left[(p-b)T_pT_rS-(p-a)T_rS\right]\, \left[(q-b)T_pT_qS-(q-a)T_pS\right]} \nonumber \\ & \qquad \times\frac{\left[(r-b)T_pT_rS-(r-a)T_pS\right]} {\left[(r-b)T_qT_rS-(r-a)T_qS\right]}=1 \end{align} which is essentially the lattice Schwarzian KP equation, first given in its well-known pure form in \cite{DN91}. The KP hierarchy can be obtained by the expansions \begin{align}\label{eq:Miwa} & t_j=\delta_{j,1}\xi+\sum_{\nu}\left(\frac{\xi_{p_\nu}}{p_\nu^{j+1}} + \frac{1}{j}\,\frac{n_\nu}{p_\nu^j}\right) \nonumber \\ & \Rightarrow\quad T_{p_\nu}\tau=\tau\left(\{t_j+\frac{1}{jp_\nu^j}\}\right)\quad {\rm and}\quad \frac{\partial\tau}{\partial \xi_{p_\nu}}=\sum_{j=1}^\infty \frac{1}{p_\nu^{j+1}}\frac{\partial\tau }{\partial t_j}\ , \end{align} where the $t_j$ are the usual independent time-variables in the hierarchy. From \eqref{eq:Screls} it follows that the Darboux quantities can be identified as \begin{equation}\label{eq:B} B_{pq}= \frac{\sigma_{p}\rho_{q} S_{p,q}}{q-p}= \sigma_{p}\rho_{q} \frac{T_p^{-1}T_q\tau}{(q-p)\tau} \ , \quad q\neq p\ , \end{equation} from which, together with \eqref{eq:freeref}, we get the relations \eqref{eq:Beqs} whenever $q\neq p$. When $q=p$, we have $B_{pp}=\mathcal{C}\partial_{\xi_p}(\ln\tau)$, where $\mathcal{C}$ is some constant normalisation factor.
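As a consistency check of \eqref{eq:B} (added here as an illustration, not part of the original text), the Darboux relation $\partial_{\xi_r}B_{pq}=B_{pr}B_{rq}$ can be recovered directly from \eqref{eq:Screls} together with the plane-wave factors \eqref{eq:freeref}, which give $\partial_{\xi_r}\sigma_p=\sigma_p/(r-p)$, $\partial_{\xi_r}\rho_q=-\rho_q/(r-q)$ and $\rho_r\sigma_r=1$. A sympy sketch of this computation:

```python
import sympy as sp

p, q, r = sp.symbols('p q r')
S_pq, S_pr, S_rq = sp.symbols('S_pq S_pr S_rq')
sig_p, rho_q, sig_r, rho_r = sp.symbols('sigma_p rho_q sigma_r rho_r')

# (eq:Screls), with derivative direction r and indices (p,q):
#   (r-p)(r-q) dS_pq/dxi_r = (p-q)(S_pr S_rq - S_pq)
dS_pq = (p - q)*(S_pr*S_rq - S_pq)/((r - p)*(r - q))

# differentiate B_pq = sigma_p rho_q S_pq/(q-p) by the product rule, using
# d sigma_p/dxi_r = sigma_p/(r-p) and d rho_q/dxi_r = -rho_q/(r-q)
dB_pq = (sig_p/(r - p))*rho_q*S_pq/(q - p) \
    + sig_p*(-rho_q/(r - q))*S_pq/(q - p) \
    + sig_p*rho_q*dS_pq/(q - p)

B_pr = sig_p*rho_r*S_pr/(r - p)
B_rq = sig_r*rho_q*S_rq/(q - r)
# rho_r sigma_r = 1: the plane-wave factors cancel at coinciding arguments
rhs = (B_pr*B_rq).subs(rho_r, 1/sig_r)

assert sp.simplify(dB_pq - rhs) == 0
```

The cancellation of the $S_{p,q}$ terms between the gauge factors and \eqref{eq:Screls} is what makes the gauge-transformed quantities obey the bare Darboux equations.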
The eigenfunctions of the Lax multiplet are obtained from \begin{equation}\label{eq:eigen} \phi_a(k)=\frac{S_{a,k}\rho_k}{a-k}\ , \quad \psi_b(k')=\frac{S_{k',b}\sigma_{k'}}{b-k'}\ , \end{equation} which obey the set of relations \begin{subequations}\label{eq:ukrels}\begin{align} & (p-a) \phi_a(k)=T_p\phi_a(k)-S_{a,p}T_p\phi_p(k)\ , \\ & (p-b)T_p\psi_b(k')=\psi_b(k')-\psi_{p}(k')T_pS_{p,b}\ , \\ & (a-p)\frac{\partial}{\partial\xi_p}\phi_a(k) =\phi_a(k) -S_{a,p}\phi_p(k)\ , \\ & -(b-p)\frac{\partial}{\partial\xi_p} \psi_b(k') = \psi_b(k') -\psi_{p}(k')S_{p,b}\ , \end{align}\end{subequations} the compatibility conditions of which reproduce eqs. \eqref{eq:Srels}. Within the setting of the DL approach, the following combination of the quantities $S$, for arbitrary values of $c$, possesses a quadratic eigenfunction expansion of the form \begin{equation}\label{eq:quadexps} S_{a,b}-S_{a,c}S_{c,b} = \iint_{D} d\zeta(l,l^\prime) (l-l') \frac{(c-a)(c-b)}{(c-l)(c-l')}\,\phi_a(l)\psi_{b}(l')\ . \end{equation} Here the integration is with an arbitrary measure ${\rm d}\zeta(l,l')$ over a region $D\subset \mathbb{C}\times\mathbb{C}$ of values of the spectral variables $l,l'$. Under special conditions these integrals correspond to the generalised Cauchy integrals arising in the $\bar{\partial}$ problem or nonlocal Riemann-Hilbert problems for the KP type spectral problems, cf. \cite{ZM85,Konop}. (The choices of $c$ must be such that singularities in the integrals are avoided, and this requires some conditions on the integrations, which play a role when we consider special solutions. We will not address these issues of analysis here.) Note that when $c=p_\nu$, i.e. it coincides with any of the parameters associated with the Miwa variables $\xi_{p_\nu}$, then the left hand side of \eqref{eq:quadexps} coincides with the expression on the right-hand side of \eqref{eq:Srels}. In other words,
the right-hand side of \eqref{eq:quadexps} provides a quadratic eigenfunction expansion for the derivative of $S_{a,b}$ w.r.t. $\xi_p$ (modulo a constant factor). However, \eqref{eq:quadexps} is independent of the choice of Miwa variables and holds for any $c$. In particular, in the limit $c\to\infty$ we obtain the following fundamental bilinear identity for the $\tau$-function associated with the choice of measure and integration region $D$: \begin{align}\label{eq:fundtau} & \iint_{D} d\zeta(l,l^\prime) \frac{(l-l')\rho_l\sigma_{l'}}{(a-l')(a'-l)}(T_{a'}^{-1}T_{l}\tau)(T_{l'}^{-1}T_{a}\tau)= \nonumber \\ & \qquad = \tau\,(T_{a'}^{-1}T_{a}\tau)-(T_{a'}^{-1}\tau)(T_{a}\tau)\ , \end{align} which can be considered as a \textit{bilinear integro-difference equation} for the $\tau$-function. The relation \eqref{eq:fundtau} is reminiscent of the fundamental bilinear identity that plays a central role in the Sato approach to the KP hierarchy, cf. also \cite{BogdKonopel}, which is however not the approach taken here to derive this relation. It may be useful to mention at this juncture that, while all definitions of a $\tau$-function are in a sense non-universal and depend on the solution class under consideration, what may be the most general definition of a $\tau$-function was formulated in \cite{Nij-tau85}, namely in terms of a fermionic path integral associated with the direct linearising transform structure. \section{Generalisation to the matrix case} In a talk at the June 1987 NEEDS meeting I presented a 2+1-dimensional Lagrangian matrix KP system, which effectively amounts to a matrix generalisation of the Darboux system, and which became a focus of attention in the mid 1990s. We proposed the following Lagrangian, cf. \cite{NijMaill}, \begin{align}\label{eq:GLagr} \mathcal{L}_{ijk} & = \tfrac{1}{2}\textrm{tr}\left\{ G_{ij}J_i(\partial_k G_{ji})J_j-(\partial_k G_{ij})J_i G_{ji}J_j +\textrm{cycl.
(ijk)} \right\} \nonumber \\ & \quad - \textrm{tr}\left\{G_{ij} J_i G_{ki} J_k G_{jk} J_j -G_{ji} J_j G_{kj} J_k G_{ik} J_i \right\}\ , \end{align} which is a matrix generalisation of \eqref{eq:Lagr}. In fact, the $G_{ij}$ are $N\times N$ matrix functions of dynamical variables $x_i=\xi^{J_i}_{l_i},x_j=\xi^{J_j}_{l_j}, x_k=\xi^{J_k}_{l_k}, \dots $, which are labelled not only by a continuous parameter $l_{\cdot}$ (like the $p,q,r$ in the scalar case), but also by a matrix $J_{\cdot}$ which in a sense 'tunes' a hierarchy of associated KP type equations, while the $J_i,J_j,J_k$ are constant $N\times N$ matrices, which commute among themselves\footnote{In fact, one can also consider the non-commutative case $[J_i,J_j]=\Gamma_{ij}^kJ_k$, in which case we get non-commuting flows on a loop group, for which a Lagrangian description was proposed recently in \cite{CNSV2022} for (1+1)-dimensional systems.}, i.e., $[J_i,J_j]=[J_j,J_k]=[J_k,J_i]=0$. In \eqref{eq:GLagr} we have denoted $\partial/\partial \xi_{l_j}=:\partial_j$, etc. for the sake of brevity. Like \eqref{eq:Lagr} the Lagrangian \eqref{eq:GLagr} can be viewed as a component of a Lagrangian 3-form \begin{equation} \label{eq:matLagr3form} \mathsf{L}= \sum_{i<j<k} \mathcal{L}_{ijk}\,{\rm d}x_i\wedge {\rm d}x_j\wedge {\rm d}x_k\ , \end{equation} which is closed on solutions of the Euler-Lagrange equations \begin{equation}\label{eq:Geqs} \partial_i G_{jk}=G_{ik}J_iG_{ji}\ , \quad i\neq j\neq k\neq i\ . \end{equation} The main statement is that these Lagrangians form the components of a Lagrangian 3-form. Thus, we have \begin{theorem} The Lagrangian 3-form \eqref{eq:matLagr3form} has a double zero on solutions of the fundamental set of equations \eqref{eq:Geqs}. \end{theorem} \begin{proof} The proof is again computational, and in essence similar to the one of Theorem 2.2, with the main difference occurring in the matrix ordering within the trace.
Computing the differential of $\mathsf{L}$ we get in the matrix case \[ {\rm d}\mathsf{L}=\sum_{i,j,k,l} \mathcal{A}_{ijkl}\,{\rm d}x_i\wedge {\rm d}x_j\wedge {\rm d}x_k\wedge {\rm d}x_l\ , \] with \begin{align*} \mathcal{A}_{ijkl}=& \tfrac{1}{2}{\rm tr}\left\{ \Gamma_{l;i,j} J_i\Gamma_{k;j,i} J_j - \Gamma_{k;i,j} J_i\Gamma_{l;j,i} J_j \right. \\ &\qquad + \Gamma_{l;k,i} J_k\Gamma_{j;i,k} J_i - \Gamma_{j;k,i} J_k\Gamma_{l;i,k} J_i \\ &\qquad \left. + \Gamma_{l;j,k} J_j\Gamma_{i;k,j} J_k - \Gamma_{i;j,k} J_j\Gamma_{l;k,j} J_k\pm {\rm cycl}\ \ (ijkl) \right\}\ , \end{align*} where the cyclic permutation over the indices $(i,j,k,l)$ is done with alternating signs of the six terms inside the bracket, resulting in 24 terms in total. Here the quantities $\Gamma$ are given by \[ \Gamma_{i;j,k}=\partial_iG_{jk}-G_{ik}J_i G_{ji}\ , \] and hence we have a double zero expansion of ${\rm d}\mathsf{L}$, implying that the generalised Euler-Lagrange equations arising from $\delta{\rm d}\mathsf{L}=0$, with all $G_{ij}$ (for different indices) varied independently, give rise to the entire system of matrix Darboux equations as the condition for the critical point of the action \[ S[G_{\cdot,\cdot}(\boldsymbol{x});\mathcal{V}]=\int_{\mathcal{V}} \mathsf{L} \ , \] viewed as a functional of all the matrix fields $G_{\cdot,\cdot}$ as well as of the hypersurfaces $\mathcal{V}$ in the space of independent variables. As a consequence of the double-zero expansion we have ${\rm d}\mathsf{L}=0$ for the fields $G$ obeying the set of EL equations, and hence the action is independent of the choice of hypersurface for those critical fields. \end{proof} More or less simultaneously with our paper \cite{NijMaill}, and independently, Bogdanov and Manakov investigated a (2+1)-dimensional Lagrangian matrix system, cf. \cite{BogdMan}.
In retrospect both systems are very similar and originate from the consideration of nonlocal inverse problems, either through Direct Linearisation in the case of \cite{NijMaill}, in a framework also exploited for the case of 3-dimensional matrix lattice equations, cf. \cite{Nij85LMP,NC90}, or using a nonlocal $\bar{\partial}$ problem in the case of \cite{BogdMan}. Like in the scalar case of section 2, these Lagrangians can be seen as components of a Lagrangian 3-form, and in a precise sense they generate the entire hierarchy of matrix KP equations (I will not dwell on that aspect in the present note). To be more precise let us first, in the notation of \cite{NijMaill}, specify the matrix (or, in the parlance of the last decade, the non-Abelian or non-commutative) KP structure. Note that one of the first papers that addressed the matrix KP system, from an inverse scattering point of view, was \cite{Konop82}. The main set of equations, in fact the matrix generalisation of \eqref{eq:Srels}, is the family of relations given by \begin{equation}\label{eq:matHeq} \partial_k^J H_{ab}=\frac{J}{k-a}H_{ab}-H_{ab}\frac{J}{k-b} +H_{ak}JH_{kb}\ , \end{equation} where $k\neq a\neq b\neq k$ are complex valued parameters and the derivative $\partial_k^J$ is taken with respect to a Miwa type variable $\xi^J_k$ characterised by the constant matrix $J$ as well as the label $k$, which here is a complex parameter. The family of equations \eqref{eq:matHeq} is multidimensionally consistent for different values of $k$ and commuting sets of matrices $J$, as can be readily verified. A Lagrangian for the set of equations \eqref{eq:matHeq} is given by \begin{align}\label{eq:HLagr} \mathcal{L}_{klm} & =\tfrac{1}{2}{\rm tr}\left\{ H_{ml}\widetilde{J}(\partial^J_k H_{lm})\widehat{J} - (\partial_k^J H_{ml})\widetilde{J} H_{lm} \widehat{J} \right. \nonumber \\ & + H_{km}\widehat{J}(\partial^{\widetilde{J}}_l H_{mk})J - (\partial_l^{\widetilde{J}} H_{km})\widehat{J} H_{mk} J \nonumber \\ & \left.
+ H_{lk}J(\partial^{\widehat{J}}_m H_{kl})\widetilde{J} - (\partial_m^{\widehat{J}} H_{lk})J H_{kl} \widetilde{J} \right\} \nonumber \\ & + {\rm tr} \left\{ H_{ml}\widetilde{J}H_{lm}\frac{J\widehat{J}}{k-m} -H_{ml} \frac{\widetilde{J}J}{k-l} H_{lm} \widehat{J} \right. \nonumber \\ & + H_{km}\widehat{J}H_{mk}\frac{\widetilde{J}J}{l-k} -H_{km} \frac{\widehat{J}\widetilde{J}}{l-m} H_{mk} J \nonumber \\ & \left. + H_{lk}JH_{kl}\frac{\widehat{J}\widetilde{J}}{m-l} -H_{lk} \frac{\widehat{J}J}{m-k} H_{kl} \widetilde{J}\right\} \nonumber \\ & + {\rm tr}\left\{ H_{lm}\widehat{J} H_{mk} J H_{kl} \widetilde{J} - H_{ml}\widetilde{J} H_{lk} J H_{km} \widehat{J} \right\}\ , \end{align} which is essentially equivalent to the Lagrangian of \cite{BogdMan}. The variational equations \[ \frac{\delta\mathcal{L}_{klm}}{\delta H_{ml}^T}=0 \quad \Rightarrow \quad \partial_k^J H_{lm}=\frac{J}{k-l}H_{lm}-H_{lm}\frac{J}{k-m} +H_{lk}JH_{km}\ , \] and similarly the other equations with $k,l,m$ and $J,\widetilde{J}, \widehat{J}$ respectively, all permuted, follow from this Lagrangian. By expanding the Miwa variables we can derive Lagrangians for the matrix KP hierarchy (examples of matrix KP hierarchy equations arising from the analogous Lagrange structure were provided in \cite{BogdMan}). The main new insight provided here, which is a direct consequence, in fact a specification, of Theorem 5.1, is that this Lagrangian structure can be extended to a Lagrangian 3-form structure for the matrix KP hierarchy in (matrix) Miwa variables\footnote{Since integrable matrix hierarchies comprise not a single sequence of higher time-flows, but several families, each generated by a zeroth-order time-flow associated with a constant matrix $J$, this matrix serves as the label for the corresponding sequence of higher times $t^J_j$, cf. e.g. \cite{Dickey-book}, and associated Miwa-type variables $\xi_p^J$ can be defined by 'compounding' those hierarchies in the sense of \cite{Nij88}, i.e. 
constructing weighted sums of higher time derivatives as in \eqref{eq:Miwa}. }, provided by \begin{align*} \mathsf{L} = & \mathcal{L}_{klm} {\rm d}\xi^J_k\wedge {\rm d}\xi_l^{\widetilde{J}}\wedge {\rm d}\xi_m^{\widehat{J}}+ \mathcal{L}_{lmn} {\rm d}\xi^{\widetilde{J}}_l\wedge {\rm d}\xi_m^{\widehat{J}}\wedge {\rm d}\xi_n^{\overline{J}} \nonumber \\ & + \mathcal{L}_{mnk} {\rm d}\xi^{\widehat{J}}_m\wedge {\rm d}\xi_n^{\overline{J}}\wedge {\rm d}\xi_k^{J} +\mathcal{L}_{nkl} {\rm d}\xi^{\overline{J}}_n\wedge {\rm d}\xi_k^{J}\wedge {\rm d}\xi_l^{\widetilde{J}} \end{align*} (which can be readily extended to a multi-sum involving more variables of the type $\xi_k^J$ with different labels and different matrices $J$). In conclusion, this provides the proper variational structure of the matrix KP hierarchy in its generating form. 
This is a direct consequence of Theorem 5.1, where the correspondence between the matrices $G_{kl}$ and the matrices $H_{kl}$ is obtained by introducing matrix analogues of the plane-wave factors $\rho_k$ and $\sigma_{k'}$, given by nonsingular $N\times N$ matrices $\varphi^0_k$ and ${}^t\!\varphi^0_l$ obeying \[ \partial_m^J\varphi^0_k=\frac{J}{m-k}\varphi^0_k\ , \quad \partial_m^J {}^t\!\varphi^0_l= -{}^t\!\varphi^0_l\frac{J}{m-l}\ , \] (where the superscript $\phantom{}^0$ indicates that these are 'free' solutions of the underlying linear system), and setting $$ G_{kl}={}^t\!\varphi^0_l H_{lk}\varphi_k^0\ , \quad {\rm and}\quad J_m=({}^t\!\varphi^0_m)^{-1} J (\varphi^0_m)^{-1}\ . $$ As a consequence, relying on Theorem 5.1, the Lagrangians \eqref{eq:HLagr} form the components of a Lagrangian 3-form whose generalised EL equations provide the system of equations \eqref{eq:matHeq}. This is essentially the generating set of equations for the matrix KP hierarchy. \section{Discussions}\label{S:Concl} The results in this paper generalise in an essential way those of \cite{SNC21}, where the multiform structure of the KP hierarchy was established in the conventional presentation in terms of pseudo-differential operators. In this paper we consider the KP hierarchy from the point of view of \textit{generating PDEs}, namely through their representation in terms of Miwa variables. This has the advantage that the structure becomes much more covariant. Thus the KP hierarchy is treated as a multi-parameter family of equations in the sense of what we called a \emph{generating PDE}, i.e. a PDE in terms of Miwa-type variables, which by expansion in powers of the parameters leads to the conventional hierarchy of KP equations in multi-time form (in the case of (1+1)-dimensional hierarchies these generating PDEs are obtained from the conventional enumerative hierarchies by a process of 'compounding', cf. \cite{Nij88}). 
A connection between Lagrangian multiforms in this parameter-family representation and the classical $r$-matrix was recently put forward in \cite{CaudStopp}. Lifting those results to the case of the (2+1)-dimensional KP hierarchy could provide a novel route to the quantisation of the KP system. Identifying a classical (and possibly, in due course, a quantum) $R$-matrix for the KP system would form a major step towards both a canonical and a path-integral route to its quantisation. In this context it is worth mentioning another connection. In the Direct Linearising Transform (DLT) approach to the KP system (discrete as well as continuous), cf. \cite{NCWQ84,Nij85LMP,NC90,FN16}, the invariance under integral transforms with a kernel $G_{kk'}$ is the key element of the construction. This kernel is a path-independent line integral in the space of independent variables of the system, of a 1-form which is closed on solutions of the equations of motion and is constructed from the eigenfunctions of the Lax multiplet. The kernel $G_{kk'}$ solves the following class of generalised Darboux systems \begin{equation}\label{eq:Geq} \partial_i G_{kk'}= \iint_{D_i} G_{lk'}\,{\rm d}\zeta_i(l,l')\, G_{kl'}\ , \quad i\in I, \end{equation} where the integration is over a set (labelled by $I$) of domains $D_i\subset \mathbb{C}\times\mathbb{C}$ in some spectral-type variables $l$ and $l'$, with respect to a set of matrix-valued measures ${\rm d}\zeta_i(l,l')$ on those domains. The independent variables are assumed to be characterised by the integration data: $x_i=x(\zeta_i,D_i)$ and $\partial_i=\partial/\partial x_i$. Notable is the dual role played by $G_{kk'}$: on the one hand it is the integral kernel of an integral transform, on the other hand it is a solution of a parameter-family of nonlinear equations of Darboux type, which can be reconstructed from the quantities $H_{k,k'}$ of the previous section. 
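To make the link with the matrix Darboux system explicit, consider the following minimal special case (a sketch, assuming that such singular point measures are admissible and identifying the label $i$ with a point in the spectral plane): take \[ {\rm d}\zeta_i(l,l')=J_i\,\delta(l-i)\,\delta(l'-i)\,{\rm d}l\,{\rm d}l'\ . \] Then \eqref{eq:Geq} collapses to \[ \partial_i G_{kk'}=G_{ik'}\,J_i\,G_{ki}\ , \] i.e. $\Gamma_{i;k,k'}=0$ in the notation used in the proof of Theorem 5.1, so the generalised Darboux system \eqref{eq:Geq} contains the matrix Darboux equations as its simplest reduction. 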
Most important in the present context is the observation that this general system can be endowed with a Lagrangian 3-form structure very similar to the ones described in the previous section, namely given by the Lagrangian components \begin{align}\label{eq:dZLagr} \mathcal{L}_{ijk}=& \tfrac{1}{2}\iint_{D_i}\iint_{D_j} {\rm tr}\left\{ G_{l,k'} \,{\rm d}\zeta_i(l,l')\, (\partial_k G_{k,l'})\, {\rm d}\zeta_j(k,k') \right. \nonumber\\ & -\left. (\partial_k G_{l,k'})\, {\rm d}\zeta_i(l,l')\, G_{k,l'}\, {\rm d}\zeta_j(k,k') + {\rm cycl}(ijk) \right\} \nonumber \\ & + \iint_{D_i} \iint_{D_j} \iint_{D_k} {\rm tr}\left\{ G_{l,k'}\,{\rm d}\zeta_i(l,l')\,G_{m,l'}\,{\rm d}\zeta_j(m,m')\,G_{k,m'}\, {\rm d}\zeta_k(k,k') \right. \nonumber \\ & \qquad \qquad \left. - G_{l,k'}\,{\rm d}\zeta_i(l,l')\,G_{m,l'}\,{\rm d}\zeta_k(m,m')\,G_{k,m'}\, {\rm d}\zeta_j(k,k') \right\} \ . \end{align} By similar computations, and under some generous assumptions on the integrations appearing in the formula, one can prove statements analogous to those of the previous sections: the Lagrangian 3-form with components given by \eqref{eq:dZLagr} possesses a Lagrangian multiform structure. This is arguably the most general multiform structure considered in the theory so far. Note also that the corresponding action functional $S[G_{\cdot,\cdot}(\boldsymbol{x});\mathcal{V}]$, where as before $\mathcal{V}$ is an arbitrary 3-dimensional hypersurface in the space of independent variables $\boldsymbol{x}=(\{ x_i, i\in I\})$, shows some resemblance to action functionals associated with Chern-Simons theory in topological field theory, but this connection still remains to be explored. 
\subsection*{Acknowledgements} The author has benefited from discussions with V. Caudrelier, L. Peng, D. Sleigh and M. Vermeeren on many issues regarding Lagrangian multiform theory. He has received support from EPSRC grant EP/W007290/1. \small
\section{Introduction} A compact Riemannian symmetric space $M$ is of the form $M=G/H$, where $G$ is a connected compact Lie group with an involutive automorphism $\theta$, and $(G^{\theta})^{0}\subset H\subset G^{\theta}$. Here $G$ is endowed with a Riemannian metric induced from an adjoint-invariant positive-definite inner product on the Lie algebra of $G$; and $M=G/H$ is endowed with the quotient metric, which is $G$-invariant and makes $M$ into a symmetric space. For any point $x\in M$, there is a {\it geodesic symmetry} $s_{x}$ characterized by the following properties: (1) $s_{x}(x)=x$; (2) it is an isometry of $M$, hence a diffeomorphism; (3) the tangent map $(s_{x})_{\ast}: T_{x}(M)\rightarrow T_{x}(M)$ is $-1$. The geodesic symmetry at the origin $o=eH\in M=G/H$ is just $s_{o}(gH)= \theta(g)H$ ($\forall g\in G$). We call a nonempty subset $X$ of $M$ an {\it antipodal set} if $s_{x}(y)=y$ for any $x,y\in X$. Any antipodal set is a finite set, as it is discrete and $M$ is compact. An antipodal set is said to be a {\it maximal antipodal set} if it is not properly contained in any other antipodal set. In \cite{Chen-Nagano}, Chen and Nagano introduced and calculated the invariant 2-number $\#_{2}(M)$ of a compact symmetric space, which is the maximal cardinality of antipodal sets in a compact symmetric space. Following this paper, there have been many studies of maximal antipodal sets. In particular, Tanaka and Tasaki classified maximal antipodal sets for several classes of compact symmetric spaces (\cite{Tanaka-Tasaki1}, \cite{Tanaka-Tasaki2}). The reader may consult \cite{Chen} for an excellent survey on the study of 2-numbers and maximal antipodal sets. In this paper, we study maximal antipodal sets in general compact symmetric spaces. More precisely, we give a method of classifying maximal antipodal sets in compact symmetric spaces, and present an explicit classification of maximal antipodal sets in irreducible compact symmetric spaces. 
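For orientation, we recall two standard examples, both already contained in \cite{Chen-Nagano}. On the sphere $S^{n}$ the geodesic symmetry $s_{x}$ is the restriction of the reflection of $\mathbb{R}^{n+1}$ in the line $\mathbb{R}x$, whose only fixed points on $S^{n}$ are $\pm x$; hence every maximal antipodal set is a pair of antipodal points $\{x,-x\}$, and $\#_{2}(S^{n})=2$. On the real projective space $\mathbb{RP}^{n}$, the points $[e_{1}],\dots,[e_{n+1}]$ given by the lines spanned by an orthonormal basis of $\mathbb{R}^{n+1}$ form a maximal antipodal set, and $\#_{2}(\mathbb{RP}^{n})=n+1$. 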
\smallskip Set $$\overline{G}=G\rtimes\langle\overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta.$ Write $$C_{\overline{\theta}}=\{g\overline{\theta}g^{-1}: g\in G\}.$$ The Cartan quadratic morphism (cf. \cite{Chen}) is a map $\phi: G/G^{\theta}\rightarrow G$ defined by $$\phi(gG^{\theta})=g\theta(g)^{-1}$$ ($\forall g\in G$). Let $X$ be a subset of $M$ containing the origin $o=eG^{\theta}\in M$. Write $$\phi(X)=\{\phi(x): x\in X\}$$ and $$F_{2}(X)=\langle\phi(X),\overline{\theta} \rangle\subset\overline{G}.$$ Using the Cartan quadratic morphism, we show a correspondence between maximal antipodal sets in $G/G^{\theta}$ and certain elementary abelian 2-subgroups in $\overline{G}$. \begin{theorem}\label{T1} Let $X$ be a subset of $M=G/G^{\theta}$ containing the origin $o=eG^{\theta}\in M$. Then $X$ is an antipodal set in $M$ if and only if $F_{2}(X)$ is an elementary abelian $2$-subgroup of $\overline{G}$ generated by elements in $C_{\overline{\theta}}$. Moreover, $X$ is a maximal antipodal set if and only if: $F_{2}(X)$ is a maximal element in the set of elementary abelian $2$-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and \begin{equation*}X=\{x\in M: \phi(x)\in F_{2}(X)\cap C_{\overline{\theta}} \overline{\theta}^{-1}\}.\end{equation*} \end{theorem} A compact symmetric space is said to be ``irreducible'' if it is not isogenous to the product of two positive-dimensional compact symmetric spaces (cf. Definition \ref{D:irreducible}). We give a precise list of irreducible compact symmetric spaces. As a byproduct, we show that they are all of the form $M=G/G^{\theta}$. \begin{theorem}\label{T2} Let $M$ be an irreducible compact symmetric space. Then there is a compact Lie group $G$ and an involutive automorphism $\theta$ of $G$ such that $M\cong G/G^{\theta}$. 
\end{theorem} With the list of irreducible compact symmetric spaces, we show a precise classification of maximal antipodal sets in them using Theorem \ref{T1}. The only cases that have not been treated completely are the spin and half-spin groups and some of their quotient symmetric spaces. \smallskip The content of this paper is organized as follows. In Proposition \ref{P:characterization1}, we give a criterion for antipodal sets using the Cartan quadratic morphism $\phi: G/H\rightarrow G$. In case $H= G^{\theta}$, this criterion simplifies greatly in Proposition \ref{P:characterization2}. In Theorem \ref{T:characterization3}, we relate maximal antipodal sets in $G/G^{\theta}$ to certain elementary abelian 2-subgroups and show that they determine each other in some way. In Subsection \ref{SS:Weyl}, we discuss Weyl groups of maximal antipodal sets. In Section \ref{S:symmetric space}, we give a precise list of irreducible compact symmetric spaces not of group form. In particular, we show that they are all of the form $M=G/G^{\theta}$. In Section \ref{S:classification}, we present an explicit classification of maximal antipodal sets in most irreducible compact symmetric spaces. The remaining ones which have not been treated completely are listed in Subsection \ref{SS:open}. In Section \ref{S:anitipodal'}, we give a way of calculating antipodal sets in another sense. \smallskip \noindent{\it Notation and conventions.} Write $\omega_{m}=e^{\frac{2\pi i}{m}}$, which is a primitive $m$-th root of unity. Let $\operatorname{E}_6^{sc}$ (or $\operatorname{E}_6$) denote a connected and simply-connected compact simple Lie group of type $\operatorname{E}_6$; let $\operatorname{E}_{6}^{ad}$ denote a connected adjoint type compact simple Lie group of type $\operatorname{E}_6$. Similarly, we have the notation $\operatorname{E}_7^{sc}$, $\operatorname{E}_7$, $\operatorname{E}_7^{ad}$, $\operatorname{E}_8$, $\operatorname{F}_4$, $\operatorname{G}_2$. 
The last three connected compact Lie groups are both simply-connected and of adjoint type. Write \[J_{m}=\left(\begin{array}{cc} 0&I_{m}\\-I_{m}&0\\\end{array}\right),\quad I_{p,q}=\left(\begin{array} {cc}-I_{p}&0\\0&I_{q}\\\end{array}\right).\] We have involutions $\sigma_{i}$ as specified in Table 1 below (cf. \cite[Section 2 and Section 3]{Huang-Yu} for a precise explanation). In $\operatorname{Spin}(2n)$, write $c=c_{n}= e_{1}\cdots e_{2n}$, where $\{e_1,e_2,\dots,e_{2n}\}$ is a standard orthonormal basis of the Euclidean space based on which $\operatorname{Spin}(2n)$ is defined. In this paper $G$ always means a connected compact Lie group, and $\theta$ means an involutive automorphism of it. \smallskip \noindent{\it Acknowledgement.} A part of this work was done when the author visited MPI Bonn in summer 2016, and a part of the paper was written when the author visited National University of Singapore in January 2018. The author would like to thank both institutions for their support and hospitality. \section{Method of classification}\label{S:method} \subsection{A criterion for antipodal sets}\label{SS:antipodal1} Let $G$ be a connected compact Lie group, $\theta$ an involutive automorphism of $G$, and $H$ a closed subgroup of $G$ with $$(G^{\theta})^{0}\subset H\subset G^{\theta}.$$ Write $M=G/H$, which is a {\it symmetric space}. Let $o=eH$ denote the {\it origin} in $M=G/H$. For any $g\in G$, set $$\phi(g)=g\theta(g)^{-1}.$$ For any $x=gH\in M=G/H$, set $$\phi(x)=\phi(g)=g\theta(g)^{-1}.$$ Clearly, $\phi(x)$ does not depend on the choice of $g$. Thus, we have a well-defined map $$\phi: M=G/H\rightarrow G.$$ The map $\phi$ is called the {\it Cartan quadratic morphism} in the literature (cf. \cite{Chen}). There is a left $G$-action on $G/H$ through $$L_{g}(g'H)=g\cdot g'H=gg'H,$$ and a $G$-action on $G$ itself through $$g\cdot g'=gg'\theta(g)^{-1}.$$ The map $\phi$ is $G$-equivariant with respect to these two actions. 
That is, for any $g\in G$ and any $x\in G/H$, $$\phi(g\cdot x)=g\cdot\phi(x)=g\phi(x)\theta(g)^{-1}.$$ It is clear that $\phi^{-1}(e)=G^{\theta}/H.$ Hence, $\phi$ is an embedding if and only if $H=G^{\theta}$. \begin{prop}\label{P:characterization1} A nonempty subset $X$ of $M=G/H$ is an antipodal set if and only if $\phi(g_{2}^{-1}g_{1})\in H$ for any points $x_{1}=g_{1}H$ and $x_{2}=g_{2}H$ in $X$. \end{prop} \begin{proof} As $L_{g_1}(o)=g_{1}H=x_1$, we have $$s_{x_1}=L_{g_1}s_{o}L_{g_1}^{-1}.$$ Then, $$s_{x_1}(x_2)=L_{g_1}s_{o} L_{g_1}^{-1}(g_2H)=L_{g_1}s_{o}(g_{1}^{-1}g_2H)=L_{g_1}(\theta(g_{1}^{-1}g_{2})H)=g_{1}\theta(g_{1}^{-1}g_{2}) H.$$ Thus, $x_2=s_{x_1}(x_2)$ if and only if $g_{1}\theta(g_{1}^{-1}g_{2})H=g_{2}H.$ That is equivalent to $$g_{2}^{-1}g_{1}\theta(g_{1}^{-1}g_{2})\in H,$$ i.e. to $\phi(g_{2}^{-1}g_{1})\in H$. This proves the proposition. \end{proof} By Proposition \ref{P:characterization1}, a nonempty subset $X$ of $M$ is an antipodal set if and only if $g\cdot X$ is an antipodal set for any $g\in G$. Hence, we may always assume that $o=eH\in X$. \subsection{Antipodal sets and elementary abelian 2-subgroups}\label{SS:antipodal2} Now assume $H=G^{\theta}$. Set $$\overline{G}=G\rtimes\langle\overline{\theta}\rangle\footnotemark,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta.$ Write $$C_{\overline{\theta}}= \{g\overline{\theta}g^{-1}: g\in G\}.$$ Let $X$ be a subset of $M$ containing the origin $o=eH\in M=G/H$. 
Write $$\phi(X)=\{\phi(x): x\in X\},$$ $$F_{1}(X)=\langle\phi(X)\rangle\subset G,$$ $$F_{2}(X)= \langle F_{1}(X),\overline{\theta}\rangle\subset\overline{G}.$$ \footnotetext{Alternatively, we could choose an element $c\in(Z_{G})^{\theta}$ and set $$\overline{G}_{c}= G\rtimes\langle\overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=c$ and $\operatorname{Ad}(\overline{\theta})|_{G} =\theta.$ In this way, $\overline{G}_{c}$ may be a group which is more familiar to us than $\overline{G}$. Then, in Theorem \ref{T:characterization3}, $F_{2}(X)$ is an abelian subgroup of $\overline{G}_{c}$ generated by elements in $C_{\overline{\theta}}$, not necessarily an elementary abelian 2-subgroup.} \begin{prop}\label{P:characterization2} Assume $H=G^{\theta}$. Let $X$ be a subset of $M$ containing the origin $o=eH\in M=G/H$. Then $X$ is an antipodal set if and only if $\phi(x)\in H$ and $\phi(x)^{2}=1$ for any $x\in X$, and $\phi(x)$ commutes with $\phi(y)$ for any $x,y\in X$. \end{prop} \begin{proof} Necessity. Suppose $X$ is an antipodal set. Write $x=gH\in X$. Applying Proposition \ref{P:characterization1} with $x_1=x$ and $x_2=o$, we get $\phi(x)=g\theta(g)^{-1}\in H$. That is, $\theta(\phi(x))= \phi(x)$. We also have $$\theta(\phi(x))=\theta(g\theta(g)^{-1})=\theta(g)g^{-1}=\phi(x)^{-1}.$$ Thus, $\phi(x)=\phi(x)^{-1}$ and $\phi(x)^{2}=1$. Write $x=g_{1}H\in X$ and $y=g_{2}H\in X$. By Proposition \ref{P:characterization1} we have $\phi(g_{2}^{-1}g_{1})\in H$. By the argument above this leads to $$\phi(g_{2}^{-1}g_{1})^{2}=1.$$ Equivalently, $$(g_{2}^{-1}g_{1}\theta(g_{1}^{-1})\theta(g_{2}))^{2}=1.$$ This is also equivalent to $$(\phi(x)\phi(y)^{-1})^{2}=(g_{1}\theta(g_{1}^{-1})\theta(g_{2})g_{2}^{-1})^{2} =1.$$ By the above, $\phi(x)^2=\phi(y)^2=1$; thus, $(\phi(x)\phi(y)^{-1})^{2}=1$ is equivalent to $\phi(x)$ commuting with $\phi(y)$. Sufficiency. 
Suppose $\phi(x)\in H$ and $\phi(x)^{2}=1$ for any $x\in X$, and $\phi(x)$ commutes with $\phi(y)$ for any $x,y\in X$. For any two points $x,y\in X$, write $x=g_{1}H$ and $y=g_{2}H$. Reversing the above argument, using the conditions $\phi(x)^{2}=\phi(y)^{2}=1$ and the commutativity of $\phi(x)$ with $\phi(y)$, one gets $\phi(g_{2}^{-1}g_{1})^{2}=1$. Again by the above argument, this is equivalent to $\phi(g_{2}^{-1}g_{1})\in H$. By Proposition \ref{P:characterization1}, $X$ is an antipodal set. \end{proof} \begin{theorem}\label{T:characterization3} Assume $H=G^{\theta}$. Let $X$ be a subset of $M$ containing the origin $o=eH\in M=G/H$. Then $X$ is an antipodal set if and only if $F_{2}(X)$ is an elementary abelian $2$-subgroup of $\overline{G}$ generated by elements in $C_{\overline{\theta}}$. Moreover, $X$ is a maximal antipodal set if and only if: $F_{2}(X)$ is a maximal element in the set of elementary abelian $2$-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and \begin{equation}\label{Eq:X-F2} X=\{x\in M: \phi(x)\in F_{2}(X) \cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.\end{equation} \end{theorem} \begin{proof} For the first statement, assume $X$ is an antipodal set. Then, $F_{2}(X)$ is an elementary abelian $2$-subgroup of $\overline{G}$ by Proposition \ref{P:characterization2}. For any $x=gH\in X$, we have $$\phi(x)=g\theta(g)^{-1}=g\overline{\theta}g^{-1}\overline{\theta}^{-1}=(g\overline{\theta}g^{-1}) \overline{\theta}^{-1}.$$ As $\overline{\theta}$ and $g\overline{\theta}g^{-1}$ (for any $g\in G$) are in $C_{\overline{\theta}}$, it follows that $F_{2}(X)$ is generated by elements in $C_{\overline{\theta}}$. The converse follows from Proposition \ref{P:characterization2} as well. For the second statement, assume $X$ is a maximal antipodal set. Then, $F_{2}(X)$ is an elementary abelian $2$-subgroup of $\overline{G}$ by the above argument. 
Write $$X'=\{x\in M: \phi(x)\in F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.$$ Then, $X\subset X'$ by the equation $\phi(gH)= (g\overline{\theta}g^{-1})\overline{\theta}^{-1}$ shown above. By the definition of $X'$, we have $F_{2}(X')\subset F_{2}(X)$. Thus, $F_{2}(X')$ is an elementary abelian $2$-subgroup of $\overline{G}$ as well. By Proposition \ref{P:characterization2}, $X'$ is an antipodal set. By the maximality of $X$, we get $X=X'$. By a similar argument, one shows that $F_{2}(X)$ is a maximal element in the set of elementary abelian $2$-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$. The converse is clear. \end{proof} Theorem \ref{T:characterization3} gives a way of classifying maximal antipodal sets $X$ in $G/G^{\theta}$ through classifying maximal elements in the set of elementary abelian 2-subgroups $F$ of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$ and taking $$X=\{x\in M: \phi(x)\in F\cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.$$ \smallskip In the case of $H\neq G^{\theta}$, write $M'=G/G^{\theta}$, and let $$\pi: M=G/H\rightarrow M'=G/G^{\theta}$$ be the projection map. We only give a simple proposition in this case. \begin{prop}\label{P:antipodal-projection} Let $X$ be a nonempty subset of $M$. Then $X$ is an antipodal set in $M$ if and only if $\pi^{-1}(\pi(X))$ is. \end{prop} \begin{proof} Assume that $X$ is an antipodal set in $M$. Any two points in $\pi^{-1}(\pi(X))$ are of the form $x'_{1}=g_{1}h_{1}H$, $x'_{2}=g_{2}h_{2}H$, where $x_{1}=g_{1}H\in X$, $x_{2}=g_{2}H\in X$, and $h_{1},h_{2}\in G^{\theta}$. Then, $$\phi((g_{2}h_{2})^{-1} g_{1}h_{1})=\phi(h_{2}^{-1}g_{2}^{-1}g_{1} h_{1})=\phi(h_{2}^{-1}g_{2}^{-1}g_{1})=h_{2}^{-1}\phi(g_{2}^{-1}g_{1})h_{2}.$$ Since $X$ is an antipodal set, we have $\phi(g_{2}^{-1}g_{1})\in H$ by Proposition \ref{P:characterization1}. 
As $H$ is a normal subgroup of $G^{\theta}$ (by Lemma \ref{L:stabilizer-abelian} below), we get $$\phi((g_{2}h_{2})^{-1} g_{1}h_{1})=h_{2}^{-1}\phi(g_{2}^{-1}g_{1})h_{2}\in H.$$ By Proposition \ref{P:characterization1} again, $\pi^{-1}(\pi(X))$ is an antipodal set in $M$. The converse follows from $X\subset\pi^{-1}(\pi(X))$. \end{proof} \begin{lemma}\label{L:stabilizer-abelian} For any connected compact Lie group $G$ and any automorphism $\theta$ of $G$, $G^{\theta}/(G^{\theta})^{0}$ is an abelian group. \end{lemma} \begin{proof} Write $Z=Z(G)^{0}$ and let $G_{der}^{sc}$ be a universal covering of $G_{der}=[G,G]$. Then, $Z\times G_{der}^{sc}$ is a covering of $G$ and $\theta$ lifts to an involutive automorphism of it, still denoted by $\theta$. Let $p: Z\times G_{der}^{sc}\rightarrow G$ be the projection. Write $Z'=\ker p\subset Z\times Z(G_{der}^{sc})$. Put $$H'=\{g'=(z,g)\in Z\times G_{der}^{sc}: (z,g)^{-1}\theta(z,g)\in Z'\}.$$ Then, $Z'\subset H'$ and $G^{\theta}=H'/Z'$. We show that $H'/H'^{0}$ is abelian; then so is $G^{\theta}/(G^{\theta})^{0}$. For $g'_{1}=(z_1,g_1)\in H'$ and $g'_{2}=(z_2,g_2)\in H'$, there exist $z'_1,z'_2\in Z'$ such that $\theta(g'_1)=g'_{1}z'_{1}$, $\theta(g'_2)=g'_{2}z'_{2}$. Hence, $\theta(g'_{1}g'_{2})=g'_{1}g'_{2}z'_1z'_2$ and $\theta(g'_{2}g'_{1})=g'_{2}g'_{1}z'_{1}z'_{2}$. Then, $\theta(g'_{1}g'_{2}(g'_{2}g'_{1})^{-1})=(g'_{1}g'_{2})(g'_{2}g'_{1})^{-1}$. That is, $$[g'_1,g'_2] =(g'_{1}g'_{2})(g'_{2}g'_{1})^{-1}\in(Z\times G_{der}^{sc})^{\theta}.$$ As $Z$ is abelian, $[g'_1,g'_2]= (1,h)$ for some $h\in G_{der}^{sc}$. By a theorem of Steinberg, $(G_{der}^{sc})^{\theta}$ is connected. Thus, $$[g'_1,g'_2]=(1,h)\in (G_{der}^{sc})^{\theta}\subset H'^{0}.$$ This shows $[H',H']\subset H'^{0}$. That is, $H'/H'^{0}$ is abelian. 
\end{proof} \subsection{Irreducible compact symmetric spaces of adjoint type}\label{SS:antipodal3} Let $G$ be a connected compact simple Lie group\footnotemark of {\it adjoint type} and $\theta$ be an involutive automorphism of $G$. Set $H=G^{\theta}$ and $M=G/H$. Let $X$ be an antipodal set in $M$ containing the origin $o=eH\in M$. \footnotetext{In this paper a compact Lie group $G$ is called ``simple'' if its Lie algebra is a non-abelian real Lie algebra with no proper nonzero ideal.} Let $\mathfrak{u}_0$ be the Lie algebra of $G$, which is a compact simple Lie algebra. As $G$ is assumed to be of adjoint type, we have $G\cong\operatorname{Int}(\mathfrak{u}_0)$. For simplicity we identify $G$ with $\operatorname{Int}(\mathfrak{u}_0)$, and regard $\theta$ as an element of $\operatorname{Aut}(\mathfrak{u}_0)$ which acts on $G=\operatorname{Int}(\mathfrak{u}_0)$ by conjugation. We divide the discussion into two cases: (1) $\theta$ is an inner automorphism; (2) $\theta$ is an outer automorphism. In the first case, $\theta\in\operatorname{Int}(\mathfrak{u}_0)=G$, and $\overline{\theta}\theta^{-1}$ is a central element of $\overline{G}$. Thus, $$\overline{G}=G\times\langle\overline{\theta}\theta^{-1}\rangle.$$ Let $$p:\overline{G}\rightarrow\operatorname{Int}(\mathfrak{u}_0)=G$$ be the adjoint homomorphism. Then $p|_{G}=\operatorname{id}$ and $\ker p= \langle\overline{\theta}\theta^{-1}\rangle$. Write $$F(X)=p(F_{2}(X)),$$ which is an elementary abelian 2-subgroup of $G$. Since $\overline{\theta}\in F_{2}(X)$, we have $\theta\in F(X).$ Write $$C_{\theta}= \{g\theta g^{-1}: g\in\operatorname{Int}(\mathfrak{u}_0)\}\subset\operatorname{Int}(\mathfrak{u}_0).$$ By Theorem \ref{T:characterization3}, $F_{2}(X)$ is generated by elements in $C_{\overline{\theta}}$. Thus, $F(X)$ is generated by elements in $C_{\theta}$. In the second case, $\theta\in\operatorname{Aut}(\mathfrak{u}_0)-\operatorname{Int}(\mathfrak{u}_0)$. 
We may identify $\overline{\theta}$ with $\theta\in\operatorname{Aut}(\mathfrak{u}_0)$ and regard $\overline{G}$ as a subgroup of $\operatorname{Aut}(\mathfrak{u}_{0}).$ Let $F(X)=F_{2}(X)$. By Theorem \ref{T:characterization3}, $F(X)$ is generated by elements in $$C_{\theta}=\{g\theta g^{-1}: g\in\operatorname{Int}(\mathfrak{u}_0)\}\subset\operatorname{Aut}(\mathfrak{u}_0).$$ Since $\overline{\theta}\in F_{2}(X)$, we have $\theta\in F(X).$ \begin{theorem}\label{T:X-F} Let $G$ be a compact simple Lie group of adjoint type, and $H=G^{\theta}$. Assume that $X$ is a maximal antipodal set in $M=G/H$ containing the origin $o=eH$. Then, $F(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\operatorname{Aut}(\mathfrak{u}_0)$ which contain $\theta$ and are generated by elements in $C_{\theta}$, \begin{equation}\label{Eq:F2-F} F_{2}(X)=\langle\{g\overline{\theta}g^{-1}: g\in G, g\theta g^{-1}\in F(X)\}\rangle,\end{equation} \begin{equation}\label{Eq:F1-F}F_{1}(X)=\langle\{g\overline{\theta}g^{-1}\overline{\theta}^{-1}: g\in G,g\theta g^{-1}\in F(X)\}\rangle,\end{equation} and \begin{equation}\label{Eq:X-F}X= \{gH:g\theta g^{-1}\in F(X)\}.\end{equation} Conversely, if $F(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\operatorname{Aut}(\mathfrak{u}_0)$ which contain $\theta$ and are generated by elements in $C_{\theta}$, then $X=\{gH:g\theta g^{-1}\in F(X)\}$ is a maximal antipodal set in $M$. \end{theorem} \begin{proof} By Theorem \ref{T:characterization3}, $F(X)$ is an elementary abelian 2-subgroup of $\operatorname{Aut}(\mathfrak{u}_0)$ generated by elements in $C_{\theta}$. The other parts of the first statement follow from Equation (\ref{Eq:F2-F}). More precisely, by Theorem \ref{T:characterization3}, $F_{2}(X)$ is a maximal element in the set of elementary abelian $2$-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$. 
By Equation (\ref{Eq:F2-F}), $F_{2}(X)$ and $F(X)$ determine each other. Thus, $F(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\operatorname{Aut}(\mathfrak{u}_0)$ which are generated by elements in $C_{\theta}$. By the definitions of $F_{1}(X)$, $F_{2}(X)$, $F(X)$, and the maximality of $X$, Equations (\ref{Eq:F1-F}) and (\ref{Eq:X-F}) follow from Equation (\ref{Eq:F2-F}). Now we show Equation (\ref{Eq:F2-F}). Write $$F'_2(X)=\langle\{g\overline{\theta}g^{-1}: g\in G, g \theta g^{-1}\in F(X)\}\rangle.$$ By the definitions of $F_{2}(X)$, $F(X)$ and $F'_{2}(X)$, we get $F_{2}(X)\subset F'_{2}(X)$. Thus, $\overline{\theta}\in F'_{2}(X)$ as $\overline{\theta}\in F_{2}(X)$. Then, $F'_{2}(X)$ is generated by elements in $C_{\overline{\theta}}$. By Theorem \ref{T:characterization3}, $F_{2}(X)$ is a maximal element in the set of elementary abelian $2$-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$. Thus, $F_{2}(X)=F'_{2}(X)$. The converse statement is clear. \end{proof} \begin{remark}\label{R:X-F} By Theorem \ref{T:X-F}, one can classify maximal antipodal sets in $$M=\operatorname{Int}(\mathfrak{u}_0)/\operatorname{Int}(\mathfrak{u}_0)^{\theta}$$ for any involution $\theta\in\operatorname{Aut}(\mathfrak{u}_0)$ from the classification of elementary abelian 2-subgroups of $\operatorname{Aut}(\mathfrak{u}_0)$ given in \cite{Yu}. This is just routine work, and we omit the details here. In Section \ref{S:classification}, we refer back to this several times. \end{remark} \subsection{Weyl group}\label{SS:Weyl} Now let $H=G^{\theta}$, $M=G/H$. Assume that $X$ is an antipodal set in $M$. 
Define $$\psi: M=G/H\rightarrow \overline{G}$$ by $$\psi(gH)=g\overline{\theta}g^{-1}.$$ Put $\psi(X)=\{\psi(x): x\in X\}.$ By Theorem \ref{T:characterization3}, one can show that $X$ is an antipodal set in $M$ if and only if $\psi(X)$ generates an elementary abelian 2-subgroup of $\overline{G}$ (without assuming $o=eG^{\theta}\in X$), still denoted by $F_{2}(X)$. Set $$N_{G}(X)=\{g\in G: g\cdot X=X\},\quad Z_{G}(X)=\{g\in G:g\cdot x=x,\ \forall x\in X\},$$ and $W_{G}(X)=N_{G}(X)/Z_{G}(X)$. Clearly, the conjugation action of any $g\in N_{G}(X)$ on $G$ stabilizes $F_{2}(X)$, and the induced action on $F_{2}(X)$ is trivial if and only if $g\in Z_{G}(X)$. Thus, we have an injective homomorphism $$W_{G}(X)\hookrightarrow W_{G}(F_{2}(X)).$$ There is a characterization of $W_{G}(X)$, $$W_{G}(X)=\{w\in W_{G}(F_{2}(X)): w\cdot \psi(X)=\psi(X)\}.$$ \begin{prop} In the above, if $X$ is a maximal antipodal set in $M$, then $$W_{G}(X)=W_{G}(F_{2}(X)).$$ \end{prop} \begin{proof} Write $X'=\{x\in M: \psi(x)\in F_{2}(X)\}$. By Proposition \ref{P:characterization1}, one can show that $X'$ is an antipodal set. Clearly, $X\subset X'$. By the maximality of $X$, we have $X=X'$. Thus, $$\psi(X)= \psi(X')= C_{\overline{\theta}}\cap F_{2}(X).$$ As the conjugation action of any $w\in W_{G}(F_{2}(X))$ on $F_{2}(X)$ preserves conjugacy classes of elements, it stabilizes $C_{\overline{\theta}}\cap F_{2}(X)=\psi(X)$. Therefore, $$W_{G}(X)=\{w\in W_{G}(F_{2}(X)): w\cdot \psi(X)=\psi(X)\}=W_{G}(F_{2}(X)).$$ \end{proof} Now assume that $G=\operatorname{Int}(\mathfrak{u}_0)$ for a compact simple Lie algebra $\mathfrak{u}_0$, $\theta\in\operatorname{Aut}(\mathfrak{u}_0)$ an involution, $M=G/G^{\theta}$, and $o\in X$. In this case we have an identification $$W_{G}(F_{2}(X))= W_{G}(F(X)).$$ Thus, $$W_{G}(X)=W_{G}(F(X)).$$ \section{A precise list of irreducible compact symmetric spaces}\label{S:symmetric space} Analogous to the case of Lie groups, we use coverings to define isogeny for symmetric spaces.
\begin{definition}\label{D:isogeny} Two compact symmetric spaces $M_1,M_2$ are said to be isogenous if there is another compact Riemannian symmetric space $M$ and two (finite) Riemannian coverings $\phi_1: M_1\rightarrow M$ and $\phi_2: M_2\rightarrow M$. \end{definition} We define irreducible symmetric spaces as follows. \begin{definition}\label{D:irreducible} A compact symmetric space $M$ is said to be irreducible if there exist no positive-dimensional compact symmetric spaces $M_1,M_2$ such that $M$ is isogenous to $M_{1}\times M_{2}$. \end{definition} It is easy to show that any irreducible compact symmetric space $M$ is either a compact simple Lie group, or is of the form $M=G/H$ with $G$ a compact simple Lie group, $\theta$ an involutive automorphism of $G$, and $(G^{\theta})^{0}\subset H\subset G^{\theta}$. In the first case we say $M$ is of {\it group form}. \smallskip In this section we give a list of all irreducible compact symmetric spaces not of group form. This is done through a case-by-case verification carried out in the following subsections. The following result is obtained as a byproduct. \begin{theorem}\label{T:standard-form} Let $M$ be an irreducible compact symmetric space not of group form. Then there is a compact simple Lie group $G$ and an involutive automorphism $\theta$ of $G$ such that $M\cong G/G^{\theta}$. \end{theorem} \begin{question}\label{Q:symmetric} Is any compact symmetric space $M$ of the form $M\cong G/G^{\theta}$ for some compact Lie group $G$ and an involutive automorphism $\theta$ of $G$? \end{question} \begin{remark}\label{R:symmetric} To answer Question \ref{Q:symmetric} affirmatively, we need to find an explicit list of all triples $(G,\theta,G^{\theta})$ and all compact symmetric spaces, and to see if $G/G^{\theta}$ in the former realizes all symmetric spaces\footnotemark.
If the answer to this question is affirmative, then it is possible to extend the classification in this paper to the classification of maximal antipodal sets of many other symmetric spaces.\footnotetext{Let $M$ be a compact symmetric space. Take a point $x\in M$. Write $G'=\operatorname{Isom}(M)^{0}$ and $H'=\operatorname{Stab}_{G'}(x)$. Let $\theta$ be the geodesic symmetry of $M$ at $x$. Then, $H'\subset G'^{\theta}$ and $M\cong G'/H'$. We could approach the question by showing $(G'^{\theta})^{0}\subset H'\subset G'^{\theta}$, listing all possible $(G',\theta,H')$, and lifting $\theta$ to a covering $G$ of $G'$ such that $G'/H'\cong G/G^{\theta}$.} \end{remark} Table 1 is reproduced from \cite[Table 2]{Huang-Yu}. It includes a list of representatives of conjugacy classes of involutions $\theta$ in $\operatorname{Aut}(\mathfrak{u}_0)$ for $\mathfrak{u}_0$ a compact simple Lie algebra. It gives some information on the corresponding symmetric pairs $(\mathfrak{u}_0,\mathfrak{u}_0^{\theta})$, and describes the symmetric subgroups $\operatorname{Aut}(\mathfrak{u}_0)^{\theta}$. The labels AI--G follow Cartan's notation for symmetric spaces.
\begin{table}[ht] \caption{Symmetric pairs and symmetric subgroups}\label{Ta:symmetricSubgroup} \centering \begin{tabular}{|c |c |c |c |c |} \hline Type & $(\mathfrak{u}_0,\mathfrak{k}_0)$ & rank & $\theta$ & symmetric subgroup $\operatorname{Aut}(\mathfrak{u}_0)^{\theta}$\\ [0.3ex] \hline {\bf AI} &$(\mathfrak{su}(n),\mathfrak{so}(n))$ & $n-1$&$\overline{X}$&$(\operatorname{O}(n)/\langle-I\rangle)\!\times\!\langle \theta\rangle$\\ \hline {\bf AII} &$(\mathfrak{su}(2n),\mathfrak{sp}(n))$&$n-1$&$J_{n}\overline{X}J_{n}^{-1}$&$(\operatorname{Sp}(n)/\langle-I\rangle) \!\times\!\langle\theta\rangle$\\ \hline {\bf AIII} $p\!<\!q$&$(\mathfrak{su}(p\!+\!q),\!\mathfrak{s}(\mathfrak{u}(p)\!+\!\mathfrak{u}(q)))$&$p$& $I_{p,q}X I_{p,q}$&$(S(\operatorname{U}(p)\!\times\operatorname{U}(q))/Z_{p+q})\!\rtimes\!\langle\tau\rangle$\\&&&& $\operatorname{Ad}(\tau)=\textrm{complex conjugation}$\\ \hline {\bf AIII} $p\!=\!q$&$(\mathfrak{su}\!(2p),\!\mathfrak{s}(\mathfrak{u}(p)\!+\!\mathfrak{u}(p)))$&$p$& $I_{p,p}X I_{p,p}$&$(S(\operatorname{U}(p)\times\operatorname{U}(p))/Z_{2p})\rtimes\langle\tau,J_{p}\rangle$\\&&&&$\operatorname{Ad}(J_{p})(X,Y)= (Y,X)$\\ \hline {\bf BDI} $p\!<\!q$&$(\mathfrak{so}(p\!+\!q),\mathfrak{so}(p)+\mathfrak{so}(q))$&$p$&$I_{p,q}X I_{p,q}$& $(\operatorname{O}(p)\times\operatorname{O}(q))/\langle(-I_{p},-I_{q})\rangle$\\ \hline {\bf DI} $p>4$&$(\mathfrak{so}(2p),\mathfrak{so}(p)+\mathfrak{so}(p))$&$p$& $I_{p,p}XI_{p,p}^{-1}$& $((\operatorname{O}(p)\!\times\operatorname{O}(p))/\langle(-I_{p},-I_{p})\rangle)\!\rtimes\!
\langle J_{p}\rangle$\\&&&& $\operatorname{Ad}(J_{p})(X,Y)=(Y,X)$\\ \hline {\bf DI} $p=4$&$(\mathfrak{so}(8),\mathfrak{so}(4)+\mathfrak{so}(4))$&$4$& $I_{4,4}X I_{4,4}$& $((\operatorname{Sp}(1)^{4})/Z')\rtimes S_4$ \\ &&&&$S_4$ acts by permutations\\ \hline {\bf DIII\footnotemark}&$(\mathfrak{so}(2n),\mathfrak{u}(n))$&$n$&$J_{n}X J_{n}^{-1}$&$(\operatorname{U}(n)/\{\pm{I}\}) \rtimes\langle I_{n,n}\rangle$\\&&&&$\operatorname{Ad}(I_{n,n})=\textrm{complex\ conjugation}$\\ \hline {\bf CI}&$(\mathfrak{sp}(n), \mathfrak{u}(n))$& $n$&$(\!\textbf{i}I\!)\!X\!(\!\textbf{i}I\!)^{\!-\!1}$& $(\operatorname{U}(n)/\{\pm{I}\})\rtimes\langle\textbf{j}I\rangle$\\&&&&$\operatorname{Ad}(\textbf{j}I)=\textrm{complex\ conjugation}$\\ \hline {\bf CII} $p\!<\!q$&$(\mathfrak{sp}(p\!+\!q),\mathfrak{sp}(p)\!+\!\mathfrak{sp}(q))$&$p$& $I_{p,q}X I_{p,q}$&$(\operatorname{Sp}(p)\times\operatorname{Sp}(q))/\langle(-I_{p},-I_{q})\rangle$\\ \hline {\bf CII} $p\!=\!q$&$(\mathfrak{sp}(2p),\mathfrak{sp}(p)\!+\!\mathfrak{sp}(p))$&$p$&$I_{p,p}X I_{p,p}$& $((\operatorname{Sp}(p)\!\times\operatorname{Sp}(p))/\langle(-I_{p},-I_{p})\rangle)\!\rtimes\! \langle J_{p}\rangle$\\&&&& $\operatorname{Ad}(J_{p})(X,Y)=(Y,X)$\\ \hline {\bf EI} &($\mathfrak{e}_{6}$, $\mathfrak{sp}(4)$)&6 &$\sigma_4$&$(\operatorname{Sp}(4)/\langle-1\rangle)\times\langle\theta \rangle$\\ \hline {\bf EII}&($\mathfrak{e}_{6}$,$\mathfrak{su}(6)\!+\!\mathfrak{sp}(1)$)&4&$\sigma_1$&$((\!\operatorname{SU}(6)\!\times\!
\operatorname{Sp}(1))/\langle(e^{\frac{2\pi i}{3}}\!I\!,\!1\!),\!(\!-\!I\!,\!-\!1\!)\rangle\!)\!\rtimes\!\langle\tau \rangle$\\&&&&$\mathfrak{k}_0^{\tau}=\mathfrak{sp}(3)\oplus\mathfrak{sp}(1)$\\ \hline {\bf EIII}&($\mathfrak{e}_{6}$, $\mathfrak{so}(10)+i\mathbb{R}$)&2&$\sigma_2$&$((\operatorname{Spin}(10)\times\operatorname{U}(1))/\langle(c,i)\rangle)\rtimes\langle\tau\rangle$\\&&&&$\mathfrak{k}_0^{\tau}=\mathfrak{so}(9)$\\ \hline {\bf EIV}&($\mathfrak{e}_{6}$, $\mathfrak{f}_4$)&$2$&$\sigma_3$&$\operatorname{F}_4\times\langle\theta\rangle$\\ \hline {\bf EV}&($\mathfrak{e}_{7}$, $\mathfrak{su}(8)$)&7&$\sigma_3$&$(\operatorname{SU}(8)/\langle iI\rangle)\rtimes\langle\omega \rangle$\\&&&&$\mathfrak{k}_0^{\omega}=\mathfrak{sp}(4)$\\ \hline {\bf EVI}&($\mathfrak{e}_{7}$,$\mathfrak{so}(12)+\mathfrak{sp}(1)$)&$4$&$\sigma_1$&$(\operatorname{Spin}(12)\!\times \!\operatorname{Sp}(1))/\langle(c,1),(-1,-1)\rangle$\\ \hline {\bf EVII}&($\mathfrak{e}_{7}$, $\mathfrak{e}_6+i\mathbb{R}$)&3&$\sigma_2$& $((\operatorname{E}_{6}\times\operatorname{U}(1))/\langle(c,e^{\frac{2\pi i}{3}}) \rangle)\rtimes\langle\omega\rangle$\\&&&&$\mathfrak{k}_0^{\omega}=\mathfrak{f}_4$\\ \hline {\bf EVIII}&($\mathfrak{e}_{8}$, $\mathfrak{so}(16)$)&$8$&$\sigma_2$&$\operatorname{Spin}(16)/\langle c\rangle$\\ \hline {\bf EIX}&($\mathfrak{e}_{8}$, $\mathfrak{e}_7+\mathfrak{sp}(1)$)&4&$\sigma_1$&$(\operatorname{E}_{7}\times\operatorname{Sp}(1))/\langle(c,-1) \rangle$\\ \hline {\bf FI}&($\mathfrak{f}_{4}$,$\mathfrak{sp}(3)+\mathfrak{sp}(1)$)&$4$&$\sigma_1$&$(\operatorname{Sp}(3)\times \operatorname{Sp}(1))/\langle(-I,-1)\rangle$\\ \hline {\bf FII}&($\mathfrak{f}_{4}$, $\mathfrak{so}(9)$)&$1$&$\sigma_2$&$\operatorname{Spin}(9)$\\ \hline {\bf G}&($\mathfrak{g}_{2}$, $\mathfrak{sp}(1)+\mathfrak{sp}(1)$)&2&$\sigma$&$(\operatorname{Sp}(1)\times \operatorname{Sp}(1))/\langle(-1,-1)\rangle$\\ \hline \end{tabular} \end{table} \smallskip \footnotetext{When $n=4$, DIII is identical to BDI when $p=2$ and $q=6$.}
\smallskip \subsection{Grassmannians}\label{SS:Grassmannian} \begin{prop}\label{P:Grassmannian-list} Let $M$ be an irreducible compact symmetric space which is isogenous to a (real, complex or quaternion) Grassmannian. Then $M$ is isomorphic to one of the following, \begin{enumerate} \item $M=G/G^{\theta}$, where $G=\operatorname{PSU}(p+q)$, $\operatorname{PSO}(p+q)$ or $\operatorname{PSp}(p+q)$ ($q\geq p\geq 1$), and $\theta=\operatorname{Ad}(I_{p,q})$. \item $M=G/G^{\theta}$, where $G=\operatorname{SU}(2p)$ ($p\geq 1$), and $\theta=\operatorname{Ad}(I_{p,p})$. \item $M=G/G^{\theta}$, where $G=\operatorname{Sp}(2p)$ ($p\geq 1$), and $\theta=\operatorname{Ad}(I_{p,p})$. \item $M=G/G^{\theta}$, where $G=\operatorname{SO}(2p)$ ($p\geq 4$), and $\theta=\operatorname{Ad}(I_{p,p})$. \item $M=G/G^{\theta}$, where $G=\operatorname{Spin}(p+q)$ ($q\geq p\geq 1$), and $\theta=\operatorname{Ad}(e_{1}\dots e_{p})$. \item $M=G/G^{\theta}$, where $G=\operatorname{Spin}(4m)/\langle c\rangle$ ($m\geq 2$), and $\theta=\operatorname{Ad}(e_{1}\dots e_{2m})$. \end{enumerate} \end{prop} \begin{proof} By assumption we have $M=G/H$, where $G$ is isogenous to one of $\operatorname{SU}(p+q)$, $\operatorname{Sp}(p+q)$, $\operatorname{SO}(p+q)$, $\theta= \operatorname{Ad}(I_{p,q})$, and $(G^{\theta})^{0}\subset H\subset G^{\theta}$. When $G$ is isogenous to $\operatorname{SU}(p+q)$, $G^{\theta}$ is not connected only when $p=q$. When $p\neq q$, $M$ is in item 1. When $p=q$, $M$ is in item 1 or item 2. When $G$ is isogenous to $\operatorname{Sp}(p+q)$, $G^{\theta}$ is not connected only when $p=q$. When $p\neq q$, $M$ is in item 1. When $p=q$, $M$ is in item 1 or item 3. When $G$ is isogenous to $\operatorname{SO}(p+q)$, we list all possible pairs $(G,\theta)$ and give the formulas for $G^{\theta}$ in Table 2. By the formulas for $G^{\theta}$, we see that there are two non-isomorphic $M$ for each pair $(p,q)$ when $p\neq q$. In this case $M$ is in item 1 or item 5.
There are three non-isomorphic $M$ for each pair $(p,q)$ when $p=q\geq 3$ and $p$ is odd. In this case $M$ is in item 1, item 4, or item 5. There are four non-isomorphic $M$ for each pair $(p,q)$ with $p=q\geq 4$ and $p$ is even. In this case $M$ is in item 1, item 4, item 5, or item 6. \end{proof} \begin{table} \caption{Formulas for $G^{\theta}$ in types B and D}\label{Ta:BDI} \centering \begin{tabular}{|c|c|c|} \hline $G$ & $\theta=\operatorname{Ad}(I_{p,q})$, $p\neq q$ & $\theta=\operatorname{Ad}(I_{p,p})$ \\ \hline $\operatorname{Spin}(n)$ & $\operatorname{Spin}(p)\cdot\operatorname{Spin}(q)$ & $\operatorname{Spin}(p)\cdot\operatorname{Spin}(p)$ \\ \hline $\operatorname{SO}(n)$ & $S(\operatorname{O}(p)\times\operatorname{O}(q))$ & $S(\operatorname{O}(p)\times\operatorname{O}(p))$ \\ \hline $\operatorname{SO}(2n)/\langle-I\rangle$ & $S(\operatorname{O}(p)\!\times\!\operatorname{O}(q))/\langle-I\rangle$ & $S(\!\operatorname{O}(p)\times\!\operatorname{O}(p)) \!\rtimes\!\langle\!J_{p}\rangle/\langle-I\rangle$ \\ \hline $\operatorname{Spin}(4n)/\langle c\rangle$\footnotemark & $(\operatorname{Spin}(p)\!\cdot\!\operatorname{Spin}(q))/\langle c\rangle$, $p$ even & $(\operatorname{Spin}(p)\!\cdot\!\operatorname{Spin}(p))\!\rtimes\!\langle\!L_{p}\!\rangle/\langle c\rangle$\\ \hline \end{tabular} \end{table} \footnotetext{In Table 2, $$c=e_1e_2\cdots e_{4n}\in Z(\operatorname{Spin}(4n)),$$ $$J_{p}=\left(\begin{array}{cc}0_{p}& I_{p}\\-I_{p}&0_{p}\\\end{array}\right),$$ $$L_{p}=\frac{1+e_{1}e_{p+1}}{\sqrt{2}}\cdots \frac{1+e_{p}e_{2p}}{\sqrt{2}}.$$} \subsection{Types AI and AII}\label{SS:A1-2} Write $$G_{n,m}=\operatorname{SU}(n)/\langle\omega_{m}I\rangle$$ for integers $m|n$, where $\omega_{m}=e^{\frac{2\pi i}{m}}$. Let $\tau$ be the complex conjugation on $\operatorname{SU}(n)$ (and on $G_{n,m}$). Write $\tau'=\tau\circ\operatorname{Ad}(J_{n/2})$ in case $n$ is even. \begin{lemma}\label{L:A1-2} Let $G=G_{n,m}$ and $\theta=\tau$ or $\tau'$ (in case $n$ is even).
Then, $\pi_{0}(G^{\theta})\cong 1$ or $\mu_{2}$, and $\pi_{0}(G^{\theta})=1$ if and only if $m$ is odd. When $2m|n$, $$G_{n,2m}/(G_{n,2m}^{\theta})^{0}\cong G_{n,m}/G_{n,m}^{\theta}.$$ \end{lemma} \begin{proof} We show it for $\theta=\tau$. The proof for the case $\theta=\tau\circ\operatorname{Ad}(J_{n/2})$ is similar. For $g=[X] \in G_{n,m}$ ($X\in\operatorname{SU}(n)$), $\tau(g)=g$ if and only if $$\overline{X}=\omega_{m}^{k}X$$ for some $0\leq k \leq m-1$. If $m$ is odd, write $X'=\omega_{m}^{t}X$ with $m|2t-k$. Then, $X'\in\operatorname{SU}(n)$ and $\overline{X'}=X'$. Thus, $X'\in\operatorname{SO}(n)$. Hence, $$G^{\theta}=\operatorname{SO}(n)$$ and $\pi_{0}(G^{\theta})\cong 1$. If $m$ and $n/m$ are both even, write $X'=\omega_{2m}^{k}X$. Then, $X'\in\operatorname{SU}(n)$ and $\overline{X'}=X'$. Thus, $X'\in\operatorname{SO}(n)$. Hence, $$G^{\theta}=(\operatorname{SO}(n)\cdot\langle\omega_{2m}I\rangle)/\langle\omega_{m}I\rangle$$ and $\pi_{0}(G^{\theta})\cong\mu_{2}$. If $m$ is even and $n/m$ is odd, write $X'=\omega_{2m}^{k}X$. Then $\operatorname{det} X'=(-1)^{k}$ and $\overline{X'}=X'$. Thus, $X'\in\operatorname{O}(n)$ with $\operatorname{det} X'=(-1)^{k}$. Hence, $$G^{\theta}=H'/\langle\omega_{m}I\rangle,$$ where $$H'= \{\omega_{2m}^{-k}X': X'\in\operatorname{O}(n), k\in\mathbb{Z},\operatorname{det} X'=(-1)^{k}\}.$$ By this, $\pi_{0}(G^{\theta})\cong\mu_{2}$. These computations show the first statement of the lemma. From the above description of $(G_{n,m})^{\theta}$, it is clear that $$G_{n,2m}/(G_{n,2m}^{\theta})^{0} \cong G_{n,m}/G_{n,m}^{\theta}$$ in case $2m|n$. This is the second statement of the lemma. \end{proof} The following proposition follows from Lemma \ref{L:A1-2} directly. \begin{prop}\label{P:A1-2-list} Let $M$ be an irreducible compact symmetric space which is of type AI or AII in Cartan's notation. Then $M$ is isomorphic to one of the following, \begin{enumerate} \item $M=G_{n,m}/G_{n,m}^{\tau}$, where $m|n$.
\item $M=G_{n,m}/G_{n,m}^{\tau'}$, where $m|n$ and $n$ is even. \end{enumerate} \end{prop} \begin{prop}\label{P:A1-2-subgroup} Let $G=G_{n,m}$. \begin{enumerate} \item If $m$ is odd, then $G^{\tau}\cong\operatorname{SO}(n)$ and $G^{\tau'}\cong\operatorname{Sp}(n/2)$ (in case $n$ is even). \item If $m$ and $\frac{n}{m}$ are both even, then $G^{\tau}\cong\operatorname{PSO}(n)\times\mathbb{Z}/2\mathbb{Z}$ and $G^{\tau'}\cong\operatorname{PSp}(n/2)\times\mathbb{Z}/2\mathbb{Z}$. \item If $m$ is even and $\frac{n}{m}$ is odd, then $G^{\tau}\cong\operatorname{PO}(n)$ and $G^{\tau'}\cong\operatorname{PSp}(n/2)$. \end{enumerate} \end{prop} \begin{proof} For $\theta=\tau$, this follows from the calculation of $G^{\tau}$ given in the proof of Lemma \ref{L:A1-2}. For $\theta=\tau'$, the calculation is similar. \end{proof} \subsection{Types CI and DIII}\label{C1-D3} \begin{prop}\label{P:D1-D3-list} Let $M$ be an irreducible compact symmetric space which is of type CI or DIII in Cartan's notation. Then $M=G/G^{\theta}$ for $(G,\theta)$ in the following list, \begin{enumerate} \item $G=\operatorname{PSp}(n)$ ($n\geq 1$), $\theta=\operatorname{Ad}(\mathbf{i}I)$. \item $G=\operatorname{Sp}(n)$ ($n\geq 1$), $\theta=\operatorname{Ad}(\mathbf{i}I)$. \item $G=\operatorname{PSO}(2n)$ ($n\geq 3$), $\theta=\operatorname{Ad}(J_{n})$. \item $G=\operatorname{SO}(2n)$ ($n\geq 3$), $\theta=\operatorname{Ad}(J_{n})$. \end{enumerate} \end{prop} \begin{proof} By assumption $M=G/H$, where $G$ is an isogenous form of $\operatorname{Sp}(n)$ (or $\operatorname{SO}(2n)$), $\theta=\operatorname{Ad}(\mathbf{i}I)$ (or $\theta=\operatorname{Ad}(J_{n})$), and $(G^{\theta})^{0}\subset H\subset G^{\theta}$. In the $\operatorname{Sp}(n)$ case, $G=\operatorname{Sp}(n)$ or $\operatorname{PSp}(n)$.
We have $\operatorname{Sp}(n)^{\theta}=\operatorname{U}(n)$ and $\operatorname{PSp}(n)^{\theta}=(\operatorname{U}(n)\cdot \langle\mathbf{j}I\rangle)/\langle-I\rangle.$ In the latter case, $\pi_{0}(G^{\theta})\cong\mathbb{Z}/2\mathbb{Z}$, and $$G/(G^{\theta})^{0}\cong\operatorname{Sp}(n)/\operatorname{U}(n).$$ Hence, $M\cong G/G^{\theta}$ for a pair $(G,\theta)$ in item 1 or item 2. In the $\operatorname{SO}(2n)$ ($n\geq 3$) case, $G=\operatorname{Spin}(2n)$, $\operatorname{Spin}(2n)/\langle c\rangle$ ($n$ even), $\operatorname{SO}(2n)$, or $G=\operatorname{PSO}(2n)$. When $G=\operatorname{Spin}(2n)$ or $\operatorname{SO}(2n)$, $G^{\theta}$ is connected and the corresponding symmetric spaces are isomorphic to $\operatorname{SO}(2n)/\operatorname{U}(n)$. When $G=\operatorname{Spin}(2n)/\langle c\rangle$ ($n$ even) or $\operatorname{PSO}(2n)$ ($n$ even), $\pi_{0}(G^{\theta})\cong \mathbb{Z}/2\mathbb{Z}$. In this case, $$G/(G^{\theta})^{0}\cong\operatorname{SO}(2n)/\operatorname{U}(n)$$ and $$G/G^{\theta}\cong \operatorname{PSO}(2n)/\operatorname{PSO}(2n)^{\theta}.$$ When $G=\operatorname{PSO}(2n)$ ($n$ odd), $G^{\theta}$ is connected. In this case, $$G/G^{\theta} \cong\operatorname{SO}(2n)/\operatorname{U}(n).$$ Hence, $M\cong G/G^{\theta}$ for a pair $(G,\theta)$ in item 3 or item 4. \end{proof} \subsection{Irreducible symmetric spaces from exceptional Lie groups}\label{SS:exceptional} We say an irreducible compact symmetric space $M$ is of {\it exceptional type} if the identity component of its isometry group is an exceptional compact simple Lie group. \begin{prop}\label{P:exceptional-list} Let $M$ be an irreducible compact symmetric space of exceptional type. Then $M=G/G^{\theta}$ for $(G,\theta)$ in the following list, \begin{enumerate} \item $G$ is of adjoint type. \item $G=\operatorname{E}_{6}^{sc}$, $\theta$ an outer involutive automorphism. \item $G=\operatorname{E}_{7}^{sc}$, $\theta\sim\sigma_2$ or $\theta\sim\sigma_3$ as in Table 1.
\end{enumerate} \end{prop} \begin{proof} By assumption $M=G/H$, where $G$ is a connected exceptional compact simple Lie group, $\theta$ is an involutive automorphism of $G$, and $(G^{\theta})^{0}\subset H\subset G^{\theta}$. When $G$ is of type $\operatorname{E}_8$, $\operatorname{F}_{4}$ or $\operatorname{G}_2$, it is both simply connected and of adjoint type. In this case $G^{\theta}$ is connected, and $M$ falls into item 1. When $G$ is of type $\operatorname{E}_6$, $G=\operatorname{E}_{6}^{sc}$ or $\operatorname{E}_{6}^{ad}$. From Table 1, we see that $G^{\theta}$ is connected. In case $\theta$ is an inner involutive automorphism, we have $\operatorname{E}_{6}^{sc}/(\operatorname{E}_{6}^{sc})^{\theta}\cong \operatorname{E}_{6}^{ad}/(\operatorname{E}_{6}^{ad})^{\theta}$. Thus, $M$ falls into item 1. In case $\theta$ is an outer involutive automorphism, $M$ falls into item 1 or item 2. When $G$ is of type $\operatorname{E}_7$, $G=\operatorname{E}_{7}^{sc}$ or $\operatorname{E}_{7}^{ad}$. In case $\theta$ is conjugate to $\sigma_1$, $G^{\theta}$ is connected, and we have $\operatorname{E}_{7}^{sc}/(\operatorname{E}_{7}^{sc})^{\theta}\cong\operatorname{E}_{7}^{ad}/(\operatorname{E}_{7}^{ad})^{\theta}$. In case $\theta$ is conjugate to $\sigma_2$ or $\sigma_3$, $(\operatorname{E}_{7}^{sc})^{\theta}$ is connected and $\pi_{0}((\operatorname{E}_{7}^{ad})^{\theta})\cong\mathbb{Z}/2\mathbb{Z}$. We have $\operatorname{E}_{7}^{sc}/(\operatorname{E}_{7}^{sc})^{\theta}\cong \operatorname{E}_{7}^{ad}/((\operatorname{E}_{7}^{ad})^{\theta})^{0}$. Thus, $M$ falls into item 1 or item 3. \end{proof} \section{Explicit classification of maximal antipodal sets}\label{S:classification} In this section we classify maximal antipodal sets in irreducible compact symmetric spaces $M$. We treat almost all irreducible compact symmetric spaces. The untreated cases are summarized in the last subsection.
Recall that in case $M=\operatorname{Int}(\mathfrak{u}_0)/\operatorname{Int}(\mathfrak{u}_0)^{\theta}$ for an involution $\theta\in\operatorname{Aut}(\mathfrak{u}_0)$, this is treated in Remark \ref{R:X-F}. \subsection{Irreducible compact symmetric spaces of group form}\label{SS:group} Let $M=G$ be an irreducible compact symmetric space, which is also a compact Lie group. Then, either $G\cong\operatorname{U}(1)$, or $G$ is a compact simple Lie group. In this case, the geodesic symmetry is given by $s_{x}(y)=xy^{-1}x$ ($\forall x,y\in G$). Let $X$ be a subset of $M$ containing the origin (the identity) $e$. It is clear that $X$ is an antipodal set in $M$ if and only if it is an elementary abelian $2$-subgroup of $G$, and $X$ is a maximal antipodal set if and only if it is a maximal elementary abelian $2$-subgroup of $G$. Let $X$ be a maximal elementary abelian $2$-subgroup of $G$. When $G=\operatorname{U}(1)$, then $X=\{\pm{1}\}$. When $G$ is of adjoint type, maximal elementary abelian $2$-subgroups of $G$ are classified in \cite{Griess} and \cite{Yu}. The other connected compact simple Lie groups fall into the following list, \begin{enumerate} \item $\operatorname{SU}(n)/\langle e^{\frac{2\pi i}{m}}I\rangle$ ($m|n$, $m\neq n$). \item $\operatorname{Sp}(n)$ ($n\geq 2$). \item $\operatorname{Spin}(n)$ ($n\geq 7$). \item $\operatorname{SO}(n)$ ($n\geq 8$, even). \item $\operatorname{Spin}(4m)/\langle c\rangle$ ($m\geq 2$). \item $\operatorname{E}_{6}^{sc}$. \item $\operatorname{E}_{7}^{sc}$. \end{enumerate} In item 1, for $G=\operatorname{SU}(n)/\langle e^{\frac{2\pi i}{m}}I\rangle$, when $m$ is odd, any maximal elementary abelian 2-subgroup is conjugate to the subgroup consisting of diagonal matrices with entries $\pm{1}$; when $m$ is even, the map $X\mapsto p(X)$ with $p$ the projection $G\rightarrow\operatorname{PSU}(n)$ gives a bijection between conjugacy classes of maximal elementary abelian 2-subgroups in $G$ and those in $\operatorname{PSU}(n)$ (by \cite[Proposition 2.4]{Yu}).
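For item 1 with $m$ odd, the diagonal model can be checked by brute force for small $n$. The following Python sketch (illustrative only, not part of the argument) encodes diagonal matrices with entries $\pm 1$ and determinant $1$ in $\operatorname{SU}(n)$ as sign tuples, and verifies that they form an elementary abelian $2$-group of order $2^{n-1}$.

```python
from itertools import product

def diag_two_subgroup(n):
    """Sign tuples modeling diagonal matrices in SU(n) with entries +-1
    and determinant 1 (the diagonal elementary abelian 2-subgroup above)."""
    signs = []
    for s in product((1, -1), repeat=n):
        det = 1
        for x in s:
            det *= x
        if det == 1:
            signs.append(s)
    return signs

n = 5
D = set(diag_two_subgroup(n))
assert len(D) == 2 ** (n - 1)  # order 2^{n-1}
# Closed under entrywise product and every element squares to the identity,
# so D is an elementary abelian 2-group.
for s in D:
    assert tuple(a * a for a in s) == (1,) * n
    for t in D:
        assert tuple(a * b for a, b in zip(s, t)) in D
```

This only checks the stated order and group structure; maximality is part of the classification cited above.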
In item 2 or item 4, there is a unique conjugacy class of maximal elementary abelian 2-subgroups, i.e., those conjugate to the subgroup consisting of diagonal matrices with entries $\pm{1}$. In item 6, since $Z(\operatorname{E}_{6}^{sc})\cong\mathbb{Z}/3\mathbb{Z}$ has odd order, the map $X\mapsto p(X)$ with $p$ the projection $\operatorname{E}_{6}^{sc}\rightarrow\operatorname{E}_{6}^{ad}$ gives a bijection between conjugacy classes of maximal elementary abelian 2-subgroups in $\operatorname{E}_{6}^{sc}$ and those in $\operatorname{E}_{6}^{ad}$. There are two conjugacy classes, corresponding to the subgroups $F'_{2,3}$ and $F'_{0,1,0,2}$ (cf. \cite[Pages 272-273]{Yu}). In item 7, $X\sim p^{-1}(X')$ with $p$ the projection $\operatorname{E}_{7}^{sc}\rightarrow\operatorname{E}_{7}^{ad}$, and $X'=F'''_{0,3}$ (rank 6) or $F''_{2}$ (rank 5) in \cite[Page 284]{Yu}. We do not yet know a complete classification of maximal elementary abelian 2-subgroups for groups in {\it item 3 and item 5}, i.e., for $\operatorname{Spin}(n)$ ($n\geq 7$) and $\operatorname{Spin}(4m)/\langle c\rangle$ ($m\geq 2$). \subsection{Grassmannians}\label{SS:Grassmannian-classification} In this subsection we classify maximal antipodal sets in an irreducible compact symmetric space which is isogenous to a Grassmannian. As stated in Proposition \ref{P:Grassmannian-list}, there are six cases to consider. Item 1 is the adjoint type case, which is treated in Remark \ref{R:X-F}. In {\it item 5 and item 6}, we do not have a full classification yet. Below we treat items 2--4 separately using Theorem \ref{T:X-F}. \begin{example}\label{E:Grassmannian-complex} Let $M=\operatorname{SU}(2p)/S(\operatorname{U}(p)\times\operatorname{U}(p))$, and $X\subset M$ be a maximal antipodal set containing the origin $o$. Write $G= \operatorname{SU}(2p)$. Define $\theta\in\operatorname{Aut}(G)$ by $$\theta(g)=I_{p,p}gI_{p,p}^{-1},\ \forall g\in G.$$ Then, $M=G/G^{\theta}$.
Set $$\overline{G}=G\rtimes\langle\overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta$. As in Subsection \ref{SS:antipodal2}, we have a map $\phi: M\rightarrow G$, a conjugacy class $C_{\overline{\theta}}\subset\overline{G}$, a subset $\phi(X)$ of $G$, a subgroup $F_{1}(X)$ of $G$, and a subgroup $F_{2}(X)$ of $\overline{G}$. By Theorem \ref{T:X-F}, $F_{2}(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and $$X=\{x\in M: \phi(x)\in F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.$$ Since $o=G^{\theta} \in X$, we have $\overline{\theta}\in F_{2}(X)$. Thus, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}|= |F_{2}(X)\cap C_{\overline{\theta}}|.$$ When $p$ is odd, we may identify $\overline{G}$ with $\operatorname{SU}^{\pm{}}(2p)$ and identify $\overline{\theta}$ with $I_{p,p}$. Then, $F_{2}(X)$ is diagonalizable. By its maximality, $F_{2}(X)$ is conjugate to the subgroup of $\overline{G}=\operatorname{SU}^{\pm{}}(2p)$ consisting of diagonal matrices with entries $\pm{1}$. Then, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}|=\binom{2p}{p}.$$ When $p$ is even, we may identify $\theta$ with $I_{p,p}$. Then, $$\overline{G}=G\times\langle\overline{\theta} \theta^{-1}\rangle,$$ $F_{2}(X)=F_{1}(X)\times\langle\overline{\theta}\rangle$, and $F_{1}(X)$ is diagonalizable. By the maximality of $F_{2}(X)$, $F_{1}(X)$ is conjugate to the subgroup of $G=\operatorname{SU}(2p)$ consisting of diagonal matrices with entries $\pm{1}$. Then, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}|=|F_{1}(X)\cap C_{\theta}|= \binom{2p}{p}.$$ \end{example} \begin{example}\label{E:Grassmannian-quaternion} Let $M=\operatorname{Sp}(2p)/(\operatorname{Sp}(p)\times\operatorname{Sp}(p))$, and $X\subset M$ be a maximal antipodal set containing the origin $o$.
The classification can be done in the same way as in Example \ref{E:Grassmannian-complex}. There is a unique maximal antipodal set $X$ in $M$ up to translation, and $|X|=\binom{2p}{p}$. \end{example} \begin{example} Let $M=\operatorname{SO}(2p)/S(\operatorname{O}(p)\times\operatorname{O}(p))$ ($p\geq 3$), and $X\subset M$ be a maximal antipodal set containing the origin $o$. The classification can be done in the same way as in Example \ref{E:Grassmannian-complex}. There is a unique maximal antipodal set $X$ in $M$ up to translation, and $|X|=\binom{2p}{p}$. \end{example} \subsection{Types AI and AII}\label{SS:A1-2-classification} By Proposition \ref{P:A1-2-list}, any irreducible compact symmetric space which is of type AI or AII in Cartan's notation is isomorphic to one of the following, \begin{enumerate} \item $M=G_{n,m}/G_{n,m}^{\tau}$, where $m|n$. \item $M=G_{n,m}/G_{n,m}^{\tau'}$, where $m|n$ and $n$ is even. \end{enumerate} When $m=n$, $M=G/G^{\theta}$ with $G=\operatorname{PU}(n)$ of adjoint type, $\theta=\tau$ or $\tau'$. This case is treated in Remark \ref{R:X-F}. According to \cite[Propositions 2.12 and 2.16]{Yu}, there are several conjugacy classes of maximal elementary abelian 2-subgroups in $\operatorname{PO}(n)$ (or $\operatorname{PSp}(n/2)$); the number of conjugacy classes is equal to $1+k$, where $k$ is the 2-power index of $n$ (or $\frac{n}{2}$). When $m=1$, $M\cong\operatorname{SU}(n)/\operatorname{SO}(n)$ or $\operatorname{SU}(n)/\operatorname{Sp}(n/2)$; we treat this case below. \begin{example}\label{E:A1-classification} Let $M=\operatorname{SU}(n)/\operatorname{SO}(n)$, and $X\subset M$ be a maximal antipodal set containing the origin $o$. Write $G= \operatorname{SU}(n)$ and $\theta=\tau\in\operatorname{Aut}(G)$. Then, $M=G/G^{\theta}$. Set $$\overline{G}=G\rtimes\langle \overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta$.
As in Subsection \ref{SS:antipodal2}, we have a map $\phi: M\rightarrow G$, a conjugacy class $C_{\overline{\theta}}\subset\overline{G}$, a subset $\phi(X)$ of $G$, a subgroup $F_{1}(X)$ of $G$, and a subgroup $F_{2}(X)$ of $\overline{G}$. By Theorem \ref{T:X-F}, $F_{2}(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and $$X=\{x\in M: \phi(x)\in F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.$$ Since $o=G^{\theta} \in X$, we have $\overline{\theta}\in F_{2}(X)$. Thus, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}|= |F_{2}(X)\cap C_{\overline{\theta}}|.$$ As we assume $o\in X$, we have $\theta=\tau\in F_{2}(X)$. Thus, $F_{1}(X)\subset G^{\tau}=\operatorname{SO}(n)$. Then, $F_{1}(X)$ is diagonalizable. By the maximality of $F_{2}(X)$, $F_{1}(X)$ is conjugate to the subgroup of $\operatorname{SU}(n)$ consisting of diagonal matrices with entries $\pm{1}$. Then, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}|=|F_{1}(X)|=2^{n-1}.$$ \end{example} \begin{example}\label{E:A2-classification} Let $M=\operatorname{SU}(n)/\operatorname{Sp}(\frac{n}{2})$ ($n\geq 4$, even), and $X\subset M$ be a maximal antipodal set containing the origin $o$. The discussion follows the same lines as in Example \ref{E:A1-classification}; one just replaces $\tau$ by $\tau'$, and $\operatorname{SO}(n)=\operatorname{SU}(n)^{\tau}$ by $\operatorname{Sp}(\frac{n}{2})=\operatorname{SU}(n)^{\tau'}$. There is a unique maximal antipodal set $X$ in $M$ up to translation, and $|X|=2^{\frac{n}{2}}$. \end{example} In general, when $m$ is odd, we have $G^{\tau}\cong\operatorname{SO}(n)$ and $G^{\tau'}\cong\operatorname{Sp}(n/2)$ (Proposition \ref{P:A1-2-subgroup}). Then, the classification is the same as in the case of $m=1$.
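As an illustrative cross-check (not part of the argument), the cardinality $2^{n-1}$ here and the cardinality $\binom{2p}{p}$ in the Grassmannian examples can both be recovered by enumerating diagonal sign matrices in Python: for the complex Grassmannian only the sign matrices conjugate to $I_{p,p}$ (exactly $p$ entries equal to $-1$) contribute, while for $\operatorname{SU}(n)/\operatorname{SO}(n)$ all of $F_{1}(X)$ does.

```python
from itertools import product
from math import comb

def count_grassmannian(p):
    """|X| for SU(2p)/S(U(p) x U(p)): diagonal +-1 matrices with exactly
    p entries equal to -1, i.e. those conjugate to I_{p,p}."""
    return sum(1 for s in product((1, -1), repeat=2 * p) if s.count(-1) == p)

def count_AI(n):
    """|X| for SU(n)/SO(n): all diagonal +-1 matrices of determinant 1."""
    count = 0
    for s in product((1, -1), repeat=n):
        det = 1
        for x in s:
            det *= x
        count += det == 1
    return count

for p in (1, 2, 3):
    assert count_grassmannian(p) == comb(2 * p, p)
for n in (2, 3, 4, 5, 6):
    assert count_AI(n) == 2 ** (n - 1)
```

The two counting rules reflect the two formulas $|F_{2}(X)\cap C_{\overline{\theta}}|$ computed in the respective examples.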
The group $F_{1}(X)$ is conjugate to the subgroup of $G^{\theta}$ consisting of diagonal matrices, and $$|X|=|F_{2}(X) \cap C_{\overline{\theta}}|=|F_{1}(X)|.$$ It is equal to $2^{n-1}$ for $\theta=\tau$, and is equal to $2^{\frac{n}{2}}$ for $\theta=\tau'$. When $m$ is even and $n/m$ is odd, we have $G^{\tau}\cong\operatorname{PO}(n)$ and $G^{\tau'}\cong\operatorname{PSp}(n/2)$ (Proposition \ref{P:A1-2-subgroup}). Then, the classification is the same as in the case of $m=n$. When $m$ and $n/m$ are both even, we have $G^{\tau}\cong\operatorname{PSO}(n)\times\mathbb{Z}/2\mathbb{Z}$ and $G^{\tau'}\cong \operatorname{PSp}(n/2)\times\mathbb{Z}/2\mathbb{Z}$ (Proposition \ref{P:A1-2-subgroup}). Using the classification of elementary abelian 2-subgroups of $\operatorname{PSO}(n)$ and of $\operatorname{PSp}(\frac{n}{2})$ given in \cite[Propositions 2.12 and 2.16]{Yu}, we can classify maximal antipodal sets. This is similar to the case of $m=n$. \subsection{Types CI and DIII}\label{SS:C1-D3-classification} Let $M$ be a compact symmetric space of type CI or DIII. By Proposition \ref{P:D1-D3-list}, there are three cases to consider, \begin{enumerate} \item $M\cong G/G^{\theta}$ with $G$ of adjoint type, the pair $(G,\theta)=(\operatorname{PSp}(n),\operatorname{Ad}(\mathbf{i}I))$ or $(\operatorname{PSO}(2n),\operatorname{Ad}(J_{n}))$. \item $M\cong\operatorname{SO}(2n)/\operatorname{U}(n)$. \item $M\cong\operatorname{Sp}(n)/\operatorname{U}(n)$. \end{enumerate} Item 1 is treated in Remark \ref{R:X-F}. We treat item 2 and item 3 below. \begin{example}\label{E:D3-classification} Let $M=\operatorname{SO}(2n)/\operatorname{U}(n)$, and $X\subset M$ be a maximal antipodal set containing the origin $o$. Write $G= \operatorname{SO}(2n)$ and $\theta=\operatorname{Ad}(J_{n})\in\operatorname{Aut}(G)$. Then, $M=G/G^{\theta}$. Set $$\overline{G}=G\rtimes\langle \overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta$.
As in Subsection \ref{SS:antipodal2}, we have a map $\phi: M\rightarrow G$, a conjugacy class $C_{\overline{\theta}}\subset\overline{G}$, a subset $\phi(X)$ of $G$, a subgroup $F_{1}(X)$ of $G$, and a subgroup $F_{2}(X)$ of $\overline{G}$. By Theorem \ref{T:X-F}, $F_{2}(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and $$X=\{x\in M: \phi(x)\in F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.$$ Since $o=G^{\theta}\in X$, we have $\overline{\theta}\in F_{2}(X)$. Thus, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}\overline{\theta}^{-1}|= |F_{2}(X)\cap C_{\overline{\theta}}|.$$ As $\overline{\theta}\in F_{2}(X)$, we have $F_{1}(X)\subset G^{\theta}=\operatorname{U}(n)$. Then, $F_{1}(X)$ is diagonalizable. By the maximality of $F_{2}(X)$, $F_{1}(X)$ is conjugate to the subgroup of $\operatorname{U}(n)$ consisting of diagonal matrices with entries $\pm{1}$ and with determinant $1$ (this condition is forced by the fact that $F_{2}(X)$ is generated by elements in $C_{\overline{\theta}}$). Then, $$|X|=|F_{2}(X)\cap C_{\overline{\theta}}|=|F_{1}(X)|=2^{n-1}.$$ \end{example} \begin{example}\label{E:C1-classification} Let $M=\operatorname{Sp}(n)/\operatorname{U}(n)$, and $X\subset M$ be a maximal antipodal set containing the origin $o$. The discussion runs along the same lines as in Example \ref{E:D3-classification}. There is a unique maximal antipodal set $X$ in $M$ up to translation, and $|X|=2^{n}$. \end{example} \subsection{Exceptional type}\label{SS:exceptional-classification} Let $M$ be an irreducible compact symmetric space of exceptional type. By Proposition \ref{P:exceptional-list}, $M=G/G^{\theta}$ for $(G,\theta)$ in the following list, \begin{enumerate} \item $G$ is of adjoint type. \item $G=\operatorname{E}_{6}^{sc}$, $\theta$ an outer involutive automorphism. \item $G=\operatorname{E}_{7}^{sc}$, $\theta\sim\sigma_2$ or $\theta\sim\sigma_3$ as in Table 1.
\end{enumerate} Symmetric spaces in item 1 are treated in Remark \ref{R:X-F}. We discuss symmetric spaces in item 2 and item 3 below. \begin{example} Let $M=\operatorname{E}_{6}^{sc}/(\operatorname{E}_{6}^{sc})^{\theta}$ for $\theta$ an outer involution, and $X\subset M$ be a maximal antipodal set containing the origin $o$. Write $G=\operatorname{E}_{6}^{sc}$. Set $$\overline{G}=G\rtimes\langle \overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta$. As in Subsection \ref{SS:antipodal2}, we have subgroups $F_{1}(X)$ and $F_{2}(X)$ of $\overline{G}$. By Theorem \ref{T:X-F}, $F_{2}(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and $$X=\{x\in M: \phi(x)\in F_{2}(X)\cap C_{\overline{\theta}} \overline{\theta}^{-1}\}.$$ As we assume $o\in X$, we have $\overline{\theta}\in F_{2}(X)$. Thus, $F_{1}(X)\subset G^{\theta}$. When $\theta\sim\sigma_{3}$, $G^{\theta}\cong\operatorname{F}_4$. In this case, $F_{1}(X)$ is conjugate to the subgroup $F_{2,0}$ (cf. \cite[Page 269]{Yu}) of $\operatorname{F}_4$, and $$|X|=4.$$ When $\theta\sim\sigma_{4}$, $G^{\theta}\cong\operatorname{PSp}(4)$. In this case, $F_{1}(X)$ is a maximal elementary abelian 2-subgroup of $\operatorname{PSp}(4)$, and as in \cite[Definition 2.15]{Yu}, invariants $(\epsilon,\delta,r,s)$ are associated to $F_{1}(X)$. By \cite[Proposition 2.16]{Yu}, we have $(\epsilon,\delta)=(0,1)$ and $(r,s)=(4,0)$ or $(1,2)$. When $(r,s)=(4,0)$, $|X|=28$; when $(r,s)=(1,2)$, $|X|=2^{6}=64.$ \end{example} \begin{example} Let $M=\operatorname{E}_{7}^{sc}/(\operatorname{E}_{7}^{sc})^{\theta}$ for $\theta\sim\sigma_2$ or $\theta\sim\sigma_3$, and $X\subset M$ be a maximal antipodal set containing the origin $o$. Write $G=\operatorname{E}_{7}^{sc}$.
Set $$\overline{G}=G\rtimes\langle \overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta$. As in Subsection \ref{SS:antipodal2}, we have subgroups $F_{1}(X)$ and $F_{2}(X)$ of $\overline{G}$. By Theorem \ref{T:X-F}, $F_{2}(X)$ is a maximal element in the set of elementary abelian 2-subgroups of $\overline{G}$ which are generated by elements in $C_{\overline{\theta}}$, and $$X=\{x\in M: \phi(x)\in F_{2}(X) \cap C_{\overline{\theta}}\overline{\theta}^{-1}\}.$$ As we assume $o\in X$, we have $\overline{\theta}\in F_{2}(X)$. Thus, $F_{1}(X)\subset G^{\theta}$. When $\theta\sim \sigma_{2}$, $$G^{\theta}\cong(\operatorname{E}_{6}\times\operatorname{U}(1))/\langle(c,e^{\frac{2\pi i}{3}})\rangle.$$ In this case, $F_{1}(X)$ is of the form $F_{1}(X)=J\times\langle(1,-1)\rangle$, where $J$ is an elementary abelian 2-subgroup of $\operatorname{E}_6$ which projects to the subgroup $F'_{0,1,0,2}$ of $\operatorname{E}_{6}^{ad}$. Hence, $$|X|=2\times\frac{2^{6}-2^{3}}{2}=56.$$ When $\theta\sim\sigma_{3}$, $$G^{\theta}\cong\operatorname{SU}(8)/\langle -I\rangle.$$ In this case, $F_{1}(X)$ projects to a maximal elementary abelian 2-subgroup of $\operatorname{PSU}(8)$. It is associated with two integers $(r,s)$ with $r\cdot 2^{s}=8$ and $r\neq 2$. Then, $(r,s)=(8,0)$, $(4,1)$, or $(1,3)$. Using \cite[Lemma 7.11]{Yu}, we can count the cardinality of $X$. More precisely, when $(r,s)=(8,0)$, $|X|=72$; when $(r,s)=(4,1)$, $|X|=56$; and when $(r,s)=(1,3)$, $|X|=128$. \end{example} \subsection{Open cases}\label{SS:open} In summary, the only irreducible compact symmetric spaces for which we do not have a complete classification of maximal antipodal sets yet are in the following list. \begin{enumerate} \item (Group case) $M=\operatorname{Spin}(n)$ ($n\geq 7$), and $M=\operatorname{Spin}(4n)/\langle c\rangle$ ($n\geq 3$).
\item (Some isogeny forms of real Grassmannians) $M=\operatorname{Spin}(p+q)/\operatorname{Spin}(p)\cdot\operatorname{Spin}(q)$ ($p\geq q\geq 1$ and $p+q\geq 7$), and $M\!=\!\operatorname{Spin}(4n)/((\operatorname{Spin}(2n)\cdot\operatorname{Spin}(2n))\rtimes\langle L_{2n}\rangle)$\footnotemark ($n\geq 3$). Here $L_{2n}=\frac{1+e_{1}e_{1+2n}}{\sqrt{2}}\!\cdots\!\frac{1+e_{2n}e_{4n}}{\sqrt{2}}$. \footnotetext{Another form of $M$ is $M\cong G/G^{\theta}$ for $G=\operatorname{Spin}(4n)/\langle c\rangle$ and $\theta= \operatorname{Ad}(e_{1}e_{2}\dots e_{2n})$. Here $c=e_1e_2\cdots e_{2n}$. We also note that $\operatorname{Spin}(8)/\langle c\rangle\cong \operatorname{SO}(8)$, so the case of $n=2$ is treated.} \end{enumerate} \section{Antipodal set with another meaning}\label{S:anitipodal'} For a compact symmetric space $M$, the fixed point set of the geodesic symmetry at a point $p$ is also called an ``antipodal set'' by some authors (\cite{Tirao}, \cite{Liu-Deng}). Antipodal sets with this meaning have been determined in many cases, but not all. In this section we briefly describe a method of determining antipodal sets, with this meaning, in irreducible compact symmetric spaces. We may assume that $p$ is the origin $p=o=H\in G/H$. As in Subsection \ref{SS:antipodal2}, we set $$\overline{G}=G\rtimes\langle\overline{\theta}\rangle,$$ where $\overline{\theta}^{2}=1$ and $\operatorname{Ad}(\overline{\theta})|_{G}=\theta.$ Write $$C_{\overline{\theta}}=\{g\overline{\theta}g^{-1}: g\in G\}.$$ \begin{prop}\label{P:characterization2'} Assume $H=G^{\theta}$. Then, $x=gH$ is in the fixed point set of $s_{o}$ if and only if $\phi(x)\in H$ and $\phi(x)^{2}=1$. The $H$ orbits in the fixed point set of $s_{o}$ are in one-to-one correspondence with the $G$ orbits of ordered pairs $(\theta_1,\theta_2)\in\overline{G}\times\overline{G}$ such that $\theta_1,\theta_2 \in C_{\overline{\theta}}$ and $\theta_1\theta_2=\theta_2\theta_1$.
\end{prop} \begin{proof} The first statement is shown in the proof of Proposition \ref{P:characterization2}. Let $x=gH$ be in the fixed point set of $s_{o}$. Write $\theta_1=g\overline{\theta}g^{-1}$ and $\theta_2= \overline{\theta}$. Then, $\phi(x)=g\theta(g)^{-1}=\theta_1\theta_{2}^{-1}$. Since $\theta(\phi(x))= \phi(x)^{-1}$, the following two conditions are equivalent to each other: (1) $\phi(x)\in H$ (which is equivalent to $\theta(\phi(x))=\phi(x)$); (2) $\phi(x)^{2}=1$. Moreover, they are both equivalent to: $\theta_1$ commutes with $\theta_2$. Clearly, $\theta_1,\theta_2\in C_{\overline{\theta}}$. Moreover, replacing $x=gH$ by $x'=h\cdot x=hgH$ for $h\in H$ amounts to replacing $(\theta_1,\theta_2)$ by $(h\theta_{1}h^{-1},h\theta_{2}h^{-1})=(h\theta_{1}h^{-1},\theta_2)$. This shows one direction of the correspondence in the second statement; the other direction is proved similarly. \end{proof} By Proposition \ref{P:characterization2'}, to determine the $G^{\theta}$ orbits in $G/G^{\theta}$ it suffices to classify {\it ordered pairs of commuting involutions} $(\theta_1,\theta_2)$ in $\overline{G}$ with $\theta_1,\theta_2\in C_{\overline{\theta}}$. If $M$ is an irreducible compact symmetric space not of group form, we have shown that it is of the form $M=G/G^{\theta}$ for $G$ a compact simple Lie group and $\theta$ an involutive automorphism of $G$. When $G$ is of adjoint type, ordered pairs of commuting involutions are classified in \cite{Huang-Yu}. When $G$ is not of adjoint type, the classification can be made by considering the projection $p: G\rightarrow\operatorname{Int}(\operatorname{Lie} G)$ and referring to the classification in \cite{Huang-Yu}, plus a little additional work.
If $M$ is of group form, i.e., it is itself a compact simple Lie group, then the fixed point set of $s_{e}$, apart from $e$ itself, is the set of involutions in $G$, and its $G$ orbits (with respect to the conjugation action) are the conjugacy classes of involutions in $G$. When $G$ is of adjoint type, conjugacy classes of involutions in $G$ were classified by \'Elie Cartan. When $G$ is not of adjoint type, the classification can be made by considering the projection $p: G \rightarrow\operatorname{Int}(\operatorname{Lie} G)$ and referring to Cartan's classification, plus a little additional work.
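As a brute-force check of the counting in Example \ref{E:A1-classification}, the following sketch (ours, purely illustrative; the function name is an assumption, not from the text) enumerates the diagonal matrices with entries $\pm 1$ and determinant $1$, which realize $F_{1}(X)$ for $M=\operatorname{SU}(n)/\operatorname{SO}(n)$, and confirms that there are $2^{n-1}$ of them.

```python
from itertools import product
from math import prod

def antipodal_count(n):
    """Count diag(s_1, ..., s_n) with s_i = +/-1 and determinant 1.

    These diagonal sign matrices realize F_1(X) for the maximal
    antipodal set of SU(n)/SO(n); the expected count is 2**(n - 1).
    """
    return sum(1 for signs in product((1, -1), repeat=n) if prod(signs) == 1)
```

For instance, `antipodal_count(4)` returns $8=2^{3}$, in agreement with $|X|=2^{n-1}$.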
\section{Introduction} Inspired by the applications of game theory to the study of classical information problems, quantum information theorists began to include quantum probability amplitudes and quantum entanglement into classical game theory \cite{1,2,3,4,5,6,7}, creating what is now known as quantum game theory. The main motivation behind this new area was, and still is, its applications in secure quantum communications, as quantum eavesdropping can be treated as a game in which the spy's goal is to extract the maximum amount of information from a quantum communication channel. In 1999, Meyer provided the techniques to include quantum strategies into classical game theory \cite{1}, showing that their use actually increases the expected payoffs of the players. In the same year, motivated by the lack of unconditionally secure remote gambling, Goldenberg et al. presented a protocol that allows two remote parties to play a quantum fair-gambling game \cite{2}. Later on, a quantum version of the famous prisoner's dilemma game was discussed by Eisert et al. and by Benjamin and Hayden in Refs. \cite{3,4,5}. Collective quantum games, in which more than two players take part, were also studied by Benjamin and Hayden in Ref. \cite{6}, who concluded that quantum entanglement enables different kinds of cooperative behavior, preventing players from betraying one another. One interesting problem that arises in quantum game theory is the relation between the classical and quantum results. This issue is briefly discussed by van Enk and Pike in Ref. \cite{7}. In their paper, they analyze to what extent the quantum solution of the prisoner's dilemma solves the classical problem. Another well-known dilemma in classical information theory is the Monty Hall problem, famous for its sharply counterintuitive solution.
It has been suggested that a quantum version of the Monty Hall problem may be of interest in the study of quantum strategies of quantum measurement \cite{Chuan}. Also, it has been recently used to reformulate the Pusey-Barrett-Rudolph (PBR) theorem \cite{PBR}, as well as to improve the reliability of quantum teleportation \cite{QReports}. PBR addresses the question of whether a quantum state corresponds to a $\psi$-ontic model or to a $\psi$-epistemic model. When expressed as a Monty Hall game, winning probabilities for switching doors depend on whether it is a $\psi$-ontic or a $\psi$-epistemic game \cite{QReports}. Other attempts at a quantum version of the Monty Hall problem can be found in the literature, see for example Refs. \cite{Chuan,Abbott,Ariano,Zander,Gawron,Khan, Kurzyk, Paul}. All these studies show that there is not a unique way to formulate a quantum version of this classical game. In this paper we propose a quantum version of the Monty Hall problem inspired by an experimentally feasible, quantum-optical set-up that resembles the classical game with the addition of some quantum features. In order to introduce our model, in the following we summarize both the classical and (the various versions of) the quantum Monty Hall game. \subsubsection*{Classical Monty Hall problem} The Monty Hall problem is a famous, seemingly paradoxical problem in probability \cite{8,9,10}, closely related to other dilemma problems such as the ``three prisoners problem'' and ``Bertrand's box paradox''. It describes a contest in which a player is asked to choose among three boxes. Inside one of the boxes, a prize was randomly placed beforehand. There are two main characters in this contest: the host (Monty Hall), who knows in which box the prize is, and the player, who does not have any information about its location. The contest begins with the player choosing (but not opening) one of the boxes.
If the chosen box is the one with the prize inside, the host, who knows where the prize is hidden, randomly opens one of the two empty boxes. On the other hand, if the player chooses one of the empty boxes, the host opens the other remaining empty box. In both cases the host shares this information with the player. Lastly, the host asks the player if she wants to open her initial choice or prefers to open the other box that remains closed. The apparent paradox results from the fact that, when doing the calculations, it is found that the probability of the player finding the prize in the box she initially chose is $\frac{1}{3}$, while the probability of finding the prize if she decides to open the other box is $\frac{2}{3}$. The above result can be thought of as follows: the location of the prize, with a probability of $\frac{1}{3}$ for each box, and the initial choice of the player, also with a probability of $\frac{1}{3}$ for each box, are independent events; therefore, the probability of the prize being in box $j$ and the player initially choosing box $i$ is $P(i,j) = \frac{1}{9}$ for all $i,j = 1,2,3$. Table \ref{tb:1} shows the elements $(i,j)$ of the corresponding sample space. The cases in which the player initially chooses the box in which the prize is located, that is, the cases in which the player wins if she decides to open her first option, are of the form $(j,j)$. The elements $(i,j)$ with $i\neq j$ represent, after the host has opened box $k$ ($k\neq i,j$), the cases in which the player wins if she decides to change her initial choice and open the other box. The probability $P_{ns}$ of the player winning by not switching her initial choice is calculated by adding the probabilities of the elements corresponding to that event, that is: \begin{align} P_{ns} = P(1,1) + P(2,2) + P(3,3) = \frac{1}{3}.
\label{eq:a} \end{align} Analogously, the probability $P_{s}$ of the player winning by switching her initial choice is: \begin{align} P_{s} = P(1,2) + P (1,3) + P(2,1) + P(2,3) + P(3,1) + P(3,2) = \frac{2}{3}.\label{eq:b} \end{align} \begin{table}[t] \begin{tabular}{ccc} $(1,1)$\quad $(1,2)$\quad $(1,3)$ & \\ $(2,1)$\quad $(2,2)$\quad $(2,3)$ & \\ $(3,1)$\quad $(3,2)$\quad $(3,3)$ & \end{tabular} \caption{Elements of the sample space of the Monty Hall problem. The prize being in box $j$ and the player initially choosing box $i$ is represented as $(i,j)$. \label{tb:1}} \end{table} The Monty Hall problem can be generalized to include an arbitrary number of boxes, more players (all of whom win the prize if they choose the correct box) and more empty boxes to be opened by the host. In this case, using a similar analysis as before, the probabilities are found to be: \begin{equation} P_{ns} = \frac{1}{n}, \end{equation} \begin{equation} P_{s} = \left( \frac{n-1}{n-m-1}\right) \frac{1}{n}, \end{equation} where $n$ is the total number of boxes and $m$ is the number of empty boxes to be opened by the host. Here, $m$ is restricted to $0\leq m \leq n-k$, where $k$ is the total number of players (counting the host as a player). \subsubsection*{Quantum Monty Hall game} In quantum game theory, the construction of a quantum version of a classical game is an entirely subjective task, usually amounting to treating the different elements of the classical game (for example, the location of the prize, the opening of a box by the host, etc.) as elements present in the study of a quantum system (such as superpositions and projective measurements, for instance). Consequently, to this day different approaches and quantum versions of the Monty Hall problem have already been proposed \cite{Chuan,Abbott,Ariano,Zander,Gawron,Khan, Kurzyk}, and there is even a quantum algorithm developed so that two people can play a version of the Monty Hall quantum game on a quantum computer \cite{Paul}.
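As a quick numerical check of the generalized expressions for $P_{ns}$ and $P_{s}$ above, the following Monte Carlo sketch simulates the $n$-box game with $m$ opened boxes (the function names and trial count are illustrative choices of ours, not part of any cited work).

```python
import random

def monty_hall_trial(n, m, switch, rng):
    """Play one round: n boxes, host opens m empty, non-chosen boxes."""
    prize = rng.randrange(n)
    choice = rng.randrange(n)
    # The host never opens the player's box or the prize box.
    openable = [b for b in range(n) if b != choice and b != prize]
    opened = set(rng.sample(openable, m))
    if switch:
        # The player switches to one of the remaining closed boxes.
        remaining = [b for b in range(n) if b != choice and b not in opened]
        choice = rng.choice(remaining)
    return choice == prize

def win_rate(n, m, switch, trials=200_000, seed=0):
    rng = random.Random(seed)
    return sum(monty_hall_trial(n, m, switch, rng) for _ in range(trials)) / trials
```

For $n=3$, $m=1$ the estimates approach $P_{ns}=1/3$ and $P_{s}=2/3$; for general $n$ and $m$ they approach $1/n$ and $(n-1)/[(n-m-1)\,n]$ respectively.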
Some of the most interesting quantum versions are those of Flitney and Abbott \cite{Abbott}, D'Ariano et al. \cite{Ariano} and C.-F. Li \cite{Chuan}. Let us briefly summarize these three versions of the quantum Monty Hall game. \begin{itemize} \item In Ref. \cite{Abbott}, Flitney and Abbott describe a quantum version in which the location of the prize, the box initially chosen by the player and the box opened by the host are each represented by a state in a three-dimensional Hilbert space: $\mathcal H_{a}$, $\mathcal H_{b}$ and $\mathcal H_{o}$, respectively. The full system is initially prepared in an arbitrary state $\left| \psi \right\rangle \in \mathcal H = \mathcal H_{o} \otimes \mathcal H_{b} \otimes \mathcal H_{a}$, a feature that does not have a classical analogy. After this initial preparation, the game begins: the host hides the prize by acting with a unitary operator on $\mathcal H_{a}$, the act of the player choosing a box is also implemented by acting with a unitary operator on $\mathcal H_{b}$, and the opening of a box by the host is likewise performed by acting with a unitary operator on the full space $\mathcal H$. Lastly, the decision of switching between boxes is made by acting with a superposition of two operators, a switching operator and a not-switching operator, on the full space $\mathcal H$, allowing a non-classical feature: a superposition of switching and not switching between boxes. The authors conclude that if the host has access to quantum strategies and the player does not, the former can make the game fair with an expected payoff of $1/2$ for each player. Otherwise, if the player has access to quantum strategies and the host does not, then the player can win the game every time. \item In Ref. \cite{Ariano}, D'Ariano et al.
present a scheme in which they represent the location of the prize as a state in a three-dimensional space $\mathcal H$; information about the state is given to the host classically or via measurements on an ancillary system entangled with the prize. The initial choice of the player is, in this case, not an operation but simply the choice of a state $\left|p_{i} \right\rangle \in \mathcal H$. The opening of an empty box by the host is performed as a projective measurement, reducing the space $\cal H$ to a two-dimensional space $\mathcal H _{p}$ spanned by $\left|p_{i} \right\rangle$ and the prize state. Lastly, the player can choose to stay with her initial choice or to change to a different state $\left|p_{f} \right\rangle \in \mathcal{H}_p$. An interesting variation of the Monty Hall game which is closely related to that of D'Ariano et al. is the Ignorant Monty Hall game \cite{QReports}, where the host does not know where the prize is and can accidentally reveal it. In contrast to the original version, where the player expects a payoff of 1/3 if sticking to the initial choice and 2/3 if switching, in this variation the contestant's probability of winning is the same whether she chooses to switch doors or not. \item A different way to implement quantum strategies in the Monty Hall game is proposed by C.-F. Li et al. in Ref. \cite{Chuan}. There is one quantum particle (the prize) and three boxes $ \left| 0 \right> $, $\left| 1 \right>$, and $\left| 2 \right> $. The host puts the particle into the boxes, which may result in a superposition state like $\left| \psi \right> _{p} = \frac{1}{\sqrt{3}} ( \left| 0 \right> + \left| 1 \right> + \left| 2 \right> )$. The player now picks a box, for instance $\left| 0 \right>$, and has a 1/3 chance of winning.
If the host reveals no particle in $\left| 2 \right>$, the state of the particle may be described by the density matrix $\rho _{0} = \frac{1}{3} \left| 0 \right> \left< 0 \right| + \frac{2}{3} \left| 1 \right> \left< 1 \right| $. Now the host is allowed to perform a von Neumann measurement on the particle (in orthogonal bases which are linear superpositions of $\left| 0 \right>$ and $\left| 1 \right>$), thus reducing the problem to a coin-tossing game. Hence, the conclusion is the same as in the Ignorant Monty Hall game described above: a probability of 1/2 of being correct by either staying or switching. \end{itemize} In this work we introduce a different approach to the Monty Hall problem, inspired by a quantum-optical set-up that resembles the classical game. A nonlinear crystal allows us to have entanglement between the prize's location and the initial choice of the player, while the use of polarized beam splitters and polarization rotators enables the superposition feature of both the prize's location and the initial choice of the player. In our quantum version, however, the decision of switching between boxes is binary and based on the classical sample space in table \ref{tb:1}. We will also analyze the influence of noise on the game outcome. \section{Proposed Quantum Version} \label{PQS} The system under study will be modeled by two Hilbert spaces $\mathcal H_{a}$ and $\mathcal H_{b}$, each with a respective orthonormal basis $\left\lbrace \left| 1_{a} \right\rangle,\left| 2_{a} \right\rangle,\left| 3_{a} \right\rangle \right\rbrace$ and $\left\lbrace \left| 1_{b} \right\rangle,\left| 2_{b} \right\rangle,\left| 3_{b} \right\rangle \right\rbrace$. States in $\mathcal H_{a}$ correspond to the box initially chosen by the player (whom we will refer to as Alice). Notice that the initial choice of Alice can be a superposition of boxes.
On the other hand, states in $\mathcal H_{b}$ correspond to the prize's location, initially prepared by the host (whom we will refer to as Bob). Analogously, the prize can also be in a superposition of boxes. The game begins with Bob preparing the prize in a state \begin{equation} \left| \psi_{b} \right\rangle = \sum^{3}_{i=1} b_{i}\left| i_{b} \right\rangle,\label{choice_b} \end{equation} about which Alice will have no information. She then proceeds to choose a box, i.e. a state in $\mathcal H_{a}$: \begin{equation} \left| \psi_{a} \right\rangle = \sum^{3}_{i=1} a_{i}\left| i_{a} \right\rangle.\label{choice_a} \end{equation} The state that describes both the prize and Alice's initial choice is thus \begin{equation} \left| \psi_{0} \right\rangle = \left| \psi_{a} \right\rangle \otimes \left| \psi_{b} \right\rangle \in \mathcal H_{a} \otimes \mathcal H_{b}. \label{eq:1} \end{equation} Once Alice has chosen a box, Bob has to do something analogous to opening one of them. A useful way (for our proposed experimental set-up) of doing this mathematically is to apply the door-opening operator \begin{equation} \hat{D}_{o} := \cos(\varphi_1) \left| 1_b \right\rangle \left\langle 1_b \right| + \sin(\varphi_2) \left| 2_b \right\rangle \left\langle 2_b \right| + \sin(\varphi_3) \left| 3_b \right\rangle \left\langle 3_b \right|\label{doo} \end{equation} to $\left| \psi_{b} \right\rangle$, where $\varphi_{1},\varphi_{2},\varphi_{3} \in \left[ 0, \frac{\pi}{2} \right]$. The door-opening operator must also satisfy a two-doors-remain-closed condition, which can be modeled as \begin{equation} \cos^{2}(\varphi_1) + \sin^{2}(\varphi_2) + \sin^{2}(\varphi_3) = 2.\label{tdrcc} \end{equation} There are two main things worth noticing here.
The first is that the door-opening operator acts as a projection onto the two-dimensional subspaces generated by $\left\lbrace \left| 2_{b} \right\rangle,\left| 3_{b} \right\rangle \right\rbrace$, $\left\lbrace \left| 1_{b} \right\rangle,\left| 3_{b} \right\rangle \right\rbrace$ and $\left\lbrace \left| 1_{b} \right\rangle,\left| 2_{b} \right\rangle \right\rbrace$ when ($\varphi_{1} = \frac{\pi}{2},\varphi_{2} = \frac{\pi}{2},\varphi_{3} = \frac{\pi}{2}$), ($\varphi_{1} = 0,\varphi_{2} = 0,\varphi_{3} = \frac{\pi}{2}$) and ($\varphi_{1} = 0,\varphi_{2} = \frac{\pi}{2},\varphi_{3} = 0$) respectively. The second is that the application of $\hat{D}_{o}$ on $\left| \psi_{b} \right\rangle$ de-normalizes it. Thus, in order to maintain the interpretation of the inner product as a probability amplitude, the resulting state must be renormalized, leading us to define \begin{equation} \sum^{3}_{i=1} \beta_{i}\left| i_{b} \right\rangle = \frac{\hat{D}_{o} \left| \psi_{b} \right\rangle}{\sqrt{\left\langle \psi_b \right| \hat{D}^{\dagger}_{o} \hat{D}_{o} \left| \psi_{b} \right\rangle}},\label{betas} \end{equation} for some $\beta_{i}\in \mathbb{C}$ such that $\left| \beta_{1} \right| ^{2} + \left| \beta_{2} \right| ^{2} + \left| \beta_{3} \right| ^{2} = 1$. The composite state of Alice's initial choice and the prize's location thus becomes \begin{equation} \left| \psi \right\rangle = \sum^{3}_{i=1} \sum^{3}_{j=1} a_{i} \beta_{j} \left| i_{a}, \; j_{b} \right\rangle.
\label{obs} \end{equation} In analogy with the classical case, and as shown in table \ref{tb:1}, the states $\left\lbrace \left| 1_{a} \; 1_{b} \right\rangle,\left| 2_{a} \; 2_{b} \right\rangle,\left| 3_{a} \; 3_{b} \right\rangle \right\rbrace$ correspond to Alice winning with her initial choice, while the states $\left\lbrace \left| 1_{a} \; 2_{b} \right\rangle,\left| 1_{a} \; 3_{b} \right\rangle,\left| 2_{a} \; 1_{b} \right\rangle,\left| 2_{a} \; 3_{b} \right\rangle,\left| 3_{a} \; 1_{b} \right\rangle,\left| 3_{a} \; 2_{b} \right\rangle \right\rbrace$ correspond to Alice winning by switching her initial choice. Therefore, the probability of Alice winning by not switching and the probability of her winning by switching are respectively \begin{equation} P_{ns} = \sum^{3}_{i=1} \left| a_{i} \beta_{i} \right|^{2}, \label{eq:2} \end{equation} \begin{equation} P_{s} = \sum^{3}_{i,j=1 \; (i\neq j)} \left| a_{i} \beta_{j} \right|^{2}. \label{eq:3} \end{equation} The classical Monty Hall problem may be thought of within this quantum version by restricting Alice and Bob to only use ``classical'' states $\left\lbrace \left| 1_{a} \right\rangle,\left| 2_{a} \right\rangle,\left| 3_{a} \right\rangle \right\rbrace$ and $\left\lbrace \left| 1_{b} \right\rangle,\left| 2_{b} \right\rangle,\left| 3_{b} \right\rangle \right\rbrace$ respectively. Nevertheless, expressions \eqref{eq:2} and \eqref{eq:3} would then only yield the objective quantum probability (i.e. $1$ or $0$) of Alice winning given her choices and the state of the prize. In order to calculate the subjective classical probability (i.e. the one due to the lack of information on Alice's part) and obtain the same results as in \eqref{eq:a} and \eqref{eq:b}, one must perform a classical probability calculation (using table \ref{tb:1} for example).
There is, however, a selection of quantum states by Alice and Bob that actually resembles the classical problem, a semi-classical case: First, Bob prepares the prize in the state with $b_{i}=\frac{1}{\sqrt{3}}$ (i.e. equally distributed over the three boxes), then Alice chooses the state with $a_{i}=\frac{1}{\sqrt{3}}$ (i.e. the same confidence in each box) and finally, Bob applies the door-opening operator $\hat{D}_{o}$ with ($\varphi_{1} = \frac{\pi}{2},\varphi_{2} = \frac{\pi}{2},\varphi_{3} = \frac{\pi}{2}$) or ($\varphi_{1} = 0,\varphi_{2} = 0,\varphi_{3} = \frac{\pi}{2}$) or ($\varphi_{1} = 0 , \varphi_{2} = \frac{\pi}{2},\varphi_{3} = 0$). In this case, equations \eqref{eq:2} and \eqref{eq:3} give respectively $\frac{1}{3}$ and $\frac{2}{3}$. The semi-classical case discussed above points to a classical interpretation of the amplitudes $a_{i}$ and $b_{i}$ in our scheme, with $\left| a_{i} \right|^2$ being the probability of Alice initially choosing box $i$ and $\left| b_{i} \right|^2$ the probability of the prize being placed in box $i$. This means that our quantum version, up to this point, can be entirely reproduced in classical probability theory by the classical Monty Hall problem or by analyzing a non-symmetric case: the host is more inclined to hide the prize in a certain box and the player has some kind of bias towards initially choosing one of the boxes. However, it is worth mentioning that expression \eqref{obs} assumes $\left| \psi_{0} \right\rangle$ is a separable state in $\mathcal H_{a} \otimes \mathcal H_{b}$ (equation \eqref{eq:1}). This is a fully classical restriction, one we do not have to follow in the quantum realm.
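The semi-classical case just described can be reproduced numerically. The sketch below (our own illustration; the function names are not from the text) implements the door-opening operator \eqref{doo}, the renormalization defining the $\beta_{i}$, and the probabilities \eqref{eq:2} and \eqref{eq:3}.

```python
import numpy as np

def door_opening(b, phi1, phi2, phi3):
    """Apply D_o = diag(cos phi1, sin phi2, sin phi3) to |psi_b>, renormalized."""
    d = np.array([np.cos(phi1), np.sin(phi2), np.sin(phi3)])
    beta = d * np.asarray(b, dtype=complex)
    return beta / np.linalg.norm(beta)

def win_probabilities(a, beta):
    """P_ns = sum_i |a_i beta_i|^2 and P_s = sum_{i != j} |a_i beta_j|^2."""
    joint = np.abs(np.outer(a, beta)) ** 2
    p_ns = float(np.trace(joint))          # diagonal terms i = j
    return p_ns, float(joint.sum() - p_ns)  # off-diagonal terms i != j
```

With $a_{i}=b_{i}=\frac{1}{\sqrt{3}}$ and $(\varphi_{1},\varphi_{2},\varphi_{3})=(\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2})$, this reproduces $P_{ns}=\frac{1}{3}$ and $P_{s}=\frac{2}{3}$.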
For a general state \begin{equation} \left| \psi \right\rangle = \sum_{i,j=1}^{3} \gamma_{ij} \left| i_{a}, \; j_{b} \right\rangle \in \mathcal H_{a} \otimes \mathcal H_{b}, \label{eq:un} \end{equation} the probability of Alice winning by not switching and the probability of her winning by switching are respectively \begin{equation} P_{e,ns} = \sum^{3}_{i=1} \left| \gamma_{ii}\right|^{2}, \label{eq:ens} \end{equation} \begin{equation} P_{e,s} = \sum^{3}_{i,j=1 \; (i\neq j)} \left| \gamma_{ij}\right|^{2}. \label{eq:es} \end{equation} Allowing entanglement in equation \eqref{eq:un} between the prize's location and the player's initial choice would make the game unfair. It is considered here for the purpose of extending our framework to construct a secure communication protocol, as will be discussed below. In the next section we describe the quantum-optical set-up that inspired our quantum version, and use it to analyze both a separable and an entangled initial state $\left| \psi_{0} \right\rangle$. \section{Experimental Realization} \begin{figure*}[t] \begin{centering} \includegraphics[scale=1]{Experiment_B} \par\end{centering} \caption{\label{fig:1} Diagram of the experimental set-up proposed for the Monty Hall problem. The BBO type II crystal produces a pair of entangled photons in both position and polarization. The vertical arm of the set-up corresponds to Alice's subsystem, while the horizontal one corresponds to Bob's. ``Pol V'' and ``Pol H'' represent vertical and horizontal polarizers respectively, ``Rot($\theta$)'' represents a polarization rotator of an angle $\theta$, ``PBS'' represents a polarized beam splitter, and ``Pol$_{H}(\varphi)$'' represents a polarizer at an angle $\varphi$ with respect to the horizontal polarization.
Detectors A1, A2 and A3 correspond to Alice's initial choice of a box, while the detectors B1, B2 and B3 correspond to the doors in which the prize is prepared.} \end{figure*} Figure \ref{fig:1} shows a diagram of a quantum-optical approach to an experimental realization of our proposed quantum version of the Monty Hall problem. In it, detectors A1, A2 and A3 represent Alice's initial choice of a box, while detectors B1, B2 and B3 represent the boxes where the prize is hidden. The game starts with a pair of polarization-entangled photons produced via spontaneous parametric down-conversion in a Beta Barium Borate (BBO) type II crystal. Photon A, represented by the vertical output of the BBO crystal in the diagram, corresponds to the system available to Alice and modeled in $\mathcal H_{a}$, while photon B, represented by the horizontal output of the BBO crystal in the diagram, corresponds to the system available to Bob and modeled in $\mathcal H_{b}$. The state of the composite system at this stage is given by the entangled state \begin{equation} \left| \phi_0 \right\rangle = \frac{1}{\sqrt{2}} \left( \left| V_{1}, \; H_{1} \right\rangle + \left| H_{1}, \; V_{1} \right\rangle \right) \in \mathcal H_{a} \otimes \mathcal H_{b}, \label{entan} \end{equation} where $V$ and $H$ stand for vertical and horizontal polarization respectively, and the subindex $1$ represents the label of the detector to which the photon is heading (A1 for photon A and B1 for photon B). Firstly, a vertical polarizer (Pol V) and a horizontal polarizer (Pol H) are respectively placed in photon A's and photon B's paths. These polarizers allow us to control the entanglement between the two subsystems. If the polarizers are present, the entanglement between photons A and B is lost, as the state $\left| \phi_0 \right\rangle$ reduces to $\left| V_{1}, \; H_{1} \right\rangle$. On the other hand, if the polarizers are removed, then the system remains entangled in the state $\left| \phi_0 \right\rangle$.
Both entangled and non-entangled cases are analyzed, but for the sake of simplicity, let us describe just the case where the polarizers are placed and the initial state of the system is described by \begin{equation} \left| \phi_0 \right\rangle = \left| V_{1}, \; H_{1} \right\rangle. \label{nen} \end{equation} In order to obtain the expressions corresponding to the entangled case, the operations we will be describing must also be applied to the term $\left| H_{1}, \; V_{1} \right\rangle$. The next device placed in the photons' paths is a polarization rotator (Rot($\theta$)) with rotation angles $\theta_{a1}$ and $\theta_{b1}$ for photon A and photon B respectively. By choosing an angle $\theta_{b1}$, Bob is fixing the probability amplitude $b_3$ in \eqref{choice_b}. Analogously, by choosing an angle $\theta_{a1}$, Alice is fixing the probability amplitude $a_3$ in eq. \eqref{choice_a}. The polarization-rotator linear operator performs the operations \begin{align} Rot(\theta)\left| V \right\rangle & = \cos{\theta}\left| V \right\rangle - \sin{\theta}\left| H \right\rangle \label{eq:4} \\ Rot(\theta)\left| H \right\rangle & = \cos{\theta}\left| H \right\rangle + \sin{\theta}\left| V \right\rangle,\label{eq:5} \end{align} changing the initial state $\left| \phi_0 \right\rangle$ to \begin{equation} \left| \phi_1 \right\rangle = \left( \cos{\theta_{a1}} \left| V_{1}\right\rangle - \sin{\theta_{a1}} \left| H_{1}\right\rangle \right) \otimes \left( \cos{\theta_{b1}} \left| H_{1}\right\rangle + \sin{\theta_{b1}} \left| V_{1}\right\rangle \right). \end{equation} Next, each of the photons encounters a polarized beam splitter (PBS), positioned to reflect the vertical component of the polarization and transmit the horizontal one.
These first PBS's reflect the vertical component of the polarization of photons A and B towards detectors A3 and B3 respectively, performing the operation \begin{equation} PBS \left( \alpha \left| V_1 \right\rangle + \beta \left| H_1 \right\rangle \right) = \alpha \left| V_3 \right\rangle + \beta \left| H_1 \right\rangle. \end{equation} The state of the system after these first PBS's is then \begin{equation} \left| \phi_2 \right\rangle = \left( \cos{\theta_{a1}} \left| V_{3}\right\rangle - \sin{\theta_{a1}} \left| H_{1}\right\rangle \right) \\ \otimes \left( \cos{\theta_{b1}} \left| H_{1}\right\rangle + \sin{\theta_{b1}} \left| V_{3}\right\rangle \right).\label{eq:6} \end{equation} It is worth mentioning that, strictly speaking, the operation performed by the PBS requires an ancillary input in order to have two outputs, accounting for the four faces of a BS. Since in this work we just use a PBS as a controlled gate and the proposed experimental set-up does not have any interferometer-like behavior, this more formal mathematical treatment of a PBS is not necessary and will not be used. In order for Alice and Bob to fix the amplitudes $a_1$, $a_2$ in \eqref{choice_a} and $b_1$, $b_2$ in \eqref{choice_b}, additional polarization rotators with angles $\theta_{a2}$ and $\theta_{b2}$ are respectively placed in the path of the horizontal component of photons A and B. Applying the polarization-rotator operator (equations \eqref{eq:4} and \eqref{eq:5}) to state $\left| \phi_2 \right\rangle$ with angles $\theta_{a2}$ and $\theta_{b2}$, the state of the system becomes \begin{multline} \left| \phi_3 \right\rangle = ( \cos{\theta_{a1}} \left| V_{3}\right\rangle - \sin{\theta_{a1}} \sin{\theta_{a2}} \left| V_{1}\right\rangle - \sin{\theta_{a1}}\cos{\theta_{a2}} \left| H_{1}\right\rangle ) \\ \otimes ( \sin{\theta_{b1}} \left| V_{3} \right\rangle + \cos{\theta_{b1}} \sin{\theta_{b2}} \left| V_{1}\right\rangle + \cos{\theta_{b1}}\cos{\theta_{b2}} \left| H_{1}\right\rangle ).
\end{multline} Then, a polarized beam splitter (PBS) is placed in both photons' paths. These last PBS's are positioned to reflect the vertical component of the polarization towards detectors A2 and B2, so the state of the system becomes \begin{multline} \left| \phi_4 \right\rangle = ( \cos{\theta_{a1}} \left| V_{3}\right\rangle - \sin{\theta_{a1}} \sin{\theta_{a2}} \left| V_{2}\right\rangle - \sin{\theta_{a1}}\cos{\theta_{a2}} \left| H_{1}\right\rangle) \\ \otimes ( \sin{\theta_{b1}} \left| V_{3} \right\rangle + \cos{\theta_{b1}} \sin{\theta_{b2}} \left| V_{2}\right\rangle + \cos{\theta_{b1}}\cos{\theta_{b2}} \left| H_{1}\right\rangle ). \label{eq:7} \end{multline} Notice that equation \eqref{eq:1} is analogous to equation \eqref{eq:7} with \begin{align} a_{1} & = - \sin{\theta_{a1}}\cos{\theta_{a2}},\label{a1}\\ a_{2} & = - \sin{\theta_{a1}} \sin{\theta_{a2}},\label{a2}\\ a_{3} & = \cos{\theta_{a1}},\label{a3}\\ b_{1} & = \cos{\theta_{b1}} \cos{\theta_{b2}},\\ b_{2} & = \cos{\theta_{b1}} \sin{\theta_{b2}},\\ b_{3} & = \sin{\theta_{b1}}. \end{align} The process described so far prepares the state of the system as in expression \eqref{eq:1}. The next step in the Monty Hall problem is for Bob to open one box. In order to model this door-opening procedure in our experimental set-up, three polarizers (Pol$_{H}(\varphi_{i})$) at angles $\varphi_{i}$ with respect to the horizontal polarization are placed before the detectors B$_i$. Strictly speaking, a polarizer performs a dissipative operation, since it absorbs part of the radiation that arrives at it. However, we are only interested in photons that account for coincidences between detectors A and B, i.e. photons that do reach detectors B1, B2 or B3. This is because these photons are the ones that allow us to measure the probabilities in an analogous way as we did classically with table \ref{tb:1}.
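As a quick sanity check on the angle parameterization \eqref{a1}--\eqref{a3} and its analogue for Bob, the following Python sketch (illustrative only; the function names are ours) verifies that, for any rotator angles, the resulting three-box amplitudes are automatically normalized, so the rotators indeed parameterize all of Alice's and Bob's choices:

```python
import numpy as np

def alice_amplitudes(theta1, theta2):
    """Amplitudes (a1, a2, a3) fixed by Alice's rotator angles."""
    return np.array([-np.sin(theta1) * np.cos(theta2),
                     -np.sin(theta1) * np.sin(theta2),
                      np.cos(theta1)])

def bob_amplitudes(theta1, theta2):
    """Amplitudes (b1, b2, b3) fixed by Bob's rotator angles."""
    return np.array([np.cos(theta1) * np.cos(theta2),
                     np.cos(theta1) * np.sin(theta2),
                     np.sin(theta1)])

# For any angles the squared amplitudes sum to one.
rng = np.random.default_rng(0)
for _ in range(100):
    t1, t2 = rng.uniform(0.0, np.pi / 2, size=2)
    assert np.isclose(np.sum(alice_amplitudes(t1, t2) ** 2), 1.0)
    assert np.isclose(np.sum(bob_amplitudes(t1, t2) ** 2), 1.0)
```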
Therefore, we can include the effect of the polarizers by changing the probability amplitudes $b_{i}$ in Bob's subsystem as \begin{align} b^{\prime}_{1} = b_{1}\cos{\varphi_{1}},\\ b^{\prime}_{2} = b_{2}\sin{\varphi_{2}},\\ b^{\prime}_{3} = b_{3}\sin{\varphi_{3}}, \end{align} which is equivalent to applying the door-opening operator \eqref{doo} to Bob's subsystem, using the two-doors-remained-closed condition expressed in \eqref{tdrcc}, as we only need to block one third of the photons. Then, as described in the previous section, we must renormalize the state, which is experimentally justified by the fact that we calculate the probability of measuring the state $\left| i_{a}, \; j_{b} \right\rangle$ as \begin{equation} P_{i,j} = \frac{C_{i,j}}{\displaystyle\sum_{m,n=1}^{3}C_{m,n}}, \end{equation} where $C_{m,n}$ stands for the number of coincidences between detectors A$m$ and B$n$. Renormalization of the state leads to the new probability amplitudes $\beta_i$ (defined in expression \eqref{betas}) for Bob's subsystem: \begin{equation} \beta_{i} = \frac{b^{\prime}_{i}}{\sqrt{(b^{\prime}_{1})^{2}+(b^{\prime}_{2})^{2}+(b^{\prime}_{3})^{2}}}.
\end{equation} In the case when the first polarizers (Pol V and Pol H) are removed, and the initial state $\left| \phi_0 \right\rangle$ is as in equation \eqref{entan}, the non-normalized amplitudes with respect to the basis $\left\lbrace \left| i_{a}, \; j_{b}\right\rangle \right\rbrace^{i=1,2,3}_{j=1,2,3}$, after applying the door-opening operator to the resulting state, turn out to be \begin{align} c_{11} & = \frac{-1}{\sqrt{2}} \cos{\theta_{a2}} \cos{\theta_{b2}} \sin{(\theta_{a1}+\theta_{b1})} \cos{\varphi_{1}}, \\ c_{12} & = \frac{-1}{\sqrt{2}} \cos{\theta_{a2}} \sin{\theta_{b2}} \sin{(\theta_{a1}+\theta_{b1})} \sin{\varphi_{2}}, \\ c_{13} & = \frac{1}{\sqrt{2}} \cos{\theta_{a2}} \cos{(\theta_{a1}+\theta_{b1})} \sin{\varphi_{3}}, \\ c_{21} & = \frac{-1}{\sqrt{2}} \sin{\theta_{a2}} \cos{\theta_{b2}} \sin{(\theta_{a1}+\theta_{b1})} \cos{\varphi_{1}},\\ c_{22} & = \frac{-1}{\sqrt{2}} \sin{\theta_{a2}} \sin{\theta_{b2}} \sin{(\theta_{a1}+\theta_{b1})} \sin{\varphi_{2}},\\ c_{23} & = \frac{1}{\sqrt{2}} \sin{\theta_{a2}} \cos{(\theta_{a1}+\theta_{b1})} \sin{\varphi_{3}},\\ c_{31} & = \frac{1}{\sqrt{2}} \cos{\theta_{b2}} \cos{(\theta_{a1}+\theta_{b1})} \cos{\varphi_{1}},\\ c_{32} & = \frac{1}{\sqrt{2}} \sin{\theta_{b2}} \cos{(\theta_{a1}+\theta_{b1})} \sin{\varphi_{2}},\\ c_{33} & = \frac{1}{\sqrt{2}} \sin{(\theta_{a1}+\theta_{b1})} \sin{\varphi_{3}}. \end{align} Thus, the amplitudes $\gamma_{ij}$ necessary to calculate $P_{e,ns}$ and $P_{e,s}$ as in equations \eqref{eq:ens} and \eqref{eq:es} respectively, are \begin{equation} \gamma_{ij} = \frac{c_{ij}}{\sqrt{\displaystyle\sum_{i,j=1}^{3} \left|c_{ij}\right|^2}}. \end{equation} \section{Noise Effects in the Experimental Realization} Noise effects in quantum games have been extensively studied \cite{Chen, Johnson, Flitney, Ozdemir}. In Ref. \cite{Gawron}, Gawron et al.
considered the influence of a spontaneous emission channel and a generalized Pauli channel on the quantum Monty Hall Game (within the Flitney and Abbott scheme). In this section, we will discuss the effects of a Pauli channel in our quantum version of the Monty Hall problem. If Alice and Bob played the quantum Monty Hall game near each other (in the same lab), using the experimental set-up proposed here, the main source of noise in the obtained results would be the experimental uncertainty associated with each device, which can be addressed using experimental and uncertainty-propagation methods. However, a more interesting source of noise arises if we consider that Alice and Bob wish to play remotely; we discuss this case below. Let us suppose that the whole vertical arm of the experimental set-up, after the initial vertical polarizer (Pol V), is far away in Alice's lab. The rest of the set-up, along with the computer that counts the coincidences between detectors, stays in Bob's lab. The coincidence-counting software would have to be tuned to account for the different distances of detectors A and B from the computer. In this case, in order for Alice and Bob to play the quantum Monty Hall game, the photon on which Alice applies her operations must travel through a quantum channel connecting the set-ups in both labs, leading to a possible loss of information on the state created by the BBO type II crystal, and affecting the game results.
This loss of information is modeled using the Pauli noise: \begin{equation} \hat{\rho}_{o} = (1-p_{x}-p_{y}-p_{z})\hat{\rho} + p_{x} \, \hat{\sigma}_{x}\hat{\rho}\hat{\sigma}_{x} + p_{y} \, \hat{\sigma}_{y}\hat{\rho}\hat{\sigma}_{y} + p_{z} \, \hat{\sigma}_{z}\hat{\rho}\hat{\sigma}_{z}, \end{equation} where $\hat{\rho}_{o}$ is the state at the channel's output, $\hat{\rho}$ is the state at the channel's input, $\sigma_{i}$ are the Pauli matrices, and $p_{x}$, $p_{y}$ and $p_{z}$ are real parameters, between $0$ and $1$ and such that $p_{x}+p_{y}+p_{z} \leq 1$, related to the fidelity of the quantum channel along the respective axis. In this work, for simplicity and in order to avoid specifying a fixed configuration of the channel, we consider $p_{x} = p_{y} = p_{z} = \frac{p}{3}$. Furthermore, since only Alice's photon is sent through the quantum channel, it is the only one subject to the noise; thus, in our proposed experimental set-up, the state at the output of the channel is given by \begin{equation} \hat{\rho}_{o} = (1-p)\hat{\rho} + \frac{p}{3}\left[ \left( \hat{\sigma}_{x} \otimes \hat{I} \right) \hat{\rho} \left( \hat{\sigma}_{x} \otimes \hat{I} \right) + \left( \hat{\sigma}_{y} \otimes \hat{I} \right) \hat{\rho} \left( \hat{\sigma}_{y} \otimes \hat{I} \right) + \left( \hat{\sigma}_{z} \otimes \hat{I} \right) \hat{\rho} \left( \hat{\sigma}_{z} \otimes \hat{I} \right) \right], \label{output} \end{equation} where $\hat{I}$ is the identity operator.
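The one-sided channel \eqref{output} is straightforward to simulate numerically. The following Python sketch (an illustration with our own naming conventions, not part of the proposed set-up) applies the Pauli channel to Alice's photon only, and can be used to check trace preservation and the non-entangled output state computed in the text:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_channel_on_A(rho, p):
    """Pauli channel with p_x = p_y = p_z = p/3 acting only on
    Alice's photon (first tensor factor), identity on Bob's."""
    out = (1 - p) * rho
    for s in (sx, sy, sz):
        K = np.kron(s, I2)  # noise on photon A, identity on photon B
        out += (p / 3) * K @ rho @ K.conj().T
    return out

# Non-entangled input |V1, H1><V1, H1| with V = (1, 0), H = (0, 1).
V, H = np.array([1.0, 0.0]), np.array([0.0, 1.0])
VH = np.kron(V, H)
rho_in = np.outer(VH, VH).astype(complex)

p = 0.3
rho_out = pauli_channel_on_A(rho_in, p)
```

For this input the output is $(1 - 2p/3)\left| V_{1}, H_{1} \right\rangle \left\langle V_{1}, H_{1} \right| + (2p/3)\left| H_{1}, H_{1} \right\rangle \left\langle H_{1}, H_{1} \right|$, in agreement with the expression derived next, and the trace is preserved.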
In our proposed experimental set-up, the possible states of the system at the channel's input are given by \begin{equation} \hat{\rho} = \left| V_{1}, \, H_{1} \right\rangle \left\langle V_{1}, \, H_{1} \right|, \end{equation} \begin{equation} \hat{\rho}_{e} = \frac{1}{2} \left( \left| V_{1}, \, H_{1} \right\rangle \left\langle V_{1}, \, H_{1} \right| + \left| V_{1}, \, H_{1} \right\rangle \left\langle H_{1}, \, V_{1} \right| + \left| H_{1}, \, V_{1} \right\rangle \left\langle V_{1}, \, H_{1} \right| + \left| H_{1}, \, V_{1} \right\rangle \left\langle H_{1}, \, V_{1} \right| \right), \end{equation} where $\hat{\rho}$ and $\hat{\rho}_{e}$ stand respectively for the non-entangled and entangled cases. Using equation \eqref{output}, the possible states at the channel's output, for the non-entangled and entangled cases, are respectively \begin{equation} \hat{\rho}_{o} = \left(1 - \frac{2p}{3} \right) \left| V_{1}, \, H_{1} \right\rangle \left\langle V_{1}, \, H_{1} \right| + \frac{2p}{3} \left| H_{1}, \, H_{1} \right\rangle \left\langle H_{1}, \, H_{1} \right|, \end{equation} \begin{multline} \hat{\rho}_{e,o} = \frac{1}{2} \left[ \left(1 - \frac{2p}{3} \right) \left( \left| H_{1}, \, V_{1} \right\rangle \left\langle H_{1}, \, V_{1} \right| + \left| V_{1}, \, H_{1} \right\rangle \left\langle V_{1}, \, H_{1} \right|\right) \right. \\ \left. + \left(1 - \frac{4p}{3} \right) \left( \left| V_{1}, \, H_{1} \right\rangle \left\langle H_{1}, \, V_{1} \right| + \left| H_{1}, \, V_{1} \right\rangle \left\langle V_{1}, \, H_{1} \right| \right) + \frac{2p}{3} \left( \left| H_{1}, \, H_{1} \right\rangle \left\langle H_{1}, \, H_{1} \right| + \left| V_{1}, \, V_{1} \right\rangle \left\langle V_{1}, \, V_{1} \right| \right) \right]. 
\end{multline} If we denote as $\hat{E}$ the operator that represents all the operations performed by both Alice and Bob in the experimental set-up of the game, the final states of the game for the non-entangled and entangled cases, are respectively given by \begin{equation} \hat{\rho}_{f} = \hat{E} \hat{\rho}_{o} \hat{E}^{\dagger}, \end{equation} \begin{equation} \hat{\rho}_{e,f} = \hat{E} \hat{\rho}_{e,o} \hat{E}^{\dagger}, \end{equation} and thus, the probabilities of winning by not switching and by switching for the non-entangled case (expressions \eqref{eq:2} and \eqref{eq:3}), and for the entangled case (expressions \eqref{eq:ens} and \eqref{eq:es}), are generalized as \begin{align} & P_{ns} = \displaystyle\sum_{i=1}^{3} \left\langle i_{a}, \; i_{b} \right| \hat{\rho}_{f} \left| i_{a}, \; i_{b} \right\rangle, \\ & P_{s} = \displaystyle\sum^{3}_{i,j=1 \; (i\neq j)} \left\langle i_{a}, \; j_{b} \right| \hat{\rho}_{f} \left| i_{a}, \; j_{b} \right\rangle, \\ & P_{e,ns} = \displaystyle\sum_{i=1}^{3} \left\langle i_{a}, \; i_{b} \right| \hat{\rho}_{e,f} \left| i_{a}, \; i_{b} \right\rangle, \\ & P_{e,s} = \displaystyle\sum^{3}_{i,j=1 \; (i\neq j)} \left\langle i_{a}, \; j_{b} \right| \hat{\rho}_{e,f} \left| i_{a}, \; j_{b} \right\rangle, \end{align} where, as before, the state $\left| i_{a}, \; j_{b} \right\rangle$ represents the event of Alice detecting a photon in detector Ai and Bob detecting a photon in detector Bj. As an example, consider the semi-classical case described in Section \ref{PQS}. Figure \ref{Pclass_noise} shows the probabilities of winning by switching and by not switching as a function of the parameter $p$ related to the fidelity of the quantum channel. Notice that as $p$ increases, the difference between the switching and not switching probabilities decreases, making the decision of switching slightly less meaningful for Alice, from $2$ times better to approximately $1.57$ times better. 
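Numerically, once a final density matrix is available, the generalized probabilities above are simple sums over the coincidence basis. The following Python sketch (our own illustration; the flattened index $k = 3(i-1) + (j-1)$ for $\left| i_{a}, \; j_{b} \right\rangle$ is an assumption of ours) reads $P_{ns}$ and $P_{s}$ off a $9\times 9$ density matrix:

```python
import numpy as np

def win_probabilities(rho_f):
    """P_ns and P_s from a 9x9 density matrix written in the basis
    |i_a, j_b>, i, j = 1..3, with the (assumed) index k = 3*(i-1) + (j-1)."""
    p_ns = sum(rho_f[3 * i + i, 3 * i + i].real for i in range(3))
    p_s = np.trace(rho_f).real - p_ns
    return p_ns, p_s

# Classical sanity check: uniform, independent coincidence probabilities
# reproduce the familiar 1/3 (not switching) vs 2/3 (switching) split.
rho_uniform = np.eye(9) / 9.0
p_ns, p_s = win_probabilities(rho_uniform)  # (1/3, 2/3)
```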
\begin{figure*}[!t] \begin{centering} \includegraphics[scale=0.4]{Pclass_noise} \par\end{centering} \caption{\label{Pclass_noise} Probabilities of Alice winning the semi-classical Monty Hall game by switching (continuous line) and by not switching (dashed line), as a function of the parameter $p$ related to the fidelity of the quantum channel.} \end{figure*} \section{Results} In this section we analyze the expected payoff of Alice (the player) from a frequentist perspective. Namely, if the Monty Hall experimental set-up is played multiple times, does Alice have a better chance of winning the game by not switching (betting on a coincidence between detectors A1/B1, A2/B2 and A3/B3) or by switching (betting on a coincidence between detectors A1/B2, A1/B3, A2/B1, A2/B3, A3/B1 and A3/B2)? We answer this question using two different approaches: random and strategy-based. \begin{figure*}[!t] \begin{centering} \includegraphics[scale=0.4]{PeRan_noise} \par\end{centering} \caption{\label{PeRan_noise} Expectation values of the probabilities (random case) of Alice winning by switching (black) and by not switching (gray), as a function of the parameter $p$ related to the fidelity of the quantum channel.} \end{figure*} \subsubsection*{Random game} In the random approach, the parameters of the experiment are considered as random variables with a constant joint probability density function $\rho$.
Angles $\theta_{a1}$, $\theta_{a2}$, $\theta_{b1}$ and $\theta_{b2}$ are considered as independent random variables, while the angles associated with the door-opening operator, $\varphi_{1}$, $\varphi_{2}$ and $\varphi_{3}$, are considered as random variables subject to the two-doors-remained-closed condition \eqref{tdrcc}, which restricts their values to the region \begin{align} 0<\cos{\varphi_{1}}<1,\\ \sin{\varphi_{1}}<\sin{\varphi_{2}}<1,\\ \sin{\varphi_{3}} = \sqrt{\sin^{2}{\varphi_{1}} + \cos^{2}{\varphi_{2}}} \; \; , \end{align} defining the joint probability density function $\rho$ through \begin{equation} \frac{1}{\rho} = \int_{0}^{\frac{\pi}{2}} d\theta_{a1} \int_{0}^{\frac{\pi}{2}} d\theta_{a2} \int_{0}^{\frac{\pi}{2}} d\theta_{b1} \int_{0}^{\frac{\pi}{2}} d\theta_{b2} \int_{0}^{\frac{\pi}{2}} d\varphi_{1} \int_{\varphi_{1}}^{\frac{\pi}{2}} d\varphi_{2}, \end{equation} which leads to \begin{equation} \rho = \frac{128}{\pi^6}. \end{equation} The probabilities in equations \eqref{eq:2}, \eqref{eq:3}, \eqref{eq:ens} and \eqref{eq:es} are then functions of these random variables. We use their expectation values with respect to the probability density function $\rho$ as the expected payoff of Alice in each case. When entanglement is not considered, as in the state described in eq. \eqref{nen}, the expectation values of the probability of winning by not switching and by switching are respectively \begin{align} \left\langle P_{ns} \right\rangle_{ran} & \approx 0.3664, \label{rns} \\ \left\langle P_{s} \right\rangle_{ran} & \approx 0.6336. \label{rs} \end{align} Analogously, when entanglement is considered, as in the state described in eq. \eqref{entan}, the expectation values of the probability of winning by not switching and by switching are respectively \begin{align} \left\langle P_{e,ns} \right\rangle_{ran} & \approx 0.5189, \label{rens} \\ \left\langle P_{e,s} \right\rangle_{ran} & \approx 0.4811.
\label{res} \end{align} When noise is considered, the results obtained without entanglement (expressions \eqref{rns} and \eqref{rs}) remain the same, a somewhat expected result, since we only take the average of all possible parameters without any particular quantum correlation in the system. Nevertheless, the results obtained with entanglement (expressions \eqref{rens} and \eqref{res}) do change as functions of the parameter $p$ related to the fidelity of the channel. Figure \ref{PeRan_noise} shows the effect of noise on the average results of the game when entanglement is considered. \begin{figure*}[!t] \begin{centering} \includegraphics[scale=0.4]{Pe_noise} \par\end{centering} \caption{\label{Pe_noise} Maximum (black) and minimum (gray) of the expectation value of the probability of Alice winning by switching when entanglement is considered, as a function of the parameter $p$ related to the fidelity of the quantum channel.} \end{figure*} \subsubsection*{Strategy-based game} In the strategy-based approach, the angles $\theta_{a1}$, $\theta_{a2}$, $\theta_{b1}$ and $\theta_{b2}$ are also considered as independent (in analogy with the classical game) random variables. However, the door-opening parameters, $\varphi_{1}$, $\varphi_{2}$ and $\varphi_{3}$, are left free to be tuned in favor of a certain strategy by Bob (the host). In this approach, the joint probability density function is just \begin{equation} \varrho = \frac{16}{\pi^4}, \end{equation} making the expectation values of the probabilities in equations \eqref{eq:2}, \eqref{eq:3}, \eqref{eq:ens} and \eqref{eq:es} functions of the door-opening parameters $\varphi_{1}$ and $\varphi_{2}$. Figure \ref{fig:2} shows plots of the expectation value of the probability of Alice winning by switching as a function of $\varphi_{1}$ and $\varphi_{2}$, for both the entangled $\left\langle P_{e,s}\right\rangle $ and non-entangled cases $\left\langle P_{s}\right\rangle $.
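These strategy-based averages can be estimated by Monte Carlo sampling of the rotator angles. The following Python sketch (our own illustration; it assumes the product-state reading $\gamma_{ij} = a_i \beta_j$ of the non-entangled probabilities) estimates $\left\langle P_{s}\right\rangle$ for fixed door-opening angles and reproduces, for instance, the value $\left\langle P_{s}\right\rangle = 0.75$ at $\varphi_{1} = 0$, $\varphi_{2} = \pi/2$ reported in table \ref{tb:2}:

```python
import numpy as np

def mean_Ps(phi1, phi2, N=200_000, seed=0):
    """Monte Carlo estimate of <P_s> for the non-entangled, strategy-based
    game: rotator angles uniform on (0, pi/2), door-opening angles fixed.
    Amplitudes and renormalization follow the expressions in the text."""
    rng = np.random.default_rng(seed)
    ta1, ta2, tb1, tb2 = rng.uniform(0, np.pi / 2, size=(4, N))
    a = np.stack([-np.sin(ta1) * np.cos(ta2),
                  -np.sin(ta1) * np.sin(ta2),
                   np.cos(ta1)])
    b = np.stack([np.cos(tb1) * np.cos(tb2),
                  np.cos(tb1) * np.sin(tb2),
                  np.sin(tb1)])
    # Door-opening attenuations, with sin(phi3) fixed by the
    # two-doors-remained-closed condition.
    att = np.array([np.cos(phi1), np.sin(phi2),
                    np.sqrt(np.sin(phi1) ** 2 + np.cos(phi2) ** 2)])
    bp = b * att[:, None]                    # attenuated amplitudes b'_i
    beta = bp / np.linalg.norm(bp, axis=0)   # renormalized amplitudes beta_i
    # Product state: gamma_ij = a_i * beta_j, so P_ns = sum_i (a_i beta_i)^2.
    return np.mean(1.0 - np.sum((a * beta) ** 2, axis=0))

# Bob's most helpful strategy: phi1 = 0, phi2 = pi/2 gives <P_s> close to 0.75.
ps_max = mean_Ps(0.0, np.pi / 2)
```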
Table \ref{tb:2} shows the approximate maximum and minimum values of $\left\langle P_{s}\right\rangle$, $\left\langle P_{ns}\right\rangle$, $\left\langle P_{e,s}\right\rangle $ and $\left\langle P_{e,ns}\right\rangle $ as well as the approximate corresponding values of $\varphi_{1}$ and $\varphi_{2}$ where that maximum or minimum is reached. \begin{figure*}[!t] \begin{centering} \includegraphics[scale=0.34]{Ps} \qquad \quad \includegraphics[scale=0.34]{Pes} \par\end{centering} \caption{\label{fig:2} Expectation value of the probability of Alice winning by switching, as a function of the door-opening parameters $\varphi_1$ and $\varphi_2$. \textbf{Left:} Non-entangled case. \textbf{Right:} Entangled case.} \end{figure*} \begin{figure*}[!t] \begin{centering} \includegraphics[scale=0.35]{Pabs} \qquad \quad \includegraphics[scale=0.35]{Peabs} \par\end{centering} \caption{\label{fig:4} Absolute value of the difference between the expectation values of the probabilities of Alice winning by not switching and by switching, as a function of the door-opening parameters $\varphi_1$ and $\varphi_2$. \textbf{Left:} Non-entangled case.
\textbf{Right:} Entangled case.} \end{figure*} \begin{table}[!h] \resizebox{0.3\textwidth}{!}{% \begin{tabular}{| c || c | c | c || c | c | c |} \hline & $\varphi_{1}$ & $\varphi_{2}$ & min & $\varphi_{1}$ & $\varphi_{2}$ & max \\ \hline \hline $\left\langle P_{s}\right\rangle$ & $\frac{\pi}{2}$ & $\frac{\pi}{2}$ & $0.5908$ & $0$ & $\frac{\pi}{2}$ & $0.75$ \\ \hline $\left\langle P_{ns}\right\rangle$ & $0$ & $\frac{\pi}{2}$ & $0.25$ & $0$ & $0$ & $0.4092$ \\ \hline $\left\langle P_{e,s}\right\rangle$ & $\frac{\pi}{2}$ & $\frac{\pi}{2}$ & $0.4003$ & $0$ & $\frac{\pi}{2}$ & $0.6487$ \\ \hline $\left\langle P_{e,ns}\right\rangle$ & $0$ & $\frac{\pi}{2}$ & $0.3513$ & $0$ & $0$ & $0.5997$ \\ \hline \end{tabular} } \caption{Numerically obtained maximum and minimum values of $\left\langle P_{s}\right\rangle$, $\left\langle P_{ns}\right\rangle$, $\left\langle P_{e,s}\right\rangle$ and $\left\langle P_{e,ns}\right\rangle$ along with the corresponding $\varphi_{1}$ and $\varphi_{2}$ where that maximum or minimum is reached. \label{tb:2}} \end{table} \begin{table}[!h] \resizebox{0.3\textwidth}{!}{% \begin{tabular}{ | c || c | c | c || c | c | c |} \hline & $\varphi_{1}$ & $\varphi_{2}$ & min & $\varphi_{1}$ & $\varphi_{2}$ & max \\ \hline \hline $P_{abs}$ & $\frac{\pi}{2}$ & $\frac{\pi}{2}$ & $0.1817$ & $0$ & $\frac{\pi}{2}$ & $0.5$ \\ \hline $P_{e,abs}$ & $\frac{\pi}{20}$ & $\frac{\pi}{4}$ & $0.0001$ & $0$ & $\frac{\pi}{2}$ & $0.2973$ \\ \hline \end{tabular} } \caption{Numerically obtained maximum and minimum values of $P_{abs}$ and $P_{e,abs}$ along with the corresponding $\varphi_{1}$ and $\varphi_{2}$ where that maximum or minimum is reached. \label{tb:3}} \end{table} When noise is considered, the maximum and minimum values in table \ref{tb:2} remain the same in the case when no entanglement is present. However, when entanglement is present, these values change as functions of the parameter $p$ related to the fidelity of the channel. 
Figure \ref{Pe_noise} shows the dependence of the maximum and minimum values of $\left\langle P_{e,s}\right\rangle$ on the parameter $p$. In the classical Monty Hall problem, the host, by opening one of the empty boxes, helps the player by creating an imbalance between the probabilities of winning by switching and by not switching, allowing her to make a rational decision. Motivated by this, we define the absolute value of the difference between $\left\langle P_{ns}\right\rangle$ and $\left\langle P_{s}\right\rangle$ for the non-entangled case as \begin{equation} P_{abs} = \left| \left\langle P_{ns}\right\rangle - \left\langle P_{s}\right\rangle \right|. \end{equation} Analogously, we also define the absolute value of the difference between $\left\langle P_{e,ns}\right\rangle$ and $\left\langle P_{e,s}\right\rangle$ for the entangled case as \begin{equation} P_{e,abs} = \left| \left\langle P_{e,ns}\right\rangle - \left\langle P_{e,s}\right\rangle \right|. \end{equation} Figure \ref{fig:4} shows plots of $P_{abs}$ and $P_{e,abs}$ as functions of $\varphi_{1}$ and $\varphi_{2}$, while table \ref{tb:3} shows their approximate maximum and minimum values as well as the approximate corresponding $\varphi_{1}$ and $\varphi_{2}$ where that maximum or minimum is reached. Notice from this table that it is Bob who can use entanglement to his advantage, either to help or to hinder Alice. \section{Discussion and Conclusions} In the random approach, when no entanglement is considered, the expectation values of the probabilities of Alice (the player) winning by not switching and by switching, given by equations \eqref{rns} and \eqref{rs} respectively, differ from the classical probabilities by just $0.033$. We conclude that, on average, Alice has a better chance of winning by switching (approximately $1.73$ times better), a slightly worse result than in the classical case.
When entanglement is considered, as in equations \eqref{rens} and \eqref{res}, the results show the opposite conclusion. In this case, Alice has a better chance of winning by not switching, but just approximately $1.08$ times better, meaning that this kind of correlation between the prize's location and Alice's initial choice is actually bad for Alice, not allowing her to make a switching choice as meaningful as in the classical case or the quantum non-entangled case. When Alice and Bob wish to play the random game remotely, noise affects the initial state of the system, modifying only the results where entanglement is considered (figure \ref{PeRan_noise}). We found that, for low values of noise, the switching and not-switching gains remain relatively balanced. However, as noise increases, the difference between the gains of the two choices grows as well, making the switching decision approximately $2.17$ times better than the not-switching one in the worst scenario. This means that, when entanglement is considered, a strongly noisy channel plays in favor of Alice, allowing her to make a rational choice. In the strategy-based approach, Bob (the host) can freely choose the door-opening parameters, allowing him to increase or decrease (on average) the chances of Alice winning by switching and by not switching. From the results presented in tables \ref{tb:2} and \ref{tb:3}, when entanglement is not considered, we notice that Bob can increase the imbalance between the switching and not-switching cases to a maximum of $0.5$, with $\left\langle P_{s}\right\rangle = 0.75$ and $\left\langle P_{ns}\right\rangle = 0.25$, making the switching decision three times better than the not-switching one; this is, on average, the best possible strategy if Bob wishes to help Alice win the prize.
When entanglement is considered and if Bob again wants to help Alice, he can increase the imbalance between the switching and not-switching cases to a maximum of $0.2973$, with $\left\langle P_{e,s}\right\rangle = 0.6487$ and $\left\langle P_{e,ns}\right\rangle = 0.3513$, making the switching decision approximately $1.85$ times better than the not-switching one, a result very similar to the classical case. As in the random approach, in the strategy-based one the presence of noise only affects the results where entanglement is considered. In figure \ref{Pe_noise} we see that the switching option becomes dominant for strongly noisy channels, increasing the maximum of the expectation value of the probability to approximately $0.8$. In this scenario, even the minimum value increases to approximately $0.65$, meaning that, as in the random approach, noise plays in favor of Alice. If Bob does not want Alice to win the prize, his strategy would be to minimize the imbalance between the switching and not-switching cases, decreasing the advantage that she gets from it. Results in tables \ref{tb:2} and \ref{tb:3} show that, when entanglement is not considered, Bob can decrease this imbalance to a minimum of $0.1817$ with $\left\langle P_{s}\right\rangle = 0.5908$ and $\left\langle P_{ns}\right\rangle = 0.4092$, making the switching decision $1.4438$ times better than the not-switching one, still allowing Alice to make a rational choice. However, when entanglement is considered, Bob can decrease the imbalance to a minimum of $0$, as both $\left\langle P_{e,s}\right\rangle$ and $\left\langle P_{e,ns}\right\rangle$ have the same value of $0.5$ at approximately $\varphi_{1}=\frac{\pi}{20}$ and $\varphi_{2}=\frac{\pi}{4}$, leaving Alice with no other option but to randomly decide whether she bets on switching or not switching. The proposed experimental set-up, as it is, only allows one to model the classical game with three boxes, two players and one empty door to be opened.
To extend the set-up to model a more general version of the classical Monty Hall problem, as the one discussed in the introduction, there are basically two options: to include more detectors, as these represent the boxes in the game, along with polarization rotators to be able to change the degree of confidence (probability amplitude) in each new box, and to change the two-doors-remained-closed condition in order to allow for more doors to be opened. The number of players, however, must remain two, since the photons produced by the BBO type II crystal come in pairs. In conclusion, we have presented a quantum version of the Monty Hall problem, based on a quantum-optical set-up that is experimentally feasible. This set-up allows two people to quickly verify the counter-intuitive result of the famous Monty Hall problem and some statistical results beyond the classical game, adding pedagogical value to the proposed scheme. We have also discussed some results of the game when the parties play through a noisy quantum channel. \begin{acknowledgments} L. F. Quezada thanks C3-UNAM for financial support. A. Mart\'in-Ruiz acknowledges support from DGAPA-UNAM under project No. IA101320. E. Nahmad-Achar acknowledges partial support from DGAPA-UNAM under project No. IN100120. \end{acknowledgments} \FloatBarrier
\section{Introduction} The chemostat model provides the foundation for much of current research in bio-engineering, ecology, and population biology \cite{DLLS03,DLS03, EPR01, GR05, SW95}. In the engineering literature, the chemostat is known as the continuously stirred tank reactor. It has been used for modeling the dynamics of interacting organisms in waste water treatment plants, lakes and oceans. In its basic setting, it describes the dynamics of species competing for one or more limiting nutrients. If there are $n$ species with concentrations $x_i$ for $i=1,\dots, n$ and just one limiting nutrient with concentration $S$ and dilution rate $D>0$, then the model takes the form \begin{equation} \label{model-full} \left\{\begin{array}{rcl} {\dot S}&=&D(S_{in}-S)-\displaystyle\sum_{i=1}^n\mu_i(S)x_i/\gamma_i\\ {\dot x_i}&=&x_i(\mu_i(S)-D),\;\; i=1,\dots, n \end{array}\right. \end{equation} \smallskip\smallskip \noindent where $\mu_i$ denotes the per capita growth rate of species $i$ and $\dot p$ is the time derivative of any variable $p$. (In much of the paper, we simplify our notation by omitting the arguments of the functions. For instance, when no confusion can arise from the context, we denote $S(t)$ simply by $S$.) The functions $\mu_i$ depend only on the nutrient concentration, and are zero at zero, continuously differentiable and strictly increasing, although non-monotone functions have been the subject of research as well. The conversion of nutrient into new biomass for each species $i$ happens with a certain yield $\gamma_i\in(0,1)$ and the natural control variables in this model are the input nutrient concentration $S_{in}$ and the dilution rate $D$. The latter variable is defined as the ratio of the volumetric flow rate $F$ (with units of volume over time) to the reactor volume $V_r$, which is kept constant. Therefore it is proportional to the speed of the pump that supplies the reactor with fresh medium containing the nutrient.
The equations (\ref{model-full}) are then straightforwardly obtained from writing the mass-balance equations for the total amounts of the nutrient and each of the species, assuming the reactor content is well-mixed. The full model (\ref{model-full}) is illustrated in Figure \ref{chem}. \begin{figure}[h] \centerline{\scalebox{.77}{\input{chemostat.epic}}} \caption{Chemostat} \label{chem} \end{figure} In the present work, we consider the case where there is just one species with concentration $x$, in which case the equations (\ref{model-full}) take the form \begin{equation} \label{model-reduced} \left\{\renewcommand{\arraystretch}{1.25}\begin{array}{rcl} {\dot S}&=&D(S_{in}-S)-\mu(S)x/\gamma\\ {\dot x}&=&x(\mu(S)-D) \end{array}\right. \end{equation} \noindent(but see Theorem \ref{iss-track} below for results on chemostats with disturbances, and Section \ref{several} for models involving several species). We assume $S_{in}$ is a given positive constant, while the per capita growth rate $\mu$ is a Monod function (which is also known as a Michaelis-Menten function) taking the form \begin{equation}\label{muchoice} \mu(S)=\dfrac{mS}{a+S}, \end{equation} \noindent for certain positive constants $m$ and $a$ that we specify later. The dilution rate is an appropriate continuous positive periodic function we also specify below. Since $\dot S\ge 0$ when $S> 0$ is near zero, one can readily check that (\ref{model-reduced}) leaves the domain of interest \[\mathcal{X}:=(0,\infty)\times(0,\infty)\] positively invariant (i.e., trajectories for (\ref{model-reduced}) starting in $\mathcal{X}$ remain in $\mathcal{X}$ for all future times); see Theorem \ref{iss-track} for a more general invariance result for perturbed chemostats. Since we are taking $S_{in}$ to be a fixed positive constant, we rescale the variables to reduce the number of parameters. 
Using the change of variables \[ {\bar S}=\dfrac{S}{S_{in}}, \; \; {\bar x}=\dfrac{x}{S_{in}\gamma}, \; \; {\bar \mu}({\bar S})=\mu(S_{in}{\bar S}) \] \noindent and dropping bars, we eliminate $S_{in}$ and $\gamma$ and so obtain the new dynamics \begin{equation} \label{model} \left\{\renewcommand{\arraystretch}{1.35}\begin{array}{rcl} {\dot S}&=&D(1-S)-\mu(S)x\\ {\dot x}&=&x(\mu(S)-D)\end{array}\right. \end{equation} again evolving on the state space $\mathcal{X}=(0,\infty)^2$. \textcolor{black}{Motivated by the realistic ecological situation where the species concentrations are oscillating, we solve the following biological problem:\begin{itemize} \item[]\textcolor{blue}{{\bf Biological Problem B1:}} For a prescribed oscillatory behavior for the species concentration and substrate level given by a time-periodic pair $(S_r(t), x_r(t))$, design a dilution rate function $D(t)$ such that if this choice of $D(t)$ is used in the chemostat (\ref{model}), then all solution pairs $(S(t),x(t))$ for the substrate levels and corresponding species levels obtained from solving (\ref{model}) (i.e. for all possible initial values) closely approximate $(S_r(t),x_r(t))$ for large times $t$. \end{itemize}See also Problem (SP) in Section \ref{define} below for a precise mathematical statement of the preceding problem. In the language of control theory, solving Biological Problem B1 means we will prove the stability of a suitable periodic reference signal for the species concentration $t\mapsto x_r(t)$ in (\ref{model}) using an appropriate time-periodic dilution rate $D(t)$; see \cite{K02} for the fundamental ideas from control theory we need in the sequel.} \textcolor{black}{Since $D(t)$ is proportional to the speed of the pump which supplies the chemostat with medium containing nutrient, implementation of the prescribed oscillatory behavior requires that we control the pump in a very precise way.
In practice, this control process is prone to errors, and the actual pump speed will be subject to small fluctuations which we will model by replacing $D(t)$ by $D(t)+u_1(t)$ in the chemostat equations, where $u_1(t)$ models the error. It is therefore of interest to study the effect of these small fluctuations on the periodic behavior. Preferably, this effect will be small, so that the resulting behavior is not too different from the prescribed periodic behavior. We will show that this is indeed the case by quantifying how small the deviations are, relying on the well-known control-theoretic notion of Input-to-State Stability or ISS; see \cite{S00, S06} and Remark \ref{aboutISS} for details about ISS. Summarizing this a bit more formally, we will solve the following biological problem (which we state in a more precise mathematical way in Section \ref{thm}):} \textcolor{black}{\begin{itemize} \item[]\textcolor{blue}{{\bf Biological Problem B2:}} For the prescribed oscillatory behavior $(S_r(t),x_r(t))$ and dilution rate $D(t)$ obtained in Biological Problem B1, quantify how the substrate and species levels $(S(t),x(t))$ in the chemostat model (\ref{model}) are affected by unexpected changes in the dilution rate, and show that the convergence of $(S(t),x(t))$ to the oscillatory behavior $(S_r(t),x_r(t))$ is robust to small changes in $D(t)$.\end{itemize} Our solution to Biological Problem B2 will be a special case of our more general input-to-state stability result for the chemostat tracking equations, assuming the dilution rate and initial concentration are both perturbed by noise terms of small enough magnitude}. In the next section, we briefly review the literature, focusing on what makes our approach different. In Section \ref{track}, we fix the reference signal we wish to track. In Section \ref{define}, we precisely formulate the definitions and the stability problem we are solving.
We state our main stability theorem in Section \ref{thm} and we discuss the significance of our theorem in Section \ref{discuss}. We prove our stability result in Section \ref{main}. In Section \ref{several}, we show that the stability is maintained when there are additional species that are being driven to extinction. We validate our results in Section \ref{simulations} using a numerical example. We conclude in Section \ref{concl} by summarizing our findings. \section{Review of the Literature and Comparison with Our Results} \label{review} The behavior of the system $(\ref{model-full})$ is well understood when $S_{in}$ and $D$ are positive constants, as well as cases where $n=2$ and either of these control variables is held fixed while the other is periodically time-varying. See \cite{HS83, S81} for periodic variation of $S_{in}$ and \cite{BHW85} for periodic variation of $D$ and the general reference \cite{SW95} on chemostats. When both $S_{in}$ and $D$ are constants, the so-called ``competitive exclusion principle'' holds, meaning that at most one species survives. Mathematically this translates into the statement that system $(\ref{model-full})$ has a steady state with at most one nonzero species concentration, which attracts almost all solutions; see \cite{SW95}. This result has triggered much research to explain the discrepancy between the (theoretical) competitive exclusion principle and the observation that in real ecological systems, many species coexist. The results on the periodically-varying chemostat mentioned above should be seen as attempts to explain this paradox. They involve chemostats with $n=2$ species, and their purpose is to show that an appropriate periodic forcing for either $S_{in}(t)$ or $D(t)$ can make the species coexist, usually in the form of a (positive) periodic solution. Few results on coexistence of $n>2$ species are available. 
An exception is \cite{RR90}, where a periodic function $S_{in}(t)$ is designed (with $D$ kept fixed) so that the resulting system has a (positive) periodic solution with an arbitrary number of coexisting periodically varying species. The stability properties of this solution are not known. More recent work has explored the use of state-dependent but time invariant feedback control of the dilution rate $D$ to generate coexistence; see \cite{DLLS03,DLS03} for monotone growth rate functions in the $n=2$ species case, and \cite{DLP05} for the $n=3$ species case. The paper \cite{GR05} considers feedback control when the growth rate functions are non-monotone. In \cite{GMR05}, \cite{LMR1}, and \cite{MLR}, coexistence is proved for models taking into account intra-specific competition. In these models, the usual growth functions $\mu_i(S)$ are replaced by functions $\mu_i(S,x_i)$ which are decreasing with respect to the variable $x_i$. All the results discussed so far apply to a more general model than $(\ref{model})$ involving $n>1$ species. This is because the main purpose of these papers is to investigate environmental conditions under which the competitive exclusion principle fails and several species can coexist. Here we will not consider any coexistence problems. Our main objective is to provide a proof of stability of a periodic solution based on a Lyapunov-type analysis and to investigate the robustness properties of the periodic solution with respect to perturbations. As an illustration we show that the stability of the periodic solution is robust with respect to additional species that are being driven to extinction, or to small disturbances on the initial nutrient concentration or dilution rate. These features set our work apart from the known results on periodically forced chemostat models which do not rely on the construction of a Lyapunov function. 
Proving stability in the chemostat usually relies on reduction and monotonicity arguments, and not so often on Lyapunov functions (but see for instance Theorem $4.1$ in \cite{SW95} which uses a Lyapunov function introduced in \cite{H78} and more recently \cite{GMR05}). Finally we point out that closely related to our results is \cite{EPR01} where a single-species chemostat with a continuous and bounded (but otherwise arbitrary) function $S_{in}(t)$ and constant dilution rate is investigated; there it is shown that two positive solutions converge to each other. However, the proof is not based on a Lyapunov function. The advantage of having a Lyapunov function is that it can be used to {\em quantify} the effect of additional noise terms on the stability of the unperturbed dynamics. In fact, to our knowledge, our work provides the first input-to-state stability analysis of chemostats whose dilution rates and initial concentrations are perturbed by small noise; see Remark \ref{aboutISS} for a discussion on the importance of input-to-state stability in control theory and engineering applications. \section{Choosing a Reference Trajectory} \label{track} We first choose the dilution rate $D=D(t)$ that will give a reference trajectory $(S_r(t),x_r(t))$ for (\ref{model}) which we show to be stable. We assume a growth rate with constants $m,a>0$ as \textcolor{black}{follows, in which the constants $a$ and $m$ and the variable $S$ are all dimensionless by the change of coordinates used to obtain the normalized equations (\ref{model}) so the units do not matter:} \begin{equation} \label{mbound} \mu(S)=\frac{mS}{a+S},\; \; {\rm where}\; \; m>4a+1\; . \end{equation} \textcolor{black}{For the sake of computational simplicity, we choose a sinusoidal reference trajectory but the extension to more general reference trajectories can be handled by similar methods; see Remark \ref{means} for details. 
Simple calculations show that (\ref{model}) admits the trajectory} \begin{equation} \label{reftraj} (S_r(t),x_r(t)) = \left(\frac{1}{2}-\frac{1}{4}\cos(t),\frac{1}{2}+\frac{1}{4}\cos(t)\right) \end{equation} which we refer to as a {\em reference trajectory} when we choose \begin{equation} \label{chli} D(t)\; \; =\; \; -\frac{\dot x_r(t)}{x_r(t)}+\mu(1-x_r(t))\; \; =\; \; \frac{\sin(t)}{2+\cos(t)}+\frac{m(2-\cos(t))}{4a+2-\cos(t)}. \end{equation} Condition (\ref{mbound}) then provides constants $\bar D, D_o> 0$ such that \[D_o\; \le\; D(t)\; \le\; \bar D\] for all $t \ge 0$: \begin{equation} \label{aer1} \bar D = 1 + \frac{3m}{4a + 3} \; \; {\rm and}\; \; D_o = \frac{m}{4a + 1}-1\; . \end{equation} See Figure \ref{refff} for the graph of $D(t)$ for $m=10$ and $a=\frac{1}{2}$. \begin{figure}[h] \begin{center} \scalebox{.6}{\includegraphics{dilution-new.eps}} \end{center} \caption{Dilution Rate $D(t)$ for the Chemostat from (\ref{chli}) \label{refff}} \end{figure} \section{Definitions and Statement of Stability Problem} \label{define} We wish to solve the following stability problem \textcolor{black}{which is merely a restatement of Biological Problem B1 above in precise control theoretic terms}: \begin{itemize} \item[]\begin{itemize} \item[(SP) \ ] Given any trajectory $(S,x): [0,\infty) \to (0,\infty)^2$ for (\ref{model}) corresponding to the dilution rate $D(t)$ from (\ref{chli}) and $\mu$ as in (\ref{mbound}) (i.e. for any initial value for $(S,x)$), show that the corresponding deviation $(\tilde S(t), \tilde x(t)):=(S(t) - S_r(t), x(t)-x_r(t))$ of $(S,x)$ from the reference trajectory (\ref{reftraj}) asymptotically approaches $(0,0)$ as $t\to +\infty$. 
\end{itemize}\end{itemize} We will solve (SP) by proving a far more general tracking result for a single species chemostat acted on by a disturbance vector $u=(u_1,u_2):[0,\infty)\to \mathbb{R}^2$ as follows: \begin{equation} \label{mod1} \left\{ \begin{array}{rcl} \dot{S}(t) & = & [D(t) + u_1(t)](1+u_2(t)-S(t))-\mu(S(t))x(t)\\[.5em] \dot x(t)&=& x(t)[\mu(S(t))-D(t)-u_1(t)] \end{array} \right.. \end{equation} We will quantify the extent to which the reference trajectory (\ref{reftraj}) tracks the trajectories of (\ref{mod1}) \textcolor{black}{which will solve Biological Problem B2 from the introduction}. To this end, we need to introduce a priori bounds on $u_1$ and $u_2$; see Remark \ref{bound-u}. Our main theoretical tool will be the input-to-state stability (ISS) property \cite{S89} which is one of the central paradigms of current research in nonlinear stability analysis; see Remark \ref{aboutISS}. The relevant definitions are as follows. We let ${\mathcal K}_\infty$ denote the set of all continuous functions $\gamma:[0,\infty)\to[0,\infty)$ for which (i) $\gamma(0)=0$ and (ii) $\gamma$ is strictly increasing and unbounded. We let $\mathcal{KL}$ denote the class of all continuous functions $\beta:[0,\infty)\times [0,\infty)\to[0,\infty)$ for which (I) $\beta(\cdot, t)\in {\mathcal K}_\infty$ for each $t\ge 0$, (II) $\beta(s,\cdot)$ is non-increasing for each $s\ge 0$, and (III) $\beta(s,t)\to 0$ as $t\to +\infty$ for each $s\ge 0$. Consider a general control-affine dynamic \begin{equation} \label{gen} \dot y=F(y,t)+G(y,t)u,\; \; y\in \mathcal{O},\; u\in \mathbf{U} \end{equation} evolving on a given subset $\mathcal{O}\subseteq\mathbb{R}^n$ where $\mathbf U$ is a given subset of Euclidean space. (Later we specialize to dynamics for the chemostat.) 
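Before continuing, we note that the computations of Section \ref{track} are easy to spot-check numerically. The following sketch is an illustration only: it fixes the values $m=10$ and $a=1/2$ used for Figure \ref{refff} and verifies on a time grid that (\ref{reftraj}) solves (\ref{model}) under the dilution rate (\ref{chli}), and that $D(t)$ obeys the bounds (\ref{aer1}).

```python
import numpy as np

m, a = 10.0, 0.5                  # satisfies m > 4a + 1, as for Figure 2
mu   = lambda S: m * S / (a + S)

S_r  = lambda t: 0.5 - 0.25 * np.cos(t)      # reference trajectory (reftraj)
x_r  = lambda t: 0.5 + 0.25 * np.cos(t)
dS_r = lambda t:  0.25 * np.sin(t)           # exact time derivatives
dx_r = lambda t: -0.25 * np.sin(t)
D    = lambda t: (np.sin(t) / (2 + np.cos(t))
                  + m * (2 - np.cos(t)) / (4*a + 2 - np.cos(t)))   # (chli)

t = np.linspace(0.0, 2*np.pi, 2001)

# (S_r, x_r) satisfies the normalized chemostat equations (model):
resid_S = dS_r(t) - (D(t) * (1 - S_r(t)) - mu(S_r(t)) * x_r(t))
resid_x = dx_r(t) - x_r(t) * (mu(S_r(t)) - D(t))
assert np.max(np.abs(resid_S)) < 1e-12
assert np.max(np.abs(resid_x)) < 1e-12

# D(t) stays between the bounds of (aer1):
D_bar = 1 + 3*m / (4*a + 3)       # = 7 for these values
D_o   = m / (4*a + 1) - 1         # = 7/3 for these values
assert np.all((D_o <= D(t)) & (D(t) <= D_bar))
```

This is, of course, a numerical check for one parameter choice, not a substitute for the bounds established analytically.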
For each $t_o\ge 0$ and $y_o\in \mathcal{O}$, let $y(t; t_o, y_o, \alpha)$ denote the solution of (\ref{gen}) satisfying $y(t_o)=y_o$ for a given control function $\alpha\in \mathcal{U}:=\{{\rm measurable\ essentially\ bounded\ }\alpha:[0,\infty)\to {\mathbf U}\}$; i.e. the solution of the initial value problem \[\dot y(t)=F(y(t),t)+G(y(t),t)\alpha(t) \; {\rm a.e.}\; t\ge t_o\, ,\; y(t_o)=y_o\; .\] We always assume that such solutions are uniquely defined on all of $[t_o,\infty)$ (i.e., (\ref{gen}) is forward complete and $\mathcal{O}$ is positively invariant for this system) and that there exists $\Theta\in \mathcal{K}_\infty$ such that $|F(y,t)|+|G(y,t)|\le \Theta(|y|)$ everywhere, where $|\cdot |$ is the usual Euclidean norm. For example, \[[t_o,\infty)\ni t\mapsto (S(t; t_o,(S_o,x_o),\alpha),x(t; t_o, (S_o,x_o),\alpha))\] is the solution of (\ref{mod1}) for the disturbance $u=(u_1, u_2)=\alpha(t)$ satisfying the initial condition $(S(t_o),x(t_o))=(S_o,x_o)$. \begin{definition}\label{issdef} We call (\ref{gen}) {\em input-to-state stable (ISS)} provided there exist $\beta\in \mathcal{KL}$ and $\gamma\in \mathcal{K}_\infty$ such that \begin{equation} \label{ISSest} |y(t; t_o, y_o, \alpha)|\; \; \le\; \; \beta(|y_o|, t-t_o)+\gamma(|\alpha|_\infty) \end{equation} for all $t\ge t_o$, $t_o\ge 0$, $y_o\in \mathcal{O}$, and $\alpha\in \mathcal{U}$. \end{definition} Here $|\alpha|_\infty$ denotes the essential supremum of $\alpha\in \mathcal{U}$. By causality, the ISS condition (\ref{ISSest}) is unchanged if $|\alpha|_\infty$ is replaced by the essential supremum $|\alpha|_{[t_o,t]}$ of $\alpha$ restricted to $[t_o,t]$. In particular, (\ref{ISSest}) says $y(t; t_o, y_o, \mathcal{Z})\to 0$ as $t\to +\infty$ for all initial values $y_o$ and initial times $t_o$, where $\mathcal{Z}$ is the zero disturbance $\alpha(t)\equiv 0$. \begin{remark} \label{aboutISS} The theory of ISS systems originated in \cite{S89}. 
ISS theory provides the foundation for much current research in robustness analysis and controller design for nonlinear systems, and has also been used extensively in engineering and other applications \cite{A04, AISW04, C05, MRS04, S89, S06}. The ISS approach can be viewed as a unification of the operator approach of Zames (e.g. \cite{Z66a, Z66b}) and the Lyapunov state space approach. The operator approach involves studying the mapping $(t_o, y_o, \alpha)\mapsto y(\cdot ; t_o, y_o,\alpha)$ of initial data and control functions into appropriate spaces of trajectories, and it has the advantages that it allows the use of Hilbert or Banach space techniques to generalize many properties of linear systems to nonlinear dynamics. By contrast, the state space approach is well suited to nonlinear dynamics and lends itself to the use of topological or geometric ideas. The ISS framework has the advantages of both of these approaches including an equivalent characterization in terms of the existence of suitable Lyapunov-like functions; see Remark \ref{pba} below. For a comprehensive survey on many recent advances in ISS theory including its extension to systems with outputs, see \cite{S06}. 
\end{remark} To specify the bound $\bar u$ on our disturbances $u=(u_1,u_2)$, we use the following constants whose formulas will be justified by the proof of our main stability result: \begin{equation} \label{ck} \begin{array}{l} c=8\left(\frac{1}{2}+\frac{\bar D}{D_o}\right)^2,\; \; \; \kappa=4+ \max\left\{\frac{112m}{4a+1}\, ,\, \frac{16m(4a+3)(a+2)}{a(4a+1)^2}\right\}\; ,\\ C_1=\min\left\{1,\frac{\kappa}{200},\frac{ma}{2(4a+3)(a+2)}\right\} \end{array} \end{equation} \section{Statement of Theorem} \label{thm} {}From now on, we assume the disturbance vector $u=(u_1,u_2)$ in (\ref{mod1}) takes all of its values in a fixed square control set of the form \begin{equation} \label{constraint} \begin{array}{l} \mathbf{U}\; :=\; \{(u_1,u_2)\in \mathbb{R}^2: \, |u_1|\le \bar u, \, |u_2|\le \bar u\} \; \; \text{ where}\\[.5em] 0 < \bar u\; <\; \min\left\{\dfrac{C_1}{\sqrt{ 8(1 + 2c\kappa C_1)}}, \dfrac{D_o}{2}\right\} \end{array} \end{equation} where $c$, $\kappa$, and $C_1$ are in (\ref{ck}) (but see Remark \ref{bound-u} for related results under less stringent conditions on the disturbance values). We will prove the following robustness result: \begin{theorem} \label{iss-track} Choose $D(t)$, $\mu$, and $(S_r,x_r)$ as in (\ref{mbound})-(\ref{chli}). 
Then the corresponding solutions of (\ref{mod1}) satisfy \begin{equation} \label{c1ru1} \renewcommand{\arraystretch}{1.25}\begin{array}{l}\left[\, S_o> 0\; \; \&\; \; x_o>0\; \; \&\; \; t\ge t_o\ge 0\; \; \&\; \; \alpha\in \mathcal{U}\, \right]\\[.25em]\; \; \; \Rightarrow\; \; \; \left[\; S(t; t_0, (S_0,x_0), \alpha)> 0 \; \; \& \; \; x(t; t_0, (S_0,x_0), \alpha)>0\; \right]\; \; .\end{array} \end{equation} Moreover, there exist $\beta\in \mathcal{KL}$ and $\gamma\in \mathcal{K}_\infty$ such that the corresponding transformed error vector \[\renewcommand{\arraystretch}{1.25} \begin{array}{l} y(t;t_o,y_o,\alpha) :=\\[.25em] \left(S(t; t_0, (S_0,x_0), \alpha) - S_r(t), \ln(x(t; t_0, (S_0,x_0), \alpha)) - \ln(x_r(t))\right)\end{array}\] satisfies the ISS estimate (\ref{ISSest}) for all $\alpha\in \mathcal{U}$, $t_0 \geq 0$, $t \geq t_0$, $S_0 > 0$, and $x_0 > 0$, where $y_o=(S_0,x_0)$. \end{theorem} \section{Discussion on Theorem \ref{iss-track}} \label{discuss} Before proving the theorem, we discuss the motivations for its assumptions, and we interpret its conclusions from both the control theoretic and biological viewpoints. \begin{remark} \label{invariance} Condition (\ref{c1ru1}) says $(0,\infty)^2$ is positively invariant for (\ref{mod1}). One may also prove that $[0,\infty)^2$ is positively invariant for (\ref{mod1}), as follows. Suppose the contrary. Fix $t_o\ge 0$, $x_o\ge 0$, $S_o\ge 0$, and $\alpha\in \mathcal{U}$ for which the corresponding trajectory $(S(t),x(t))$ for (\ref{mod1}) satisfying $(S(t_o), x(t_o))=(S_o, x_o)$ exits $[0,\infty)^2$ in finite time. This provides a finite constant $t_1:=\max\{\tilde t\ge t_o: (S(t),x(t))\in [0,\infty)^2 \; \forall t\in [t_o,\tilde t]\}$. Then $S(t_1)=0$, since otherwise $S(t_1)>0$ and $x(t)=0$ for all $t\ge t_1$ and then we could use the continuity of $S$ to contradict the maximality of $t_1$. 
Since $\bar u<{\rm min}\{1, D_o/2\}$, the continuity of $S$ and $x$ and the fact that $S(t_1)=0$ provide a constant $\varepsilon>0$ such that $1+u_2(t)-S(t)\ge (1-\bar u)/2$ and $\mu(S(t))x(t)\le D_o(1-\bar u)/8$ for (almost) all $t\in [t_1, t_1+\varepsilon]$, hence also $\dot S(t)\ge D_o(1-\bar u)/8>0$ for all $t\in [t_1, t_1+\varepsilon]$ (since $D(t)+u_1(t)\ge D_o/2$ everywhere). Hence, $S(t)>S(t_1)=0$ for all $t\in [t_1,t_1+\varepsilon]$. Since $x(t)$ clearly stays in $[0,\infty)$, this contradicts the maximality of $t_1$. The positive invariance of $[0,\infty)^2$ follows. \end{remark} \begin{remark} \label{means} Theorem \ref{iss-track} says that in terms of the error signals $y$, any componentwise positive trajectory of the unperturbed chemostat dynamics (\ref{mod1}) converges to the nominal trajectory (\ref{reftraj}), uniformly with respect to initial conditions. This corresponds to putting $\alpha\equiv 0$ in (\ref{ISSest}). It also provides the additional desirable robustness property that for an arbitrary $\mathbf{U}$-valued control function $\alpha\in \mathcal{U}$, the trajectories of the {\em perturbed} chemostat dynamics (\ref{mod1}) are ``not far'' from (\ref{reftraj}) for large values of time. In other words, they ``almost'' track (\ref{reftraj}) with a small overflow $\gamma(|\alpha|_\infty)$ from the ISS inequality (\ref{ISSest}). Similar results can be shown for general choices of $x_r$ and $D(t)$. For example, we can choose any $x_r(t)$ that admits a constant $\ell>0$ such that \[\max\{\ell,|\dot x_r(t)|\}\; \le\; x_r(t)\; \le\; \frac{3}{4}\] for all $t\ge 0$ and $S_r=1-x_r$. In this case, we take the dilution rate \[D(t)=-\frac{\dot x_r(t)}{x_r(t)}+\mu(1-x_r(t)),\] which is again uniformly bounded above and below by positive constants. The proof of this more general result is similar to the proof of Theorem \ref{iss-track} we give below except with different choices of the constants $c$ and $\kappa$. 
\end{remark} \begin{remark} \label{mfre1} The robustness result \begin{equation} \label{cru1}\renewcommand{\arraystretch}{1.25} \begin{array}{l} |\left(S(t; t_0, (S_0,x_0), \alpha) - S_r(t), \ln(x(t; t_0, (S_0,x_0), \alpha)) - \ln(x_r(t))\right)| \\[.25em]\le \beta(|(S_0,x_0)|,t - t_0) + \gamma(|\alpha|_{[t_0,t]})\end{array} \end{equation} of Theorem \ref{iss-track} differs from the classical ISS condition in the following ways: \begin{enumerate} \item For biological reasons, negative values of the nutrient level $S$ and the species level $x$ do not make physical sense. Hence, only componentwise positive solutions are of interest. Therefore, (\ref{cru1}) is not valid for all $(S_0,x_0) \in \mathbb{R}^2$ but rather only for $(S_0,x_0) \in (0,\infty) \times (0,\infty)$. \item Our condition (\ref{cru1}) provides an estimate on the {\em transformed} error component $\ln(x(t; t_0, (S_0,x_0), \alpha)) - \ln(x_r(t))$ instead of the more standard error $x(t; t_0, (S_0,x_0), \alpha) - x_r(t)$. Our reasons for using the transformed form of the error are as follows. The function $\ln(x)$ goes to $- \infty$ when $x$ goes to zero. This property is relevant from a biological point of view. Indeed, in the study of biological systems, it is important to know whether the concentration of the species stays above a strictly positive constant for all sufficiently large times or whether the concentration admits zero in its omega limit set. In the first case, the species is called {\em persistent}. The persistency property is frequently desirable, and it is essential to know whether it is satisfied. Hence, the function $\ln(x(t; t_0, (S_0,x_0), \alpha)) - \ln(x_r(t))$ has the desirable properties that (a) it goes to $+ \infty$ if $x(t; t_0, (S_0,x_0), \alpha)$ does, (b) it is equal to zero when $x(t; t_0, (S_0,x_0), \alpha)$ equals $x_r(t)$, and (c) it goes to $- \infty$ if $x(t; t_0, (S_0,x_0), \alpha)$ goes to zero.
Therefore, roughly speaking, if the species faces extinction, then it warns us.\end{enumerate} \end{remark} \begin{remark} \label{pba} Our proof of Theorem \ref{iss-track} is based on a Lyapunov type analysis. Recall that a $C^1$ function $V:\mathbb{R}^n\times [0,\infty)\to [0,\infty)$ is called an {\em ISS Lyapunov function (ISS-LF)} for (\ref{gen}) provided there exist $\gamma_1, \gamma_2,\gamma_3, \gamma_4\in \mathcal{K}_\infty$ such that \begin{enumerate} \item $\gamma_1(|y|)\le V(y,t)\le \gamma_2(|y|)$ and \smallskip \item $V_t(y,t)+V_y(y,t)[F(y,t)+G(y,t)u]\le -\gamma_3(|y|)+\gamma_4(|u|)$\end{enumerate} hold for all $y\in \mathcal{O}$, $t\ge 0$, and $u\in \mathbf{U}$. The function $V$ we will construct in the proof of Theorem \ref{iss-track} is not an ISS-LF for the chemostat error dynamics because of the specificities of the state space, which preclude the existence of the necessary functions $\gamma_1,\gamma_2\in \mathcal{K}_\infty$ in Condition 1 above. Hence, we cannot directly apply the result that the existence of an ISS Lyapunov function implies that the system is ISS e.g. \cite[Theorem 1]{ELW00} to prove our theorem. Instead, we prove our Theorem \ref{iss-track} directly from the decay inequality satisfied by the time derivative of $V$ along the trajectories. The proof that the decay inequality implies ISS is very similar to that part of the proof of \cite[Theorem 1]{ELW00}, so we only sketch that part of our proof in the appendix. \end{remark} \begin{remark} \label{bound-u} Our estimate (\ref{cru1}) would not hold if we had instead chosen the full control set $\mathbf{U}=\mathbb{R}^2$. In fact, taking the disturbance $\alpha\equiv (u_1, u_2)=(0,-1)$ and any initial condition $(S(t_o),x(t_o))=(S_0,x_o)\in (0,\infty)^2$ in (\ref{mod1}) would give $S(t; t_0, (S_0,x_0), \alpha) \to 0$ and so also $\ln(x(t; t_0, (S_0,x_0), \alpha))) \to - \infty$ as $t\to +\infty$ (since $|u_1(t)|\le \bar u<D_o/2\le D(t)/2$ almost everywhere). 
Therefore extinction would occur and (\ref{cru1}) would not be satisfied. On the other hand, if our set $\mathbf{U}$ is replaced by $\mathbf{U}^\sharp:=[-\bar u,+\bar u]^2$ for any fixed constant $\bar u\in (0,\min\{1,D_o\})$, then the chemostat error dynamics instead satisfies the less stringent {\em integral} ISS property; see Remark \ref{iISS-rk} below for details. \end{remark} \section{Proof of Theorem \ref{iss-track}}\label{main} The proof of (\ref{c1ru1}) is immediate from the structure of the dynamics (\ref{mod1}) and the fact that $\bar u<1$ (which imply that $\dot S\ge 0$ when $S>0$ is sufficiently small); see Remark \ref{invariance} for a similar argument. It remains to prove the ISS estimate (\ref{cru1}) for suitable functions $\beta\in \mathcal{KL}$ and $\gamma\in \mathcal{K}_\infty$. Throughout the proof, all (in)equalities should be understood to hold globally unless otherwise indicated. Also, we repeatedly use the simple ``(generalized) triangle inequality'' relation \begin{equation} \label{triangle} pq\; \; \le\; \; d p^2+\frac{1}{4d}q^2 \end{equation} for various choices of $p\ge 0$, $q\ge 0$, and $d>0$ that we specify later. Fix $t_o\ge 0$, $S_o>0$, $x_o>0$, and $\alpha\in \mathcal{U}$, and let $[t_o,\infty)\ni t\mapsto (S(t),x(t))$ denote the corresponding solution of (\ref{mod1}) satisfying $(S(t_0),x(t_o))=(S_o,x_o)$. For simplicity, we write $\alpha(t)$ as $(u_1,u_2)$, omitting the time argument as before. We first write the error equation for the variables \begin{equation} \label{impo} (\tilde z, \tilde \xi) = (z - z_r, \xi - \xi_r) \end{equation} where $\xi = \ln x$, $z = S + x$, $z_r(t) = S_r(t) + x_r(t)=1$, and $\xi_r(t) = \ln x_r(t)$. One easily checks that \[ \begin{array}{rcl} \dot z(t)&=& [D(t)+u_1(t)][1+u_2(t)-z(t)]\\ \dot x(t)&=& x(t)[\mu(z(t)-x(t))-D(t)-u_1(t)]\, . 
\end{array} \] Therefore, since $z_r\equiv 1$ (which implies $\dot z_r(t)= [D(t)+u_1(t)][1-z_r(t)]$), our formula (\ref{mbound}) for $\mu$ immediately implies that the (transformed) error \[(\tilde z, \tilde \xi)(t)=(z(t)-z_r(t), \ln x(t)-\ln x_r(t))\] satisfies the {\em chemostat error dynamics} \begin{equation} \label{dis} \left\{ \begin{array}{rcl} \dot{\tilde{z}} & = & - [D(t) + u_1(t)] [\tilde z-u_2(t)] \\[.5em] \dot{\tilde{\xi}} & = & ma\dfrac{\tilde{z} - e^{\xi_r(t)}(e^{\tilde{\xi}} - 1)}{(a + z - e^{\xi})(a + z_r(t) - e^{\xi_r(t)})} - u_1(t). \end{array} \right. \end{equation} We are going to show that (\ref{dis}) has the Lyapunov function \begin{equation} \label{VChoice} \begin{array}{l} V(\tilde \xi, \tilde z) = e^{L_3(\tilde{\xi}, \tilde z)} - 1,\; \; \; {\rm where}\; \; L_3(\tilde \xi, \tilde z)=L_1(\tilde \xi)+\kappa L_2(\tilde z),\\ L_1(\tilde \xi)=e^{\tilde \xi}-\tilde \xi-1,\; \; {\rm and}\; \; L_2(\tilde z) = \frac{1}{D_o - \bar u} \tilde z^2 \end{array} \end{equation} and $\kappa>0$ is the constant defined in (\ref{ck}). {}From the explicit expressions $z_r(t) = 1$ and $e^{\xi_r(t)} = \frac{1}{2} + \frac{1}{4}\cos(t)\ge \frac{1}{4}$, we deduce that the time derivative of $L_1$ along trajectories of (\ref{dis}) satisfies \[\renewcommand{\arraystretch}{2.5} \begin{array}{rcl} \dot{L}_1 & = & ma\dfrac{(e^{\tilde{\xi}} - 1)\tilde{z} - e^{\xi_r(t)} (e^{\tilde{\xi}} - 1)^2}{(a + z - e^{\xi})(a + z_r(t) - e^{\xi_r(t)})} - (e^{\tilde{\xi}} - 1) u_1(t)\\ & \le & ma\dfrac{ - \frac{1}{4 a + 2 - \cos(t)}(e^{\tilde{\xi}} - 1)^2 + \frac{4}{4 a + 2 - \cos(t)}|e^{\tilde{\xi}} - 1||\tilde{z}|}{(a + z - e^{\xi})} - (e^{\tilde{\xi}} - 1) u_1(t) \\ & \leq & \dfrac{ - \frac{ma}{4 a + 3}(e^{\tilde{\xi}} - 1)^2 + \frac{4 ma}{4 a + 1}|e^{\tilde{\xi}} - 1||\tilde{z}|}{(a + z - e^{\xi})} - (e^{\tilde{\xi}} - 1) u_1(t) , \end{array} \] where we also used the fact that $z-e^\xi=S\ge 0$. 
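As a quick numerical sanity check of the preceding estimate (an illustration only, with $u_1\equiv 0$ and the hypothetical values $m=10$, $a=1/2$), one can sample random states with $S\ge 0$ and confirm that the exact expression for $\dot L_1$ never exceeds the final bound derived above:

```python
import numpy as np

rng  = np.random.default_rng(0)
m, a = 10.0, 0.5
checked = 0
for _ in range(2000):
    t    = rng.uniform(0.0, 2*np.pi)
    xi_r = np.log(0.5 + 0.25*np.cos(t))        # xi_r(t) = ln x_r(t)
    txi  = rng.uniform(-1.0, 1.0)              # tilde xi
    tz   = rng.uniform(-0.5, 0.5)              # tilde z
    z    = 1.0 + tz                            # since z_r(t) = 1
    S    = z - np.exp(xi_r + txi)
    if S < 0:                                  # keep only physical states S >= 0
        continue
    S_r = 1.0 - np.exp(xi_r)
    e   = np.exp(txi) - 1.0
    # Exact dL1/dt (with u1 = 0) versus the bound in the last displayed line:
    exact = m*a*(e*tz - np.exp(xi_r)*e**2) / ((a + S)*(a + S_r))
    bound = (-(m*a/(4*a + 3))*e**2
             + (4*m*a/(4*a + 1))*abs(e)*abs(tz)) / (a + S)
    assert exact <= bound + 1e-12
    checked += 1
assert checked > 500
```

The check uses exactly the two ingredients of the estimate: $e^{\xi_r(t)}\ge 1/4$ and $4a+1\le 4a+2-\cos(t)\le 4a+3$.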
Since $D(t)+u_1(t)\ge D_o-\bar u$ everywhere, one readily checks that along the trajectories of (\ref{dis}), \begin{equation} \label{s1mple} \begin{array}{rclcl} \dot{L}_2 &\leq & - \tilde z^2 + \tilde c|\tilde z| |u_2(t)|&\le &-\frac{1}{2}\tilde z^2+cu^2_2(t) \end{array} \end{equation} where $\tilde c = 2(\bar D + \bar u)(D_o - \bar u)^{-1}$, the constant $c$ is defined by (\ref{ck}), and the last inequality used (\ref{triangle}) with the choices $p=|\tilde z|$, $q=|u_2(t)|$, and $d=1/(2\tilde c)$. The fact that $\frac{1}{2}\tilde c^2\le c$ easily follows because $\bar u\le \frac{1}{2}D_o$. Along the trajectories of (\ref{dis}), \begin{equation} \label{s2mple} \begin{array}{rcl} \dot{L}_3 & \leq & \dfrac{ - \frac{ma}{4 a + 3}(e^{\tilde{\xi}} - 1)^2 + \frac{4 ma}{4 a + 1}|e^{\tilde{\xi}} - 1||\tilde{z}|}{(a + z - e^{\xi})} - (e^{\tilde{\xi}} - 1) u_1(t)\\&& - \dfrac{1}{2}\kappa \tilde z^2 + \kappa c u^2_2(t). \end{array} \end{equation} We distinguish between two cases. \noindent {\em \underline{Case 1a}: $z(t) \leq 2$}. Then since $z-e^\xi=S \ge 0$, we get \begin{equation} \label{s2h}\renewcommand{\arraystretch}{1.5} \begin{array}{rcl} \dot{L}_3 & \leq & - \dfrac{m a}{(4a + 3)(a + 2)}(e^{\tilde{\xi}} - 1)^2 + \dfrac{4 m}{4a + 1} |e^{\tilde{\xi}} - 1||\tilde{z}|\\&& - (e^{\tilde{\xi}} - 1) u_1(t) - \frac{1}{2}\kappa \tilde z^2 + \kappa c u^2_2(t). \end{array} \end{equation} Using the triangle inequality (\ref{triangle}) with the choices \[ p=|e^{\tilde \xi}-1|,\; \; \; q=|\tilde z|, \; \; \; {\rm and}\; \; \; d=\frac{a(4a+1)}{8(4a+3)(a+2)},\] we deduce from (\ref{s2h}) that \begin{equation} \label{s21}\renewcommand{\arraystretch}{1.5} \begin{array}{rcl} \! \! \dot{L}_3 & \leq & - \dfrac{m a}{2(4a + 3)(a + 2)}(e^{\tilde{\xi}} - 1)^2\\[1em]&& - \left[\dfrac{\kappa}{2} - \dfrac{8(4a + 3)(a + 2)m}{a (4 a + 1)^2}\right] \tilde z^2 - (e^{\tilde{\xi}} - 1) u_1 + \kappa c u^2_2. \end{array} \end{equation} \noindent {\em \underline{Case 2a}: $z(t) \geq 2$}. 
Since $z(t) - e^{\xi(t)} \geq 0$, it follows that $e^{- \xi_r(t)} z(t) - e^{\tilde \xi(t)} \geq 0$. Therefore, since $x_r\ge 1/4$, \begin{equation} \label{a4h}\begin{array}{l} e^{\tilde \xi(t)} \; \leq\; e^{- \xi_r(t)} z(t) \; \leq\; e^{-\ln(1/4)}z(t)= 4 z(t),\\ {\rm hence}\; \; \; - 1 \; \leq \; e^{\tilde \xi(t)} - 1 \; \leq \; 4 z(t) - 1. \end{array}\end{equation} Since $z(t) \geq 2$ and $z_r=1$, we have $\tilde z(t) \geq 1$. As $\tilde z=z-z_r=z-1$, condition (\ref{a4h}) gives \begin{equation} - \tilde z(t) \; \leq\; e^{\tilde \xi(t)} - 1 \; \leq\; 3 + 4\tilde z(t) \; \leq\; 7 \tilde z(t), \; \; \; {\rm hence}\; \; \; |e^{\tilde \xi(t)} - 1| \leq 7 \tilde z(t). \end{equation} {}From this last inequality and the inequality $z - e^{\xi} \geq 0$, we deduce from dropping the first term in (\ref{s2mple}) that \begin{equation} \label{u2upue} \begin{array}{rcl} \dot{L}_3 & \leq & \dfrac{28 m}{4 a + 1} \tilde z^2 - (e^{\tilde{\xi}} - 1) u_1 - \dfrac{\kappa}{2} \tilde z^2 + \kappa c u^2_2 \\[1em] & \leq & - \dfrac{1}{200}\kappa (e^{\tilde \xi} - 1)^2 - \left[\dfrac{1}{4} \kappa - \dfrac{28 m}{4 a + 1}\right] \tilde z^2 - (e^{\tilde{\xi}} - 1) u_1 + \kappa c u^2_2 . \end{array} \end{equation} \noindent We deduce from our choice (\ref{ck}) of $\kappa$, (\ref{s21}), and (\ref{u2upue}) that in Cases 1a-2a, \begin{equation} \label{a2upue} \begin{array}{rcl} \dot{L}_3 & \leq & - C_1 [(e^{\tilde \xi} - 1)^2 + \tilde z^2] - (e^{\tilde{\xi}} - 1) u_1 + \kappa c u^2_2 \end{array} \end{equation} where $C_1$ is defined in (\ref{ck}). Using (\ref{triangle}) with $p=|{\rm exp}(\tilde \xi(t))-1|$, $q=|u_1(t)|$, and $d=\frac{C_1}{2}$, and then the upper bounds of $u_1$ and $u_2$, we deduce from (\ref{a2upue}) that \begin{equation} \label{a2uiue}\renewcommand{\arraystretch}{2.5} \begin{array}{rcl} \! \! \! \! 
\dot{L}_3 & \leq & - \dfrac{C_1}{2} [(e^{\tilde \xi} - 1)^2 + \tilde z^2] + \dfrac{1}{2 C_1} \bar u |u_1| + \kappa c \bar u |u_2| \\ & \leq & - \dfrac{C_1}{2} [(e^{\tilde \xi} - 1)^2 + \tilde z^2] + C_2 |u|,\; \; \; {\rm where} \; \; \; C_2 := \left(\dfrac{1}{C_1} + 2\kappa c\right) \bar u \end{array} \end{equation} and where the last inequality used the relationship $|u|_1\le 2|u|_2$ between the $1$-norm and the $2$-norm. We consider two additional cases. \noindent \underline{\em Case 1b}: $(e^{\tilde \xi(t)} - 1)^2 + \tilde z^2(t) \geq \frac{1}{2}$. Then (\ref{a2uiue}) gives \begin{equation} \label{s2dide} \begin{array}{rcl} \dot{V} & \leq & e^{L_3(\tilde{\xi},\tilde{z})}\left(- \dfrac{C_1}{4} + C_2 |u(t)|\right). \end{array} \end{equation} Next notice that (\ref{constraint}) and our choice of $C_1\in (0,1]$ give \begin{equation} \label{s2dodo} \bar u \; \leq\; \frac{C_1}{8 C_2},\; \; \; \; {\rm hence} \; \; \; \; \dot{V} \; \leq\; - \dfrac{C_1}{8}e^{L_3(\tilde{\xi},\tilde{z})} \; \leq\; - \dfrac{C_1}{8} V(\tilde \xi, \tilde z). \end{equation} \noindent \underline{\em Case 2b}: $(e^{\tilde \xi(t)} - 1)^2 + \tilde z^2(t) \leq \frac{1}{2}$. Then $(\tilde \xi(t),\tilde z(t))$ is in a suitable bounded set, so since \[\tilde F(L)=(e^L-L-1)(e^L-1)^{-2},\;\; \; {\tilde G}(L)= (e^L-1)L^{-1} \] are locally bounded when defined to be zero at $L=0$, one can readily use (\ref{a2uiue}) to compute constants $C_3 > 0$ and $C_4 > 0$ such that \begin{equation}\label{een} \begin{array}{rclcl} \dot{V} & \leq & - C_3 L_3(\tilde{\xi}, \tilde z) + C_2 |u(t)| &\le & -C_4 V(\tilde{\xi}, \tilde z)+ C_2 |u(t)| \end{array} \end{equation} where $\tilde F$ was used to get $C_3$ and $\tilde G$ was used to get $C_4$. It follows from (\ref{s2dodo})-(\ref{een}) that, in Cases 1b-2b, \begin{equation} \label{goal} \begin{array}{rcl} \dot{V} & \leq & - C_5 V(\tilde{\xi}, \tilde z) + C_2 |u(t)| \end{array} \end{equation} with $C_5 = \inf\left\{C_4, \frac{C_1}{8}\right\}$. 
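Each application of the triangle inequality (\ref{triangle}) above is consistent with the Young-type bound $pq\le dp^2+\frac{1}{4d}q^2$ for $p,q\ge 0$ and $d>0$ (the equation itself lies outside this excerpt, so we treat this form as read off from its uses); it follows by completing the square, as this short symbolic check confirms:

```python
import sympy as sp

p, q, d = sp.symbols('p q d', positive=True)

# pq <= d*p**2 + q**2/(4*d)  <=>  (sqrt(d)*p - q/(2*sqrt(d)))**2 >= 0.
gap = d * p**2 + q**2 / (4 * d) - p * q
square = (sp.sqrt(d) * p - q / (2 * sp.sqrt(d)))**2
assert sp.simplify(gap - sp.expand(square)) == 0
print("pq <= d*p**2 + q**2/(4*d) verified by completing the square")
```

The free parameter $d$ is what lets the proof trade a small loss on the $(e^{\tilde\xi}-1)^2$ term for a controllable coefficient on the $\tilde z^2$ term.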
Condition (\ref{goal}) is the classical ISS Lyapunov function decay condition for the transformed error dynamics evolving on our restricted state space. Therefore, a slight variant of the classical ISS arguments combined with (\ref{goal}) gives the ISS estimate asserted by Theorem \ref{iss-track}. For details, see the appendix below. This concludes the proof. \begin{remark} \label{iISS-rk} If our control set $\mathbf{U}$ is replaced by the larger control set $\mathbf{U}^\sharp:=[-\bar u,+\bar u]^2$ for any fixed constant $\bar u\in (0,\min\{1,D_o\})$, then the error dynamics (\ref{dis}) instead satisfies the less stringent {\em integral} ISS property. The relevant definitions are as follows. We say that (\ref{gen}) is {\em integral input-to-state stable (iISS)} provided there exist $\delta_1,\delta_2\in \mathcal{K}_\infty$ and $\beta\in \mathcal{KL}$ such that \[ \tag{iISS} \delta_1(|y(t; t_o, y_o, \alpha)|)\; \le\; \beta(|y_o|, t-t_o)+\int_{t_o}^{t}\delta_2(|\alpha(r)|)dr\] for all $t\ge t_o$ and all measurable essentially bounded functions $\alpha:[0,\infty)\to \mathbf{U}^\sharp$. This condition is less restrictive than ISS since e.g. $\dot y=-\arctan(y)+u$ is iISS but not ISS \cite{ASW00}. An {\em iISS-LF} for (\ref{gen}) (with controls in $\mathbf{U}^\sharp$) is then defined to be a $C^1$ function $V:\mathbb{R}^n\times [0,\infty)\to [0,\infty)$ for which there are $\gamma_1, \gamma_2, \gamma_4\in \mathcal{K}_\infty$ and a positive definite function $\gamma_3$ (i.e., $\gamma_3:[0,\infty)\to[0,\infty)$ is continuous and zero only at zero) such that Conditions 1-2 in Remark \ref{pba} hold everywhere. This is less restrictive than the ISS-LF condition since $\gamma_3$ need not be of class $\mathcal{K}_\infty$.
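The example $\dot y=-\arctan(y)+u$ cited above from \cite{ASW00} is easy to probe numerically: the bounded constant input $u\equiv 2>\pi/2=\sup|\arctan|$ forces unbounded growth (so the system cannot be ISS), while with $u\equiv 0$ the origin is still attractive. A minimal sketch (the horizon and initial conditions are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, u):
    # dy/dt = -arctan(y) + u: iISS but not ISS
    return -np.arctan(y) + u

# A bounded input u = 2 > pi/2 drives y to infinity, ruling out ISS...
grow = solve_ivp(rhs, (0.0, 100.0), [0.0], args=(2.0,), rtol=1e-8, atol=1e-10)
# ...while with u = 0 the origin is still globally attractive.
decay = solve_ivp(rhs, (0.0, 100.0), [5.0], args=(0.0,), rtol=1e-8, atol=1e-10)

assert grow.y[0, -1] > 20.0      # growth rate is at least 2 - pi/2 per unit time
assert abs(decay.y[0, -1]) < 1e-3
print(grow.y[0, -1], decay.y[0, -1])
```

The growth under $u\equiv 2$ is at least linear at rate $2-\pi/2\approx 0.43$, while the iISS integral term accounts for exactly this kind of bounded-energy excursion.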
Arguing as in the proof of Theorem \ref{iss-track} up through (\ref{a2uiue}) and solving the appropriate constrained minimum problem to get $\gamma_3$ shows that $V=L_3$ satisfies the iISS Lyapunov function decay condition (namely, Condition 2 from Remark \ref{pba}) for the error dynamics (\ref{dis}) and the control set $\mathbf{U}^\sharp$ using $\gamma_3(s)=C_1(e^{-s}-1)^2/2$. Therefore, this system is in fact iISS, by a slight variant of the proof of the iISS estimate in \cite[Theorem 1]{ASW00}. We leave the details to the reader. \end{remark} \section{Stability in the Presence of Several Species} \label{several} Theorem \ref{iss-track} shows that the stability of the reference trajectory (\ref{reftraj}) is robust with respect to small perturbations of the dilution rate and initial concentration. To further demonstrate the robustness of our results, we next show that the stability of (\ref{reftraj}) is also maintained when the model (\ref{model}) is augmented to include additional species that are being driven to extinction, in the following sense. We assume for simplicity that $u_1\equiv u_2\equiv 0$. Consider the augmented system \begin{equation}\renewcommand{\arraystretch}{.8} \label{nmodel} \! \! \! \! \! \left\{\begin{array}{rcl} {\dot S} & = & D(t)(1 - S) - \mu(S) x - \displaystyle \sum_{i=1}^n\nu_i(S)y_i \\ \dot{x} & = & x (\mu(S) - D(t)) \\[.5em] {\dot y_i} & = & y_i(\nu_i(S) - D(t)),\;\; i=1,\dots, n , \end{array}\right. \end{equation} \noindent where $\mu$ is as in (\ref{mbound}) and $\nu_i$ is continuous and increasing and satisfies $\nu_i(0) = 0$ for $i=1,2,\ldots, n$. The variables $y_i$ represent the levels of $n$ additional species. We choose $D$ and $D_o$ as in (\ref{chli}) and (\ref{aer1}), and we assume $\nu_i(1)<D_o$ for $i=1,2,\ldots, n$. (This assumption is, in a sense, natural because one can easily check that it ensures that each species concentration $y_i$ converges to zero. 
Indeed, the fact that, for all $t \geq 0$, $D(t) > D_o > 0$ and $\mu(S(t)) x(t) + \sum_{i=1}^n\nu_i(S(t))y_i(t) \geq 0$ ensures, in combination with the inequalities $\nu_i(1)<D_o$, that there exist an instant $T > 0$ and a constant $c > 0$ such that for all $t \geq T$ and for $i=1,2,\ldots, n$, $\dot{y}_i(t) \leq - c y_i(t)$. This implies that the $y_i$'s converge to $0$ exponentially.) We show that the transformed error \begin{equation} \label{nte} (\tilde z,\tilde \xi,\tilde y)\; :=\; (S+x-S_r-x_r,\ln(x)-\ln(x_r),y) \end{equation} between any componentwise positive solution $(S,x,y)$ of (\ref{nmodel}) and the reference trajectory $(S_r,x_r,0, \ldots, 0) = \left(\frac{1}{2} - \frac{1}{4}\cos(t) ,\frac{1}{2} + \frac{1}{4}\cos(t), 0,\ldots,0\right)$ converges exponentially to the zero vector as $t\to +\infty$. To this end, notice that in the coordinates (\ref{nte}), the system (\ref{nmodel}) becomes \begin{equation} \label{dmodel}\renewcommand{\arraystretch}{2.25} \! \! \! \! \! \left\{\begin{array}{rcl} \dot{\tilde z} & = & - D(t) \tilde z(t) - \displaystyle \sum_{i=1}^n\nu_i(S)y_i \\ \dot{\tilde \xi} & = & ma\dfrac{\tilde{z} - e^{\xi_r(t)} (e^{\tilde{\xi}} - 1)}{(a + z - e^{\xi})(a + z_r(t) - e^{\xi_r(t)})} \\ \dot{y}_i & = & y_i(\nu_i(S) - D(t)),\;\; i=1,\dots, n , \end{array}\right. \end{equation} by the same calculations that led to (\ref{dis}). Set $L_1(\tilde \xi)={\rm exp}(\tilde \xi)-\tilde \xi-1$ where ${\rm exp}(r):=e^r$.
Since $e^{\xi_r}\ge 1/4$, $z_r\equiv 1$, $0\le z_r-e^{\xi_r}=S_r=1/2-\cos(t)/4\leq 1$, and $0\le z-e^\xi=\tilde z+1-e^\xi\le 2+\tilde z^2$, we deduce that the derivative of \[L_3(\tilde \xi,\tilde z):=L_1(\tilde \xi)+\frac{4m}{aD_o}\tilde z^2\] along the trajectories of (\ref{dmodel}) satisfies \begin{equation} \label{dmo1}\renewcommand{\arraystretch}{2.25} \begin{array}{rcl} \dot{L}_3 & \leq & ma\dfrac{\tilde z(e^{\tilde \xi}-1)-\frac{1}{4}(e^{\tilde \xi}-1)^2}{(a+z-e^\xi)(a+z_r-e^{\xi_r})}-\dfrac{8m}{a}\tilde z^2- \dfrac{8m}{aD_o}\tilde{z} \displaystyle \sum_{i=1}^n\nu_i(S)y_i\\ &\le & ma\dfrac{4\tilde z^2-\frac{1}{16}(e^{\tilde \xi}-1)^2}{(a+z-e^\xi)(a+z_r-e^{\xi_r})}-\dfrac{8m}{a}\tilde z^2- \dfrac{8m}{aD_o}\tilde{z} \displaystyle \sum_{i=1}^n\nu_i(S)y_i\\ &\leq& - \dfrac{ma(e^{\tilde{\xi}} - 1)^2}{16(a+1) (a + 2 + \tilde{z}^2)} - \dfrac{4m}{a}\tilde{z}^2\\[.5em] &&-2\left[\dfrac{\sqrt{m}}{\sqrt{a}}{\tilde z}\right]\left[\dfrac{4\sqrt{m}}{\sqrt{a}{D_o}} \displaystyle \sum_{i=1}^n\nu_i(S)y_i\right] \\[1em] & \leq & - \dfrac{ma(e^{\tilde{\xi}} - 1)^2}{16(a+1) (a + 2 + \tilde{z}^2)} - \dfrac{3m}{a}\tilde{z}^2 + \dfrac{16 m}{ a D^2_o}\left[\displaystyle \sum_{i=1}^n\nu_i(S)y_i\right]^2 \end{array} \end{equation} where the second inequality is by (\ref{triangle}) with $p=|\tilde z|$, $q=|e^{\tilde \xi}-1|$, and $d=4$ and the last inequality used the relation $J^2+K^2\ge -2JK$ for real values $J$ and $K$. On the other hand, since $\nu_i(1)< D_o$ for each $i$, the form of the dynamics for $S$ and the nonnegativity of $\mu$ and the $\nu_i$'s along our componentwise positive trajectories imply that there exist $\varepsilon > 0$ and $T \geq 0$ such that (i) $S(t) \leq 1 + \varepsilon$ for all $t\ge T$ and (ii) $\nu_i(1 + \varepsilon) < D_o$ for all $i=1,2,\ldots, n$. We deduce that, for all $i=1,2,\ldots, n$ and for all $t \geq T$, \begin{equation} \label{exs1}\renewcommand{\arraystretch}{2} \begin{array}{rclclcl} \! \! \! 
\dfrac{1}{2}\dfrac{d}{dt}y^2_i &\leq& (\nu_i(S(t)) - D(t)) y_i^2 &\leq& (\nu_i(1 + \varepsilon) - D_o) y_i^2 &\leq& - \delta y_i^2, \end{array}\end{equation} where $\delta=D_o-\max\{\nu_i(1 + \varepsilon): i=1,2,\ldots, n\}>0$. Hence, each $y_i(t)$ converges exponentially to zero. Next notice that along each pair $(\tilde \xi(t),\tilde z(t))$, the function \[\Delta(\tilde \xi,\tilde z):= \dfrac{ma(e^{\tilde{\xi}} - 1)^2}{16(a+1) (a + 2 + \tilde{z}^2)} \] is positive if and only if $\tilde\xi \ne 0$. By (\ref{dmo1}) and (\ref{exs1}), the time derivative of \begin{equation} \label{dmo3} L_4(\tilde{z}, \tilde{\xi}, y_1, ..., y_n) = L_3(\tilde{z}, \tilde{\xi}) + A \displaystyle\displaystyle \sum_{i = 1}^{n} y_i^2, \; \; \; {\rm where}\; \; A:=\frac{16mn^2}{a\delta} \end{equation} along the trajectories of (\ref{dmodel}) satisfies \begin{equation}\label{surmise}\renewcommand{\arraystretch}{2.25} \begin{array}{rcl} \dot{L}_4 &\le& - \Delta(\tilde \xi,\tilde z) - \dfrac{3m}{a}\tilde{z}^2 +\dfrac{16m}{aD^2_o}\left[\displaystyle \sum_{i=1}^n\nu_i(S)y_i\right]^2-2A\delta\displaystyle \sum_{i=1}^ny^2_i\\ &\le& - \Delta(\tilde \xi,\tilde z) - \dfrac{3m}{a}\tilde{z}^2 +\dfrac{16mn^2}{aD^2_o}\displaystyle \sum_{i=1}^n\nu^2_i(S)y^2_i-2A\delta\displaystyle \sum_{i=1}^ny^2_i\\&\le& - \Delta(\tilde \xi,\tilde z) -\dfrac{3m}{a}\tilde z^2-\dfrac{16mn^2}{a}\displaystyle \sum_{i=1}^ny^2_i\; \; =:\; \; -M(\tilde \xi,\tilde z,y) \end{array}\end{equation} \noindent provided $t>T$ where $T$ is chosen as above. (The second inequality in (\ref{surmise}) follows because for any nonnegative $a_k$, we get $a_k\le (\sum_{i=1}^na^2_i)^{1/2}$ which we sum and then square to get $(\sum_{i=1}^na_i)^2\le n^2 \sum_{i=1}^na^2_i$. The last inequality used $\nu_i(S(t))\le \nu_i(1+\varepsilon)<D_o$ for all $t\ge T$ and our choice of $A$. 
) It is tempting to surmise from (\ref{surmise}) and the structure of $L_4$ that $L_4$ is a Lyapunov function for (\ref{dmodel}) since then we could use standard Lyapunov function theory to conclude that $(\tilde \xi(t),\tilde z(t), y(t))$ asymptotically converges to zero. However, such an argument would not be technically correct, since the state space of (\ref{dmodel}) is not ${\mathbb R}^{n+2}$ (because the original augmented chemostat model (\ref{nmodel}) is only defined for componentwise nonnegative values of the state). Instead, we argue as follows (in which we may assume for simplicity that the initial time for the augmented error dynamics is zero). For any $t \geq 0$, integrating the last inequality of (\ref{surmise}) over $[0,t]$ gives \begin{equation} \label{da2} \begin{array}{l} L_4\left(\tilde{z}(t), \tilde{\xi}(t), y(t)\right) - L_4\left(\tilde{z}(0), \tilde{\xi}(0), y(0)\right)\\\leq - \displaystyle\int_{0}^{t} M\left(\tilde \xi(l),\tilde z(l),y(l)\right) dl\; . \end{array} \end{equation} It follows that, for all $t \geq 0$, \begin{equation} \label{da3} \begin{array}{rcl} L_1\left(\tilde \xi(t)\right) &\leq& L_4\left(\tilde{z}(0), \tilde{\xi}(0), y(0)\right)\; . \end{array} \end{equation} Therefore $\xi(t)=\tilde \xi(t)+\xi_r(t)$ is a bounded function. Similarly, $\tilde{z}(t)$ and $y(t)$ are bounded. We deduce that $\tilde\xi$, $\tilde z$, and the components of $y$ are uniformly continuous, since their time derivatives (\ref{dmodel}) are bounded. Reapplying (\ref{da2}) therefore implies \[\int_{0}^{+ \infty} M(\tilde \xi(l),\tilde z(l),y(l)) dl\] is finite. It follows from Barbalat's lemma \cite[p.323]{K02} and the structure of the function $M$ that $(\tilde \xi(t), \tilde z(t), y(t))\to 0$ as $t\to+\infty$. This establishes our stability condition for the multi-species model. 
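The convergence just established can be illustrated numerically. The sketch below is an illustration under stated assumptions, not a computation from the paper: it takes $n=1$ extra species with the illustrative choice $\nu_1(S)=2S$ (so $\nu_1(1)=2<D_o=7/3$), the Monod uptake $\mu(S)=mS/(a+S)$ with $m=10$ and $a=1/2$ as in the simulation section, and the dilution rate $D(t)=\mu(S_r(t))-\dot x_r(t)/x_r(t)$, which makes $(S_r,x_r)$ an exact solution of the unperturbed single-species model.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, a = 10.0, 0.5                        # Monod parameters from the simulation section
mu = lambda S: m * S / (a + S)          # uptake rate of the surviving species
nu1 = lambda S: 2.0 * S                 # extra species; nu1(1) = 2 < D_o = 7/3 (our choice)

def x_ref(t):                           # reference trajectory; S_r(t) = 1 - x_r(t)
    return 0.5 + 0.25 * np.cos(t)

def D(t):                               # dilution rate making (S_r, x_r) an exact solution
    S_r = 1.0 - x_ref(t)
    return mu(S_r) + 0.25 * np.sin(t) / x_ref(t)   # = mu(S_r) - x_r'(t)/x_r(t)

def rhs(t, w):                          # augmented chemostat with one extra species
    S, x, y1 = w
    return [D(t) * (1.0 - S) - mu(S) * x - nu1(S) * y1,
            x * (mu(S) - D(t)),
            y1 * (nu1(S) - D(t))]

# Componentwise positive initial condition (arbitrary choice).
sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 2.0, 1.0], rtol=1e-9, atol=1e-12)
S_T, x_T, y_T = sol.y[:, -1]
assert y_T < 1e-3                       # the extra species goes extinct
assert abs(x_T - x_ref(40.0)) < 1e-3    # x tracks the reference trajectory
print(S_T, x_T, y_T)
```

As predicted, the extra species decays exponentially while $(S,x)$ locks onto the periodic reference.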
\begin{remark} Notice that $-\dot L_4$ is bounded below by a quadratic of the form $\bar c|(\tilde \xi, \tilde z,y)|^2$ along the trajectories of (\ref{dmodel}), and that $L_4$ is bounded above and below by such quadratics along the trajectories as well, since the trajectories are bounded. {}From this fact and (\ref{surmise}), one can deduce that the trajectories $(\tilde \xi(t), \tilde z(t),y(t))$ converge {\em exponentially} to zero. \end{remark} \section{Simulation} \label{simulations} To validate our convergence result, we simulated the dynamics (\ref{mod1}) with the initial values $x(0)=2$ and $S(0)=1$ and the reference trajectory $x_r(t)$, using the parameters $m=10$ and $a=\frac{1}{2}$ and $t_o=0$. In this case, the lower bound on $D(t)$ provided by (\ref{aer1}) is $D_o=7/3$. It follows from Remark \ref{iISS-rk} that the convergence of $x(t)$ to $x_r(t)$ is robust to disturbances that are valued in $[-\bar u, \bar u]^2$ for any positive constant $\bar u<\min\{1,D_o\}=1$, in the sense of integral input-to-state stability. Moreover, using the estimate (\ref{a2uiue}), one easily checks that in this case, estimate (iISS) on p.\pageref{iISS-rk} above holds with $\delta_2(r)=2C_2r$; cf. the proof of \cite[Theorem 1]{ASW00}. For our simulation, we took the disturbance $u_1(t)=0.5e^{-t}$ on the dilution rate and $u_2(t)\equiv 0$. This gave the plot of $x(t)$ and $x_r(t)$ against time in Figure \ref{xsol}. Our simulation shows that the state trajectory $x(t)$ closely tracks the reference trajectory $x_r(t)$ even in the presence of small disturbances and so validates our findings. \begin{figure}[t] \begin{center} \scalebox{.5}{\includegraphics{disturbed-tracking.eps}} \end{center} \caption{State Trajectory Component $x(t)$ (dashed) and Reference Trajectory $x_r(t)$ (solid) for Chemostat \label{xsol}} \end{figure} \section{Conclusions} \label{concl} The chemostat model is a useful framework for modeling species competing for nutrients. 
For the case of one species competing for one nutrient and a suitable time-varying dilution rate, we proved stability of an appropriate reference trajectory. Moreover, we found that the stability was maintained even if the model is augmented with other species that are being driven to extinction, or if there are disturbances of appropriately small magnitude acting on the dilution rate and input nutrient concentration. \smallskip \section*{Appendix} For completeness, we provide the slight variant of the classical ISS arguments needed to finish the proof of Theorem \ref{iss-track}. Multiplying through (\ref{goal}) by $e^{C_5l}$ and applying the standard ``variation of parameters'' formula to $[t_o,t]\ni l\mapsto V(\tilde \xi(l),\tilde z(l))$ (by integrating between $t_o\ge 0$ and $t \geq t_o$) gives \begin{equation} \begin{array}{rcl} V(\tilde{\xi}(t), \tilde z(t)) &\leq& e^{(t_o - t) C_5} V(\tilde{\xi}(t_o), \tilde z(t_o))+ C_2 |u|_{[t_o,t]} \; , \end{array} \end{equation} where we enlarged $C_2$ without relabeling. We deduce that \[ L_1(\tilde{\xi}(t)) + \frac{\kappa}{D_o - \bar u} \tilde z^2(t) \leq \ln\left(1 + e^{(t_o - t) C_5} V(\tilde{\xi}(t_o), \tilde z(t_o)) + C_2 |u|_{[t_o,t]}\right)\] where $L_1$ is defined in (\ref{VChoice}). Since $e^r - 1 - r \geq \frac{1}{2}r^2$ and $\ln(1 + r) \leq r$ for all $r \geq 0$, we deduce from the formula for $V$ that \begin{equation}\label{49}\begin{array}{rcl} \dfrac{1}{2}\tilde{\xi}^2(t) + \dfrac{\kappa}{D_o - \bar u} \tilde z^2(t) & \leq& e^{(t_o - t) C_5} \Omega\left(|(\tilde{\xi}(t_o),\tilde z(t_o))|\right) + C_2 |u|_{[t_o,t]}\\&& {\rm where}\; \; \; \; \Omega(r) = e^{e^{r} - 1 - r + \frac{\kappa}{D_o - \bar u} r^2} - 1\; . \end{array} \end{equation} In particular, $\Omega\in{\mathcal K}_{\infty}$.
{}From (\ref{49}) and the inequality $\sqrt{a+b}\le \sqrt{a}+\sqrt{b}$, \begin{equation} \label{tal} \begin{array}{rcl} |\tilde{\xi}(t)| & \leq & \sqrt{2 e^{(t_o - t) C_5} \Omega\left(|(\tilde{\xi}(t_o),\tilde z(t_o))|\right)} + \sqrt{2 C_2 |u|_{[t_o,t]}} \; \; \; \; {\rm and} \end{array} \end{equation} \begin{equation} \begin{array}{rcl} |\tilde z(t)| & \leq & \sqrt{e^{(t_o - t) C_5} \frac{D_o - \bar u}{\kappa} \Omega\left(|(\tilde{\xi}(t_o),\tilde z(t_o))|\right)} + \sqrt{C_2 \frac{D_o - \bar u}{\kappa}|u|_{[t_o,t]}} \end{array}\; . \end{equation} The relations $\tilde z=S-S_r+e^{\xi_r}(e^{\tilde \xi}-1)$ and $e^{a+b}-1\le \frac{1}{2}(e^{2a}-1)+\frac{1}{2}(e^{2b}-1)$ give \[\renewcommand{\arraystretch}{2} \begin{array}{rcl} |S(t) - S_r(t)| & \leq & |\tilde z(t)| + (e^{|\tilde{\xi}(t)|} - 1)\\ & \hspace{-.75in}\leq & \hspace{-.35in} \sqrt{e^{(t_o - t) C_5} \frac{D_o - \bar u}{\kappa} \Omega\left(|(\tilde{\xi}(t_o),\tilde z(t_o))|\right)} + \sqrt{C_2 \frac{D_o - \bar u}{\kappa}|u|_{[t_o,t]}} \\ & & \hspace{-.5in}+ \frac{1}{2} \left(e^{2\sqrt{2 e^{(t_o - t) C_5} \Omega\left(|(\tilde{\xi}(t_o),\tilde z(t_o))|\right)}} - 1\right) + \frac{1}{2} \left(e^{2\sqrt{2 C_2 |u|_{[t_o,t]}}} - 1\right) . \end{array} \] The desired ISS estimate (\ref{ISSest}) now follows immediately from this last inequality and (\ref{tal}) with the choices \[\renewcommand{\arraystretch}{1.5}\begin{array}{l}\renewcommand{\arraystretch}{1.25} \beta(s,t)=4\sqrt{\Omega(s){\rm exp}(-C_5t)\left\{1+\frac{D_o-\bar u}{\kappa}\right\}}+ {\rm exp}\left(4\sqrt{\Omega(s){\rm exp}(-C_5t)}\right)-1\\ {\rm and}\; \; \gamma(r)=4\sqrt{C_2\left(1+(D_o-\bar u)/\kappa\right)r}+{\rm exp}(4\sqrt{C_2r})-1.\end{array} \] This completes the proof of Theorem \ref{iss-track}. \section*{Acknowledgments} Part of this work was done while P. De Leenheer and F. Mazenc visited Louisiana State University (LSU). They thank LSU for the kind hospitality they enjoyed during this period. F.
Mazenc thanks Claude Lobry and Alain Rapaport for illuminating discussions. Malisoff was supported by NSF/DMS Grant 0424011. De Leenheer was supported by NSF/DMS Grant 0500861. The authors thank the referees for their comments, and they thank Hairui Tu for helping with the graphics. The second author thanks Ilana Aldor for stimulating discussions.
\section{Introduction} Correlated systems with competing on-site and intersite Coulomb interactions~\cite{imada1998} and fillings away from one electron per site ($n=1$, half filling) are presently a subject of intensive investigation due to the appearance of complex phases such as unconventional charge and magnetic orders. These systems become even more challenging when novel lattice structures emerge out of the original lattice in the region of strong correlations~\cite{baskaran2016}. This phenomenon is often found in geometrically frustrated systems, such as triangular and kagome lattices. On the triangular lattice, for instance, large on-site $U$ and nearest-neighbor $V$ Coulomb interactions generate effective honeycomb and enlarged triangular lattices at $1/3$ filling ($n=2/3$) by inducing charge disproportionation~\cite{watanabe2005,tocchio2014,kaneko2016}. When $V\gtrsim U/3$ the system tends to create a honeycomb lattice of empty sites and an enlarged triangular lattice of doubly occupied sites, while at smaller ratios of $V/U$ the system evolves into a honeycomb lattice of singly occupied sites with long-range antiferromagnetic order. A similar charge ordered state with noncollinear magnetic order has also been proposed in the Kondo lattice system~\cite{reja2015}. While these states are insulating, such exotic charge and magnetic orders become metallic away from the commensurate filling~\cite{tocchio2014}. Furthermore, at quarter filling ($n=1/2$), metallic states, named pinball liquids, have been also recently proposed~\cite{hotta2006,miyazaki2009,canocortes2011,merino2013}. They are characterized by a three-sublattice structure, in which the carriers of one sublattice are essentially localized (pins), with the remaining charges (balls) building an itinerant liquid on the interstitials. 
Recently, mechanisms other than direct charge disproportionation have also been proposed to generate new lattice structures, such as the emergence of a kagome lattice via spontaneous ferrimagnetic order coexisting with a $\sqrt{3}\times\sqrt{3}$ charge order pattern in a triangular Kondo lattice~\cite{akagi2015}. Similarly, on the kagome lattice, large values of $U$ and $V$ have been argued to induce nearly isolated six-site rings and an enlarged kagome lattice at $1/3$ filling ($n=2/3$)~\cite{wen2010,ferhat2014,pollmann2014}. Specifically, when $U,V>0$ and $t=0$, each corner-sharing triangle possesses charge order characterized by two singly occupied sites and an empty site. The empty site randomly sits on one of the three vertices of a triangle, which gives macroscopic charge degeneracy. Nonzero hopping $t$ lifts the charge degeneracy and appears to stabilize a $\sqrt{3}\times\sqrt{3}$ charge pattern, whose unit cell contains nine sites~\cite{wen2010,ferhat2014,pollmann2014}. Recently, by mapping the system into a hard-core boson Hamiltonian, a topological liquid was also proposed~\cite{roychowdhury2015}. Reported realizations of such emergent lattices are for instance the generation of a honeycomb structure through charge disproportionation in the metallic magnet ${\rm AgNiO_2}$~\cite{wawrzynska2007,wawrzynska2008} or the appearance of effective spin-$1/2$ chains in the heavy-fermion spinel ${\rm LiV_2O_4}$~\cite{fulde2001,fulde2002}. Actually, even when the lattice structures themselves are not geometrically frustrated, the formation of new lattices is possible due to strong electron correlations. For example, in the cubic lattice, a staggered $(\pi,\pi,\pi)$ charge order generates doubled face-centered-cubic lattices~\cite{hayami2014}. Moreover, effective spin and charge interactions in such systems may acquire additional geometrical frustrations.
Indeed, unconventional noncoplanar magnetic orders have been proposed in the periodic Anderson model on the cubic lattice at $n=3/2$ filling~\cite{hayami2014}. One of the most discussed bipartite lattices in two-dimensional systems is the honeycomb lattice. While correlation effects on the honeycomb lattice have been intensively discussed in the past, most studies have focused on half filling~\cite{meng2010,sorella2012,raghu2008,weeks2010,capponi2015,motruk2015,scherer2015,kurita2015}. Castro {\it et al.}~\cite{castro2011} and Grushin {\it et al.}~\cite{grushin2013} extensively studied the phase diagram in honeycomb systems for arbitrary filling; however, they considered only the spinless Hubbard model. In this work, we investigate the possible emergence of correlation-induced new lattice structures both in a bipartite and in a geometrically frustrated lattice at fillings that, to our knowledge, have not been investigated before. In particular, we perform an extensive analysis of the spinful extended Hubbard model on honeycomb and triangular lattices away from half filling, and focus on the interplay between the on-site $U$ and the intersite nearest- and next-nearest-neighbor Coulomb interactions $V$ and $V'$, respectively. By using the Hartree-Fock approximation, as well as perturbation theory and the variational Monte Carlo method, we find that a triangular structure emerges from charge order on the honeycomb lattice for large values of $U$ and $V$ at $3/4$ filling ($n=3/2$, three electrons per two sites on average). Charge-poor sites possess spin degrees of freedom, and their spin correlations are found to be antiferromagnetic in most of the phase diagram, while they become ferromagnetic when $U$ is much larger than $V$.
On the other hand, for the triangular lattice with $U$, $V$, and $V'$ interactions at $3/8$ filling ($n=3/4$, three electrons per four sites on average), considering large values of $U$ and a finite $V'$ (we set $V' = V/5$), we find that the system shows rich charge orders: a kagome structure emerges for intermediate values of $V$, while a one-dimensional structure is stabilized for large values of $V$. Both examples show an enhancement of geometrical frustration, from the honeycomb to the triangular lattice in the former case, and from the triangular to the kagome in the latter one. The paper is organized as follows. In Sec.~\ref{sec:honeycomb}, we present the extended Hubbard model on the honeycomb lattice and show the possible phases of the model as a function of $U$ and $V$ obtained with the Hartree-Fock approximation and with variational Monte Carlo. We also discuss how $U$ and $V$ determine magnetic patterns of the emergent charge ordered states by means of perturbation theory. In Sec.~\ref{sec:triangular}, we present variational Monte Carlo results for the extended Hubbard model on the triangular lattice and discuss possible phases of the model as a function of $V$ for large $U$ and $V'= V/5$. Finally, in Sec.~\ref{sec:conclusions}, we draw our conclusions. \section{Emergent triangular structure on a honeycomb system} \label{sec:honeycomb} \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_1.pdf} \caption{(Color online) The extended Hubbard model with hopping $t$, on-site Coulomb interaction $U$, and nearest-neighbor Coulomb interaction $V$, Eq.~(\ref{eq:hubbard}) and Eq.~(\ref{eq:hubbard_2orbital}), on (a) the honeycomb lattice and (b) its equivalent two-band representation, Eq.~(\ref{eq:hubbard_2orbital}). 
Blue and red circles denote orbitals $c$ and $d$, respectively.} \label{fig:lattice} \end{figure} \subsection{Extended Hubbard model on a honeycomb lattice} We consider an extended Hubbard model on the isotropic honeycomb lattice [see Fig.~\ref{fig:lattice}(a)] where the Hamiltonian is given as \begin{eqnarray} \label{eq:hubbard} H &=& - t \sum_{\langle i,j\rangle,\sigma} c^{\dagger}_{i,\sigma} c^{\phantom{\dagger}}_{j,\sigma} + \mathrm{h.c.} \nonumber \\ && + U \sum_{i} n_{i,\uparrow} n_{i,\downarrow} + V \sum_{\langle i,j\rangle} n_{i} n_{j}; \end{eqnarray} $t$ denotes the hopping parameter, and $U$ and $V$ are the on-site and nearest-neighbor Coulomb interaction, respectively. Hereafter, we investigate repulsive Coulomb interactions ($U,V\ge 0$), and focus on $3/4$ filling ($n=3/2$). We note that on the honeycomb lattice $3/4$ and $1/4$ fillings are equivalent via the particle-hole transformation. \begin{figure*}[t] \includegraphics[height=0.65\columnwidth]{fig_2a.pdf}\hspace{10ex} \includegraphics[height=0.65\columnwidth]{fig_2b-e.pdf} \caption{(Color online) (a) Hartree-Fock phase diagram of the Hubbard model at $n=3/2$, see Eq.~(\ref{eq:hubbard}), on the honeycomb lattice. (b) Illustration of the ferromagnetic metallic state (FM metal), with one up-spin and half a down-spin (on average) per site. (c) Charge ordered metallic state (CO metal), with alternating doubly and singly occupied site (large/small circles). (d) Charge ordered antiferromagnetic insulating state (CO+AF insulator). 
(e) Charge ordered ferromagnetic insulating state (CO+FM insulator).} \label{fig:MF_phase_diag} \end{figure*} This model, being defined on a lattice with two sites per unit cell, can also be regarded as a two-band (two-orbital) Hubbard model whose hoppings connect only different orbitals [see Fig.~\ref{fig:lattice}(b)]: \begin{eqnarray} \label{eq:hubbard_2orbital} H &=& - t \sum_{i,\sigma} \left( d^{\dagger}_{i,\sigma} c^{\phantom{\dagger}}_{i,\sigma} + d^{\dagger}_{i,\sigma} c^{\phantom{\dagger}}_{i+\bm{e}_x,\sigma} + d^{\dagger}_{i,\sigma} c^{\phantom{\dagger}}_{i+\bm{e}_y,\sigma} + \mathrm{h.c.} \right) \nonumber \\ && + U \sum_{i} \left( n_{i,\uparrow}^{c} n_{i,\downarrow}^{c} + n_{i,\uparrow}^{d} n_{i,\downarrow}^{d} \right) \nonumber \\ && + V \sum_{i} \left( n_{i}^{d} n_{i}^{c} + n_{i}^{d} n_{i+\bm{e}_x}^{c} + n_{i}^{d} n_{i+\bm{e}_y}^{c} \right). \end{eqnarray} Both representations of the Hamiltonian are equivalent and we will make use of the latter representation for computational purposes. \subsection{Mean-field phase diagram} In order to investigate the interplay between charge and magnetic order, we start with a mean-field analysis of the Hamiltonian presented above. Figure~\ref{fig:MF_phase_diag} (a) shows the $U$-$V$ phase diagram of the honeycomb model of Eq.~(\ref{eq:hubbard_2orbital}) at $3/4$ filling obtained with the restricted Hartree-Fock method (as explained in Appendix~\ref{sec:Hartree-Fock}). For simplicity, we have restricted ourselves to coplanar magnetic order patterns. \begin{figure}[t] \includegraphics[width=0.4\columnwidth]{fig_3.pdf} \caption{(Color online) Metastable state found at $V=0$. Antiferromagnetic (up-up-down-down) insulator without charge order. } \label{fig:V0_states} \end{figure} \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_4.pdf}% \caption{(Color online) Results for the honeycomb lattice at $3/4$ filling for $V=0$ obtained by the Hartree-Fock method. (a) Energy of each state as a function of $U/t$.
(b) Magnetization of the metallic ground state (normal and FM). (c) Density of states for the ground-state FM metal at each value of $U/t$. The Fermi level is set to $0$. When the up-spin band is completely below the Fermi level, the state becomes semimetallic.} \label{fig:V0_MF} \end{figure} \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_5.pdf}% \caption{(Color online) Results for the honeycomb lattice at $3/4$ filling for $U=0$ obtained by the Hartree-Fock method. (a) Energy of metallic states as a function of $V/t$. (b) The number of electrons per each sublattice for the metallic ground states (with and without charge order). (c) Density of states for the ground-state metallic states. The Fermi level is set to $0$.} \label{fig:U0_MF} \end{figure} In the absence of nearest-neighbor Coulomb interaction $V$ (along $V=0$) we find four ground-state candidates: normal metal, ferromagnetic metal [Fig.~\ref{fig:MF_phase_diag}(b)], and antiferromagnetic insulator with and without charge order [see Fig.~\ref{fig:MF_phase_diag}(d) and Fig.~\ref{fig:V0_states}]. As shown in Fig.~\ref{fig:V0_MF}(a), the energies of antiferromagnetic and charge ordered states are always higher than those of normal and ferromagnetic metal states. A continuous phase transition from normal to ferromagnetic metal occurs at $U/t\sim 5$, as shown in Fig.~\ref{fig:V0_MF}(b). This ferromagnetic metal [Fig.~\ref{fig:MF_phase_diag} (b)] is consistent with the result obtained by Hanisch {\it et al.}~\cite{hanisch1997}. When $U/t\gtrsim 6$, spins are fully polarized. In the ferromagnetic state at $3/4$ filling, the up-spin lower band is fully occupied while the down-spin upper band is half filled, and the density of states (DOS) is zero at the Fermi energy, indicating semimetallic behavior. Figure~\ref{fig:V0_MF}(c) shows the DOS as a function of $U$. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_6.pdf}% \caption{(Color online) States found for $U=V$.
(a) Normal metal without charge and magnetic order. (b) Charge ordered stripe antiferromagnetic insulator. (c) Charge ordered 120$^\circ$ antiferromagnetic insulator. The energy of the stripe antiferromagnetic state is found to be the lowest one. } \label{fig:UV_states} \end{figure} \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{fig_7.pdf}% \caption{(Color online) Results for the honeycomb lattice at $3/4$ filling for $U=V$ obtained by the Hartree-Fock method. (a) Energy of each state as a function of $U=V$. (b) The number of electrons for each sublattice for the ground states (normal metal and charge ordered collinear AF state). (c) Magnetization of the ground states. Charge-poor (charge-rich) sites show large (small) magnetization. (d) Density of states for the ground state. The Fermi level is set to $0$.} \label{fig:UV_MF} \end{figure} On the other hand, in the absence of on-site Coulomb interaction $U$ (along $U=0$) and for finite $V$, within the $2\times 2$ sublattice structure, we obtain a staggered charge order state, where $c$-orbital sites are charge-rich ($n_c=n_{c\uparrow}+n_{c\downarrow}>3/2$) while $d$-orbital sites are charge-poor ($n_d=n_{d\uparrow}+n_{d\downarrow}<3/2$), as shown in Fig.~\ref{fig:MF_phase_diag}(c). In the absence of on-site Coulomb interaction ($U=0$), this charge ordered state does not have any magnetic order. As shown in Fig.~\ref{fig:U0_MF}, we find a phase transition from the nonmagnetic metal to the charge ordered metal at $V/t\sim 2$, which is stabilized by the splitting of the upper and lower bands. This state is metallic since the upper band is always half filled for $n=3/2$. We now consider the case of large $U$ and $V$ values, where charge order is expected to generate complex magnetic orders. As shown in Fig.~\ref{fig:MF_phase_diag} (a), when both $U$ and $V$ are large, we find a charge ordered antiferromagnetic insulator.
It has a rich-poor-rich-poor type charge pattern, and the charge-poor sites form an emergent triangular structure. Magnetic order is found to be collinear and shows stripe order [Fig.~\ref{fig:MF_phase_diag} (d)]. On the other hand, when $U$ is much larger than $V$, a charge ordered ferromagnetic insulating phase [Fig.~\ref{fig:MF_phase_diag} (e)] appears. It also has triangular-like charge order, and charge-poor sites show dominant ferromagnetic order. In order to investigate the possible antiferromagnetic patterns on the emergent triangular structure, we consider the collinear and the commensurate spiral state with $120^\circ$ order of Fig.~\ref{fig:UV_states} along the $U=V$ line of the phase diagram. We note that, in general, magnetic states may show incommensurate coplanar spiral order or noncoplanar order in doped Hubbard models~\cite{pasrija2016}. However, here we restrict ourselves to the coplanar case. As shown in Fig.~\ref{fig:UV_MF}, the stripe antiferromagnetic charge ordered state is found to have a lower energy than the 120$^\circ$ N\'eel ordered state, although the energy difference becomes extremely small as $U$ and $V$ increase. The stability of the collinear phase should be induced by second-order processes where a doubly occupied site is formed in the charge-poor sublattice, after the hopping of one electron from the charge-rich sublattice. Indeed, the hopping of one electron from a doubly occupied site to a singly occupied neighboring one is favored when collinearity holds, even for large values of $U$ and $V$. Since the intermediate state costs an energy $2V$, see for instance the first part of the process in Fig.~\ref{fig:UV_Jeff_Jnn}, the energy of the second-order process scales as $t^2/V$, as confirmed by the Hartree-Fock calculations for both stripe and 120$^\circ$ N\'eel antiferromagnetic charge ordered states in the insulating phase. 
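The $t^2/V$ scaling of this second-order process can be made explicit. Denoting by $|0\rangle$ the charge ordered configuration and by $|m\rangle$ the intermediate configuration with one electron transferred to the charge-poor sublattice (at an excitation energy $E_m-E_0=2V$, as above), standard second-order perturbation theory gives, schematically, \begin{equation*} \Delta E^{(2)} = -\sum_{m} \frac{|\langle m|H_t|0\rangle|^2}{E_m-E_0} \sim -\frac{t^2}{2V}, \end{equation*} where $H_t$ denotes the hopping part of the Hamiltonian; each allowed virtual hopping process therefore lowers the energy by an amount of order $t^2/V$.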
Moreover, as discussed in Appendix~\ref{sec:Perturbation}, effective next-neighbor exchange couplings on the emergent triangular lattice are generated for moderately large values of $U$ and $V$, favoring collinear orderings~\cite{jolicoeur1990,chubukov1992}. All these contributions may then break down the 120$^\circ$ N\'{e}el spin state, and induce the observed collinear pattern. \subsection{Spin correlation in charge ordered states} \label{sec:spin_correlations} \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_8.pdf} \caption{(Color online) Illustration of effective (a) two- and (b) three-spin interaction terms of Eqs.~(\ref{eq:H_spin}) and (\ref{eq:H_perm}) and (c) a four-site spin ring exchange. } \label{fig:UV_2_3_4_interaction} \end{figure} In the previous section, we showed that when correlations generate staggered charge order patterns on the honeycomb lattice, the charge-rich and charge-poor sites each form a triangular lattice. At $3/4$ filling, charge-rich sites contain two electrons on average, while charge-poor sites contain one electron on average, with spin degrees of freedom. In order to investigate how magnetic order appears in this limit, we apply perturbation theory to obtain an effective spin Hamiltonian. At the lowest order, the effective low-energy spin Hamiltonian on the triangular lattice (see Fig.~\ref{fig:UV_2_3_4_interaction}) contains a spin exchange interaction and a three-particle permutation term, namely, \begin{equation} H_{\rm spin} = J_1\sum_{\langle i,j \rangle_1} \bm{S}_i\cdot \bm{S}_{j} \label{eq:H_spin} \end{equation} and \begin{equation} H_{\rm perm} = K_3\sum_{\triangle} (P_3 + P_3^{-1}). \label{eq:H_perm} \end{equation} (Further details are given in Appendix~\ref{sec:Perturbation}.)
Here, the sum is taken over all nearest-neighbor sites denoted by $\langle i,j \rangle_1$ for $H_{\rm spin}$, while it is taken over all triangles, which connect charge-poor sites, for $H_{\rm perm}$, as illustrated in Fig.~\ref{fig:UV_2_3_4_interaction}. The symbol $P_n$ denotes a cyclic permutation operator. \begin{figure*}[t] \includegraphics[width=0.95\textwidth]{fig_9a-c.pdf}\newline \includegraphics[width=0.95\textwidth]{fig_9d-f.pdf} \caption{(Color online) Variational Monte Carlo (symbols) and Hartree-Fock (lines) results for the nonmagnetic metallic state ($U/t=V/t=2$, top row) and the charge-ordered antiferromagnetic insulating state [$U/t=V/t=10$, bottom row; as illustrated in Fig.~\ref{fig:MF_phase_diag}(d)]. In the first column [panels (a) and (d)] the number of electrons for the two sites making up the unit cell is given [denoted as $c$ and $d$ orbitals; compare Eq.~(\ref{eq:hubbard_2orbital})]. In the second column [panels (b) and (e)] the respective sublattice magnetizations are shown. In the last column [panels (c) and (f)] the respective momentum distributions $n(k)$, as evaluated for $200$ sites (using VMC), are presented. The hexagon denotes the Brillouin zone of the honeycomb lattice.} \label{fig:UV_VMC} \end{figure*} By considering virtual hopping processes, as shown in Fig.~\ref{fig:UV_Jeff_Jnn}, the exchange interaction $J_1$ can be evaluated as a function of $t$, $U$, and $V$, namely, $J_1=c_1t^4/(V^2U)$ with a positive constant $c_1=1$. This results in antiferromagnetic spin correlations. Similarly, the coefficient $K_3$ in the permutation terms can be evaluated by considering six cyclic processes for right-pointing triangles that lie inside the hexagons ($K_3^{\triangleright}$) and for left-pointing triangles that connect three hexagons ($K_3^{\triangleleft}$).
In Appendix~\ref{sec:Perturbation} we show in Fig.~\ref{fig:UV_Jeff_Jr3} one of the virtual processes generating $K_3^{\triangleright}$, which does not require the formation of intermediate doubly occupied sites. This coefficient survives even for $U=\infty$, namely, $K_3^{\triangleright}=-d_3t^6/V^5$ with a positive constant $d_3$. When $U<\infty$, the formation of intermediate doubly occupied sites leads to six other cyclic processes in $K_3^{\triangleleft}$ and in $K_3^{\triangleright}$. Since $P_3$ can be mapped to two-spin exchange operators~\cite{roger1983}, these permutation terms finally result in ferromagnetic exchange interactions. The effective spin Hamiltonian is given by \begin{equation} H = J_1^{\rm eff} \sum_{\langle i,j \rangle_1} \bm{S}_i\cdot \bm{S}_{j} \end{equation} with $J_1^{\rm eff} = J_1+2K_3^{\triangleright}+2K_3^{\triangleleft}$. In the limit $(V/t)^3\gg U/t$, antiferromagnetic spin correlations become relevant ($J_1\gg K_3$), and hence $J_1^{\rm eff}$ becomes antiferromagnetic. On the other hand, when $U/t\gg (V/t)^3$, antiferromagnetic spin correlations are suppressed ($K_3\gg J_1$), leading to a ferromagnetic $J_1^{\rm eff}$. This is consistent with the results in the previous section. Furthermore, when $(V/t)^3\sim U/t$, the ferromagnetic $K_3$ and antiferromagnetic $J_1$ nearly cancel out. In this case, higher-order processes in the perturbation theory become relevant. One of the dominant terms is a four-spin ring-exchange interaction $K_4 (P_4 + P_4^{-1})$ on the effective triangular lattice, as shown in Figs.~\ref{fig:UV_2_3_4_interaction} (c) and \ref{fig:UV_Jeff_Jr4}. This may induce exotic spin liquid states~\cite{motrunich2005,grover2010,holt2014} or chiral magnetic order~\cite{korshunov1993,kubo1997}. Moreover, the effective energy scale is extremely small ($|J_1^{\rm eff}|\sim |K_4|\sim t^8/V^7$), which induces highly degenerate low-energy states.
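The mapping of $P_3$ onto pair exchanges is the standard spin-$1/2$ identity~\cite{roger1983}: writing the transposition as $P_{ij}=\tfrac{1}{2}+2\,\bm{S}_i\cdot\bm{S}_j$, one has \begin{equation*} P_3+P_3^{-1}=P_{ij}+P_{jk}+P_{ki}-1, \end{equation*} and therefore \begin{equation*} K_3\left(P_3+P_3^{-1}\right)=2K_3\left(\bm{S}_i\cdot\bm{S}_j+\bm{S}_j\cdot\bm{S}_k+\bm{S}_k\cdot\bm{S}_i\right)+\frac{K_3}{2}. \end{equation*} Since every nearest-neighbor bond of the effective triangular lattice is shared by exactly one right- and one left-pointing triangle, the sum over triangles contributes $2K_3^{\triangleright}+2K_3^{\triangleleft}$ to the pair exchange, which is the origin of the expression for $J_1^{\rm eff}$ given above; a negative $K_3$ thus acts as a ferromagnetic coupling.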
\begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_10.pdf}% \caption{(Color online) Schematic phase diagram for the honeycomb lattice at $3/4$ filling for $U=V$ obtained using Hartree-Fock (MF, top) and variational Monte Carlo (VMC, bottom).} \label{fig:UV_PD} \end{figure} \subsection{Charge ordering vs.\ phase separation} The presence of charge order is a necessary condition for the validity of the perturbation expansion discussed in Sec.~\ref{sec:spin_correlations}. The mean-field approach, however, tends to overestimate the stability range of ordered phases and to underestimate that of nonordered metallic phases, which are in turn stabilized by quantum fluctuations. In what follows, we investigate the stability of the ordered state by performing variational Monte Carlo simulations. We prepare the initial states by choosing the charge ordered states found in the restricted Hartree-Fock method and then optimize the variational parameters. The details of the method are presented in Appendix~\ref{sec:VMC}. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_11.pdf}% \caption{(Color online) Charge structure factors at $(U/t,V/t)=(0,30)$ (phase separated state) for the honeycomb lattice with $288$ sites, as obtained by VMC. (a) Total charge structure factor $N(q)$. (b) Charge-rich orbital ($c$-orbital-$c$-orbital) charge structure factor $N^{cc}(q)$. (c) Charge structure factor between two different orbitals ($c$-orbital and $d$-orbital) $N^{cd}(q)$. (d) Charge-poor orbital ($d$-orbital-$d$-orbital) charge structure factor $N^{dd}(q)$. The total charge structure factor $N(q)$ shows sharp peaks at the smallest achievable wave vector $q$, suggesting phase separation. Since $N^{dd}(q_{\rm peak})\gg N^{cc}(q_{\rm peak})$ and $N(q)$ is similar to the $d$-orbital (charge-poor orbital) charge structure factor $N^{dd}(q)$, phase separation mainly occurs in the $d$-orbital sector.
} \label{fig:U0_VMC_Nq} \end{figure} \begin{figure*}[t] \includegraphics[width=0.85\textwidth]{fig_12.pdf} \caption{(Color online) Snapshot of a phase separated state at $(U/t,V/t)=(0,30)$ for $288$ sites in the VMC calculation. Large and small circles correspond to doublons and spinons, respectively. Dots correspond to holons. Up and down arrows correspond to up and down spins, respectively. (a) Spin and charge configurations for both orbitals. (b) Same snapshot for only the charge-rich $c$ orbital. It mainly consists of doublons. (c) Same snapshot for only the charge-poor $d$ orbital. It consists of large doublon and holon islands.} \label{fig:U0_VMC_snap_shot} \end{figure*} The Hartree-Fock calculation suggests that a triangular-like charge order appears at large $V$. We first use the variational Monte Carlo (VMC) method to check the stability of this order when both $U$ and $V$ are large. We confirm the presence of the insulating state with charge order and stripe antiferromagnetic order at $U/t=V/t=10$. Both the number of electrons per orbital and the magnetization are nearly saturated, as shown in Figs.~\ref{fig:UV_VMC}(d)--\ref{fig:UV_VMC}(f). The momentum distribution $n(k)$ is a smooth function of $k$ [Fig.~\ref{fig:UV_VMC}(f)], suggesting the state to be insulating. Note that our variational wave function also finds a metallic state without charge and magnetic orders at $U/t=V/t=2$ [see Figs.~\ref{fig:UV_VMC}(a)--\ref{fig:UV_VMC}(c)]. Figure~\ref{fig:UV_PD} presents the schematic phase diagram for $V=U$ obtained with the various approaches considered here. Moreover, we do not find any indication of phase separation, which can be detected by the divergence of the charge structure factor $N(q)$ at the smallest achievable wave vector $q\sim 2\pi/L$~\cite{becca2000}. We now investigate the case of $U=0$ and large $V$, namely, $V/t=10$, $20$, $30$, with the VMC method. Note that perturbation theory is not applicable in this case since $U$ is not large enough.
When $V/t=10$, the charge ordered metallic state found in the mean-field calculation is replaced by a metal without any charge order. The total charge structure factor $N(q)$, see Eq.~(\ref{eq:def_structure_factor_Nq}), shows $q$-linear behavior near $q\sim 0$, suggesting the state to be metallic. On the other hand, when $V/t=20$ and $30$, we find a charge disproportionate state, where the average number of $c$ electrons is larger than that of $d$ electrons. As shown in Fig.~\ref{fig:U0_VMC_Nq}, $N(q)$ is found to have sharp peaks near $q\sim 2\pi/L$, suggesting phase separation~\cite{becca2000}. The peaks in $N(q)$ appear to be dominated by those of the charge-poor $d$-orbital charge structure factor $N^{dd}(q)$; see Eq.~(\ref{eq:def_structure_factor_Nabq}). This means that phase separation is mainly activated in the $d$-orbital sector. In order to clarify the mechanism of phase separation, we also take a snapshot of this state. As shown in Fig.~\ref{fig:U0_VMC_snap_shot}(a), phase separation is characterized by the coexistence of a charge ordered insulating region with a 2020$\cdots$ structure (doubly occupied--empty--doubly occupied--empty $\cdots$ sites) and a metallic region with a mixture of doubly occupied and singly occupied sites. Following standard conventions~\cite{anderson1988}, we refer to singly occupied sites, which carry a spin, as ``spinons'', and to doubly occupied (empty) sites, which carry no spin, as ``doublons'' (``holons''). For the charge-rich $c$ orbital, each site is nearly doubly occupied and there are no holon sites (empty sites) [see Fig.~\ref{fig:U0_VMC_snap_shot}(b)]. On the other hand, for the charge-poor $d$ orbital there are two islands: one formed by holons and the other one that is a mixture of doublons and spinons [see Fig.~\ref{fig:U0_VMC_snap_shot}(c)]. In this doublon-spinon mixture region, each spinon can hop through a doublon sea of the $c$ and $d$ orbitals.
This does not cost additional energy if two spinons are not next to each other on the original honeycomb lattice. Each spinon is always surrounded by three doublons, which reside on the nearest-neighbor sites of the honeycomb lattice. It can be assumed that one spinon and at least one doublon are bound together, and this new quasiparticle moves freely inside the doublon sea. The total kinetic energy gain is determined by the size of the doublon sea and the effective filling of the new quasiparticles. We note that the concept of spin-charge separation has been rigorously defined only in one spatial dimension; however, such a possibility has also been discussed in higher dimensions in the presence of geometrical frustration~\cite{balents2010,poilblanc2009}. Our numerical results imply that the system tends to generate a larger doublon sea with spinons to gain kinetic energy. This mechanism is similar to what has been found in the doped extended Hubbard model on a one-dimensional chain~\cite{mila1993,penc1994} and a two-leg ladder~\cite{vojta2001}. Summarizing, a nearest-neighbor Coulomb interaction $V$ at $U=0$ stabilizes a charge ordered metal at the Hartree-Fock level. However, the inclusion of quantum fluctuations via the Gutzwiller approximation (GA), see Appendix~\ref{sec:Gutzwiller}, and via finite-size VMC calculations suggests that the charge-ordered metal is replaced by phase separation, see Fig.~\ref{fig:U0_PD}, although the energies of these two states are found to be very close, as shown in Appendix~\ref{sec:U=0}. Figure~\ref{fig:U0_PD} shows the schematic phase diagram for $U=0$ and finite $V$. We note that the critical $V$ is shifted to a larger value in the VMC result.
\begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_13.pdf} \caption{(Color online) Schematic phase diagrams for the honeycomb lattice at $3/4$ filling and $U=0$ obtained using restricted Hartree-Fock (MF, top), the Gutzwiller approximation (GA, middle) and variational Monte Carlo (VMC, bottom). CO denotes a charge ordered phase, while PS denotes a phase separated phase. } \label{fig:U0_PD} \end{figure} \section{Emergent kagome and chain structures on a triangular system} \label{sec:triangular} \begin{figure}[t] \includegraphics[width=.7\columnwidth]{fig_14a.pdf}\\[\baselineskip] \includegraphics[width=.7\columnwidth]{fig_14b.pdf} \caption{(Color online) (a) Effective kagome lattice generated by charge order at $n=3/4$ on the triangular lattice. The unit cell contains four sites and is denoted by a green parallelogram. Small empty circles denote empty sites, while full red circles denote single-occupied sites. (b) Effective 1D chains generated by charge order at $n=3/4$ on the triangular lattice. Small empty circles denote empty sites, full small red circles denote single-occupied sites, while full large red circles denote double-occupied sites. } \label{fig:effective_lattices} \end{figure} In this section we investigate the extended Hubbard model on the isotropic triangular lattice, as defined by the Hamiltonian: \begin{equation}\label{eq:hubbard_triang} \begin{split} H=&-t\sum_{\langle i,j\rangle,\sigma} c^{\dagger}_{i,\sigma} c^{\phantom{\dagger}}_{j,\sigma} + \textrm{h.c.} +U\sum_i n_{i,\uparrow}n_{i,\downarrow} \\ & +V\sum_{\langle i,j\rangle} n_i n_j +V'\sum_{\langle\langle i, j\rangle \rangle} n_i n_j, \end{split} \end{equation} where $t$ denotes the hopping parameter, $U$ is the on-site Coulomb repulsion, $V$ is the nearest-neighbor Coulomb interaction, and $V'$ is the next-nearest-neighbor one. 
As in the previous section, we investigate repulsive Coulomb interactions, focusing on the appearance of charge-ordered states induced by a nonlocal potential. Here, we focus on $3/8$ filling ($n=3/4$), where emergent kagome and one-dimensional structures may be generated by the appearance of charge order. Both effective lattices are shown in Fig.~\ref{fig:effective_lattices}. When $U\gg V$, double occupancies are prohibited and the charge ordered ground state has a kagome-like structure, with three sites of the unit cell singly occupied and one site empty. By increasing the ratio $V/U$, the number of empty sites increases in order to avoid the energy loss from the $V$ term, thus inducing a one-dimensional (1D) charge structure. We remark that the presence of a nearest-neighbor Coulomb repulsion alone is not sufficient to stabilize the aforementioned charge orders, and an additional next-nearest-neighbor interaction $V'$ is necessary in the triangular lattice case. Indeed, if we consider for example the kagome-like order of Fig.~\ref{fig:effective_lattices} (a), we can describe it as alternating rows that are fully occupied and rows where only half of the sites are occupied. If the interaction is restricted to nearest neighbors, the relative positions of the empty sites in different rows can be changed without any further energy cost, implying the absence of charge order in the system. In the $t\to 0$ limit, the energies of the kagome and the 1D phases can be easily computed, being equal to $E=3/2(V+V')$ for the kagome substructure and to $E=V+V'+U/4$ for the 1D one. The 1D phase is then more favorable than the kagome one when $V+V'\ge U/2$. Here we set $U=30$, in analogy with our former investigation of charge ordered phases on the triangular lattice~\cite{tocchio2014}, and $V'=V/5$.
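This classical energy balance amounts to a few lines of arithmetic; a minimal numerical check (per-unit-cell energies as given above, in units of $t$, with $U=30$ and $V'=V/5$) is:

```python
# Classical (t -> 0) energies of the two candidate charge patterns on the
# 3/8-filled triangular lattice, as given in the text:
#   kagome pattern: E = 3/2 (V + V');  1D chains: E = V + V' + U/4.
U = 30.0            # on-site repulsion (units of t), as set in the text
ratio = 0.2         # V' = V / 5

def e_kagome(V):
    return 1.5 * (V + ratio * V)

def e_chains(V):
    return (V + ratio * V) + U / 4.0

# The 1D pattern wins once V + V' >= U/2, i.e. 1.2 V >= 15, so V = 12.5 t,
# consistent with the VMC transition found between V/t = 12 and 13.
V_cross = (U / 2.0) / (1.0 + ratio)
print(V_cross)
print(e_kagome(10.0) < e_chains(10.0))   # kagome favored at V/t = 10
print(e_kagome(14.0) > e_chains(14.0))   # chains favored at V/t = 14
```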
\begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_15.pdf} \caption{(Color online) Electronic density $n_{\alpha}$ as a function of $V/t$ obtained from VMC calculations for each of the four sublattices $A$, $B$, $C$, and $D$, as present in the effective lattices emerging from the charge ordered $n=3/4$ triangular lattice, as illustrated in Fig.~\ref{fig:effective_lattices}. The data are for $U/t=30$, $V'=V/5$, and a lattice size $L=144$.} \label{fig:density} \end{figure} The model of Eq.~(\ref{eq:hubbard_triang}) is studied by means of the variational Monte Carlo method, the details being presented in Appendix~\ref{sec:VMC}. In order to distinguish the different kinds of charge ordering in the model, we plot in Fig.~\ref{fig:density} the average electronic density per sublattice $n_{\alpha}$, with $\alpha=A,B,C,D$ for each of the four sublattices that build up the unit cell (see Fig.~\ref{fig:effective_lattices}). Our results show that for $V/t \le 5$, the charge is uniformly distributed in the lattice, while for $6\le V/t\le 12$ one sublattice depletes, with the electrons forming an effective kagome lattice. In this case the frustration of the original lattice is effectively enhanced. Finally, as expected from the Coulomb energy argument, the 1D substructure of Fig.~\ref{fig:effective_lattices} is stabilized for $V/t \ge 13$. As discussed also in the honeycomb lattice section, the static structure factor $N(q)=\langle n_q n_{-q}\rangle$ is a good indicator for metallic behavior. The metallic phase is characterized by $N(q)\propto |q|$ for $q\to 0$, which implies a vanishing gap for particle-hole excitations. On the contrary, $N(q)\propto q^2$ for $q\to 0$, implies a finite charge gap and insulating behavior~\cite{tocchio2011,tocchio2014}. 
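As a simple illustration of this criterion (a free-fermion sketch, not the VMC estimator used in this work), the static structure factor of noninteracting spinless fermions on a chain, a textbook metal, follows from Wick's theorem as $N(q)=\frac{1}{L}\sum_k f_{k+q}\,(1-f_k)$, with $f_k$ the Fermi occupation, and is indeed linear in $|q|$ at small $q$:

```python
import numpy as np

L = 400                               # chain length (sites)
k = 2 * np.pi * np.arange(L) / L      # allowed momenta in [0, 2*pi)
eps = -2.0 * np.cos(k)                # tight-binding dispersion
f = (eps < 0).astype(float)           # half filling: occupy states with eps < 0

def N_of_q(m):
    """N(q) for q = 2*pi*m/L via Wick's theorem: (1/L) sum_k f_{k+q} (1 - f_k)."""
    return np.sum(np.roll(f, -m) * (1.0 - f)) / L

qs = np.array([1, 2, 4, 8])           # smallest momenta, in units of 2*pi/L
vals = np.array([N_of_q(m) for m in qs])
print(vals / qs)                      # constant ratio: N(q) proportional to |q|
```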
The results shown in Fig.~\ref{fig:N_q} indicate that the system is metallic in the absence of charge order ($V/t=4,5$), while the charge ordered state with an effective kagome lattice exhibits insulating behavior ($V/t=6,8,10,12$). $N(q)$ is shown along the path in the Brillouin zone connecting the point $\Gamma=(0,0)$ to the point $M=(\pi,\pi/\sqrt{3})$, but similar behavior can also be obtained along other directions. The results for the 1D charge ordered phase at $V/t=13$ also indicate insulating behavior, although we observe a dependence on the path chosen in the Brillouin zone, with strong finite-size effects. By increasing the lattice size up to $L=400$ we find, however, insulating behavior along all the selected paths (not shown). \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_16.pdf} \caption{(Color online) Variational Monte Carlo results for $N(q)/|q|$ as a function of $|q|/\pi$ for different values of $V/t$. The data are for $n=3/4$ and $U/t=30$ for the triangular lattice and for momenta $q$ connecting $\Gamma=(0,0)$ and $M=(\pi,\pi/\sqrt{3})$. The results for lattice sizes $L=144$ and $L=256$ are superimposed.} \label{fig:N_q} \end{figure} In a similar way, one can consider the small-$q$ behavior of the spin-spin correlations $S(q)=\langle s_q s_{-q}\rangle$ to discriminate between spin gapped and spin gapless behavior. Our results indicate that the effective kagome lattice, induced by charge order, is characterized by gapless spin excitations, since $S(q)\propto |q|$ for $q\to 0$; see Fig.~\ref{fig:S_q}. Moreover, no peak can be observed in the spin-spin correlations, implying the absence of magnetic correlations, even at the short-range scale.
We point out that gapless spin excitations have also been proposed for the Heisenberg model on the kagome lattice by a similar variational approach~\cite{iqbal2013}, while the density matrix renormalization group approach suggests a finite gap in the spin excitations~\cite{depenbrock2012}. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_17.pdf} \caption{(Color online) $S(q)/|q|$ as a function of $|q|/\pi$ for different values of $V/t$, within the region where the effective kagome lattice is stabilized. Data are shown along the line between $\Gamma=(0,0)$ and $M=(\pi,\pi/\sqrt{3})$ in the Brillouin zone for the $L=144$ and $L=256$ lattice sizes.} \label{fig:S_q} \end{figure} We finally summarize the VMC phase diagram of the model of Eq.~(\ref{eq:hubbard_triang}) at $3/8$ filling, as a function of $V/t$, in Fig.~\ref{fig:pd_triang}. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig_18.pdf} \caption{(Color online) Schematic VMC phase diagram of the model of Eq.~(\ref{eq:hubbard_triang}) as a function of $V/t$ at $3/8$ filling, where we set $U/t=30$ and $V'=V/5$. For $V/t \le 5$ we observe a metallic phase with a uniform charge distribution. For $6 \le V/t\le 12$ we stabilize the charge ordered insulator with an effective kagome lattice of Fig.~\ref{fig:effective_lattices} (a). Finally, for $V/t \ge 13$, the charge ordered insulator with effective 1D chains of Fig.~\ref{fig:effective_lattices} (b) occurs.} \label{fig:pd_triang} \end{figure} \section{Conclusions} \label{sec:conclusions} In conclusion, by using a combination of Hartree-Fock, perturbation theory, and variational Monte Carlo methods, we have investigated the possibility of novel lattice structures emerging from charge disproportionation in doped systems via strong correlations.
In particular, we find an emergent geometrical frustration on the bipartite honeycomb lattice, and an enhancement of the underlying geometrical frustration on the triangular lattice, when Coulomb interactions beyond on-site are considered. Concerning the honeycomb lattice, we have found that in the presence of both on-site $U$ and nearest-neighbor $V$ Coulomb interactions, charge order converts the original honeycomb structure at $3/4$ filling into an effective {\it half-filled} triangular lattice, where the charge ordered state is characterized by a $2121\cdots$ ordered pattern, while the singly occupied sites have a macroscopic spin degeneracy. A nonzero hopping $t$ lifts the spin degeneracy by forming magnetic order, which can be controlled by the Coulomb interactions $U$ and $V$. Our Hartree-Fock analysis of charge order and spin correlations shows that most of the $U$-$V$ phase diagram at large values of $U$ and $V$ is characterized by a charge ordered antiferromagnetic insulator. This result is corroborated by VMC calculations for selected values of the parameter space. The emergent antiferromagnetic spin correlations are consistent with the effective antiferromagnetic Heisenberg model predicted by our perturbation theory analysis. When $U$ is much larger than $V$, a charge ordered ferromagnetic insulating state appears instead, which is consistent with the results from perturbation theory. By further decreasing $V$, charge order completely disappears, and eventually a Nagaoka ferromagnetic semimetal~\cite{nagaoka1966} appears for $U/t\gtrsim 6$. For $U=0$ and finite $V$, we find a charge ordered metal as the ground-state candidate. Inclusion of quantum fluctuations via the VMC method, as well as the Gutzwiller approximation, suggests however that this state may be unstable towards phase separation by forming a $2020\cdots$ charge ordered insulating region and a metallic one.
Concerning the triangular lattice, we find that effective kagome and one-dimensional lattices are generated at $3/8$ filling ($n=3/4$) because of the presence of charge order. We consider a large value of the on-site Coulomb repulsion $U$ and a small, but finite, value of the next-nearest-neighbor Coulomb interaction $V'=V/5$. By increasing $V/t$ above $V/t\simeq 5.5$, the uniform metallic phase evolves into an insulating state, where the electronic charges form a kagome structure, each site being singly occupied. The emergence of a kagome lattice out of the original triangular one effectively enhances the frustration of the original lattice. The behavior of the spin-spin correlations $S(q)$ shows that the effective kagome lattice generated by charge order is nonmagnetic, with gapless spin excitations. By further increasing $V/t$ above $V/t\simeq 12.5$, the number of empty sites increases in order to avoid the energy loss due to the $V$ term, thus generating another charge ordered insulator, where the electrons form a one-dimensional charge structure. Note added: Recently we became aware of a paper~\cite{sugita2016} by Sugita and Motome that reports the emergence of kagome and one-dimensional charge orders in a triangular extended Hubbard model in the presence of spin-orbit coupling. \acknowledgments The authors would like to thank F. Becca, A. Kim, and S. M. Winter for fruitful discussions. R.K., R.V., and C.G. acknowledge the support of the German Science Foundation through Grant No.\ SFB/TRR49. L.F.T. acknowledges the support of the Italian Ministry of Education, University, and Research through Grant No.\ PRIN 2010 2010LLKJBX. The variational Monte Carlo code used for the honeycomb lattice is based on a code first developed by Tahara~\cite{tahara2008}.
\section*{Results} \noindent {\bf Zero-determinant strategies in large-scale social dilemmas.} ZD strategies are particular memory-one strategies \cite{nowak:nature:1993,hauert:prsb:1997,sigmund:book:2010,press:pnas:2012}; they only condition their behavior on the outcome of the previous round. Memory-one strategies can be written as a vector $(p_{S,j})$, where $p_{S,j}$ denotes the probability to cooperate in the next round, given that the player previously played $S\in\{C,D\}$, and that $j$ of the co-players cooperated. ZD strategies have a particular form (see also Supporting Information): players with a ZD strategy set their cooperation probabilities such that \begin{equation} \label{Eq:DefZD} \begin{array}{llc} p_{C,j} &= &\displaystyle 1+\phi \Big[(1-s)(l-a_j)-\frac{n-j-1}{n-1}(b_{j+1}-a_j)\Big]\\[0.25cm] p_{D,j} &= &\displaystyle ~\phi \Big[(1-s)(l-b_j)+\frac{j}{n-1}(b_j-a_{j-1})\Big], \end{array} \end{equation} where $a_j$ and $b_j$ are the specific payoffs of the social dilemma (as outlined in Fig.~1), and where $l$, $s$, and $\phi>0$ are parameters that can be chosen by the player. While these ZD strategies may appear inconspicuous, they give players an unexpected control over the resulting payoffs of the game, as we will show below. Instead of presuming that players act in isolation, as in previous models of zero-determinant strategies \cite{press:pnas:2012,stewart:pnas:2012,ball:nature:2012,hilbe:pnas:2013,akin:2013,stewart:pnas:2013,hilbe:plosone:2013b,szolnoki:pre:2014,Pan:arxiv:2014} we explicitly allow subjects to form alliances, and to coordinate on some joint ZD strategy. Let $k$ be the number of allies, with $1\le k<n-1$ (in particular, this covers the case $k=1$ of solitary alliances). 
In the Supporting Information we prove that if each of the allies applies the same ZD-strategy with parameters $l$, $s$, and $\phi$, then payoffs satisfy the equation \begin{equation} \label{Eq:PropZD} \pi_{-\mathcal{A}}=s_\mathcal{A}\pi_\mathcal{A}+(1-s_\mathcal{A})l, \end{equation} where $\pi_\mathcal{A}$ is the mean payoff of the allies, $\pi_{-\mathcal{A}}$ is the mean payoff of all outsiders, and \begin{equation} \label{Eq:sEff} s_\mathcal{A}= \frac{s(n-1)-(k-1)}{n-k}. \end{equation} Relation [\ref{Eq:PropZD}] suggests that by using a ZD strategy, alliances exert a direct influence on the payoffs of the outsiders. This relation is remarkably general, as it is independent of the specific social dilemma being played, and as it is fulfilled irrespective of the strategies that are adopted by the outsiders (in particular, outsiders are not restricted to memory-one strategies; it even holds if some or all of the outsiders coordinate on a joint strategy themselves). We call the parameter $l$ the baseline payoff, $s$ the slope of the applied ZD strategy, and $s_\mathcal{A}$ the effective slope of the alliance. In the special case of a single player forming an alliance, $k=1$, the effective slope of the alliance according to Eq. [\ref{Eq:sEff}] simplifies to $s_\mathcal{A}=s$. The parameters $l,s,$ and $\phi$ of a ZD strategy cannot be chosen independently, as the entries $p_{S,j}$ of a ZD strategy are probabilities that need to satisfy $0\le p_{S,j}\le 1$. In general, the admissible parameters depend on the specific social dilemma being played. 
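To make these constraints concrete, one can construct the probabilities of Eq.~[\ref{Eq:DefZD}] for a specific dilemma. The sketch below is an illustration only: it assumes the standard linear public goods game with payoffs $a_j=rc(j+1)/n-c$ and $b_j=rcj/n$ (with $n=5$, $r=3$, $c=1$), together with a hypothetical parameter choice $l=1$, $s=0.5$, $\phi=0.2$, and checks whether the resulting entries are valid probabilities:

```python
import numpy as np

def pgg_payoffs(n, r, c):
    """Linear public goods game (assumed for illustration): a_j is a cooperator's
    and b_j a defector's payoff when j of the n-1 co-players cooperate."""
    j = np.arange(n)
    a = r * c * (j + 1) / n - c
    b = r * c * j / n
    return a, b

def zd_probabilities(n, a, b, l, s, phi):
    """Cooperation probabilities p_{C,j}, p_{D,j} of Eq. [1]."""
    pC = np.empty(n)
    pD = np.empty(n)
    for j in range(n):
        gapC = b[j + 1] - a[j] if j < n - 1 else 0.0  # prefactor vanishes at j = n-1
        gapD = b[j] - a[j - 1] if j > 0 else 0.0      # prefactor vanishes at j = 0
        pC[j] = 1 + phi * ((1 - s) * (l - a[j]) - (n - j - 1) / (n - 1) * gapC)
        pD[j] = phi * ((1 - s) * (l - b[j]) + j / (n - 1) * gapD)
    return pC, pD

n, r, c = 5, 3.0, 1.0
a, b = pgg_payoffs(n, r, c)
l, s, phi = 1.0, 0.5, 0.2                 # hypothetical parameter choice
pC, pD = zd_probabilities(n, a, b, l, s, phi)
valid = bool(np.all((pC >= 0) & (pC <= 1)) and np.all((pD >= 0) & (pD <= 1)))
print(valid)                              # True: this (l, s, phi) is admissible
```

Choosing, say, a larger $\phi$ or a baseline payoff $l$ outside the admissible window pushes some entries out of $[0,1]$, which is exactly the restriction described above.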
In the Supporting Information we show that exactly those relations [\ref{Eq:PropZD}] can be enforced for which either $s_\mathcal{A}=s=1$ (in which case the parameter $l$ in the definition of ZD strategies is irrelevant), or for which the parameters $l$ and $s_\mathcal{A}$ satisfy \begin{equation} \label{Eq:CharPayRelAlliances} \displaystyle \begin{array}{ccccc} \max \!\left\{b_j-\frac{j}{(1-s_\mathcal{A})(n-k)}(b_j-a_{j-1})\right\} \!\! &\!\! \le \!\!\! &\!\! l \!\! &\!\!\! \le \!\! &\!\! \min \! \left\{a_j+\frac{n-j-1}{(1-s_\mathcal{A})(n-k)}(b_{j+1}-a_j)\right\},\\ \end{array} \end{equation} where the maximum and minimum are taken over all possible group compositions, $0\le j \le n-1$. It follows that feasible baseline payoffs are bounded by the payoffs for mutual cooperation and mutual defection, $b_0\le l\le a_{n-1}$, and that the effective slope needs to satisfy the inequality $-1/(n-k)\le s_\mathcal{A}\le 1$. Moreover, as social dilemmas satisfy $b_{j+1}>a_j$ for all $j$, condition [\ref{Eq:CharPayRelAlliances}] implies that the range of enforceable payoff relations is strictly increasing with the size of the alliance -- larger alliances are able to enforce more extreme relationships between the payoffs of the allies and the outsiders. In the following, we will highlight several special cases of ZD strategies, and we discuss how subjects can increase their strategic power by forming alliances.\\ \noindent {\bf Fair alliances.} As a first example, let us consider alliances that apply a ZD strategy with slope $s_\mathcal{A}=s=1$. By Eq.~[\ref{Eq:PropZD}], such alliances enforce the payoff relation $\pi_\mathcal{A}=\pi_{-\mathcal{A}}$, such that the allies' mean payoff matches the mean payoff of the outsiders. We call such ZD strategies (and the alliances that apply them) {\it fair}.
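Relation~[\ref{Eq:PropZD}] for a fair alliance can be illustrated in a quick simulation (a sketch, not the proof of the Supporting Information): $k$ allies in a public goods game use the ZD strategy of Eq.~[\ref{Eq:DefZD}] with $s=1$ and $\phi=1/c$, while the outsiders play fixed memory-one strategies chosen arbitrarily for this example; the public goods payoffs $a_j=rc(j{+}1)/n-c$ and $b_j=rcj/n$ are assumed. The time-averaged payoffs of allies and outsiders then nearly coincide:

```python
import random

random.seed(1)
n, k, r, c = 6, 3, 3.0, 1.0          # group size, alliance size, PGG parameters
T = 200_000                          # number of rounds

def a(j): return r*c*(j + 1)/n - c   # cooperator payoff, j cooperating co-players
def b(j): return r*c*j/n             # defector payoff

def p_fair(coop, j):                 # Eq. [1] with s = 1, phi = 1/c (l drops out)
    if coop:
        return 1 - (n - j - 1)/(n - 1)*(b(j + 1) - a(j))/c
    return j/(n - 1)*(b(j) - a(j - 1))/c

outsiders = [(0.9, 0.2), (0.6, 0.5), (0.3, 0.8)]   # arbitrary (p_C, p_D) pairs

state = [True]*n                     # everybody starts by cooperating
payoff = [0.0]*n
for _ in range(T):
    nxt = []
    for i in range(n):
        j = sum(state) - state[i]    # cooperating co-players of player i
        payoff[i] += a(j) if state[i] else b(j)
        if i < k:                    # players 0..k-1 form the alliance
            p = p_fair(state[i], j)
        else:
            p = outsiders[i - k][0] if state[i] else outsiders[i - k][1]
        nxt.append(random.random() < p)
    state = nxt

pi_A = sum(payoff[:k])/(k*T)
pi_out = sum(payoff[k:])/((n - k)*T)
print(round(pi_A, 2), round(pi_out, 2))   # the two averages nearly coincide
```

The outsider strategies and parameter values are illustrative assumptions; the enforced equality $\pi_\mathcal{A}=\pi_{-\mathcal{A}}$ holds regardless of their choice.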
As shown in Figure 2A, fair strategies do not ensure that all group members get the same payoff -- due to our definition of social dilemmas, unconditional defectors always outperform unconditional cooperators, no matter whether the group also contains fair players. Instead, fair players can only ensure that they do not take any unilateral advantage of their peers. Interestingly, it follows from Eq.~[\ref{Eq:sEff}] that fair alliances consist of fair players: because $s_\mathcal{A}=1$ implies $s=1$, each player $i$ of a fair alliance individually enforces the relation $\pi_i=\pi_{-i}$. It also follows from our characterization [\ref{Eq:CharPayRelAlliances}] that such fair ZD strategies exist for all social dilemmas -- irrespective of the particular payoffs and irrespective of the group size. As an example, let us consider the strategy {\it proportional Tit-for-Tat} ($pTFT$), for which the probability to cooperate is simply given by the fraction of cooperators among the co-players in the previous round, \begin{equation}\label{eq:ptft} pTFT_{S,j}=\frac{j}{n-1}. \end{equation} For pairwise games, this definition of $pTFT$ simplifies to the classical Tit-for-Tat strategy. However, also for the public goods game and for the volunteer's dilemma, $pTFT$ is a ZD strategy (because it can be obtained from Eq.~[\ref{Eq:DefZD}] by setting $s=1$ and $\phi=1/c$, with $c$ being the cost of cooperation). As $s=1$, alliances of $pTFT$ players are fair, and they enforce $\pi_\mathcal{A}=\pi_{-\mathcal{A}}$. Interestingly, this strategy has received little attention in the previous literature. Instead, researchers have focused on other generalized versions of Tit-for-Tat, which cooperate if at least $m$ co-players cooperated in the previous round \cite{boyd:JTB:1988, kurokawa:PRSB:2009}, i.e. $p_{S,j}=0$ if $j<m$ and $p_{S,j}=1$ if $j\ge m$. 
Unlike ${\it pTFT}$, these threshold strategies neither enforce a linear relation between payoffs, nor do they induce fair outcomes, suggesting that $pTFT$ may be the more natural generalization of Tit-for-Tat for large social dilemmas. \\ \begin{figure}[t] \centering \includegraphics[width=15cm]{fig2} \caption{Characteristic dynamics of payoffs over the course of the game for three different alliances. Each panel depicts the payoff of each individual group member (thin lines) and the resulting average payoffs (thick lines) for the alliance (blue) and for outsiders (red). (A) An alliance that adopts a fair strategy ensures that the payoff of the allies matches the mean payoff of the outsiders. This does not imply that all outsiders receive the same payoff. (B) For games in which mutual defection leads to the lowest group payoff, extortionate alliances ensure that their payoffs are above average. (C) In games in which mutual cooperation is the social optimum, generous alliances let their co-players gain higher payoffs. The three graphs depict the case of a public goods game with $r=4$, $c=1$, group size $n=20$, and alliance size $k=8$. For the strategies of the outsiders we have used random memory-one strategies, where the cooperation probabilities were independently drawn from a uniform distribution. For the strategies of the allies, we have used (A) $pTFT$, (B) $p^{Ex}$ with $s=0.8$, (C) $p^{Ge}$ with $s=0.8$. } \label{Fig:Illustration2} \end{figure} \noindent {\bf Extortionate alliances.} As another interesting subclass of ZD strategies, let us consider strategies that choose the mutual defection payoff as baseline payoff, $l=b_0$, and that enforce a positive slope $0<s_\mathcal{A}<1$. For such strategies, relation [\ref{Eq:PropZD}] becomes $\pi_{-\mathcal{A}}= s_\mathcal{A}\pi_\mathcal{A}+(1-s_\mathcal{A})b_0$, implying that the outsiders only get a fraction $s_\mathcal{A}$ of any surplus over the mutual defection payoff.
Moreover, as the slope $s_\mathcal{A}$ is positive, the payoffs $\pi_\mathcal{A}$ and $\pi_{-\mathcal{A}}$ are positively related. As a consequence, the collective best reply for the outsiders is to maximize the allies' payoffs by cooperating in every round. In analogy to Press and Dyson \cite{press:pnas:2012}, we call such alliances {\it extortionate}, and we call the quantity $\chi=1/s_\mathcal{A}$ the extortion factor. Extortionate alliances are particularly powerful in social dilemmas in which mutual defection leads to the lowest group payoff (as in the public goods game and in the volunteer's dilemma): in that case, they enforce the relation $\pi_{-\mathcal{A}} \le \pi_\mathcal{A}$; on average, the allies perform at least as well as the outsiders (as also depicted in Figure~2B). Similarly to the results for fair alliances, extortionate alliances consist of extortionate players: an alliance that enforces the baseline payoff $l=b_0$ and a slope $0<s_\mathcal{A}<1$ requires the allies to use ZD strategies with $l=b_0$ and $0<s<1$, such that each player $i$ individually enforces the relation $\pi_{-i}=s\pi_i+(1-s)b_0$. For the specific example of a public goods game, let us consider the ZD strategy $p^{Ex}$ with $l=0$, $\phi=1/c$, and $0<s<1$, for which Eq.~[\ref{Eq:DefZD}] becomes \begin{equation} \begin{array}{lcl} p_{C,j}^{Ex} &= &\frac{j}{n-1}-(1-s)\left(\frac{rj}{n}-\frac{n-r}{n}\right)\\[0.3cm] p_{D,j}^{Ex} &= &\frac{j}{n-1}-(1-s)\frac{rj}{n} \end{array} \end{equation} In the limit of $s\rightarrow 1$, these extortionate strategies approach the fair strategy $pTFT$. However, as $s$ decreases from 1, the cooperation probabilities of $p^{Ex}$ are increasingly biased to the own advantage (with the probabilities $p_{D,j}^{Ex}$ decreasing more rapidly than the probabilities $p_{C,j}^{Ex}$). As with fair strategies, such extortionate strategies exist for all repeated social dilemmas. 
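As a cross-check, the closed form of $p^{Ex}$ indeed follows from Eq.~[\ref{Eq:DefZD}]; a small numeric verification (assuming the public goods payoffs $a_j=rc(j{+}1)/n-c$ and $b_j=rcj/n$, with example parameters):

```python
n, r, c, s = 10, 3.0, 1.0, 0.8       # example parameters

def a(j): return r*c*(j + 1)/n - c   # cooperator payoff, j cooperating co-players
def b(j): return r*c*j/n             # defector payoff

l, phi = 0.0, 1.0/c                  # extortionate choice: baseline l = b_0 = 0
for j in range(n):
    # Eq. [1] with these parameters ...
    pC = 1 + phi*((1 - s)*(l - a(j)) - (n - j - 1)/(n - 1)*(b(j + 1) - a(j)))
    pD = phi*((1 - s)*(l - b(j)) + j/(n - 1)*(b(j) - a(j - 1)))
    # ... reproduces the closed form of p^Ex displayed above:
    assert abs(pC - (j/(n - 1) - (1 - s)*(r*j/n - (n - r)/n))) < 1e-12
    assert abs(pD - (j/(n - 1) - (1 - s)*r*j/n)) < 1e-12
    assert 0.0 <= pC <= 1.0 and 0.0 <= pD <= 1.0
# As s -> 1, both entries tend to pTFT's j/(n-1).
```
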
However, in large groups the power of alliances to extort their peers depends on the social dilemma, and on the size of the alliance (as generally described by condition [\ref{Eq:CharPayRelAlliances}]). For example, for single-player alliances ($k=1$) in the public goods game, the feasible extortion factors $\chi$ are bounded when groups become large, with $\chi_{\max}=(n-1)r/\big((n-1)r-n\big)$ being the maximum extortion factor (see also \cite{Pan:arxiv:2014}). To be able to enforce arbitrarily high extortion factors, players need to form an alliance such that the fraction of alliance members exceeds a critical threshold. By solving condition~[\ref{Eq:CharPayRelAlliances}] for the case of extortionate coalitions with infinite extortion factors (i.e., $l=0$ and $s_\mathcal{A}=0$), this critical threshold can be calculated explicitly as \begin{equation} \frac{k}{n} \ge \frac{r-1}{r}. \end{equation} Only for alliances with this critical mass are there no bounds to extortion. \\ \noindent {\bf Generous alliances.} As the benevolent counterpart to extortioners, Stewart and Plotkin were the first to describe a set of generous strategies for the iterated prisoner's dilemma \cite{stewart:pnas:2012,stewart:pnas:2013}. Unlike extortioners, generous alliances set the baseline payoff to the mutual cooperation payoff $l=a_{n-1}$, while still enforcing a positive slope $0<s_\mathcal{A}<1$. This results in the payoff relation $\pi_{-\mathcal{A}} = s_\mathcal{A}\pi_\mathcal{A} + (1-s_\mathcal{A}) a_{n-1}$, such that generous alliances accept the larger share of any loss (compared to the mutual cooperation payoff $a_{n-1}$). In particular, for games in which mutual cooperation is the optimal outcome (as in the public goods game and in the prisoner's dilemma, but not in the volunteer's dilemma), the payoff of a generous player satisfies $\pi_\mathcal{A}\le \pi_{-\mathcal{A}}$ (see also Fig. 2C depicting the case of a public goods game).
As with fair and extortionate alliances, generous alliances consist of players that are individually generous. For the example of a public goods game, we obtain a particularly simple generous ZD strategy $p^{Ge}$ by setting $l=rc-c$, $\phi=1/c$, and $0<s<1$, such that \begin{equation} \begin{array}{lcl} p_{C,j}^{Ge} &= &\frac{j}{n-1}+(1-s)\frac{(n-j-1)r}{n}\\[0.3cm] p_{D,j}^{Ge} &= &\frac{j}{n-1}+(1-s)\frac{(n-j)r-n}{n} \end{array} \end{equation} In parallel to the extortionate strategy discussed before, these generous strategies approach $pTFT$ in the limit $s\rightarrow 1$, whereas they enforce more generous outcomes for $s<1$. Again, generous strategies exist for all social dilemmas, but the extent to which players can be generous depends on the particular social dilemma, and on the size of the alliance.\\ \noindent {\bf Equalizers.} As a last interesting class of ZD strategies, let us consider alliances that choose $s=(k-1)/(n-1)$, such that by Eq.~[\ref{Eq:sEff}] the effective slope becomes $s_\mathcal{A}=0$. By Eq.~[\ref{Eq:PropZD}], such alliances enforce the payoff relation $\pi_{-\mathcal{A}}=l$, meaning that they have unilateral control over the mean payoff of the outsiders (for the prisoner's dilemma, such equalizer strategies were first discovered in \cite{boerlijst:AMM:1997}). However, as with extortionate and generous strategies, equalizer alliances need to reach a critical size to be able to determine the outsiders' payoff; this critical size depends on the particular social dilemma, and on the imposed payoff $l$ (the exact condition can be obtained from [\ref{Eq:CharPayRelAlliances}] by setting $s_\mathcal{A}=0$). For the example of a public goods game, a single player can only set the co-players' mean score if the group size satisfies $n\le 2r/(r-1)$.
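Both critical alliance sizes can be recovered by scanning condition~[\ref{Eq:CharPayRelAlliances}] numerically; a sketch for the public goods game (again assuming the payoffs $a_j=rc(j{+}1)/n-c$ and $b_j=rcj/n$, with illustrative parameters $n=20$, $r=4$):

```python
import math

n, r, c = 20, 4.0, 1.0

def a(j): return r*c*(j + 1)/n - c   # cooperator payoff, j cooperating co-players
def b(j): return r*c*j/n             # defector payoff

def l_range(k, sA):
    """Enforceable baseline payoffs [lo, hi] according to condition [4]."""
    lo = max(b(j) - j*(b(j) - a(j - 1))/((1 - sA)*(n - k)) for j in range(n))
    hi = min(a(j) + (n - j - 1)*(b(j + 1) - a(j))/((1 - sA)*(n - k)) for j in range(n))
    return lo, hi

def feasible(k, sA, l=None):
    lo, hi = l_range(k, sA)
    return lo <= hi if l is None else lo <= l <= hi

# Unbounded extortion (l = 0 in the limit s_A -> 0) needs k/n >= (r-1)/r:
k_extort = min(k for k in range(1, n) if feasible(k, 0.0, l=0.0))
print(k_extort, n*(r - 1)/r)          # 15 15.0

# Equalizer alliances (s_A = 0, some enforceable l) first become feasible at:
k_equal = min(k for k in range(1, n) if feasible(k, 0.0))
print(k_equal)                        # 12
```

The scan confirms that a single player cannot equalize in this group, while a sufficiently large alliance can.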
For larger group sizes, players need to form alliances, with \begin{equation} \label{Eq:Equalizer} \frac{k}{n} \ge \frac{(n-2)(r-1)}{n+(n-2)r} \end{equation} being the minimum fraction of alliance members that is needed to dictate the outsiders' payoff. Although the right hand side of Eq.~[\ref{Eq:Equalizer}] is monotonically increasing with group size, equalizer alliances are always feasible; in particular, alliances of size $k=n-1$ can always set the payoff of the remaining player to any value between $0$ and $rc-c$.\\ \begin{table}[t] {\singlespacing \small \begin{tabular}{c|c|ccc} \Xhline{4\arrayrulewidth} &&&&\\[-.1cm] \parbox[c]{1.6cm}{\centering Strategy class} &\parbox[c]{1.4cm}{\centering Typical property} &\parbox[c]{1.2cm}{\centering Prisoner's dilemma} &\parbox[c]{3cm}{\centering Public goods game} &\parbox[c]{3cm}{\centering Volunteer's dilemma}\\[0.3cm] \hline &&&&\\[-.1cm] \parbox[c]{1.6cm}{\centering Fair\\ strategies} &$\pi_{-\mathcal{A}}=\pi_\mathcal{A}$ &\parbox[c]{1.2cm}{\centering Always exist} &\parbox[c]{1.2cm}{\centering Always exist} &\parbox[c]{1.2cm}{\centering Always exist}\\[0.4cm] \parbox[c]{1.6cm}{\centering Extortionate strategies} &$\pi_{-\mathcal{A}}\le \pi_\mathcal{A}$ &\parbox[c]{1.2cm}{\centering Always exist} &\parbox[c]{4.7cm}{\centering In large groups, single players cannot be arbitrarily extortionate, but sufficiently large alliances can be arbitrarily extortionate} &\parbox[c]{3.9cm}{\centering Even large alliances cannot be arbitrarily extortionate}\\[0.9cm] \parbox[c]{1.6cm}{\centering Generous strategies} &$\pi_{-\mathcal{A}}\ge \pi_\mathcal{A}$ &\parbox[c]{1.2cm}{\centering Always exist} &\parbox[c]{4.7cm}{\centering In large groups, single players cannot be arbitrarily generous, but sufficiently large alliances can be arbitrarily generous} &\parbox[c]{3.8cm}{\centering Do not ensure that own payoff is below average}\\[0.9cm] Equalizers &$\pi_{-\mathcal{A}}=l~$ &\parbox[c]{1.2cm}{\centering Always exist} 
&\parbox[c]{4.7cm}{\centering May not be feasible for single players, but is always feasible for sufficiently large alliances} &\parbox[c]{3.9cm}{\centering Only feasible if the size of the alliance is $k=n-1$, can only enforce $l=b-c$}\\[0.5cm] \Xhline{4\arrayrulewidth} \end{tabular}} \caption{Strategic power of different ZD strategies for three different social dilemmas. In the repeated prisoner's dilemma, single players can exert all strategic behaviors \cite{press:pnas:2012,stewart:pnas:2013,hilbe:plosone:2013b}. Other social dilemmas either require players to form alliances in order to gain sufficient control (as in the public goods game), or they only allow for limited forms of control (as in the volunteer's dilemma).} \label{Tab:SumZD} \end{table} \noindent {\bf Strategic power of different ZD strategies.} Table 1 gives an overview for these four strategy classes for three examples of social dilemmas. It shows that while generally ZD strategies exist for all group sizes, the power of single players to enforce particular outcomes typically diminishes or disappears in large groups. Forming alliances allows players to increase their strategic scope. The impact of a given alliance, however, depends on the specific social dilemma: while alliances can become arbitrarily powerful in public goods games, their strategic options remain limited in the volunteer's dilemma. \begin{figure}[h!] \centering \includegraphics[width=7cm]{fig3} \caption{The effect of different alliance strategies and various alliance sizes. Each panel shows the outcome of simulated public goods games in which the alliance members interact with $n-k$ random co-players (uniformly taken from the set of memory-one strategies). 
We compare the success of different alliances along three dimensions: the relative payoff advantage of the alliance (defined as $\pi_\mathcal{A}/\pi_{-\mathcal{A}}$), the payoff inequality within a group (defined as the variance of the payoffs of all group members), and the absolute payoff of the alliance (as given by $\pi_\mathcal{A}$). Simulations suggest that (A) extortionate alliances gain the highest relative payoff advantages, (B) fair alliances reduce inequality within their group, and (C) sufficiently large generous alliances get the highest payoffs. For the simulations, we have used a public goods game ($r=3$, $c=1$) in a group of size $n=7$; data was obtained by averaging over 500 randomly formed groups. The strategy of the alliance members was $pTFT$, $p^{Ex}$ (with $s=0.85$), and $p^{Ge}$ (with $s=0.85$), respectively.} \label{Fig:DiffZD} \end{figure} While fair, extortionate, and generous alliances enforce different payoff relations, simulations suggest that each of these strategy classes has its particular strength when facing unknown opponents (Figure 3). Forming an extortionate alliance gives the allies a relative advantage compared to the outsiders, and by increasing the alliance's size, allies can enforce more extreme relationships. Forming a fair alliance, on the other hand, is an appropriate measure to reduce the payoff inequality within a group -- while the other two behaviors, generosity and extortion, are meant to induce unequal payoffs (to the own advantage, or to the advantage of the outsiders, respectively), fair players actively avoid generating further inequality by matching the mean payoff of the outsiders. Generous alliances, however, are most successful in increasing the absolute payoffs.
While it is obvious that generous alliances are beneficial for the outsiders (and that this positive effect is increasing in the size of the alliance), Figure 3 suggests that even the allies themselves may benefit from coordinating on a generous alliance strategy. Fair and extortionate alliances are programmed to fight back when being exploited; this is meant to reduce the outsiders' payoffs, but it also reduces the payoffs of the other allies. Therefore, when the alliance has reached a critical size, it is advantageous to agree on a generous alliance strategy instead (but without being overly altruistic), as it helps to avoid self-destructive vendettas. This somewhat unexpected strength of generous strategies is in line with previous evolutionary results for the iterated prisoner's dilemma. For this pairwise dilemma, several studies have reported that generosity, and not extortion, is favored by selection \cite{akin:2013,stewart:pnas:2013,hilbe:plosone:2013b}. Such an effect has also been confirmed in a recent behavioral experiment, in which human cooperation rates against generous strategies were twice as high as against extortioners, although full cooperation would have been the humans' best response in both cases \cite{hilbe:natcomm:2014}. Our results suggest that in multiplayer dilemmas, generous alliances are able to induce similarly beneficial group dynamics. In the Supporting Information we show that if a generous alliance has reached a critical mass, it becomes optimal for outsiders to become generous too (independent of the specific social dilemma, and independent of the strategy of the remaining outsiders). Once this critical mass is achieved, generosity proves self-enforcing. \section*{Discussion} When subjects lack individual power to enforce beneficial outcomes, they can often improve their strategic position by joining forces with others.
Herein, we have used and expanded the theory of zero-determinant strategies \cite{press:pnas:2012,akin:2013,stewart:pnas:2013} to explore the role of such alliances in repeated dilemmas. We have found that three key characteristics determine the effect of an alliance of ZD strategists: the underlying social dilemma, the size of the alliance, and the strategy of the allies. While subjects typically have little influence on the underlying dilemma, we have shown that they can considerably raise their strategic power by forming larger alliances, and they can achieve various objectives by choosing appropriate strategies. Our approach is based on the distinction between alliance members (who agree on a joint ZD strategy), and outsiders (who are not restricted to any particular strategy, and who may form an alliance themselves). This distinction allowed us to show the existence of particularly powerful alliances, and to discuss their relative strengths. As an interesting next step of research, we plan to investigate how such alliances are formed in the first place (which is typically at the core of traditional models of coalitions, e.g. \cite{peleg:book:2003}), and whether evolutionary forces would favor particular alliances over others \cite{Mesterton-Gibbons:jtb:2011}. The results presented herein suggest that subjects may have various motives to join forces. As particular examples, we have highlighted extortionate alliances (who aim for a relative payoff advantage over the outsiders), fair alliances (who aim to reduce the inequality within their group), and generous alliances (who are able to induce higher payoffs as they avoid costly vendettas after accidental defections).
Whether such alliances emerge and whether they are stable thus needs to be addressed in light of the respective aim of the alliance: when subjects are primarily interested in low inequality, then forming a fair alliance is an effective means to reach this aim; and once a fair alliance is formed, inequity-averse subjects have little incentive to leave (even if leaving the alliance would allow them to gain higher payoffs). While we have focused on the effects of alliances in multiplayer social dilemmas, it should be noted that our results on ZD strategies also apply to solitary alliances, consisting of single players only. Thus, even if players are unable to coordinate on joint strategies, zero-determinant strategies are surprisingly powerful. They allow players to dictate linear payoff relations, irrespective of the specific social dilemma being played, irrespective of the group size, and irrespective of the counter-measures taken by the outsiders. In particular, we have shown that any social dilemma allows players to be fair, extortionate, or generous. At the same time, zero-determinant strategies are remarkably simple. For example, in order to be fair in a public goods game (or in a volunteer's dilemma), players only need to apply a rule called proportional Tit-for-Tat ($pTFT$): if $j$ of the $n-1$ other group members cooperated in the previous round, then cooperate with probability $j/(n-1)$ in the following round. Extortionate and generous strategies can be obtained in a similar way, by slightly modifying $pTFT$ to the own advantage or to the advantage of the outsiders. While these results were derived for the special case of infinitely repeated games, they can be extended to the more realistic finite case.
In finitely repeated games, end-game effects may prevent alliances from enforcing a perfect linear relation between payoffs; but it is still possible to enforce an arbitrarily strong correlation between payoffs, provided that the game is repeated sufficiently often. Similarly, we show in the Supporting Information that it is not necessary that all alliance members coordinate on the same ZD strategy, and that different alliance members may apply different strategies. However, we have focused here on the case of symmetric alliances with a joint strategy, because they are most powerful: any payoff relationship that can be enforced by asymmetric alliances with different ZD strategies can also be enforced by a symmetric alliance. Overall, our results reveal how single players in multiplayer games can increase their strategic power by forming beneficial alliances with others, helping them to regain control in large-scale social dilemmas. \subsection*{Acknowledgments} CH gratefully acknowledges generous funding by the Schr\"odinger stipend J3475 of the Austrian Science Fund.
\section{Introduction} Braneworld models with variable brane tension provide a framework to probe signatures arising from high energy physics. Indeed, the drastic modification of the temperature of the Universe along its cosmological evolution motivates a braneworld scenario with variable tension, generalizing the original Randall-Sundrum model \cite{ran} in order to allow for Friedmann branes \cite{maar2000, binetruy, bazeia1}. The dynamics of branes with variable tension was investigated in \cite{gly1, gly2, bulk1,bulk2}, and also in a particular model where the brane tension depends exponentially on the scale factor \cite{european}. Branes are dynamical objects emerging as topological defects in field theory or solitonic vacuum solutions in string theory. They can fluctuate and are described by their inherent degrees of freedom, such as their tension. Otherwise the brane is rigid and there are no branons associated with it \cite{branons}. The field equations for a brane containing any type of matter, with a cosmological constant in the bulk, were introduced in \cite{binetruy}. There, the Friedmann equation in the context of a braneworld scenario was obtained, as well as the time dependent scale factor. Solutions describing realistic black holes on the brane (at least stable and free of naked singularities) are not straightforward to obtain. Computational methods regarding relativistic stars on the brane, as well as the exact solution of the collapse on the brane in the AdS/CFT correspondence framework \cite{emp2,emp3}, are some endeavours to address this question \cite{maartens, yoshino}. There are arguments that, whatever the solution is, it may tend to a static geometry at late times, close to the Schwarzschild one at large distances \cite{maartens}.
Motivated by generalizations in which black holes on the brane probe the bulk metric \cite{Gergely:2006hd,Anderson:2005af}, black strings associated to a Friedmann-Robertson-Walker (FRW) brane are presented here in a variable tension brane scenario. Our main aim is to consider a Schwarzschild black hole embedded in a FRW brane \cite{mcvittie}. The McVittie metric is known to be regular everywhere, and the solutions asymptote in the future, and near the horizon, to the Schwarzschild-de Sitter geometry. Such description holds if the cosmological scenario is dominated at late times by a positive cosmological constant \cite{Kaloper:2010ec}. For solutions without a positive cosmological constant the horizon is a {weak soft singularity}, but the metric can be extended, making it possible to assess the causal character of the singularity. McVittie solutions are spherically symmetric, parametrized by a scale factor $a(t)$ and a mass parameter $M$. They reduce to black hole metrics or to the standard FRW cosmology in suitable limits. These properties make such solutions physically relevant for describing real gravitating objects in the Universe. This solution has been the subject of a large number of investigations \cite{plb2013,lake1,Carrera:2009ve, faraoni,nolann,abd,lake2, Ferraris:1996ey,Haines:1993sd,Patel:1999ej,sakaihaines}, and the misconceptions around it were only recently clarified, establishing it as a true black hole model \cite{Kaloper:2010ec, lake1}. The spacetime related to the McVittie solution can be further generalized, allowing for a black hole mass variation that subsequently influences the associated black string. On the brane, this more general solution describes systems presenting a coupling between the local gravitational attraction and the expansion of the 4D Universe \cite{abd}, described by the braneworld.
{In this paper we analyze the evolution of the black string warped horizon along the extra dimension in a variable tension brane framework \cite{meuhoff}, where corrections to the black string warped horizon arise in the three stages of the evolution of the Universe. We shall show that on a variable tension braneworld, the associated black string may present finite extent along the extra dimension, even near the brane. Such a study is of great importance, in particular concerning black string stability/instability, as analyzed originally for the (Schwarzschild) black string \`a la Gregory and Laflamme \cite{greg}. In fact, when the Universe is dominated by non-relativistic matter or by relativistic matter/radiation, the black string extent is shown here to be finite, and the bulk solution is an exact regular solution: the 5D singularities in the bulk are removed as the cosmological time elapses, as an immediate effect of the variable brane tension. Moreover, we investigate the physical singularities in the bulk, determined by the curvature invariants. When a brane metric with a horizon is considered, it is a well established fact that it can generate in the bulk an additional singularity at the location of the horizon, whereas if a brane metric with no horizon is used instead, no additional singularity appears in the bulk \cite{Kanti1,Kanti2}. We shall investigate this feature for a black string associated to the McVittie metric on the brane, by studying the 4D and 5D Kretschmann invariants.} These two aims overlap and answer relevant questions.
In fact, as the singularity structure of the higher-dimensional spacetime is of great importance in deciding whether a particular solution is physically acceptable, by analyzing the 5D Kretschmann invariant we show that the black string warped horizon vanishes at some point along the extra dimension, for a Universe dominated by a cosmological constant, and this makes the brane singularities disappear. In other words, the vanishing of the warped horizon leaves behind a regular bulk. We show further that in the eras corresponding to a Universe dominated by matter or radiation, the physical singularities on the brane remain in the bulk, where no further singularities are formed. Our program throughout this paper is the following: the next Section briefly revisits how the bulk metric along the extra dimension can be obtained from information on the brane alone. This is accomplished by studying the black string warped horizon along the extra dimension, taking into account the terms involving the variable brane tension. In Section 3 we delve into a black hole metric on a FRW braneworld with variable tension, under the E\"otv\"os law. The associated black string warped horizon profile is studied for the three standard cases: the brane dominated by a) non-relativistic matter; b) radiation or relativistic matter; c) a cosmological constant. Each case is analyzed in depth in the context of a variable brane tension, together with its physical consequences. Cases a) and b) present unexpected properties: when the variable brane tension is taken into account, there is an era beyond which the black string warped horizon is zero, and the black string ceases to exist. In fact, {in Section 4 the black string physical singularities are analyzed. We show that when the black string horizon goes to zero, the singularities are also banished from the bulk, which yields a regular bulk solution.
For a Universe dominated by a cosmological constant, the physical singularities on the brane remain in the bulk, and no additional singularity appears in the bulk. } \section{Black String Warped Horizon and Variable Brane Tension} In a braneworld with a single extra dimension of infinite extent, the bulk coordinates decompose into components on the brane and orthogonal to the brane, as $ (x^{\alpha},y)$. The bulk is endowed with a metric $\check{g}_{AB}dx^A dx^B = g_{\mu\nu}(x^\alpha,y)\,dx^\mu dx^\nu + dy^2$. The brane metric components $g_{\mu\nu}$ and the bulk metric are related by $ \check{g}_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu, $ where the $n^\sigma$ are the components of the unit vector field normal to the brane, splitting the bulk. Moreover, $g_{55} = 1$ and $g_{j5} = 0$, $\kappa^2_{4}=\frac{1}{6}\lambda\kappa^4_5$ and $ \Lambda_4=\frac{\kappa_5^{2}}{2}\Big(\Lambda_{5}+\frac{1}{6}\kappa_5^{2}\lambda^{2}\Big)$, where $\Lambda_4$ is the effective brane cosmological constant, $\kappa_4$ [$\kappa_5$] denotes the 4D [5D] gravitational coupling, and $\lambda$ is the brane tension. The extrinsic curvature {\begin{eqnarray} K_{\mu\nu}&=&-\frac{1}{2}\kappa_5^2 \left(T_{\mu\nu}+ \frac{1}{3} \left(\lambda-T\right)g_{\mu\nu} \right)\label{kurv} \end{eqnarray}}\noindent is obtained by using the junction conditions. Hereafter $T^{\mu\nu}$ denotes the energy-momentum tensor; for any 2-tensor $D_{\mu\nu}$ we adopt the convention $D=D_\mu^{\;\mu}$ and $D^2 = D_{\rho\sigma}D^{\rho\sigma}$. One defines ${E}_{\mu\nu} = C_{\mu\nu\sigma\rho} n^\sigma n^\rho$ and ${\cal B}_{\mu\nu\alpha} = g_\mu^{\;\rho} g_\nu^{\;\sigma} C_{\rho\sigma\alpha\beta}n^\beta$, where $C_{\mu\nu\sigma\rho}$ is the 5D Weyl tensor. The field equations, together with the 5D Einstein and Bianchi equations \cite{3333,Gergely:2006hd, maartens}, are used to compute the bulk metric near the brane, and in particular the black string warped horizon along the extra dimension \cite{maar2000, casadio2004}.
Such a procedure provides information on all the bulk metric components \cite{meuhoff}, given by (denoting $g_{\mu\nu}(x^\alpha,0) = g_{\mu\nu}$): \begin{eqnarray} \hspace*{-0.6cm}{\;\;\;\;}g_{\mu\nu}(x^\alpha,y)&\!\!=\!\!& g_{\mu\nu}-\kappa_5^2\left( T_{\mu\nu}\!+\!\frac{1}{3}(\lambda-T)g_{\mu\nu}\right)|y|\nonumber\\&+&\! \left[\frac{1}{2}\kappa_5^4\!\left( T_{\mu\alpha}T^\alpha_\nu +\frac{2}{3} (\lambda-T)T_{\mu\nu} \right)\!-2{E}_{\mu\nu}+\left( \frac{1}{18} \kappa_5^4(\lambda-T)^2-\frac{\Lambda_5}{3} \right)\!g_{\mu\nu}\right] \frac{y^2}{2!} {\;}\nonumber\\ &+&\!\!\!\left.\Bigg[2K_{\mu\beta}K^{\beta}_{\alpha}K^{\alpha}_{\nu} - ({E}_{\mu\alpha}K^{\alpha}_{\nu}+K_{\mu\alpha}{E}^{\alpha}_{\nu})-\!\frac{1}{3}\Lambda_5K_{\mu\nu}\!-\!\nabla^\alpha{\cal B}_{\alpha(\mu\nu)} \!+ \!\frac{\Lambda_5}{6}\!\left(K_{\mu\nu}\!-\!g_{\mu\nu}K\right) \right.\nonumber\\ &+&\left.K^{\alpha\beta}R_{\mu\alpha\nu\beta} + 3K^\alpha{}_{(\mu}{\cal E}_{\nu)\alpha}-K{E}_{\mu\nu}+\left(K_{\mu\alpha}K_{\nu\beta} -K_{\alpha\beta}K_{\mu\nu}\right)K^{\alpha\beta}-\frac{\Lambda_5}{3}K_{\mu\nu}\Bigg]\;\frac{|y|^3}{3!}\right. \nonumber\\&+&\left.\Bigg[\frac{\Lambda_5}{6}\left(R-\frac{\Lambda_5}{3} + K^2\right)g_{\mu\nu} + \left(\frac{K^2}{3}- \Lambda_5\right)K_{\mu\alpha}K^{\alpha}_{\;\nu} + (R-\Lambda_5 + 2K^2){E}_{\mu\nu}\right.\nonumber\\&-&\left. \frac{13}{2}K_{\mu\beta}{E}^\beta_{\;\alpha}K^{\alpha}_{\;\nu}+\left(K^{\alpha}_{\;\sigma}K^{\sigma\beta} \!+ {E}^{\alpha\beta}\! +KK^{\alpha\beta}\right)R_{\mu\alpha\nu\beta} - \frac{\Lambda_5}{6}R_{\mu\nu} + 2 K_{\mu\beta}K^{\beta}_{\;\sigma}K^\sigma_{\;\alpha}K^\alpha_{\;\nu}\right.\nonumber\\&+&\!\!\left. {E}_{\mu\alpha}\!\left(\!K_{\nu\beta}K^{\alpha\beta}\!-3K^\alpha_{\;\sigma}K^{\sigma}_{\nu}\! +\! \frac{1}{2}KK^\alpha_{\nu}\!\right)+K_{\sigma\rho}K^{\sigma\rho}K\,K_{\mu\nu}-\! K_{\mu\alpha}K_{\nu\beta}{E}^{\alpha\beta}\right.\nonumber\\ &+&\left.
\left(\frac{7}{2}KK^{\;\alpha}_{\mu}- 3K^\alpha_{\;\sigma}K^\sigma_{\;\mu}\right)\!{E}_{\nu\alpha}+\left(3K^\alpha_{\mu}K^{\beta}_{\alpha}\!-\!K_{\mu\alpha}K^{\alpha\beta}\right)\!{E}_{\nu\beta}\!\! -4K^{\alpha\beta}R_{\mu\nu\gamma\alpha}K^{\gamma}_{\;\beta}\!\right.\nonumber\\&-&\left.\! \! \frac{7}{6}K^{\sigma\beta}K^{\;\alpha}_{\mu}R_{\nu\sigma\alpha\beta}\Bigg]\,\frac{y^4}{4!} + \cdots \right. \label{eletrico} \end{eqnarray} The black string warped horizon $\sqrt{g_{\theta\theta}(x^\alpha,y)}$ \cite{clark} is given by (\ref{eletrico}) for $\mu=\theta=\nu$. This Taylor expansion is not a perturbation and the brane is not bent, thus ensuring the continuity of $g_{\mu\nu}(x^\alpha,y)$ even when there are discontinuities in the matter stress tensor or in the tensors $E_{\mu\nu}$ and ${\cal B}_{\mu\nu\sigma}$ \cite{casadio2004}. This regularity can be further understood in a microscopic description of the braneworld: matter should extend smoothly into the bulk, localized on the brane within a width $\sim\lambda^{-1/2}$ \cite{all}. In a variable brane tension scenario, the terms in $|y|$ and $y^2/2!$ in (\ref{eletrico}) acquire no additional contributions. Notwithstanding, starting from order $|y|^3/3!$, such additional terms play a fundamental role.
The term containing derivatives of the variable tension $\lambda$ at order $|y|^3/3!$ in (\ref{eletrico}) reads: \begin{eqnarray} -\frac{2\kappa_5^2}{3}\left((\nabla^\alpha\nabla_\alpha\lambda)g_{\mu\nu}-(\nabla_{(\nu}\nabla_{\mu)}\lambda) \right).\label{magnetico} \end{eqnarray} The terms at order $y^4/4!$ arising from the variable brane tension in (\ref{eletrico}) are given by \begin{eqnarray} &&6 \left[(\Box\lambda)K_{(\mu\tau}{E}_{\nu)}^\tau - \nabla^\alpha((\nabla_{(\mu}\lambda)\, {E}_{\nu)\alpha})\right] +2\left(K + \frac{7}{3}\kappa_5^2\right)\left[(\Box\lambda)K\,K_{\mu\nu}-\nabla^\alpha((\nabla_{(\mu}\lambda)\, K\,K_{\alpha\vert\nu)})\right]\nonumber\\ &&- \frac{1}{3}\kappa_5^2\,\left[\Box(\Box\lambda)g_{\mu\nu}-\nabla_{(\nu}\nabla_{\mu)}(\Box\lambda)\right]+\left(\frac{1}{3}\kappa_5^2+2 K\right)[(\Box\lambda){E}_{(\mu\nu)} - \nabla^\alpha\left((\nabla_{(\mu}\lambda)\, {E}_{\nu)\alpha}\right)]\nonumber\\ &&+ \frac{1}{3}\kappa_5^2 \left[(\Box\lambda) (R_{\mu\nu}+K_{(\mu\vert\tau}K_{\nu)\beta} K^{\tau\beta} - K^2\,K_{(\mu\nu)})- \nabla^\alpha((\nabla_{(\mu}\lambda) (R_{\alpha\vert\nu)} - K_{\alpha\tau}K^{\tau}_{\nu)} - K \,K_{\alpha\nu)}))\right] \nonumber\\&& -2 K^{\tau\beta}\left[(\Box\lambda)R_{(\mu\vert\tau\vert\nu)\beta} - \nabla^\alpha\left((\nabla_{(\mu}\lambda)\, R_{\alpha\tau\vert\nu)\beta}\right)\right] +\left(2\,K^2-\frac{1}{3}\Lambda_5 \right)[(\Box\lambda)g_{\mu\nu}-\nabla_{(\nu}\nabla_{\mu)}\lambda].\label{magnetico1} \end{eqnarray} A time-dependent brane tension $\lambda = \lambda(t)$ shall be considered henceforth, which reduces the expressions (\ref{magnetico}, \ref{magnetico1}) to a simpler form. \section{Bulk Metric and Black String from a Dynamical E\"otv\"os Braneworld} This Section is devoted to deriving bulk metric solutions near the brane, and in particular the black string profile, associated with a FRW E\"otv\"os variable tension braneworld.
The McVittie solution in isotropic spherical coordinates {\rm r}, defined implicitly by $ {r} = {\rm r}\left(1+ \frac{2GM}{{\rm r}}\right)^2$, as in \cite{mcvittie, buchdahl,Kaloper:2010ec}, reads \begin{equation} g_{\mu\nu}dx^\mu dx^\nu = - \Bigl(\frac{1-\mu}{1+\mu}\Bigr)^2 dt^2 + (1+\mu)^4 a^2(t) (d{\rm r}^2 +{\rm r}^2d\Omega^2)\, , \label{mcvitt} \end{equation} where $a(t)$ is the cosmological scale factor and one denotes $\mu= \frac{M}{2a(t) {\rm r}}$. The metric (\ref{mcvitt}) is an exact solution of the Einstein field equations when the scale factor solves the Friedmann equation \begin{equation} \rho(t) = \frac{3\dot{a}^2}{8 \pi Ga^2}. \label{friedmann} \end{equation} As usual, $\rho$ denotes the energy density and $H(t) = \dot a(t)/a(t)$ denotes the Hubble parameter. The pressure is given by \cite{sakaihaines} \begin{equation} p = -\frac{1}{8\pi G} \left(3 \frac{\dot{a}^2}{a^2} +\frac{2}{\beta} \left(\frac{\ddot{a}}{a}-\frac{\dot{a}^2}{a^2}\right) \right), \label{pr1} \end{equation}\noindent where $\beta:= \frac{1-\mu}{1+\mu}$. The McVittie solution is shown to be the unique solution describing the field of a spherically symmetric mass in a spatially flat asymptotically FRW cosmology \cite{nolann}. Notwithstanding, due to stringent assumptions, non-gravitational forces encoded in the inhomogeneous pressure (\ref{pr1}) are required. In order to analyze the black string associated to the McVittie metric, we focus on the term $g_{\theta\theta}(x^\alpha,y)$ of the expansion in Eq.~(\ref{eletrico}), as it represents the bulk metric near the brane and, in particular, the square of the black string warped horizon along the extra dimension, when $\mu = \nu = \theta$. Clearly, a time dependent brane tension modifies the black string associated to the McVittie solution. The projected Weyl term on the brane is given by \cite{maartens} \begin{eqnarray} \!\!\!\!\!{E}_{\theta\theta}(r,t)&=& \frac{1}{4(1+\mu)^4a^2}\! +\frac{\rho^2}{3\beta^4}\left(1-2\beta\right)^2-\!
\frac{\rho p}{4a^2}\left(4\mu^3+5\mu^2+4\mu+1\right)\,. \label{weyl} \end{eqnarray} {In addition, the extrinsic curvature in (\ref{kurv}) can be expressed as \begin{eqnarray} K_{\theta\theta}=\kappa_5^2\left(3\frac{\dot{a}^2}{a^2}+\frac{\lambda}{3}+\frac{\dot{H}}{\beta}+\frac{3\dot{a}^2}{2\beta^6}\right)\,.\label{kurvmc} \end{eqnarray}} \noindent By substituting the expressions (\ref{friedmann}, \ref{pr1}), the black string warped horizon $g_{\theta\theta}(x^\alpha,y)$ in (\ref{eletrico}) is \begin{eqnarray} g_{\theta\theta}({\rm r},t,y)&=&{\rm r}^2\left[1-\kappa_5^2\left(3\frac{\dot{a}^2}{a^2}+\frac{\lambda}{3}+\frac{\dot{H}}{\beta}+\frac{3\dot{a}^2}{2\beta^6}\right)|y|\right]\,\nonumber\\ &+&{\rm r}^2\left[\!\frac{3\dot{a}^4}{16 a^4}\!\!\left[\frac{\left(2\beta- 1\right)^2}{6\beta^4}+\frac{9\beta^2}{4}-\frac{3\beta}{2}\!+\!\frac{145}{4(1+\mu)^4a^2}\!+\! \left(\frac{3\dot{a}^2}{a^2}+\!\frac{2\dot{H}}{\beta}\right)\! \left(\frac{4\mu^2+\mu+3}{(\mu-1)(1+\mu)^4}\right)\right]\right.\nonumber\\ &+&\left.\! \!\!\frac{\kappa_5^4}{36}\!\left(\!\lambda \!+\! \frac{3\dot{a}^2}{2\beta^2}(1\!+\!\mu)^4\!+ \!\frac{9\dot{a}^2}{2} \!+ \! \frac{3\dot{H}^2}{\beta}\!\right) \!+\!\frac{3\beta}{2(1\!+\!\mu)^4a^2}\!+\! \frac{2\lambda}{3} \!-\! \frac{\Lambda_5}{6}(1\!+\!\mu)^4 a^4\right]\!\!\frac{y^2}{2!} + \cdots \label{mag2} \end{eqnarray}\noindent where $\dot{H}= \!\left(\frac{\ddot{a}}{a}\!-\!\frac{\dot{a}^2}{a^2}\right)$. Eq.~(\ref{mag2}) is written here up to second order in the extra dimension for the sake of conciseness, although the cumbersome expansion including the $y^4/4!$ term was considered in \cite{meuhoff}, and shall be adopted in the graphics at the end of this Section.
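As a cross-check of Eqs.~(\ref{friedmann}) and (\ref{pr1}): in the $M=0$ limit one has $\beta=1$, and the ratio $w=p/\rho$ must reproduce the standard state parameter of each cosmological era. The following sympy sketch is our own consistency check, not part of the original derivation:

```python
import sympy as sp

t, G, H0 = sp.symbols('t G H_0', positive=True)

def w_parameter(a):
    """State parameter w = p/rho from Eqs. (friedmann) and (pr1), with beta = 1 (M = 0)."""
    adot, addot = sp.diff(a, t), sp.diff(a, t, 2)
    rho = 3 * adot**2 / (8 * sp.pi * G * a**2)                         # Friedmann equation
    p = -(3 * adot**2 / a**2
          + 2 * (addot / a - adot**2 / a**2)) / (8 * sp.pi * G)        # pressure, beta = 1
    return sp.simplify(p / rho)

print(w_parameter(t**sp.Rational(2, 3)))   # matter era: w = 0
print(w_parameter(sp.sqrt(t)))             # radiation era: w = 1/3
print(w_parameter(sp.exp(H0 * t)))         # cosmological constant era: w = -1
```

The three limits recover $w=0$, $w=1/3$ and $w=-1$ for the matter, radiation and cosmological constant eras, respectively, in agreement with the scale factors used below.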
When $a(t)=1$, by transforming the spherical isotropic coordinates to the spherical standard ones, the bulk metric component in (\ref{mag2}) is led to \begin{eqnarray} g_{\theta\theta}({\rm r},y)&=&{\rm r}^2\left[1-\frac{\kappa_5^2\lambda}{3}\,|y|~~{}+\frac{1}{3}\left(\frac{1}{6}\kappa_5^4\lambda^2 - \Lambda_5\right)\, \frac{y^2}{2!}-\left(\frac{193}{216}\lambda^3\kappa_5^6 +\frac{5}{18}\Lambda_5\kappa_5^2\lambda\right)\,\frac{|y|^3}{3!} \right.\nonumber\\&&\qquad\qquad\qquad\left.-\frac{1}{18}\Lambda_5\left(\Lambda_5 + \frac{1}{6}\lambda^2\kappa_5^4+\frac{7}{324}\lambda^4\kappa_5^8\right)\,\frac{y^4}{4!} + \cdots\right],\end{eqnarray} from which the classical Schwarzschild black string warped horizon \cite{plb2013,Chamblin:1999by, maartens, clark, meuhoff} is obtained in this limit, when {\rm r} corresponds to the coordinate singularity. Indeed, the physical content regarding the black string in (\ref{mag2}) is based upon the analysis of an invariant \cite{Kaloper:2010ec}. When $a(t)=1$, the associated Kretschmann scalars reduce to the Schwarzschild ones.
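As a consistency check of this static limit (ours, not in the original text): under the Randall-Sundrum fine tuning $\Lambda_4=0$, i.e. $\Lambda_5=-\kappa_5^4\lambda^2/6$, the expansion above reproduces, through order $y^2$, the exponential warp factor $e^{-2k|y|}$ with $k=\kappa_5^2\lambda/6$, whereas the higher-order coefficients deviate from the pure exponential. A minimal sympy sketch for $y>0$:

```python
import sympy as sp

y, lam, k5 = sp.symbols('y lambda kappa_5', positive=True)

# Randall-Sundrum fine tuning Lambda_4 = 0, i.e. Lambda_5 = -kappa_5^4 lambda^2 / 6:
L5 = -k5**4 * lam**2 / 6

# Coefficients of g_thetatheta / r^2 up to order y^2, read off from the static expansion:
series_text = 1 - k5**2 * lam / 3 * y + sp.Rational(1, 3) * (k5**4 * lam**2 / 6 - L5) * y**2 / 2

# Exponential warp factor exp(-2k|y|) with k = kappa_5^2 lambda / 6, expanded for y > 0:
k = k5**2 * lam / 6
series_rs = sp.series(sp.exp(-2 * k * y), y, 0, 3).removeO()

print(sp.simplify(series_text - series_rs))   # 0: the expansions agree through order y^2
```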
For the McVittie metric, Eq.~(\ref{magnetico}) takes the straightforward form {\begin{eqnarray}\label{ele1}-\frac{2\kappa_5^2\lambda''}{3},\end{eqnarray}\noindent} whereas the terms of Eq.~(\ref{magnetico1}), at order $y^4/4!$, read: \begin{eqnarray}\label{ele2} &&\!\!\!\frac{16 a^6 {\rm r}^4 \left(\dot{a}^2 (2 a {\rm r}-5 M)+a \left(2 \ddot{a}^2 (2 a {\rm r}+M)+a \lambda (2 a {\rm r}-M)\right)\right)}{(M-2 a {\rm r}) (2 a {\rm r}+M)^4}+\frac{6 \left(\dot{a}^2-a \ddot{a}^2\right) (2 a {\rm r}+M)}{2 a {\rm r}-M}\!\!\nonumber\\ &&\!\!\!+\frac{14 \kappa_5^2+5 \Lambda }{3}-9 \dot{a}^2+6a^2+ \frac{2}{3 a^6{\rm r}^4 \left(1 + \frac{M}{2 a {\rm r}}\right)^8} \left(\frac{a^4 \left(\lambda -\frac{3 \dot{a}^2}{a^2}\right)}{\left(\frac{M}{2 a {\rm r}}+1\right)^8}+\frac{6 \left(\dot{a}^2-a \ddot{a}^2\right) (2 a {\rm r}+M)}{2 a {\rm r}-M}-9 \dot{a}^2\right)\nonumber\\&&+\frac{a^4 \left(\lambda -\frac{3 \dot{a}^2}{a^2}\right)}{\left(\frac{M}{2 a {\rm r}}+1\right)^8}+\frac{48 a^{10} {\rm r}^4}{(M-2 a {\rm r})^4} \left(\frac{M}{2 a {\rm r}}+1\right)^{12} \left(\frac{a^2 \left(\lambda -\frac{3 \dot{a}^2}{a^2}\right)}{3 \left(\frac{M}{2 a {\rm r}}+1\right)^8}+\frac{3 \dot{a}^2}{a^2}\right)+\frac{6 \left(\dot{a}^2-a \ddot{a}^2\right) (2 a {\rm r}+M)}{2 a {\rm r}-M}\nonumber\\&&+128 a^5 M^2 {\rm r}^5 \left(\left(6 a^2+2\right) {\rm r}^2\right)-9 \dot{a}^2+256 a^6 M {\rm r}^6 a^2 {\rm r}^2\nonumber\\&&\frac{8 a \kappa_5^2 \lambda''}{3 {\rm r} (M-2 a {\rm r})^4 (2 a {\rm r}+M)^6}\left(\left(64 a^2 M^7 {\rm r}^2\!-\!512 a^7 {\rm r}^7\!+\!8 a M^6 {\rm r} \left(a^2 {\rm r}^2\!-\!1\right)\!+\!16 a^2 M^5 {\rm r}^2 \left(\left(30 a^2-8\right) {\rm r}^2\!+\!7\right)\right.\right.\nonumber\\&&\left.\left.+128 a^4 M^3 {\rm r}^4 \left(\left(8 a^2-4\right) {\rm r}^2+13\right)+64 a^3 M^4 {\rm r}^5 \left(13 a^2+6\right) {\rm r}^2+12 a M^8 {\rm r}+M^9\right)\right)\nonumber\\ &&\frac{\kappa_5^8 \lambda ''}{147456 a^{20} (2 a {\rm r}-M)} \left(4096 a^{18} \left(\dot{a}^2 (2 a {\rm r}-5 M)+a
\left(2 \ddot{a}^2 (2 a {\rm r}+M)+a \lambda (2 a {\rm r}-M)\right)\right)\right.\nonumber\\&&\left.-\frac{3 (2 a {\rm r}+M)^{12} \left(\dot{a}^2 (2 a {\rm r}-5 M)+2 a \ddot{a}^2 (2 a {\rm r}+M)\right)}{{\rm r}^{12}}\right)-\frac{\kappa_5^2 \lambda ^{(4)} (2 a {\rm r}+M)^4}{48 a^2 {\rm r}^4}\nonumber\\&&+\frac{\kappa_5^2 \lambda''}{2}\!\! \left(\frac{16 a^2 {\rm r}^4}{(2 a {\rm r}+M)^4}\!+\!\frac{3 \dot{a}^2 \left(4 a^3 {\rm r}^3\!+\!8 a^2 M {\rm r}^2\!+\!5 a M^2 {\rm r}+2 M^3\right) \left(\dot{a}^2 (2 a {\rm r}-5 M)+2 a \ddot{a}^2 (2 a {\rm r}+M)\right)}{4 a^7 {\rm r}^3 a^2 (2 a {\rm r}-M)}\right.\nonumber\\&&\left.\!+\!\frac{6 \dot{a}^4 (3 M\!-\!2 a {\rm r})^2}{a^4 (M-2 a {\rm r})^2}\right)\times\nonumber\\&& \left(\frac{16 a^4 {\rm r}^4 \left(\dot{a}^2 (2 a\! {\rm r}\!-\!5 M)\!+\!a\! \left(2 \ddot{a}^2 (2 a {\rm r}\!+\!M)+\!a\! \lambda (2 a {\rm r}\!-\!M)\right)\right)}{(M-2 a {\rm r}) (2 a {\rm r}+M)^4}\!+\!\frac{3 \left(\dot{a}^2 (2 a {\rm r}-5 M)\!+\!2 a \ddot{a}^2 (2 a {\rm r}+M)\right)}{a^2 (2 a {\rm r}-M)}\right.\nonumber\\&&\left.+\frac{(2 a {\rm r}+M)^2 \left(\lambda a^2\!-\!3 \dot{a}^2\right)}{3 a^2 (M\!-\!2 a {\rm r})^2}\!-\!\frac{3 \dot{a}^2}{a^2}\right)\!+\!4 M^2 \!\left(480 a^3 M {\rm r}^3\!+\!360 a^2 M^2 {\rm r}^2\!+\!16 a^6 {\rm r}^4 \lambda\!+\!120 a M^3 {\rm r}\!+\!15 M^4\right) \nonumber\\&&+\frac{16 \kappa_5^2 \left(\lambda ''\right)^2 (M-2 a {\rm r})^3 \left(4 a^2 {\rm r}^2-2 a M {\rm r}+M^2\right)}{3 a^5 {\rm r}^3 \left(\frac{M}{2 a {\rm r}}+1\right)^{14} (M-2 a {\rm r})^4} \left(\left(2 a \left(3 \ddot{a}^2 (2 a {\rm r}+M)^5\!-\!64 {} a^7 (\lambda \!-\!5) {\rm r}^4 (2 a {\rm r}\!-\!M)\right)\right.\right.\nonumber\\&&\left.\left.-3 \dot{a}^2 (5 M-2 a {\rm r}) (2 a {\rm r}+M)^4\right)\right)-32 a M {\rm r}^7 (M-2 a {\rm r})^3 \left(4 a^2 {\rm r}^2-8 a M {\rm r}+M^2\right)\times\nonumber\\&& \left(2 a \left(64 {} a^7 (\lambda -5) {\rm r}^4 (2 a {\rm r}-M)-3 \ddot{a}^2 (2 a {\rm r}+M)^5\right)+3 \dot{a}^2 (5 M-2 a {\rm r}) (2 a {\rm 
r}+M)^4\right)(2 a {\rm r}+M)^{12}\,.\nonumber\end{eqnarray} We assume the brane tension as an intrinsic property of the brane, just as an effective model \cite{gly1,gly2,bulk1,bulk2,european}. The brane tension is also supposed to be smooth, to have an inferior limit, and the brane tension fluctuations are evanescent, in the sense that they are suppressed exponentially. The huge variation of the temperature of the Universe in expansion needs to be modelled by a variable tension in the braneworld cosmology framework. The phenomenological E\"otv\"os law regarding standard fluid membranes \cite{Eotvos} is therefore used. Essentially, the E\"otv\"os law asserts that the (fluid) membrane tension depends on the temperature as \begin{equation} \lambda=\chi (T_{c}-T),\label{TERM0}\end{equation} where $\chi$ is a constant and $T_{c}$ is a critical temperature above what the membrane does not exist. The tension variation is now expressed in terms of the (cosmic) time, instead of the temperature. Indeed, as the Universe expands, it cools down, and a variation on the temperature is nothing but a time variation. In the absence of stresses in the bulk there is no exchange of energy and momentum between the brane and the bulk \cite{maartens}. As $dQ=dE+pdV=0$, by taking into account photons from the cosmic microwave background, it is possible to use the standard quantities $E=\sigma T^4V$ and $p={E}/{3}$, what implies that $\frac{dT}{T}=-\frac{1}{3}\frac{dV}{V}$. Finally, by expressing the volume in terms of the FRW scale factor ($V(t)=a^3(t)$) it implies that $T(t)\propto 1/a(t)$. This approach is in full agreement with the standard cosmological model \cite{meuhoff}. From Eq.~(\ref{TERM0}) one obtains \begin{equation} \lambda(t)=1-\frac{1}{a(t)}, \label{opa11} \end{equation} where we normalize the brane tension and the scale factor as well. 
Before delving into the analysis of the influence of the brane tension variation on the black string associated to the McVittie solution, let us point out two facts supporting our effective approach. From the theoretical point of view, Eq.~(\ref{opa11}) is useful to merge supersymmetry and inflationary cosmology, since it engenders a time variable 4D cosmological constant, which starts from a negative value and converges to a small positive one as the Universe expands \cite{gly2}. On the other hand, the type of time variation in Eq.~(\ref{opa11}) is appropriate from the experimental point of view. The projection scheme of bulk gravitational quantities \cite{3333} evinces a linear dependence between the effective Newtonian constant and the brane tension, namely $G\sim \lambda$. Hence, a time variation of the brane tension means a time variable gravitational constant. The best model-independent bound on the fractional variation of $G$ is provided by lunar laser ranging \cite{WILL}, which states that $\dot{G}/G<(4.9\pm 5.7)\times 10^{-13}\,{\rm yr}^{-1}$. When the expression (\ref{opa11}) is taken into account, the fractional variation \begin{equation} \frac{\dot{\lambda}(t)}{\lambda(t)}=\frac{\dot{a}(t)}{a(t)[a(t)-1]},\label{novaaa} \end{equation} is obtained, and all the inputs analyzed in this paper lead to a fractional variation of the brane tension satisfying the lunar laser ranging bound, via Eq.~(\ref{novaaa}), for late times. Regarding the McVittie solution, the Einstein field equations on the brane provide $ \rho \propto a^{-3(1+w)/\beta}$, where it is standard to define the state parameter $w = \frac{p}{\rho}$. This leads to the time evolution of the scale factor. When $M=0$ it implies that $\beta = 1$, and the scale factor takes the well known form for a flat Universe: $a(t)\propto t^{{2}/{3}}$ (dominated by non-relativistic matter, where $w=0$) or $a(t)\propto t^{{1}/{2}}$ (dominated by radiation or relativistic matter, where $w=\frac{1}{3}$).
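Equation (\ref{novaaa}), together with its late-time decay in the era dominated by a cosmological constant, can be checked directly; a minimal sympy sketch (our cross-check, not part of the original text):

```python
import sympy as sp

t, H0 = sp.symbols('t H_0', positive=True)
a = sp.Function('a')(t)

# Fractional tension variation from lambda = 1 - 1/a, Eq. (opa11):
lam = 1 - 1 / a
fractional = sp.diff(lam, t) / lam
claim = sp.diff(a, t) / (a * (a - 1))   # right-hand side of Eq. (novaaa)
print(sp.simplify(fractional - claim))  # 0: Eq. (novaaa) is recovered

# Late times in the era dominated by a cosmological constant, a(t) = exp(H0 t):
lam_cc = 1 - sp.exp(-H0 * t)
frac_cc = sp.simplify(sp.diff(lam_cc, t) / lam_cc)
print(sp.limit(frac_cc, t, sp.oo))      # 0: the fractional variation decays at late times
```

The vanishing late-time limit makes the comparison with the lunar laser ranging bound straightforward in this era.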
In the case of a cosmological constant ($w=-1$), it implies that $a(t)\propto \exp(H_0 t)$, independently of the value of $\beta$. The manifold $\mu=1$, corresponding to $\beta=0$, is the event horizon in the Schwarzschild case. Notwithstanding, one must avoid this value in order to circumvent the big bang singularity \cite{Kaloper:2010ec}. We shall compare the McVittie black string profile in the two eras of evolution of our Universe (without a cosmological constant) and, in addition, in the presence of a cosmological constant. The pure cosmological constant braneworld scenario is displayed in Figs. 1 and 2, approaching a realistic black string in a globally asymptotically FRW braneworld, where the behavior of a solution is analyzed locally. In order to evince the difference between the constant and variable brane tension scenarios, in Fig. 1 we plot the McVittie black string for a constant brane tension, analogous to \cite{plb2013}, and in the graphics in Fig. 2 for the variable tension expanding braneworld. The scenario is dominated by a cosmological constant, as the scale factor is given by $a(t) \propto \exp(H_0t)$. According to (\ref{opa11}), the brane tension is given by \begin{equation} \label{tensioneh0t}\lambda(t) = 1 - \exp(-H_0 t).\end{equation} \noindent Hereupon, in the graphics we adopt $\Lambda_5 =\kappa_5=1$. \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{eht} \caption{\footnotesize\; Graphic of the brane effect-corrected warped black string horizon $g_{\theta\theta}(t,{\rm r},y)$ associated to the McVittie solution on the brane with \emph{constant} tension, along the extra dimension $y$ and also as a function of the time $t$. The scale factor $a(t) = \exp(H_0 t)$ is used.
} \end{center} \end{figure} \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{ehtvconstant}\quad\quad\includegraphics[width=2.7in]{ehtv} \caption{\footnotesize\; Graphic of the brane effect-corrected warped black string horizon $g_{\theta\theta}(t,{\rm r},y)$ associated to the McVittie solution on the brane with variable tension, along the extra dimension $y$ and also as a function of the time $t$. These graphics respectively \emph{do not} and \emph{do} take into account the extra terms given by Eqs.~(\ref{magnetico}) and (\ref{magnetico1}), for the McVittie black string.} \end{center} \end{figure} Now the case where the scale factor $a(t) \propto t^{\beta/2}$ is considered, which emulates a brane dominated by radiation. Therefore Eq.~(\ref{opa11}) is written as \begin{equation}\label{tensiont2b} \lambda(t) = 1 - t^{-\beta/2}.\end{equation} As in the situation previously analyzed, we first depict the black string warped horizon for a constant brane tension $\lambda$ \cite{plb2013}, for the sake of ulterior comparison with the case of a variable brane tension: \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{t12.pdf} \caption{\footnotesize\; Graphic of the black string warped horizon $g_{\theta\theta}(t,{\rm r},y)$ along the extra dimension $y$, as an explicit function of time $t$, where $a(t) \propto t^{\beta/2}$, for the McVittie metric. } \end{center} \end{figure} \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{t12vconstant}\quad\quad\includegraphics[width=2.7in]{t12v.pdf} \caption{\footnotesize\; Plot of the warped horizon $g_{\theta\theta}(t,{\rm r},y)$ along the extra dimension $y$, as an explicit function of time $t$, for $a(t) \propto t^{\beta/2}$. The brane tension is given by $\lambda(t) = 1 - t^{-\beta/2}$.
These graphics respectively \emph{do not} and \emph{do} take into account the extra terms given by Eqs.~(\ref{magnetico}) and (\ref{magnetico1}), for the McVittie black string.} \end{center} \end{figure} In Fig. 1, the black string warped horizon for a brane dominated by a cosmological constant is illustrated, considering a constant brane tension. The graphics in Fig. 2 show respectively that a) the black string warped horizon profile is different for a variable brane tension in such a scenario, provided by Eq.~(\ref{tensioneh0t}); b) the black string, associated to the McVittie metric on the brane, has a different warped horizon profile when the extra terms given by Eqs.~(\ref{magnetico}) and (\ref{magnetico1}) are taken into account. These graphics show the paramount importance of considering more terms in the metric expansion given by Eq.~(\ref{eletrico}), as accomplished heretofore. Concerning a brane dominated by radiation or relativistic matter $(a(t) \propto t^{\beta/2})$, as time elapses, the warped horizon of the associated black string decreases along the extra dimension for any $t\lesssim0.53$ in Fig. 3, for a constant brane tension. For values of the time parameter greater than this, the warped horizon of the black string always increases along the extra dimension. Instead, the graphics in Fig. 4 take into account the variable brane tension in Eq.~(\ref{tensiont2b}), respectively without and with the extra terms arising from the variable brane tension in Eqs.~(\ref{magnetico}) and (\ref{magnetico1}). Now the case where the scale factor $a(t) \propto t^{2\beta/3}$ emulates a matter-dominated brane is taken into account.
In this case Eq.~(\ref{opa11}) is written as \begin{equation}\label{tension32b} \lambda(t) = 1 - t^{-2\beta/3}.\end{equation} \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{t23.pdf} \caption{\footnotesize\; Graphic of the warped horizon $g_{\theta\theta}(t,{\rm r},y)$ along the extra dimension $y$, as an explicit function of time $t$, for $a(t) \propto t^{2\beta/3}$, for the McVittie metric. } \end{center} \end{figure} \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{t23vconstant}\includegraphics[width=2.7in]{t23v.pdf} \caption{\small Graphic of the warped horizon $g_{\theta\theta}(t,{\rm r},y)$ along the extra dimension $y$, as an explicit function of time $t$, for $a(t) \propto t^{2\beta/3}$. The brane tension is given by $\lambda(t) = 1 - t^{-2\beta/3}$. These graphics respectively \emph{do not} and \emph{do} take into account the extra terms given by Eqs.~(\ref{magnetico}) and (\ref{magnetico1}), for the McVittie black string. The black string warped horizon decreases along the extra dimension $y$ for any $t\lesssim0.76$. } \end{center} \end{figure} The graphics on the left in Figs. 2, 4, and 6 do not take into account the time derivative terms provided by Eqs.~(\ref{magnetico}) and (\ref{magnetico1}) for each case analyzed, given by (\ref{tensioneh0t}), (\ref{tensiont2b}), and (\ref{tension32b}), while the graphics on the right in Figs. 2, 4, and 6 do take such extra terms into account, respectively. Regarding each group of figures (Figs. 1, 2; Figs. 3, 4; Figs. 5, 6), at slices corresponding to constant time in the range considered in the graphics, there is a subtle yet noticeable difference between the warped horizons of the respective corresponding eras. In Figs.
1 and 2, which regard a brane dominated by a cosmological constant, the warped horizon of the associated McVittie black string increases monotonically along the extra dimension, irrespective of time and of whether the brane tension is constant or variable. In all other cases this does not happen. Notwithstanding, Figs. 3-6 evince another type of behavior. Fig. 3 concerns a radiation-dominated FRW brane for a constant brane tension. The black string warped horizon is still a monotonic function along the extra dimension. Instead, when one regards this kind of brane with the variable brane tension, the graphic on the left in Fig. 4 shows that for the time scale $t \lesssim 0.59$ (in the normalized scale used) the black string warped horizon decreases along the extra dimension, and for $t \gtrsim t_1 = 0.59$ it monotonically increases. For the graphic on the right in Fig. 4, which takes into account the extra terms in Eqs.(\ref{magnetico}, \ref{magnetico1}) regarding the variable brane tension, the effects due to those terms are even more drastic. For $y \gtrsim y_1 \sim 0.02$ the square of the black string horizon is negative, preventing the black string from existing for values greater than $y_1$. It means that the black string has a pancake-like shape \cite{maartens} and ceases to exist along the extra dimension for $y > y_1$. This is a prominent result of the formalism employed here, based on the Taylor expansion (\ref{eletrico}): such a procedure near the brane, in the range where the Taylor expansion holds, provides in this context all the information about the black string, and is not solely a perturbative method. For the matter-dominated FRW brane the analysis is similar, although the same black string behavior happens later in time: in this situation $t_1 \sim 0.76$. Besides, along the extra dimension, $y_1 \sim 0.018$. All features are similar to those analyzed in the previous paragraph.
Indeed, as the graphics on the right-hand side of Figs. 2, 4, and 6 are the most realistic ones, regarding respectively the cases where the Taylor expansion (\ref{eletrico}) is considered with the extra terms (\ref{magnetico}) and (\ref{magnetico1}) due to the variable brane tension, we illustrate below the McVittie black strings. As the graphics in Figs. 4 and 6 have a similar pattern, we depict below the black strings for $t=0.6$, respectively for Fig. 2 and for Fig. 6 (similar to Fig. 4): \begin{figure}[h] \begin{minipage}{14pc} \includegraphics[width=16pc]{bs10.pdf} \caption{\label{bs9} \footnotesize\; { Graphic of the black string for a Universe dominated by relativistic matter or radiation, where $a(t) \propto t^{\beta/2}$.}} \end{minipage}\hspace{7pc}%
\begin{minipage}{14pc} \includegraphics[width=16pc]{bs11.pdf} \caption{\label{bs11} {Graphic of the black string for a Universe dominated by non-relativistic matter, where $a(t) \propto t^{2\beta/3}$. }}\end{minipage} \end{figure} \begin{figure}[H] \begin{center}\includegraphics[width=2.7in]{bs9} \caption{\label{bs10}Graphic of the black string for a Universe dominated by a cosmological constant, where $a(t) \propto \exp(H_0 t)$.} \end{center} \end{figure} \noindent As already observed, the variable brane tension scenario brings drastic changes to the black string profiles, as depicted in Figs. \ref{bs9}-\ref{bs10}. In Fig. \ref{bs10}, the black string warped horizon always increases, in full compliance with Fig. 2, for the case of a brane dominated by a cosmological constant. On the other hand, Figs. \ref{bs9} and \ref{bs11} show that there is a point along the extra dimension where the black string horizon tends to zero. It is an unexpected property, and in Section IV we shall show, by analyzing the 5D Kretschmann invariants, that the vanishing of the black string horizon corresponds to a regular solution in the bulk, free of physical singularities.
It is an immediate consequence of our analysis that the brane is allowed to fluctuate in the variable tension paradigm. A possible interpretation may be given in terms of the emission and/or absorption of gravitons into the bulk, implying the transference of momentum to the brane, interpreted as a local deformation of the brane shape. In fact, combining the fact that a completely rigid object cannot exist in the general relativity framework with the presence of a scalar field representing the brane position in the bulk, one arrives at the possibility of a spontaneous symmetry breaking of the bulk diffeomorphisms. In this way, a perturbative spectrum of scalar particles, the so-called branons, may appear if the tension scale is much smaller than the higher-dimensional mass scale \cite{BRANON}. In all cases of the cosmological evolution on the brane described by (\ref{tensioneh0t}), (\ref{tensiont2b}), and (\ref{tension32b}), respectively corresponding to a Universe dominated by a cosmological constant, by radiation or relativistic matter, and by non-relativistic matter, the variable brane tension tends to a constant value as time goes to infinity, and thus the brane becomes rigid. \section{Bulk Metric, the Black String and Variable Brane Tension: Removing Physical Singularities} In this Section we aim to show that, although the previous analysis of the black string profile regards a perturbative method, the determination of the bulk singularities consists of an exact method. The analysis of the 4D and 5D Kretschmann invariants evinces the character of the bulk solutions, which can be regular in the whole bulk, not solely near the brane. Furthermore, we shall prove that for some eras of the evolution of the Universe the singularities in the bulk are removed as the cosmological time elapses, due to the variable brane tension.
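The claim that the tension saturates can be checked directly on the three laws (\ref{tensioneh0t}), (\ref{tensiont2b}), and (\ref{tension32b}); a minimal sympy sketch (ours, not part of the original text), taking a representative value $\beta=1/2$ in the allowed range:

```python
import sympy as sp

t, H0 = sp.symbols('t H_0', positive=True)
beta = sp.Rational(1, 2)   # representative value of beta = (1-mu)/(1+mu) in (0,1)

# Tension laws for the three eras, Eqs. (tensioneh0t), (tensiont2b), (tension32b):
tensions = {
    'cosmological constant': 1 - sp.exp(-H0 * t),
    'radiation':             1 - t**(-beta / 2),
    'matter':                1 - t**(-2 * beta / 3),
}
limits = {era: sp.limit(lam, t, sp.oo) for era, lam in tensions.items()}
print(limits)   # every limit equals 1: the normalized tension saturates and the brane becomes rigid
```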
Actually, in the case where at late times the cosmology is dominated by a positive cosmological constant, the metric (\ref{mcvitt}) on the brane is regular everywhere on and outside the associated black hole horizon, and it asymptotes to the Schwarzschild-de Sitter geometry, which has a Kottler black string associated to it \cite{EPJC}. When the cosmological constant equals zero, our results reduce to the ones in \cite{Chamblin:1999by}. When $M=0$, the solution reduces to a homogeneous and isotropic FRW cosmology on the brane. For $H(t)=H_0$, the classical (Schwarzschild) black string \cite{maartens, Chamblin:1999by, meuhoff} or a Schwarzschild-de Sitter (or Kottler) black string of mass $M$ \cite{EPJC} is obtained. All curvature invariants on the null surface equal their values on the horizon of a Schwarzschild-de Sitter generalized black string of mass $M$ \cite{EPJC} and positive Hubble constant. At least when $H_{0}>0$, the McVittie metric on the brane induces a black string \cite{plb2013}. By defining an alternative radial coordinate \cite{nolann} ${\tt r} = (1+\mu)^2 a(t) {\rm r}$, the McVittie metric (\ref{mcvitt}) reads \begin{equation} d s^2 = -g\; d t^2 - {2H\,{\tt r}}\,f^{-1/2}d {\tt r}\,d t + f^{-1}{d {\tt r}^2} + {\tt r}^2 d\Omega_2, \label{fin}\nonumber \end{equation}\noindent where $f = 1-2M/{\tt r}$. On the brane, a null apparent horizon is placed at ${\tt r}={\tt r}_-$, the smaller positive root of $g({\tt r})=1- 2M/{\tt r} - H^2 {\tt r}^2=0$. When $H$ equals a constant, the metric above is the Schwarzschild-de Sitter metric. When ${\tt r}=2M$ and $t$ is finite, the McVittie solution has a curvature singularity at $\mu = 1$, {as the Ricci scalar has the form $R = 12H^{2}+ \frac{6}{\beta} \dot H\left(1-\frac{2M}{\tt r}\right)^{-1}$. The metric (\ref{fin}) evinces that in such a case there is a spacelike 3-surface on the brane, as {\tt r} is fixed to $2M$ and consequently $d{\tt r} = 0$.
Besides, the metric term in $dt^2$ is $g({\tt r}) = 4M^2H^2(t) > 0$, and the sphere has finite radius ${\tt r} = 2M$. This surface lies in the causal past of all spacetime points in the patch of the metric considered \cite{Kaloper:2010ec}, and the black hole singularity is indeed localized near the brane.} {The invariant \[ \xi=(\nabla_{\mu} \nabla_{\nu} R_{\phi\psi\rho \sigma})(\nabla^{\mu} \nabla^{\nu} R^{\phi\psi\rho \sigma})\] (here $\nabla_\mu$ denotes the covariant derivative on the brane) diverges at the horizon along ingoing null geodesics \cite{Kaloper:2010ec}.} This singularity is very soft, since it takes invariants involving at least two derivatives of the curvature to detect it. This 4D Kretschmann invariant can be related to its 5D counterpart, as the 5D and 4D Riemann tensors are related by the Gauss equation $ {}^{(5)}R_{\phi\kappa\rho\sigma} = R_{\phi\kappa\rho\sigma} -K_{\phi\rho}K_{\kappa\sigma} + K_{\phi\sigma}K_{\kappa\rho}$. Consequently, the 5D version of the invariant $\xi$ reads \begin{equation} {}^{(5)}\xi=(D_a D_b {}^{(5)}R_{\phi\kappa\zeta \sigma})(D^aD^b {}^{(5)}R^{\phi\kappa\zeta\sigma}),\label{xi5}\end{equation} where $D_a$ denotes the 5D covariant derivative. {It is worthwhile to point out that $a,b$ are effectively 4D spacetime indexes, as the 5D covariant derivative can be realized as $D_a = \nabla_\mu$, for $a = 0,\ldots, 3$, and $D_a = \nabla_y$, when $a=5$. }The invariant (\ref{xi5}) was shown to diverge at the black string warped horizon as well as at the McVittie black string singularity \cite{plb2013}, agreeing with the limit $a(t)=1$, corresponding to the classical black string. {Therefore, the difference between the 4D and 5D invariants is given by \begin{eqnarray} {}^{(5)}\xi - \xi\!&=&\! 
2(\nabla_{\mu} \nabla_{\nu} K_{\tau[\rho\vert}K_{\psi\vert\sigma]})(\nabla^{\mu} \nabla^{\nu}K^{\tau\rho}K^{\psi\sigma}) -2 (\nabla_{y} \nabla_{\nu} K_{\tau\rho}K_{\psi\sigma}) (\nabla^{y} \nabla^{\nu}K^{\tau\sigma}K^{\psi\rho}) \nonumber\\&&+(\nabla_{(y} \nabla_{\nu)} R_{\tau\psi\rho \sigma})(\nabla^y \nabla^{\nu} R^{\tau\psi\rho \sigma}) -2(\nabla_{(\mu} \nabla_{y)} K_{\tau\rho}K_{\psi\sigma})(\nabla^{(\mu} \nabla^{y)} R^{\tau\psi\rho \sigma}) \nonumber \\&&- 4 (\nabla_{\mu} \nabla_{y} K_{\tau[\rho\vert}K_{\psi\vert\sigma]}) (\nabla^{\mu} \nabla^{y}K^{\tau\sigma}K^{\psi\rho})+ (\nabla_y^2 R_{\tau\psi\rho \sigma})((\nabla^y)^2 R^{\tau\psi\rho \sigma})\nonumber\\&&-4(\nabla_y^2 K_{\tau\rho}K_{\psi\sigma})((\nabla^y)^2 R^{\tau\psi\rho \sigma})+2(\nabla_y^2 K_{\tau[\sigma\vert}K_{\psi\vert\rho]})((\nabla^y)^2 K^{\tau\sigma}K^{\psi\rho})\,.\label{xi66}\end{eqnarray} By considering the extrinsic curvature in (\ref{kurvmc}) for the McVittie solution, we can calculate ${}^{(5)}\xi$. Let us first write the 4D Kretschmann scalar $\xi$ for the McVittie solution (\ref{mcvitt}): \begin{eqnarray} \xi&=& \frac{1}{{\tt r}^{17} \!\left(1\!-\!\frac{2M}{{\tt r}}\right)^{\frac{11}{2}}}\Biggl[12\sqrt{1\!-\!\frac{2 M}{{\tt r}}} {\tt r}^{13}{H}^4 \Bigl( (885 M^4\!-\!1320 M^2 (2 M\!-\!{\tt r})^5\!-\!1686 M^3 {\tt r}\!+\!1240 M^2 {\tt r}^2 \nonumber \\ &&-412 M {\tt r}^3 +52 {\tt r}^4 ) {\dot H}^2 \Bigr)\!+\!24 {\tt r}^{13} \left(158 M^4\!-\!185 M^3 {\tt r}\!+\!63 M^2 {\tt r}^2\!-\!M {\tt r}^3\!-\!2 {\tt r}^4\right) {H}^3 {\dot H}{\ddot H} \nonumber\\ && -24 (2 M\!-\!{\tt r}) {\tt r}^{10} {H} {\ddot H} \biggl(M^2 \!\left(6 M^2\!\!-\!\!19 M {\tt r}\!+\!8 {\tt r}^2\right)\!{\dot H} +\sqrt{1\!-\!\frac{2 M}{{\tt r}}} {\tt r}^4 \left(67 M^2\!-\!56 M {\tt r}\!+\!12 {\tt r}^2\right) \! {\dot H} ^2 \nonumber \\ &&+ {\tt r}^4 \left(14 M^2-11 M {\tt r}+2 {\tt r}^2\right) \dddot{H} \biggr) \!+\! 4 {\tt r}^3\! 
(-2 M+{\tt r}) {H} ^2 \biggl[6 M {\tt r}^{10} \left(47 M {\tt r}\!-\!57 M^2\!-\!10 {\tt r}^2\right) {\dot H} ^3\nonumber \\ && -2 M^2 \sqrt{1\!-\!\frac{2 M}{{\tt r}}} {\tt r}^7 \left(334 M^2\!-\!347 M {\tt r}\!+\!96 {\tt r}^2\right) {\dot H} ^2 \! +\!3 \sqrt{1\!-\!\frac{2 M}{{\tt r}}} \Bigl(240 M^2 (25 M\!-\!12 {\tt r}) {\tt r}^4\!-\!2M \nonumber \\ && +{\tt r}^{11}\! \left(109 M^2\!-\!88 M {\tt r}\!+\! 19 {\tt r}^2\right){\ddot H} ^2\Bigr) \!-\!6 M {\tt r}^3{\dot H}\Bigl(180 M {\tt r}^4\!-\!2M\sqrt{1\!-\!\frac{2 M}{{\tt r}}} {\tt r}^8 (-5 M+{\tt r}) { \dddot{H}} \Bigr)\biggr] \nonumber \\ && +4 ({\tt r}\!-\!2M) \biggl[ 2 M^2 {\tt r}^{10} \left(6 M^2+M {\tt r}-2 {\tt r}^2\right) {\dot H} ^3+3 \sqrt{1\!-\!\frac{2 M}{{\tt r}}} {\tt r}^{14} \left(57 M^2\!-\!52 M {\tt r}\!+\!12 {\tt r}^2\right) {\dot H} ^4 \nonumber \\&& +{\tt r}^7 ({\tt r}\!-\!2M) {\dot H} ^2 \left(M^2 \sqrt{1\!-\!\frac{2 M}{{\tt r}}} \left(847 M^2-832 M {\tt r}+222 {\tt r}^2\right)-6 (5 M-2 {\tt r}) {\tt r}^7 \dddot{H} \right) \nonumber \\ && -2 M^2 (2 M-{\tt r}) {\tt r}^3 {\dot H} \left(180 M (2 M-{\tt r})^3+7 \sqrt{1\!-\!\frac{2 M}{{\tt r}}} {\tt r}^8 \dddot{H} \right) \nonumber \\&& +3{\tt r}\!\left({1-\frac{2 M}{{\tt r}}}\right)^{3/2}\!\!\!\!\bigl(120 M^2 ({\tt r}\!-\!2M)^3 \left(65 M^2\!\!-\!60 M {\tt r}\!+\!14 {\tt r}^2\right) \!+\!{\tt r}^{15} \dddot{H} ^2\bigr)\!-\!\frac{40}{3} M^2 {\tt r}^{11} {\ddot H} ^2\biggr] \Biggr]\, \label{krettxi} \end{eqnarray}\noindent This 4D Kretschmann invariant encodes the existence of two physical singularities on the brane, at ${\tt r} = 0$ and ${\tt r} = 2M$. Singularities of this kind are soft, as it takes at least two derivatives of the curvature to detect them. Moreover, the singularity at the surface ${\tt r}=2M$ is a soft, null naked singularity when $t\rightarrow\infty$ on an FRW brane if the Hubble parameter $H(t)$ goes to zero at late times \cite{Kaloper:2010ec}, which is the case when $a(t) \propto t^{\beta/2}$ and $a(t) \propto t^{2\beta/3}$. 
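As a consistency check of the static limit $a(t)=1$ (so $H=\dot H=0$), the geometry on the brane reduces to Schwarzschild, whose standard Kretschmann scalar $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=48M^{2}/{\tt r}^{6}$ is finite at ${\tt r}=2M$ and diverges only at ${\tt r}=0$; the softness of the ${\tt r}=2M$ singularity is precisely the statement that it is only visible in derivative invariants such as $\xi$. A generic textbook computation of this limit with SymPy (not code from the paper):

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
M = sp.symbols('M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r

# Schwarzschild metric: the a(t) = 1, H = 0 limit of the brane geometry.
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}.
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
                                     + sp.diff(g[d, c], x[b])
                                     - sp.diff(g[b, c], x[d]))
                         for d in range(4))/2)
         for c in range(4)] for b in range(4)] for a in range(4)]

# Riemann tensor R^a_{bcd}.
def riem(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    expr += sum(Gam[a][c][e]*Gam[e][b][d] - Gam[a][d][e]*Gam[e][b][c]
                for e in range(4))
    return sp.simplify(expr)

Rup = [[[[riem(a, b, c, d) for d in range(4)] for c in range(4)]
        for b in range(4)] for a in range(4)]

# Kretschmann scalar K = R_{abcd} R^{abcd}; the metric is diagonal,
# so raising/lowering indices only multiplies by diagonal factors.
K = sum(g[a, a]*ginv[b, b]*ginv[c, c]*ginv[d, d]*Rup[a][b][c][d]**2
        for a in range(4) for b in range(4)
        for c in range(4) for d in range(4))
K = sp.simplify(K)  # reduces to 48*M**2/r**6, regular at r = 2M
```

Evaluating `K` at $r=2M$ gives the finite value $3/(4M^{4})$, confirming that the horizon is regular with respect to the zeroth-derivative curvature invariant.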
In order to calculate the 5D Kretschmann invariant (\ref{xi5}), all the terms on the right-hand side of (\ref{xi66}) are computed. As the expression for ${}^{(5)}\xi$ is too lengthy to be displayed, all the information about it and the associated singularities is depicted in the plots that follow, for all the eras of the evolution of the Universe, where we adopt the normalization $M=1$ for the sake of simplicity, without loss of generality. We show that the vanishing of the black string warped horizon, for a Universe dominated by matter or radiation (see Figs. \ref{bs11} and \ref{bs10}), is accompanied by the disappearance of the physical singularities in the bulk. \newpage \begin{figure}[h] \begin{minipage}{14pc} \includegraphics[width=17pc]{tmeio1.pdf} \caption{\label{kretd} \footnotesize\; {Plot of the 5D Kretschmann scalar ${}^{(5)}\xi$ as a function of time and ${\tt r}$, for the scale factor $a(t) \propto t^{\beta/2}$.}} \end{minipage}\hspace{7pc}% \begin{minipage}{14pc} \includegraphics[width=17pc]{tmeio2.pdf} \caption{\label{kret5d2} {Plot of the 5D Kretschmann scalar ${}^{(5)}\xi$ as a function of time and ${\tt r}$, for the scale factor $a(t) \propto t^{2\beta/3}$.}}\end{minipage} \end{figure} Figs. \ref{kretd} and \ref{kret5d2} reveal that, as time elapses, the singularities on the brane are removed, providing a regular 5D bulk solution, as the 5D Kretschmann invariant does not diverge in either the radiation-dominated or the matter-dominated case. Fig. \ref{kretdexp} evinces that, in a Universe dominated by a cosmological constant, the two physical singularities on the brane at ${\tt r} = 0$ and ${\tt r} = 2M$ persist in the bulk along the extra dimension, and no additional singularity appears in the bulk. 
\begin{figure}[h] \begin{minipage}{14pc} \includegraphics[width=17pc]{exptsem.pdf} \caption{\label{kretdexp} \footnotesize\; {Plot of the 5D Kretschmann scalar ${}^{(5)}\xi$ as a function of time and ${\tt r}$, for the scale factor $a(t) \propto \exp(H_0 t)$.}} \end{minipage}\hspace{7pc}% \end{figure} } \section{Concluding Remarks} Since no single perfect-fluid description can be used as a source for the generalized McVittie solution, finding a suitable single-fluid interpretation for the metric requires the introduction of viscosity and heat transport as well, and this analysis can be accomplished in the context of the gravity/fluid correspondence \cite{navier}. Hence, a single imperfect fluid can be used as a source to obtain a generalized McVittie metric as an exact solution of the Einstein equations, with the mass variation interpreted as a consequence of heat flow in the radial direction within the fluid. An accreting black hole model was used in \cite{abd} to unravel its differences with respect to the static-mass case, keeping the conditions necessary for the McVittie metric to be interpreted as a black hole at future infinity. A generalized black string can be obtained in this sense. In our approach, the additional terms in the black string warped horizon (\ref{mag2}) are shown to provide modifications of the McVittie black string warped horizon in a variable tension braneworld scenario. The black string associated with the McVittie solution of the Einstein field equations is shown to be drastically modified by the terms due to the expansion of the Universe. In particular, well-known results in the literature are recovered as limiting cases when $a(t)=1$ or $M=0$. For radiation-dominated and matter-dominated FRW branes, the Taylor expansion yields an exact result, encoding all the information about the black string warped horizon along the extra dimension, and it is not a mere perturbative method. 
When the variable brane tension is taken into account, there is a value of the time coordinate beyond which the black string warped horizon vanishes, meaning that the black string ceases to exist along the extra dimension. {As illustrated in Fig. 7, the McVittie black string warped horizon can have a completely different profile in a variable brane tension scenario. The black string warped horizon along the extra dimension provides immediate information on the black string stability under small perturbations, such as the (Schwarzschild) black string Gregory-Laflamme instability \cite{greg}. Indeed, the horizon $\sqrt{g_{\theta\theta}({\tt r}, t, y)}$ can collapse to zero before the perturbation takes place, as illustrated in Figs. 3-6, for adequate ranges of the variable $t$ therein. The determination of whether the McVittie black string is unstable or not under Gregory-Laflamme perturbations is beyond the scope of this work. By analyzing the 4D and 5D Kretschmann invariants, we showed that the black string warped horizon vanishes along the extra dimension for a Universe dominated by matter or radiation, which induces the bulk singularities to disappear, leaving a regular bulk solution. Moreover, no additional singularity is introduced in the bulk, with respect to the brane black hole physical singularities, when the expanding Universe is dominated by a cosmological constant. The analysis of how the black string warped horizon leaks into the bulk near the brane relied on a perturbative method, widely used in the literature. Notwithstanding, the analysis of the bulk singularities relies on an exact method provided by the Gauss equation and Eqs. (\ref{xi5}) and (\ref{xi66}). Therefore, the 4D and 5D Kretschmann invariants show that the bulk solutions can be regular in the whole bulk and that, in some eras of the evolution of the Universe, the singularities in the bulk are removed as the cosmological time elapses, due to the variable brane tension. 
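The existence of a finite critical time beyond which the warped horizon vanishes can be illustrated with a toy profile. The functional form below is a hypothetical stand-in for $\sqrt{g_{\theta\theta}({\tt r},t,y)}$, not the expression derived in this work; it merely mimics an exponential warp along $y$ times a tension-driven decay in $t$, and the critical time is located by bisection:

```python
import math

def warped_horizon(t, y, t_c=5.0, ell=1.0):
    """Toy warped-horizon profile (hypothetical stand-in for
    sqrt(g_theta_theta(r, t, y)) at fixed r): an exponential warp
    along the extra dimension y times a factor that decays with
    cosmological time t and vanishes at t = t_c for every y,
    mimicking a black string that ceases to exist along the whole
    extra dimension at a finite time."""
    return math.exp(-abs(y)/ell) * max(0.0, 1.0 - (t/t_c)**2)

def critical_time(y, t_lo=0.0, t_hi=10.0, tol=1e-10):
    """Bisect for the earliest time at which the profile vanishes."""
    while t_hi - t_lo > tol:
        mid = 0.5*(t_lo + t_hi)
        if warped_horizon(mid, y) > 0.0:
            t_lo = mid   # horizon still open at mid
        else:
            t_hi = mid   # horizon already collapsed at mid
    return 0.5*(t_lo + t_hi)
```

In this toy model the critical time is the same for every slice $y$ of the bulk, so the entire warped horizon collapses simultaneously along the extra dimension, which is the qualitative behavior described above.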
} Branons and the brane flexibility \cite{branons, branons1, branons3} are related, and some cosmological and astrophysical constraints on the brane tension were considered in \cite{branons5}. There are bounds on the brane tension and on the branon mass in the case where the brane tension scale is much smaller than the 5D fundamental scale of gravity \cite{branons5}, and the first indications of extra dimensions may arise from the production of branons, {allowing one to measure the brane tension \cite{alca} and, consequently, to probe the feasibility of the more general assumption of a variable brane tension}. In the context of a variable brane tension, according to this interpretation, the contribution to branon creation may be significant, as well as its influence on the black string warped horizon profile. \section*{Acknowledgments} D. Bazeia would like to thank CAPES, CNPq and FAPESP for financial support. R. da Rocha is grateful to CNPq grants 303027/2012-6 and 480482/2012-8. J. M. Hoff da Silva thanks CNPq for partial financial support.
T.~Evans$^{57}$, A.~Falabella$^{15}$, N.~Farley$^{47}$, S.~Farry$^{54}$, R.~Fay$^{54}$, D.~Fazzini$^{21,i}$, D.~Ferguson$^{52}$, G.~Fernandez$^{38}$, A.~Fernandez~Prieto$^{39}$, F.~Ferrari$^{15}$, F.~Ferreira~Rodrigues$^{2}$, M.~Ferro-Luzzi$^{40}$, S.~Filippov$^{34}$, R.A.~Fini$^{14}$, M.~Fiore$^{17,g}$, M.~Fiorini$^{17,g}$, M.~Firlej$^{28}$, C.~Fitzpatrick$^{41}$, T.~Fiutowski$^{28}$, F.~Fleuret$^{7,b}$, K.~Fohl$^{40}$, M.~Fontana$^{16,40}$, F.~Fontanelli$^{20,h}$, D.C.~Forshaw$^{61}$, R.~Forty$^{40}$, V.~Franco~Lima$^{54}$, M.~Frank$^{40}$, C.~Frei$^{40}$, J.~Fu$^{22,q}$, W.~Funk$^{40}$, E.~Furfaro$^{25,j}$, C.~F{\"a}rber$^{40}$, A.~Gallas~Torreira$^{39}$, D.~Galli$^{15,e}$, S.~Gallorini$^{23}$, S.~Gambetta$^{52}$, M.~Gandelman$^{2}$, P.~Gandini$^{57}$, Y.~Gao$^{3}$, L.M.~Garcia~Martin$^{69}$, J.~Garc{\'\i}a~Pardi{\~n}as$^{39}$, J.~Garra~Tico$^{49}$, L.~Garrido$^{38}$, P.J.~Garsed$^{49}$, D.~Gascon$^{38}$, C.~Gaspar$^{40}$, L.~Gavardi$^{10}$, G.~Gazzoni$^{5}$, D.~Gerick$^{12}$, E.~Gersabeck$^{12}$, M.~Gersabeck$^{56}$, T.~Gershon$^{50}$, Ph.~Ghez$^{4}$, S.~Gian{\`\i}$^{41}$, V.~Gibson$^{49}$, O.G.~Girard$^{41}$, L.~Giubega$^{30}$, K.~Gizdov$^{52}$, V.V.~Gligorov$^{8}$, D.~Golubkov$^{32}$, A.~Golutvin$^{55,40}$, A.~Gomes$^{1,a}$, I.V.~Gorelov$^{33}$, C.~Gotti$^{21,i}$, E.~Govorkova$^{43}$, R.~Graciani~Diaz$^{38}$, L.A.~Granado~Cardoso$^{40}$, E.~Graug{\'e}s$^{38}$, E.~Graverini$^{42}$, G.~Graziani$^{18}$, A.~Grecu$^{30}$, R.~Greim$^{9}$, P.~Griffith$^{16}$, L.~Grillo$^{21,40,i}$, B.R.~Gruberg~Cazon$^{57}$, O.~Gr{\"u}nberg$^{67}$, E.~Gushchin$^{34}$, Yu.~Guz$^{37}$, T.~Gys$^{40}$, C.~G{\"o}bel$^{62}$, T.~Hadavizadeh$^{57}$, C.~Hadjivasiliou$^{5}$, G.~Haefeli$^{41}$, C.~Haen$^{40}$, S.C.~Haines$^{49}$, B.~Hamilton$^{60}$, X.~Han$^{12}$, S.~Hansmann-Menzemer$^{12}$, N.~Harnew$^{57}$, S.T.~Harnew$^{48}$, J.~Harrison$^{56}$, M.~Hatch$^{40}$, J.~He$^{63}$, T.~Head$^{41}$, A.~Heister$^{9}$, K.~Hennessy$^{54}$, P.~Henrard$^{5}$, L.~Henry$^{69}$, E.~van~Herwijnen$^{40}$, 
M.~He{\ss}$^{67}$, A.~Hicheur$^{2}$, D.~Hill$^{57}$, C.~Hombach$^{56}$, P.H.~Hopchev$^{41}$, Z.-C.~Huard$^{59}$, W.~Hulsbergen$^{43}$, T.~Humair$^{55}$, M.~Hushchyn$^{35}$, D.~Hutchcroft$^{54}$, M.~Idzik$^{28}$, P.~Ilten$^{58}$, R.~Jacobsson$^{40}$, J.~Jalocha$^{57}$, E.~Jans$^{43}$, A.~Jawahery$^{60}$, F.~Jiang$^{3}$, M.~John$^{57}$, D.~Johnson$^{40}$, C.R.~Jones$^{49}$, C.~Joram$^{40}$, B.~Jost$^{40}$, N.~Jurik$^{57}$, S.~Kandybei$^{45}$, M.~Karacson$^{40}$, J.M.~Kariuki$^{48}$, S.~Karodia$^{53}$, M.~Kecke$^{12}$, M.~Kelsey$^{61}$, M.~Kenzie$^{49}$, T.~Ketel$^{44}$, E.~Khairullin$^{35}$, B.~Khanji$^{12}$, C.~Khurewathanakul$^{41}$, T.~Kirn$^{9}$, S.~Klaver$^{56}$, K.~Klimaszewski$^{29}$, T.~Klimkovich$^{11}$, S.~Koliiev$^{46}$, M.~Kolpin$^{12}$, I.~Komarov$^{41}$, R.~Kopecna$^{12}$, P.~Koppenburg$^{43}$, A.~Kosmyntseva$^{32}$, S.~Kotriakhova$^{31}$, M.~Kozeiha$^{5}$, L.~Kravchuk$^{34}$, M.~Kreps$^{50}$, P.~Krokovny$^{36,w}$, F.~Kruse$^{10}$, W.~Krzemien$^{29}$, W.~Kucewicz$^{27,l}$, M.~Kucharczyk$^{27}$, V.~Kudryavtsev$^{36,w}$, A.K.~Kuonen$^{41}$, K.~Kurek$^{29}$, T.~Kvaratskheliya$^{32,40}$, D.~Lacarrere$^{40}$, G.~Lafferty$^{56}$, A.~Lai$^{16}$, G.~Lanfranchi$^{19}$, C.~Langenbruch$^{9}$, T.~Latham$^{50}$, C.~Lazzeroni$^{47}$, R.~Le~Gac$^{6}$, J.~van~Leerdam$^{43}$, A.~Leflat$^{33,40}$, J.~Lefran{\c{c}}ois$^{7}$, R.~Lef{\`e}vre$^{5}$, F.~Lemaitre$^{40}$, E.~Lemos~Cid$^{39}$, O.~Leroy$^{6}$, T.~Lesiak$^{27}$, B.~Leverington$^{12}$, T.~Li$^{3}$, Y.~Li$^{7}$, Z.~Li$^{61}$, T.~Likhomanenko$^{35,68}$, R.~Lindner$^{40}$, F.~Lionetto$^{42}$, X.~Liu$^{3}$, D.~Loh$^{50}$, I.~Longstaff$^{53}$, J.H.~Lopes$^{2}$, D.~Lucchesi$^{23,o}$, M.~Lucio~Martinez$^{39}$, H.~Luo$^{52}$, A.~Lupato$^{23}$, E.~Luppi$^{17,g}$, O.~Lupton$^{40}$, A.~Lusiani$^{24}$, X.~Lyu$^{63}$, F.~Machefert$^{7}$, F.~Maciuc$^{30}$, O.~Maev$^{31}$, K.~Maguire$^{56}$, S.~Malde$^{57}$, A.~Malinin$^{68}$, T.~Maltsev$^{36}$, G.~Manca$^{16,f}$, G.~Mancinelli$^{6}$, P.~Manning$^{61}$, J.~Maratas$^{5,v}$, 
J.F.~Marchand$^{4}$, U.~Marconi$^{15}$, C.~Marin~Benito$^{38}$, M.~Marinangeli$^{41}$, P.~Marino$^{24,t}$, J.~Marks$^{12}$, G.~Martellotti$^{26}$, M.~Martin$^{6}$, M.~Martinelli$^{41}$, D.~Martinez~Santos$^{39}$, F.~Martinez~Vidal$^{69}$, D.~Martins~Tostes$^{2}$, L.M.~Massacrier$^{7}$, A.~Massafferri$^{1}$, R.~Matev$^{40}$, A.~Mathad$^{50}$, Z.~Mathe$^{40}$, C.~Matteuzzi$^{21}$, A.~Mauri$^{42}$, E.~Maurice$^{7,b}$, B.~Maurin$^{41}$, A.~Mazurov$^{47}$, M.~McCann$^{55,40}$, A.~McNab$^{56}$, R.~McNulty$^{13}$, B.~Meadows$^{59}$, F.~Meier$^{10}$, D.~Melnychuk$^{29}$, M.~Merk$^{43}$, A.~Merli$^{22,40,q}$, E.~Michielin$^{23}$, D.A.~Milanes$^{66}$, M.-N.~Minard$^{4}$, D.S.~Mitzel$^{12}$, A.~Mogini$^{8}$, J.~Molina~Rodriguez$^{1}$, I.A.~Monroy$^{66}$, S.~Monteil$^{5}$, M.~Morandin$^{23}$, M.J.~Morello$^{24,t}$, O.~Morgunova$^{68}$, J.~Moron$^{28}$, A.B.~Morris$^{52}$, R.~Mountain$^{61}$, F.~Muheim$^{52}$, M.~Mulder$^{43}$, M.~Mussini$^{15}$, D.~M{\"u}ller$^{56}$, J.~M{\"u}ller$^{10}$, K.~M{\"u}ller$^{42}$, V.~M{\"u}ller$^{10}$, P.~Naik$^{48}$, T.~Nakada$^{41}$, R.~Nandakumar$^{51}$, A.~Nandi$^{57}$, I.~Nasteva$^{2}$, M.~Needham$^{52}$, N.~Neri$^{22,40}$, S.~Neubert$^{12}$, N.~Neufeld$^{40}$, M.~Neuner$^{12}$, T.D.~Nguyen$^{41}$, C.~Nguyen-Mau$^{41,n}$, S.~Nieswand$^{9}$, R.~Niet$^{10}$, N.~Nikitin$^{33}$, T.~Nikodem$^{12}$, A.~Nogay$^{68}$, A.~Novoselov$^{37}$, D.P.~O'Hanlon$^{50}$, A.~Oblakowska-Mucha$^{28}$, V.~Obraztsov$^{37}$, S.~Ogilvy$^{19}$, R.~Oldeman$^{16,f}$, C.J.G.~Onderwater$^{70}$, A.~Ossowska$^{27}$, J.M.~Otalora~Goicochea$^{2}$, P.~Owen$^{42}$, A.~Oyanguren$^{69}$, P.R.~Pais$^{41}$, A.~Palano$^{14,d}$, M.~Palutan$^{19,40}$, A.~Papanestis$^{51}$, M.~Pappagallo$^{14,d}$, L.L.~Pappalardo$^{17,g}$, C.~Pappenheimer$^{59}$, W.~Parker$^{60}$, C.~Parkes$^{56}$, G.~Passaleva$^{18}$, A.~Pastore$^{14,d}$, M.~Patel$^{55}$, C.~Patrignani$^{15,e}$, A.~Pearce$^{40}$, A.~Pellegrino$^{43}$, G.~Penso$^{26}$, M.~Pepe~Altarelli$^{40}$, S.~Perazzini$^{40}$, P.~Perret$^{5}$, 
L.~Pescatore$^{41}$, K.~Petridis$^{48}$, A.~Petrolini$^{20,h}$, A.~Petrov$^{68}$, M.~Petruzzo$^{22,q}$, E.~Picatoste~Olloqui$^{38}$, B.~Pietrzyk$^{4}$, M.~Pikies$^{27}$, D.~Pinci$^{26}$, A.~Pistone$^{20,h}$, A.~Piucci$^{12}$, V.~Placinta$^{30}$, S.~Playfer$^{52}$, M.~Plo~Casasus$^{39}$, T.~Poikela$^{40}$, F.~Polci$^{8}$, M.~Poli~Lener$^{19}$, A.~Poluektov$^{50,36}$, I.~Polyakov$^{61}$, E.~Polycarpo$^{2}$, G.J.~Pomery$^{48}$, S.~Ponce$^{40}$, A.~Popov$^{37}$, D.~Popov$^{11,40}$, B.~Popovici$^{30}$, S.~Poslavskii$^{37}$, C.~Potterat$^{2}$, E.~Price$^{48}$, J.~Prisciandaro$^{39}$, C.~Prouve$^{48}$, V.~Pugatch$^{46}$, A.~Puig~Navarro$^{42}$, G.~Punzi$^{24,p}$, C.~Qian$^{63}$, W.~Qian$^{50}$, R.~Quagliani$^{7,48}$, B.~Rachwal$^{28}$, J.H.~Rademacker$^{48}$, M.~Rama$^{24}$, M.~Ramos~Pernas$^{39}$, M.S.~Rangel$^{2}$, I.~Raniuk$^{45,\dagger}$, F.~Ratnikov$^{35}$, G.~Raven$^{44}$, F.~Redi$^{55}$, S.~Reichert$^{10}$, A.C.~dos~Reis$^{1}$, C.~Remon~Alepuz$^{69}$, V.~Renaudin$^{7}$, S.~Ricciardi$^{51}$, S.~Richards$^{48}$, M.~Rihl$^{40}$, K.~Rinnert$^{54}$, V.~Rives~Molina$^{38}$, P.~Robbe$^{7}$, A.B.~Rodrigues$^{1}$, E.~Rodrigues$^{59}$, J.A.~Rodriguez~Lopez$^{66}$, P.~Rodriguez~Perez$^{56,\dagger}$, A.~Rogozhnikov$^{35}$, S.~Roiser$^{40}$, A.~Rollings$^{57}$, V.~Romanovskiy$^{37}$, A.~Romero~Vidal$^{39}$, J.W.~Ronayne$^{13}$, M.~Rotondo$^{19}$, M.S.~Rudolph$^{61}$, T.~Ruf$^{40}$, P.~Ruiz~Valls$^{69}$, J.J.~Saborido~Silva$^{39}$, E.~Sadykhov$^{32}$, N.~Sagidova$^{31}$, B.~Saitta$^{16,f}$, V.~Salustino~Guimaraes$^{1}$, D.~Sanchez~Gonzalo$^{38}$, C.~Sanchez~Mayordomo$^{69}$, B.~Sanmartin~Sedes$^{39}$, R.~Santacesaria$^{26}$, C.~Santamarina~Rios$^{39}$, M.~Santimaria$^{19}$, E.~Santovetti$^{25,j}$, A.~Sarti$^{19,k}$, C.~Satriano$^{26,s}$, A.~Satta$^{25}$, D.M.~Saunders$^{48}$, D.~Savrina$^{32,33}$, S.~Schael$^{9}$, M.~Schellenberg$^{10}$, M.~Schiller$^{53}$, H.~Schindler$^{40}$, M.~Schlupp$^{10}$, M.~Schmelling$^{11}$, T.~Schmelzer$^{10}$, B.~Schmidt$^{40}$, O.~Schneider$^{41}$, 
A.~Schopper$^{40}$, H.F.~Schreiner$^{59}$, K.~Schubert$^{10}$, M.~Schubiger$^{41}$, M.-H.~Schune$^{7}$, R.~Schwemmer$^{40}$, B.~Sciascia$^{19}$, A.~Sciubba$^{26,k}$, A.~Semennikov$^{32}$, A.~Sergi$^{47}$, N.~Serra$^{42}$, J.~Serrano$^{6}$, L.~Sestini$^{23}$, P.~Seyfert$^{21}$, M.~Shapkin$^{37}$, I.~Shapoval$^{45}$, Y.~Shcheglov$^{31}$, T.~Shears$^{54}$, L.~Shekhtman$^{36,w}$, V.~Shevchenko$^{68}$, B.G.~Siddi$^{17,40}$, R.~Silva~Coutinho$^{42}$, L.~Silva~de~Oliveira$^{2}$, G.~Simi$^{23,o}$, S.~Simone$^{14,d}$, M.~Sirendi$^{49}$, N.~Skidmore$^{48}$, T.~Skwarnicki$^{61}$, E.~Smith$^{55}$, I.T.~Smith$^{52}$, J.~Smith$^{49}$, M.~Smith$^{55}$, l.~Soares~Lavra$^{1}$, M.D.~Sokoloff$^{59}$, F.J.P.~Soler$^{53}$, B.~Souza~De~Paula$^{2}$, B.~Spaan$^{10}$, P.~Spradlin$^{53}$, S.~Sridharan$^{40}$, F.~Stagni$^{40}$, M.~Stahl$^{12}$, S.~Stahl$^{40}$, P.~Stefko$^{41}$, S.~Stefkova$^{55}$, O.~Steinkamp$^{42}$, S.~Stemmle$^{12}$, O.~Stenyakin$^{37}$, H.~Stevens$^{10}$, S.~Stoica$^{30}$, S.~Stone$^{61}$, B.~Storaci$^{42}$, S.~Stracka$^{24,p}$, M.E.~Stramaglia$^{41}$, M.~Straticiuc$^{30}$, U.~Straumann$^{42}$, L.~Sun$^{64}$, W.~Sutcliffe$^{55}$, K.~Swientek$^{28}$, V.~Syropoulos$^{44}$, M.~Szczekowski$^{29}$, T.~Szumlak$^{28}$, S.~T'Jampens$^{4}$, A.~Tayduganov$^{6}$, T.~Tekampe$^{10}$, G.~Tellarini$^{17,g}$, F.~Teubert$^{40}$, E.~Thomas$^{40}$, J.~van~Tilburg$^{43}$, M.J.~Tilley$^{55}$, V.~Tisserand$^{4}$, M.~Tobin$^{41}$, S.~Tolk$^{49}$, L.~Tomassetti$^{17,g}$, D.~Tonelli$^{24}$, S.~Topp-Joergensen$^{57}$, F.~Toriello$^{61}$, R.~Tourinho~Jadallah~Aoude$^{1}$, E.~Tournefier$^{4}$, S.~Tourneur$^{41}$, K.~Trabelsi$^{41}$, M.~Traill$^{53}$, M.T.~Tran$^{41}$, M.~Tresch$^{42}$, A.~Trisovic$^{40}$, A.~Tsaregorodtsev$^{6}$, P.~Tsopelas$^{43}$, A.~Tully$^{49}$, N.~Tuning$^{43}$, A.~Ukleja$^{29}$, A.~Ustyuzhanin$^{35}$, U.~Uwer$^{12}$, C.~Vacca$^{16,f}$, V.~Vagnoni$^{15,40}$, A.~Valassi$^{40}$, S.~Valat$^{40}$, G.~Valenti$^{15}$, R.~Vazquez~Gomez$^{19}$, P.~Vazquez~Regueiro$^{39}$, 
S.~Vecchi$^{17}$, M.~van~Veghel$^{43}$, J.J.~Velthuis$^{48}$, M.~Veltri$^{18,r}$, G.~Veneziano$^{57}$, A.~Venkateswaran$^{61}$, T.A.~Verlage$^{9}$, M.~Vernet$^{5}$, M.~Vesterinen$^{12}$, J.V.~Viana~Barbosa$^{40}$, B.~Viaud$^{7}$, D.~~Vieira$^{63}$, M.~Vieites~Diaz$^{39}$, H.~Viemann$^{67}$, X.~Vilasis-Cardona$^{38,m}$, M.~Vitti$^{49}$, V.~Volkov$^{33}$, A.~Vollhardt$^{42}$, B.~Voneki$^{40}$, A.~Vorobyev$^{31}$, V.~Vorobyev$^{36,w}$, C.~Vo{\ss}$^{9}$, J.A.~de~Vries$^{43}$, C.~V{\'a}zquez~Sierra$^{39}$, R.~Waldi$^{67}$, C.~Wallace$^{50}$, R.~Wallace$^{13}$, J.~Walsh$^{24}$, J.~Wang$^{61}$, D.R.~Ward$^{49}$, H.M.~Wark$^{54}$, N.K.~Watson$^{47}$, D.~Websdale$^{55}$, A.~Weiden$^{42}$, M.~Whitehead$^{40}$, J.~Wicht$^{50}$, G.~Wilkinson$^{57,40}$, M.~Wilkinson$^{61}$, M.~Williams$^{40}$, M.P.~Williams$^{47}$, M.~Williams$^{58}$, T.~Williams$^{47}$, F.F.~Wilson$^{51}$, J.~Wimberley$^{60}$, M.A.~Winn$^{7}$, J.~Wishahi$^{10}$, W.~Wislicki$^{29}$, M.~Witek$^{27}$, G.~Wormser$^{7}$, S.A.~Wotton$^{49}$, K.~Wraight$^{53}$, K.~Wyllie$^{40}$, Y.~Xie$^{65}$, Z.~Xu$^{4}$, Z.~Yang$^{3}$, Z.~Yang$^{60}$, Y.~Yao$^{61}$, H.~Yin$^{65}$, J.~Yu$^{65}$, X.~Yuan$^{36,w}$, O.~Yushchenko$^{37}$, K.A.~Zarebski$^{47}$, M.~Zavertyaev$^{11,c}$, L.~Zhang$^{3}$, Y.~Zhang$^{7}$, A.~Zhelezov$^{12}$, Y.~Zheng$^{63}$, X.~Zhu$^{3}$, V.~Zhukov$^{33}$, S.~Zucchelli$^{15}$.\bigskip {\footnotesize \it $ ^{1}$Centro Brasileiro de Pesquisas F{\'\i}sicas (CBPF), Rio de Janeiro, Brazil\\ $ ^{2}$Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil\\ $ ^{3}$Center for High Energy Physics, Tsinghua University, Beijing, China\\ $ ^{4}$LAPP, Universit{\'e} Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France\\ $ ^{5}$Clermont Universit{\'e}, Universit{\'e} Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France\\ $ ^{6}$CPPM, Aix-Marseille Universit{\'e}, CNRS/IN2P3, Marseille, France\\ $ ^{7}$LAL, Universit{\'e} Paris-Sud, CNRS/IN2P3, Orsay, France\\ $ ^{8}$LPNHE, Universit{\'e} Pierre et Marie 
Curie, Universit{\'e} Paris Diderot, CNRS/IN2P3, Paris, France\\ $ ^{9}$I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany\\ $ ^{10}$Fakult{\"a}t Physik, Technische Universit{\"a}t Dortmund, Dortmund, Germany\\ $ ^{11}$Max-Planck-Institut f{\"u}r Kernphysik (MPIK), Heidelberg, Germany\\ $ ^{12}$Physikalisches Institut, Ruprecht-Karls-Universit{\"a}t Heidelberg, Heidelberg, Germany\\ $ ^{13}$School of Physics, University College Dublin, Dublin, Ireland\\ $ ^{14}$Sezione INFN di Bari, Bari, Italy\\ $ ^{15}$Sezione INFN di Bologna, Bologna, Italy\\ $ ^{16}$Sezione INFN di Cagliari, Cagliari, Italy\\ $ ^{17}$Universita e INFN, Ferrara, Ferrara, Italy\\ $ ^{18}$Sezione INFN di Firenze, Firenze, Italy\\ $ ^{19}$Laboratori Nazionali dell'INFN di Frascati, Frascati, Italy\\ $ ^{20}$Sezione INFN di Genova, Genova, Italy\\ $ ^{21}$Universita {\&} INFN, Milano-Bicocca, Milano, Italy\\ $ ^{22}$Sezione di Milano, Milano, Italy\\ $ ^{23}$Sezione INFN di Padova, Padova, Italy\\ $ ^{24}$Sezione INFN di Pisa, Pisa, Italy\\ $ ^{25}$Sezione INFN di Roma Tor Vergata, Roma, Italy\\ $ ^{26}$Sezione INFN di Roma La Sapienza, Roma, Italy\\ $ ^{27}$Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Krak{\'o}w, Poland\\ $ ^{28}$AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Krak{\'o}w, Poland\\ $ ^{29}$National Center for Nuclear Research (NCBJ), Warsaw, Poland\\ $ ^{30}$Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania\\ $ ^{31}$Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia\\ $ ^{32}$Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia\\ $ ^{33}$Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia\\ $ ^{34}$Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia\\ $ ^{35}$Yandex School of Data Analysis, Moscow, Russia\\ $ ^{36}$Budker Institute of Nuclear 
Physics (SB RAS), Novosibirsk, Russia\\ $ ^{37}$Institute for High Energy Physics (IHEP), Protvino, Russia\\ $ ^{38}$ICCUB, Universitat de Barcelona, Barcelona, Spain\\ $ ^{39}$Universidad de Santiago de Compostela, Santiago de Compostela, Spain\\ $ ^{40}$European Organization for Nuclear Research (CERN), Geneva, Switzerland\\ $ ^{41}$Institute of Physics, Ecole Polytechnique F{\'e}d{\'e}rale de Lausanne (EPFL), Lausanne, Switzerland\\ $ ^{42}$Physik-Institut, Universit{\"a}t Z{\"u}rich, Z{\"u}rich, Switzerland\\ $ ^{43}$Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands\\ $ ^{44}$Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, The Netherlands\\ $ ^{45}$NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine\\ $ ^{46}$Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine\\ $ ^{47}$University of Birmingham, Birmingham, United Kingdom\\ $ ^{48}$H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom\\ $ ^{49}$Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom\\ $ ^{50}$Department of Physics, University of Warwick, Coventry, United Kingdom\\ $ ^{51}$STFC Rutherford Appleton Laboratory, Didcot, United Kingdom\\ $ ^{52}$School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom\\ $ ^{53}$School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom\\ $ ^{54}$Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom\\ $ ^{55}$Imperial College London, London, United Kingdom\\ $ ^{56}$School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom\\ $ ^{57}$Department of Physics, University of Oxford, Oxford, United Kingdom\\ $ ^{58}$Massachusetts Institute of Technology, Cambridge, MA, United States\\ $ ^{59}$University of Cincinnati, Cincinnati, OH, United States\\ $ ^{60}$University of Maryland, College Park, MD, United 
States\\ $ ^{61}$Syracuse University, Syracuse, NY, United States\\ $ ^{62}$Pontif{\'\i}cia Universidade Cat{\'o}lica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to $^{2}$\\ $ ^{63}$University of Chinese Academy of Sciences, Beijing, China, associated to $^{3}$\\ $ ^{64}$School of Physics and Technology, Wuhan University, Wuhan, China, associated to $^{3}$\\ $ ^{65}$Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to $^{3}$\\ $ ^{66}$Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to $^{8}$\\ $ ^{67}$Institut f{\"u}r Physik, Universit{\"a}t Rostock, Rostock, Germany, associated to $^{12}$\\ $ ^{68}$National Research Centre Kurchatov Institute, Moscow, Russia, associated to $^{32}$\\ $ ^{69}$Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain, associated to $^{38}$\\ $ ^{70}$Van Swinderen Institute, University of Groningen, Groningen, The Netherlands, associated to $^{43}$\\ \bigskip $ ^{a}$Universidade Federal do Tri{\^a}ngulo Mineiro (UFTM), Uberaba-MG, Brazil\\ $ ^{b}$Laboratoire Leprince-Ringuet, Palaiseau, France\\ $ ^{c}$P.N. 
Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia\\ $ ^{d}$Universit{\`a} di Bari, Bari, Italy\\ $ ^{e}$Universit{\`a} di Bologna, Bologna, Italy\\ $ ^{f}$Universit{\`a} di Cagliari, Cagliari, Italy\\ $ ^{g}$Universit{\`a} di Ferrara, Ferrara, Italy\\ $ ^{h}$Universit{\`a} di Genova, Genova, Italy\\ $ ^{i}$Universit{\`a} di Milano Bicocca, Milano, Italy\\ $ ^{j}$Universit{\`a} di Roma Tor Vergata, Roma, Italy\\ $ ^{k}$Universit{\`a} di Roma La Sapienza, Roma, Italy\\ $ ^{l}$AGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Krak{\'o}w, Poland\\ $ ^{m}$LIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain\\ $ ^{n}$Hanoi University of Science, Hanoi, Viet Nam\\ $ ^{o}$Universit{\`a} di Padova, Padova, Italy\\ $ ^{p}$Universit{\`a} di Pisa, Pisa, Italy\\ $ ^{q}$Universit{\`a} degli Studi di Milano, Milano, Italy\\ $ ^{r}$Universit{\`a} di Urbino, Urbino, Italy\\ $ ^{s}$Universit{\`a} della Basilicata, Potenza, Italy\\ $ ^{t}$Scuola Normale Superiore, Pisa, Italy\\ $ ^{u}$Universit{\`a} di Modena e Reggio Emilia, Modena, Italy\\ $ ^{v}$Iligan Institute of Technology (IIT), Iligan, Philippines\\ $ ^{w}$Novosibirsk State University, Novosibirsk, Russia\\ \medskip $ ^{\dagger}$Deceased } \end{flushleft} \end{document}
\section*{Introduction} Let $k$ be an algebraically closed field of characteristic zero. Let $C$ be a smooth projective curve of genus $g$ over $k$. Riemann's original problem was to determine the order of vanishing of the theta function at a point in the Jacobian of $C$. His Singularity Theorem says that for every line bundle $L$ of degree $g-1$ in the theta divisor $\Theta$, the multiplicity of $\Theta$ at $L$ is $h^0(C,L)$. Recall that $W^r_d(C)$ is the subscheme of $\pic^d(C)$ parameterizing line bundles $L$ of degree $d$ with $\dim|L|\geq r$. The theta divisor $\Theta$ is $W^0_{g-1}(C)$. Kempf \cite{Kem} described the tangent cone of $W^0_d$ at every point. In particular, he generalized Riemann's multiplicity result to the $W^0_d$ locus. In his paper, he described the singularities of $W^0_d$ and its tangent cone as follows. Let $L$ be a point of $W^0_d(C)$, with $d<g$ and $l=\dim H^0(L)$. The tangent cone $\mathcal{T}_L(W^0_d(C))$ has rational singularities, and therefore $W^0_d(C)$ has rational singularities. Moreover, the degree of ${\mathbf P} \mathcal{T}_L(W^0_d(C))$ as a subscheme of ${\mathbf P} H^0(C,K)^*$ is the binomial coefficient $${h^1(L) \choose l-1}={g-d+l-1 \choose l-1}.$$ Following the work of Riemann and Kempf, there has been much interest in the singularities of general theta divisors. For instance, using vanishing theorems, Ein and Lazarsfeld \cite{EL} showed that if $\Theta\subset A$ is an irreducible theta divisor on an abelian variety, then $\Theta$ is normal and has rational singularities. In this paper we approach the study of the singularities of the Brill-Noether locus $W^r_d(C)$ from the point of view of its jet schemes. The jet scheme $X_m$ of a given scheme $X$ of finite type over $k$ parameterizes $m$-jets on $X$, that is, morphisms $\Spec k[t]/(t^{m+1}) \rightarrow X$. Note that $X_0=X$ and $X_1$ is the total tangent space of $X$.
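As a warm-up (a standard example for illustration, not one of this paper's results), the jet schemes of affine space can be computed directly from the functorial description:

```latex
% Standard example: jet schemes of affine space. An m-jet of
% X = \mathbb{A}^n = \operatorname{Spec} k[x_1,\dots,x_n] is a k-algebra
% homomorphism k[x_1,\dots,x_n] \to k[t]/(t^{m+1}), i.e. a choice of
% truncated arcs x_i \mapsto a_{i,0}+a_{i,1}t+\cdots+a_{i,m}t^m with
% arbitrary coefficients a_{i,j} \in k. Hence
\[
(\mathbb{A}^n)_m \;\cong\; \mathbb{A}^{n(m+1)},
\qquad
\pi_m\bigl((a_{i,j})_{i,j}\bigr)=(a_{i,0})_i,
\]
% so every fiber of \pi_m is an affine space of dimension nm. For a
% singular X the fibers X_{m,x} can jump in dimension over the singular
% points, which is the phenomenon the dimension counts for \Theta_{m,L}
% are designed to detect.
```

In particular, since $\pic^d(C)$ is smooth of dimension $g$, one gets $\dim \pic^d(C)_m=g(m+1)$; this is the ambient dimension against which the fiber dimensions of $\Theta_m$ are compared.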
For every $m\geq 0$, we have a morphism $\pi_m: X_m\rightarrow X$ that maps an $m$-jet to the image of the closed point. The fiber of $\pi_m$ at $x\in X$ is denoted by $X_{m,x}$. Let $L$ be a point in $\Theta$. By the definition of $\pic^d(C)$, an element $\mathcal{L}_m\in \pic^d(C)_m$ is identified with a line bundle on $C\times \Spec k[t]/(t^{m+1})$. Using the description of the theta divisor as a determinantal variety, we partition the scheme $\Theta_{m,L}$ into constructible subsets $C_{\lambda}$ indexed by partitions $\lambda$ of length $h^0(C,L)$ with sum $\geq m+1$. Several invariants of $\mathcal{L}_m \in \Theta_{m,L}$ are determined by the corresponding partition $\lambda$. For instance, $\lambda$ determines the dimension of the kernel of the truncation map $H^0(C\times\Spec k[t]/(t^{m+1}), \mathcal{L}_m)\rightarrow H^0(C,L)$. In this way, $\lambda$ determines for each $j\leq m$ the dimension of the subspace of sections in $H^0(L)$ that can be extended to sections of $\mathcal{L}_j$, where $\mathcal{L}_j$ is the image of $\mathcal{L}_m$ under the truncation map $\pic^d(C)_m\rightarrow \pic^d(C)_j$. Let us briefly describe the proof of Riemann's Singularity Theorem that we give using jet schemes. Since the inequality $\mult_{L}{\Theta}\geq l:=h^0(L)$ follows from the determinantal description of $\Theta$, we focus on the opposite inequality. In order to show that $\mult_{L}{\Theta}\leq l$, it is enough to prove that $\Theta_{l,L}\neq \pic^{g-1}(C)_{l,L}$. If this is not the case, then the image of $\Theta_{l,L}$ in $\Theta_{1,L}=\pic^{g-1}(C)_{1,L}$ is all of $\Theta_{1,L}$. Using the partition associated to any $\mathcal{L}_l\in \Theta_{l,L}$, we show that if $\mathcal{L}_1$ is the image of $\mathcal{L}_l$ in $\Theta_{1,L}$, then the restriction map $H^0(\mathcal{L}_1)\rightarrow H^0(L)$ is nonzero. On the other hand, we can identify $\mathcal{L}_1\in \pic^{g-1}(C)_{1,L}$ with a {\v C}ech cohomology class in $H^1(C,\mathcal{O}_C)$.
Furthermore, the obstruction to lifting a section $s\in H^0(C,L)$ to a section of $\mathcal{L}_1$ can be described using the pairing $$H^0(C,L)\otimes H^1(C,\mathcal{O}_C)\xrightarrow{\nu} H^1(C,L);$$ that is, $s$ lifts if and only if $\nu(s\otimes \mathcal{L}_1)=0$. Since the set of elements in $H^1(C,\mathcal{O}_C)$ for which there is a nonzero such $s$ is of codimension one, this gives a contradiction, proving that $\mult_{L}\Theta\leq l$. By estimating the dimension of the constructible subset $C_{\lambda}$ of $\Theta_{m,L}$ for each partition $\lambda$, we obtain the following result. \begin{thma}\label{theta divisor} For every smooth projective curve $C$ of genus $g\geq 3$ over $k$, and every integer $m\geq 1$, we have $\dim(\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})=(g-1)(m+1)-1$ if $C$ is a hyperelliptic curve. For nonhyperelliptic curves, we have $\dim(\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})=(g-1)(m+1)-2$. \end{thma} A theorem in \cite{Mus1} describes complete intersection rational singularities in terms of jet schemes. Applying that theorem, we recover Kempf's result that the theta divisor has rational singularities. Similarly, as a corollary of Theorem 3.3 in \cite{EMY}, we deduce that the theta divisor of a nonhyperelliptic curve has terminal singularities. Using similar ideas, we are able to compute the dimensions of the jet schemes of the Brill-Noether locus $W^r_d(C)$ for generic curves. Using Musta\c{t}\v{a}'s formula from \cite{Mmus2} describing the log canonical threshold in terms of dimensions of jet schemes, we obtain the following formula for the log canonical threshold of the pair $(\pic^d(C),W^r_d(C))$. \begin{thmb} \label{lct of BN} For a general smooth projective curve $C$ of genus $g$, let $L$ be a line bundle of degree $d$ with $d\leq g-1$ and $l=h^0(L)$.
The log canonical threshold of $(\pic^d(C),W^r_d(C))$ at $L \in W^r_d(C)$ is $$\lct_L(\pic^d(C), W^r_d(C))=\min\limits_{1\leq i\leq {l-r}}\left\{\frac{(l+1-i)(g-d+l-i)}{l+1-r-i}\right\}.$$ \end{thmb} Recall that one can locally define a map from $\pic^d(C)$ to a space of matrices such that $W^r_d(C)$ is the pullback of a suitable generic determinantal variety. It follows from the above theorem that for generic curves, the local log canonical threshold of $(\pic^d(C),W^r_d(C))$ at $L$ is equal to the local log canonical threshold of that generic determinantal variety at the image of $L$ (for the formula for the log canonical threshold of a generic determinantal variety, see Theorem 3.5.7 in \cite{Doc}). In light of Ein and Lazarsfeld's result for general principal polarizations on abelian varieties, it would be interesting to understand in general the properties of the jet schemes of such divisors. We also hope that in the future one could use the behavior of the jet schemes of the Brill-Noether loci to distinguish other geometric properties of curves. The paper is organized as follows. In the first section, we review some basic definitions and notation related to the jet schemes and the Brill-Noether loci. We give a criterion, in terms of the partition $\lambda$ associated to $\mathcal{L}_m\in \pic^{g-1}(C)_{m}$, for $\mathcal{L}_m$ to lie in $\Theta_m$. We also prove a formula for $h^0(\mathcal{L}_m)$ in terms of the partition $\lambda$ associated to $\mathcal{L}_m$. As a first application, we give the proof of Riemann's multiplicity formula that we sketched above. In the second section, we prove Theorems A and B by estimating the dimensions of $W^r_d(C)_{m,L}$ for every $L\in W^r_d(C)$. \section*{Acknowledgement} It is a pleasure to thank Mircea Musta\c{t}\v{a} for introducing me to jet schemes and for his encouragement to start this project. I am grateful to Jesse Kass for his help and for discussions on Jacobians of curves.
I would like to thank Linquan Ma and Sijun Liu for useful discussions in the earlier stages of this project. \section{Introduction to jet schemes and varieties of special linear series on a curve} Let $k$ be an algebraically closed field of characteristic zero. Given a scheme $X$ (of finite type) over $k$ and an integer $m\geq 0$, the jet scheme $X_m$ of order $m$ of $X$ is a scheme of finite type over $k$ satisfying the following adjunction \begin{equation}\label{adjunction}\tag{$\ddag$} \Hom_{\Sch/k}(Y,X_m)\cong\Hom_{\Sch/k}(Y\times \Spec k[t]/(t^{m+1}), X) \end{equation} for every scheme $Y$ of finite type over $k$. In particular, the $k$-rational points of $X_m$ are identified with the $m$-jets of $X$, that is, with the morphisms $\Spec k[t]/(t^{m+1})\rightarrow X$. For every $j$ with $0\leq j\leq m$, the natural ring homomorphism $k[t]/(t^{m+1})\rightarrow k[t]/(t^{j+1})$ induces a closed embedding \mbox{$\Spec k[t]/(t^{j+1})\rightarrow \Spec k[t]/(t^{m+1})$}. The above adjunction induces a truncation map \begin{equation*} \rho^m_j: ~X_m\rightarrow X_j\,. \end{equation*} For simplicity, we usually write $\pi_m^X$ or $\pi_m$ to denote the projection $\rho^m_0:X_m\rightarrow X$. For every fixed point $x\in X$, we write $X_{m,x}$ for the fiber of $\pi_m$ at $x$, which parameterizes the $m$-jets of $X$ centered at $x$. It turns out that the geometry of the jet schemes $X_m$ is closely related to the geometry of the scheme $X$ itself. Let $C$ be a smooth projective curve of genus $g$ over the field $k$. We now recall the definition of $\pic^d(C)$. For every scheme $S$, let $p$ and $q$ be the projections of $S\times C$ onto $S$ and $C$, respectively. {\it A family of degree $d$ line bundles on $C$ parameterized by a scheme $S$} is a line bundle on $C\times S$ which restricts to a degree $d$ line bundle on $C\times\{s\}$ for every $s$ in $S$.
We say that two such families $\mathcal{L}$ and $\mathcal{L}'$ are {\it equivalent} if there is a line bundle $\mathcal{R}$ on $S$ such that $\mathcal{L}'\cong \mathcal{L}\otimes q^*\mathcal{R}$. The scheme $\pic^d(C)$ parameterizes degree $d$ line bundles on $C$; more precisely, it represents the functor $$F: \Sch/k \rightarrow \Set$$ where $F(S)$ is the set of equivalence classes of families of degree $d$ line bundles on $C$ parameterized by $S$. A universal line bundle $\mathcal{P}$ on $C\times \pic^d(C)$ is a {\it Poincar\'e line bundle} of degree $d$ for $C$. Recall now that $W^r_d(C)$ is the closed subset of $\pic^d(C)$ parameterizing line bundles $L$ of degree $d$ with $\dim |L|\geq r$: $$W^r_d(C)=\{L \in \pic^{d}(C): \deg L=d, h^0(L)\geq r+1\}.$$ In particular, we have the theta divisor $\Theta:=\{L \in \pic^{g-1}(C): h^0(C, L) \neq 0\}=W^0_{g-1}(C)$. Each $W^r_d(C)$ has a natural scheme structure as a degeneracy locus, which we now describe. Let $E$ be any effective divisor on $C$ of degree $e\geq 2g-d-1$ and let $\mathcal{E}=\mathcal{O}(E)$. The following facts are standard (see \cite[\S IV.3]{ACGH}). For every family of degree $d$ line bundles $\mathcal{L}$ on $S\times C$, the sheaves ${p}_*{(\mathcal{L}\otimes q^*(\mathcal{E}))}$ and ${p}_*(\mathcal{L}\otimes q^*(\mathcal{E})\otimes\mathcal{O}_{q^{-1}E})$ are locally free of ranks $d+e+1-g$ and $e$, respectively. Moreover, there is an exact sequence on $S$ \begin{equation}\label{ses} 0\rightarrow {p}_*\mathcal{L}\rightarrow {p}_*(\mathcal{L}\otimes q^*(\mathcal{E}))\xrightarrow{\Phi_\mathcal{L}} {p}_*(\mathcal{L}\otimes q^*(\mathcal{E})\otimes\mathcal{O}_{q^{-1}E})\rightarrow R^1{p}_*(\mathcal{L})\rightarrow 0.
\end{equation} With the above notation, $W^r_d(C)$ represents the functor $\Sch/k \rightarrow \Set$ given by \begin{equation*} S \mapsto \left\{ \begin{array}{c} \text{equivalence classes of families } \mathcal{L} \text{ of degree } d \text{ line bundles on } \\ S\times C\xrightarrow{p} S \text{ such that } \rank(\Phi_{\mathcal{L}})\leq d+e-g-r \end{array} \right \}. \end{equation*} It can be shown that the above condition $\rank(\Phi_{\mathcal{L}})\leq d+e-g-r$ does not depend on the particular choice of $e$ and $E$. In particular, the line bundle $L\in \pic^d(C)$ is in $W^r_d(C)$ if and only if locally all the minors of size $e+d+1-g-r$ of $\Phi_L$ vanish. Therefore $W^r_d(C)$ is a determinantal variety. Let $T_m$ be the scheme $\Spec k[t]/(t^{m+1})$. We now discuss the jet schemes $\Theta_m$ of the theta divisor for all $m$. By the definition of $\Theta$, the scheme $\Theta_m$ consists of the line bundles $\mathcal{L}_m \in \Pic(T_m\times C)$ such that $\deg(\mathcal{L}_m|_{\{0\}\times C})=g-1$ and $\det(\Phi_{\mathcal{L}_m})=0$ in $k[t]/(t^{m+1})$. Given a positive integer $n$, we recall that a {\it partition of $n$} is a weakly increasing sequence $1\leq \lambda_1\leq \lambda_2 \leq \cdots\leq \lambda_l$ such that $\lambda_1+\cdots+\lambda_l=n$. The number $l$ of integers in the sequence is called the {\it length} of the partition, and the value $\lambda_l$ is the {\it largest term}. The set of partitions with length $l$ is denoted by $\Lambda_l$, and the set of partitions with length $l$ and largest term at most $m$ is denoted by $\Lambda_{l,m}$. For every $i$ with $1\leq i\leq m$, if $\lambda\in \Lambda_{l,m}$, we define $\overline{\lambda}\in \Lambda_{l,i}$ by putting $\overline{\lambda}_k=\min\{\lambda_k,i\}$ for every $k$ with $1\leq k\leq l$. We thus obtain a natural map $\Lambda_{l,m}\rightarrow \Lambda_{l,i}$. Fix an effective divisor $E$ of degree $e\geq 2g-d-1$ on $C$. We now associate a partition to every $\mathcal{L}_m\in \pic^d(C)_m$.
The sheaves ${p}_*{(\mathcal{L}_m\otimes q^*(\mathcal{E}))}$ and ${p}_*(\mathcal{L}_m \otimes q^*(\mathcal{E})\otimes\mathcal{O}_{q^{-1}E})$ are locally free on $T_m$, hence they are finitely generated free modules over $k[t]/(t^{m+1})$. \begin{definition}{\label{lambda}} A family of line bundles $\mathcal{L}_m$ of degree $d$ on $C$ over $T_m$ is said to be of type $\lambda\in \Lambda_{l,m+1}$ if there are bases of ${p}_* {(\mathcal{L}_m\otimes q^*(\mathcal{E}))}$ and ${p}_*(\mathcal{L}_m \otimes q^*(\mathcal{E})\otimes\mathcal{O}_{q^{-1}E})$ in which $\Phi_{\mathcal{L}_m}$ is represented by the matrix in $M_{(d+e+1-g)\times e}(k[t]/(t^{m+1}))$ ~~~ \vspace{1cm} \[ \left( \begin{array}{cccccccc} 1& & & & & &0 & 0\\ &\ddots && && &&\\ & & 1& & && & \\ & & &t^{\lambda_1} & & &\vdots&\vdots\\ &&&& \ddots &&&\\ & && && t^{\lambda_l}&0 &0\\ \end{array} \right)\] \begin{picture}(0,0) \put(225,75){\Large $0$} \put(185,22){\Large $0$} \end{picture} \end{definition} \begin{definition}\label{nrlambda} Given a partition $\lambda$, let $r_i(\lambda)$ be the number of indices $k$ such that $\lambda_k=i$ and let $n_i(\lambda)$ be the number of indices $k$ such that $\lambda_k\geq i$. \end{definition} It is easy to see that the partition $\lambda$ in Definition \ref{lambda} does not depend on the choice of bases. If $L$ is the image of $\mathcal{L}_m$ under the truncation map $\pi_m: \pic^d(C)_m\rightarrow \pic^d(C)$, then we will see below that the length of the partition associated to $\mathcal{L}_m$ is $h^0(C,L)$. We now give a criterion to decide whether an element $\mathcal{L}_m\in \pic^{g-1}(C)_m$ is a jet of $\Theta$ in terms of the partition $\lambda$. \begin{lemma}\label{condition} For every family of line bundles $\mathcal{L}_m\in \pic^{g-1}(C)_m$ centered at $L\in \Pic^{g-1}(C)$ and of type $\lambda\in \Lambda_{l,m+1}$, the following are equivalent: \begin{enumerate} \item[(i)] $\mathcal{L}_m\in \Theta_{m,L}$.
\item[(i){\scriptsize${}^\prime$}] $\det(\Phi_{\mathcal{L}_m})=0$ in $k[t]/(t^{m+1})$. \item[(ii)] $\sum\limits_{i=1}^l{\lambda_i}\geq m+1$. \item[(ii){\scriptsize${}^\prime$}] $\sum\limits_{j=1}^{m+1} r_j(\lambda)\cdot j\geq m+1$. \item[(ii){\scriptsize${}^{\prime\prime}$}] $\sum\limits_{k=1}^{m+1} n_k(\lambda)\geq m+1$. \end{enumerate} \end{lemma} \begin{proof} Recall that $\Theta = W^{0}_{g-1}\subset \pic^{g-1}(C)$. With the above notation, for every family of line bundles $\mathcal{L}_m$ in $\Theta_m$, the sheaves ${p}_*{(\mathcal{L}_m\otimes q^*(\mathcal{E}))}$ and ${p}_*(\mathcal{L}_m \otimes q^*(\mathcal{E})\otimes\mathcal{O}_{q^{-1}E})$ are locally free of rank $e$. By definition, $\Theta_m$ parameterizes the families $\mathcal{L}_m$ for which $\det(\Phi_{\mathcal{L}_m})=0$. This proves the equivalence between (i) and (i){\scriptsize${}^\prime$}. It is clear that with the choice of bases in Definition \ref{lambda}, $\det(\Phi_{\mathcal{L}_m})=t^{\lambda_1+\cdots +\lambda_l}\in k[t]/(t^{m+1})$. Therefore the determinant vanishes if and only if $\sum\limits_{i=1}^l{\lambda_i}\geq m+1$. In order to complete the proof of the lemma, it suffices to show that $$\sum\limits_{i=1}^l{\lambda_i}=\sum\limits_{j=1}^{m+1} r_j(\lambda)\cdot j=\sum\limits_{k=1}^{m+1} n_k(\lambda).$$ The first equality is clear from the definition of $r_j(\lambda)$. The second equality follows from $n_k(\lambda)=\sum\limits_{j\geq k} r_j(\lambda)$. Indeed, $\sum\limits_{k=1}^{m+1} n_k(\lambda)=\sum\limits_{k=1}^{m+1} \sum\limits_{j\geq k} r_j(\lambda)= \sum\limits_{j=1}^{m+1} r_j(\lambda)\cdot j$. \end{proof} Using the definition of $W^r_d(C)$, we have the following description of $W^r_d(C)_{m,L}$, which gives a generalization of Lemma \ref{condition}. \begin{lemma}\label{conditionw} Let $\mathcal{L}_m\in \pic^d(C)_m$ have type $\lambda=(1\leq \lambda_1\leq \cdots\leq\lambda_l\leq m+1)$.
The following are equivalent: \begin{enumerate} \item[(i)] $\mathcal{L}_m\in W^r_d(C)_{m,L}$. \item[(i')] All the minors of size $e+d+1-g-r$ of $\Phi_{\mathcal{L}_m}$ vanish in $k[t]/(t^{m+1})$. \item[(ii)] $\sum\limits_{i=1}^{l-r}{\lambda_i}\geq m+1$. \item[(ii')] $\sum\limits_{i=1}^{l-r}(l-i-r+1)(\lambda_i-\lambda_{i-1})\geq m+1$, where $\lambda_0=0$. \end{enumerate} \end{lemma} The proof of this lemma is very similar to that of Lemma \ref{condition}, so we leave it to the reader. Our first goal is to recover Riemann's Singularity Theorem using jet schemes. \begin{theorem}\label{multiplicity thm} For every $L\in \Theta$, we have ${\rm{mult}}_{L}\Theta=h^0(C,L)$. \end{theorem} \begin{remark}\label{smoothness} Note that the multiplicity of a divisor at a point is one if and only if the divisor is smooth at that point, hence Theorem \ref{multiplicity thm} implies in particular that a line bundle $L\in \Theta$ is a smooth point of $\Theta$ if and only if $h^0(C,L)=1$. \end{remark} Before proving the theorem we need some preparations. For every degree $d$ line bundle $L$, we shall first describe the fiber of $\rho^m_{m-1}: \pic^{d}(C)_{m,L}\rightarrow \pic^d(C)_{m-1,L}$. Let $E$ be the effective divisor of degree $e\geq 2g-d-1$ in Definition \ref{lambda}. By the universal property of $\pic^d(C)$, every $\mathcal{L}_m \in \pic^d(C)_{m,L}$ is identified with a line bundle on $C\times T_m$. Let us fix a line bundle $L\in \pic^d(C)$ and a family of line bundles $\mathcal{L}_m\in \pic^d(C)_{m,L}$ lying over $L$. For every $0\leq i\leq m$, we denote by $\mathcal{L}_i$ the image of $ \mathcal{L}_m$ in $\pic^d(C)_{i,L}$ under the truncation map $\pic^d(C)_m\rightarrow \pic^d(C)_i$. By the exact sequence \eqref{ses}, $H^0(\mathcal{L}_i)$ is the kernel of the morphism \begin{equation*} \Phi_{\mathcal{L}_i}: M_i=H^0(\mathcal{L}_i\otimes q^*(\mathcal{E}))\rightarrow N_i=H^0(\mathcal{L}_i\otimes q^*(\mathcal{E})\otimes\mathcal{O}_{q^{-1}E}).
\end{equation*} There is a $k[t]/(t^{m+1})$-module map $\pi_{i}^m: H^0(\mathcal{L}_m)\rightarrow H^0(\mathcal{L}_i)$ induced by restriction of sections. This can be described as follows. Applying the base change theorem to the morphism $T_{i}\hookrightarrow T_{m}$, we obtain the following commutative diagram \begin{equation*} \begin{array}[c]{ccccc} H^0(\mathcal{L}_m)&{\hookrightarrow}& M_m& \xrightarrow{\Phi_{\mathcal{L}_m}} &N_m\\ \downarrow\scriptstyle{\pi_i^m}&&\downarrow\scriptstyle{\rho_M}&&\downarrow\scriptstyle{\rho_N}\\ H^0(\mathcal{L}_{i})&{\hookrightarrow}&M_{i}&{\xrightarrow{\Phi_{\mathcal{L}_i}}} &N_i \end{array} \end{equation*} Clearly $M_i=M_m \otimes_{k[t]/(t^{m+1})} k[t]/(t^{i+1})$ and $N_i=N_m \otimes_{k[t]/(t^{m+1})} k[t]/(t^{i+1})$, and the vertical maps are induced by the quotient map $ k[t]/(t^{m+1})\rightarrow k[t]/(t^{i+1})$. \begin{lemma}\label{filtration lemma} For every $0\leq i\leq m$, there is an embedding of $k[t]/(t^{m+1})$-modules $$v^m_i: H^0(\mathcal{L}_i)\hookrightarrow H^0(\mathcal{L}_m)$$ such that the image is the kernel of $\pi_{m-i-1}^m: H^0(\mathcal{L}_m)\rightarrow H^0(\mathcal{L}_{m-i-1})$. \end{lemma} \begin{proof} Multiplication by $t^{m-i}$ defines a linear map of $k[t]/(t^{m+1})$-modules $$k[t]/(t^{i+1})\rightarrow k[t]/(t^{m+1})$$ and induces embeddings of $k[t]/(t^{m+1})$-modules $M_i \xrightarrow{u^m_i} M_m$ and $N_i\xrightarrow{w^m_i} N_m$. Therefore it induces an injective $k[t]/(t^{m+1})$-module morphism $v^m_i: H^0(\mathcal{L}_i)\rightarrow H^0(\mathcal{L}_m).$ It is clear that the image of the embedding $u_i^m: M_i \rightarrow M_m$ is $\Ann_{M_m}(t^{i+1})$. By definition, we have $H^0(\mathcal{L}_m) \cap \Ann_{M_m}(t^{i+1})=\Ann_{H^0(\mathcal{L}_m)}(t^{i+1})$. The multiplication map $w^m_i: N_i\rightarrow N_m$ is injective, and one deduces easily that the image of $v^m_i$ is $\Ann_{H^0(\mathcal{L}_m)}(t^{i+1})$. Since $\ker\pi_{m-i-1}^m=\Ann_{H^0(\mathcal{L}_m)}(t^{i+1})$, this completes our proof.
\end{proof} \begin{lemma}\label{dimension lemma} For every family of line bundles $\mathcal{L}_m\in \pic^d(C)_m$ of type $\lambda\in \Lambda_{l,m+1}$, we have $$h^0(\mathcal{L}_m)=\sum\limits_{k=1}^{m+1} n_k(\lambda).$$ \end{lemma} \begin{proof} Choose bases $\{e_j\}$ and $\{f_h\}$ for the free modules $M_m$ and $N_m$ such that $\Phi_{\mathcal{L}_m}$ is represented by the matrix \begin{equation*} \nonumber \left( \begin{array}{cccccccc} 1& & & & & &0 & 0\\ &\ddots && && &&\\ & & 1& & && & \\ & & &t^{\lambda_1} & & &\vdots&\vdots\\ &&&& \ddots &&&\\ & && && t^{\lambda_l}&0 &0\\ \end{array} \right) = A_0+A_1\cdot t +\cdots+ A_m\cdot t^m. \end{equation*} \begin{picture}(0,0) \put(148,92){\Large $0$} \put(100,30){\Large $0$} \end{picture} All $A_i$ are $(d+e+1-g)\times e$ matrices over the field $k$. For every $0\leq i\leq m$, the image of $\{e_j\}$ under the map $\rho_M: M_m\rightarrow M_i$ gives a basis of $M_i$ over $k[t]/(t^{i+1})$. Similarly, the image of $\{f_h\}$ under $\rho_N: N_m\rightarrow N_i$ gives a basis of $N_i$. With respect to these bases, the homomorphism $\Phi_{\mathcal{L}_i}$ is represented by the matrix $A_0+A_1\cdot t +\cdots+ A_i\cdot t^i$. We first consider the case $m=0$. Here $\Phi_L$ is represented by $A_0$, which is a diagonal matrix with $1$'s in the first $e+d+1-g-l$ rows, hence $h^0(L)=\dim_k\ker\Phi_L=l=n_1(\lambda)$. Let $\lambda'$ be the type of $\mathcal{L}_{m-1}$. One can check easily that $\lambda'$ is the image of $\lambda$ under the natural map $\Lambda_{l,m+1}\rightarrow \Lambda_{l,m}$. For $k\leq m$, we have $n_k(\lambda')=n_k(\lambda)$. Now it suffices to show that $h^0(\mathcal{L}_m)-h^0(\mathcal{L}_{m-1})=n_{m+1}(\lambda)$ for $m\geq 1$. For each $i>0$, $A_i$ is a diagonal matrix with entries $0$ or $1$, with $1$'s in the rows $(e+d+1-g)-l+r_1+\cdots+r_{i-1}+j$, with $1\leq j\leq r_{i}$, where $r_i=r_i(\lambda)$ (see Definition \ref{nrlambda}).
We now regard $\{t^k\cdot e_j\}$ and $\{t^k\cdot f_h\}$, where $0\leq k\leq m$, as bases of $M_m$ and $N_m$, respectively, viewed as linear spaces over $k$. The matrix associated to $\Phi_{\mathcal{L}_m}$ as a morphism of $k$-linear spaces has the upper triangular form \begin{equation*} \nonumber \Psi_{\mathcal{L}_m}=\left( \begin{array}{cccccccc} A_0&A_1 &A_2 &\cdots &A_{m-1}& A_m&0&0\\ & A_0&A_1 &\cdots& A_{m-2}&A_{m-1}&0&0\\ &&A_0 &\ddots & &\vdots&\vdots &\vdots \\ &&&\ddots&\ddots &\vdots&\vdots&\vdots\\ &&& & A_0&A_1&0&0\\ && && &A_0&0&0\\ \end{array} \right) \end{equation*} \begin{picture}(0,0) \put(170,30){\Large $0$} \end{picture} Therefore the matrix $\Psi_{\mathcal{L}_{m-1}}$ associated to $\Phi_{\mathcal{L}_{m-1}}$ as a $k$-linear map is the bottom right corner submatrix of $\Psi_{\mathcal{L}_m}$, obtained by omitting the rows and columns containing the upper left corner $A_{0}$. In each row and column of the matrix $\Psi_{\mathcal{L}_m}$, there is at most one nonzero element. Therefore $\rank \Psi_{\mathcal{L}_m}=\rank \Psi_{\mathcal{L}_{m-1}}+ \sum\limits_{i=0}^{m} \rank A_i$. Since $\rank(A_0)=d+e+1-g-l$ and $\rank(A_i)=r_i(\lambda)$ for $1\leq i\leq m$, we deduce that $\dim_k\ker \Phi_{\mathcal{L}_m}-\dim_k\ker\Phi_{\mathcal{L}_{m-1}}=n_{m+1}(\lambda)$. Therefore $h^0(\mathcal{L}_m)-h^0(\mathcal{L}_{m-1})=n_{m+1}(\lambda)$. \end{proof} \begin{remark}\label{image} For every $j$ with $0\leq j\leq m$, Lemmas \ref{filtration lemma} and \ref{dimension lemma} imply that the image of the morphism $\pi^j_0: H^0(\mathcal{L}_j)\rightarrow H^0(L)$ has dimension equal to $$h^0(\mathcal{L}_{j})-\dim_k \ker(\pi^j_0)=h^0(\mathcal{L}_{j})-h^0(\mathcal{L}_{j-1})=n_{j+1}(\lambda).$$ Therefore $\pi^{j}_0$ is the zero map if and only if $n_{j+1}(\lambda)=0$. \end{remark} We now fix a line bundle $L\in \pic^d(C)$ and describe the fibers of the truncation maps $\rho^{m}_{m-1}: \pic^d(C)_{m,L}\rightarrow \pic^d(C)_{m-1,L}$ for every $m$.
Let $\{U_\alpha\}$ be an affine covering of $C$ which trivializes the line bundle $L$ by isomorphisms $\gamma_\alpha: L|_{U_\alpha}\cong \mathcal{O}_{U_\alpha}$. Let $\{g_{\alpha\beta}=\gamma_\beta\circ \gamma_\alpha^{-1}\}$ be the corresponding transition functions. For every scheme $U_{\alpha}$ and every $i\geq 1$, we have a short exact sequence of sheaves on $U_{\alpha}\times T_i$: \begin{eqnarray*} 0\rightarrow \mathcal{O}_{U_\alpha}\rightarrow \mathcal{O}_{U_\alpha\times T_i}^*\rightarrow \mathcal{O}_{U_{\alpha}\times T_{i-1}}^*\rightarrow 0, \end{eqnarray*} where the embedding morphism maps $x\in \mathcal{O}_{U_{\alpha}}$ to $1+x\cdot t^i$. Since $U_{\alpha}$ is affine, $H^j(\mathcal{O}_{U_{\alpha}})$ vanishes for every $j\geq 1$. We thus obtain an isomorphism $H^{1}(\mathcal{O}^*_{U_{\alpha}\times T_i})\cong H^{1}(\mathcal{O}^*_{U_{\alpha}\times T_{i-1}})$. In other words, we have $\pic(U_{\alpha}\times T_{i})\cong \pic(U_{\alpha}\times T_{i-1})$. By induction on $i$ with $0\leq i\leq m$, we deduce that $\{U_{\alpha}\times T_m\}$ is an affine covering of $C\times T_m$ which trivializes every line bundle $\mathcal{L}_m\in \pic^d(C)_{m,L}$. In particular, for every line bundle $\mathcal{L}_1\in \pic^d(C)_{1,L}$ on $C\times T_1$, there is a trivialization for $\mathcal{L}_1$ on the covering $\{U_\alpha\times T_1\}$ with the transition functions $\{g_{\alpha\beta}(1+t\phi^{(1)}_{\alpha\beta})\}$. This gives a bijection $\xi: \pic^d(C)_{1,L}\rightarrow H^1(C,\mathcal{O}_C)$ via $\xi(\mathcal{L}_1)=[\phi^{(1)}_{\alpha\beta}]$. In general, we fix a family of line bundles $\mathcal{L}_{m-1}\in \pic^d(C)_{m-1,L}$. After we also fix a point $\mathcal{M}$ in the fiber of $\rho^{m}_{m-1}$ over $\mathcal{L}_{m-1}$, we get an isomorphism $$(\rho^m_{m-1})^{-1}(\mathcal{L}_{m-1})\cong H^1(C,\mathcal{O}_C).$$ Since we will later use the description in terms of \v{C}ech cohomology classes, we describe this isomorphism as follows.
We choose a trivialization of $\mathcal{L}_{m-1}$ with the transition functions $g_{\alpha\beta}^{m-1}:=g_{\alpha\beta}(1+t\phi^{(1)}_{\alpha\beta}+\cdots +t^{m-1}\phi^ {(m-1)}_{\alpha\beta})$. It is easy to see that there is a trivialization for $\mathcal{M}$ with transition functions $g_{\alpha\beta}^m:=g_{\alpha\beta}(1+t \phi^{(1)}_{\alpha\beta}+\cdots +t^{m-1}\phi^{(m-1)}_{\alpha\beta}+t^{m}\phi^{(m)}_{\alpha\beta})$. Every point $\mathcal{L}_m\in (\rho^m_{m-1})^{-1}(\mathcal{L}_{m-1})$ has transition functions $$g_{\alpha\beta}(1+t\phi^{(1)}_{\alpha\beta}+\cdots +t^{m-1}\phi^ {(m-1)}_{\alpha\beta}+t^{m}(\phi^{(m)}_{\alpha\beta}+\psi_{{\alpha\beta}}))$$ where $[\psi_{{\alpha\beta}}]\in H^1(C,\mathcal{O}_C)$. We thus obtain an isomorphism $$\xi: (\rho^m_{m-1})^{-1}(\mathcal{L}_{m-1})\rightarrow H^1(C,\mathcal{O}_C)$$ given by $\xi(\mathcal{L}_m)=[\psi_{{\alpha\beta}}]$. Abusing notation, we write $[\mathcal{L}_m]$ for the cohomology class corresponding to $\mathcal{L}_m$. Note, however, that this depends on the choice of $\mathcal{M}$. Let $s_{m-1} \in H^0(\mathcal{L}_{m-1})$ be a nonzero section. The obstruction to extending $s_{m-1}$ to a section of $\mathcal{L}_m$ can be described as follows. We have a short exact sequence of sheaves on $C\times T_m$, $$0\rightarrow L\rightarrow \mathcal{L}_m\rightarrow \mathcal{L}_{m-1}\rightarrow 0.$$ Let $\delta_{\mathcal{L}_{m}}$ be the connecting map $H^0(\mathcal{L}_{m-1})\rightarrow H^1(C,L)$. The long exact sequence on cohomology implies that $s_{m-1}$ can be extended to a section $s_m$ of $\mathcal{L}_{m}\in (\rho^m_{m-1})^{-1}(\mathcal{L}_{m-1})$ if and only if $\delta_{\mathcal{L}_{m}}(s_{m-1})=0$. With the above notation, we get the following more explicit obstruction to extending a section of $\mathcal{L}_{m-1}$ in terms of \v{C}ech cohomology. \begin{lemma}\label{equationoftransitionfunction} Fix a line bundle $\mathcal{M}$ in the fiber of $\rho^m_{m-1}$ over $\mathcal{L}_{m-1}$.
For a fixed section $s_{m-1}=(\sum\limits_{j=0}^{m-1}c_{\alpha}^{(j)}t^j)\in H^0(\mathcal{L}_{m-1})$, let $s_0$ be its image under $\pi^{m-1}_{0}: H^0(\mathcal{L}_{m-1})\rightarrow H^0(L)$. The section $s_{m-1}$ has an extension to a section of $\mathcal{L}_m$ if and only if \begin{equation}\label{commonsol}\tag{$\dagger$} \nu(s_0\otimes [\mathcal{L}_m]) {\rm ~is ~the~ cohomology ~class ~corresponding ~to~ } (- \gamma_\alpha^{-1}(\sum\limits_{j=1}^{m}\phi_{\alpha\beta}^{(j)} c^{(m-j)}_\alpha)) \end{equation} where $\nu$ is the natural pairing $H^0(C,L)\otimes H^1(C,\mathcal{O}_C) \rightarrow H^1(C,L)$. \end{lemma} \begin{proof} Assume there is an extension of $s_{m-1}\in H^0(\mathcal{L}_{m-1})$ to a section $s_m\in H^0(\mathcal{L}_m)$. Locally $s_{m-1}$ is given by functions $\sum\limits_{j=0}^{m-1}c^{(j)}_\alpha t^j\in \Gamma(U_\alpha\times T_{m-1}, \mathcal{O}_{U_\alpha\times T_{m-1}})$. We write $s_m$ as $\sum\limits_{j=0}^{m-1}c^{(j)}_\alpha t^j+c^{(m)}_\alpha t^m$. Let $\gamma^{m}_{\alpha}: \mathcal{L}_{m}|{_{U_\alpha\times T_m}}\rightarrow \mathcal{O}_{U_\alpha\times T_m}$ be a trivialization of $\mathcal{L}_m$ on $U_{\alpha}\times T_m$. We thus have the following equality on $(U_\alpha\cap U_\beta)\times T_m$: $$(\gamma^{m}_{\alpha})^{-1}(\sum\limits_{j=0}^{m}c^{(j)}_\alpha t^j)=(\gamma^{m}_{\beta})^{-1}(\sum\limits_{j=0}^{m}c^{(j)}_\beta t^j).$$ Since $(\gamma^{m}_{\alpha})^{-1}=(\gamma^{m}_{\beta})^{-1}\circ g^{m}_{\alpha\beta}$, we have $(\gamma^{m}_{\beta})^{-1} \circ g^{m}_{\alpha\beta}(\sum\limits_{j=0}^{m}c^{(j)}_\alpha t^j)=(\gamma^{m}_{\beta})^{-1}(\sum\limits_{j=0}^{m}c^{(j)}_\beta t^j).$ More explicitly, we obtain $$g_{\alpha\beta}(1+t \phi^{(1)}_{\alpha\beta}+\cdots+t^{m-1}\phi^{(m-1)}_{\alpha\beta}+t^{m}(\phi^{(m)}_{\alpha\beta}+\psi^{(m)}_{\alpha\beta}))(\sum\limits_{j=0}^{m}c^{(j)}_\alpha t^j)=(\sum\limits_{j=0}^{m}c^{(j)}_\beta t^j)$$ in $\mathcal{O}((U_{\alpha}\cap U_{\beta})\times T_m)$.
We now expand this equation and compare the coefficients of $t^i$ for $0\leq i\leq m$. If $i<m$, the equation we obtain from the coefficient of $t^i$ always holds since $s_{m-1}$ is a section of $\mathcal{L}_{m-1}$. For $i=m$, we obtain $$g_{\alpha\beta}(\psi_{\alpha\beta}\cdot c^{(0)}_\alpha+\sum\limits_{j=1}^m \phi_{\alpha\beta}^{(j)}\cdot c_\alpha^{(m-j)}+c_\alpha^{(m)})=(c_\beta^{(m)})$$ in $\mathcal{O}(U_{\alpha}\cap U_\beta)$. Since the restriction of the trivialization $\gamma^m_\alpha$ to the subsheaf $L$ of $\mathcal{L}_m$ is exactly the trivialization $\gamma_\alpha$, we have $$(\gamma_{\beta})^{-1}\circ g_{\alpha\beta}(\psi_{\alpha\beta}\cdot c^{(0)}_\alpha+\sum\limits_{j=1}^m \phi_{\alpha\beta}^{(j)}\cdot c_\alpha^{(m-j)}+c_\alpha^{(m)})= (\gamma_{\beta})^{-1}(c_\beta^{(m)})$$ as sections of $L$ on $U_{\alpha}\cap U_\beta$. Clearly $(\gamma_{\beta})^{-1}\circ g_{\alpha\beta} (c_\alpha^{(m)})-(\gamma_{\beta})^{-1}(c_\beta^{(m)})$ gives the zero cohomology class in $H^1(C,L)$. We obtain that $\nu(s_0\otimes [\mathcal{L}_m])$, the cohomology class corresponding to $(\gamma_{\alpha}^{-1}(\psi_{\alpha\beta}\cdot c^{(0)}_\alpha))$, is equal to the cohomology class corresponding to $(- \gamma_{\alpha}^{-1}(\sum\limits_{j=1}^{m}\phi_{\alpha\beta}^{(j)} c^{(m-j)}_\alpha))$. By reversing the argument, we also obtain the converse. \end{proof} \begin{remark} The identification between the fiber of $\pic^d(C)_{m,L}\rightarrow \pic^d(C)_ {m-1,L}$ and $H^1(C,\mathcal{O}_C)$ is not canonical. In particular, the expression for $\nu(s_0\otimes[\mathcal{L}_m])$ in (\ref{commonsol}) does depend on $\mathcal{M}$. However, for any fixed nonzero section $s_{m-1}$, the dimension of the subset $$\{\mathcal{L}_m\in (\rho^m_{m-1})^{-1}(\mathcal{L}_{m-1})~|~ H^0(\mathcal{L}_m)\rightarrow H^0(\mathcal{L}_{m-1}) {\rm ~has ~nonempty~fiber ~over~ } s_{m-1}\} $$ is independent of $\mathcal{M}$. \end{remark} We now prove Theorem \ref{multiplicity thm}.
The idea is similar to that in Kempf's proof of Riemann's multiplicity formula. \noindent{\it Proof} of Theorem \ref{multiplicity thm}. For every effective Cartier divisor $D$ on a smooth variety $X$ and a point $x\in D$, the multiplicity of $D$ at $x$ is equal to the minimal positive integer $m$ such that $D_{m,x}$ is a proper subset of $X_{m,x}$. Let $L\in \Theta$ be a line bundle with $l=h^0(L)$. We first show that $\Theta_{m,L}=\pic^{g-1}(C)_{m,L}$ for every $m<l$. This follows from the description of $\Theta$ as a determinantal variety. Indeed, let $\mathcal{L}_m\in \pic^{g-1}(C)_{m,L}$ be a line bundle of type $\lambda\in \Lambda_{l,m+1}$; then $\sum\limits_{i=1}^{l}\lambda_i\geq l>m$. By Lemma \ref{condition}, we have $\mathcal{L}_m\in \Theta_{m,L}$. Hence $\Theta_{m,L}=\pic^{g-1}(C)_{m,L}$ for every $m<l$. We now show that $\Theta_{m,L}\neq \pic^{g-1}(C)_{m,L}$ for $m=l$. Let $\mathcal{Z}_1$ be the image of $\Theta_{m,L}$ under $\pic^{g-1}(C)_m\rightarrow \pic^{g-1}(C)_1$. It suffices to show that $\mathcal{Z}_1\neq \pic^{g-1}(C)_{1,L}$. For every $\mathcal{L}_m\in \Theta_{m,L}$ of type $\lambda=(1\leq\lambda_1\leq\cdots\leq\lambda_l)$, Lemma \ref{condition} implies that $\sum\limits_{i=1}^{l}\lambda_i \geq m+1$. Hence $\lambda_l\geq 2$ and $n_2(\lambda)\geq 1$. By Lemma \ref{dimension lemma}, we have $h^0(\mathcal{L}_1)-h^0(L)=n_2(\lambda)\geq 1$. By Remark \ref{image}, we see that the map $\pi^1_0: H^0(\mathcal{L}_1)\rightarrow H^0(L)$ is not zero. Equivalently, there is a nonzero section $s_0 \in H^0(L)$ which can be extended to a section of $\mathcal{L}_1$. Let $\mathcal{Z}_2$ be the subset $$\{\mathcal{L}_1 \in \pic^{g-1}(C)_{1,L}~|~\pi^1_0: H^0(\mathcal{L}_1)\rightarrow H^0(L) {\rm ~is ~not ~zero}\}.$$ We have seen that $\mathcal{Z}_1$ is a subset of $\mathcal{Z}_2$, hence it is enough to show that $\mathcal{Z}_2\neq \pic^{g-1}(C)_{1,L}$. We now apply Lemma \ref{equationoftransitionfunction} with $m=1$.
Let $\mathcal{M}$ be the trivial deformation of $L$, i.e. $\mathcal{M}$ represents the zero tangent vector at $L$. To compute the dimension of $\mathcal{Z}_2$, we consider the proper subset $$\mathcal{Z}=\{(W, \mathcal{L}_1) ~|~\nu(s_0\otimes [\mathcal{L}_1])=0 {\rm~ for~ every~} s_0\in W \}$$ of ${\mathbf P} (H^0(C,L))\times H^1(C,\mathcal{O}_C)$. Here ${\mathbf P} (H^0(C,L))$ stands for the projective space of one-dimensional subspaces of $H^0(C,L)$. Let $W$ be an element in ${\mathbf P} (H^0(C,L))$ and $s_0$ a nonzero element of $W$. The induced map $H^1(C,\mathcal{O}_C)\rightarrow H^1(C,L)$ taking $[\mathcal{L}_1]$ to $\nu(s_0\otimes [\mathcal{L}_1])$ is surjective. Hence each fiber of the first projection map $\mathcal Z \rightarrow {\mathbf P} (H^0(C,L))$ is a codimension $l$ linear subspace of $H^1(C,\mathcal{O}_C)$. We obtain $\dim\mathcal Z=g-1$. Since $\mathcal{Z}_2$ is a subset of the image of the second projection map $\mathcal{Z}\rightarrow H^1(C,\mathcal{O}_C)$, we obtain $\dim\mathcal{Z}_2\leq g-1$. Hence $\mathcal{Z}_2\neq \pic^{g-1}(C)_{1,L}$. This completes the proof. \hfill{$\Box$} For smooth projective curves of genus $g\leq 2$, Riemann's Singularity Theorem implies that the theta divisor is smooth. We consider the singularities of the theta divisor for curves of genus $g\geq 3$ in the next section. \section{Singularities of the Theta divisor and of the $W^r_d$ loci} Our first goal in this section is to give an upper bound for $\dim W^r_d(C)_{m,L}$ for each $L\in \pic^d(C)$ and $m\geq 0$. We fix a line bundle $L$ of degree $d$ with $l=h^0(L)$. For every partition $\lambda\in\Lambda_{l,m+1}$, we denote by $C_{\lambda,m}$ the subset $$\{\mathcal{L}_m\in \pic^d(C)_{m,L}~|~\mathcal{L}_m \text{ is of type } \lambda\}.$$ It is easy to see that locally $C_{\lambda,m}$ is the pullback of a locally closed subset of the $m$-th jet scheme of the variety of $(d+e+1-g)\times e$ matrices. Therefore $C_{\lambda,m}$ is a constructible subset of $\pic^d(C)_{m,L}$.
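To illustrate the stratification by type in a small case, take $d=g-1$, $l=h^0(L)=2$ and $m=2$. By Lemma \ref{condition}, a jet $\mathcal{L}_2\in \pic^{g-1}(C)_{2,L}$ of type $\lambda=(\lambda_1\leq\lambda_2)\in \Lambda_{2,3}$ lies in $\Theta_{2,L}$ if and only if $\lambda_1+\lambda_2\geq 3$, so that \begin{equation*} \Theta_{2,L}=C_{(1,2),2}\cup C_{(1,3),2}\cup C_{(2,2),2}\cup C_{(2,3),2}\cup C_{(3,3),2}, \end{equation*} the only excluded type in $\Lambda_{2,3}$ being $\lambda=(1,1)$.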
By Lemma \ref{conditionw}, we have $W^r_d(C)_{m,L}= \bigcup\limits_{\lambda} C_{\lambda,m}$, where $\lambda$ varies over the partitions in $\Lambda_{l,m+1}$ satisfying $\sum\limits_{i=1}^{l-r }{\lambda_i}\geq m+1$. In particular, we have a finite union $\Theta_{m,L}=\bigcup\limits_{\lambda}C_{\lambda,m}$, where $\lambda$ varies over all elements in $\Lambda_{l,m+1}$ with $\sum\limits_{i=1}^{l}{\lambda_i}\geq m+1$. In order to estimate the dimension of $\Theta_{m,L}$, it is enough to bound the dimension of $C_{\lambda,m}$ for every $\lambda\in \Lambda_{l,m+1}$. The idea is to describe the image of $C_{\lambda,m}$ under the truncation map $\rho^m_i: \pic^d(C)_{m,L}\rightarrow \pic^d(C)_{i,L}$ for every $i\leq m$. \begin{definition} {\it A weak flag of} $H^0(C,L)$ {\it of signature} $\kappa=(\kappa_i)$ with $\kappa_1\geq \cdots \geq\kappa_n $ is a sequence of subspaces of $H^0(C,L)$, $$\textbf{V}: ~H^0(C,L)=V_0\supseteq V_1\supseteq\cdots \supseteq V_{n-1}\supseteq V_n$$ such that $\dim V_i=\kappa_i$ for every $1\leq i\leq n$. Here $n$ is called the {\it length} of the weak flag $\textbf{V}$. Given a weak flag $\textbf{V}$ of $H^0(C,L)$ of length $n$, for every $i\leq n$ we denote by $\textbf{V}_{(i)}$ the truncated weak flag of length $i$: $$\textbf{V}_{(i)}: H^0(C,L)=V_0\supseteq V_1\supseteq\cdots \supseteq V_{i-1}\supseteq V_i.$$ \end{definition} For every $\mathcal{L}_m\in C_{\lambda,m}$ and every $j$ with $0\leq j\leq m$, we denote by $\mathcal{L}_j$ the image of $\mathcal{L}_m$ under $\rho^m_j: \pic^d(C)_m\rightarrow \pic^d(C)_j$. Lemma \ref{dimension lemma} implies that the function $C_{\lambda,m}\rightarrow {\mathbf Z}$ which takes $\mathcal{L}_m$ to $h^0(\mathcal{L}_j)=\sum\limits_{k=1}^{j+1} n_k(\lambda)$ is constant. For a fixed $\mathcal{L}_m\in C_{\lambda,m}$, the images $V_j$ of the morphisms $\pi^j_0: H^0(\mathcal{L}_j)\rightarrow H^0(L)$ give a weak flag $\textbf{V}_{\mathcal{L}_m}$ of $H^0(L)$ of length $m$. 
Remark \ref{image} implies that $\dim V_j=\dim H^0(\mathcal{L}_{j})-\dim H^0(\mathcal{L}_{j-1})=n_{j+1}(\lambda)$. Hence the signature $\kappa$ of the weak flag $\textbf{V}_{\mathcal{L}_m}$, with $\kappa_j=n_{j+1}(\lambda)$, only depends on the partition $\lambda$. Lemma \ref{filtration lemma} shows that there is a short exact sequence $$0\rightarrow H^0(\mathcal{L}_{m-1})\xrightarrow{v^m_{m-1}} H^0(\mathcal{L}_{m})\twoheadrightarrow V_m\rightarrow 0.$$ We now choose a splitting of this short exact sequence, which gives a decomposition $H^0(\mathcal{L}_{m})=H^0(\mathcal{L}_{m-1})\oplus \widetilde{V}_m\,,$ with $\widetilde{V}_m$ mapping isomorphically onto $V_m$. The restriction map $\pi^m_{m-1}: H^0(\mathcal{L}_{m})\rightarrow H^0(\mathcal{L}_{m-1})$ maps $\widetilde{V}_m$ isomorphically onto its image. For the short exact sequence $$0\rightarrow H^0(\mathcal{L}_{m-2})\xrightarrow{v^{m-1}_{m-2}} H^0(\mathcal{L}_{m-1})\twoheadrightarrow V_{m-1}\rightarrow 0,$$ we can choose a splitting $H^0(\mathcal{L}_{m-1})=H^0(\mathcal{L}_{m-2})\oplus \widetilde{V}_{m-1}$ such that the restriction map $\pi^m_{m-1}$ maps $\widetilde{V}_m$ into $\widetilde{V}_{m-1}$. By descending induction on $i$ with $0\leq i\leq m$, we can find a subspace $\widetilde{V}_i\subset H^0(\mathcal{L}_i)$ for each $i$ such that \begin{enumerate} \item The restriction of the truncation map $\pi^i_0: H^0(\mathcal{L}_{i})\rightarrow H^0(L)$ to $\widetilde{V}_i$ induces an isomorphism onto $V_i$. \item The truncation map $\pi^i_{i-1}: H^0(\mathcal{L}_{i})\rightarrow H^0(\mathcal{L}_{i-1})$ takes $\widetilde{V}_i$ into $\widetilde{V}_{i-1}$. \end{enumerate} \begin{definition} A weak flag $\textbf{V}$ of $H^0(C,L)$ of length $m$ is {\it extended compatibly} to the line bundle $\mathcal{L}_m$ if there are linear subspaces $\widetilde{V}_i\subset H^0(C,\mathcal{L}_i)$ for each $i\leq m$ such that (1) and (2) above hold.
\end{definition} In this case, the set of linear subspaces $\{\widetilde{V}_i\}_i$ as above is called a {\it compatible extension} of $\textbf{V}$ to the line bundle $\mathcal{L}_m$. The above argument shows that every weak flag $\textbf{V}_{\mathcal{L}_m}$ associated to a line bundle $\mathcal{L}_m$ can be extended compatibly to the line bundle $\mathcal{L}_m$. For every $i$ with $1\leq i\leq m$, recall that $\overline{\lambda}$ is the image of $\lambda\in \Lambda_{l,m+1}$ under the map $\Lambda_{l,m+1}\rightarrow \Lambda_{l,i+1}$. Given a weak flag $\textbf{V}$ of $H^0(L)$ of length $m$, we denote by $S_{i,\textbf{V}}^{\lambda}$ the set of line bundles $\mathcal{L}_i\in \pic^d(C)_{i,L}$ such that $\mathcal{L}_i\in C_{\overline{\lambda},i}$ and $\textbf{V}_{(i)}$ can be extended compatibly to $\mathcal{L}_i$. For a fixed non-increasing sequence $\kappa$, we define $S_{i,\kappa}^{\lambda}=\bigcup\limits_{\textbf{V}'}S_{i,\textbf{V}'}^{\lambda}$, where $\textbf{V}'$ varies over all weak flags of $H^0(L)$ of signature $\kappa$. For convenience, we set $S_{0,\textbf{V}}^{\lambda}=S_{0,\kappa}^{\lambda}=\{L\}$. Standard arguments show that $S_{i,\textbf{V}}^{\lambda}$ and $S_{i,\kappa}^{\lambda}$ are constructible subsets of $\pic^d(C)_{i,L}$. For the benefit of the reader, we give the details in the appendix. The truncation map $\rho^m_i: \pic^d(C)_{m,L}\rightarrow \pic^d(C)_{i,L}$ maps $C_{\lambda,m}$ into the set $S_{i,\kappa}^{\lambda}$ with $\kappa_j=n_{j+1}(\lambda)$. In order to estimate the dimension of $C_{\lambda,m}$, we only need to estimate $\dim S_{i,\kappa}^{\lambda}$ for a suitable $i\leq m$.
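To fix ideas, suppose $\lambda=(1,1,3)\in \Lambda_{3,4}$, so that $m=3$ and $l=3$. Then \begin{equation*} n_1(\lambda)=3,\qquad n_2(\lambda)=n_3(\lambda)=1,\qquad n_4(\lambda)=0, \end{equation*} so by Lemma \ref{dimension lemma} every $\mathcal{L}_3\in C_{\lambda,3}$ satisfies $h^0(\mathcal{L}_j)=3,4,5,5$ for $j=0,1,2,3$, and the associated weak flag $\textbf{V}_{\mathcal{L}_3}$ has signature $\kappa=(\kappa_1,\kappa_2,\kappa_3)=(1,1,0)$.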
\begin{definition} For a fixed weak flag $\textbf{V}$ of $H^0(L)$ of length $m$, for every $i$ and $j$ with $1\leq i\leq j\leq m$, we define $\widetilde{S}_{i,j,\textbf{V}}^{\lambda}$ to be the set of pairs $(\mathcal{L}_i, W)$ such that \begin{enumerate} \item $\mathcal{L}_i\in S_{i,\textbf{V}}^{\lambda}$ and $W$ is a subspace of $H^0(\mathcal{L}_i)$ of dimension $\kappa_j$. \item There is a compatible extension $\{\widetilde{V}_l\}_{l\leq i}$ of $\textbf{V}_{(i)}$ to $\mathcal{L}_i$ such that $W$ is the inverse image of $V_j$ in $H^0(\mathcal{L}_i)$ under the isomorphism $\widetilde{V}_i\rightarrow V_i$. \end{enumerate} \end{definition} We call $W$ in a pair $(\mathcal{L}_i, W)$ as above a {\it lifting} of $V_j$ to $\mathcal{L}_i$. For any element $s\in V_j$, the preimage of $s$ under the isomorphism $W\rightarrow V_j$ is called a lifting of $s$ to the level $i$. In the appendix we also show that $\widetilde{S}_{i,j,\textbf{V}}^{\lambda}$ is a constructible subset of a suitable Grassmann bundle. For convenience, we set $\widetilde{S}_{0,j,\textbf{V}}^{\lambda}=\{(L,V_j)\}$ for every $\lambda$. \begin{lemma}\label{dimofbundles} Let $X_1$ and $Y_1$ be constructible subsets of algebraic varieties $X$ and $Y$, respectively. Let $f: X_1\rightarrow Y_1$ be the restriction of a morphism $g: X\rightarrow Y$. If all the fibers of $f$ are of dimension $d\geq 0$, then $\dim X_1=\dim Y_1+d$. \end{lemma} \begin{proof} Since $Y_1$ is a constructible subset of $Y$, we write $Y_1$ as a finite disjoint union of locally closed subsets $V_k$ of $Y$. We may assume that all subsets $V_k$ are irreducible. For every $k$, the inverse image $f^{-1}(V_k)$, as the intersection of $g^{-1}(V_k)$ with $X_1$, is a constructible subset of $X$. We thus have $\dim Y_1=\max\limits_{k}\{\dim V_k\}$ and $\dim X_1=\max\limits_{k}\{\dim f^{-1}(V_k)\}$. Hence it is enough to show the statement for the map $f^{-1}(V_k)\rightarrow V_k$ for every $k$.
We may thus assume that $Y_1$ is an irreducible algebraic variety. Consider a stratification $X_1=\coprod\limits_{l=1}^m W_l$, where each $W_l$ is a locally closed subset of $X$. For every $l$, the morphism $W_l\rightarrow Y_1$ has fibers of dimension $\leq d$. We get $$\dim W_l\leq \dim f(W_l)+d\leq \dim Y_1+d.$$ This implies that $\dim X_1=\max\limits_{l}\{\dim W_l\}\leq \dim Y_1+d$. We now prove the reverse inequality. Let $\{W_l\}_{l=1,\ldots, m_0}$ be the collection of those $W_l$ that dominate $Y_1$. (This collection is nonempty, since otherwise $f^{-1}(y)$ would be empty for a general point $y\in Y_1$.) We choose an open subset $V\subset Y_1$ such that $\dim (W_l\cap f^{-1}(y))$ is constant for $y\in V$ and $l\leq m_0$. For $y\in V$, there is an index $l\leq m_0$ such that $\dim(f^{-1}(y)\cap W_l)=\dim f^{-1}(y)=d$. We obtain that $\dim W_l=d+\dim Y_1$. We thus have $$\dim X_1\geq \dim W_l=\dim Y_1+d.$$ This completes the proof. \end{proof} \begin{lemma}\label{dimension S} For a fixed $L\in \pic^d(C)$ with $l=h^0(C,L)$ and a partition $\lambda\in \Lambda_{l,m+1}$, let $\kappa=(\kappa_1,\kappa_2,\ldots,\kappa_m)$ be a signature of length $m$, with $\kappa_{j}\leq n_{j+1}(\lambda)$ for every $j\leq m$, and $\textbf{V}$ a weak flag of $H^0(L)$ of signature $\kappa$. For every $i$ with $1\leq i\leq m$, we write $d_i$ for the dimension of the kernel of $$\mu_{V_i}: V_i\otimes H^0(C,K\otimes L^{-1})\rightarrow H^0(C,K).$$ Then the following hold: \begin{enumerate} \item $\dim S_{i,\textbf{V}}^{\lambda}-\dim S_{i-1,\textbf{V}}^{\lambda}\leq g+d_i-\kappa_i\cdot (g-d-1+n_i(\lambda))$, \item $\dim S_{i,\textbf{V}}^{\lambda}\leq gi-\sum\limits_{j=1}^{i}\{\kappa_j\cdot(g-d-1+n_{j}(\lambda))-d_j\}$. \end{enumerate} \end{lemma} \begin{proof} Since we fix the partition $\lambda$, we may and will omit the superscript $\lambda$ in the proof. We apply Lemma \ref{equationoftransitionfunction} to compute the dimension of $S_{i,\textbf{V}}$ inductively on $i$.
$S_{0,\textbf{V}}=\{L\}$ implies that $\dim S_{0,\textbf{V}}=0$. Consider the following commutative diagram: \begin{equation*} \begin{array}[c]{ccc} \widetilde{S}_{i,i,\textbf{V}}&\stackrel{h}{\rightarrow}&\widetilde{S}_{i-1,i,\textbf{V}}\\ \downarrow\scriptstyle{\rho_1}&&\downarrow\scriptstyle{\rho_2}\\ S_{i,\textbf{V}}&{\rightarrow}&S_{i-1,\textbf{V}} \end{array} \end{equation*} The horizontal map $h$ maps $(\mathcal{L}_i,W')$ to $(\mathcal{L}_{i-1}, W)$, where $W$ is the image of $W'$ under the truncation map $\pi^i_{i-1}: H^0(C, \mathcal{L}_i) \rightarrow H^0(C, \mathcal{L}_{i-1})$. The vertical map $\rho_1$ is given by mapping $(\mathcal{L}_i,W')$ to $\mathcal{L}_i$, and $\rho_2$ is defined similarly. Let $\mathcal{L}_i$ be a fixed point in $S_{i,\textbf{V}}$. The fiber of $\rho_1$ over the point $\mathcal{L}_i$ is the set of linear subspaces $W'\subset H^0(\mathcal{L}_i)$ that map isomorphically onto $V_i$ via $\pi^i_0: H^0(\mathcal{L}_i)\rightarrow H^0(L)$. Let $\{s_{0,k}\}_k$ be a basis of $V_i$. A lifting $W'$ of $V_i$ is determined by the preimages of the $s_{0,k}$ in $W'$. By Lemma \ref{filtration lemma}, we see that for every $s\in V_i$, any two liftings of $s$ to the level $i$ differ by an element of $H^0(\mathcal{L}_{i-1})$. Therefore, the relative dimension of the map $\rho_1$ is $h^0(\mathcal{L}_{i-1})\cdot \kappa_i=(\sum\limits_{j=1}^{i}n_j(\lambda))\cdot\kappa_i$. Similarly, the relative dimension of the second vertical map $\rho_2$ is $(\sum\limits_{j=1}^{i-1}n_j(\lambda))\cdot \kappa_i$. Consider the horizontal map $h$. For every element $(\mathcal{L}_{i-1}, W)\in \widetilde{S}_{i-1,i,\textbf{V}}$, we now give a criterion deciding whether or not it lies in the image of $h$. Fix an element $\mathcal{M}$ in the fiber of $\rho^i_{i-1}: \pic^d(C)_i\rightarrow \pic^d(C)_{i-1}$ over $\mathcal{L}_{i-1}$. We identify the fiber $(\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})$ with $H^1(C,\mathcal{O}_C)$. Recall that $\{s_{0,k}\}_k$ is a basis of $V_i$.
We denote by $s_{i-1,k}$ the lifting of $s_{0,k}$ to the level $i-1$ in $W$. With the notation of Lemma \ref{equationoftransitionfunction}, an element $s_{i-1,k}=(\sum\limits_{j=0}^{i-1}c_{k,\alpha}^{(j)}t^j)\in H^0(\mathcal{L}_{i-1})$ has an extension to a section of $\mathcal{M}'\in (\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})$ if and only if the following equation holds: \begin{equation}\label{sk}\tag{$\dagger_k$} \nu(s_{0,k}\otimes [\mathcal{M}'])= \text{the cohomology class corresponding to } (- \gamma_\alpha^{-1}(\sum\limits_{j=1}^{i}\phi_{\alpha\beta}^{(j)} c^{(i-j)}_{k,\alpha})). \end{equation} Hence $(\mathcal{L}_{i-1}, W)$ is in the image of $h$ if and only if there is a point $\mathcal{M}'\in (\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})$ such that the identity (\ref{sk}) holds for every $k$. We now assume that $(\mathcal{L}_{i-1}, W)$ is in the image of $h$ and fix an element $(\mathcal{M}',W')$ in the fiber of $h$ over $(\mathcal{L}_{i-1}, W)$. The above argument implies that \begin{equation*} \rho_1(h^{-1}(\mathcal{L}_{i-1}, W))=\{[\mathcal{L}_{i}]\in (\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})~|~ \nu(s_{0,k}\otimes ([\mathcal{L}_{i}]-[\mathcal{M}']))=0 {\rm ~for ~every ~} k\}. \end{equation*} Passing to dual linear spaces, we deduce that $\rho_1(h^{-1}(\mathcal{L}_{i-1}, W))$ is an affine space consisting of the elements in $H^1(C,\mathcal{O}_C)$ that annihilate the image of the pairing $$\mu_{V_i}: V_i\otimes H^0(C,K\otimes L^{-1})\rightarrow H^0(C,K).$$ It follows that $\dim \rho_1(h^{-1}(\mathcal{L}_{i-1}, W))=g-(\kappa_i\cdot(l-d-1+g)-d_i).$ If $\mathcal{L}_{i}$ is an element of $\rho_1(h^{-1}(\mathcal{L}_{i-1}, W))$, then a pair $(\mathcal{L}_i,W')$ is in the fiber of $h$ over $(\mathcal{L}_{i-1}, W)$ if and only if the truncation map $H^0(\mathcal{L}_i)\rightarrow H^0(\mathcal{L}_{i-1})$ takes $W'$ into $W$. A lifting $W'$ of $W$ is determined by the preimages of the $s_{i-1,k}$ in $W'$.
By Lemma \ref{filtration lemma}, we see that any two liftings only differ by an element of $H^0(L)$. Hence we deduce that $h^{-1}(\mathcal{L}_{i-1},W)\cap\rho_1^{-1}(\mathcal{L}_i)$ is an affine space of dimension $\kappa_i\cdot l$. Thus the dimension of every nonempty fiber of the horizontal map $h$ is $g+d_i-\kappa_i\cdot (l-d-1+g)+\kappa_i\cdot l$. By Lemma \ref{dimofbundles}, we have: \begin{equation*} \begin{array}{l} \dim\widetilde{S}_{i,i,\textbf{V}}=\dim S_{i,\textbf{V}}+\kappa_i\cdot (\sum\limits_{j=1}^{i}n_j(\lambda)),\\ \dim \widetilde{S}_{i,i,\textbf{V}}\leq \dim\widetilde{S}_{i-1,i,\textbf{V}}+g+d_i-\kappa_i\cdot (l-d-1+g)+\kappa_i\cdot l,\\ \dim \widetilde{S}_{i-1,i,\textbf{V}}=\dim S_{i-1,\textbf{V}}+\kappa_i\cdot (\sum\limits_{j=1}^{i-1}n_j(\lambda)). \end{array} \end{equation*} It follows that $$\dim S_{i,\textbf{V}}-\dim S_{i-1,\textbf{V}}\leq g-\kappa_i\cdot (g-d-1)+d_i-\kappa_i\cdot n_i(\lambda).$$ This proves $(1)$. Part $(2)$ follows from $(1)$ by induction on $i$, using $\dim S_{0,\textbf{V}}=0$. \end{proof} \begin{remark}\label{rmk} From the proof, we see that equality in (1) holds if the map $h: \widetilde{S}_{i,i,\textbf{V}}^{\lambda}\rightarrow \widetilde{S}_{i-1,i,\textbf{V}}^{\lambda}$ is a surjection. In fact, equality will be achieved when we apply the above lemma in the proofs of the main theorems. \end{remark} We now prove our first main result. \noindent{\it Proof} of Theorem \textbf{A}. Let $C$ be a curve of genus $g\geq 3$. Since we fix the curve $C$, we may and will write $W^r_d$ for $W^r_d(C)$ for every $r$ and $d$. By Remark \ref{smoothness}, we know that $\Theta_{\text{sing}}=W_{g-1}^{1}=\bigcup\limits_{l\geq 2} (W_{g-1}^{l-1}\smallsetminus W_{g-1}^{l})$. To bound the dimension of $(\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})$, it is enough to bound the dimension of $(\pi^{\Theta}_m)^{-1}(W_{g-1}^{l-1}\smallsetminus W_{g-1}^ {l})$ for each $l\geq 2$.
Let $L$ be a point in $W_{g-1}^{l-1}\smallsetminus W_{g-1}^{l}$. We have seen in the proof of Theorem \ref{multiplicity thm} that $\Theta_{m,L}=\pic^{g-1}(C)_{m,L}$ for $m< l $. Hence $\dim\Theta_{m,L}=mg$ for $m<l$. We now assume that $m\geq l$. Recall that we put $C_{\lambda,m}=\{\mathcal{L}_m\in \Theta_{m,L}~|~ \mathcal{L}_m \text{ is of type } \lambda\}$, where $\lambda$ is a partition in $\Lambda_{l,m+1}$. By Lemma \ref{condition}, $\Theta_{m,L}$ is a finite union of $C_{\lambda,m}$, with $\lambda$ satisfying $\sum\limits_{i=1}^{l}{\lambda_i}\geq m+1$. In order to prove the theorem, we first bound the dimension of each $C_{\lambda,m}$. We now fix a partition $\lambda\in \Lambda_{l,m+1}$ with $\sum\limits_{i=1}^{l}{\lambda_i}\geq m+1$. Let $\kappa$ be the signature with $\kappa_i=1$ for every $i\leq \lambda_{l}-1$ and $\kappa_i=0$ for $i\geq \lambda_l$. If $\mathcal{L}_m\in C_{\lambda,m}$, we denote by $\mathcal{L}_i$ the image of $\mathcal{L}_m$ under $\rho^m_i:\pic^{g-1}(C)_m\rightarrow \pic^{g-1}(C)_i$ for every $i\leq m$. The definition of $n_k(\lambda)$ implies that $\lambda_l$ is the largest index $k$ such that $n_k(\lambda)\neq 0$. Remark \ref{image} implies that the map $\pi^{\lambda_l-1}_0: H^0(\mathcal{L}_{\lambda_l-1})\rightarrow H^0(L)$ is nonzero, while the map $\pi^{\lambda_l}_0: H^0(\mathcal{L}_{\lambda_l})\rightarrow H^0(L)$ is zero. Let $W\subset H^0(C,L)$ be a $1$--dimensional subspace contained in the image of $\pi^{\lambda_l-1}_0$. Consider the weak flag of $H^0(L)$ of signature $\kappa$, \begin{equation*} \textbf{V}_W: H^0(C,L)=V_0\supset V_1=\cdots= V_{\lambda_l-1}=W\supset V_{\lambda_l}=\cdots=V_m=0. \end{equation*} Hence $\mathcal{L}_i$ is in $S_{i,\textbf{V}_W}^\lambda$ for each $i\leq \lambda_l-1$. We thus conclude that the truncation map $\rho^m_{\lambda_{l}-1}: \pic^{g-1}(C)_{m,L}\rightarrow \pic^{g-1}(C)_{\lambda_{l}-1,L}$ maps $C_{\lambda,m}$ into $S_{\lambda_{l}-1,\kappa}^\lambda$.
Let $\text{Flag}_\kappa$ be the variety parameterizing all weak flags of signature $\kappa$. Every $1$-dimensional subspace $W$ of $H^0(C,L)$ defines a weak flag $\textbf{V}_W=\{V_i\}$ of signature $\kappa$ as above, and we thus have a bijection between $\text{Flag}_\kappa$ and ${\mathbf P}(H^0(C,L))$. We now compute the dimension of $S_{\lambda_l-1,\textbf{V}_W}^{\lambda}$. Let $s_0$ be a nonzero element of $W$. The multiplication map $$m_{s_0}: H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$$ is always injective. We thus conclude that $W\otimes H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$ is injective. Recall that $d_i$ is the dimension of the kernel of the map $$\mu_{V_i}: V_i\otimes H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C).$$ We conclude that $d_i=0$ for every $i$ with $0\leq i\leq m$. Moreover, the dual map of $m_{s_0}$, denoted by $m_{s_0}^*: H^1(C,\mathcal{O}_C)\rightarrow H^1(C,L)$, is a surjection. Lemma \ref{equationoftransitionfunction} implies that for every $i\leq \lambda_{l}-1$ and every $s_{i-1}\in H^0(\mathcal{L}_{i-1})$ which is a lifting of $s_0$, there are line bundles $\mathcal{L}_i$ over $\mathcal{L}_{i-1}$ such that $s_{i-1}$ extends to a section of $\mathcal{L}_i$. Therefore, the horizontal map $h: \widetilde{S}_{i,i,\textbf{V}_W}^\lambda\rightarrow \widetilde{S}_{i-1,i,\textbf{V}_W}^\lambda$ is a surjection. By Lemma \ref{dimension S} and Remark \ref{rmk}, we obtain that for every weak flag $\textbf{V}_W$ $$\dim S_{\lambda_l-1,\textbf{V}_W}^\lambda=(\lambda_l-1)g-\sum\limits_{k=1}^{\lambda_l-1}n_k(\lambda).$$ By Lemma \ref{dimofbundles}, we obtain that $\dim S_{\lambda_{l}-1,\kappa}^{\lambda}\leq \max\limits_{W}\{\dim S_{\lambda_l-1,\textbf{V}_W}^\lambda\}+\dim {\mathbf P} (H^0(L))$, where $W$ ranges over the $1$-dimensional subspaces of $H^0(C,L)$. We consider $C_{\lambda,m}$ as a subset of the preimage of $S_{\lambda_{l}-1,\kappa}^{\lambda}$ under the map $\rho^m_{\lambda_l-1}:\pic^{g-1}(C) _m \rightarrow \pic^{g-1}(C)_{\lambda_l-1}$.
Hence \begin{equation*} \begin{array}{cl} \dim C_{\lambda,m} & \leq g\cdot (m-\lambda_l+1)+\max\limits_{W}\{\dim S_{\lambda_l-1,\textbf{V}_W}^\lambda\}+\dim {\mathbf P}(H^0(L)) \\ &=mg-\sum\limits_{j=1}^{\lambda_l-1}n_j(\lambda)+l-1\\ &=mg-(\sum\limits_{i=1}^{l}\lambda_i-r_{\lambda_l}(\lambda))+l-1 \end{array} \end{equation*} Martens' theorem says that for every smooth curve of genus $g\geq 3$, and every $d$ and $r$ with $2\leq d\leq g-1$ and $0< 2r\leq d$, we have $\dim W^r_d(C)\leq d-2r$. (See \cite{Kem}.) For $1\leq m<l$, we have \begin{equation*} \begin{array}{cl} \dim(\pi^{\Theta}_m)^{-1}(W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})&=\dim{(W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})}+ mg\\ &\leq g-1-2(l-1)+mg\\ &=(m+1)(g-1)+(m-2(l-1))\\ &\leq (m+1)(g-1)-m \end{array} \end{equation*} For $m\geq l$, we have \begin{equation}\label{star}\tag{$\ast$} \begin{array}{cl} \dim(\pi^{\Theta}_m)^{-1}(W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})&\leq \max\limits_{\lambda}\left\{\dim(C_{\lambda,m})+g-2l+1\right\}\\ &\leq \max\limits_{\lambda} \left\{(m+1)(g-1)-(\sum\limits_{i=1}^{l}\lambda_i-m-1)-(l-r_{\lambda_l}(\lambda))\right\} \end{array} \end{equation} where $\lambda$ varies over partitions in $\Lambda_{l,m+1}$ with $\sum_{i=1}^{l}\lambda_i\geq m+1$. We conclude that for every $m$, we have $\dim(\pi^{\Theta}_m)^{-1}(W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})\leq (m+1)(g-1)$. Furthermore, if equality is achieved for some $m$, then there is $\lambda\in \Lambda_{l,m+1}$ such that $\sum\limits_{i=1}^{l}\lambda_i=m+1$ and $l=r_{\lambda_l}(\lambda)$, i.e. $\lambda_1=\cdots=\lambda_l$. It follows that for $m$ such that $m+1$ is not divisible by any integer $l\in [2,g-1]$, the set $(\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})=\bigcup\limits_{l\geq 2}(\pi^{\Theta}_m)^{-1}(W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})$ has dimension smaller than $(m+1)(g-1)$. Hence $\Theta_m$ is irreducible for arbitrarily large $m$, which implies that $\Theta_m$ is irreducible for all $m$. (See \cite[Proposition 1.6]{Mus1}.)
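The bookkeeping identity $\sum\limits_{j=1}^{\lambda_l-1}n_j(\lambda)=\sum\limits_{i=1}^{l}\lambda_i-r_{\lambda_l}(\lambda)$ used in the first display above can be checked exhaustively on small partitions. The Python sketch below assumes the conventions of this paper: a partition is a non-decreasing tuple of positive integers, $n_j(\lambda)=\#\{i~|~\lambda_i\geq j\}$, and $r_{\lambda_l}(\lambda)$ counts the parts equal to the largest part $\lambda_l$ (the helper names are ours):

```python
from itertools import combinations_with_replacement

def n(j, lam):
    # n_j(lambda): the number of parts of lambda that are >= j
    return sum(1 for part in lam if part >= j)

def check(lam):
    # verifies sum_{j=1}^{lambda_l - 1} n_j(lambda) = sum_i lambda_i - r_{lambda_l}(lambda)
    top = max(lam)
    lhs = sum(n(j, lam) for j in range(1, top))
    r_top = sum(1 for part in lam if part == top)
    return lhs == sum(lam) - r_top

# exhaustive test over all non-decreasing tuples with parts in 1..6 and length <= 4
assert all(check(lam)
           for length in range(1, 5)
           for lam in combinations_with_replacement(range(1, 7), length))
```

The identity is an immediate consequence of $\sum_{j=1}^{\lambda_l}n_j(\lambda)=\sum_i\lambda_i$ together with $n_{\lambda_l}(\lambda)=r_{\lambda_l}(\lambda)$; the exhaustive range in the test is arbitrary.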
This implies that $$\dim (\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})\leq (m+1) (g-1)-1$$ for every $m$. In order to get the lower bound for $\dim (\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})$, we need the following lemma, see \cite[Proposition 1.6]{Mus1}. \begin{lemma}\label{count more} If $X$ is a locally complete intersection variety of dimension $n$ and $Z\subset X$ is a closed subscheme, then $\dim(\pi^X_{m+1})^{-1}(Z)\geq \dim(\pi^X_{m})^{-1}(Z) +n$ for every $m\geq 1$. \end{lemma} If $C$ is a hyperelliptic curve, we show that $\dim (\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})=(m+1)(g-1)-1$ for every $m\geq 1$. By \cite[\S VI.4]{ACGH}, we know that $\Theta_{\text{sing}}$ has dimension equal to $g-3$, which implies that $\dim (\pi^{\Theta}_{1})^{-1}(\Theta_{\text{sing}})=g-3+g=2(g-1)-1$. This proves the assertion for $m=1$. A repeated application of Lemma \ref{count more} implies that for every $L\in \Theta_{\text{sing}}$ and every $m\geq 1$, $\dim (\pi^{\Theta}_m)^{-1}(L)\geq (m-1)(g-1)+\dim(\pi_1^{\Theta})^{-1}(L)\geq (m-1)(g-1)+g$. Therefore $$\dim (\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})\geq\dim \Theta_{\text{sing}}+(m-1)(g-1)+g=(m+1)(g-1)-1.$$ This completes the proof of the theorem for hyperelliptic curves. We now assume that $C$ is a nonhyperelliptic curve of genus $g$, and show that $\dim (\pi^{\Theta}_{m})^{-1}(\Theta_{\text{sing}})=(m+1)(g-1)-2$ for every $m\geq 1$. By \cite[\S VI.4]{ACGH}, we have $\dim\Theta_{\text{sing}}=g-4$, hence $\dim (\pi^{\Theta}_{1})^{-1}(\Theta_{\text{sing}})=g-4+g=2(g-1)-2$. This proves the assertion for $m=1$. A repeated application of Lemma \ref{count more} implies that for every $L\in \Theta_{\text{sing}}$ and every $m\geq 1$, $\dim (\pi^{\Theta}_m)^{-1}(L)\geq (m-1)(g-1)+\dim(\pi_1^{\Theta})^{-1}(L)$. We thus have $\dim (\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})\geq g-4+(m-1)(g-1)+g=(m+1)(g-1)-2$.
In order to finish the proof, it is enough to show that $$\dim (\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})<(m+1) (g-1)-1$$ for every $m$. Assume there is some $m_{0}$ such that $\dim (\pi^{\Theta}_{m_{0}})^{-1}(\Theta_{\text{sing}})\geq (m_{0}+1)(g-1)-1$. A repeated application of Lemma \ref{count more} implies that for every $m>m_{0}$, $$\dim (\pi^{\Theta}_{m})^{-1}(\Theta_{\text{sing}})\geq (m-m_0)(g-1)+\dim (\pi^{\Theta}_{m_{0}})^{-1}(\Theta_{\text{sing}})\geq(m+1)(g-1)-1.$$ On the other hand, for nonhyperelliptic curves, Martens' theorem gives a better bound, namely $\dim W^{r}_{d}\leq d-2r-1$. Applying it to the theta divisor, we have $$\dim (W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})\leq g-2l.$$ Arguing as in (\ref{star}), we obtain \begin{eqnarray*} \dim(\pi^{\Theta}_m)^{-1}(W^{l-1}_{g-1}\smallsetminus W^{l}_{g-1})\leq \max\limits_{\lambda}\left\{(m+1)(g-1)-(\sum\limits_{i=1}^{l}\lambda_i-m-1)-(l-r_{\lambda_l}(\lambda))-1\right\} \end{eqnarray*} where $\lambda$ varies over partitions in $\Lambda_{l,m+1}$ with $\sum_{i=1}^l\lambda_i\geq m+1$. It follows that unless there is a $\lambda\in \Lambda_{l,m+1}$ with $\sum\limits_{i=1}^{l}\lambda_i=m+1$ and $r_{\lambda_l}(\lambda)=l$, we have $$\dim(\pi^{\Theta}_m)^{-1}(\Theta_{\text{sing}})< (m+1)(g-1)-1.$$ This strict inequality therefore holds for every $m$ such that $m+1$ is not divisible by any integer $2\leq l\leq g-1$. Since there are arbitrarily large such $m$, we obtain a contradiction. \hfill{$\Box$} In \cite{Mus1}, Musta\c{t}\u{a} describes complete intersection rational singularities in terms of jet schemes as follows. If $X$ is a local complete intersection variety of dimension $n$ over $k$, then the following are equivalent: \begin{enumerate} \item[(i)] $X$ has rational singularities. \item[(ii)] $X$ has canonical singularities. \item[(iii)] $X_m$ is irreducible for each $m$. \item[(iv)] $\dim \pi_m^{-1}(X_{\text{sing}})< n(m+1)$ for every $m$.
\end{enumerate} The equivalence of the first two parts is due to Elkik, see \cite{Elk}. Note also that by Theorem 3.3 in \cite{EMY}, for a reduced irreducible divisor $D$ on a smooth variety $X$ of dimension $n$, the following are equivalent: \begin{enumerate} \item[(i)] The jet scheme $D_m$ is a normal variety for every $m$. \item[(ii)] $D$ has terminal singularities. \item[(iii)] For every $m$, $\dim (\pi^{D}_m)^{-1}(D_{\text{sing}})\leq (m+1)(n-1)-2$. \end{enumerate} Applying these two results to the theta divisor, we obtain the following result concerning its singularities. \begin{corollary}\label{terminal and ratioanl} Let $C$ be a smooth projective curve of genus $g\geq 3$ over $k$. The theta divisor has terminal singularities if $C$ is a nonhyperelliptic curve. If $C$ is hyperelliptic, then the theta divisor has canonical non-terminal singularities. In particular, the theta divisor has rational singularities for every smooth curve. \end{corollary} We now apply the above ideas to compute the log canonical threshold of the pair $(\pic^d(C), W^r_d(C))$ at a point $L\in W^r_d(C)$, where $C$ is general in the moduli space of curves. In \cite[Corollary 3.6]{Mmus2}, the following formula is given for the log canonical threshold of a pair in terms of the dimensions of jet schemes. If $Y\subset X$ is a closed subscheme and $Z\subset X$ is a nonempty closed subset, then the log canonical threshold of the pair $(X,Y)$ at $Z$ is given by $$\lct_Z(X,Y)=\dim X-\sup\limits_{m\geq 0}\frac{\dim (\pi^Y_m)^{-1}(Y\cap Z)}{m+1}.$$ For every $L\in W^r_d(C)$, the above formula implies that $$\lct_L(\pic^d(C),W^r_d(C))=g-\sup\limits_{m\geq 0} \frac{\dim W^r_d(C)_{m,L}}{m+1}.$$ Our main goal is to estimate the dimension of $W^r_d(C)_{m,L}$ for each $m$. We now turn to the proof of Theorem \textbf{B}. Let $C$ be a general smooth projective curve of genus $g$ and let $L$ be a line bundle on $C$.
The generality assumption on $C$ implies that the natural pairing $$\mu_0: H^0(C, L)\otimes H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$$ is injective for every $L$. This was stated by Petri and first proved by Gieseker \cite{Gie}. Before proving the theorem, we record an identity for partitions. \begin{lemma}\label{idforpart} Let $\lambda\in \Lambda_{l,m+1}$ and set $\lambda_0=0$. Then $$\sum\limits_{i=1}^{\lambda_l} n_i^2(\lambda)=\sum\limits_{i=1}^{l}(l-i+1)^2(\lambda_i-\lambda_{i-1}).$$ \end{lemma} \begin{proof} Given a partition $\lambda\in \Lambda_{l,m+1}$, we may write it as \begin{eqnarray*} 1\leq \lambda_1=\cdots=\lambda_{m_1}<\lambda_{m_1+1}=\cdots=\lambda_{m_2}<\cdots<\lambda_{m_{k-1}+1}=\cdots=\lambda_{m_k}<\lambda_{m_{k}+1}=\cdots=\lambda_l. \end{eqnarray*} For simplicity, we write $n_i$ for $n_{i}(\lambda)$. It is easy to see that \begin{eqnarray*} n_1&=&\cdots=n_{\lambda_{m_1}}=l\\ n_{\lambda_{m_1}+1}&=&\cdots=n_{\lambda_{m_2}}=l-m_1\\ ~~~\cdots\\ n_{\lambda_{m_{k}}+1}&=&\cdots=n_{\lambda_{l}}=l-m_{k} \end{eqnarray*} This implies that \begin{eqnarray*} \sum\limits_{i=1}^{l}(l-i+1)^2(\lambda_i-\lambda_{i-1})&=& l^2(\lambda_{1})+(l-m_1)^2(\lambda_{m_1+1}-\lambda_{m_1})+\cdots+(l-m_{k})^2(\lambda_{m_k+1}-\lambda_{m_k})\\ &=&l^2(\lambda_{m_1})+(l-m_1)^2(\lambda_{m_2}-\lambda_{m_1})+\cdots+(l-m_{k})^2(\lambda_{l}-\lambda_{m_k})\\ &=&\sum\limits_{i=1}^{\lambda_{m_1}}l^2+\sum\limits_{i=\lambda_{m_1}+1}^{\lambda_{m_2}}n_i^2+\cdots+\sum\limits_{i={\lambda_{m_{k}}+1}}^{\lambda_l}n_i^2\\ &=&\sum\limits_{i=1}^{\lambda_l}n_i^2=\sum\limits_{i=1}^{\lambda_l}n_i^2(\lambda) \end{eqnarray*} \end{proof} \noindent{\it Proof} of Theorem B. Let $C$ be a general smooth projective curve in the sense of Petri and Gieseker. Let $L$ be a line bundle in $W^r_d(C)$ with $l=h^0(C,L)\geq r+1$. Since we are only interested in the asymptotic behavior of $W^r_d(C)_{m,L}$, we may assume that $m\geq l$.
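Since Lemma \ref{idforpart} is purely combinatorial, it can be verified exhaustively on small partitions. A minimal Python sketch, writing partitions as non-decreasing tuples with $n_i(\lambda)=\#\{k~|~\lambda_k\geq i\}$ (the helper names are ours):

```python
from itertools import combinations_with_replacement

def n(i, lam):
    # n_i(lambda) = #{k : lambda_k >= i}
    return sum(1 for part in lam if part >= i)

def idforpart(lam):
    # checks sum_{i=1}^{lambda_l} n_i(lambda)^2
    #      = sum_{i=1}^{l} (l - i + 1)^2 (lambda_i - lambda_{i-1})
    l, top = len(lam), lam[-1]      # lam is non-decreasing, so lambda_l = lam[-1]
    lhs = sum(n(i, lam) ** 2 for i in range(1, top + 1))
    padded = (0,) + tuple(lam)      # lambda_0 = 0
    rhs = sum((l - i + 1) ** 2 * (padded[i] - padded[i - 1]) for i in range(1, l + 1))
    return lhs == rhs

# exhaustive test over all non-decreasing tuples with parts in 1..6 and length <= 4
assert all(idforpart(lam)
           for length in range(1, 5)
           for lam in combinations_with_replacement(range(1, 7), length))
```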
By Lemma \ref{conditionw}, we have a stratification $W^r_d(C)_{m,L}= \bigcup C_{\lambda,m}$, where $\lambda$ ranges over the partitions in $\Lambda_{l,m+1}$ satisfying $\sum\limits_ {i=1}^{l-r }{\lambda_i}\geq m+1$. We now fix such a partition $\lambda$. Let $\kappa$ be the signature with $\kappa_i=n_{i+1}(\lambda)$ for every $i$ with $1\leq i\leq m$. For every $\mathcal{L}_m\in C_{\lambda,m}$, we denote by $\mathcal{L}_i$ the image of $\mathcal{L}_m$ under the truncation $\rho^m_{i}: \pic^d(C)_m\rightarrow \pic^d(C)_i$. The images $V_i$ of the maps $\pi^i_0: H^0({\mathcal{L}_i})\rightarrow H^0(C,L)$ give a weak flag $\textbf{V}_{\mathcal{L}_m}$. By Remark \ref{image}, we obtain $\dim V_i=h^0(\mathcal{L}_i)-h^0(\mathcal{L}_{i-1})=n_{i+1}(\lambda)$. Therefore $\textbf{V}_{\mathcal{L}_m}$ is a weak flag of signature $\kappa$. The image of $\mathcal{L}_m$ in $\pic^d(C)_{\lambda_l-1}$ is in $S_{\lambda_l-1,\textbf{V}_{\mathcal{L}_m}}^\lambda$. Hence the truncation map $\rho_{\lambda_{l}-1}^{m}: \pic^d(C)_{m} \rightarrow \pic^d(C)_{\lambda_{l}-1}$ maps $C_{\lambda,m}$ to $S_{\lambda_l-1, \kappa}^\lambda=\bigcup\limits_{\textbf{V}}S_{\lambda_l-1,\textbf{V}}^\lambda$, where $\textbf{V}$ varies over all weak flags of signature $\kappa$. The key step is to compute the dimension of $S_{\lambda_l-1,\textbf{V}}^\lambda$ for each $\textbf{V}$. We keep the notation of the proof of Lemma \ref{dimension S}. The fact that the canonical pairing $$\mu_0: H^0(C, L)\otimes H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$$ is injective implies that all restrictions $\mu_{V_i}: V_i\otimes H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$ are injective. Hence $d_i=\dim \ker \mu_{V_i}$ is zero for every weak flag $\textbf{V}$ of $H^0(L)$ of signature $\kappa$.
We now show that if the canonical pairing $\mu_0$ is injective, then all the horizontal maps $h: \widetilde{S}_{i,i,\textbf{V}}^\lambda\rightarrow \widetilde{S}_{i-1,i, \textbf{V}}^\lambda$ in the proof of Lemma \ref{dimension S} are surjective. Let $(\mathcal{L}_{i-1}, W)$ be an element of $\widetilde{S}_{i-1,i,\textbf{V}}^\lambda$. Given a point $\mathcal{M}$ in the fiber of $\rho^i_{i-1}: \pic^d(C)_i\rightarrow \pic^d(C)_{i-1}$ over $\mathcal{L}_{i-1}$, we get an isomorphism $(\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})\cong H^1(C,\mathcal{O}_C)$. Let $\{s_{0,p}\}_p$ be a basis of $V_i$, and $s_{i-1,p}$ the lifting of $s_{0,p}$ to the level $i-1$ in $W$. It is easy to see that $(\mathcal{L}_{i-1}, W)$ is in the image of $h$ if and only if there is an element $\mathcal{L}_i\in (\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})$ such that for every $p$, the section $s_{i-1,p}$ has an extension to a section of $\mathcal{L}_{i}$. By Lemma \ref{equationoftransitionfunction}, we deduce that for every $p$, $s_{i-1,p}$ has an extension to a section of $\mathcal{L}_i$ if and only if an equation of the following form holds: \begin{equation}\label{ssk}\tag{$\diamond_p$} \nu(s_{0,p}\otimes [\mathcal{L}_i])=\tau_p, \end{equation} where $\tau_p$ is a cohomology class in $H^1(C,L)$ determined by the section $s_{i-1,p}$. In order to prove that $(\mathcal{L}_{i-1},W)$ is in the image of $h$, it suffices to show the existence of an element $\mathcal{L}_i\in (\rho^i_{i-1})^{-1}(\mathcal{L}_{i-1})$ such that the equation (\ref{ssk}) holds for every $p$. Recall that $H^1(C,\mathcal{O}_C)$ is the dual space of $H^0(C,K_C)$, hence we identify $[\mathcal{L}_i]$ with a linear map $H^0(C,K_C)\rightarrow k$. By the duality between $H^1(C,L)$ and $H^0(C,K_C\otimes L^{-1})$, we identify $\tau_p$ with a linear map $H^0(C,K_C\otimes L^{-1})\rightarrow k$.
For every $p$, there is a map $m_{s_{0,p}}: H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$ taking $\gamma \in H^0(C,K_C\otimes L^{-1})$ to $\mu_{0}(s_{0,p}\otimes \gamma)$. Hence the equation (\ref{ssk}) holds for $\mathcal{L}_i$ for all $p$ if and only if the composition $$H^0(C,K_C\otimes L^{-1})\stackrel{m_{s_{0,p}}}\hookrightarrow H^0(C,K_C)\stackrel{[\mathcal{L}_i]}\rightarrow k$$ is equal to $\tau_p$ for all $p$. Let $A_p$ be the image of $m_{s_{0,p}}$. The fact that $V_i \otimes H^0(C,K_C\otimes L^{-1})\rightarrow H^0(C,K_C)$ is injective implies that the sum $\sum\limits_{p}A_p$ in $H^0(C,K_C)$ is a direct sum. We conclude that there is a \v{C}ech cohomology class $[\mathcal{L}_i]$ satisfying (\ref{ssk}) for all $p$. Therefore $(\mathcal{L}_{i-1}, W)$ is in the image of $h$. Applying Lemma \ref{dimension S} and Remark \ref{rmk} to the case $i={\lambda_l-1}$, we obtain \begin{equation*} \dim S_{\lambda_l-1,\textbf{V}}^{\lambda}=g(\lambda_l-1)-\sum\limits_{i=2}^{\lambda_l}n_{i}(\lambda)(g-d-1)- \sum\limits_{i=1}^{\lambda_l-1}n_{i+1}(\lambda)n_{i}(\lambda). \end{equation*} Recall that $\text{Flag}_\kappa$ is the variety parameterizing all weak flags of signature $\kappa$. We denote by $D_{\kappa}$ the dimension of $\text{Flag}_{\kappa}$. It is easy to see that $\text{Flag}_\kappa$ is exactly the usual flag variety of signature $\kappa'$, where $\kappa'$ is the longest strictly decreasing subsequence of $\kappa$. Since $\kappa_1=n_2(\lambda)\leq n_1(\lambda)=l=h^0(C,L)$, there are only finitely many strictly decreasing sequences with values $\leq l$ and length $\leq l$. There are thus only finitely many integers $D_{\kappa}$. Let $K_1$ be the maximal value among these numbers. Clearly $K_1$ only depends on $l$. In particular, it is independent of $m$, hence $\lim\limits_{m\rightarrow \infty}\frac{K_1}{m}=0$.
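The finiteness of the constants $D_\kappa$ can be made concrete. Assuming the standard dimension formula for a partial flag variety of nested subspaces of dimensions $d_1>\cdots>d_k$ in an $l$-dimensional space, namely $\sum_i d_i(d_{i-1}-d_i)$ with $d_0=l$, a short enumeration in Python (a sketch; the function names are ours) computes $K_1$ and shows that the maximum is attained by the complete flag, so that $K_1=l(l-1)/2$:

```python
from itertools import combinations

def flag_dim(dims, l):
    # dimension of the partial flag variety of nested subspaces of dimensions
    # dims[0] > dims[1] > ... in an l-dimensional space:
    # sum_i d_i (d_{i-1} - d_i) with d_0 = l  (standard fiber-bundle count)
    prev, total = l, 0
    for d in dims:
        total += d * (prev - d)
        prev = d
    return total

def K1(l):
    # maximum of D_kappa over all strictly decreasing sequences with values in 1..l
    return max(flag_dim(dims, l)
               for k in range(1, l + 1)
               for dims in combinations(range(l, 0, -1), k))

# the maximum is attained by the complete flag, of dimension l(l-1)/2
assert all(K1(l) == l * (l - 1) // 2 for l in range(1, 7))
```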
Note that $S_{\lambda_l-1,\kappa}^\lambda=\bigcup\limits_{\textbf{V}}S_{\lambda_l-1,\textbf{V}}^\lambda$, where $\textbf{V}$ varies over all weak flags in $\text{Flag}_\kappa$. We thus have $\dim S_{\lambda_l-1,\kappa}^\lambda\leq \max\limits_{\textbf{V}}\{\dim S_{\lambda_l-1,\textbf{V}}^\lambda\}+K_1$. We have seen that $\rho^m_{\lambda_l-1}(C_{\lambda,m})\subset S_{\lambda_l-1,\kappa}^\lambda$, hence $\dim C_{\lambda,m}\leq(m-\lambda_{l}+1)g+ \dim S_{\lambda_l-1,\kappa}^\lambda$. For each $m\geq l$, we thus have \begin{eqnarray*} \codim(\pic^d(C)_{m,L}, W^r_d(C)_{m,L}) &=& \min\limits_\lambda\left\{mg-\dim C_{\lambda,m}\right\}\\ &\geq& \min\limits_\lambda\left\{\sum\limits_{i=2}^{\lambda_l}n_i(\lambda)(g-d-1)+ \sum\limits_{i=1}^{\lambda_l-1}n_{i}(\lambda)n_{i+1}(\lambda)-K_1\right\}\\ &=& \min\limits_\lambda\left\{(\sum\limits_{i=1}^{l}\lambda_i-l)(g-d-1)+\sum\limits_{i=1}^{\lambda_l-1}n_{i}(\lambda)n_{i+1}(\lambda)-K_1\right\} \end{eqnarray*} where $\lambda$ varies over the partitions in $\Lambda_{l,m+1}$ with $\sum\limits_ {i=1}^{l-r}{\lambda_i}\geq m+1$. 
Note that $\sum\limits_{i=1}^{\lambda_l-1}n_i(\lambda)n_{i+1}(\lambda)\geq \sum\limits_{i=1}^{\lambda_l}n_i^2(\lambda)-l^2$, and since $\lim\limits_{m\rightarrow \infty}\frac{l^2}{m}=0=\lim\limits_{m\rightarrow \infty}\frac{K_1}{m}$, we obtain \begin{eqnarray*} &&\liminf\limits_{m\rightarrow \infty} \frac{\codim (\pic^d(C)_{m,L}, W^r_d(C)_{m,L})}{m+1} \\ &\geq& \liminf\limits_{m\rightarrow \infty}\min\limits_\lambda\left\{\frac{1}{m+1}\left((\sum\limits_{i=1}^{l}\lambda_i-l)(g-d-1)+\sum\limits_{i=1}^{\lambda_l-1}n_{i+1}(\lambda)n_i(\lambda)-K_1\right)\right\}\\ &\geq&\liminf\limits_{m\rightarrow \infty}\min\limits_\lambda\left\{\frac{1}{m+1}\left((\sum\limits_{i=1}^{l}\lambda_i) (g-d-1)+\sum\limits_{i=1}^{\lambda_l} n_i^2(\lambda)\right)\right\} \end{eqnarray*} By Lemma \ref{idforpart}, we thus obtain \begin{eqnarray*} &&\liminf\limits_{m\rightarrow \infty} \frac{\codim (\pic^d(C)_{m,L}, W^r_d(C)_{m,L})}{m+1}\\ &\geq & \liminf\limits_{m\rightarrow \infty}\min\limits_\lambda\left\{\frac{1}{m+1} \left(\sum\limits_{i=1}^{l}(l-i+1)(\lambda_i-\lambda_{i-1})(g-d-1)+ \sum\limits_{i=1}^{l}(l-i+1)^2(\lambda_i-\lambda_{i-1})\right) \right\}\\ &=&\liminf\limits_{m\rightarrow \infty}\min\limits_\lambda\left\{\frac{1}{m+1}\sum\limits_{i=1}^{l}(\lambda_i-\lambda_{i-1})(g-d+l-i)(l-i+1) \right\} \end{eqnarray*} For every $i$ with $1\leq i\leq l$, let $x_i=\lambda_i-\lambda_{i-1}$. Consider a linear function of the form $\sum\limits_{i=1}^{l}b_ix_i$ with $b_i\geq 0$, defined over the region $$\{(x_1,\cdots,x_l)\in \mathbb{R}^l~ |~ x_i\geq 0 {\rm ~for~every ~} i, ~\sum\limits_{i=1}^{l-r}(l-i-r+1)x_i\geq m+1\}.$$ The minimum of this function is achieved at a vertex of this region, i.e.\ at a point with $x_i=0$ for all but one index $i\leq l-r$, and $\sum\limits_{i=1}^{l-r}(l-i-r+1)x_i=m+1$.
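The reduction to vertices is an elementary linear-programming fact and can be stress-tested by brute force. In the Python sketch below, the values of $g,d,l,r$ and $M$ (standing for $m+1$) are hypothetical test data, chosen so that the vertices $x_i=M/c_i$ are integral; the brute-force minimum over integer points agrees with the minimum over the vertices, and dividing by $M$ recovers the minimum appearing in (\ref{rhs}) below:

```python
from itertools import product

# hypothetical Brill-Noether data, chosen only for this test
g, d, l, r = 8, 6, 4, 1

b = [(g - d + l - i) * (l - i + 1) for i in range(1, l + 1)]  # objective coefficients b_i
c = [l - i - r + 1 for i in range(1, l - r + 1)]              # constraint coefficients

M = 6  # plays the role of m + 1; divisible by every c_i, so the vertices are integral

# brute-force minimum of sum_i b_i x_i over the integer points of the region
brute = min(sum(bi * xi for bi, xi in zip(b, x))
            for x in product(range(M + 1), repeat=l)
            if sum(ci * xi for ci, xi in zip(c, x)) >= M)

# minimum over the vertices: x_j = 0 for j != i and x_i = M / c_i for a single i <= l - r
vertex = min(b[i] * (M // c[i]) for i in range(l - r))

assert brute == vertex

# dividing by M recovers the right-hand side of the lower bound for the lct
sharp = min((l + 1 - i) * (g - d - i + l) / (l + 1 - r - i) for i in range(1, l - r + 1))
assert brute / M == sharp
```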
We thus have \begin{equation}\label{rhs}\tag{$\sharp$} \lct_L(\pic^d(C),W^r_d(C))\geq\min\limits_{1\leq i\leq l-r}\left\{\frac{(l+1-i)(g-d-i+l)}{l+1-r-i}\right\} \end{equation} On the other hand, recall that one can locally define a map from $\pic^d(C)$ to a variety of matrices $M_{(d+e+1-g)\times e}$ such that $W^r_d(C)$ is the pullback of a suitable generic determinantal variety $Y$ defined by $e+d+1-g-r$ minors. Let $\Phi_{L}$ be the image of $L$. The right hand side in (\ref{rhs}) is the log canonical threshold of the pair $(M_{(d+e+1-g)\times e},Y)$ at the point $\Phi_{L}$ (for the formula for the log canonical threshold of a generic determinantal variety, see \cite[Theorem 3.5.7]{Doc}). We thus have $\lct_L(\pic^d(C),W^r_d(C)) \leq \lct_{\Phi_{L}}(M_{(d+e+1-g)\times e},Y)$, by \cite[Example 9.5.8]{Lar}, which completes the proof. \hfill{$\Box$} \section{Appendix} Let $L$ be a line bundle in $\pic^d(C)$ with $l=h^0(C,L)$. In this section, we show that the subsets $S_{i,\textbf{V}}^{\lambda}$ and $S_{i,\kappa}^{\lambda}$ defined in section $2$ are constructible subsets of $\pic^d(C)_{i,L}$. The key point is to realize $\widetilde{S}_{i,j,\textbf{V}}^{\lambda}$ as a constructible subset of a suitable product of Grassmann bundles. Let $X$ be a scheme and $E$ a vector bundle of rank $n$ over $X$. For every $d\leq n$, we denote by $Gr(d,E)$ the Grassmann bundle of $d$-dimensional subspaces in $E$ and by $\pi$ the projection morphism from $Gr(d,E)$ to $X$. We write elements of $Gr(d,E)$ as pairs $(x,W)$, where $x$ is a point in $X$ and $W$ is a $d$-dimensional subspace of $E_x$. \begin{lemma}\label{degeneracy} If $\Phi: E\rightarrow F$ is a homomorphism of vector bundles on the scheme $X$, then we have \begin{enumerate} \item The subset $I_{\Phi}:=\{x\in X~|~ \Phi_x: E_x\rightarrow F_x ~\text{is an injection}\}$ is an open subset of $X$.
\item If $H$ is a subbundle of $F$, then the set $M^{\Phi}_{H}:=\{x~|~\Phi_x(E_x)\subset H_x\}$ is a closed subset of $X$. \end{enumerate} \end{lemma} The proof of Lemma \ref{degeneracy} is standard, so we leave it to the reader. Recall that $\mathcal{P}$ is a Poincar\'{e} line bundle on $\pic^d(C)\times C$. From the definition of jet schemes, we have $\pic^d(C)_m\times C_m\cong (\pic^d(C)\times C)_m\cong \Hom(T_m, \pic^d(C)\times C)$. By the adjunction (\ref{adjunction}) in section $1$ for $Y=\pic^d(C)_m\times C_m$ and $X=\pic^d(C)\times C$, the identity map of $\pic^d(C)_m\times C_m$ gives an evaluation morphism $\pic^d(C)_m\times C_m\times T_m\xrightarrow{\Xi} \pic^d(C)\times C$. For every $m$, we also have a morphism $C\xrightarrow{\gamma_m} C_m$ that takes a point to the corresponding constant jet. We have the composition map $$\eta: \pic^d(C)_m\times C\times T_m\xrightarrow{\id\times \gamma_m \times \id} \pic^d(C)_m\times C_m\times T_m\xrightarrow{\Xi} \pic^d(C)\times C.$$ We denote by $\mathcal{B}_m$ the pullback of the line bundle $\mathcal{P}$ to $\pic^d(C)_m\times C\times T_m$ via $\eta$. Recall that for every partition $\lambda$ in $\Lambda_{l,m+1}$, $C_{\lambda,m}$ is the locally closed subset $$\{\mathcal{L}_m\in \pic^d(C)_{m,L}~|~ \mathcal{L}_m \text{ is of type } \lambda\}.$$ For every $0\leq i\leq m$, there is a natural map $\Lambda_{l,m+1}\rightarrow \Lambda_{l,i+1}$ mapping $\lambda$ to $\overline{\lambda}$, where $\overline{\lambda}_k=\min\{\lambda_k, i+1\}$ for each $k\leq l$. We have seen that $\rho^m_i: \pic^d(C)_{m,L} \rightarrow \pic^d(C)_{i,L}$ maps $C_{\lambda,m}$ to $C_{\overline{\lambda},i}$. We now fix a partition $\lambda\in \Lambda_{l,m+1}$. We denote by $\mathcal{B}_{\lambda,m}$ the restriction of $\mathcal{B}_m$ to the subscheme $C_{\lambda,m}\times C\times T_m$, where on $C_{\lambda,m}$ we consider the reduced scheme structure.
We denote by $p_1$ the projection to the first factor $\pic^d(C)_{m,L}\times C\times T_m\rightarrow \pic^d(C)_{m,L}$. It is easy to check that for every $\mathcal{L}_m\in \pic^d(C)_{m,L}$ corresponding to a morphism $f: T_m\rightarrow \pic^d(C)$, the restriction of $\mathcal{B}_{m}$ to the fiber $p_1^{-1}(\mathcal{L}_m)\cong C\times T_m$ is $(f\times \id_C)^*(\mathcal{P})\cong\mathcal{L}_m$. Recall that for every $i$ with $0\leq i\leq m$, there is a closed embedding $\iota^m_i: T_i\hookrightarrow T_m$. Let $$\nu^m_i: C_{\lambda,m}\times C\times T_i\hookrightarrow C_{\lambda,m}\times C\times T_m$$ be the induced embedding. Let $\mathcal{D}_{\lambda,i}$ be the sheaf ${p_1}_*(\nu^m_i)_*(\nu^m_i)^*(\mathcal{B}_{\lambda,m})$ on $C_{\lambda,m}$. Consider the function $C_{\lambda,m}\rightarrow {\mathbf Z}$ that takes $\mathcal{L}_m$ to $h^0(C\times T_i,\mathcal{L}_i)$, where $\mathcal{L}_i$ is the image of $\mathcal{L}_m$ in $\pic^d(C)_i\cong \pic^d(C\times T_i)$. Lemma \ref{dimension lemma} implies that this function is constant on $C_{\lambda,m}$. By the Base Change Theorem, we deduce that $\mathcal{D}_{\lambda,i}$ is a locally free sheaf of rank $\sum\limits_{j=1}^{i+1}n_j(\lambda)$ on $C_{\lambda,m}$, whose fiber over a point $\mathcal{L}_m$ is $H^0(C\times T_i,\mathcal{L}_i)$. For every $i$ and $j$ with $0\leq j\leq i\leq m$, the embedding map $\nu^m_j$ factors through $\nu^m_{i}$. We thus have a natural morphism of sheaves $$(\nu^m_{i})_*(\nu^m_{i})^*(\mathcal{B}_{\lambda,m})\rightarrow(\nu^m_j)_*(\nu^m_j)^*(\mathcal{B}_{\lambda,m})$$ on $C_{\lambda,m}\times C\times T_{m}$.
Applying $(p_1)_*$ to it, we obtain a vector bundle map $$\Phi^{i}_j:\mathcal{D}_{\lambda,i}\rightarrow \mathcal{D}_{\lambda,j}$$ on $C_{\lambda,m}$ whose restriction to the fiber over $\{\mathcal{L}_m\}$ is the truncation map $$\pi^{i}_j: H^0(\mathcal{L}_{i})\rightarrow H^0(\mathcal{L}_j).$$ For a fixed partition $\lambda\in \Lambda_{l,m+1}$, we consider a signature $\kappa=(\kappa_1, \cdots, \kappa_m)$ with $\kappa_{j}\leq n_{j+1}(\lambda)$ for every $j\leq m$. For every $i\leq m$, a point in the fiber product of Grassmann bundles $$\mathcal{G}_{\lambda,i,\kappa}:=Gr(\kappa_1, \mathcal{D}_{\lambda,1})\times_{C_{\lambda,m}} \cdots \times_{C_{\lambda,m}} Gr(\kappa_i, \mathcal{D}_{\lambda,i})$$ over $C_{\lambda,m}$ is written as an $(i+1)$-tuple $(\mathcal{L}_m;\widetilde{V}_1,\cdots, \widetilde{V}_i)$, where $\mathcal{L}_m\in C_{\lambda,m}$ and $\widetilde{V}_j$ is a $\kappa_j$-dimensional subspace of $(\mathcal{D}_{\lambda,j})|_{\mathcal{L}_m}\cong H^0(\mathcal{L}_j)$ for every $j\leq i$. For every weak flag $\textbf{V}$ of $H^0(C,L)$ of signature $\kappa$, we denote by $\mathcal{P}_{m,i,\textbf{V}}^\lambda$ the subset of points $(\mathcal{L}_m;\widetilde{V}_1,\cdots, \widetilde{V}_i)\in \mathcal{G}_{\lambda,i,\kappa}$, where $\mathcal{L}_m\in C_{\lambda,m}$ and $\{\widetilde{V}_1,\ldots, \widetilde{V}_i\}$ is a compatible extension of $\textbf{V}_{(i)}$ to the line bundle $\mathcal{L}_i$. We also write $\mathcal{P}_{m,i,\kappa}^\lambda$ for $\bigcup\limits_{\textbf{V}}\mathcal{P}_{m,i,\textbf{V}}^\lambda$, where $\textbf{V}$ varies over all weak flags of $H^0(C,L)$ of signature $\kappa$. Recall that $\text{Flag}_{\kappa}$ is the variety parameterizing weak flags of $H^0(C,L)$ of signature $\kappa$.
We denote by $\widetilde{\mathcal{P}}_{m,i,\kappa}^\lambda$ the subset of points $$(\mathcal{L}_m;\widetilde{V}_1,\cdots, \widetilde{V}_i; \textbf{V}')\in \mathcal{G}_{\lambda,i,\kappa}\times \text{Flag}_{\kappa}$$ where $\textbf{V}'\in \text{Flag}_{\kappa}$ and $(\mathcal{L}_m;\widetilde{V}_1,\cdots, \widetilde{V}_i)\in \mathcal{P}_{m,i,\textbf{V}'}^\lambda$. \begin{lemma}\label{consofgf} Let $\lambda\in \Lambda_{l,m+1}$ and $\kappa$ be a signature of length $m$ with $\kappa_j\leq n_{j+1}(\lambda)$ for every $1\leq j\leq m$. Then for every $i$ with $1\leq i\leq m$, $\widetilde{\mathcal{P}}_{m,i,\kappa}^\lambda$ is a constructible subset of $\mathcal{G}_{\lambda,i,\kappa}\times \text{Flag}_{\kappa}$. \end{lemma} \begin{proof} For simplicity, we write $X$ for the scheme $\mathcal{G}_{\lambda,i,\kappa}\times \text{Flag}_{\kappa}$. For $j$ with $1\leq j\leq i$, we denote by $p_j$ the projection of $X$ onto $Gr(\kappa_j,\mathcal{D}_{\lambda,j})$ and by $p_{i+1}$ the projection of $X$ onto $\text{Flag}_{\kappa}$. For a fixed $j$ with $1\leq j\leq i$, we denote by $q_j$ the projection $Gr(\kappa_j,\mathcal{D}_{\lambda,j})\rightarrow C_{\lambda,m}$. The composition map $$X\xrightarrow{p_j} Gr(\kappa_j,\mathcal{D}_{\lambda,j})\xrightarrow{q_j} C_{\lambda,m}$$ does not depend on the particular choice of $j$ for $j\leq i$. We denote it by $\chi$. For every $j$ with $1\leq j\leq i$, we denote by $T_j$ the tautological subbundle of $q_j^*(\mathcal{D}_{\lambda,j})$ on $Gr(\kappa_j,\mathcal{D}_{\lambda,j})$. Let $\mathcal{T}_j=p_j^*{T_j}$ and let $\mathcal{F}_j$ be the vector bundle $p_j^*{q_j^*(\mathcal{D}_{\lambda,j})}=\chi^*(\mathcal{D}_{\lambda,j})$. Hence $\mathcal{T}_j$ is a subbundle of $\mathcal{F}_j$ for each $j$.
Over a point $x=(\mathcal{L}_m;\widetilde{V}_1,\cdots, \widetilde{V}_i; \textbf{V}')\in X$, we have $\mathcal{T}_{j,x}=\widetilde{V}_{j}$ and $\mathcal{F}_{j,x}$ is $H^0(\mathcal{L}_j)$, where $\mathcal{L}_j$ is the image of $\mathcal{L}_m$ under $\pic^d(C)_{m,L}\rightarrow \pic^d(C)_{j,L}$. For every $k$ and $j$ with $0\leq k\leq j\leq i$, we write $\Psi^j_k$ for the composition $\mathcal{T}_j\hookrightarrow \mathcal{F}_j\rightarrow\mathcal{F}_k$. Let $R_1\supseteq R_2\supseteq\cdots \supseteq R_m$ be the tautological flag bundles on $\text{Flag}_{\kappa}$, where the fiber of $R_j$ over a point $\textbf{V}'=\{V'_n\}_n$ in $\text{Flag}_{\kappa}$ is $V'_j$. We write $\mathcal{R}_j$ for the pullback of $R_j$ via $p_{i+1}:X\rightarrow \text{Flag}_{\kappa}$. Over a point $x=(\mathcal{L}_m;\widetilde{V}_1,\cdots, \widetilde{V}_i; \textbf{V}')\in X$, where $\textbf{V}'=\{V'_j\}\in \text{Flag}_\kappa$, we have $\mathcal{R}_{j,x}=V'_j$. Note that $\mathcal{D}_{\lambda,0}$ is the trivial vector bundle on $C_{\lambda,m}$ with fiber $H^0(C,L)$. Hence $\mathcal{F}_0$ is a trivial bundle on $X$ with fiber $H^0(C,L)$. This implies that $\mathcal{R}_{j}$ is a subbundle of $\mathcal{F}_0$. With the notation in Lemma \ref{degeneracy}, we have \begin{eqnarray*} \widetilde{\mathcal{P}}_{m,i,\kappa}^\lambda=\bigcap\limits_{j=1}^{i} (I_{\Psi^j_0}\cap M^{\Psi^j_{j-1}}_{\mathcal{T}_{j-1}}\cap M^{\Psi^j_0}_{\mathcal{R}_j}). \end{eqnarray*} This completes the proof. \end{proof} \begin{corollary}\label{consofp} With the notation in Lemma \ref{consofgf}, let $\textbf{V}$ be a weak flag of $H^0(C,L)$ of signature $\kappa$. For every $i$ with $1\leq i\leq m$, $\mathcal{P}_{m,i,\kappa}^\lambda$ and $\mathcal{P}_{m,i,\textbf{V}}^\lambda$ are both constructible subsets of $\mathcal{G}_{\lambda,i,\kappa}$.
\end{corollary} \begin{proof} We denote by $pr_1$ and $pr_2$ the projections of $\mathcal{G}_{\lambda,i,\kappa}\times \text{Flag}_{\kappa}$ onto $\mathcal{G}_{\lambda,i,\kappa}$ and $\text{Flag}_{\kappa}$, respectively. We thus deduce that $\mathcal{P}_{m,i,\kappa}^\lambda$, as the image of $\widetilde{\mathcal{P}}_{m,i,\kappa}^\lambda$ under $pr_1$, is a constructible subset of $\mathcal{G}_{\lambda,i,\kappa}$. It is clear that $\mathcal{P}_{m,i,\textbf{V}}^\lambda$ is the image of $\widetilde{\mathcal{P}}_{m,i,\kappa}^\lambda\cap pr_2^{-1}(\textbf{V})$ under $pr_1$. Lemma \ref{consofgf} implies that $\widetilde{\mathcal{P}}_{m,i,\kappa}^\lambda\cap pr_2^{-1}(\textbf{V})$ is a constructible subset of $\mathcal{G}_{\lambda,i,\kappa}\times \text{Flag}_{\kappa}$. This completes the proof. \end{proof} \begin{corollary} Let $\kappa$ be a signature of length $m$ with $\kappa_j\leq n_{j+1}(\lambda)$ for every $j\leq m$, and let $\textbf{V}\in \text{Flag}_\kappa$. For every $i$ with $1\leq i\leq m$, the subsets $S_{i,\kappa}^{\lambda}$ and $S_{i,\textbf{V}}^{\lambda}$ are constructible subsets of $\pic^d(C)_{i,L}$. \end{corollary} \begin{proof} For a fixed $i$, let $\overline{\kappa}$ be the signature of length $i$ such that $\overline{\kappa}_j=\kappa_j$ for every $j\leq i$. Recall that $\overline{\lambda}$ is the image of $\lambda$ under $\Lambda_{l,m+1}\rightarrow \Lambda_{l,i+1}$. By the definition of $S_{i,\kappa}^{\lambda}$, we have $S_{i,\kappa}^{\lambda}=S_{i,\overline{\kappa}}^{\overline{\lambda}}$ for every $i\leq m$. It suffices to prove the assertion in the case $i=m$. Recall that $\chi$ is the projection morphism $\mathcal{G}_{\lambda,m,\kappa}\rightarrow C_{\lambda,m}$. Since $S_{m,\kappa}^{\lambda}$ is the image of $\mathcal{P}_{m,m,\kappa}^{\lambda}$ under $\chi$, Corollary \ref{consofp} shows that $S_{m,\kappa}^{\lambda}$ is a constructible subset of $\pic^d(C)_{m,L}$. The assertion for $S_{i,\textbf{V}}^{\lambda}$ is proved similarly.
\end{proof} \begin{lemma}\label{consofS} $\widetilde{S}_{i,j,\textbf{V}}^\lambda$ is a constructible subset of the Grassmann bundle $Gr(\kappa_j,\mathcal{D}_i)$ on $C_{\lambda,i}$. \end{lemma} The proof of this lemma is similar to those of Lemma \ref{consofgf} and Corollary \ref{consofp}, hence we leave it to the reader.
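The right-hand side of (\ref{rhs}) is a finite minimum and can be evaluated directly. The sketch below is only a numerical illustration of that formula; the values of $g$, $d$, $r$ and $l$ are hypothetical and it is not part of the proof.

```python
from fractions import Fraction

def lct_lower_bound(g, d, r, l):
    # Evaluate min_{i=1,...,l-r} (l+1-i)(g-d-i+l) / (l+1-r-i),
    # the right-hand side of the bound on lct_L(Pic^d(C), W^r_d(C)).
    assert l > r >= 0
    return min(Fraction((l + 1 - i) * (g - d - i + l), l + 1 - r - i)
               for i in range(1, l - r + 1))

# Hypothetical example: g = 5, d = 4, r = 1, l = 3; the candidate
# values are 9/2 (i = 1) and 4 (i = 2), so the minimum is 4.
print(lct_lower_bound(5, 4, 1, 3))  # -> 4
```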
\section{Diffractive $ep$ Scattering} It is well known that in high-energy deep-inelastic $ep$-collisions a large fraction of the observed events are diffractive. These events are defined experimentally by the presence of a forward-going system $Y$ with four-momentum $p_Y$, low mass $M_Y$ (in most cases a single proton and/or low-lying nucleon resonances), small momentum transfer squared $t=(p-p_Y)^2$, and small longitudinal momentum transfer fraction $x_{I\!\!P}=q(p-p_Y)/qp$ from the incoming proton with four-momentum $p$ to the system $X$ (see Fig.\ \ref{fig:1}). The presence of a hard scale, as for \begin{figure}[ht] \centerline{\epsfxsize=0.8\textwidth\epsfbox{fig1.eps}} \caption{\label{fig:1}Diffractive scattering process $ep\to eXY$, where the hadronic systems $X$ and $Y$ are separated by the largest rapidity gap in the final state.} \end{figure} example the photon virtuality $Q^2=-q^2$ in deep-inelastic scattering (DIS) or the large transverse jet momentum $p_T^{*}$ in the photon-proton centre-of-momentum frame, should then allow for calculations of the production cross section for the central system $X$ with the known methods of perturbative QCD. Under this assumption, the cross section for the inclusive production of two jets, $e+p \rightarrow e+2~{\rm jets}+X'+Y$, can be predicted from the well-known formul\ae\ for jet production in non-diffractive $ep$ collisions, where in the convolution of the partonic cross section with the parton distribution functions (PDFs) of the proton the latter ones are replaced by the diffractive PDFs. In the simplest approximation, they are described by the exchange of a single, factorizable pomeron/Regge-pole. 
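A worked illustration of these kinematic definitions may be helpful. The sketch below simply evaluates $t=(p-p_Y)^2$ and $x_{I\!\!P}=q\cdot(p-p_Y)/q\cdot p$ with Minkowski products; the numerical four-vectors are hypothetical and serve only to show that $|t|$ and $x_{I\!\!P}$ come out small for a quasi-elastically scattered proton.

```python
def mdot(a, b):
    # Minkowski product with metric (+,-,-,-); four-vectors are (E, px, py, pz).
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def diffractive_kinematics(p, p_Y, q):
    # t = (p - p_Y)^2 and x_pom = q.(p - p_Y) / q.p for ep -> eXY.
    dp = [pi - pyi for pi, pyi in zip(p, p_Y)]
    return mdot(dp, dp), mdot(q, dp) / mdot(q, p)

# Hypothetical HERA-like vectors: an 820 GeV proton losing a small
# longitudinal momentum fraction, and a photon with q^2 ~ 0.
p   = [820.0, 0.0, 0.0,  820.0]
p_Y = [800.0, 0.1, 0.0,  800.0]
q   = [ 10.0, 0.0, 0.0,  -10.0]
t, x_pom = diffractive_kinematics(p, p_Y, q)  # small |t|, x_pom ~ 0.024
```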
\section{Diffractive Parton Distribution Functions} The diffractive PDFs have been determined by the H1 Collaboration at HERA from high-precision inclusive measurements of the DIS process $ep\rightarrow eXY$ using the usual DGLAP evolution equations in leading order (LO) and next-to-leading order (NLO) and the well-known formula for the inclusive cross section as a convolution of the inclusive parton-level cross section with the diffractive PDFs \cite{h1ichep02}. A similar analysis of inclusive measurements has been published by the ZEUS Collaboration \cite{Chekanov:2004hy,Abramowicz:2005yc}. A longer discussion of the extraction of diffractive PDFs can be found elsewhere \cite{Newman:2005wm,Martin:2004xw}. \section{QCD Factorization in Hard Diffraction} For inclusive diffractive DIS it has been proven by Collins that the formula referred to above is applicable without additional corrections and that the inclusive jet production cross section for large $Q^2$ can be calculated in terms of the same diffractive PDFs \cite{Collins:1997sr}. The proof of this factorization formula, usually referred to as the validity of QCD factorization in hard diffraction, may be expected to hold for the direct part of photoproduction ($Q^2\simeq0$) or low-$Q^2$ electroproduction of jets \cite{Collins:1997sr}. However, factorization does not hold for hard processes in diffractive hadron-hadron scattering. The problem is that soft interactions between the two ingoing hadrons and their remnants occur in both the initial and final state. This agrees with experimental measurements at the Tevatron \cite{Affolder:2000vb}. Predictions of diffractive dijet cross sections for $p\bar{p}$ collisions as measured by CDF using the same PDFs as determined by H1 \cite{h1ichep02} overestimate the measured cross section by up to an order of magnitude \cite{Affolder:2000vb}.
This suppression of the CDF cross section can be explained by considering the rescattering of the two incoming hadron beams, which, by creating additional hadrons, destroys the rapidity gap \cite{Kaidalov:2001iz}. \section{Factorization Breaking in Diffractive Photoproduction} Processes with real photons ($Q^2 \simeq 0$) or virtual photons with fixed, but low $Q^2$ involve direct interactions of the photon with quarks from the proton as well as resolved photon contributions, leading to parton-parton interactions and an additional remnant jet coming from the photon (for a review see \cite{Klasen:2002xb}). As stated above, factorization should be valid for direct interactions, as in the case of DIS, whereas it is expected to fail for the resolved process, similarly to the hadron-hadron scattering case. In a two-channel eikonal model similar to the one used to calculate the suppression factor in hadron-hadron processes \cite{Kaidalov:2001iz}, introducing vector-meson dominated photon fluctuations, a suppression by about a factor of three for resolved photoproduction at HERA is predicted \cite{Kaidalov:2003xf}. Such a suppression factor has recently been applied to diffractive dijet photoproduction \cite{Klasen:2004tz,Klasen:2004qr} and compared to preliminary data from H1 \cite{h1ichep04} and ZEUS \cite{zeusichep04}. While at LO no suppression of the resolved contribution seemed to be necessary, the NLO corrections increase the cross section significantly, showing that factorization breaking occurs at this order at least for resolved photoproduction and that a suppression factor $R$ must be applied to give a reasonable description of the experimental data.
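Operationally, this suppression amounts to a single multiplicative rapidity-gap survival factor applied to the resolved part. A minimal sketch (the cross-section numbers are hypothetical; $R\simeq 1/3$ is the eikonal-model estimate quoted above):

```python
def dijet_xsec(sigma_direct, sigma_resolved, R=1.0):
    # Diffractive dijet cross section with the resolved photon
    # contribution multiplied by a suppression (survival) factor R.
    return sigma_direct + R * sigma_resolved

# Hypothetical contributions in pb; R = 1/3 as in the two-channel
# eikonal estimate for resolved photoproduction at HERA.
unsuppressed = dijet_xsec(50.0, 90.0)          # R = 1
suppressed   = dijet_xsec(50.0, 90.0, R=1/3)
```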
\section{Factorization Scale Dependence for Real Photons} As already mentioned elsewhere \cite{Klasen:2004tz,Klasen:2004qr}, describing the factorization breaking in hard photoproduction as well as in electroproduction at very low $Q^2$ \cite{Klasen:2004ct} by suppressing the resolved contribution only may be problematic. An indication for this is the fact that the separation between the direct and the resolved process is uniquely defined only in LO. In NLO these two processes are related. The separation depends on the factorization scheme and the factorization scale $M_{\gamma}$. The sum of both cross sections is the only physically relevant cross section, which is approximately independent of the factorization scheme and scale \cite{BKS}. As demonstrated in Fig.\ \ref{fig:2}, \begin{figure}[ht] \centerline{\epsfxsize=0.8\textwidth\epsfbox{fig2.eps}} \caption{\label{fig:2}Photon factorization scale dependence of resolved (dashed) and direct (dotted) contributions to the diffractive dijet photoproduction cross section (full curve). Also shown is the sum of the direct and suppressed resolved contribution (dot-dashed curve).} \end{figure} multiplying the resolved cross section with the suppression factor $R=0.34$ destroys the correlation of the $M_{\gamma}$-dependence between the direct and resolved part \cite{Klasen:2004tz,Klasen:2004qr}, and the sum of both parts has a stronger $M_{\gamma}$-dependence than for the unsuppressed case ($R=1$), where the $M_{\gamma}$-dependence of the NLO direct cross section is compensated to a high degree by the $M_{\gamma}$-dependence of the LO resolved part. \\ The introduction of the resolved cross section is dictated by perturbation theory. At NLO, collinear singularities arise from the photon initial state, which are absorbed at the factorization scale into the photon PDFs. This way the photon PDFs become $M_{\gamma}$-dependent. 
The equivalent $M_{\gamma}$-dependence, just with the opposite sign, is left in the NLO corrections to the direct contribution. With this knowledge, it is obvious that we can obtain a physical cross section at NLO, {\it i.e.} the superposition of the NLO direct and LO resolved cross section, with a suppression factor $R<1$ and no $M_{\gamma}$-dependence left, if we also multiply the $\ln M_{\gamma}$-dependent term of the NLO correction to the direct contribution with the same suppression factor as the resolved cross section. We are thus led to the theoretical conclusion that, contrary to what one may expect, not {\em all} parts of the direct contribution factorize. Instead, the {\em initial state} singular part appearing beyond LO breaks factorization even in direct photoproduction, presumably through soft gluon attachments between the proton and the collinear quark-antiquark pair emerging from the photon splitting. This would be in agreement with the non-cancellation of initial state singularities in diffractive hadron-hadron scattering \cite{Collins:1997sr}. \section{The Transition Region of Virtual Photoproduction} We now present the special form of the $\ln M_{\gamma}$-term in the NLO direct contribution and demonstrate that the $M_{\gamma}$-dependence of the physical cross section cancels to a large extent in the same way as in the unsuppressed case ($R=1$). These studies can be done for photoproduction ($Q^2 \simeq 0$) as well as for electroproduction with fixed, small $Q^2$. Since in electroproduction the initial-state singularity in the limit $Q^2 \rightarrow 0$ is more directly apparent than for the photoproduction case, we shall consider in this contribution the low-$Q^2$ electroproduction case just for demonstration. This diffractive dijet cross section has been calculated recently \cite{Klasen:2004ct}. 
A consistent factorization scheme for low-$Q^2$ virtual photoproduction has been defined and the full (direct and resolved) NLO corrections for inclusive dijet production have been calculated in \cite{Klasen:1997jm}. In this work we adapt this inclusive NLO calculational framework to diffractive dijet production at low-$Q^2$ in the same way as in \cite{Klasen:2004ct}, except that we multiply the $\ln M_{\gamma}$-dependent terms as well as the resolved contributions with the same suppression factor $R=0.34$, as an example, as in our earlier work \cite{Klasen:2004tz,Klasen:2004qr,Klasen:2004ct}. The exact value of this suppression factor may change in the future, when better data for photoproduction and low-$Q^2$ electroproduction have been analyzed. We present the $\ln M_{\gamma}$-dependence of the partly suppressed NLO direct and the fully suppressed NLO resolved cross section ${\rm d}\sigma/{\rm d} Q^2$ and their sum for the lowest $Q^2$ bin. \\ The NLO corrections for virtual jet photoproduction have been implemented in the NLO Monte Carlo program JET\-VIP \cite{Potter:1999gg} and adapted to diffractive dijet production in \cite{Klasen:2004ct}. The subtraction term, which is absorbed into the PDFs of the virtual photon $f_{a/\gamma}(x_\gamma,M_{\gamma})$, can be found in \cite{Klasen:2005dq}. The main term is proportional to $\ln(M_{\gamma}^2/Q^2)$ times the splitting function \begin{equation} P_{q_i \leftarrow \gamma}(z) = 2 N_c Q_i^2 \frac{z^2+(1-z)^2}{2}, \label{eq:2} \end{equation} where $z=p_1p_2/p_0q \in [x;1]$ and $Q_i$ is the fractional charge of the quark $q_i$. $p_1$ and $p_2$ are the momenta of the two outgoing jets, and $p_0$ and $q$ are the momenta of the ingoing parton and virtual photon, respectively. Since $Q^2=-q^2 \ll M_{\gamma}^2$, the subtraction term is large and is therefore resummed by the DGLAP evolution equations for the virtual photon PDFs.
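The splitting function of Eq.\ (\ref{eq:2}) is simple to transcribe; in the sketch below the quark charge $Q_i=2/3$ is an illustrative choice (an up-type quark):

```python
N_C = 3  # number of colours

def P_q_gamma(z, Q_i):
    # Photon-to-quark splitting function of Eq. (2):
    # P_{q_i <- gamma}(z) = 2 N_c Q_i^2 (z^2 + (1-z)^2) / 2.
    return 2 * N_C * Q_i**2 * (z**2 + (1 - z)**2) / 2

# Symmetric under z <-> 1-z and minimal at z = 1/2.
p_half = P_q_gamma(0.5, 2/3)
```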
After this subtraction, the finite term $M(Q^2)_{\overline{\rm MS}}$, which remains in the matrix element for the NLO correction to the direct process \cite{Klasen:1997jm}, has the same $M_{\gamma}$-dependence as the subtraction term, {\it i.e.} $\ln M_{\gamma}$ is multiplied with the same factor. As already mentioned, this yields the $M_{\gamma}$-dependence before the evolution is turned on. In the usual non-diffractive dijet photoproduction these two $M_{\gamma}$-dependences cancel, when the NLO correction to the direct part is added to the LO resolved cross section \cite{BKS}. Then it is obvious that the approximate $M_{\gamma}$-independence is destroyed, if the resolved cross section is multiplied by a suppression factor $R$ to account for the factorization breaking in the experimental data. To remedy this deficiency, we propose to multiply the $\ln M_{\gamma}$-dependent term in $M(Q^2)_{\overline{\rm MS}}$ with the same suppression factor as the resolved cross section. This is done in the following way: we split $M(Q^2)_{\overline{\rm MS}}$ into two terms using the scale $p_T^{*}$ in such a way that the term containing the slicing parameter $y_s$, which was used to separate the initial-state singular contribution, remains unsuppressed. In particular, we replace the finite term after the subtraction by \begin{eqnarray} M(Q^2,R)_{\overline{\rm MS}} &=& \left[ -\frac{1}{2N_c} P_{q_i\leftarrow \gamma}(z)\ln\left( \frac{M_{\gamma}^2 z}{p_T^{*2}(1-z)}\right) +\frac{Q_i^2}{2}\right] R \nonumber \\ && \ -\frac{1}{2N_c} P_{q_i\leftarrow\gamma}(z) \ln\left( \frac{p_T^{*2}}{zQ^2+y_s s}\right) ,\label{eq:4} \end{eqnarray} where $R$ is the suppression factor. This expression coincides with the finite term after subtraction (see Ref.\ \cite{Klasen:2005dq}) for $R=1$, as it should, and leaves the second term in Eq.\ (\ref{eq:4}) unsuppressed. 
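The structure of Eq.\ (\ref{eq:4}) can also be checked numerically: for $R=1$ the two logarithms combine into $\ln\bigl(M_{\gamma}^2 z/((1-z)(zQ^2+y_s s))\bigr)$, so the $p_T^{*}$-dependence cancels, while for $R\neq 1$ it does not. The sketch below transcribes Eq.\ (\ref{eq:4}) with hypothetical kinematic values, reusing the splitting function of Eq.\ (\ref{eq:2}).

```python
import math

N_C = 3  # number of colours

def P_q_gamma(z, Q_i):
    # Splitting function of Eq. (2).
    return 2 * N_C * Q_i**2 * (z**2 + (1 - z)**2) / 2

def M_finite(z, Q_i, Q2, Mg2, pT2, y_s, s, R):
    # Finite term of Eq. (4): the ln(M_gamma)-dependent piece carries
    # the suppression factor R, the y_s-dependent piece does not.
    P = P_q_gamma(z, Q_i)
    first  = (-P / (2 * N_C) * math.log(Mg2 * z / (pT2 * (1 - z)))
              + Q_i**2 / 2) * R
    second = -P / (2 * N_C) * math.log(pT2 / (z * Q2 + y_s * s))
    return first + second
```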
In Eq.\ (\ref{eq:4}) we have suppressed in addition to $\ln(M_{\gamma}^2/ p_T^{*2})$ also the $z$-dependent term $\ln (z/(1-z))$, which is specific to the $\overline{\rm MS}$ subtraction scheme as defined in \cite{Klasen:1997jm}. The second term in Eq.\ (\ref{eq:4}) must be left in its original form, {\it i.e.} being unsuppressed, in order to achieve the cancellation of the slicing parameter ($y_s$) dependence of the complete NLO correction in the limit of very small $Q^2$ or equivalently very large $s$. It is clear that the suppression of this part of the NLO correction to the direct cross section will change the full cross section only very little as long as we choose $M_{\gamma} \simeq p_T^{*}$. The first term in Eq.\ (\ref{eq:4}), which has the suppression factor $R$, will be denoted by ${\rm DIR}_{\rm IS}$ in the following. To study the left-over $M_{\gamma}$-dependence of the physical cross section, we have calculated the diffractive dijet cross section with the same kinematic constraints as in the H1 experiment \cite{Schatzel:2004be}. Jets are defined by the CDF cone algorithm with jet radius equal to one and asymmetric cuts for the transverse momenta of the two jets required for infrared stable comparisons with the NLO calculations \cite{Klasen:1995xe}. The original H1 analysis actually used a symmetric cut of 4 GeV on the transverse momenta of both jets \cite{Adloff:2000qi}. The data have, however, been reanalyzed for asymmetric cuts \cite{Schatzel:2004be}. For the NLO resolved virtual photon predictions, we have used the PDFs SaS1D \cite{Schuler:1996fc} and transformed them from the DIS$_{\gamma}$ to the $\overline{\rm MS}$ scheme as in Ref.\ \cite{Klasen:1997jm}. If not stated otherwise, the renormalization and factorization scales at the pomeron and the photon vertex are equal and fixed to $p_T^{*} = p_{T,jet1}^{*}$. We include four flavors, {\it i.e.} $n_f=4$ in the formula for $\alpha_s$ and in the PDFs of the pomeron and the photon. 
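The asymmetric jet cut can be phrased as a simple event-selection predicate; the threshold values below are hypothetical placeholders, not the cuts of the H1 reanalysis:

```python
def passes_dijet_cuts(pT_jets, pT_lead_min=5.0, pT_sublead_min=4.0):
    # Asymmetric transverse-momentum cuts on the two hardest jets,
    # as needed for infrared-stable comparisons with NLO predictions.
    # Thresholds (GeV) are hypothetical illustrations.
    if len(pT_jets) < 2:
        return False
    lead, sublead = sorted(pT_jets, reverse=True)[:2]
    return lead > pT_lead_min and sublead > pT_sublead_min
```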
With these assumptions we have calculated the same cross section as in our previous work \cite{Klasen:2004ct}. First we investigated how the cross section ${\rm d}\sigma/{\rm d} Q^2$ depends on the factorization scheme of the PDFs for the virtual photon, {\it i.e.} ${\rm d}\sigma/{\rm d} Q^2$ is calculated for the choice SaS1D and SaS1M. Here ${\rm d}\sigma/{\rm d} Q^2$ is the full cross section (sum of direct and resolved) integrated over the momentum and rapidity ranges as in the H1 analysis. The results, shown in Fig.\ 2 of Ref.\ \cite{Klasen:2005dq}, demonstrate that the choice of the factorization scheme of the virtual photon PDFs has negligible influence on ${\rm d}\sigma/{\rm d} Q^2$ for all considered $Q^2$. The predictions agree reasonably well with the preliminary H1 data \cite{Schatzel:2004be}. We now turn to the $M_{\gamma}$-dependence of the cross section with a suppression factor for DIR$_{\rm IS}$. To show this dependence for the two suppression mechanisms, (i) suppression of the resolved cross section only and (ii) additional suppression of the DIR$_{\rm IS}$ term as defined in Eq.\ (\ref{eq:4}) in the NLO correction of the direct cross section, we consider ${\rm d}\sigma/{\rm d} Q^2$ for the lowest $Q^2$-bin, $Q^2\in [4,6]$ GeV$^2$. In Fig.\ \ref{fig:3}, this cross section \begin{figure}[ht] \centerline{\epsfxsize=0.8\textwidth\epsfbox{fig3.eps}} \caption{\label{fig:3}Photon factorization scale dependence of resolved and direct contributions to ${\rm d}\sigma/{\rm d} Q^2$ together with their weighted sums for (i) suppression of the resolved cross section and for (ii) additional suppression of DIR$_{\rm IS}$, using SaS1D virtual photon PDFs.} \end{figure} is plotted as a function of $\xi=M_{\gamma}/p_T^{*}$ in the range $\xi\in [0.25;4]$ for the cases (i) (light full curve) and (ii) (full curve). 
We see that the cross section for case (i) has an appreciable $\xi$-dependence in the considered $\xi$ range, of the order of $40\%$, which is caused by the suppression of the resolved contribution only. With the additional suppression of the DIR$_{\rm IS}$ term in the direct NLO correction, the $\xi$-dependence of ${\rm d}\sigma/{\rm d} Q^2$ is reduced to less than approximately $20\%$ when we compare the maximal and the minimal value of ${\rm d}\sigma/ {\rm d} Q^2$ in the considered $\xi $ range. The remaining $\xi $-dependence is caused by the NLO corrections to the suppressed resolved cross section and the evolution of the virtual photon PDFs. How the compensation of the $M_{\gamma}$-dependence between the suppressed resolved contribution and the suppressed direct NLO term works in detail is exhibited by the dotted and dashed-dotted curves in Fig.\ \ref{fig:3}. The suppressed resolved term increases and the suppressed direct NLO term decreases by approximately the same amount with increasing $\xi$. In addition, we also show ${\rm d}\sigma/ {\rm d} Q^2$ in the DIS theory, {\it i.e.} without subtraction of any $\ln Q^2$ terms (dashed line). Of course, this cross section must be independent of $\xi$. This prediction agrees very well with the experimental point, whereas the result for the subtracted and suppressed theory (full curve) lies slightly below. We notice that for $M_{\gamma}=p^{*}_T$ the additional suppression of DIR$_{\rm IS}$ has only a small effect: it increases ${\rm d}\sigma/{\rm d} Q^2$ by only $5\%$ \cite{Klasen:2005cz,Bruni:2005eb}. \section{Conclusion} When comparing experimental data from the H1 and ZEUS Collaborations at HERA for diffractive dijet production in DIS and photoproduction with NLO QCD predictions using diffractive parton densities from H1 and ZEUS, good agreement is found for DIS assuming the H1 diffractive PDFs.
However, the dijet photoproduction data are overestimated by the NLO theory, showing that factorization breaking occurs at this order. While this is expected theoretically for resolved photoproduction, the fact that the data are better described by a global suppression of direct {\em and} resolved contribution by about a factor of two comes as a surprise. We have therefore discussed in some detail the factorization scheme and scale dependence between direct and resolved contributions and proposed a new factorization scheme for diffractive dijet photoproduction. \section*{Acknowledgments} The author thanks the organizers of the Ringberg workshop on {\em New Trends in HERA Physics 2005} for the kind invitation, G.\ Kramer for his continuing collaboration, and the {\em Comit\'e de Financement des Projets de Physique Th\'eorique de l'IN2P3} for financial support.
\section{Introduction} Recent observations of galaxy clustering in both photometric and spectroscopic surveys have found more relative power on large scales, $R_p \sim 20 {\,h^{-1}\,{\rm Mpc}}$ ($h=H_0/100$ km/sec/Mpc), than that expected in the standard cold dark matter (CDM) model of structure formation (e.g., Maddox, etal. 1990, Efstathiou, etal. 1990, Baumgart and Fry 1991, Gramann and Einasto 1991, Hamilton, etal. 1991, Peacock and Nicholson 1991, Saunders, etal. 1991, Loveday, etal. 1992, Fisher, etal. 1992, Park, etal. 1992, Vogeley, etal. 1992, Feldman, etal. 1993). More precisely, the {\it shape} of the observed galaxy power spectrum $P_g(k)$ or of its Fourier transform, the two-point galaxy correlation function $\xi_g(r)$, differs on these scales from the standard CDM model prediction. Recall that in the standard CDM model, the Universe is spatially flat, with a density $\Omega_{cdm} = 1 - \Omega_B \simeq 0.95$ in non-baryonic, weakly interacting particles which have negligible free-streaming length, and the Hubble parameter $h=0.5$. Additionally, one posits that the density perturbations responsible for large-scale structure are adiabatic and Gaussian, with a scale-invariant primordial power spectrum $P(k) =\langle|\delta_k(t_i)|^2\rangle \sim k$, as expected in canonical inflation scenarios. The present spectrum is related to the primordial one through the transfer function, $T(k;\Omega_i,h)$, which encodes the scale-dependence of the linear growth of perturbations, $\langle|\delta_k(t_0)|^2\rangle= T^2(k)\langle|\delta_k(t_i)|^2\rangle$. Finally, the galaxy power spectrum is related to the density spectrum by a bias factor $b_g$, \begin{equation} P_{g}(k)=b^2_g T^2(k)\langle|\delta_k(t_i)|^2\rangle ~~~. \label{pbt} \end{equation} A number of alternatives have been suggested to remedy the shape of the CDM galaxy spectrum, each of which involves modifications of one or more of the standard ingredients of the CDM model in {equation~}(\ref{pbt}).
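{equation~}(\ref{pbt}) can be made concrete with a standard fitting formula for the transfer function. The sketch below uses the BBKS fit (Bardeen, etal. 1986) with shape parameter $\Gamma=\Omega h$; the overall normalization is arbitrary, and the parameter values are illustrative only.

```python
import math

def T_bbks(k, Gamma):
    # BBKS fitting formula for the CDM transfer function;
    # k in h/Mpc, Gamma = Omega*h the shape parameter.
    q = k / Gamma
    if q == 0.0:
        return 1.0
    return (math.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)

def P_galaxy(k, b_g, Gamma, A=1.0):
    # Equation (pbt): scale-invariant primordial spectrum A*k,
    # processed by T^2(k) and a constant linear bias b_g.
    return b_g**2 * T_bbks(k, Gamma)**2 * A * k

# Gamma = 0.5 (standard CDM) retains relatively more small-scale
# power than Gamma = 0.2 (low-density CDM variants).
```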
These include models with a lower density of cold dark matter, $\Omega_{{\rm cdm}}h \simeq 0.2$, plus a cosmological constant to retain spatial flatness (Efstathiou, Sutherland, and Maddox 1990), and models with a mixture of cold and hot dark matter, $\Omega_{{\rm cdm}} \simeq 0.7$, $\Omega_{{\rm hdm}} \simeq 0.3$ (e.g., Schaefer, etal. 1989, van Dalen and Schaefer 1992, Taylor and Rowan-Robinson 1992, Davis, etal. 1992, Pogosyan and Starobinsky 1992, Klypin, etal. 1992). In these two cases, the transfer function $T(k)$ is flattened on scales $k^{-1}\sim R_p$ compared to standard CDM. CDM models with `tilted' non-scale-invariant, power-law primordial spectra, $\langle|\delta_k(t_i)|^2\rangle \sim k^n$ with $n < 1$, which arise naturally in several models of inflation, have also been recently explored (Adams, etal. 1993, Cen, etal. 1992, Gelb, etal. 1993, Liddle and Lyth 1992, Liddle, etal. 1992, Vittorio, etal. 1988). In addition, there is a growing literature on models with non-Gaussian initial fluctuations; in some cases, initial skewness and/or kurtosis can lead to enhanced structure on large scales (e.g., Moscardini, etal. 1993 and references therein). While such models can display interesting behavior of the higher order moments, in this paper we will focus on initially Gaussian fluctuations. In all these variations on the CDM theme, one important assumption is left unchanged: that the observable galaxy distribution is related through a simple bias mechanism to the underlying matter distribution predicted by theory (e.g., Bardeen, etal. 1986). In essence, following Kaiser (1984a,b) and Bardeen (1984), one assumes that galaxies form from peaks above some global threshold in the smoothed linear density field. 
In the limit of high threshold and small variance, this model is well approximated by the commonly employed linear bias scheme, in which the galaxy and mass density fields, $\delta_g(\hbox{\twelveBF x})= (n_g(\hbox{\twelveBF x})-\bar{n_g})/\bar{n_g}$ and $\delta(\hbox{\twelveBF x})=(\rho(\hbox{\twelveBF x})-\bar{\rho})/\bar{\rho}$, are linearly related through a constant bias factor, \begin{equation} \delta_g(\hbox{\twelveBF x}) = b_g \delta(\hbox{\twelveBF x}) ~~~. \label{bg} \end{equation} This relation, implicitly assumed in {equation~}(\ref{pbt}), embodies the standard model for biased galaxy formation. Early numerical evidence for biasing came from the CDM simulations of White, etal. (1987), which showed that dark matter halos are more strongly clustered than, and thus `naturally' biased with respect to, the mass. However, since galaxy formation is a complex, non-linear process involving both gravitational and non-gravitational interactions, the relation between the mass and the galaxy distributions may be more complicated than in the peak bias model. Even purely gravitational high-resolution N-body simulations suggest that virialized halos are not always well identified with peaks in the linear density field (Katz, Quinn, and Gelb 1992). It is therefore of interest to ask whether a more or less well-motivated modification of the standard bias scheme can generate the excess large-scale power within the context of the standard CDM model. This idea has been recently studied by Babul and White (1991) and by Bower, etal. (1993) (for precursors, see Rees 1985, Silk 1985 and Dekel and Rees 1987). The common thread in these ideas is that the bias mechanism can be modulated by environment-dependent effects. For example, in their cooperative galaxy formation scenario, Bower, etal. (1993) (hereafter BCFW) suggest that the threshold above which perturbations actually form bright galaxies may be lower in large-scale, high-density regions than elsewhere. 
Or perhaps baryons may be inhibited from cooling in regions photoionized by an early generation of quasars (Babul and White 1991). The net result of these feedback mechanisms is that the transformation from the density field $\delta(\hbox{\twelveBF x})$ to the galaxy field $\delta_g(\hbox{\twelveBF x})$ becomes {\it non-local} (by contrast with {equation~}(\ref{bg})), and the effective bias factor becomes scale-dependent. If the bias factor increases with scale, the galaxy spectrum will have more power at large scales, as desired. This modification of the standard CDM scenario is fundamentally different from those mentioned above: with scale-dependent bias, the extra large-scale power relative to standard CDM is only apparent, in the sense that it is only a property of the galaxy field, not the underlying mass density field; by contrast, in the other CDM variants (non-zero $\Lambda$, tilt, or mixed dark matter), there is genuine extra power in the density field. In this paper, we consider how the higher order irreducible moments of the galaxy distribution can be used as a test of models for large-scale structure. We consider the standard CDM model and its variants with extra large-scale power (in particular, $\Omega h =0.2$ CDM), as well as a generalized version of the non-local, scale-dependent bias scheme embodied in the cooperative galaxy formation (hereafter, CGF) model of BCFW in the context of otherwise-standard CDM. Using the results of second-order perturbation theory (Fry 1984), we compare in detail the predictions of these models for the three-point function $\xi_3$ with data from the Center for Astrophysics (CfA, Huchra, etal. 1983), Southern Sky (SSRS, Da Costa, etal. 1991), and Perseus-Pisces (Haynes and Giovanelli 1988) redshift surveys in the mildly non-linear regime ($\xi_2 <1$). 
Since $\xi_3$ is of second-order in the density perturbation amplitude for initially Gaussian fluctuations, for self-consistency we must generalize the models to include the possibility of non-linear (as well as non-local) bias and extend them from Gaussian to hierarchical matter fields. [We will use the well known result that, at least in the mildly non-linear regime, the matter field evolved gravitationally from Gaussian initial conditions leads to hierarchical statistics of the form $\mathop{\bigl\langle}\delta^J\mathop{\bigr\rangle} \propto \mathop{\bigl\langle}\delta^2\mathop{\bigr\rangle}^{J-1}$ (cf. Fry 1984, Goroff {\it et al.\ } 1986, Bernardeau 1992).] The allowance for non-linear bias introduces an additional dimensionless parameter into the model. Even with this additional degree of freedom, we find that the CGF model tends to require rather large values of the bias parameter in order to match the 3-point function data, because scale-dependent bias modifies the correlation hierarchy, leading to a dramatic decrease of the hierarchical amplitudes $Q_J$ at large scales, $r \gtilde R_p$. In the context of standard CDM, such a high bias is in conflict with the COBE DMR observations of microwave anisotropy on large scales. We show that observations of the 3-point function in Fourier space, $Q(k)$, on the largest scales accessible to current redshift surveys should provide a definitive test of the CGF model and of more general models with scale-dependent bias. Our basic conclusion is that the scale-dependent bias solution to the problem of extra large-scale power affects the 3-point functions very differently from models with genuine extra power (such as CDM with $\Omega h = 0.2$). Thus, the higher-order correlations provide an important test to distinguish between different solutions of the extra power problem. The paper is organized as follows. 
In section II, since it may be less familiar to the reader, we briefly review and generalize the CGF model and recapitulate the results of BCFW, demonstrating the enhancement of the two-point function on large scales required to fit the APM angular correlation function data. In section III, we review the results on the 3-point and higher order correlations in perturbation theory, focusing on the evolution of an initial Gaussian density field into a hierarchical field. In section IV, we study the higher order moments in the CGF model. Self-consistency demands that we further extend the model to include non-linear bias. In section V, we compare the standard CDM, low-density CDM, and CGF-modified CDM predictions to the data on the 3-point function from the CfA, SSRS, and Perseus-Pisces redshift surveys and we conclude in section VI. \section{Cooperative Galaxy Formation and Scale-Dependent Bias} The cooperative galaxy formation (CGF) model of BCFW is a simple phenomenological prescription for obtaining a scale-dependent bias. It starts with the standard assumptions of the CDM model, but the biasing mechanism is modified from the high peak threshold scenario. In the standard peak bias model (Kaiser 1984a, Bardeen, etal. 1986), the sites of galaxy formation are identified with peaks of the smoothed linear density field. That is, one convolves the initial density field with a filter of characteristic scale $R_g \sim 1 {\,h^{-1}\,{\rm Mpc}}$, and then identifies galaxies with peaks of the smoothed field above some threshold $\nu \sigma$, i.e., with density maxima satisfying $\delta(\hbox{\twelveBF x}_{pk}) > \nu \sigma$, where $\sigma^2_{R_g} = \langle (\rho - \bar{\rho})^2 \rangle/\bar{\rho}^2$ is the variance of the smoothed field, and $\nu$ sets the threshold height. (Hereafter, we implicitly assume the field $\delta(\hbox{\twelveBF x})$ is smoothed on the scale $R_g$.) 
For example, for an infinitely sharp threshold, the galaxy field is $\delta_g(\hbox{\twelveBF x}_{pk}) = \theta(\delta(\hbox{\twelveBF x}_{pk})-\nu \sigma)$. The combination of the threshold peak height $\nu$ and the spatial smoothing scale $R_g$ is chosen so that the density of peaks reproduces the observed abundance of luminous galaxies; moreover, these parameters are taken to be global, spatially invariant quantities. In the limit of high threshold ($\nu \gg 1$) and small variance, the two-point correlation function of the peaks is enhanced over that of the mass by an approximately constant factor (Kaiser 1984a), \begin{equation} \xi_{pk}(r;\nu) \simeq \left({\nu^2\over \sigma^2}\right) \xi(r) ~~~, \label{pknu} \end{equation} where $\xi(r) = \langle \delta(\hbox{\twelveBF x}) \delta(\hbox{\twelveBF x}+\hbox{\twelveBF r}) \rangle$. Since $\xi(r)$ is quadratic in the density field, this is equivalent to the linear bias model of {equation~}(\ref{bg}), with the identification of the bias factor as $b_g = (\nu/\sigma)$. Following Kaiser (1984a) and BCFW, we will apply this model to regions above the threshold, $\delta(\hbox{\twelveBF x}) > \nu \sigma$, rather than to maxima; this simplifies the model while retaining its important features. BCFW extend the standard bias model by replacing the universal threshold $\nu$ with a threshold that depends on the mean mass density in a surrounding `domain of influence' of characteristic size $R_s > R_g$. The motivation is to model the possibility that peaks form galaxies more easily (or perhaps form brighter galaxies which are included in a magnitude-limited catalog) if there are other peaks nearby--thus the name cooperative galaxy formation. 
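As an illustrative numerical aside (ours, not part of the BCFW analysis), the enhancement in {equation~}(\ref{pknu}) can be checked directly. To first order in $\xi/\sigma^2$, the two-point function of the thresholded field is boosted by the exact factor $[\phi(\nu)/P(>\nu)]^2$ (taking $\sigma = 1$), which approaches the Kaiser value $\nu^2$ from above as the threshold is raised. A minimal sketch, assuming a unit-variance Gaussian field; the function names are ours:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tail(nu):
    """P(t > nu) for a unit-variance Gaussian field."""
    return 0.5 * math.erfc(nu / math.sqrt(2.0))

def threshold_boost(nu):
    """First-order enhancement of the two-point function of the
    thresholded field: xi_thr(r) ~= threshold_boost(nu) * xi(r)."""
    return (phi(nu) / tail(nu)) ** 2

for nu in (2.0, 3.0, 4.0, 5.0):
    print(nu, threshold_boost(nu), nu ** 2)
```

For $\nu=2$ the boost is $\simeq 5.6$ rather than $4$, while by $\nu=5$ it is within $8\%$ of $\nu^2$, illustrating how the high-threshold limit is approached.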
Specifically, they assume that galaxies form from regions satisfying \begin{equation} \delta(\hbox{\twelveBF x}) > \nu \sigma - \kappa \bar{\delta}(\hbox{\twelveBF x};R_s) ~~, \label{mod} \end{equation} where $\bar{\delta}(\hbox{\twelveBF x};R_s)$ is the density field smoothed on the scale $R_s$, and $\kappa$ is the modulation coefficient of the threshold. If $\kappa > 0$, the threshold for galaxy formation is lower in ``protosupercluster'' regions than in ``protovoids''. The parameters $R_s$ and $\kappa$ parametrize the scale and strength of cooperative effects; they are also constrained by the observed galaxy abundance. The model of {equation~}(\ref{mod}) is equivalent to applying the standard threshold bias model to the new density field defined by \begin{equation} \delta'(\hbox{\twelveBF x}) \equiv\delta(\hbox{\twelveBF x})+\kappa\bar{\delta}(\hbox{\twelveBF x};R_s) ~~~, \label{prime} \end{equation} that is, to imposing the condition $\delta' > \nu \sigma$. Note that $\delta'$ is a Gaussian random field if the underlying density field $\delta$ is Gaussian. Here we consider a generalization of the CGF model: instead of applying a sharp threshold clipping to $\delta'(\hbox{\twelveBF x})$, we assume that the galaxy field is an arbitrary continuous function of the field $\delta'$, \begin{equation} {\delta_g}(\hbox{\twelveBF x})=f(\delta'(\hbox{\twelveBF x})) = f\left[\delta(\hbox{\twelveBF x})+\kappa\bar{\delta}(\hbox{\twelveBF x};R_s)\right] ~~. \label{eq:bias} \end{equation} For example, in the limit of high threshold, for the standard bias model the function $f$ is approximately an exponential, $f(x) = {\rm exp}(\nu x/\sigma)$ (Kaiser 1984b, Politzer and Wise 1984). We assume that $f$ is expandable in a Taylor series in its argument, \begin{equation} \delta_g = f(\delta') =\sum_{k=1}^\infty {b_k \over {k!}} {\delta'}^k ~~~.
\label{eq:taylor} \end{equation} BCFW compute the two-point correlation function for the CGF model on large scales, using the CDM density spectrum derived from linear perturbation theory. In this regime, our generalized CGF model reduces to the linear bias model applied to the field $\delta'$, \begin{equation} {\delta_g}(\hbox{\twelveBF x})=b_g \delta'(\hbox{\twelveBF x}) = b_g\left[\delta(\hbox{\twelveBF x})+\kappa\bar{\delta}(\hbox{\twelveBF x};R_s)\right] \label{lincgf} \end{equation} where we have identified $b_g = b_1$. That is, by working only to linear order in perturbation theory, one should self-consistently include only the first (linear) term in the series of {equation~}(\ref{eq:taylor}). Conversely, when we consider second order perturbations below, we can and should include the possibility of quadratic ($k=2$) bias. Comparing {equation~}(\ref{lincgf}) with {equation~}(\ref{bg}), it is clear that cooperative effects boost the galaxy power spectrum on large scales relative to the standard global bias model. Taking a Gaussian filter for the smoothed density field, \begin{equation} \bar{\delta}(\hbox{\twelveBF x};R_s) = \left(2\pi R^2_s \right)^{-3/2}\int d^3r \delta(\hbox{\twelveBF r}) {\rm exp}\left(-{|\hbox{\twelveBF x}-\hbox{\twelveBF r}|^2\over 2 R^2_s}\right) ~~, \label{dsm} \end{equation} the Fourier transforms of the density fields satisfy \begin{equation} \delta'(\hbox{\twelveBF k})=\delta(\hbox{\twelveBF k})~\left[1+\kappa {\cal G}(\hbox{\twelveBF k})\right] ~~~, \label{ftdk} \end{equation} where ${\cal G}$ is the Fourier transform of the window filter in $\bar{\delta}(\hbox{\twelveBF x};R_s)$, \begin{equation} {\cal G}(\hbox{\twelveBF k})= {\cal G}(k)= e^{-(kR_s)^2/2} ~~, \label{Gkdef} \end{equation} with $k=|\hbox{\twelveBF k}|$. 
The galaxy power spectrum, $P_g(k) = \langle |\delta_g(\hbox{\twelveBF k})|^2 \rangle$, is thus related to the density power spectrum, $P(k)= \langle |\delta(\hbox{\twelveBF k})|^2\rangle$, by \begin{equation} P_g(k) = b^2_g P'(k)= b^2_g \left[1 +\kappa{\cal G}(k)\right]^2 P(k) \equiv b^2_{\rm eff}(k) ~P(k) ~~~. \label{eq:P'} \end{equation} This expression makes manifest how cooperative effects result in an effective scale-dependent bias, $b_{\rm eff}(k) = b_g[1+\kappa{\cal G}(k)]$. On small lengthscales, $k^{-1} \ll R_s$, {equation~}(\ref{eq:P'}) implies the usual bias factor, $b_{\rm eff}(k \rightarrow \infty) \simeq b_g$, while on large scales, $k^{-1} \gg R_s$, the effective bias factor is increased to $b_{\rm eff}(k \rightarrow 0) \simeq b_g(1+\kappa)$. In the parameter range studied by BCFW, the choice $\kappa = 2.29$, $R_s = 20 {\,h^{-1}\,{\rm Mpc}}$ appears to give the best fit to the observed extra large-scale power for CDM when compared to the APM angular correlation function, and we shall focus mainly on this case. We see that this choice boosts the galaxy power spectrum on scales $k \ltilde 0.05 h$ Mpc$^{-1}$ by over a factor of ten. To see what these effects look like graphically for the CDM model, we consider the linear CDM density power spectrum of Davis, etal. (1985), \begin{equation} P(k) = A \sigma^2_8 k \left(1+{1.7k\over \Omega h}+{9k^{3/2}\over (\Omega h)^{3/2}}+{k^2\over (\Omega h)^2}\right)^{-2} ~~~, \label{cdmpk} \end{equation} where the wavenumber $k$ is in units of $h$ Mpc$^{-1}$. 
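To make the size of this boost concrete, the sketch below (an illustration of ours, adopting BCFW's preferred parameters $\kappa=2.29$, $R_s=20{\,h^{-1}\,{\rm Mpc}}$) evaluates $b^2_{\rm eff}(k)/b^2_g=[1+\kappa{\cal G}(k)]^2$ from {equation~}(\ref{eq:P'}):

```python
import math

KAPPA, R_S = 2.29, 20.0   # BCFW's preferred CGF parameters (R_s in h^-1 Mpc)

def G(k):
    """Fourier transform of the Gaussian smoothing window, eq. (Gkdef)."""
    return math.exp(-0.5 * (k * R_S) ** 2)

def power_boost(k):
    """Scale-dependent boost b_eff^2(k)/b_g^2 = [1 + kappa*G(k)]^2."""
    return (1.0 + KAPPA * G(k)) ** 2

for k in (0.005, 0.01, 0.05, 0.2):   # wavenumbers in h Mpc^-1
    print(k, power_boost(k))
```

The boost approaches $(1+\kappa)^2 \simeq 10.8$ as $k \rightarrow 0$ and has decayed to essentially unity by $k = 0.2 h$ Mpc$^{-1}$, i.e., well inside $R_s$.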
Here the normalization is set as usual in terms of the variance of the linear mass fluctuation within spheres of radius 8${\,h^{-1}\,{\rm Mpc}}$, $\sigma_8 \equiv \langle (\delta M/M)^2 \rangle^{1/2}_{R= 8 h^{-1} {\rm Mpc}}$, where \begin{equation} \sigma_R^2 = {1\over 2\pi^2}{\intop_0^\infty} dk k^2 P(k)W^2(kR) ~~, \label{sigr} \end{equation} and the top-hat window function \begin{equation} W(kR) = {3\over{(kR)^3}}(\sin kR - kR \cos kR) \label{wkr} \end{equation} filters out the contribution from small scales. For standard CDM with $\Omega h = 0.5$, this gives $A=2.76\times 10^5({\,h^{-1}\,{\rm Mpc}})^3$. Substituting the CDM power spectrum with $\Omega h = 0.5$ into {equation~}(\ref{eq:P'}), we find the galaxy two-point correlation function for the CGF model \begin{equation} \xi_g(r) = {1\over 2\pi^2}\int dk~ k^2 {{\rm sin}kr\over kr}P_g(k) ~~, \label{xicgf} \end{equation} shown in Fig. 1 (the curve labelled CGF, with $\kappa =2.29$, $R_s = 20 {\,h^{-1}\,{\rm Mpc}}$). Note that we actually plot $\xi_g(r)/(b_g \sigma_8)^2$, where $b_g$ is the constant factor in {equation~}(\ref{lincgf}). Redshift surveys of optically selected galaxies (in particular the CfA and Stromlo-APM surveys) indicate that the variance in galaxy counts on 8 ${\,h^{-1}\,{\rm Mpc}}$ scale is of order unity. Thus, in a linear, scale-independent bias model, the bias factor for these galaxies would be expected to be $b_{opt} \simeq 1/\sigma_8$; for other galaxy populations, however, $b_{gal} \sigma_8$ may differ from unity. For comparison, in Fig. 1 we also show the two-point function for standard CDM ($\Omega h =0.5$, $\kappa =0$) and for a low-matter-density CDM model ($\Omega h = 0.2$). Both the CGF model and the low-density CDM model have sufficient relative large-scale power to approximately reproduce the observed galaxy angular correlation function $w(\theta)$ inferred from the APM survey (BCFW, Maddox, etal. 1990, Efstathiou, Sutherland, and Maddox 1990). 
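As a quick cross-check of the normalization quoted above (an illustrative computation of ours, using simple midpoint quadrature rather than a production integrator), one can verify that {equation~}(\ref{sigr}) with $A=2.76\times 10^5({\,h^{-1}\,{\rm Mpc}})^3$ indeed returns $\sigma_8 \simeq 1$:

```python
import math

OMEGA_H = 0.5        # standard CDM
A = 2.76e5           # normalization in (h^-1 Mpc)^3, for sigma_8 = 1

def P(k):
    """Linear CDM spectrum of eq. (cdmpk); k in h Mpc^-1."""
    g = OMEGA_H
    return A * k / (1.0 + 1.7 * k / g + 9.0 * k ** 1.5 / g ** 1.5
                    + (k / g) ** 2) ** 2

def W(x):
    """Top-hat window of eq. (wkr)."""
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def sigma_R(R, kmax=20.0, n=20000):
    """sigma_R from eq. (sigr), via midpoint quadrature (illustrative)."""
    dk = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += k * k * P(k) * W(k * R) ** 2 * dk
    return math.sqrt(total / (2.0 * math.pi ** 2))

print(sigma_R(8.0))   # close to 1 by construction
```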
This level of extra power is also broadly consistent with that inferred from the power spectrum of IRAS galaxies (Feldman, etal. 1993, Fisher, etal. 1992) and the redshift-space two-point function $\xi(s)$ inferred from the Stromlo-APM survey (Loveday, etal. 1992). The CGF curve in Fig. 1 should be compared to that in Fig. 2 of BCFW. Note that the linear bias approximation used here ({equation~}(\ref{lincgf})) differs from the non-linear threshold formula of BCFW (Cf. their eqn.(10)), but that our final result for $\xi(r)$ is very similar to theirs. Thus, cooperative effects can mimic extra large-scale power in the galaxy two-point function, while the other remedies for CDM, such as low-density, mixed dark matter, or tilted ($n < 1$) models, have genuine extra large-scale power in the spectrum. How can we discriminate between these choices for extra large-scale power, that is, between real power and the illusion of power? Below, we argue that the three-point function can provide a distinguishing test, at least for models with Gaussian initial fluctuations. The reason is that the galaxy three-point function induced by gravitational evolution depends in large measure on the two-point function of the {\it mass}. Before turning to higher order correlations, we remark that the treatment given here and below applies more generally than to the CGF model, and in fact to any model with scale-dependent bias. That is, the chain of reasoning above is invertible: if the galaxy and density power spectra are related by a scale-dependent bias, $P_g(k) = b^2(k) P(k)$, we can always think of the galaxy field $\delta_g(\hbox{\twelveBF x})$ as arising from some non-local transformation of the density field $\delta(\hbox{\twelveBF x})$. To see this, let $b^2(k)=b^2_g f^2(k)$, where $b_g$ is a constant and we assume that $f^2(k)$ has a limit, $f^2(k\rightarrow \infty)=1$. 
Then consider the field $\delta'(\hbox{\twelveBF k})=\delta_g(\hbox{\twelveBF k})/b_g=f(k)\delta(\hbox{\twelveBF k})$. We can write $f(k)=1+ \kappa G(k)$, where lim $G(k \rightarrow \infty) = 0$, and we can choose $\kappa$ such that lim $G(k \rightarrow 0) = 1$, so that $\delta'(\hbox{\twelveBF k}) = \delta(\hbox{\twelveBF k})[1+ \kappa G(k)]$. Comparing with {equation~}(\ref{ftdk}), we see that this expression, where $G(k)$ is interpreted as the Fourier transform of some window function (which in general will not be a Gaussian), is all that we need for the results discussed here and below to go through. Provided the function $b(k)$ is not too pathological, this Fourier transform should exist. \section{3-point correlation function in perturbation theory} We want to consider how scale-dependent bias, as embodied for example in the CGF model, affects the higher order correlation functions. The motivation for this study is that the galaxy three-point function is observed to scale in a particular way with the two-point function, and both perturbation theory and N-body simulations show that this scaling can arise via non-linear gravitational evolution from Gaussian initial fluctuations. Since scale-dependent bias introduces a different scale behavior into the problem, we would expect it to be manifest as a change in the scaling behavior of the higher order correlations. We will work in the context of second-order perturbation theory (Fry 1984), the results of which we review here before discussing how they are modified by scale-dependent bias. The perturbative approach should be valid in the mildly non-linear regime, $\delta \ltilde 1$. In the range where they overlap, the second-order perturbation theory results below for $S_3$ in standard CDM appear to be quite consistent with the N-body simulations of Bouchet and Hernquist (1992). 
Defining the Fourier transform of the density field, \begin{equation} \delta(\hbox{\twelveBF k}) = {1\over V}\int d^3 x \delta(\hbox{\twelveBF x}) e^{i {\bf k} \cdot {\bf x}} ~~, \end{equation} we consider the two- and three-point functions in $k$-space, $\mathop{\bigl\langle} \delta(\hbox{\twelveBF k}_1)\delta(\hbox{\twelveBF k}_2) \mathop{\bigr\rangle}$ and $\mathop{\bigl\langle} \delta(\hbox{\twelveBF k}_1)\delta(\hbox{\twelveBF k}_2) \delta(\hbox{\twelveBF k}_3) \mathop{\bigr\rangle}$, which are the Fourier transforms of the spatial two- and three-point correlation functions $\xi_2(\hbox{\twelveBF x}_1,\hbox{\twelveBF x}_2)$ and $\xi_3(\hbox{\twelveBF x}_1,\hbox{\twelveBF x}_2,\hbox{\twelveBF x}_3)$. By homogeneity and isotropy, the $\hbox{\twelveBF k}$-space moments are non-zero only for $\sum \hbox{\twelveBF k}_i= 0$, \begin{equation} \mathop{\bigl\langle} \delta(\hbox{\twelveBF k}_1)\delta(\hbox{\twelveBF k}_2) \mathop{\bigr\rangle} = \delta_{{\bf k}_1+{\bf k}_2,0}P(k_1)~~, {}~~~\mathop{\bigl\langle} \delta(\hbox{\twelveBF k}_1)\delta(\hbox{\twelveBF k}_2) \delta(\hbox{\twelveBF k}_3) \mathop{\bigr\rangle} = \delta_{{\bf k}_1+ {\bf k}_2+{\bf k}_3,0}B(k_1,k_2,k_3) ~~. \end{equation} This defines the power spectrum $P(k)=\mathop{\bigl\langle} |\delta(k)|^2 \mathop{\bigr\rangle}$ and the bispectrum $B_{123}=B(k_1,k_2,k_3)$. Early observations of clustering on small scales (Groth and Peebles 1977) suggested that the galaxy two- and three-point functions obey a scaling hierarchy, \begin{equation} \xi_3(\hbox{\twelveBF x}_1,\hbox{\twelveBF x}_2,\hbox{\twelveBF x}_3)= Q~\left[ \xi_2(\hbox{\twelveBF x}_1,\hbox{\twelveBF x}_2)\xi_2(\hbox{\twelveBF x}_2,\hbox{\twelveBF x}_3)+ {}~(1 \leftrightarrow 2) + ~(2 \leftrightarrow 3) \right] \label{def:Q} \end{equation} with $Q =$ constant $\sim 1$, roughly independent of the size and shape of the triangle formed by the points $\hbox{\twelveBF x}_1,\hbox{\twelveBF x}_2,\hbox{\twelveBF x}_3$. 
If the scaling of {equation~}(\ref{def:Q}) holds exactly, then the hierarchical 3-point amplitude $Q$ is also related to the bispectrum by the $\hbox{\twelveBF k}$-space version of {equation~}(\ref{def:Q}) (Fry and Seldner 1982), \begin{equation} Q \equiv { B_{123} \over{ P_1 P_2 + P_1 P_3 + P_2 P_3}} ~~, \label{def:qk} \end{equation} with $P_i \equiv P(k_i)$. We will consider {equation~}(\ref{def:qk}) as the definition of the amplitude $Q$, even if it is not constant. In the strongly non-linear regime $\delta \gg 1$, N-body simulations of CDM and power-law spectrum models do seem to display the approximate shape- and size-independence of {equation~}(\ref{def:Q}) (Fry, Melott, and Shandarin 1993). However, in second-order perturbation theory in the mildly non-linear regime, while $Q$ as defined in {equation~}(\ref{def:qk}) obeys the scaling with size, it does depend on the shape of the configuration in $\hbox{\twelveBF k}$-space. To calculate the three-point function in the weakly non-linear regime, one expands the perturbation equations in powers of $\delta$, $\delta(\hbox{\twelveBF x},t) = \delta^{(1)}(\hbox{\twelveBF x},t) + \delta^{(2)}(\hbox{\twelveBF x},t)+...$, where $\delta^{(1)}$ is the linear solution, and $\delta^{(2)} = {\cal O}\bigl((\delta^{(1)})^2\bigr)$ is the second-order solution, obtained by using the linear solution in the source terms.
For Gaussian initial fluctuations, the three-point function vanishes to linear order, $\mathop{\bigl\langle} \delta^{(1)}(\hbox{\twelveBF x}_1) \delta^{(1)}(\hbox{\twelveBF x}_2) \delta^{(1)}(\hbox{\twelveBF x}_3)\mathop{\bigr\rangle} = 0$, and the lowest order contribution to the bispectrum is $B_{123}=\mathop{\bigl\langle} \delta^{(1)}(\hbox{\twelveBF k}_1) \delta^{(1)}(\hbox{\twelveBF k}_2) \delta^{(2)}(\hbox{\twelveBF k}_3)\mathop{\bigr\rangle} + (1 \leftrightarrow 3) + (2 \leftrightarrow 3)$, with the result \begin{equation} B_{123}=\left[ {10\over{7}}~+ \left({{\hbox{\twelveBF k}_1 \cdot \hbox{\twelveBF k}_2} \over{k_1 k_2}}\right) \left({k_1 \over{ k_2}}+{k_2 \over{ k_1}}\right)+ {4\over{7}}\left({{\hbox{\twelveBF k}_1 \cdot \hbox{\twelveBF k}_2} \over{k_1 k_2}}\right)^2 \right] P_1 P_2+ {}~(1 \leftrightarrow 3) + ~(2 \leftrightarrow 3) \label{eq:Qm} \end{equation} (Fry 1984). Strictly speaking, this result holds for initially Gaussian fluctuations in a matter-dominated universe with $\Omega=1$, but the work of Juszkiewicz and Bouchet (1991) shows that the dependence of the three-point function on $\Omega$ is extremely slight. A particular case of importance is that of equilateral triangle configurations in $\hbox{\twelveBF k}$-space, $k_1=k_2=k_3$, for which $Q(k)\equiv Q_\Delta = 4/7$, independent of $P(k)$. The independence of this result from the power spectrum makes it a useful quantity for distinguishing gravitational from non-gravitational (e.g., bias) effects. (In general, for other configurations or averages over configurations, there will be a small dependence on $P(k)$.) In section IV, we will see how this result is modified by constant and scale-dependent bias, and compare these predictions with observations.
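The equilateral value follows directly from {equation~}(\ref{eq:Qm}): for a closed triangle with $k_1=k_2=k_3$, each pair of wavevectors has $\cos\theta = -1/2$, so each bracket reduces to $10/7 - 1 + 1/7 = 4/7$. A short sketch of ours makes this explicit for arbitrary closed triangles (the routine name is ours):

```python
def Q_tree(k1, k2, k3, P):
    """Hierarchical amplitude Q (eq. def:qk) from the second-order
    bispectrum (eq. eq:Qm), for a closed triangle k1 + k2 + k3 = 0."""
    def mu(a, b, c):
        # cosine of the angle between the wavevectors of magnitudes a, b
        return (c * c - a * a - b * b) / (2.0 * a * b)
    def term(a, b, c):
        m = mu(a, b, c)
        return (10.0 / 7.0 + m * (a / b + b / a)
                + (4.0 / 7.0) * m * m) * P(a) * P(b)
    B = term(k1, k2, k3) + term(k1, k3, k2) + term(k2, k3, k1)
    return B / (P(k1) * P(k2) + P(k1) * P(k3) + P(k2) * P(k3))

spectrum = lambda k: k ** -1.5            # any spectrum; equilateral Q is 4/7
print(Q_tree(0.1, 0.1, 0.1, spectrum))    # -> 4/7 ~= 0.5714
```

Evaluated on non-equilateral configurations, the same routine exhibits the shape dependence of $Q$ in the mildly non-linear regime.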
Another useful and increasingly popular characterization of the three-point amplitude, which does depend on $P(k)$, is the hierarchical averaged amplitude $ S_3$, \begin{equation} S_3(V) ={{\overline{\xi}_3(V)}\over{\overline{\xi}_2^2(V)}}= {\mathop{\bigl\langle} \delta^3(\hbox{\twelveBF x};V)\mathop{\bigr\rangle} \over \mathop{\bigl\langle} \delta^2(\hbox{\twelveBF x};V)\mathop{\bigr\rangle}^2} ~~. \label{S} \end{equation} Here $\overline{\xi}_2$ and $\overline{\xi}_3$ are the 2-point and 3-point density correlation functions averaged over a window function $W(\hbox{\twelveBF r})$ of characteristic volume $V$: \begin{eqnarray} \overline{\xi}_2(V) &=& {1\over{V^2}}\int \int d^3r_1d^3r_2~ \xi_2(|r_1-r_2|) W(\hbox{\twelveBF r}_1) W(\hbox{\twelveBF r}_2) \nonumber \\ \overline{\xi}_3(V) &=& {1\over{V^3}} \int \int \int d^3r_1 d^3r_2 d^3r_3~ \xi_3(r_1,r_2,r_3) W(\hbox{\twelveBF r}_1) W(\hbox{\twelveBF r}_2) W(\hbox{\twelveBF r}_3) \label{xibarv} \end{eqnarray} In comparing with model predictions, it is useful to think of $S_3$ as the ratio of moments of the density field $\delta(\hbox{\twelveBF x};V)$ smoothed over the volume $V$ (Cf. {equation~}(\ref{S})), \begin{equation} \delta(\hbox{\twelveBF x};V)={1\over V} \int d^3r~ \delta(\hbox{\twelveBF x}+\hbox{\twelveBF r}) W(\hbox{\twelveBF r}) ~~. \end{equation} Thus, $\overline{\xi}_2(V)$ is just the variance of the smoothed density field, given by {equation~}(\ref{sigr}), and $\overline{\xi}_3(V)$ is its skewness. (The smoothing discussed here should not be confused with the smoothed density field introduced in the CGF model of {equation~}(\ref{prime}); in the CGF model, the smoothing radius is associated with the physical scale of threshold modulation effects, while here it merely defines the resolution with which one observationally probes the density field.) 
Following standard practice, we evaluate $S_3$ with a top-hat window: for the volume $V=4\pi R^3/3$, $W(r) = 1$ for $r<R$ and vanishes for $r>R$; its Fourier transform $W(kR)$ is given by {equation~}(\ref{wkr}). In this case, $\overline{\xi}_2$ and $\overline{\xi}_3$ are related to the moments of counts in cells of volume $V$, and the skewness is given by \begin{equation} \mathop{\bigl\langle} \delta^3(R)\mathop{\bigr\rangle} = {3\over (2\pi)^6}\int \int d^3k_1 d^3k_2 B(k_1,k_2,|{\bf k}_1+{\bf k}_2|) W(k_1R) W(k_2R) W(|{\bf k}_1+{\bf k}_2|R) ~. \end{equation} In Fig. 2, we plot $S_3$ as a function of the top-hat smoothing radius $R$ for CDM power spectra with $\Omega h = 0.5$ and $\Omega h = 0.2$ (Cf. (\ref{cdmpk})), using the second-order perturbation theory result (\ref{eq:Qm}) for the bispectrum (and assuming that the smoothing radius $R$ is much larger than the galaxy smoothing radius $R_g \sim 1 {\,h^{-1}\,{\rm Mpc}}$). In computing $S_3$ for the low-density model, we have ignored the tiny correction for $\Omega \neq 1$ (Juszkiewicz and Bouchet 1991). The result for the CGF model, also shown here, will be discussed below in section IV. So far as we are aware, these numerical results for $S_3$ for CDM are new. (Goroff et al. 1986 roughly integrated $S_3$ for CDM with a Gaussian smoothing window using Monte Carlo integration, and we have also studied $S_3$ for Gaussian smoothing. Top-hat smoothing requires a more accurate numerical integrator, and we have checked our integration code by comparing with the analytic results of Juszkiewicz and Bouchet (1991) for $S_3$ for power-law spectra--see below). Where our results overlap with the N-body results of Bouchet and Hernquist (1992), the agreement is quite good. We see that $S_3$ does vary with scale $R$ in a manner that depends on the shape of the power spectrum, because the CDM spectrum is not exactly scale-free.
For a scale-free, power-law spectrum $P(k) \propto k^n$, $R$ can be scaled out of the expression for $S_3$, i.e., $S_3$ is a constant, and its value can be found analytically, $S_3(R)=34/7-(n+3)$ (Juszkiewicz and Bouchet 1991). On the other hand, for a purely unsmoothed field, $R = 0$, $W(kR) = 1$, the normalized skewness is $S_3(0) = 34/7$, independent of the power spectrum (Peebles 1980). The hierarchical behavior of the three-point function in perturbation theory extends to higher order correlations, so one can define higher order hierarchical amplitudes $Q_J \simeq \xi_J/\xi_2^{J-1}$ or $S_J=\overline{\xi}_J/\overline{\xi}_2^{J-1}$ which have characteristic amplitudes set by gravitational instability (see Peebles 1980, Fry 1984, Goroff et al. 1986, Bernardeau 1992). \section{Scale-dependent bias and the 3-point correlations} We now turn to study how the $J$-point correlation amplitudes, and in particular the three-point function, are affected by constant and scale-dependent biasing. Because we consider the 3-point function $\xi_3$, we must extend the CGF model to the case in which the matter distribution is not just Gaussian but hierarchical, i.e., we consider the contribution of second-order gravitational evolution. Fry and Gazta\~naga (1993a) have shown that the first-order contribution of biasing to $\xi_3$ is comparable to the contribution from second-order gravitational evolution and, thus, it is not consistent to assume a purely Gaussian density field. We first consider how the non-local cooperative modulation of the density field affects the 3-point function, and then study how it is further affected by linear and non-linear bias, that is, we consider the sequence of transformations $\delta \rightarrow \delta' \rightarrow \delta_g$. \subsection{Cooperative bias} Consider the effect on the 3-point amplitude of the non-local, cooperative linear transformation of the density field given in {equation~}(\ref{ftdk}).
The bispectrum of the cooperative field $\delta'(\hbox{\twelveBF x})$ is \begin{equation} B'_{123}=B_{123}~(1 +\kappa{\cal G}_1)(1 +\kappa{\cal G}_2)(1 +\kappa{\cal G}_3) ~~~, \label{eq:B'} \end{equation} where ${\cal G}_i \equiv {\cal G}(k_i)$ is given by {equation~}(\ref{Gkdef}). The hierarchical 3-point amplitude $Q'$ of the field $\delta'$, defined in {equation~}(\ref{def:qk}), can be expressed in terms of the 3-point amplitude $Q$ for the underlying density field, $\delta$: \begin{equation} Q' = Q ~{(P_1 P_2 + P_1 P_3 + P_2 P_3)~(1 +\kappa{\cal G}_1)(1 + \kappa{\cal G}_2)(1 +\kappa{\cal G}_3) \over{ P_1 P_2 (1 +\kappa{\cal G}_1)^2(1 +\kappa{\cal G}_2)^2+ {}~(1 \leftrightarrow 3) + ~(2 \leftrightarrow 3)}} ~~. \label{eq:Q'} \end{equation} Note that the ratio $Q'/Q$ has no explicit angular dependence in $\hbox{\twelveBF k}$-space, i.e., it depends only on the magnitudes $k_1$, $k_2$, $k_3$. Using this property, we can point to several important limiting behaviors of $Q'/Q$. For example, on small length scales, $k_1$, $k_2$, $k_3 \gg R^{-1}_s$, we obviously retrieve $Q'=Q$, and in the opposite limit of large scales (small triangles in $\hbox{\twelveBF k}$-space), $k_1$, $k_2$, $k_3 \ll R^{-1}_s$, we have $Q'/Q \simeq 1/(1+\kappa)$, independent of the power spectrum and the triangle configuration. The other limiting case of interest is a triangle with two large sides and one small side, e.g., $k_1$, $k_2 \gg R^{-1}_s$, $k_3 \ll R^{-1}_s$: if the power spectrum is approximately a power law, $P(k) \propto k^n$, then for $n>0$ (and $k_3/k_{1(2)} \ll (1+\kappa)^{-2/n}$), $Q'/Q \simeq 1+\kappa$; for $n=0$, $Q'/Q = 3(1+\kappa)/[1+2(1+\kappa)^2]$; and for $n<0$, $Q'/Q \simeq 1/(1+\kappa)$. As noted in section III, an important class of configurations is equilateral triangles in $\hbox{\twelveBF k}$-space, $k_1 = k_2 = k_3 = k$, for which \begin{equation} Q_\Delta'= Q_\Delta~ {{(1 +\kappa{\cal G})^3}\over{(1 +\kappa{\cal G})^4}} = {Q_\Delta\over{(1 +\kappa{\cal G})}} ~~. 
\label{Qeq} \end{equation} With the Gaussian CGF smoothing window, ${\cal G}(k)= e^{-(kR_s)^2/2}$, for scales larger than $R_s$, $kR_s \ll 1$, we have $Q_\Delta'= Q_\Delta~(1+\kappa)^{-1}$, whereas for scales smaller than $R_s$, $kR_s \gg 1$, we have $Q_\Delta'=Q_\Delta=4/7$. For the preferred parameter values considered by BCFW to match the APM data, $R_s=20 {\,h^{-1}\,{\rm Mpc}}$ and $\kappa = 2.29$, we see that within the range of the weakly non-linear regime, $k^{-1} \sim 10 {\,h^{-1}\,{\rm Mpc}}$, there is a sharp transition from $Q_\Delta' \simeq Q_\Delta$ to $Q_\Delta'= 0.3Q_\Delta$. We explore the observational consequences of this behavior in the next section (see Fig. 3). It is also of interest to study the normalized skewness of the smoothed cooperative density field, $S'_3(R)= \mathop{\bigl\langle} \delta'^3(\hbox{\twelveBF x};R)\mathop{\bigr\rangle}/\mathop{\bigl\langle} \delta'^2(\hbox{\twelveBF x};R)\mathop{\bigr\rangle}^2$, where the Fourier-transform of the top-hat-smoothed cooperative density field is $\delta'(\hbox{\twelveBF k};R)=W(kR)\delta(\hbox{\twelveBF k})[1+\kappa {\cal G}(kR_s)]$, with $W(kR)$ given by (\ref{wkr}) and ${\cal G}(kR_s)$ given by (\ref{Gkdef}). The function $S'_3(R)$ is shown, for the CDM $\Omega h =0.5$ spectrum with the canonical CGF parameters $\kappa = 2.29$, $R_s = 20 {\,h^{-1}\,{\rm Mpc}}$, as the curve labelled CGF in Fig. 2. As expected, in this case $S_3$ has a steeper dependence on $R$ for scales $R \ltilde R_s$ than either the standard or low-density CDM models, due to the rather sharp, non-power-law feature in $\delta'(\hbox{\twelveBF k})$ arising from cooperative effects. These different behaviors are compared with data in Fig. 4 below. One can also define higher order amplitudes, $Q_J$, by $\mathop{\bigl\langle}\delta^J(k)\mathop{\bigr\rangle} \simeq Q_J P^{J-1}$, with $Q_3=Q$.
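The limiting behaviors of $Q'/Q$ quoted above can be checked numerically from equation (\ref{eq:Q'}). The following is a minimal sketch (plain Python; the function names are ours, and a pure power-law spectrum $P(k)\propto k^n$ is assumed) for the Gaussian CGF window:

```python
import math

def G(k, Rs):
    """Gaussian CGF smoothing window: G(k) = exp(-(k*Rs)^2 / 2)."""
    return math.exp(-(k * Rs) ** 2 / 2.0)

def Q_ratio(k1, k2, k3, n, kappa, Rs):
    """Q'/Q from the bispectrum relation, for a power-law P(k) ~ k^n."""
    P = lambda k: k ** n
    g1 = 1 + kappa * G(k1, Rs)
    g2 = 1 + kappa * G(k2, Rs)
    g3 = 1 + kappa * G(k3, Rs)
    num = (P(k1) * P(k2) + P(k1) * P(k3) + P(k2) * P(k3)) * g1 * g2 * g3
    den = (P(k1) * P(k2) * g1 ** 2 * g2 ** 2
           + P(k1) * P(k3) * g1 ** 2 * g3 ** 2
           + P(k2) * P(k3) * g2 ** 2 * g3 ** 2)
    return num / den

kappa, Rs = 2.29, 20.0   # canonical BCFW parameters, Rs in h^-1 Mpc

# Large-scale equilateral triangles (k*Rs << 1): Q'/Q -> 1/(1+kappa) ~ 0.304
print(Q_ratio(1e-3, 1e-3, 1e-3, n=0, kappa=kappa, Rs=Rs))

# Small-scale equilateral triangles (k*Rs >> 1): Q'/Q -> 1
print(Q_ratio(1.0, 1.0, 1.0, n=0, kappa=kappa, Rs=Rs))

# Two long sides, one short side, n = 0: Q'/Q -> 3(1+kappa)/[1+2(1+kappa)^2]
print(Q_ratio(1.0, 1.0, 1e-3, n=0, kappa=kappa, Rs=Rs))
```

The three printed values are close to $0.304$, $1$, and $0.436$ respectively, confirming the configuration dependence discussed above.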
From the above arguments it is straightforward to show that, in general, for regular $J$-sided polygons in $\hbox{\twelveBF k}$-space, \begin{equation} Q_J' = { Q_J \over{(1 +\kappa{\cal G})^{J-2}}} \end{equation} Again, for a Gaussian filter ${\cal G}$, we have $Q_J'= Q_J~(1+\kappa)^{-J+2}$ for $kR_s \ll 1$ and $Q_J'=Q_J$ for $kR_s \gg 1$. Thus if the underlying density field $\delta(\hbox{\twelveBF x})$ is hierarchical, with $Q_J$ approximately constant as a function of $k$, the new field $\delta'$ is also hierarchical, $\mathop{\bigl\langle}\delta^{'J}\mathop{\bigr\rangle} = Q_J' \mathop{\bigl\langle}\delta^{'2}\mathop{\bigr\rangle}^{J-1}$, with the hierarchical amplitudes $Q_J'$ varying with scale from $Q_J'=Q_J$ to $Q_J'= Q_J~(1+\kappa)^{-J+2}$. \subsection{Non-linear, local bias} In the previous subsection, we considered the three-point function for the cooperative density field $\delta'(\hbox{\twelveBF x})$ defined in {equation~}(\ref{prime}). We now want to relate this to the three-point function of the galaxy field $\delta_g(\hbox{\twelveBF x})$, defined by the arbitrary, local, non-linear transformation of the cooperative field in {equation~}(\ref{eq:bias}). Fry \& Gazta\~naga (1993a) have shown that, in the weakly non-linear limit $\mathop{\bigl\langle}\delta^2\mathop{\bigr\rangle} <1$, the hierarchical relation between the moments of the density field, $\mathop{\bigl\langle}\delta^j\mathop{\bigr\rangle}\propto\mathop{\bigl\langle}\delta^2\mathop{\bigr\rangle}^{j-1}$, is preserved under an arbitrary local transformation of this form. Nevertheless, the higher order moments of the galaxy field will differ quantitatively from the hierarchical amplitudes of the cooperative field. 
The analysis of Fry \& Gazta\~naga (1993a) is valid as long as the amplitudes of the original field (here, the cooperative field $\delta'(\hbox{\twelveBF x})$) are of zeroth order in the two-point function $\xi_2=\mathop{\bigl\langle}\delta'^2\mathop{\bigr\rangle}$, i.e., under the assumption that $Q_J' ={\cal O} \mathop{\bigl\langle}\delta'^2\mathop{\bigr\rangle}^0$; in particular, their results apply even if the original field is not hierarchical in the strict sense that $Q_J'$ is constant. Let the hierarchical amplitudes of the smoothed galaxy field ${\delta_g}(\hbox{\twelveBF x})=f(\delta')$ be denoted by $Q_{g,J}$. To consider the 3-point galaxy amplitude, $Q_g \equiv Q_{g,3}$, we must keep terms up to quadratic order in $\delta'$ in the expansion (\ref{eq:taylor}) of the biasing function $f(\delta')$. Applying the results of Fry and Gazta\~naga (1993a), we find \begin{equation} Q_g = b^{-1}( Q' + c_2) + {\cal O}\mathop{\bigl\langle}\delta'^2\mathop{\bigr\rangle} ~~~, \label{Qgtot} \end{equation} where $c_2=b_2/b$ and $b=b_1$ in {equation~}(\ref{eq:taylor}), and $Q'$, the 3-point amplitude for the cooperative field $\delta'$, is related to the 3-point amplitude of the underlying density field by {equation~}(\ref{eq:Q'}). For example, for the high peaks model, in the limit $\nu \gg 1$ and $\sigma \ll 1$, the bias function $f(\delta')$ is exponential, and we have $c_2=b$. This suggests that the $c_2$ term in (\ref{Qgtot}) is of the same order as the $Q'$ term, i.e., that the contribution of non-linear bias to the galaxy 3-point function may be comparable to the second-order gravitational contribution. For equilateral triangles in $\hbox{\twelveBF k}$-space, we can use (\ref{Qeq}) and (\ref{Qgtot}) to relate the galaxy 3-point amplitude $Q_{g,\Delta}$ to that of the underlying density field, $Q_\Delta$, \begin{equation} Q_{g,\Delta} = b^{-1}\left( { Q_\Delta \over{1 +\kappa{\cal G}}} + c_2\right) ~~~, \label{eq:Qg} \end{equation} with $Q_\Delta=4/7$. 
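The high-peaks value $c_2=b$ quoted above can be seen in one line, assuming (as the result requires) that the coefficients in (\ref{eq:taylor}) are Taylor coefficients with the usual $1/j!$ factors, so that $b_j$ is the $j$-th derivative of $f$ at the origin. For an exponential bias function $f(\delta')\propto e^{b\delta'}$, each derivative brings down one factor of $b$, and the overall normalization cancels in the ratio:

```latex
b_j \;=\; \left.\frac{{\rm d}^j f}{{\rm d}\delta'^{\,j}}\right|_{\delta'=0}
\;\propto\; b^j
\qquad\Longrightarrow\qquad
c_2 \;\equiv\; \frac{b_2}{b_1} \;=\; b \,.
```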
On small scales, $k_{NL} \gg k \gg R_s^{-1}$, where cooperative effects are negligible, but still in the mildly non-linear regime $\mathop{\bigl\langle}\delta^2\mathop{\bigr\rangle} < 1$, we have $Q_{g,SS} \simeq b^{-1}( Q + c_2)$, just the result in the absence of cooperative effects. On large scales, $k \ll R_s^{-1}$, the galaxy 3-point amplitude is $Q_{g,LS} \simeq b^{-1}[(1+\kappa)^{-1}Q + c_2]$. The fractional change in $Q_g$ between large ($kR_s<1$) and small ($kR_s>1$) scales is thus \begin{equation} {\Delta Q_g \over{Q_g}} \simeq \left({\kappa \over{1+\kappa}}\right) {Q\over{b Q_g}} ~~~. \label{coop:q3} \end{equation} For the case of purely linear biasing, $c_2=0$, this gives $\Delta Q_g/Q_{g,SS} \simeq \kappa/(1+\kappa)$, independent of $b$ or $Q$. A similar expression can be derived for the hierarchical amplitudes $S_J=\overline{\xi}_J/\overline{\xi}_2^{J-1}$ of the volume-averaged correlation functions. Following the arguments above, the small-to-large-scale variation in the galaxy 3-point amplitude $S_g \equiv S_{g,3}$ is related to the density amplitude $S \equiv S_3$ by \begin{equation} {\Delta S_g \over{S_g}} \simeq \left({\kappa \over{1+\kappa}}\right) {S\over{b S_g}} ~~. \label{coop:s3} \end{equation} Again for purely linear bias, this gives ${\Delta S_g/{S_{g,SS}}} \simeq \kappa/(1+\kappa)$. Finally, for the non-CGF models, note that {equation~}(\ref{Qgtot}) relates the galaxy and matter density 3-point amplitudes with the replacement $Q' \rightarrow Q$ (Fry and Gazta\~naga 1993a); we will make use of this in comparing the CDM models to observations below. It is also worth reiterating that all of our results apply to models with initially Gaussian fluctuations. For non-Gaussian models, there is an additional first-order contribution to the 3-point amplitude, which can be thought of as a (possibly scale-dependent) contribution to the parameter $c_2$. 
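For the three CGF parameter choices considered by BCFW, the purely-linear-bias step $\Delta Q_g/Q_{g,SS}\simeq\kappa/(1+\kappa)$ is a substantial fraction in every case. A minimal numerical sketch:

```python
# Fractional small-to-large-scale step in the galaxy 3-point amplitude for
# purely linear bias (c_2 = 0): Delta Q_g / Q_{g,SS} = kappa / (1 + kappa).
for kappa in (0.84, 2.29, 4.48):   # the three BCFW parameter choices
    step = kappa / (1.0 + kappa)
    print(f"kappa = {kappa:4.2f}:  Delta Q_g / Q_g,SS = {step:.3f}")
```

Even for the smallest value of $\kappa$ the predicted step is close to 50\%, which is why the observed flatness of the three-point amplitudes (next section) is so constraining for these models.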
\section{Comparison with observations} We now compare the model predictions for the three-point amplitudes with observations from the CfA, SSRS, and Perseus-Pisces redshift surveys. As above, we focus on three models: standard CDM ($\Omega h =0.5$), low-density CDM ($\Omega h = 0.2$), and CGF-modified standard CDM, and we employ the results of second-order perturbation theory. Comparison with the observed galaxy amplitudes, $S_{g,3}$ and $Q_{g,\Delta}$, can in principle be used to constrain the bias parameters $b$ and $c_2$ as well as the CGF parameters $\kappa$ and $R_s$. In combination with other observations, e.g., of the galaxy power spectrum, and of the large-angle microwave anisotropy as seen by COBE DMR and other experiments, these results can help point toward preferred models for large-scale structure. \subsection{Limits from $Q_{\Delta}$} Baumgart and Fry (1991) have estimated the galaxy power spectrum and the Fourier-space three-point amplitude for equilateral triangle configurations, $Q_\Delta$, using data from the Center for Astrophysics and Perseus-Pisces redshift surveys. It is worth noting that the power spectrum $P(k)$ for these samples does show evidence for the extra large-scale power inferred in other spectroscopic (e.g., IRAS) and photometric (e.g., APM) surveys. Their results for $Q_\Delta(k)$, averaged over 3 subsamples each from the CfA and Perseus-Pisces surveys, are shown in Fig. 3. The error bars in each bin indicate the variance between subsamples, and we only show results for values of the wavenumber away from the strongly non-linear regime. The striking feature of these results is the relative constancy of the three-point amplitude over more than a decade in wavenumber, $k = 0.1 - 1.6 ({\,h^{-1}\,{\rm Mpc}})^{-1}$.
Moreover, the observed amplitude of $Q_\Delta$ over this range is apparently in reasonable agreement with the prediction of second-order perturbation theory {\it without} cooperative effects, and under the assumption of no bias, $b = 1$, $c_2 = 0$, namely $Q_\Delta = 4/7$ (shown as the short-dash line in Fig. 3). Turning this around, using the perturbation theory relation $Q_{g,\Delta} = b^{-1}[(4/7) + c_2]$, one can in principle use the results in Fig. 3 to constrain the parameter space of $b-c_2$ for any model with scale-independent bias. In practice, however, the derived constraints are not terribly stringent. First, searching this two-dimensional space and treating the data points as independent, one finds a minimum $\chi^2 \simeq 25.5$ (for 12 data points and a 2-parameter fit, i.e., 10 degrees of freedom) for $c_2 \simeq 0.52b-(4/7)$. This is consistent with a mean value of $Q_\Delta \simeq 0.52$ over the plotted range of $k$, close to the expected perturbation theory result of $4/7=0.57$. In particular, for purely linear bias, $c_2=0$, the best fit value of the bias parameter is $b=1.1\pm 0.1$, consistent with the visual impression from Fig. 3. On the other hand, this constraint on the bias parameter space should be interpreted with a great deal of caution, since the best fit curve for perturbation theory has a chi-squared of 2.5 per degree of freedom, more than 3-$\sigma$ above the expected value. A better fit to the data would be obtained with a model in which $Q_\Delta$ falls gently with increasing $k$. However, given the likelihood that the true data errors are larger than those shown here, it would certainly be premature to exclude the perturbation theory result on this basis. The statements above apply for local, non-cooperative bias models. On the other hand, as noted in section 4.1, the CGF model predicts a dramatic scale-dependence of $Q_\Delta (k)$ around the scale $kR_s \sim 1$. This behavior is shown in Fig. 
3 for the 3 CGF parameter choices considered by BCFW, $\kappa, R_s = 0.84, 10 {\,h^{-1}\,{\rm Mpc}}$ (dot-long dash curve), 2.29, $20 {\,h^{-1}\,{\rm Mpc}}$ (solid curve), and 4.48, $30 {\,h^{-1}\,{\rm Mpc}}$ (dot-short dash curve). As above, these models are plotted for $c_2=0$, $b=1$. The `smoking gun' of these models is the sharp downturn in $Q_\Delta$ on large scales. Since, within the observational errors, no such downturn is observed, one can use this to constrain the CGF parameter space. In particular, for $R_s = 10 {\,h^{-1}\,{\rm Mpc}}$, $\kappa =0.84$, the CGF model is always a significantly poorer fit to the data than the scale-independent bias models. For this choice of CGF parameters, the requirement of a fit that is within 1-$\sigma$ of the scale-independent models (i.e., a fit with $\chi^2 < 3$ per degree of freedom) necessitates a linear bias parameter $b > 2.6$ and a significant non-linear bias, $c_2 > 0.8$. In this case, the large linear bias factor suppresses the gravitational and cooperative contribution to $Q$, and the match with the observations is obtained chiefly by the non-linear bias. This would make the apparent agreement between the observed $Q_\Delta$ and the perturbation theory prediction of 4/7 purely coincidental. This behavior is an instance of our general conclusion that models with sharply varying scale-dependent bias are forced to uncomfortably large values of the linear bias $b$. On the other hand, for larger values of the CGF `scale of influence' $R_s$, the 3-point data do not extend to large enough scales for the downturn to be significant. Consequently, the $R_s = 20 {\,h^{-1}\,{\rm Mpc}}$ CGF model, when fitted to the $Q_\Delta (k)$ data, occupies the same region in the two-dimensional $b-c_2$ bias parameter space, with only a slightly higher $\chi^2$ than the non-CGF models. 
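For reference, the arithmetic behind the scale-independent degeneracy $c_2 \simeq 0.52b-(4/7)$ quoted in the fit above is elementary; a minimal sketch (the fitted mean amplitude $Q_\Delta \simeq 0.52$ is taken from the text):

```python
# Scale-independent bias: Q_{g,Delta} = (4/7 + c_2) / b.  A fitted mean
# amplitude of 0.52 pins down only the combination c_2 = 0.52 b - 4/7.
Q_fit = 0.52
c2_of_b = lambda b: Q_fit * b - 4.0 / 7.0

b_linear = (4.0 / 7.0) / Q_fit   # best-fit b for purely linear bias, c_2 = 0
print(b_linear)                   # ~ 1.10, matching the quoted b = 1.1 +/- 0.1
print(c2_of_b(2.0))               # larger b requires positive non-linear bias
```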
Clearly, to more strongly constrain or rule out the CGF model, it would be useful to have data on $Q_\Delta (k)$ which extends down to $k \ltilde 0.05$ h Mpc$^{-1}$; this should be feasible with currently available redshift samples drawn from the IRAS catalog. \subsection{Limits on $S_3$} To compare model predictions to observations of the volume-averaged normalized skewness $S_3$, we use the results of the $S_3$ analysis by Gazta\~naga (1992) for samples in the CfA and SSRS redshift catalogs (we use the largest samples, denoted SSRS115 and CfA92 in Gazta\~naga 1992). The average over these samples is shown in Fig. 4, where we plot $S_{g,3}$ as a function of top-hat smoothing radius (or cell size) $R$. Each data point in Fig. 4 is an average over bins that correspond to different degrees of freedom: for a given value of $R$, the average number of galaxies in that cell size is at least one unit larger than in the cell of the next smallest value of $R$ shown in the figure. The error bars shown here are the larger of the intersample dispersion in the given $R$-bin and the intrinsic errors in the original samples. From Fig. 4, it is apparent that $S_{g,3}(R) \simeq 2$ is quite constant over the range of $R$ shown, with a variation of about 25\%. We also show in Fig. 4 the same model predictions for $S_3$ as in Fig. 2, again for the bias parameters $b=1$ and $c_2=0$. Note that the curves in Fig. 4 can be shifted vertically, and their slopes steepened or flattened, by changing the values of $b$ and $c_2$. In Fig. 5 we plot the contours of $\chi^2$ for the comparison between the three models and the $S_3$ observations in the $b-c_2$ parameter space. The 3 contours correspond to $\chi^2=5$, 8, and 14 for $11$ data points fit with $2$ parameters (9 degrees of freedom).
Again, because of the way error bars have been assigned to the data, we caution against absolute interpretations of these $\chi^2$ values; however, the difference in $\chi^2$ values for different models should provide a measure of the relative goodness of fit to the data. In this sense, both CDM models give comparably good fits to the data for values of the bias parameter above $b=1$, with $b>1.8$ for the best fit (lowest $\chi^2$ contour) range. For a given value of $b$, the non-linear bias $c_2$ is slightly larger for the $\Omega h=0.2$ case than for standard $\Omega h = 0.5$ CDM. For the CGF model, on the other hand, the linear bias parameter must satisfy $b>2$ for a reasonable fit, while the best fit range requires $b > 3$. The large value of bias for the CGF model inferred from $S_3$ is qualitatively similar to the result above from the Fourier-amplitude $Q_\Delta$: fits to the data with large values of $b$ are in a sense {\it ad hoc}, because the agreement is obtained by depressing the gravitational contribution and then fitting with the non-linear bias $c_2$ alone. In particular, for $c_2/b \simeq 0.6$, the galaxy amplitude $S_{g,3} \simeq 2 \simeq 3Q$ is completely produced by non-linear bias, not by gravitational or cooperative effects. Therefore the fit for the CGF model, for which $c_2/b=0.6$, does not really reflect agreement between the data and the CGF model, but rather the possibility that, in any model, the observed signal comes from the non-linear component of biasing. At this point, it is worth noting several features of the $S_3$ observations. The skewness has been measured from other redshift and angular catalogs in addition to those used above; a useful compendium of results in the literature is given in Fry and Gazta\~naga (1993b). Except for the Lick catalog, the values of $S_3$ inferred from other surveys are broadly consistent with those shown in Fig. 4 (e.g., Bouchet et al. 1991, 1993; Meiksin et al. 1992).
A second issue concerns redshift distortions of the higher order moments. It is well known that peculiar velocities distort the galaxy power spectrum (Kaiser 1987), so that the power measured in a redshift catalog does not precisely represent the clustering power in real space. The transformation from the real space to redshift space power spectrum depends on the ratio $\Omega^{0.6}/b$. The extent to which this affects higher moments has been somewhat controversial: in N-body simulations, Lahav et al. (1993) find that $S_3$ is significantly distorted in redshift space in the strongly non-linear regime, while Coles et al. (1993) do not see this effect. In their analysis of higher moments in the CfA, SSRS, and IRAS 1.9 Jy catalogs, Fry and Gazta\~naga (1993b) find that the volume-averaged 3-point function $\overline{\xi}_3$ is affected by redshift distortions, but that the normalized skewness $S_3$ is insensitive to them. This empirical insensitivity justifies our comparison of the model results to the $S_3$ data in redshift space. We finish this section with some comments about the implications of the $Q$ and $S_3$ observations for the bias parameter(s) and how these compare with other data on large-scale structure. We will focus on the CDM models (as opposed to the CGF model). First, as noted above, the $Q_\Delta$ observations do not significantly constrain the bias parameter $b$ once one allows for non-linear bias (although they do imply a relation between $b$ and $c_2$). On the other hand, the $S_3$ observations do appear to favor larger values of the bias, $b \gtilde 1.8$, for both CDM models. In a simple bias prescription, for CfA galaxies we would expect $b \sigma_8 \sim 1$, so that, taken at face value, this constraint on $b$ would imply a low normalization amplitude for the CDM models, $\sigma_8 \ltilde 0.56$.
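The quoted normalization bound follows directly from inverting the simple prescription; a minimal sketch (using $b\sigma_8 \sim 1$ and the $S_3$-preferred lower limit on $b$ from the text):

```python
# b * sigma_8 ~ 1, with the S_3-preferred bias b >~ 1.8, bounds the amplitude.
b_min = 1.8
sigma8_max = 1.0 / b_min
print(f"sigma_8 <~ {sigma8_max:.2f}")   # prints sigma_8 <~ 0.56
```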
For standard $\Omega h = 0.5$ CDM, this is uncomfortably low compared to the amplitude inferred from the COBE DMR measurement of the large-angle microwave anisotropy, $\sigma_{8,dmr} \sim 1$. For the low-density CDM model, the $\sigma_8$ amplitude inferred from COBE has a large range, depending on the choice of $\Omega$ and $h$ (Efstathiou, Bond, and White 1992). For example, for the choice $\Omega = 0.3$, $h=2/3$, Efstathiou, Bond, and White (1992) infer $\sigma_8 \sim 0.7$ from COBE, closer to the range implied by the $S_3$ observations. (On the other hand, for lower $\Omega$ and larger $h$, e.g., $\Omega=0.2$ and $h=1$, the COBE value for $\sigma_8$ becomes larger than unity, which is disfavored by the 3-point data.) While it is tempting to draw conclusions about the viability of different models from this comparison, in particular, to argue against standard COBE-normalized CDM, there are potential pitfalls which militate against making high confidence-level statements of this type. In particular, if one focused only on the $S_3$ data in Fig. 4 at large $R$ (where the perturbation result is more trustworthy), one would conclude that standard $\Omega h = 0.5$ CDM fits the data well with $b \simeq 1$, $c_2 \sim 0$, in agreement with the COBE normalization. A conclusion which can be drawn with more confidence from Fig. 5 is that the high peaks model prediction $c_2/b = 1$ is inconsistent with the $S_3$ data for any of the Gaussian models we have studied. \section{Conclusion} We have studied the three-point galaxy correlations in models of large-scale structure, focusing on the CDM model and its variants with extra large-scale power, working in the context of biased galaxy formation in second order perturbation theory. In the non-local bias scheme, galaxies form and light up in just such a way as to create the illusion of extra power.
We have shown that models with effective scale-dependent (or non-local) bias, such as the CGF model, can display the same enhanced large-scale power as other variations of standard CDM, but that they break the scaling hierarchy between the two- and three-point functions that arises from gravitational evolution. The resulting step in the Fourier-space three-point function $Q_\Delta (k)$ at the scale $k \sim R^{-1}_s$ of the bend in the bias function (which produces the extra large-scale power) should provide a strong observational test of scale-dependent bias models. However, this step can be partially masked if $b$ is large and if there is significant non-linear bias. Consequently, using data currently available, we have shown that the scale-dependent bias explanation of large-scale power requires a larger value of the linear bias factor $b$ than in the standard CDM model, and a substantial non-linear bias, in order to account for the observed flatness of the three-point amplitudes. On the other hand, the three-point amplitudes $S_3$ and $Q$ do not strongly discriminate between standard and low-density CDM with scale-independent bias; this conclusion also extends to the tilted CDM and mixed dark matter models. In these cases, however, the $S_3$ data tentatively point to moderately large values of the bias, $b \gtilde 1.8$, but more data on large scales is needed to confirm this. We emphasize that it is useful to have observational tests using both $S_3$ and $Q_\Delta$, since the former depends on the power spectrum while the latter does not. For completeness, we note that the CGF and other non-local bias models have other hurdles to overcome in addition to the higher moments. In the CGF and related models, the effective bias factor increases with lengthscale. On the other hand, recent N-body simulations of CDM incorporating hydrodynamics suggest that the bias factor $b(k)$ {\it decreases} with lengthscale (Cf. 
Katz, Hernquist, and Weinberg 1992, Fig.2 of Cen and Ostriker 1992). In addition, the modifications introduced by CGF do not apparently address the difficulties which CDM faces with excessive pairwise velocities on small scales (Gelb and Bertschinger 1993 and references therein). On the other hand, it would be interesting to study whether there might be a cooperative analogue for velocity bias (Couchman and Carlberg 1992). \bigskip \noindent {\large\bf Acknowledgements} \bigskip We thank Jim Fry for providing the data for Fig. 3 and L. N. da Costa for providing the SSRS catalog. This work was supported in part by DOE and by NASA (grant NAGW-2381) at Fermilab. After this work was completed, we received a preprint of Juszkiewicz, Bouchet, and Colombi which also gives some approximate numerical results for $S_3(R)$ for standard (unbiased) CDM. \newpage \bigskip \noindent {\large\bf Figure Captions} \bigskip \noindent{\bf Fig. 1.} The two-point spatial correlation function $\xi(r)/(b\sigma_8)^2$ in linear theory for standard CDM $(\Omega h = 0.5)$, low-density CDM $(\Omega h =0.2)$, and CGF-modified standard CDM with $\kappa = 2.29$, $R_s = 20 {\,h^{-1}\,{\rm Mpc}}$. \bigskip \noindent{\bf Fig. 2.} The volume-averaged normalized skewness $S_3(R)$ in second-order theory is shown as a function of top-hat smoothing radius $R$ for the three models of Fig. 1. \bigskip \noindent{\bf Fig. 3.} The Fourier-space 3-point amplitude for equilateral triangles $Q_\Delta (k)$ is shown as a function of wavenumber $k$. The data points (from Baumgart and Fry 1991) are an average over subsamples from the CfA and Perseus-Pisces surveys. The model points are for standard perturbation theory (short dashed line, $Q_\Delta =4/7$), and for the three CGF models discussed by BCFW: $\kappa, R_s = 0.84, 10 {\,h^{-1}\,{\rm Mpc}}$ (dot-long dash), $2.29, 20 {\,h^{-1}\,{\rm Mpc}}$ (solid), and $4.48, 30 {\,h^{-1}\,{\rm Mpc}}$ (dot-short dash). For the models, we have taken $b=1$, $c_2=0$. 
\bigskip \noindent{\bf Fig. 4.} The volume-average skewness $S_3(R)$ for the same models as in Fig. 2 are shown in comparison with data from the CfA and SSRS surveys (from Gazta\~naga 1992). The models are shown with $b=1$, $c_2 = 0$. \bigskip \noindent{\bf Fig. 5.} Contours of $\chi^2 = 5$, 8, and 14 (for 9 degrees of freedom) in the $b-c_2$ parameter space for fits of the 3 models to the data in Fig. 4. The darker regions correspond to lower $\chi^2$. (a) CDM $\Omega h = 0.5$, (b) CDM $\Omega h = 0.2$, (c) CGF $\kappa = 2.29$, $R_s = 20 {\,h^{-1}\,{\rm Mpc}}$. \newpage \bigskip \noindent {\large\bf References} \bigskip \def\par\parshape 2 0truecm 16.5truecm 1truecm 15.5truecm\noindent{\par\parshape 2 0truecm 16.5truecm 1truecm 15.5truecm\noindent} \def\paper#1;#2;#3;#4; {\par\parshape 2 0truecm 16.5truecm 1truecm 15.5truecm\noindent#1, {#2}, {#3}, #4} \def\book#1;#2;#3;#4; {\par\parshape 2 0truecm 16.5truecm 1truecm 15.5truecm\noindent#1, {\sl #2} (#3: #4)} \def\preprint#1;#2; {\par\parshape 2 0truecm 16.5truecm 1truecm 15.5truecm\noindent#1, #2} \paper Adams, F. C., Bond, J. R., Freese, K., Frieman, J. A., \& Olinto, A. V. 1993;Phys.Rev.D;47;426; \paper Bardeen, J. M., Bond, J. R., Kaiser, N., \& Szalay, A. S. 1986; ApJ;304;15; \paper Babul, A., \& White, S. D. M. 1991;MNRAS;253;31P; \preprint Bardeen, J. 1984;% in {\sl Inner Space/Outer Space}, eds. E. Kolb, M. Turner, D. Lindley, K. Olive, and D. Seckel, (Chicago:Univ. of Chicago press, 1986); \paper Baumgart, D. J., \& Fry, J. N. 1991;ApJ;375;25; \paper Bernardeau, F. 1992;ApJ;392;1; \preprint Bower, R., Coles, P., Frenk, C.S., \& White, S.D.M. 1993;ApJ;405;403; \preprint Bouchet, F. R., Strauss, M., Davis, M., Fisher, K., Yahil, A., \& Huchra, J. 1993;preprint; \preprint Bouchet, F. R., Davis, M., \& Strauss M. 1991;% in {\sl The Distribution of Matter in the Universe}, eds. G. Mamon \& D. Gerbal (Meudon: Observatoire de Paris); \paper Bouchet, F. R. \& Hernquist, L. 1992;ApJ;400;25; \paper Couchman, H. M. 
P. \& Carlberg, R. 1992;ApJ;389;453; \paper Cen, R., Gnedin, N. Y., Kofman, L. A., \& Ostriker, J. P. 1993;ApJ;399;L11; \paper Cen, R., \& Ostriker, J. P. 1992; ApJ;399;L113; \preprint Coles, P., Moscardini, L., Lucchin, F., Matarrese, S., \& Messina, A. 1993;preprint; \paper Da Costa, L. N., Pellegrini, P., Davis, M., Meiksin, A., Sargent, W., \& Tonry, J. 1991;ApJS;75;935; \paper Davis, M., Summers, F. J., \& Schlegel, D. 1992;Nature;359;393; \paper Dekel, A., \& Rees, M. J. 1987;Nature;326;455; \paper Efstathiou, G., Bond, J. R., \& White, S. D. M. 1992;MNRAS;258;1P; \paper Efstathiou, G., Kaiser, N., Saunders, W., Lawrence, A., Rowan-Robinson, M., Ellis, R. S., \& Frenk, C. S. 1990;MNRAS;247;10P; \paper Efstathiou, G., Sutherland, W., \& Maddox, S. J. 1990;Nature;348;705; \preprint Feldman, H., Kaiser, N., \& Peacock, J. 1993; preprint UM AC 93-5; \paper Fisher, K. B., Davis, M., Strauss, M. A., Yahil, A., \& Huchra, J. P. 1993;ApJ;402;42; \paper Fry, J. N. 1984;ApJ;279;499; \paper Fry, J. N. 1985;ApJ;289;10; \paper Fry, J. N. 1986;ApJ;308;L71; \preprint Fry, J.N. \& Gazta\~naga, E. 1993a; ApJ in press (FERMILAB-Pub-92/367-A); \preprint Fry, J.N. \& Gazta\~naga, E. 1993b;preprint FERMILAB-Pub-93/097-A; \paper Fry, J. N. \& Seldner, M. 1982;ApJ;259;474; \paper Gazta\~naga, E. 1992;ApJ;398;L17; \preprint Gelb, J., and Bertschinger, E. 1993; preprint FERMILAB-Pub-92/74-A; \paper Gelb, J., Gradwohl, B., \& Frieman, J. A. 1993;ApJ;403;L5; \paper Goroff, M. H., Grinstein, B., Rey, S. J., \& Wise, M. B. % 1986;ApJ;311;6; \paper Gramann, M., \& Einasto, J. 1991;MNRAS;254;453; \paper Groth, E. J. \& Peebles, P. J. E. 1977;ApJ;217;385; \paper Hamilton, A. J. S. 1988;ApJ;332;67; \paper Hamilton, A. J. S., Kumar, P., Lu, E., \& Matthews, A. 1991;ApJ;374;L1; \preprint Haynes, M., \& Giovanelli, R. 1988; in {\sl Large-Scale Motions in the Universe}, ed. V. C. Rubin \& G. V. Coyne (Princeton: Princeton University Press); \paper Huchra, J., Davis, M., Latham, D., \& Tonry, J. 
1983;ApJS;52;89; \preprint Juszkiewicz, R., \& Bouchet, F. 1991;in {\sl The Distribution of Matter in the Universe}, eds. G. Mamon \& D. Gerbal (Meudon: Observatoire de Paris); \paper Kaiser, N. 1984a;ApJ;284;L9; \preprint Kaiser, N. 1984b; in {\sl Inner Space/Outer Space}, eds. E. Kolb, M. Turner, D. Lindley, K. Olive, and D. Seckel, (Chicago: University of Chicago press, 1986); \paper Kaiser, N. 1987;MNRAS;227;1; \paper Katz, N., Hernquist, L., \& Weinberg, D. H. 1992;ApJ;399;L109; \preprint Katz, N., Quinn, P., \& Gelb, J. 1992;preprint; \paper Lahav, O., Itoh, M., Inagaki, S., \& Suto, Y. 1993;ApJ;402;387; \preprint Klypin, A., Holtman, J., Primack, J., \& Regos, E. 1992; preprint; \preprint Liddle, A., \& Lyth, D. H. 1992;preprint; \paper Liddle, A., Lyth, D. H., \& Sutherland, W. 1992;Phys.Lett.B;279;244; \paper Loveday, J., Efstathiou, G., Peterson, B. A., \& Maddox, S. J. 1992; ApJ;400;L43; \paper Maddox, S. J., Efstathiou, G., Sutherland, W. J., \& Loveday, J. 1990;MNRAS;242;43P; \paper Meiksin, A., Szapudi, I., \& Szalay, A. S. 1992;ApJ;394;87; \preprint Moscardini, L., Borgani, S., Coles, P., Lucchin, F., Matarrese, S., Messina, A., \& Plionis, M. 1993; preprint; \paper Park, C., Gott, J. R., \& da Costa, L. N. 1992;ApJ;392;L51; \paper Peacock, J. A., \& Nicholson, D. 1991;MNRAS;253;307; \preprint Peebles, P. J. E. 1980; {\sl The Large Scale Structure of the Universe}, (Princeton: Princeton University press); \preprint Pogosyan, D., \& Starobinsky, A. 1992;preprint; \paper Politzer, H. D., \& Wise, M. B. 1984;ApJ;285;L1; \paper Rees, M. J. 1985;MNRAS;213;75P; \paper Saunders, W., etal. 1991;Nature;349;32; \paper Schaefer, R. K., Shafi, Q., \& Stecker, F. 1989;ApJ;347;575; \paper Silk, J. 1985;ApJ;297;1; \paper Szalay, A. S. 1988;ApJ;333;21; \paper Szapudi, I., Szalay, A. S., \& Boschan, P. 1992;ApJ;390;350; \paper Taylor, A. N., \& Rowan-Robinson, M. 1992;Nature;359;396; \paper van Dalen, A., \& Schaefer, R. K. 
1992;ApJ;398;33; \paper Vittorio, N., Mattarese, S., \& Lucchin, F. 1988;ApJ;328;69; \paper Vogeley, M. S., Park, C., Geller, M. J., \& Huchra, J. P. 1992;ApJ;395; L5; \paper White, S. D. M., Davis, M., Efstathiou, G., \& Frenk, C. S. 1987; Nature;330;451; \end{document}
\vspace{1.7 cm} % {\large\bf\centering On the well-posedness of branched transportation\\ } \vspace{.6 cm} % \centerline{\sc Maria Colombo, Antonio De Rosa, and Andrea Marchese} \vspace{.8 cm} {\rightskip 1 cm \leftskip 1 cm \parindent 0 pt \footnotesize % {\sc Abstract.} We show in full generality the stability of optimal traffic paths in branched transport: namely we prove that any limit of optimal traffic paths is optimal as well. This solves an open problem in the field (cf. Open problem 1 in the book \emph{Optimal transportation networks}, by Bernot, Caselles and Morel), which has been addressed up to now only under restrictive assumptions. \par \medskip\noindent {\sc Keywords: } Transportation network, Branched transport, Traffic path, Stability. \par \medskip\noindent {\sc MSC:} 49Q20, 49Q10. \par } \section{Introduction} This paper deals with optimizers of the branched transportation problem. Given a source $\mu^-$ and a target $\mu^+$, positive measures on $\mathbb{R}^d$ with compact support, a \emph{traffic path} transporting $\mu^-$ onto $\mu^+$ is given by a $1$-rectifiable current $T$ whose boundary $\partial T$ is $\mu^+-\mu^-$.
This can be identified with a vector-valued measure $T=\vec T(\theta\mathscr{H}^1\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E)$ (with unit vector field $\vec T$ and non-negative multiplicity $\theta$), supported on a bounded set $E\subset \mathbb{R}^d$, which is contained in a countable union of curves of class $C^1$ and having distributional divergence ${\mbox{div}}~T=\mu^--\mu^+.$ Given a parameter $\alpha \in (0,1)$, quantifying the convenience of grouping particles during the transportation, we consider the \emph{$\alpha$-mass} of $T$ \begin{equation} \label{eqn:alphamass} \mathbb{M}^\alpha(T):=\int_E \theta(x)^{\alpha}d\mathscr{H}^1(x), \end{equation} and the minimal transport energy to connect $\mu^-$ to $\mu^+$ \begin{equation}\label{mainp} W^\alpha(\mu^-,\mu^+):=\inf\{\mathbb{M}^\alpha(T): \mbox{$T$ is a traffic path transporting $\mu^-$ onto $\mu^+$}\}. \end{equation} The optimizers in the minimization problem are called optimal traffic paths; the set of optimizers is denoted by $\textbf{OTP}(\mu^-,\mu^+)$. The existence of solutions is obtained by direct methods and in general one does not expect uniqueness. Arguably the main open question concerning the well-posedness of the problem, of special relevance in view of numerical simulations, is whether or not the optima are \emph{stable} with respect to variations of the initial and final distribution of mass. In other words, we ask if the limit of suitable sequences of optima (with respect to the usual notion of convergence of vector-valued measures denoted by $T_n\overset{*}{\rightharpoonup} T$) is still an optimum. The main result of our paper provides a positive answer to this question, raised in \cite[Problem 15.1]{BCM}, for every $\alpha \in (0,1)$. \begin{theorem}[(Stability of optimal traffic paths)]\label{thm:main} Let $\alpha \in(0,1)$, $\mu^-,\mu^+$ be mutually singular positive measures on $\overline{B(0,R)}$, $R>0$, satisfying $\mu^-(\mathbb{R}^d)=\mu^+(\mathbb{R}^d)$.
Let $\{\mu^-_n\}_{n\in \mathbb{N}}, \{\mu^+_n\}_{n\in \mathbb{N}}$ be positive measures on $\overline{B(0,R)}$ such that $\mu^-_n(\mathbb{R}^d)=\mu^+_n(\mathbb{R}^d)$ for every $n \in \mathbb{N}$ and \begin{equation}\label{hp:supp-n-convergence} \mu^\pm _n \overset{*}{\rightharpoonup} \mu^\pm, \end{equation} and assume there exist $T_n\in \textbf{OTP}(\mu^-_n,\mu^+_n)$ optimal traffic paths satisfying \begin{equation} \label{hp:energy-bound} \sup_{n\in \mathbb{N}}\mathbb{M}^\alpha(T_n)<\infty. \end{equation} Then, the (non-empty) family of subsequential weak-$*$ limits of $T_n$ is contained in $\textbf{OTP}(\mu^-,\mu^+)$. \end{theorem} \begin{remark}[($H$-masses)] With minor changes, Theorem \ref{thm:main} holds true for every \emph{$H$-mass}. Namely we can replace the integrand $x \mapsto x^\alpha$ in \eqref{eqn:alphamass} with a general function $H:\mathbb{R}\to[0,\infty)$ which is even, sub-additive, lower semi-continuous, monotone non-decreasing in $(0,+\infty)$, continuous in $0$ and satisfies $H(0)=0$. These functionals have been widely studied (see e.g. \cite{White1999,depauwhardt,flat-relax,BW,CFM,MW}). The interest is twofold: firstly, a general formulation of the branched transportation problem allows one to consider several interesting models, which are relevant for applied mathematics and numerical approximations as in \cite{BW}; secondly, the possibility of proving the result in such generality shows the flexibility and the robustness of our strategy, which does not employ any peculiar property of the function $x \mapsto x^\alpha$. In Remark \ref{hmas} we detail how to modify the proof of Theorem \ref{thm:main} to include such a generalization. \end{remark} \subsection{Background} In the case of discrete measures $\mu^-$ and $\mu^+$, the minimization problem \eqref{mainp} was suggested by Gilbert \cite{Gilbert}, who proposed finite directed weighted graphs $G$ as transportation networks.
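To illustrate the role of the concavity of $t\mapsto t^\alpha$ we recall the following elementary computation, where the specific configuration is chosen only for the sake of illustration. In $\mathbb{R}^2$ let $\mu^-:=\delta_{(0,0)}$ and $\mu^+:=\sfrac12\,\delta_{(1,1)}+\sfrac12\,\delta_{(1,-1)}$ and, for $0\leq x<1$, consider the traffic path which moves the whole mass along the segment from the origin to the branch point $(x,0)$ and then splits it into two segments reaching the two atoms of $\mu^+$. Its $\alpha$-mass is $$f(x)=x+2\Big(\frac{1}{2}\Big)^\alpha\sqrt{(1-x)^2+1}, \qquad\mbox{with}\qquad f'(0)=1-2^{\sfrac12-\alpha}.$$ Hence for every $\alpha<\sfrac12$ a ``Y-shaped'' network with branch point $(x,0)$, $x>0$ small, is strictly cheaper than the ``V-shaped'' one corresponding to $x=0$: the concavity of $t\mapsto t^\alpha$ rewards grouping the particles along an initial common edge.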
For arbitrary measures $\mu^-$ and $\mu^+$, two generalizations of the Gilbert problem have been proposed. On one hand, the above description in terms of traffic paths is due to Xia \cite{Xia,xia2}, and it is related to a problem which arises in the characterization of weakly approximable Sobolev maps with values in a manifold \cite{hrActa}. On the other hand, a different model was introduced and studied in \cite{MSM,BCM}: here the transportation networks (called \emph{traffic plans}) consist of measures on the set of Lipschitz paths, where each path represents the trajectory of a single particle. In both models, the existence of optimizers in the minimization problem has been established \cite{Xia,MSM,BeCaMo,brabutsan,Pegon} (see also the reference book \cite{BCM}). The correspondence between traffic plans and traffic paths can be established by means of Smirnov's theorem on the structure of acyclic, normal 1-dimensional currents \cite{Smirnov93}. Indeed, the two formulations were proved to be equivalent (see \cite{BCM,Pegon} and references therein). Under some restrictions on $\alpha, \mu^-$ and $\mu^+$, optimizers exhibit regularity properties both in the interior (roughly speaking, they are locally finite graphs) and close to their boundary, that is, the supports of $\mu^\pm$ \cite{xia2,MR2250166,DevSolElementary,morsant,xiaBoundary,BraSol}. The models described above can be used and generalized to describe a variety of problems related to branched transportation: for instance, one can study the mailing problem \cite{BCM} (for which the first stability result was proved in \cite{CDRM3}), the urban planning model \cite{BranK}, including two different regimes of transportation, or the recent multi-material transport problem \cite{MMT,MMST}, allowing simultaneous transportation of different goods or commodities.
Recently, shape optimization problems related to the functional \eqref{eqn:alphamass} were analysed in \cite{PeSaXia,BrSun} and similar branching structures are observed in superconductivity models and for minimizers of Ginzburg-Landau type functionals, see for instance \cite{chok3,chok,chok1,chok2,con}. Explicit optima are known only in a few (mainly discrete) cases; for this reason, some effort has been put into developing numerical strategies to compute minimizers, for instance in terms of phase-field approximations \cite{OuSan,BCF,BLS}, in the spirit of numerical calibrations \cite{massoubo,BOO}, or exploiting the convex nature of different formulations of some aspects of the problem (which is overall highly nonconvex) \cite{marmass,marmass1,BranRS}. \begin{remark}[(Stability in previous works)]\label{remarkone} The answer to the stability question was previously known for $\alpha \in (1-\sfrac1d,1]$. In this case, a simple argument relies on the fact that the minimal transport energy $W^\alpha(\nu_n,\nu)$ metrizes the weak-$*$-convergence of probability measures $\nu_n\overset{*}{\rightharpoonup} \nu$ (see \cite[Lemma 6.11 and Proposition 6.12]{BCM}). This property is false for $\alpha\leq 1-\sfrac1 d$, as shown in \cite{CDRM1}. The threshold $\alpha=1-\sfrac{1}{d}$ appears also because for $\alpha$ above this value any two probability measures with compact support in $\mathbb{R}^d$ can be connected with finite cost. The same threshold is then recurrent in other results: for instance, above the threshold interior regularity holds (see \cite[Theorem 8.14]{BCM}) and a possible proof is obtained using the stability property.
\end{remark} \subsection{Strategy of the proof} In analogy with previous works \cite{BeCaMo, BCM,CDRM1}, to prove Theorem~\ref{thm:main} we assume by contradiction that $T$ is not optimal, denote by $T_{opt}$ a minimizer, and we construct a better competitor for $T_n$ ($n$ large enough) by ``sewing'' a small portion of the traffic path $T_n$ with a large portion of $T_{opt}$. In the following we shortly describe some of the main ideas and difficulties behind the proof of Theorem~\ref{thm:main}. \subsubsection{Lagrangian description of traffic paths} By means of Smirnov's theorem we decompose the optimal path $T_n$ as a superposition of curves without cancellations. In contrast with previous works, our energy competitor for $T_n$ is not expressed only in Lagrangian terms as a cut and paste of trajectories: we also exploit the full power of the \emph{slicing} operation defined for currents (see \S \ref{s:slicing}). \subsubsection{Cancellations in the Lagrangian description of $T$} A technical difficulty for our construction is related to the fact that, although the limit of the Lagrangian descriptions of $T_n$ provides a Lagrangian description of $T$, the latter could contain cycles and cancellations at the level of currents. This issue did not appear in \cite[Theorem 1.2]{CDRM1} because there the convergence $T_n \stackrel{*}{\rightharpoonup} T$ was not necessary to obtain a cheap connection of the slices. To overcome this and obtain a lower semi-continuity result which keeps track in the limit of those Lagrangian trajectories which have opposite orientations and would therefore cancel at the Eulerian level, we employ some ideas from the theory of currents with coefficients in normed groups (see \S \ref{s:g-currents}).
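The cancellation phenomenon mentioned above can be seen already in the following elementary situation, which we include only for illustration. Let $\gamma_1\in\mathrm{Lip}$ be a non-constant simple curve from $a$ to $b$, let $\gamma_2$ be the same curve with reversed orientation, and set $P:=\delta_{\gamma_1}+\delta_{\gamma_2}$. Then $$\int_{\mathrm{Lip}} I_\gamma \,dP(\gamma)=I_{\gamma_1}+I_{\gamma_2}=0, \qquad\mbox{while}\qquad \int_{\mathrm{Lip}} \mathscr{H}^1({\rm{Im}}\gamma)\,dP(\gamma)=2\,\mathscr{H}^1({\rm{Im}}\gamma_1)>0,$$ so that the Lagrangian description carries positive mass which cancels at the Eulerian level, and $P$ is not a good decomposition of the zero current (cf. Definition \ref{defn:GD} below).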
\subsubsection{Sewing trajectories} Lemma \ref{high_multiplicity} shows that, even though $\mathbb{M}^\alpha$ does not metrize the weak-$*$ convergence of measures for $\alpha$ below the critical threshold (as explained in Remark~\ref{remarkone}), this holds true on the class of atomic measures with uniformly bounded energy (the energy of an atomic measure is defined in \eqref{defn:alpha-mass-meas}). This lemma is applied to the slices of some portions of $T_n$ and $T$ along the boundary of small cubes and it allows us to obtain a cheap connection between $T_n$ and $T$ in the proximity of the boundary. For such an operation we need to exploit the convergence of the slices of $T_n$ to the slices of $T$: for this reason we cannot directly connect the trajectories of $T_n$ to the trajectories of $T_{opt}$. \subsubsection{Comparison with previous strategies} In \cite[Theorem 1.2]{CDRM1} we employed a dimension-reduction argument to cut the trajectories of $T_n$ and glue them with the trajectories of $T_{opt}$. There are three substantial differences in the approach we adopt in the present paper: firstly, in the previous work we guaranteed the smallness of the connection by making it act on a $d-1$ dimensional surface (hence the bound $\alpha>1-\sfrac{1}{d-1}$); secondly, to guarantee the smallness of the connection we required that $\mu^\pm$ were supported on an $\mathscr{H}^1$-null set; lastly, while in \cite[Theorem 1.2]{CDRM1} the connection acted on Lagrangian trajectories, in this paper we need to perform the slicing at the Eulerian level of currents, possibly introducing cancellations in mass.
\section{Notation and preliminaries}\label{s:notation} \subsection{Sets and Measures} We add below a list of frequently used notations: \begin{itemizeb}\leftskip 0.8 cm\labelsep=.3 cm \item[${\bf e_1},\dots,{\bf e_d}$] standard basis of $\mathbb{R}^d$; \item[$B(x,r)$] \emph{open} ball with center $x$ and radius $r$; \item[$\overline A$] closure of the set $A$; \item[$1_E$] characteristic function of a set $E$, taking values $0$ and $1$; \item[${\rm Im}\gamma$] image (or support) of a curve $\gamma$; \item[$|v|$] Euclidean norm of a vector $v\in \mathbb{R}^d$; \item[${\rm dist}(x,A)$] $:=\inf_{y\in A}\{|x-y|\}$, distance between the point $x$ and the set $A$; we also denote ${\rm dist}(A,B):=\inf_{y\in A}\{{\rm dist}(y,B)\}$ and $B(A,\rho):=\{x:\mathrm{dist}(x,A)<\rho\}$; \item[$\mathscr{M}_+(Y)$] set of positive Radon measures on the space $Y$; we use $\mathscr P(Y)$ for the subset of probability measures; \item[$f\mu $] measure associated to a measure $\mu$ and a function $f$, namely $[f\mu](E):=\int_E f \, d\mu$; \item[$\mu\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E$] $:=1_E\mu$, restriction of a measure $\mu$ to a set $E$; \item[$f_\#\,\mu$] push-forward of a measure $\mu$ on $Y$ according to a map $f:Y\to Y'$, that is, the measure on $Y'$ given by $[f_\#\,\mu](E):=\mu(f^{-1}(E))$; \item[$|\mu|$] total variation measure associated to a real- or vector-valued measure $\mu$; we call \emph{positive} and \emph{negative part} of a real-valued measure $\mu$ respectively the measures $\sfrac{1}{2}(|\mu|+\mu)$ and $\sfrac{1}{2}(|\mu|-\mu)$; \item[$\mathrm{supp} (\mu)$] support of $\mu$; we say that $\mu$ is \emph{supported} on $E$ if $|\mu|(Y\setminus E)=0$; we say that two measures $\mu$ and $\nu$ are \emph{mutually singular} if $\mu$ is supported on a set $E$ such that $|\nu|(E)=0$; \item[$\mathbb{M}(\mu)$] $:=|\mu|(Y)$, mass of a measure $\mu$ on a space $Y$; \item[$\mu\leq\nu$] means that $\mu(A)\leq\nu(A)$ for every Borel set $A$; \item[$\delta_x$]
Dirac delta at the point $x$; \item[$\mathscr{H}^k$] $k$-dimensional Hausdorff measure; \item[$L^p(\mu)$] space of $p$-integrable functions w.r.t.\ $\mu$; we also use $L^p(\mu; V)$ for $p$-integrable functions with values in the normed space $V$. \item[$\|\cdot \|_p$] $L^p$-norm; we use $\|\cdot\|_\infty$ also to denote the supremum norm; \item[$\mu_n\stackrel{*}{\rightharpoonup}\mu$] denotes the weak-$*$ convergence of measures, that is $\int f d\mu_n\to\int f d\mu$ for every $f\in C^0_c$. \end{itemizeb} \subsection{Rectifiable sets and currents}\label{ss:currents} We recall here the basic terminology related to $k$-dimensional rectifiable sets and currents. We refer the reader to the introductory presentation given in the standard textbooks \cite{SimonLN}, \cite{KrantzParks} and to the most complete treatise \cite{FedererBOOK}. For the purposes of this paper, we point out that the same terminology was used, and more extensively presented in the context of branched transport, in \cite{CDRM1}. For $k=0,1,\dots,d$, a set \(E\subset \mathbb{R}^d\) is said to be \emph{\(k\)-rectifiable} if it can be covered, up to an \(\mathscr{H}^k\)-negligible set, by countably many $k$-dimensional submanifolds of class \(C^1\). In the sequel we use the following notation: \begin{itemizeb}\leftskip 0.8 cm\labelsep=.3 cm \item[${\rm{Tan}}(E,x)$] tangent $k$-plane to the $k$-rectifiable set $E$ at the point $x$ (defined at $\mathscr{H}^k$-a.e. $x\in E$); \item[$\mathscr{D}^k(\mathbb{R}^d)$] space of smooth and compactly supported differential $k$-forms on $\mathbb{R}^d$. The topology on $\mathscr{D}^k(\mathbb{R}^d)$ is analogous to the topology defined on the space of test functions with respect to which distributions are dual; \item[$\mathscr{D}_k(\mathbb{R}^d)$] space of $k$-dimensional currents in $\mathbb{R}^d$, namely continuous linear functionals on $\mathscr{D}^k(\mathbb{R}^d)$; \item[$\langle T,\omega\rangle$] duality pairing between a $k$-current $T$ and a $k$-form $\omega$.
We use the same symbol for the duality pairing between a $k$-covector and a $k$-vector; \item[$T_n \rightharpoonup T$] weak-$*$ convergence of currents, namely $\langle T_n,\omega\rangle\to \langle T,\omega\rangle$ for every $\omega\in\mathscr{D}^k(\mathbb{R}^d)$; \item[$\partial T$] boundary of $T$, that is the $(k-1)$-dimensional current defined via $\langle\partial T, \phi \rangle := \langle T, d\phi\rangle$ for every $\phi\in \mathscr{D}^{k-1}(\mathbb{R}^d)$; \item[$\|\omega\|$] $:=\sup_{x,\tau}\{\langle \omega(x),\tau\rangle$: $x\in\mathbb{R}^d$, $\tau$ is a unit simple $k$-vector$\}$ is the comass norm of the form $\omega$; \item[$\mathbb{M}(T)$] $:=\sup_{\omega}\{\langle T, \omega\rangle$: $\|\omega\|\le 1\}$ is the mass of the current $T$; \item[$T=\vec{T} |T|$] representation of a current with finite mass (or a vector valued measure)\footnote{Even though currents with finite mass and vector valued measures can be naturally identified, the convergence of currents does not imply in general convergence of vector valued measures. This is the reason for using the two different symbols $\mu_n\stackrel{*}{\rightharpoonup}\mu$ and $T_n\rightharpoonup T$.}, namely $\langle T, \omega\rangle = \int_{\mathbb{R}^d} \langle\omega(x), \vec{T}(x)\rangle d|T|(x) $, where $|T|\in\mathscr{M}_+(\mathbb{R}^d)$ and $\vec{T}$ is a unit $k$-vector field.
In particular $\mathbb{M}(T)=\mathbb{M}(|T|)$; \item[$\mathrm{supp}(T)$] support of $T$ (in the distributional sense); \item[$\mathbf{N}_k(\mathbb{R}^d)$] normal currents, that is currents $T$ such that both $T$ and $\partial T$ have finite mass; \item[$\partial_+T, \partial_-T$] (for $T\in \mathbf{N}_1(\mathbb{R}^d)$) positive and negative part of the (finite) measure $\partial T$; \item[$T \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} A$] restriction of a current $T$ with finite mass to the Borel set $A$, namely $\langle T \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} A, \omega\rangle := \int_{A} \langle\omega(x), \vec{T}(x)\rangle d|T|(x)$; \item[$\mathbb{F}(T)$] flat norm of the current $T$, that is $\mathbb{F}(T):=\inf\{\mathbb{M}(R)+\mathbb{M}(S): T=R+\partial S,\, R \in \mathscr{D}_k(\mathbb{R}^d),\, S \in \mathscr{D}_{k+1}(\mathbb{R}^d) \}$; \item[$\mathbf{R}_{k}(\mathbb{R}^d)$] space of $k$-rectifiable currents, represented as $T=[E,\tau,\theta]$, which means $\langle[E,\tau,\theta],\omega\rangle:= \int_{E} \langle\omega(x), \tau(x)\rangle \, \theta(x) d \mathscr{H}^k(x),$ where $E$ is a $k$-rectifiable set, $\tau(x)$ is a unit, simple $k$-vector field spanning ${\rm{Tan}}(E,x)$ for $\mathscr{H}^k$-a.e. $x\in E$, and $\theta\in L^1_{loc}(\mathscr{H}^k\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E)$; in particular $\mathbb{M}(T)=\int_{E} |\theta(x)| d \mathscr{H}^k(x)$; \item[$\mathbb{M} ^\alpha (T)$] $:=\int_E |\theta(x)|^\alpha d\mathscr{H}^k(x)$ is the $\alpha$-mass of $T$, where $\alpha \in (0, 1]$ and $T=[E,\tau,\theta]$. We set $\mathbb{M} ^\alpha (T)=+\infty$ for every $T\in \mathbf{N}_{k}(\mathbb{R}^d)\setminus \mathbf{R}_{k}(\mathbb{R}^d)$. \end{itemizeb} \begin{remark}[(Flat norm and weak-$*$ convergence)]\label{rmk_andrea} In general $\mathbb{F}(T_n-T)\to 0$ implies that $T_n\rightharpoonup T$. If the $T_n$'s are all supported on a common compact set, and they have equi-bounded masses and masses of the boundaries, then the reverse implication is also true.
This fact can be easily deduced from \cite[Theorem 4.2.17(1)]{FedererBOOK}. \end{remark} \subsection{Traffic paths} Fix $R>0$. From now on, by $X$ we denote the closed ball of radius $R$ in $\mathbb{R}^d$ centered at the origin. Following \cite{Xia} and \cite{BCM}, given two positive measures $\mu^-,\mu^+ \in \mathscr{M}_+(X)$ with the same total variation, we define the set $\textbf{TP}(\mu^-,\mu^+)$ of the \emph{traffic paths} connecting $\mu^-$ to $\mu^+$ as $$\textbf{TP}(\mu^-,\mu^+):=\{T\in\mathbf{N}_1(\mathbb{R}^d): \mathrm{supp}(T)\subset X, \partial T=\mu^+-\mu^-\},$$ and the \emph{minimal transport energy} associated to $\mu^-,\mu^+$ as $$W^{\alpha}(\mu^-,\mu^+):= \inf \{\mathbb{M}^\alpha(T): T \in \textbf{TP} (\mu^- ,\mu^+)\}.$$ Moreover we define the set of \emph{optimal traffic paths} connecting $\mu^-$ to $\mu^+$ by \begin{equation} \label{eqn:otp} \textbf{OTP} (\mu^- ,\mu^+):=\{T \in \textbf{TP} (\mu^- ,\mu^+) : \mathbb{M}^\alpha(T)=W^\alpha(\mu^-,\mu^+) \}. \end{equation} As observed in \cite[Proposition 2.5]{CDRM1}, in order to minimize the $\alpha$-mass among currents with boundary in $X$, it is not restrictive to consider only currents supported in $X$. \subsection{Structure of optimal traffic paths and good decompositions}\ In the class of rectifiable 1-currents, some basic objects are given by the ones associated to Lipschitz simple curves with finite length. The aim of this subsection is to describe the so-called ``superposition principle'' according to which every acyclic normal 1-current can be written as a weighted average of such curves. We denote by $\mathrm{Lip}$ the space of $1$-Lipschitz curves $\gamma: [0,\infty) \to \mathbb{R}^d$ which are eventually constant (and hence of finite length).
For $\gamma\in\mathrm{Lip}$ we denote by $T_0(\gamma)$ and $T_\infty(\gamma)$ the values $$T_0(\gamma):=\sup\{t:\gamma \mbox{ is constant on }[0,t]\} \qquad T_\infty(\gamma):=\inf\{t:\gamma \mbox{ is constant on }[t,\infty)\}.$$ Given $\gamma \in \mathrm{Lip}$, we call $\gamma(\infty):=\lim_{t\to\infty}\gamma(t)$. We say that a curve $\gamma\in\mathrm{Lip}$ is \emph{simple} if $\gamma(s)\neq\gamma(t)$ for every $T_0(\gamma)\leq s<t\leq T_\infty(\gamma)$ such that $\gamma$ is non-constant in the interval $[s,t]$. We associate canonically to each simple curve $\gamma\in \mathrm{Lip}$ the rectifiable $1$-current $I_\gamma:=[{\rm{Im}}\gamma,\sfrac{\gamma'}{|\gamma'|},1]$. It is easy to check that $\mathbb{M}(I_\gamma)=\mathscr{H}^1({\rm{Im}}\gamma)$ and $\partial I_\gamma=\delta_{\gamma(\infty)}-\delta_{\gamma(0)}$; since $\gamma$ is simple, if it is also non-constant, then $\gamma(\infty) \neq \gamma(0)$ and $\mathbb{M}(\partial I_\gamma)=2$. A normal current $T\in \mathbf{N}_1(\mathbb{R}^d)$ is said to be \emph{acyclic} if there exists no non-trivial current $S$ such that $$\partial S=0 \qquad \mbox{and} \qquad \mathbb{M}(T)=\mathbb{M}(T-S)+\mathbb{M}(S).$$ We recall a fundamental result of Smirnov (\cite{Smirnov93}) which establishes that every acyclic normal 1-current can be written as a weighted average of simple Lipschitz curves in the following sense.
\begin{definition}[\bf{(Good decomposition)}]\label{defn:GD} Let $T\in \mathbf{N}_1(\mathbb{R}^d)$ be represented as a vector-valued measure $\vec T |T|$, and let $P \in \mathscr{M}_+(\mathrm{Lip})$ be a finite positive measure, supported on the set of curves with finite length, such that \begin{equation} \label{eqn:buona-dec} T=\int_{\mathrm{Lip}} I_\gamma d P (\gamma), \end{equation} namely for every smooth compactly supported 1-form $\varphi: \mathbb{R}^d \to \mathbb{R}^d$ it holds \begin{equation} \label{eqn:good-dec-operativa} \int_{\mathbb{R}^d} \langle\varphi, \vec T\rangle \,d| T|= \int_{\mathrm{Lip}} \int_{0}^\infty \langle\varphi(\gamma(t)), \gamma'(t)\rangle\, dt \, d P(\gamma). \end{equation} We say that $P$ is a good decomposition of $T$ if $P$ is supported on non-constant, simple curves and satisfies the equalities \begin{equation} \label{eqn:buona-dec-mass-T} \mathbb{M}(T) = \int_{\mathrm{Lip}} \mathbb{M}(I_\gamma) d P(\gamma) = \int_{\mathrm{Lip}} \mathscr{H}^1({\rm{Im}}\gamma) d P(\gamma) \, ; \end{equation} \begin{equation} \label{eqn:buona-dec-mass-boundaryT} \mathbb{M}(\partial T) = \int_{\mathrm{Lip}} \mathbb{M}(\partial I_\gamma) d P(\gamma) = 2 P({\mathrm{Lip}}) \, . \end{equation} \end{definition} It has been shown in \cite[Theorem 10.1]{PaoliniStepanov} that optimal traffic paths $T\in \textbf{OTP}(\mu^-, \mu^+)$ are acyclic, hence they admit such a good decomposition. In the next result, we collect some useful properties of good decompositions, whose proof can be found in {\cite[Proposition 3.6]{CDRM1}}. \begin{theorem}[(Existence and properties of good decompositions){\cite[Theorem 5.1]{PaoliniStepanov1}} and {\cite[Proposition 3.6]{CDRM1}}]\label{t:propr_good_dec} Let $\mu^-, \mu^+ \in \mathscr{M}_+(\mathbb{R}^d)$ and $T \in \textbf{OTP}(\mu^-, \mu^+)$ with finite $\alpha$-mass. Then $T$ is acyclic and there is a Borel finite measure $P$ on $\mathrm{Lip}$ such that $P$ is a good decomposition of $T$.
Moreover, if $P$ is a good decomposition of $T \in \mathbf{N}_1(\mathbb{R}^d)$ as in \eqref{eqn:buona-dec}, the following statements hold: \begin{enumerate} \item The positive and the negative parts of the signed measure $\partial T$ are $\partial_- T = \int_{\mathrm{Lip}}\delta_{\gamma(0)} d P (\gamma)$ and $\partial_+ T = \int_{\mathrm{Lip}}\delta_{\gamma(\infty)} d P (\gamma)$. \item If $T= [E, \tau, \theta]$ is rectifiable, then $|\theta(x)| = P(\{\gamma: x \in {\rm{Im}}\gamma \})$ for $\mathscr{H}^1$-a.e. $x\in E$. \item For every $P' \leq P$ the representation $ T' := \int_{\mathrm{Lip}} I_\gamma dP'( \gamma )$ is a good decomposition of $T'$; moreover, if $T= [E, \tau, \theta]$ is rectifiable, then $T'$ can be written as $T'=[E, \tau, \theta']$ with $|\theta'| \leq \min\{|\theta|, P'(\mathrm{Lip})\}$ and $\theta\cdot\theta'\geq 0$, $\mathscr{H}^1$-a.e. \end{enumerate} \end{theorem} \begin{remark}[(Lagrangian description of the limit)]\label{rmk:limit} Let $T_n \rightharpoonup T $ be a sequence of currents converging weakly-$*$ with uniformly bounded masses and masses of the boundaries and let $P_n$ be good decompositions of $T_n$. Up to a subsequence, $P_n \stackrel{*}{\rightharpoonup} P \in\mathscr{M}_+(\mathrm{Lip})$ (thanks to \eqref{eqn:buona-dec-mass-boundaryT} and to \eqref{eqn:buona-dec-mass-T}, which ensure pre-compactness of the sequence of measures). Then $P$ might fail to be a good decomposition of $T$, but \eqref{eqn:buona-dec} remains valid. Indeed, every smooth compactly supported $1$-form $\omega$ induces a continuous function on curves $\mathrm{Lip} \ni \gamma \to \langle I_\gamma, \omega\rangle$ and we can test both weak-$*$ convergences $T_n \rightharpoonup T $ and $P_n \stackrel{*}{\rightharpoonup} P$ to obtain the equality.
\end{remark} \section{Preliminary results}\label{sec:con} Given a cube $Q\subset\mathbb{R}^d$ whose faces are parallel to the coordinate hyperplanes and $k\in\mathbb{N}$ we denote $$\Lambda(Q,k):=\{Q^\ell\}_{\ell=1}^{2^{kd}}$$ the collection of the $2^{kd}$ cubes obtained dividing each edge of $Q$ into $2^k$ subintervals of equal length. We denote by $$\mathcal{S}(Q,k):=\bigcup_{\ell=1}^{2^{kd}}\partial Q^\ell$$ the $(d-1)$-skeleton of the grid $\Lambda(Q,k)$. Moreover we denote by $\rho Q^\ell$ the cube concentric to $Q^\ell$, with homothety ratio $\rho$. Given two cubes $Q, R$, we define $\mathrm{Lip} (Q, R)$ as the set of curves in $\mathrm{Lip}$ which start in $Q$ and end in $R$, namely $$\mathrm{Lip} (Q, R):= \{ \gamma \in \mathrm{Lip}: \gamma(0) \in Q, \ \gamma(\infty)\in R\}.$$ Given an atomic measure $\mu\in\mathscr{M}_+(X)$ of the form $\mu=\sum_{i\in\mathbb{N}}\theta_i\delta_{x_i}$, we define its $\alpha$-mass \begin{equation} \label{defn:alpha-mass-meas} \mathbb{M}^\alpha(\mu) := \sum_{i\in\mathbb{N}}\theta_i^\alpha. \end{equation} The $\alpha$-mass of a real-valued atomic measure is simply the sum of the $\alpha$-masses of its positive and its negative part (the $\alpha$-mass of a measure is considered to be infinite if the measure is not atomic). If $\mu$ is atomic and supported on a cube $Q_l(x)\subset \mathbb{R}^d$, centred at $x$ and with diameter $l$, the \emph{cone} over $\mu$ with vertex $x$ is defined as the 1-current \begin{equation}\label{defn:cone} x\cone\mu:=\sum_{i\in\mathbb{N}}\theta_iS_i, \end{equation} where $S_i$ is the 1-dimensional current canonically associated to the oriented segment connecting $x$ to $x_i$. It is easy to check that \begin{equation}\label{stimecono} \partial(x\cone\mu)=\mu-\bigg(\sum_{i\in\mathbb{N}}\theta_i\bigg)\delta_x \quad\mbox{ and }\quad \mathbb{M}^\alpha(x\cone\mu)\leq l\cdot\mathbb{M}^\alpha(\mu).
\end{equation} \begin{lemma}[(Existence of a sequence of negligible nested grids)]\label{lemmagriglia} Let $Q\subset\mathbb{R}^d$ be a cube. Let $\{\mu_n\}_{n\in\mathbb{N}}\subset \mathscr{M}_+(Q)$ be a countable family of measures. Then there exists a cube $Q'\supset Q$ such that \begin{equation}\label{griglia} \mu_n(\mathcal{S}(Q',k))=0, \quad \mbox{ for all } (k,n)\in\mathbb{N}^2. \end{equation} \end{lemma} \begin{proof} Denote $\mu:=\sum_{n\in\mathbb{N}}2^{-n}\sfrac{\mu_n}{\mathbb{M}(\mu_n)}$. Let $Q''$ be a cube such that $\mathrm{dist}(Q,\mathbb{R}^d\setminus Q'')\geq 1$ and such that the edge length of $Q''$ is an integer. For every $j=1,\dots,d$ and $k\in\mathbb{N}$ we denote by $H_{j,k}$ the union of $2^k+1$ hyperplanes, orthogonal to ${\bf{e}}_j$, partitioning $Q''$ into $2^k$ slabs of equal volume. Denote also $$L_j:=\bigcup_{k\in\mathbb{N}}H_{j,k}.$$ Since $L_j+r{\bf{e}}_j$ is disjoint from $L_j+s{\bf{e}}_j$ whenever $r-s\in \mathbb{R}\setminus\mathbb{Q}$, then for every $j$ there exists $\rho_j\in[0,1]$ such that $$\mu(L_j+\rho_j{\bf{e}}_j)=0.$$ We conclude that $Q':=Q''+\sum_j\rho_j{\bf{e}}_j$ yields \eqref{griglia}. \end{proof} \subsection{A metrization property for $\mathbb{M}^\alpha$}\label{sec:metrizes} We show that if two sequences $\mu_n,\nu_n$ of atomic measures with equal masses satisfy a uniform bound on the $\alpha$-masses and $\mu_n-\nu_n$ weak-$*$ converges to zero, then the connection cost $W^\alpha(\mu_n,\nu_n)$ converges to zero, for every $\alpha \in (0,1)$ (compare with Remark~\ref{remarkone}, which requires instead $\alpha>1-\sfrac 1d$). \begin{lemma}[(Metrization property for $\mathbb{M}^\alpha$)]\label{high_multiplicity} Let $Q\subset\mathbb{R}^d$ be a cube and $C>0$. Let $\mu_n,\nu_n\in\mathscr{M}_+(Q)$ be atomic measures such that\footnote{We remind the reader that the symbol $\rightharpoonup$ denotes the weak-$*$ convergence of $0$-currents.
Under the assumptions of the lemma, this is equivalent to the weak-$*$ convergence of the associated real-valued measures.} $\mu_n- \nu_n\rightharpoonup 0$ and for all $n\in\mathbb{N}$ $$\mathbb{M}(\mu_n)=\mathbb{M}(\nu_n), \quad \mathbb{M}^\alpha(\mu_n)+\mathbb{M}^\alpha(\nu_n)\leq C.$$ Then $\lim_{n \to \infty} W^\alpha(\mu_n,\nu_n)= 0$. \end{lemma} \begin{proof} By Lemma \ref{lemmagriglia} we can assume that, up to enlarging the cube $Q$, \begin{equation}\label{griglia2} \mu_n(\mathcal{S}(Q,k))=\nu_n(\mathcal{S}(Q,k))=0, \quad \mbox{ for all } (k,n)\in\mathbb{N}^2. \end{equation} Now fix $k\in\mathbb{N}$ and $\gamma>0$; let $\{Q^\ell\}_{\ell=1,\dots,2^{kd}}$ be the cubes in $\Lambda(Q,k)$. Denote by $\sigma_n$ the real-valued measure $$\sigma_n:=\sum_{\ell=1}^{2^{kd}}\theta_\ell\delta_{x_\ell}\quad \mbox{ where $x_\ell$ is the barycenter of $Q^\ell$ and } \theta_\ell:=\nu_n(Q^\ell)-\mu_n(Q^\ell).$$ By \eqref{griglia2}, the assumption $\mu_n - \nu_n\rightharpoonup 0$ yields \begin{equation}\label{massaalfacono} \mathbb{M}^\alpha(\sigma_n) =\sum_{\ell=1}^{2^{kd}}|\nu_n(Q^\ell)-\mu_n(Q^\ell)|^\alpha\leq \gamma, \quad\mbox{ for $n$ sufficiently large}. \end{equation} For every $\ell=1,\dots,2^{kd}$, we consider the cone over $(\mu_n-\nu_n)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} Q^\ell$ of vertex $x_\ell$ as in \eqref{defn:cone} $$C^\ell:=x_\ell\cone \big((\mu_n-\nu_n)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} Q^\ell\big).$$ Its boundary is given by $(\mu_n-\nu_n)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} Q^\ell + \sigma_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} Q^\ell$.
Denoting by $l$ the diameter of $Q$ and $C_1:=\sum_{\ell=1}^{2^{kd}}C^\ell$, we have \begin{equation}\label{cono1} \begin{split} \mathbb{M}^\alpha(C_1)&\leq\sum_{\ell=1}^{2^{kd}}\mathbb{M}^\alpha(C^\ell)\overset{\eqref{stimecono}}{\leq} 2^{-k}l\sum_{\ell=1}^{2^{kd}}(\mathbb{M}^\alpha(\mu_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} Q^\ell)+\mathbb{M}^\alpha(\nu_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} Q^\ell))\\ &\overset{\eqref{griglia2}}{\leq} 2^{-k}l(\mathbb{M}^\alpha(\mu_n)+\mathbb{M}^\alpha(\nu_n))\leq 2^{-k+1}lC, \end{split} \end{equation} and \begin{equation}\label{cono2} \partial C_1=\mu_n-\nu_n + \sigma_n. \end{equation} Denote also by $x$ the center of $Q$ and $C_2:= x\cone \sigma_n$. Again by \eqref{stimecono} and \eqref{massaalfacono}, since $\sum_{\ell=1}^{2^{kd}}\theta_\ell=0$ we have \begin{equation}\label{cono3} \partial C_2=\sigma_n \quad\mbox{ and }\quad \mathbb{M}^\alpha(C_2)\leq l\cdot \gamma. \end{equation} Combining \eqref{cono1}, \eqref{cono2}, and \eqref{cono3}, we deduce that $$\partial(C_1-C_2)=\mu_n-\nu_n, \quad \mbox{ and } \quad \mathbb{M}^\alpha(C_1-C_2)\leq l(2^{-k+1}C+\gamma).$$ The conclusion follows from the arbitrariness of $k$ and $\gamma$. \end{proof} \subsection{Slicing}\label{s:slicing} A fundamental tool for the proof of Theorem \ref{thm:main} is the notion of slicing of rectifiable 1-currents. Here we recall the definition and some fundamental properties. We refer the reader to \cite[Section 28]{SimonLN} for further details\footnote{Like many classical references, \cite{SimonLN} considers only rectifiable currents with integer multiplicities. It is easy to check that every statement we refer to is valid also in the case of real multiplicities.}. \begin{definition}[(Slicing of 1-rectifiable currents)] Let $T=[E,\tau,\theta] \in \mathbf{R}_1(\mathbb{R}^d)$ and let $f:\mathbb{R}^d \to \mathbb{R}$ be a Lipschitz function. For a.e.
$t\in\mathbb{R}$ we define the slice of $T$ according to $f$ at $t$ to be the 0-rectifiable current $$\langle T, f, t \rangle=[E_t,\tau_t,\theta_t],$$ where: \begin{itemize} \item $E_t=E\cap f^{-1}(t)$ and it is at most countable (hence 0-rectifiable) for a.e. $t$; \item $\tau_t(x)=1$ if the scalar product $\nabla_Ef(x)\cdot\tau(x)$ is positive (where $\nabla_Ef$ denotes the tangential gradient); $\tau_t(x)=-1$ otherwise; \item $\theta_t=1_{E_t}\theta$. \end{itemize} \end{definition} We will use the following characterization of the slices (see \cite[Lemma 28.5(2)]{SimonLN}). Let $T$ and $f$ be as above. Then \begin{equation}\label{e:def_slicing} \langle T, f, t \rangle= \partial (T \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \{f < t\})-(\partial T)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \{f < t\}, \end{equation} for a.e. $t \in (0,+\infty)$. We conclude this short review with a simple consequence of the Coarea formula for rectifiable sets (see \cite[Lemma 28.5(1)]{SimonLN}). Let $T$ and $f$ be as above; then \begin{equation}\label{e:massa_slices} \int_a^b\mathbb{M} (\langle T, f, t \rangle) dt\leq {\rm Lip}(f)\mathbb{M} (T\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \{a<f<b\}). \end{equation} In the following, we choose $f:=d_x$, where $d_x(z):=\|z-x\|_\infty$.
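To fix ideas, we record an elementary example (a sketch, not needed in the sequel). Let $T=[E,\tau,\theta]$ with $E=[0,1]\times\{0\}^{d-1}$, $\tau\equiv{\bf{e}}_1$, $\theta\equiv 1$, and let $f=d_0=\|\cdot\|_\infty$, so that ${\rm Lip}(f)=1$ and $\partial T=\delta_{{\bf{e}}_1}-\delta_0$. For every $t\in(0,1)$ we have $E_t=\{t{\bf{e}}_1\}$ and $\nabla_Ef\cdot\tau=1$, hence $$\langle T, d_0, t \rangle=\delta_{t{\bf{e}}_1},$$ in accordance with \eqref{e:def_slicing}, since $\partial (T \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \{d_0 < t\})=\delta_{t{\bf{e}}_1}-\delta_0$ and $(\partial T)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \{d_0 < t\}=-\delta_0$. Moreover $\int_0^1\mathbb{M}(\langle T, d_0, t \rangle)\, dt=1=\mathbb{M}(T)$, consistently with \eqref{e:massa_slices}.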
\begin{lemma}[(Estimate of $\mathbb{M}^\alpha$ of suitable slices)]\label{slice} Let $x,y \in \mathbb{R}^d$, $r_0>0, \eta_0 \in (1,2)$, $\{T_n=[E_n,\tau_n,\theta_n]\}_{n\in \mathbb{N}} \subset \mathbf{R}_1(\mathbb{R}^d)$ with $\mathbb{M}^\alpha(T_n)\leq C.$ Then there exists a set of positive measure $E \subseteq [r_0, \eta_0r_0]$ such that for every $r\in E$ there exist infinitely many $n\in \mathbb{N}$ satisfying \begin{equation}\label{eqn:slicing_control} \mathbb{M}^\alpha(\langle T_n, d_{x}, r \rangle )+\mathbb{M}^\alpha(\langle T_n, d_{y}, r \rangle ) \leq 4\frac{ \mathbb{M}^\alpha(T_n)}{(\eta_0-1)r_0}. \end{equation} \end{lemma} \begin{proof} For every $n\in \mathbb{N}$ we define the set $$F_n:=\Big\{r\in (r_0, \eta_0r_0): \mathbb{M}^\alpha(\langle T_n, d_{x}, r \rangle)+\mathbb{M}^\alpha(\langle T_n, d_{y}, r \rangle) \leq \frac{4 \mathbb{M}^\alpha(T_n)}{(\eta_0-1)r_0} \Big\}. $$ We apply Chebyshev's inequality and \eqref{e:massa_slices} to the $1$-rectifiable current $\tilde T_n =[E_n,\tau_n,\theta_n^\alpha]$ to obtain $$\mathscr{H}^1((r_0, \eta_0r_0)\setminus F_n) \cdot \frac{4 \mathbb{M}^\alpha(T_n)}{(\eta_0-1)r_0}\leq \int_{r_0}^{\eta_0r_0} \mathbb{M}^\alpha(\langle T_n, d_{x}, r \rangle)+\mathbb{M}^\alpha(\langle T_n, d_{y}, r \rangle) dr$$ $$= \int_{r_0}^{\eta_0r_0} \mathbb{M}(\langle \tilde T_n, d_{x}, r \rangle)+\mathbb{M}(\langle \tilde T_n, d_{y}, r \rangle) dr \leq 2\mathbb{M} (\tilde T_n) = 2\mathbb{M}^\alpha(T_n).$$ We deduce that $\mathscr{H}^1(F_n) \geq (\eta_0-1)r_0/2$. By Fatou's lemma (applied to the functions $1-1_{F_n}$) $$\frac{(\eta_0-1)r_0}{2}\leq \limsup_{n\to \infty} \int_{r_0}^{\eta_0r_0} 1_{F_n}(r) \, dr \leq \int_{r_0}^{\eta_0r_0} \limsup_{n\to \infty} 1_{F_n}(r) \, dr ,$$ hence there exists a set of positive measure of radii where $ \limsup_{n\to \infty} 1_{F_n}(r) =1$. Any $r$ in this set satisfies \eqref{eqn:slicing_control} (for a possibly $r$-dependent family of indices $n$).
\end{proof} \subsection{Improved lower semi-continuity} Given $\{x_1, \dots, x_N\}\subset\mathbb{R}^d$ we consider a sequence of sets $\{G_k\}_{k\in\mathbb{N}}$ with the following property. For every $k$ there are closed disjoint cubes $Q^k_1,\dots,Q^k_N$ of diameters $\rho^k_1, \dots, \rho^k_N$ such that $\rho^k_j\to 0$ as $k\to \infty$ for every $j=1,\dots,N$, $x_j\in Q^k_j$ for $j=1,\dots,N$, and moreover $Q^k_j\supset Q^h_j$, for every $h>k$, for every $j=1,\dots,N$. Define \begin{equation}\label{e:gicappa} G_k=\mathbb{R}^d\setminus\bigcup_{j=1}^NQ^k_j. \end{equation} \begin{lemma}\label{l:andrea} Let $\{G_k\}_{k \in \mathbb{N}}$ be as in \eqref{e:gicappa} and let $\{T_n\}_{n \in \mathbb{N}} \subset \mathbf{R}_{1}(\mathbb{R}^d)$ and $T \in \mathbf{R}_{1}(\mathbb{R}^d)$ such that \begin{equation}\label{e:andrea1} \lim_{n \to +\infty} \mathbb{F} (T_n- T) = 0. \end{equation} Then there exists a subsequence $\{T_{n_k}\}$ and a sequence of open sets $G'_k\subset G_k$ such that \begin{equation}\label{e:andrea2} \lim_{k \to +\infty}\mathbb{F} (T_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} G'_k-T)=0. \end{equation} \end{lemma} \begin{proof} For every $n\in\mathbb{N}$, let $\varepsilon_n:=\mathbb{F}(T_n-T)$. By assumption $\varepsilon_n\to 0$ as $n\to +\infty$. For every $k\in \mathbb{N}$, let $\rho_k>0$ be such that $\rho_k\to 0$ as $k\to\infty$ and ${\rm dist}(Q^k_i,Q^k_j)\geq 2\rho_k$, for every $1\leq i< j\leq N$. By definition of flat distance, for every $n\in\mathbb{N}$ there exist $R_n,S_n$ such that $T_n-T=R_n+\partial S_n$ and $\mathbb{M} (R_n)+\mathbb{M} (S_n)\leq 2\varepsilon_n$. Choose $n_k$ such that $2\varepsilon_{n_k}\leq \rho_k^2$. By \eqref{e:massa_slices}, for every $k$ and for every $j=1,\dots,N$, there exists $0<r^k_j<\rho_k$ such that, denoting $d^k_j(x):={\rm dist}(x, Q^k_j)$, we have \begin{equation}\label{e:slic1} \sum_{j=1}^N\mathbb{M}(\langle S_{n_k}, d^k_j, r^k_j\rangle)\leq 2\rho_k^{-1}\varepsilon_{n_k}\leq\rho_k.
\end{equation} Denote $G'_k:=\mathbb{R}^d\setminus \cup_{j=1}^N \overline{B(Q^k_j, r^k_j)}$. Obviously $G'_k\subset G_k$ for every $k$. Moreover, since $$T_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} G'_k-T=T_{n_k}-T - T_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k),$$ in order to prove \eqref{e:andrea2} it is sufficient to prove that $$\lim_{k\to\infty}\mathbb{F}(T_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k))=0.$$ Observe first that $\mathbb{M}(T\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k))\leq \mathbb{M}(T\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\cup_{j=1}^N \overline{B(Q^k_j, \rho_k)}))\to 0$, as $k\to\infty$, because $\cup_{j=1}^N \overline{B(Q^k_j, \rho_k)}$ converges monotonically to the $\mathscr{H}^1$-null set $\{x_1, \dots, x_N\}$, hence $$\lim_{k\to\infty}\mathbb{F}(T\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k))=0.$$ Therefore, it suffices to show that $$\lim_{k\to\infty}\mathbb{F}((T_{n_k}-T)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k))=0.$$ We can write, denoting $\langle S_{n_k},\partial G'_k\rangle:=\sum_{j=1}^N\langle S_{n_k}, d^k_j, r^k_j\rangle$, \begin{equation} \begin{split} (T_{n_k}-T)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)&= R_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)+\partial S_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)\\ &\stackrel{\eqref{e:def_slicing}}{=} R_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)+\partial (S_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k))+\langle S_{n_k},\partial G'_k\rangle.
\end{split} \end{equation} Hence, denoting $R'_k:=R_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)+\langle S_{n_k},\partial G'_k\rangle$ and $S'_k:=S_{n_k}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)$, we have $(T_{n_k}-T)\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\mathbb{R}^d\setminus G'_k)=R'_k+\partial S'_k$, and, by \eqref{e:slic1} and the choice of $R_{n_k}$ and $S_{n_k}$, $\mathbb{M}(R'_k)+\mathbb{M}(S'_k)\leq\rho_k+2\varepsilon_{n_k}$, which tends to 0 as $k\to\infty$. \end{proof} We improve \cite[Lemma 4.10]{CDRM1} as follows: \begin{lemma}[(Semi-continuity with lower bound on the density)]\label{second} Let $T \in \mathbf{R}_{1}(\mathbb{R}^d)$. For every $\Delta>0$, there exists $\delta_{T,\Delta}>0$ satisfying the following property. Let $\{G_k\}_{k \in \mathbb{N}}$ be as in \eqref{e:gicappa} and let $\{T_n\}_{n \in \mathbb{N}} \subset \mathbf{R}_{1}(\mathbb{R}^d)$ such that $T_n=[E_n,\tau_n,\theta_n]$ and \begin{equation}\label{ass1} \mathbb M^\alpha(T_n) + \mathbb{M}^\alpha(T) \leq C, \quad \text{and} \quad \lim_{n \to +\infty} \mathbb{F} (T_n- T) = 0. \end{equation} Then there exists $\bar k \in \mathbb{N}$ such that for any $k \geq \bar k$ and for infinitely many $n$ (possibly depending on $k$) \begin{equation}\label{concl} \mathbb{M}^\alpha\Big (T_{n}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \Big (G_{k} \cap \Big \{|\theta_{n}|>\Big(\frac{\delta_{T,\Delta}}{2C}\Big)^\frac{1}{1-\alpha}\Big\}\Big)\Big ) \geq \mathbb{M}^\alpha(T)-\Delta. \end{equation} \end{lemma} \begin{proof} Given $\Delta>0$, let $\delta_{T,\Delta}>0$ be such that, by the lower semi-continuity of $\mathbb{M}^\alpha$ with respect to the flat convergence (as stated in \cite[Proposition 2.5]{flat-relax}), \begin{equation}\label{semic} \mathbb{F} (T- T') \leq \delta_{T,\Delta} \quad \Rightarrow \quad \mathbb{M}^\alpha(T) \leq \mathbb{M}^\alpha(T')+\frac{\Delta}{2}.
\end{equation} Let us denote $\varepsilon=(\sfrac{\delta_{T,\Delta}}{2C})^\frac{1}{1-\alpha}$. By contradiction, there exist increasing sequences $k_i$ and $m_i$ such that \begin{equation}\label{concl1} \mathbb{M}^\alpha(T_{n}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_{k_i} \cap \{|\theta_{n}|> \varepsilon\})) < \mathbb{M}^\alpha(T)-\Delta, \quad \forall n\geq m_i, \, \, \forall i \in \mathbb{N}. \end{equation} By Lemma \ref{l:andrea}, there exists a subsequence $\{T_{n_i}\}_{i \in \mathbb{N}}\subset \{T_{m_i}\}_{i \in \mathbb{N}}$ and a sequence of open sets $G'_{k_i}\subset G_{k_i}$ such that \begin{equation}\label{ass2} \mathbb{F} (T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} G_{k_i}' - T) \leq \frac{\delta_{T,\Delta}}{2}, \qquad \forall i \in \mathbb{N}. \end{equation} Moreover, since $m_i$ is an increasing sequence, we deduce that $n_i \geq m_i$. By \eqref{ass1} it holds \begin{equation} \label{eqn:mass-to-0} \mathbb M(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i}\cap \{|\theta_{n_i}|\leq \varepsilon \}))<\varepsilon^{1-\alpha}\mathbb M^\alpha(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|\leq \varepsilon\}))<C\varepsilon^{1-\alpha}. 
\end{equation} Hence, by \eqref{ass2} and \eqref{eqn:mass-to-0}, we compute \begin{equation}\label{contr} \begin{split} \mathbb{F}(T-T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|> \varepsilon\})) &\leq\mathbb{F}(T-T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} G'_{k_i})+\mathbb{F}(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} G'_{k_i} - T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|> \varepsilon\}))\\ & =\mathbb{F}(T-T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} G'_{k_i})+\mathbb{F}(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|\leq \varepsilon\})) \\ & \overset{\eqref{ass2}}{\leq} \frac{\delta_{T,\Delta}}{2}+ \mathbb M(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|\leq \varepsilon\}))\overset{\eqref{eqn:mass-to-0}}{\leq} \frac{\delta_{T,\Delta}}{2}+ C \varepsilon^{1-\alpha} \\ &\leq \frac{\delta_{T,\Delta}}{2}+ \frac{\delta_{T,\Delta}}{2}=\delta_{T,\Delta}. \end{split} \end{equation} Combining \eqref{contr}, \eqref{concl1} and \eqref{semic}, for every $i \in \mathbb{N}$, we deduce the desired contradiction $$\mathbb M^\alpha(T) \overset{\eqref{semic}}{\leq} \mathbb M^\alpha(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|> \varepsilon\})) + \frac{\Delta}{2}\leq \mathbb M^\alpha(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_{k_i} \cap \{|\theta_{n_i}|> \varepsilon\})) + \frac{\Delta}{2} \overset{\eqref{concl1}}{<} \mathbb M^\alpha(T) - \frac{\Delta}{2}.$$ \end{proof} \subsection{Currents with coefficients in $\mathbb{R}^M$}\label{s:g-currents} A technical difficulty in the proof of Theorem \ref{thm:main} comes from the fact that the limit of a sequence of good decompositions (as in Definition \ref{defn:GD}) is not necessarily a good decomposition. 
More precisely, we need a lower semi-continuity type result which, heuristically, keeps track in the limit of those Lagrangian trajectories which have opposite orientations and would therefore cancel as classical currents. To this aim we require notions from the theory of currents with coefficients in groups. In particular we work in the normed group $G:=(\mathbb{R}^M, \|\cdot \|_{1})$ and we obtain in Lemma \ref{lemmasem} a statement stronger than the usual lower semi-continuity of the $\alpha$-mass. For the purposes of this paper it is sufficient to regard a current $T$ on $\mathbb{R}^d$ with coefficients in $\mathbb{R}^M$ as an ordered $M$-tuple of classical currents on $\mathbb{R}^d$ (i.e. with real coefficients), henceforth called the \emph{components} of $T$, and denoted $T^1,\dots, T^M$. In particular one can represent a rectifiable 1-current $T$ on $\mathbb{R}^d$ with coefficients in $\mathbb{R}^M$ as a triple $[E,\tau,\Theta]$, where $E$ is a 1-rectifiable set on $\mathbb{R}^d$, $\tau$ is an orientation of $E$ and $\Theta=(\theta_1,\dots,\theta_M):E\to\mathbb{R}^M$, with $\Theta\in L^1(\mathscr{H}^1\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E;\mathbb{R}^M)$. The components of $T$ are the classical 1-rectifiable currents $T^j:=[E,\tau,\theta_j]$, for $j=1,\dots,M$. The space of 1-rectifiable currents on $\mathbb{R}^d$ with coefficients in $\mathbb{R}^M$ is denoted ${\bf R}_1^{\mathbb{R}^M}(\mathbb{R}^d)$. We refer the reader to \cite[Section 4]{MMST} for a more rigorous introduction.
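As a simple illustration of the role of the coefficients (a sketch, not needed in the sequel), take $M=2$ and let $E=[0,1]\times\{0\}^{d-1}$ with orientation $\tau\equiv{\bf{e}}_1$. Consider the classical currents $T^1:=[E,\tau,1]$ and $T^2:=[E,\tau,-1]$: as classical currents they cancel, namely $T^1+T^2=0$. The current with coefficients in $\mathbb{R}^2$ whose components are $T^1$ and $T^2$, namely $S=[E,\tau,(1,-1)]$, is instead nonzero, since $$\int_E\|(1,-1)\|_1\, d\mathscr{H}^1=2>0,$$ so the two trajectories with opposite orientations are still recorded.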
For every $\alpha\in (0,1)$ and for $T=[E,\tau,\Theta]\in{\bf R}_1^{\mathbb{R}^M}(\mathbb{R}^d)$ we define the quantity $$\mathbb{M}_{\mathbb{R}^M}^\alpha(T):=\int_{E}\|\Theta\|_{1}^\alpha d\mathscr{H}^1.$$ By \cite[Section 6]{White1999} this quantity is lower semi-continuous with respect to the standard notion of convergence in flat norm for currents with coefficients in groups, which by \cite[Section 4.6]{MMST} is equivalent to the joint convergence in flat norm of all components. \begin{lemma}[(Lower semi-continuity without cancellations)]\label{lemmasem} For every $n \in \mathbb{N}$, let $\{T_n^\ell\}_{\ell=1}^M$, $\{T^\ell\}_{\ell=1}^M \subset \mathbf{R}_1(\mathbb{R}^d)$ with $T_n^\ell=[E_{n,\ell}, \tau_{n,\ell}, \theta_{n,\ell}]$ and $T^\ell=[E_\ell, \tau_\ell, \theta_\ell]$. We assume that \begin{equation}\label{gruppo0} \lim_{n \to +\infty}\mathbb{F}(T_n^\ell-T^\ell)= 0, \quad \forall \ell=1, \dots, M, \end{equation} and \begin{equation}\label{gruppo1} \mathbb{M}(T_n)=\sum_{\ell=1}^M\mathbb{M}(T_n^\ell), \quad \mbox{ where } \quad T_n:=\sum_{\ell=1}^MT_n^\ell. \end{equation} We denote $E=\cup_{\ell=1}^ME_\ell$ and $\theta: x \in E \mapsto \sum_{\ell=1}^M|\theta_\ell(x)|$. Then \begin{equation}\label{gruppo2} \int_E\theta^\alpha d\mathscr{H}^1 \leq \liminf_{n \to \infty}\mathbb{M}^\alpha(T_n). \end{equation} \end{lemma} \begin{proof} We first observe that by \eqref{gruppo1}, for every $n \in \mathbb{N}$, there exists a unit vector field $\tau_n$ on $E_n:=\cup_{\ell=1}^ME_{n,\ell}$ such that \begin{equation}\label{gruppo3} T_n=[E_{n}, \tau_{n}, \theta_{n}],\quad \mbox{ where $\theta_n:=\sum_{\ell=1}^M|\theta_{n,\ell}|$.} \end{equation} For every $\ell=1,\dots, M$, we can associate to the classical current $T^\ell_n$ the current $S^\ell_n=[E_{n,\ell}, \tau_{n,\ell}, \theta_{n,\ell}{\bf{e}}_\ell]\in {\bf R}_1^{\mathbb{R}^M}(\mathbb{R}^d)$.
Analogously we associate to the current $T^\ell$ the currents $S^\ell=[E_{\ell}, \tau_{\ell}, \theta_{\ell}{\bf{e}}_\ell]$. We define $S_n:=\sum_{\ell=1}^MS^\ell_n$ and $S:=\sum_{\ell=1}^MS^\ell$. In other words $S_n$ is the current with coefficients in $\mathbb{R}^M$ whose components are $T_n^1,\dots, T_n^M$, while $S$ has components $T^1,\dots, T^M$. By \eqref{gruppo1}, we can compute \begin{equation}\label{gruppo5} \mathbb{M}^\alpha_{\mathbb{R}^M}(S_n)=\int_{E_n}\bigg(\sum_{\ell=1}^M|\theta_{n,\ell}|\bigg)^\alpha d \mathscr{H}^1\overset{\eqref{gruppo3}}{=}\mathbb{M}^\alpha(T_n). \end{equation} By the lower semi-continuity of $\mathbb{M}^\alpha_{\mathbb{R}^M}$, (see \cite[Section 6]{White1999}), we deduce that $$\int_E\theta^\alpha d\mathscr{H}^1 = \int_E\|(\theta_1,\dots,\theta_M)\|_1^\alpha d\mathscr{H}^1=\mathbb{M}^\alpha_{\mathbb{R}^M}(S)\stackrel{\eqref{gruppo0}}{\leq} \liminf_{n \to \infty}\mathbb{M}^\alpha_{\mathbb{R}^M}(S_n)\overset{\eqref{gruppo5}}{=}\liminf_{n \to \infty}\mathbb{M}^\alpha(T_n).\qedhere$$ \end{proof} \section{Proof of Theorem \ref{thm:main}}\label{pro} Up to a simple scaling argument (detailed at the beginning of the proof of \cite[Theorem 1.2]{CDRM1}), we can assume without loss of generality that $\mathbb{M}(\mu_n^\pm)=\mathbb{M}(\mu^\pm)=1$. By contradiction, we assume that there exists a (non-relabelled) subsequence $\{T_n\}_{n \in \mathbb{N}}$ and a traffic path $T \in \textbf{TP}(\mu^-,\mu^+)$ such that $\mathbb{F} (T_n -T) \to 0$ and $T$ is not optimal. We consider $T_{opt}\in \textbf{OTP}(\mu^-,\mu^+)$ and denote \begin{equation}\label{gap} \Delta:=\mathbb{M}^\alpha(T)-\mathbb{M}^\alpha(T_{opt})>0. 
\end{equation} Let $\delta_{\Delta/4}>0$ be defined as in Lemma \ref{second} with respect to $\Delta/4$ and $T$; denote \begin{equation}\label{e:definCmod} C:=\sup_{n\in \mathbb{N}}\mathbb{M}^\alpha(T_n) \end{equation} and fix \begin{equation}\label{eps} \varepsilon:=\min\left\{\frac{\Delta}{16},\left(\frac{\Delta}{8C}\right)^\frac{2}{\alpha}, \left(\frac{\delta_{\Delta/4}}{2C}\right)^\frac{2}{1-\alpha} \right\}. \end{equation} \bigskip {\it Step 1: Partitioning Smirnov curves of $T_n$ according to their initial and final points.} Since $T_n$ are optimal traffic paths, by Theorem~\ref{t:propr_good_dec} we can find for every $n \in \mathbb{N}$ a good decomposition (see Definition~\ref{defn:GD}) $$T_n=\int_{\mathrm{Lip}}I_\gamma dP_n(\gamma), \quad \mbox{ with } P_n \in \mathscr{P}(\mathrm{Lip}), \mbox{ supported on curves parametrized by arc length}.$$ Applying Lemma \ref{lemmagriglia}, we can find a cube $Q$ containing $X$ such that \begin{equation} \label{eqn:grid-no-meas} \mu^\pm(\mathcal{S}(Q,k))=\mu^\pm_n(\mathcal{S}(Q,k))=0, \quad \mbox{ for all } (k,n)\in\mathbb{N}^2. \end{equation} Without loss of generality we will assume that the edge length of $Q$ is 2, so that for every $Q^i\in\Lambda(Q,k)$ the distance between the center of $Q^i$ and $\partial Q^i$ is $2^{-k}$. For every $k \in \mathbb{N}$, we consider $\Lambda(Q,k):=\{Q^\ell\}_{\ell=1}^{2^{kd}}$. Moreover, denoting $J_k:=\{1,\dots,2^{kd}\}^2$, for every $n \in \mathbb{N}$ and every $(i,j)\in J_k$ we define \begin{equation} \label{eqn:Pn} T_n^{ij}:=\int_{\mathrm{Lip}(Q^i,Q^j)} I_\gamma d P_n(\gamma), \end{equation} which represents the portion of $T_n$ associated to the paths which begin in $Q^i$ and end in $Q^j$. Notice that $T_n^{ij}$ depends implicitly on $k$; we will not make this dependence explicit in the proof, apart from Steps 8 and 9, where the dependence on $k$ is more relevant for the construction. By Theorem \ref{t:propr_good_dec}(3), we observe that \eqref{eqn:Pn} is a good decomposition.
In particular, for every $n\in \mathbb{N}$, denoting $T_n=[E_n,\tau_n,\theta_n]$, we have that $T_n^{ij}$ can be represented as $T_n^{ij}=[E_n,\tau_n,\theta_n^{ij}]$, with $\theta_n^{ij}(x)\cdot\theta_n(x)\geq 0$ and \begin{equation} \label{eqn:theta-n-ij} |\theta_n^{ij}(x)| \leq \min\{|\theta_n(x)|, P_n(\mathrm{Lip}(Q^i,Q^j))\}, \qquad \mbox{for $\mathscr{H}^1$-a.e.\ $x\in E_n$}. \end{equation} \bigskip {\it Step 2: Lagrangian description of $T$ and partition of the associated trajectories.} By Theorem~\ref{t:propr_good_dec}(2), $|\theta_n|\leq 1$ for $\mathscr{H}^1$-a.e. $x$. By \eqref{hp:energy-bound}, since $T_n$ are optimal, we deduce the following tightness condition for $P_n$: \begin{equation}\label{e:definC} \sup_{n\in \mathbb{N}}\int_{\mathrm{Lip}}{\text{length}}(\gamma) dP_n(\gamma)\overset{\eqref{eqn:buona-dec-mass-T}}{=}\sup_{n\in \mathbb{N}}\mathbb{M}(T_n)\overset{|\theta_n|\leq 1}{\leq} C <\infty. \end{equation} By \cite[Theorem 3.28]{BCM}, up to a further (non-relabelled) subsequence, $P_n \stackrel{*}{\rightharpoonup} P \in\mathscr{P}(\mathrm{Lip}) $. By \cite[Lemma 3.21]{BCM} $P$ is supported on eventually constant curves, and by Remark~\ref{rmk:limit} \begin{equation}\label{rep1} T=\int_{\mathrm{Lip}}I_\gamma dP(\gamma). \end{equation} Notice that in general \eqref{rep1} could fail to be a good decomposition of $T$ in the sense of Definition~\ref{defn:GD}. Analogously to \eqref{eqn:Pn}, one can define the portion of $T$ associated to the paths which begin in $Q^i$ and end in $Q^j$, as $$ T^{ij}:=\int_{\mathrm{Lip}(Q^i,Q^j)} I_\gamma d P(\gamma).$$ Again we recall that the latter may fail to be a good decomposition.
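The following elementary example (a sketch, not needed in the rest of the proof) shows how a decomposition of the form \eqref{rep1} can fail to be good. In $d=1$, let $\gamma^+$ (resp. $\gamma^-$) parametrize the segment $[0,1]$ from $0$ to $1$ (resp. from $1$ to $0$) and set $P:=\frac12\delta_{\gamma^+}+\frac12\delta_{\gamma^-}$. Then $$T=\int_{\mathrm{Lip}}I_\gamma\, dP(\gamma)=\frac12 I_{\gamma^+}+\frac12 I_{\gamma^-}=0, \quad\mbox{while}\quad \int_{\mathrm{Lip}}\mathbb{M}(I_\gamma)\, dP(\gamma)=1,$$ so that \eqref{eqn:buona-dec-mass-T} fails: the two families of trajectories have opposite orientations and cancel in $T$. Cancellations of this type are excluded for each $P_n$ by Definition~\ref{defn:GD}, but they may be produced in the weak-$*$ limit $P$.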
By Theorem~\ref{t:propr_good_dec}(1) applied to $T^{ij}_n$ and $P_n$, we deduce that $$\partial_- T^{ij}_n = (e_0)_\# (P_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \mathrm{Lip}(Q^i,Q^j)) \quad \text{ and } \quad \partial_+ T^{ij}_n = (e_\infty)_\# (P_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \mathrm{Lip}(Q^i,Q^j)),$$ where $e_0: \gamma \in \mathrm{Lip} \mapsto \gamma(0)$ and $e_\infty: \gamma \in \mathrm{Lip} \mapsto \gamma(\infty)$. Passing to the limit in $n$, we deduce that \begin{equation} \label{Tij-boundary} \partial_- T^{ij} = \int_{\mathrm{Lip}(Q^i,Q^j)}\delta_{\gamma(0)} d P (\gamma) \qquad \mbox{and} \qquad \partial_+ T^{ij} = \int_{\mathrm{Lip}(Q^i,Q^j)}\delta_{\gamma(\infty)} d P (\gamma). \end{equation} For every $(i,j)\in J_k$ we remark that $T_n^{ij} \rightharpoonup T^{ij}$. Since $\mathbb{M}(T_n^{ij}) \leq 1$ and $\mathbb{M}(\partial T_n^{ij}) \leq 1$, by Remark \ref{rmk_andrea}, we have \begin{equation} \label{eqn:conv-Tij} \lim_{n}\mathbb{F}(T_n^{ij}-T^{ij})=0. \end{equation} Indeed, $P_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \mathrm{Lip}(Q^i,Q^j) \stackrel{*}{\rightharpoonup} P\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \mathrm{Lip}(Q^i,Q^j)$, because they are obtained by localizing the weakly-$*$ converging sequence $P_n \stackrel{*}{\rightharpoonup} P$ to the set $\mathrm{Lip}(Q^i,Q^j)$, whose boundary has zero $P$-measure by \eqref{eqn:grid-no-meas}: \begin{equation*} \begin{split} P \big(\partial \mathrm{Lip}(Q^i,Q^j)\big) &\leq P \big(\{ \gamma\in \mathrm{Lip}: \gamma(0) \in \partial Q^i\}\big) + P \big(\{ \gamma\in \mathrm{Lip}: \gamma(\infty) \in \partial Q^j \}\big) \\ &= \mu^- (\partial Q^i) + \mu^+(\partial Q^j) = 0. \end{split} \end{equation*} \bigskip {\it Step 3: Isolating ``bad'' cubes containing most of the atomic part of $\mu^\pm$.} In the following, given a measure $\nu \in \mathscr{M}_+(X)$, we denote by $\nu_a$ its atomic part, i.e.
the unique measure such that $\nu_a \leq \nu$, $\nu_a$ is supported on a countable set and $(\nu-\nu_a)(\{x\})=0$ for every $x \in X$. Since $\mu^\pm$ are finite measures, there exists $N \in \mathbb{N}$ such that the sum of their atomic parts can be written as \begin{equation} \label{eqn:pocamassaneicubi1} \mu_a^++\mu_a^-=\bigg(\sum_{h=1}^Nc_h\delta_{x_h}\bigg)+\mu_r, \quad \text{with} \quad \mathbb{M}(\mu_r)<\varepsilon, \end{equation} for some $c_1, \dots, c_N >0$ and $N$ distinct points $x_1, \dots, x_N \in X$ (we are implicitly assuming that the two summands in the RHS of \eqref{eqn:pocamassaneicubi1} are mutually singular). We observe that, for every $k \in \mathbb{N}$, the set $\{x_h: h=1,\dots,N\}$ is contained in at most $N$ cubes of $\Lambda(Q,k)$. By \eqref{eqn:grid-no-meas}, and since $\mu^+,\mu^-$ are mutually singular, there exists $k_0$ such that, for every $k \geq k_0$, all these cubes are disjoint (hence their mutual distances are at least the edge length of each cube, i.e. $2^{-k+1}$) and each contains a single Dirac delta. For every $k \in \mathbb{N}$, up to reordering, we denote these cubes by $\{Q^h: h=1,\dots,N\}$. Again, we do not make the dependence of these cubes on $k$ explicit, but we observe that their number $N$ does not depend on $k$. We recall that $\sfrac{5}{4}Q^h$ is the cube concentric with $Q^h$, enlarged by the factor $\sfrac{5}{4}$, so that the cubes $\sfrac{5}{4}Q^h$ remain disjoint; we denote \begin{equation}\label{defB} B_k=\bigcup_{h=1}^N \frac{5}{4}Q^h \qquad \text{ and } \qquad G_k:=B^c_k. \end{equation} Since the sequence $B_k$ decreases monotonically to the finite set $\{x_h: h=1,\dots,N\}$, there exists $k_1\geq k_0$ such that, for every $k \geq k_1$, \begin{equation}\label{cubi cattivi} \mathbb{M}^\alpha(T\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} B_k) = \int_{B_k\cap E} |\theta|^\alpha\, d\mathscr{H}^1 \leq \varepsilon.
\end{equation} \bigskip {\it Step 4: Multiplicity estimate for the pieces of $T_n$ which do not connect bad cubes.} Since $\mu^\pm_d:=\mu^\pm - \mu^\pm_a$ has trivial atomic part, there exists $k_2 \geq k_1$ such that, for every $k\geq k_2$ \begin{equation} \label{eqn:pocamassaneicubi0} \max\{ \mu^+_d(Q^i), \mu^-_d(Q^i)\} <\varepsilon, \qquad \text{for all } Q^i \in \Lambda(Q,k). \end{equation} Then, by \eqref{eqn:pocamassaneicubi1}, for every $k\geq k_2$ $$ \max\{ \mu^+(Q^i), \mu^-(Q^i)\} <2\varepsilon, \qquad \text{for all } Q^i \in \Lambda(Q,k)\setminus \{Q^h: h=1,\dots,N\}. $$ Hence, by \eqref{hp:supp-n-convergence} and \eqref{eqn:grid-no-meas}, for every $k \geq k_2$ there exists $n_0= n_0(k)$ such that, for every $n \geq n_0$ \begin{equation} \label{eqn:pocamassaneicubi} \max\{ \mu^+_n(Q^i), \mu^-_n(Q^i)\} <3\varepsilon, \qquad \text{for all } Q^i \in \Lambda(Q,k)\setminus \{Q^h: h=1,\dots,N\}. \end{equation} Since $\mu^+$ and $\mu^-$ are mutually singular by assumption, and since each cube in $\{Q^h: h=1,\dots,N\}$ contains at most one of the $N$ points $x_1,\dots,x_N$, by \eqref{eqn:pocamassaneicubi1} and \eqref{eqn:pocamassaneicubi0}, for every $k \geq k_2$ \begin{equation} \label{eqn:pocamassaneicubi2} \min\{ \mu^+(Q^i), \mu^-(Q^i)\} <2\varepsilon \qquad \mbox{for all } Q^i \in \Lambda(Q,k). \end{equation} Hence, for every $k \geq k_2$, there exists $n_1=n_1(k)\geq n_0(k)$ such that for every $n \geq n_1$ \begin{equation} \label{eqn:pocamassaneicubi3} \min\{ \mu^+_n(Q^i), \mu^-_n(Q^i)\} <3\varepsilon, \qquad \text{for all } Q^i \in \Lambda(Q,k).
\end{equation} Using Theorem~\ref{t:propr_good_dec} (1,3) applied to $T^{ij}_n$, we deduce from \eqref{eqn:pocamassaneicubi3} that, for every pair of cubes $Q^i,Q^j$ such that either $Q^i$ or $Q^j$ belongs to $ \Lambda(Q,k)\setminus \{Q^h: h=1,\dots,N\}$, for every $k\geq k_2$ and for every $n\geq n_1$, \begin{equation}\label{small density} \begin{split} |\theta_{T^{ij}_n}(x)| \leq P_n(\mathrm{Lip}(Q^i,Q^j)) \leq \min\{ \partial_- T_n(Q^i), \partial_+ T_n(Q^j)\} = \min\{ \mu^-_n(Q^i), \mu^+_n(Q^j)\} \leq 3\varepsilon, \end{split} \end{equation} for $\mathscr{H}^1$-a.e. $x\in E_n$. \bigskip {\em Step 5: Choice of slightly enlarged cubes to have a control on the slices.} In the following we use the short notation $S_n^{ij}(\rho)$ and $S^{ij}(\rho)$ to denote respectively $$\langle T_n^{ij},d_{x_i},\rho\rangle + \langle T_n^{ij},d_{x_j},\rho\rangle \quad\mbox{and}\quad \langle T^{ij},d_{x_i},\rho\rangle + \langle T^{ij},d_{x_j},\rho\rangle,$$ where $x_i$ denotes the center of the cube $Q^i$ and $d_x$ is defined in \S \ref{s:slicing}. For every $k\in\mathbb{N}$, and for a given pair $(i,j)\in J_k$, applying Lemma \ref{slice}, we get that, up to a (non-relabelled) subsequence $\{T_n\}_{n\in\mathbb{N}}$, there exists a set of positive measure of radii $\rho_k^{ij}\in (2^{-k},\tfrac{5}{4}2^{-k})$ such that \begin{equation}\label{stima2-firstversion} \mathbb{M}^\alpha(S_n^{ij}(\rho_k^{ij})) \leq 4 \frac{ \mathbb{M}^\alpha(T_n^{ij})}{2^{-k-2}}\leq 2^{k+4}\mathbb{M}^\alpha(T_n)\stackrel{\eqref{e:definCmod}}{\leq} 2^{k+4}C,\quad \mbox{for every $n\in\mathbb{N}$}, \end{equation} where the second inequality follows from Theorem \ref{t:propr_good_dec} (3).
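For the reader's convenience, we make the constants explicit (this is just arithmetic). Lemma \ref{slice} is applied here with $r_0=2^{-k}$ and $\eta_0=\sfrac{5}{4}$, so that $(\eta_0-1)r_0=2^{-k-2}$ and $$4\frac{\mathbb{M}^\alpha(T_n^{ij})}{(\eta_0-1)r_0}=2^{k+4}\mathbb{M}^\alpha(T_n^{ij})\leq 2^{k+4}\mathbb{M}^\alpha(T_n),$$ which is the bound appearing in \eqref{stima2-firstversion}.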
Since by \eqref{eqn:conv-Tij} and \eqref{e:def_slicing} for almost every radius $\rho_k^{ij}$ \begin{equation}\label{stima2conv} \lim_{n\to \infty}\mathbb{F}(S^{ij}(\rho_k^{ij}) - S_n^{ij}(\rho_k^{ij}))=0, \end{equation} by lower semi-continuity of $\mathbb{M}^\alpha$ with respect to the flat convergence we deduce that \begin{equation}\label{stima_massa_alfa_slice_T} \mathbb{M}^\alpha(S^{ij}(\rho_k^{ij})) \leq 2^{k+4}C. \end{equation} Since for every $k\in\mathbb{N}$ the number of possible pairs $(i,j)$ is finite, up to choosing iteratively a (non relabelled) subsequence $\{T_n\}_{n\in\mathbb{N}}$, we can assume that estimates \eqref{stima2conv} and \eqref{stima_massa_alfa_slice_T} hold for every $(i,j) \in J_k$. We observe that $\partial T_{n}^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij} Q^j)=\partial T_{n}^{ij}$ and analogously $\partial T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij} Q^j)=\partial T^{ij}$, which combined with \eqref{e:def_slicing} gives respectively \begin{equation}\label{e:defSnij} S_n^{ij}(\rho_k^{ij}) =\partial (T_{n}^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j))- \partial T_{n}^{ij}, \end{equation} and \begin{equation}\label{e:defSij} S^{ij}(\rho_k^{ij})=\partial (T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j))- \partial T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij} Q^j). \end{equation} Consequently, we deduce respectively that \begin{equation} \label{eqn:sij} [S^{ij}_n(\rho_k^{ij})](\mathbb{R}^d) = 0, \quad \text{and} \quad [S^{ij}(\rho_k^{ij})](\mathbb{R}^d) = 0. 
\end{equation} We denote $$S:=\sum_{(i,j)\in J_k}S^{ij}(\rho_k^{ij}) \quad \text{and} \quad S_n:=\sum_{(i,j)\in J_k}S^{ij}_n(\rho_k^{ij}).$$ {\em Step 6: Transport between $\partial T$ and the corresponding slices $S$.} We define \begin{equation}\label{def1} T_{n,1}^{ij}:=T_{n}^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j), \quad T_{n,1}:=\sum_{(i,j)\in J_k} T_{n,1}^{ij}. \end{equation} We remark that, by Theorem \ref{t:propr_good_dec}(2), for $\mathscr{H}^1$-a.e. $x$ \begin{equation}\label{stimadens} \begin{split} |\theta_{T_{n,1}}(x)|&= \bigg|\sum_{(i,j)\in J_k} \theta_{T_{n,1}^{ij}}(x)\bigg| \leq \sum_{(i,j)\in J_k} |\theta_{T_{n}^{ij}}(x)| \\ &\leq \sum_{(i,j)\in J_k} P_{n}(\{\gamma \in \mathrm{Lip}(Q^i,Q^j): x \in {\rm{Im}}\gamma \}) \leq P_{n}(\{\gamma \in \mathrm{Lip}: x \in {\rm{Im}}\gamma \}) = |\theta_{T_{n}}(x)|. \end{split} \end{equation} We observe that, since $\partial T_{n}^{ij}= \partial T_{n}^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (Q^i\cup Q^j)$, by \eqref{e:defSnij} \begin{equation}\label{eqn:tn3-boundary} \partial T_{n,1}=\sum_{(i,j)\in J_k}\partial T_{n,1}^{ij}= \sum_{(i,j)\in J_k}\big(\partial T_{n}^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij} Q^j)+ S_n^{ij}(\rho_k^{ij})\big)=\sum_{(i,j)\in J_k}\big(\partial T_{n}^{ij}+ S_n^{ij}(\rho_k^{ij})\big)=\partial T_{n}+ S_n. \end{equation} Analogously one can define $T^{ij}_1$ and $T_1$ as $$ T_{1}^{ij}:=T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j), \quad T_{1}:=\sum_{(i,j)\in J_k} T_{1}^{ij}.
$$ We have \begin{align}\nonumber \partial T_1&= \sum_{(i,j)\in J_k} \partial T_{1}^{ij}= \sum_{(i,j)\in J_k} \partial (T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j))\stackrel{\eqref{e:defSij}}{=} \sum_{(i,j)\in J_k} (\partial T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j)+ S^{ij}(\rho_k^{ij}))\\ &=\sum_{(i,j)\in J_k} (\partial T^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (Q^i\cup Q^j))+S=(\mu^+-\mu^-)+S=\partial T_{opt}+S.\label{eqn:t3-boundary} \end{align} \bigskip {\em Step 7: Connection of the slices of $T_n$ and $T$. } We define $$\sigma_n:=\sum_{(i,j)\in J_k} (S_n^{ij}(\rho_k^{ij}))_++\sum_{(i,j)\in J_k} ( S^{ij}(\rho_k^{ij}))_- \quad \mbox{and} \quad \nu_n:= \sum_{(i,j)\in J_k} ( S^{ij}(\rho_k^{ij}))_++\sum_{(i,j)\in J_k} (S_n^{ij}(\rho_k^{ij}))_-.$$ We observe that $$\sigma_n- \nu_n\rightharpoonup 0, \quad \mbox{and} \quad \mathbb{M}(\sigma_n)=\mathbb{M}(\nu_n). $$ Indeed, the weak-$*$ convergence holds because, by \eqref{stima2conv}, we get $$\sum_{(i,j)\in J_k} (S_n^{ij}(\rho_k^{ij}))_+ \rightharpoonup \sum_{(i,j)\in J_k} (S^{ij}(\rho_k^{ij}))_+ \quad \mbox{and} \quad \sum_{(i,j)\in J_k} (S_n^{ij}(\rho_k^{ij}))_-\rightharpoonup \sum_{(i,j)\in J_k} ( S^{ij}(\rho_k^{ij}))_- .$$ By \eqref{eqn:sij}, we deduce that $$\mathbb{M}(\nu_n)-\mathbb{M}(\sigma_n)=\sum_{(i,j)\in J_k} \big[(S^{ij}(\rho_k^{ij}))_+(\mathbb{R}^d) - (S^{ij}(\rho_k^{ij}))_-(\mathbb{R}^d) + (S_n^{ij}(\rho_k^{ij}))_-(\mathbb{R}^d) - (S_n^{ij}(\rho_k^{ij}))_+(\mathbb{R}^d)\big]=0.$$ Moreover, thanks to \eqref{stima2-firstversion} and \eqref{stima_massa_alfa_slice_T} we have that $$\mathbb{M}^\alpha(\sigma_n)+\mathbb{M}^\alpha(\nu_n)\leq 2^{2kd+k+6}C.$$ Applying Lemma \ref{high_multiplicity}, for every $k \geq k_2$, there exists $n_2=n_2(k)\geq n_1(k)$ such that for every $n\geq n_2$ there exists a transport $T_{n,conn}$ such that \begin{equation}\label{connessionebord} \partial T_{n,conn}= \nu_n- \sigma_n=S-S_n,
\quad \mbox{and} \quad \mathbb{M}^\alpha(T_{n,conn})<\varepsilon. \end{equation} \bigskip {\it Step 8: Improved semi-continuity of the energy to bound a modified density of $T$ which neglects cancellations among different partitions.} In this step we will label the dependence of $T^{ij}$ and $T_n^{ij}$ on $k$ explicitly, with the notation $T^{ij}_k$ and $T^{ij}_{n,k}$. In particular we write $T^{ij}_k=[E_k^{ij}, \tau_k^{ij}, \theta_k^{ij}]$. Let us consider the rectifiable set $E=\cup_{k\in \mathbb{N}} \cup_{i,j}E_k^{ij}$ and $\bar \theta_k=\sum_{ij}|\theta_k^{ij}|$. We claim that for $\mathscr{H}^1$-a.e. $x\in E$, the sequence $\bar \theta_k(x)$ is non-decreasing in $k$ and that, setting $\bar \theta=\sup_{k\in \mathbb{N}}\bar \theta_k$, we have \begin{equation}\label{bound1} \int_{E}\bar \theta^\alpha d\mathscr{H}^1 \leq C. \end{equation} To prove this claim, we define the positive measures $\nu^{ij}_k:=|\theta_k^{ij}| \mathscr{H}^1\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E_k^{ij}\in \mathscr{M}_+(\mathbb{R}^d)$ associated to $T^{ij}_k$ and the measure $\nu_k:=\sum_{ij}\nu^{ij}_k=\bar\theta_k \mathscr{H}^1\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E$. By the good decomposition of $T_n$, we deduce that \begin{equation}\label{bound0} \mathbb{M}(T_n)=\sum_{ij}\mathbb{M}(T^{ij}_{n,k}). \end{equation} By \eqref{eqn:conv-Tij} and \eqref{bound0}, we can then apply Lemma \ref{lemmasem} to the sequence $T^{ij}_{n,k}$ to deduce that for every fixed $k \in \mathbb{N}$ \begin{equation}\label{bound} \int_{E}\bar \theta_k^\alpha d\mathscr{H}^1 \leq \liminf_{n \to \infty}\mathbb{M}^\alpha(T_n)\leq C. \end{equation} Furthermore, we observe that $\nu_k \leq \nu_{k+1}$ for every $k \in \mathbb{N}$. Indeed, $$\theta_k^{ij}=\sum_{s,t: Q^s\subset Q^i, Q^t\subset Q^j}\theta_{k+1}^{st},$$ where it is understood that $Q^s, Q^t$ belong to $\Lambda(Q, k+1)$ and $Q^i, Q^j$ belong to $\Lambda (Q,k)$.
Therefore $$\bar \theta_k=\sum_{ij}|\theta_k^{ij}|=\sum_{ij}\bigg|\sum_{s,t: Q^s\subset Q^i, Q^t\subset Q^j}\theta_{k+1}^{st}\bigg|\leq \sum_{ij}\sum_{st: Q^s\subset Q^i, Q^t\subset Q^j}|\theta_{k+1}^{st}|=\sum_{s,t}|\theta_{k+1}^{st}|= \bar \theta_{k+1}.$$ Consequently, the monotonicity, together with the $k$-uniform bound \eqref{bound}, yields \eqref{bound1}. \bigskip {\em Step 9: Energy estimate for $T_1$.} We claim that there exist infinitely many indices $\{k_h\}_{h\in\mathbb{N}}$ such that \begin{equation}\label{eqn:en-T3} \mathbb{M}^\alpha(T_1)<\varepsilon. \end{equation} In the proof of this step we will trace the dependence of $T_1$ on $k$ explicitly with the notation $T_1^k$. We first observe that $\mathbb{M}(T_1^k)\to 0$ as $k \to +\infty$. To this aim, we denote by ${\rm length}(\gamma)$ the length of any curve $\gamma\in \mathrm{Lip}$. Since the function ${\rm length}$ is lower semi-continuous on $\mathrm{Lip}$ and $P_n$ converge weakly-$*$ as measures, by the good decomposition property \eqref{eqn:buona-dec-mass-T} of $T_n$, and since finally by Theorem~\ref{t:propr_good_dec}(2) the density of $T_n$ is bounded by $1= P_n(\mathrm{Lip})$, we have \begin{equation} \label{eqn:length-int} \int_{\mathrm{Lip}} {\rm length}(\gamma) \,dP \le \liminf_{n\to \infty} \int_{\mathrm{Lip}} {\rm length}(\gamma) \,dP_n = \liminf_{n\to \infty} \mathbb{M}(T_n) \leq \liminf_{n\to \infty} \mathbb{M}^\alpha(T_n). \end{equation} Hence we know that ${\rm length}(\gamma)\in L^1(P)$.
Now we define $$A_k(\gamma):= \bigcup \{\rho_k^{ij} Q^i \cup \rho_k^{ij} Q^j: \, Q^i,Q^j \in \Lambda(Q,k), \, \gamma(0) \in Q^i \, \text{and } \gamma(\infty)\in Q^j\},$$ and the function ${\rm length}_k:\mathrm{Lip} \to [0,+\infty)$ as $${\rm length}_k(\gamma):=\int_{0}^\infty|\dot{\gamma}|(t)\chi_{\{s : \gamma(s) \in A_k(\gamma)\}}(t)dt = \mathscr{H}^1({\rm Im}\gamma \cap A_k(\gamma) ).$$ We can then estimate $$\mathbb{M}(T_1^k)\leq \int_{\mathrm{Lip}} {\rm length}_k(\gamma) dP(\gamma).$$ As observed above, the limit $P$ has the property that $\gamma$ is an eventually constant curve for $P$-a.e. $\gamma$. We consequently deduce that ${\rm length}_k(\gamma) \to 0$ for $P$-a.e. $\gamma \in \mathrm{Lip}$. Moreover, ${\rm length}_k(\gamma)\leq {\rm length}(\gamma)$. Since ${\rm length}\in L^1(P)$, by dominated convergence we deduce that \begin{equation}\label{mass} \lim_{k\to \infty}\mathbb{M}(T_1^k)\leq \lim_{k\to \infty}\int_{\mathrm{Lip}} {\rm length}_k(\gamma) dP(\gamma)=0. \end{equation} By \eqref{mass}, there exists a subsequence $\{k_h\}_{h\in\mathbb{N}}$ such that the density $\theta_{1,{k_h}}$ of $T_{1}^{k_h}$ satisfies $\theta_{1,{k_h}}(x)\to 0$ as $h \to \infty$ for $\mathscr{H}^1$-a.e. $x\in E$. Moreover, thanks to \eqref{bound1}, we deduce that $|\theta_{1,{k_h}}|^\alpha \leq \bar \theta_{k_h}^\alpha \leq \bar \theta^\alpha \in L^1(\mathscr{H}^1\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} E)$ (where the set $E$ and the multiplicities $\bar \theta_{k_h}$ and $\bar \theta$ have been defined in Step 8) and consequently, by dominated convergence, that $$\mathbb{M}^\alpha(T_1^{k_h})=\int_{E}|\theta_{1,k_h}|^\alpha d\mathscr{H}^1 \to 0 \quad \mbox{as } h \to \infty,$$ which implies the claim in \eqref{eqn:en-T3}.
\bigskip {\em Step 10: Construction of the energy competitor for $T_n$.} In the rest of the proof we fix \begin{equation*} k\in\{k_h\}_{h\in\mathbb{N}} \text{ with } k\geq\max\{\bar{k}, k_2\}, \qquad \text{and} \qquad n\geq n_2(k), \end{equation*} where $\bar{k}$ and $n$ are obtained in Lemma \ref{second}, with $\sfrac{\Delta}4$ in place of $\Delta$ and $\{G_k\}_{k \in \mathbb{N}}$, $\{T_n\}_{n \in \mathbb{N}}$ and $T$ being those used so far in the proof of Theorem \ref{thm:main}. We recall that $k_2$ was defined in \eqref{eqn:pocamassaneicubi0}, $\{k_h\}$ in \eqref{eqn:en-T3}, and $n_2(k)$ in \eqref{connessionebord}. We deduce from \eqref{concl1} the following estimate \begin{equation}\label{conc2} \mathbb{M}^\alpha\left (T_{n}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_{k} \cap \left \{\theta_{n}>\sqrt{\varepsilon}\right\})\right ) \overset{\eqref{eps}}{\geq} \mathbb{M}^\alpha\left (T_{n}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \left(G_{k} \cap \left \{\theta_{n}>\left(\frac{\delta_{\Delta/4}}{2C}\right)^\frac{1}{1-\alpha}\right\}\right )\right) \overset{\eqref{concl1}}{\geq} \mathbb{M}^\alpha(T)-\frac{\Delta}{4}. \end{equation} In the first inequality we used that $\sqrt{\varepsilon} \leq \left(\sfrac{\delta_{\Delta/4}}{2C}\right)^\frac{1}{1-\alpha}$, by \eqref{eps}. We define the following traffic path: $$T_{n,comp}:= T_{n,conn}+T_{opt}-T_1+T_{n,1}.$$ This is a competitor for $T_n$, namely $\partial T_{n,comp}=\partial T_{n}$. Indeed, thanks to \eqref{eqn:t3-boundary}, \eqref{eqn:tn3-boundary}, and finally \eqref{connessionebord}, we compute $$ \partial T_{n,comp}=\partial T_{n,conn}+ \partial T_{opt}-\partial T_1+\partial T_{n,1}= \partial T_{n,conn} -S+\partial T_{n}+ S_n \overset{\eqref{connessionebord}}{=}\partial T_{n}.
$$ \bigskip {\em Step 11: Energy estimate and conclusion.} To estimate the energy of the competitor $T_{n,comp}$ we first use the sub-additivity of $\mathbb{M}^\alpha$ and the smallness of the energy contributions of $T_{n,conn}$ and $T_1$, in view of \eqref{connessionebord} and \eqref{eqn:en-T3}. We obtain that $$ \mathbb{M}^\alpha(T_{n,comp})\leq \mathbb{M}^\alpha(T_{n,1})+\mathbb{M}^\alpha(T_{n,conn})+\mathbb{M}^\alpha(T_{opt})+\mathbb{M}^\alpha(T_1)\leq \mathbb{M}^\alpha(T_{n,1})+\mathbb{M}^\alpha(T_{opt})+2\varepsilon, $$ which, combined with \eqref{gap} and \eqref{conc2}, reads \begin{equation}\label{eqn:en-est} \mathbb{M}^\alpha(T_{n,comp})\overset{\eqref{gap}}{\leq} \mathbb{M}^\alpha(T_{n,1})+\mathbb{M}^\alpha(T)-\Delta+2\varepsilon \overset{\eqref{conc2}}{\leq} \mathbb{M}^\alpha(T_{n,1})+\mathbb{M}^\alpha(T_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k\cap \{|\theta_n| >\sqrt{\varepsilon}\}))-\frac{3\Delta}{4}+2\varepsilon. \end{equation} Next, we call $\overline{T}_1:= T_{n,1}$ and $\overline{T}_2:= T_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k\cap \{|\theta_n| >\sqrt{\varepsilon}\})$ and we estimate their densities. We first observe that, by \eqref{defB}, it holds $G_k \cap (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j)\subset B_k^c$. This implies that for every $x\in G_k \cap (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j)$, either $Q^i$ or $Q^j$ belongs to $ \Lambda(Q,k)\setminus \{Q^h: h=1,\dots,N\}$. Recalling the definition \eqref{def1} $T_{n,1}^{ij}=T_{n}^{ij}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (\rho_k^{ij}Q^i\cup \rho_k^{ij}Q^j)$, applying \eqref{small density}, we can estimate the density of $\overline{T}_1$ as follows \begin{equation} \label{eqn:t1bounds-0} |\theta_{\overline{T}_1}| \leq { \varepsilon}\qquad \mbox{for $\mathscr{H}^1$-a.e. $x\in G_k$}. \end{equation} Notice that \eqref{eqn:t1bounds-0} may no longer hold for $x\notin G_k$: indeed \eqref{small density} may fail if both $Q^i$ and $Q^j$ belong to $\{Q^h: h=1,\dots,N\}$.
On the other hand, the density of $\overline{T}_2$ satisfies \begin{equation} \label{eqn:t2bounds} \sqrt {\varepsilon} \leq |\theta_{\overline{T}_2}(x)| \leq |\theta_{T_n}(x)|,\qquad \mbox{for $\mathscr{H}^1$-a.e. $x\in G_k\cap \{ |\theta_{\overline{T}_2}|>0\}$}. \end{equation} Combining the bounds \eqref{eqn:t1bounds-0} and \eqref{eqn:t2bounds}, we deduce that \begin{equation}\label{ineq} |\theta_{\overline{T}_1}|^\alpha+ |\theta_{\overline{T}_2}|^\alpha \leq \varepsilon^\alpha+ |\theta_{\overline{T}_2}|^\alpha \leq (\varepsilon^{\alpha/2} +1)|\theta_{\overline{T}_2}|^\alpha,\qquad \mbox{for $\mathscr{H}^1$-a.e. $x\in G_k\cap \{ |\theta_{\overline{T}_2}|>0\}$}. \end{equation} We employ this inequality together with \eqref{stimadens} in the energy estimate \begin{equation*} \begin{split} \mathbb{M}^\alpha(\overline{T}_1)+ \mathbb{M}^\alpha(\overline{T}_2) &= \mathbb{M}^\alpha(\overline{T}_1 \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k^c \cup(G_k \cap \{ \theta_{\overline{T}_2}=0\})))+ \int_{G_k \cap \{ |\theta_{\overline{T}_2}|>0\}}\big(|\theta_{\overline{T}_1}|^\alpha+|\theta_{\overline{T}_2}|^\alpha\big) d\mathscr{H}^1 \\ &\overset{\eqref{stimadens},\eqref{ineq}}{\leq}\mathbb{M}^\alpha(T_n \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k^c \cup(G_k \cap \{ \theta_{\overline{T}_2}=0\})))+ (\varepsilon^{\alpha/2} +1)\int_{G_k \cap \{ |\theta_{\overline{T}_2}|>0\}} |\theta_{\overline{T}_2}|^\alpha d\mathscr{H}^1 \\ &\overset{\eqref{eqn:t2bounds}}{\leq} \mathbb{M}^\alpha(T_n \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k^c \cup(G_k \cap \{ \theta_{\overline{T}_2}=0\})))+ (\varepsilon^{\alpha/2} +1)\int_{G_k \cap \{ |\theta_{\overline{T}_2}|>0\}} |\theta_{T_n}|^\alpha d\mathscr{H}^1 \\ &= \mathbb{M}^\alpha(T_n \mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k^c \cup(G_k \cap \{ \theta_{\overline{T}_2}=0\})))+(\varepsilon^{\alpha/2} +1)\mathbb{M}^\alpha(T_n\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G_k \cap \{ |\theta_{\overline{T}_2}|>0\}))\\ & \leq
(\varepsilon^{\alpha/2} +1)\mathbb{M}^\alpha(T_n). \end{split} \end{equation*} We plug this estimate in \eqref{eqn:en-est} and we recall that $\mathbb{M}^\alpha(T_n)\leq C$, so that \begin{equation}\label{eqn:finita} \begin{split} \mathbb{M}^\alpha(T_{n,comp}) \leq (\varepsilon^{\alpha/2} +1)\mathbb{M}^\alpha(T_n)-\frac{3\Delta}{4}+2\varepsilon \leq \mathbb{M}^\alpha(T_n)-\frac{3\Delta}{4}+2\varepsilon+C\varepsilon^{\alpha/2} \overset{\eqref{eps}}{\leq} \mathbb{M}^\alpha(T_n)-\frac{\Delta}{2}. \end{split} \end{equation} The estimate \eqref{eqn:finita} contradicts the optimality of $T_n$. \begin{remark}\label{hmas} In the spirit of the works \cite{White1999,depauwhardt,flat-relax}, we can replace $x \mapsto |x|^\alpha$ with more general functions $H:\mathbb{R}\to[0,\infty)$ that are even, sub-additive, lower semi-continuous, monotone non-decreasing in $(0,+\infty)$, continuous at $0$ and satisfying $H(0)=0$. The associated functionals on traffic paths are usually called $H$-masses and are defined as $$\mathbb{M}_H (T):=\int_E H(\theta(x)) d\mathscr{H}^1(x), \qquad \mbox{where $T=[E,\tau,\theta]\in \mathbf{R}_{1}(\mathbb{R}^d)$}.$$ The obvious analogue of Theorem \ref{thm:main} holds true. We divide the argument into two cases: \begin{itemize} \item {\em First case: $\lim_{\theta\to 0^+} \sfrac{H(\theta)}{\theta}=+\infty$.} For every $\delta>0$ there exists $\varepsilon(\delta,H)>0$ such that $\sfrac{\varepsilon(\delta,H)}{H(\varepsilon(\delta,H))}<\delta$. One can repeat the proof of all the statements of Section \ref{sec:con} just replacing $\mathbb{M}^\alpha$ by $\mathbb{M}_H$.
The only differences are in Lemma \ref{second}: the statement \eqref{concl} becomes $$ \mathbb{M}_H\left (T_{n}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} \left(G_{k} \cap \left \{|\theta_{n}|>\varepsilon\left(\frac{\delta_{T,\Delta}}{2C},H\right)\right\}\right )\right) \geq \mathbb{M}_H(T)-\Delta, $$ in the proof we choose $\varepsilon:=\varepsilon(\sfrac{\delta_{T,\Delta}}{2C},H)$ and we change \eqref{eqn:mass-to-0} into $$\mathbb M(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i}\cap \{|\theta_{n_i}|\leq \varepsilon \}))<\frac{\varepsilon}{H(\varepsilon)}\mathbb M_H(T_{n_i}\mathbin{\trait{.5}{5.5}{.5}\trait{3.5}{0}{.5}} (G'_{k_i} \cap \{|\theta_{n_i}|\leq \varepsilon\}))<C\frac{\varepsilon}{H(\varepsilon)}<\frac{\delta_{T,\Delta}}{2}.$$ We can then repeat Section \ref{pro} verbatim, with the same proof of Theorem \ref{thm:main}, just replacing $\mathbb{M}^\alpha$ by $\mathbb{M}_H$ and modifying \eqref{conc2} according to the new version of Lemma \ref{second}. \item {\em Second case: $\liminf_{\theta\to 0^+} \sfrac{H(\theta)}{\theta}<+\infty$.} Then it is easy to show that the minimal transport energy $$W^H(\mu^-,\mu^+):=\inf\{\mathbb{M}_H(T): \mbox{$T$ is a traffic path connecting $\mu^-$ to $\mu^+$}\},$$ defined analogously to \eqref{mainp}, metrizes the weak-$*$ convergence of measures. We can then simply repeat the proof in \cite[Proposition 6.12]{BCM} to get the validity of Theorem \ref{thm:main}. \end{itemize} We observe moreover that the continuity of $H$ at $0$ is a necessary hypothesis for the validity of Theorem \ref{thm:main}. Indeed consider the case of the size, i.e. \begin{equation}\label{size} H(\theta)=1 \quad \mbox{on $\mathbb{R}\setminus \{0\}$ \qquad and \qquad }H(0)=0.
\end{equation} Consider $\mu^-:=\delta_0$ and $\mu^+:= \delta_{e_1}$; for every $n \in \mathbb{N}$ we define $$\mu^-_n:= \delta_0 \qquad \mbox{and}\qquad \mu^+_n:=\frac 1n \delta_{\sfrac{e_1}{2}+\sfrac{e_2}{8}}+\left(1-\frac 1n\right)\delta_{e_1}.$$ Since $\mu^-_n$ and $\mu^+_n$ are finite atomic measures, by \cite[Proposition 9.1]{BCM} the optimal traffic path $T_n$ is a finite graph made of segments with no loops. Moreover, by \eqref{size}, the energy is the sum of the lengths of the segments composing the graph. In particular, the graph has to be connected, since both the points $\sfrac{e_1}{2}+\sfrac{e_2}{8}$ and $e_1$ have to be connected to $0$. As a consequence, the energy of any traffic path in $\textbf{TP}(\mu^-_n,\mu^+_n)$ must be greater than or equal to the length of the minimal tree connecting the three points, which is the union of the supports of the following two curves $\gamma_1,\gamma_2:[0,1]\to \mathbb{R}^d$ \begin{equation} \gamma_1(t):= t \left(\frac{e_1}2+\frac {e_2}8\right), \qquad \mbox{and} \qquad \gamma_2(t):= \frac {1+t}2 e_1+\frac {1-t}8e_2. \end{equation} Hence $W^H(\mu^-_n,\mu^+_n)=\frac{\sqrt{17}}{4}$ for every $n \in \mathbb{N}$ and an optimal traffic path $T_n\in \textbf{OTP}(\mu^-_n,\mu^+_n)$ is \begin{equation*} T_n:=I_{\gamma_1}+(1-\sfrac{1}{n})I_{\gamma_2}. \end{equation*} We observe that $$T_n\rightharpoonup T:= I_{\gamma_1}+I_{\gamma_2}.$$ As previously observed $\mathbb M_H(T)=\sfrac{\sqrt{17}}{4}>1\geq W^H(\mu^-,\mu^+)$ (since the segment joining $\mu^-$ and $\mu^+$ has energy one). Since $\mu^\pm_n\rightharpoonup \mu^\pm$, this inequality contradicts the stability. \end{remark} \subsection*{Acknowledgments} M. C. was partially supported by the Swiss National Science Foundation grant 200021\_182565. A. M. acknowledges partial support from GNAMPA-INdAM.
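The elementary geometry behind the counterexample in Remark \ref{hmas} can be checked numerically. The sketch below (plain Python, standard library only) verifies that the tree made of $\gamma_1$ and $\gamma_2$ has total length $\sfrac{\sqrt{17}}{4}>1$, and that the angle at the branch point exceeds $120^\circ$, so no Steiner point can shorten the tree.

```python
import math

# Points from the remark: source delta_0 at the origin, targets at
# e1/2 + e2/8 and at e1 (working in the plane spanned by e1, e2).
origin = (0.0, 0.0)
branch = (0.5, 0.125)   # e1/2 + e2/8, common endpoint of gamma_1 and gamma_2
target = (1.0, 0.0)     # e1

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Total length of the tree Im(gamma_1) U Im(gamma_2):
total = dist(origin, branch) + dist(branch, target)
assert abs(total - math.sqrt(17) / 4) < 1e-12   # sqrt(17)/4 ~ 1.031 > 1

# Angle at the branch point between the edges towards 0 and towards e1;
# it exceeds 120 degrees, so the two segments already form the minimal
# (Steiner) tree connecting the three points.
ux, uy = origin[0] - branch[0], origin[1] - branch[1]
vx, vy = target[0] - branch[0], target[1] - branch[1]
cos_angle = (ux * vx + uy * vy) / (dist(origin, branch) * dist(branch, target))
assert abs(cos_angle + 15 / 17) < 1e-12         # cos = -15/17, angle ~ 152 deg
assert math.degrees(math.acos(cos_angle)) > 120
```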
\section{Motivations for the Phase 1 upgrade of the pixel detector} \label{motivations} The silicon pixel detector \cite{pixeldet} is the innermost part of CMS. Its key role is to provide the precise spatial measurements used as seeds for the reconstruction of charged particle trajectories near the primary interaction point. Its performance is thus crucial for the identification of primary and secondary vertices, and for the measurement of long-lived particles such as $b$ quarks and $\tau$ leptons. The present detector was designed for a maximum peak luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$, the design value of the Large Hadron Collider, which will be exceeded in the so-called Phase 1. Higher luminosities are not sustainable, mostly due to readout inefficiencies. \\ To fully profit from the large datasets that will be collected by the CMS experiment, a new optimized silicon pixel detector will be installed during the long LHC shutdown in 2016. The new detector will be required to retain a good hit detection efficiency and prevent data losses in the large occupancy environment, to assure good track seeding and pattern recognition performance, and to provide high resolution on track parameters.\\ All modifications will be constrained by the existing cables and off-detector services, since space limitations prevent the installation of additional components. Moreover, the number of detector module types will have to be reduced, in order to limit the time and costs for production and testing. In addition, despite the use of radiation-resistant technologies, the detector is not sufficiently radiation hard to survive until the end of the Phase 1, when a total integrated luminosity of 350 fb$^{-1}$ will be collected. The innermost barrel layer will have sustained a particle fluence of about 10$^{16}$ n$_{eq}$cm$^{-2}$, producing an irreversible degradation of its performance. An intermediate replacement of this layer will therefore be needed.
\section{Detector design and material budget} \label{design} The present pixel detector does not provide a hermetic three-hit coverage. The resulting seeding inefficiencies limit the performance of the High Level Trigger, and slow the offline reconstruction, based on a sophisticated iterative algorithm \cite{tracking}.\\ A new geometrical layout, shown in Fig.\ref{det}, will thus be implemented. \begin{figure}[htbp] \begin{center} \includegraphics[height=5.2cm]{detectorxsect.pdf} \caption{\small{Longitudinal cross section of the upgraded pixel detector, showing the location of barrel layers and endcap disks.}} \label{det} \end{center} \end{figure} The proposed barrel design includes four cylindrical layers, placed at radii of 3.9, 6.8, 10.9 and 16.0 cm. The innermost layer is moved closer to the interaction point, while the fourth layer is added in the gap between the present third pixel and the first strip layers. This will result in a factor two increase of the radial acceptance and a reduction of the extrapolation distance between pixel and strip detectors, with a benefit to the pattern recognition. \\ Three endcap disks will be installed at each side of the barrel, at 29.1, 39.6 and 51.6 cm from the interaction point. The new layout will provide an almost hermetic four-hit coverage up to a pseudorapidity of 2.5. One module type will be used in both barrel and endcap regions. Each module includes a silicon pixel sensor \cite{sensor} bump-bonded to 16 readout chips. In the barrel, the modules will be mounted on carbon fiber ladders glued onto stainless steel cooling tubes. In the endcaps the support structure will be made of blades arranged radially into half-disks, with a turbine-like geometry similar to that of the present detector. Each half-disk will be composed of two concentric rings, so that the innermost part can be removed and replaced independently after radiation damage.
A limitation of the present pixel detector is the significant amount of material within the tracking acceptance, which degrades the performance of track reconstruction. The biggest contributions are the silicon sensors, the mechanical support, the cooling system, and the electronics. In addition, the barrel endflange hosting cooling manifolds and electronic boards is a considerable amount of material located in front of the first forward disk. One of the objectives of the Phase 1 upgrade is a drastic reduction of the material budget. The present C$_{6}$F$_{14}$ cooling system will be replaced by a two-phase CO$_{2}$ system, which has suitable thermodynamic properties for flowing in micro-channels, low mass and sufficient radiation hardness. In both barrel and endcap the modules will be installed on ultra-lightweight support structures. Most of the barrel services currently on the endflange will be moved to the barrel supply tube, outside the tracking acceptance. \\ These modifications will result in at least a factor of two reduction of the material budget, as shown in Fig.\ref{material}. \begin{figure} \centering \subfloat{ \label{materialA} \includegraphics[height=4.4cm]{materialA.pdf}} \hspace{0.1cm} \subfloat{ \label{materialB} \includegraphics[height=4.4cm]{materialB.pdf}} \caption{\small{Material budget of the barrel (top) and the endcap (bottom) detector in terms of radiation lengths for the present (black dots) and the upgraded (green histogram) systems, as a function of track pseudorapidity. The grey bands show the pseudorapidity region outside the tracking acceptance. }} \label{material} \end{figure} Finally, a new powering system will be needed to power the increased number of components with the present cables and services. DC-DC converters will be used for this purpose.
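The benefit of DC-DC conversion follows from a one-line power argument: delivering the same power at $r$ times the voltage reduces the current in the existing cables by a factor $r$, and the ohmic loss $I^2R$ by $r^2$. The sketch below uses purely illustrative numbers (the load power, module voltage, cable resistance and conversion ratio are assumptions, not CMS parameters).

```python
# Toy illustration of on-detector DC-DC powering: same delivered power,
# higher cable voltage, quadratically smaller cable loss.
# All numbers are illustrative assumptions, not CMS values.
P_load = 100.0   # W, power drawn by a group of modules (assumed)
V_load = 2.5     # V, operating voltage at the module (assumed)
R_cable = 0.05   # ohm, round-trip resistance of the existing cables (assumed)

def cable_loss(conversion_ratio):
    """Ohmic loss in the supply cable when power is sent at
    conversion_ratio * V_load and stepped down on the detector."""
    current = P_load / (V_load * conversion_ratio)
    return current ** 2 * R_cable

direct = cable_loss(1.0)    # direct powering through the same cables
stepped = cable_loss(8.0)   # with an assumed step-down ratio of 8
assert abs(direct / stepped - 64.0) < 1e-9   # loss scales as ratio^2
```

This quadratic scaling is why step-down converters allow more components to be powered through the unchanged cable plant.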
\section{Readout} \label{readout} The present pixel readout chip (PSI46v2) \cite{roc} was designed to provide high hit detection efficiency at the design LHC peak luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$. Assuming a Level 1 Trigger accept rate of 100 kHz, at the instantaneous luminosity of 2$\times$10$^{34}$ cm$^{-2}$s$^{-1}$ the dynamic inefficiency of the innermost barrel layer is estimated to grow from 4\% to 16\%, with unacceptable deterioration of hit reconstruction and track seeding efficiencies. The main sources of readout inefficiency must be addressed to prevent data losses. In order to keep a high single-hit efficiency, the size of the data buffers at the double column periphery will be extended from the present 32 to 80 units, within the space limitations. An additional buffer stage will be introduced, to store the Level 1 Trigger accepted hit information while waiting for the readout token. \\ Moreover, the implementation of a faster readout will be needed to read the increased number of channels through the existing optical fibers. The plan is to switch from the current analogue signal to digital, with an on-chip ADC. In addition high speed (320 Mbps) links will be used to increase the bandwidth. With these modifications the dynamic inefficiency is estimated to be about 5.7\% for a trigger rate of 100 kHz and a peak luminosity of 2$\times$10$^{34}$ cm$^{-2}$s$^{-1}$. \begin{figure} \centering \subfloat{ \label{IP_transv} \includegraphics[height=4.2cm]{IP_transverse_barrel.pdf}} \hspace{0.1cm} \subfloat{ \label{IP_long} \includegraphics[height=4.2cm]{IP_barrel.pdf}} \caption{\small{ Transverse (top) and longitudinal (bottom) impact parameter resolution for present (black) and upgraded (red) pixel detectors as functions of track momentum, for muon events. The baseline upgraded configuration with the innermost layer at a radius of 39 mm, a sensor thickness of 285 $\mu$m and a pixel cell size of 100$\times$150 $\mu$m$^{2}$ was considered.
Only the barrel is shown. The improvement in the forward region is about 40\%.}} \label{IP} \end{figure} \section{Performance improvement} \label{performance} The enhanced features of the new pixel detector will produce a significant improvement of the performance, in terms of track parameter resolution, tracking efficiency and fake rate, vertex reconstruction and $b$-tagging. The material reduction, the increase of the radial acceptance, and the four-hit hermetic coverage will provide a better resolution of the track parameters. The improvement will affect both fully reconstructed and pixel-only tracks, used by the High Level Trigger. \\ \begin{figure} \centering \subfloat{ \label{PV_transv} \includegraphics[height=4.5cm]{PV_transv.pdf}} \hspace{0.1cm} \subfloat{ \label{PV_long} \includegraphics[height=4.5cm]{PV_long.pdf}} \caption{\small{ Transverse (top) and longitudinal (bottom) primary vertex resolution for present (black) and upgraded (red) pixel detectors as functions of the number of tracks, for a sample of top events at the instantaneous luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$, from full simulation. The baseline upgraded configuration with the innermost layer at a radius of 39 mm, a sensor thickness of 285 $\mu$m and a pixel cell size of 100$\times$150 $\mu$m$^{2}$ was considered. The improvement in both cases is about 20\%.}} \label{PV} \end{figure} Fig.\ref{IP} shows the transverse and longitudinal impact parameter resolution of fully reconstructed tracks, for the present and upgraded detectors. Only the result for the barrel is presented. The improvement will be about 25\%, and will reach 40\% in the forward region at the location of the endflange. The effect is more pronounced in the low momentum region, where the multiple scattering is dominant and the sensitivity to the material reduction is thus bigger. A factor of four enhancement is also expected for pixel-only tracks, thanks to the extended radial coverage.
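The purely geometric part of this gain can be sketched with the textbook two-hit extrapolation formula $\sigma_{d_0}=\sqrt{(r_2\sigma_1)^2+(r_1\sigma_2)^2}/(r_2-r_1)$, which neglects multiple scattering (high-momentum limit). In the sketch below the 10 $\mu$m hit resolution is an assumption, and the present layer radii are taken as approximately 4.4 and 7.3 cm; the upgraded radii are those quoted in the text.

```python
import math

def sigma_d0(r1, r2, s1, s2):
    """Transverse impact-parameter resolution from a straight-line
    extrapolation of two hits at radii r1 < r2 to the beamline
    (multiple scattering neglected, i.e. the high-momentum limit)."""
    return math.hypot(r2 * s1, r1 * s2) / (r2 - r1)

s_hit = 0.010   # mm, assumed single-hit resolution (10 um)
present = sigma_d0(44.0, 73.0, s_hit, s_hit)   # present layers 1-2, radii in mm
upgraded = sigma_d0(39.0, 68.0, s_hit, s_hit)  # upgraded layers 1-2, radii in mm
print(f"present: {present*1e3:.1f} um, upgraded: {upgraded*1e3:.1f} um")
assert upgraded < present   # moving layer 1 inwards tightens the extrapolation
```

The lever-arm geometry alone accounts for only part of the quoted improvement; the rest comes from the reduced material budget, which suppresses the multiple-scattering term that dominates at low momentum.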
The better track parameter resolution will enhance the vertex reconstruction \cite{tracking,vertex} performance. In Fig.\ref{PV} the primary vertex resolution is shown as a function of the number of tracks associated to the vertex, at the peak luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$, for the present and the upgraded detectors. The improvement is about 20\% in both transverse and longitudinal planes. This will be crucial to disentangle multiple interactions, 25 on average at 10$^{34}$ cm$^{-2}$s$^{-1}$, within a bunch crossing. Since a similar effect is expected for displaced secondary vertices, a 20\% improvement of lifetime measurements is also foreseen. The enhancement of both tracking and vertexing performance will improve the identification of $b$-jets. The Vertex tagger \cite{btag} was tested on a sample of $t \overline{t}$ events, at a peak luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. The result is shown in Fig.\ref{B}. The new system will provide a factor 6 reduction of the contamination from light quark jets with a 20\% increase of efficiency. The four-hit coverage will offer the opportunity to implement a tracking algorithm with quadruplet seeding. This will produce an increase of the efficiency with a reduction of the fake rate, which will be crucial in the dense hit environment. \begin{figure}[htbp] \begin{center} \includegraphics[height=5.6cm]{btagging.pdf} \caption{\small{$b$-tagging efficiency and contamination from light quarks for the Vertex algorithm \cite{btag}, for the present (black) and the upgraded (red) setups. A sample of top events at an instantaneous luminosity of 10$^{34}$ cm$^{-2}$s$^{-1}$, obtained with full simulation, was used.}} \label{B} \end{center} \end{figure} The pattern recognition will be faster and less affected by the combinatorics, thanks to the bigger radial acceptance and to the smaller gap between the outermost pixel and the innermost strip layer. 
Finally, the material decrease will reduce the photon conversion, leading to improved electron reconstruction, and will reduce the rate of secondary tracks from nuclear interactions. \section{Further development for late Phase 1} \label{latephase1} The innermost part of the pixel detector is expected to be heavily degraded before the end of the Phase 1 \cite{radhardnessBPIX,radhardnessFPIX}. All components of the present system were designed to operate up to a total particle fluence of 6$\times$10$^{14}$ n$_{eq}$cm$^{-2}$. Two years of operation at the late Phase 1 luminosity are equivalent, for the innermost layer of the barrel and for the endcap at similar radii, to a fluence of 10$^{16}$ n$_{eq}$ cm$^{-2}$. At this dose the performance of the silicon sensor will be heavily degraded. Charge collection was measured in test beams as a function of the bias voltage, for various radiation fluences \cite{radhardnessBPIX}. \begin{figure}[htbp] \begin{center} \includegraphics[height=5.0cm]{bias.pdf} \caption{\small{Collected charge as a function of the applied bias voltage for various radiation fluences, for the barrel. After a dose of 1.1$\times$10$^{15}$ n$_{eq}$ cm$^{-2}$ and for a bias voltage of 400 V only 50\% of the original charge is collected \cite{radhardnessBPIX}.}} \label{rad} \end{center} \end{figure} The results are shown in Fig.\ref{rad}. After a dose of about 10$^{15}$ n$_{eq}$ cm$^{-2}$ only 50\% of the charge is collected, assuming that the bias voltage is raised from 150 to 400 V. This will lower the single hit efficiency below 97\%. In addition, at higher bias voltages the Lorentz drift induced by the magnetic field will be smaller. The resulting reduction of charge sharing between neighboring pixels will dramatically worsen the spatial resolution, which will be dominated by the binary resolution of single-pixel clusters.
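The fluence dependence of the collected charge can be pictured with the standard effective-trapping-time model, $Q(\Phi)=Q_0\,e^{-t_{coll}/\tau_{eff}}$ with $1/\tau_{eff}=\beta\,\Phi$. In the sketch below the damage constant $\beta$ and the collection time are assumed, order-of-magnitude values chosen to reproduce the $\sim$50\% figure quoted above; they are not fitted CMS data.

```python
import math

# Effective-trapping-time toy model for signal loss after irradiation:
#   Q(Phi) = Q0 * exp(-t_coll / tau_eff),   1 / tau_eff = beta * Phi.
# beta and t_coll are assumed, order-of-magnitude values.
beta = 4.0e-16   # cm^2/ns, trapping damage constant (assumed)
t_coll = 1.5     # ns, charge collection time at high bias (assumed)

def collected_fraction(fluence):
    """Fraction of the deposited charge surviving trapping at a given
    1 MeV neutron equivalent fluence (in n_eq/cm^2)."""
    return math.exp(-t_coll * beta * fluence)

for phi in (6e14, 1.1e15, 1.0e16):
    print(f"fluence {phi:.1e} n_eq/cm^2 -> {collected_fraction(phi):.0%} collected")
```

With these assumptions the model returns roughly 50\% at 1.1$\times$10$^{15}$ n$_{eq}$ cm$^{-2}$ and well below 1\% at 10$^{16}$ n$_{eq}$ cm$^{-2}$, consistent with the heavy degradation described above.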
After a dose of 1.2$\times$10$^{15}$ n$_{eq}$ cm$^{-2}$, and at a bias voltage of 600 V, the transverse hit resolution will be about two times worse than for the unirradiated sensor. \begin{figure}[htbp] \begin{center} \includegraphics[height=4.5cm]{pixelav.pdf} \caption{\small{Longitudinal (a) and transverse (b) hit position resolution (RMS) for the barrel detector as a function of the track pseudorapidity, for a pixel pitch of 75$\times$100 $\mu$m$^{2}$, a sensor thickness of 200 $\mu$m, and a 2000 electrons readout threshold. Various irradiation scenarios are considered. A detailed simulation of charge deposition and transport in the silicon sensor was used \cite{pixelav}. }} \label{hitres} \end{center} \end{figure} The innermost part of the detector will need to be replaced before the end of Phase 1, in order to assure good performance throughout the data taking period. The baseline upgrade plan foresees the replacement of the damaged parts with spare components. On the other hand, the opportunity for an enhancement in terms of detection efficiency, spatial resolution, and radiation hardness can be exploited. Both the silicon sensor and the front-end readout electronics can be improved. The use of mCz silicon instead of the present FZ would increase the sensor resistance to particle fluence, while assuring a similar signal charge collection at low bias voltage. Other options are available and currently under evaluation. \\ For the readout, a move from the current 250 nm to a 130 nm CMOS process is being considered. This would allow the implementation of a smaller pixel cell, and possibly lower thresholds. The performance of the innermost pixel layer with 75$\times$100 $\mu$m$^{2}$ pixel cells and a readout threshold of 2000 electrons has been evaluated. The advantage will be a better hit position resolution, lower than 10 $\mu$m in the transverse plane in the full pseudorapidity acceptance, as shown in Fig.\ref{hitres}.
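The gain from a smaller cell can be bounded with the standard binary-readout estimate $\sigma = \mathrm{pitch}/\sqrt{12}$, valid for single-pixel clusters; charge sharing between neighbouring pixels improves on it, which is why the loss of Lorentz drift after irradiation is so damaging. A minimal sketch:

```python
import math

def binary_resolution(pitch_um):
    """RMS position resolution of a binary (single-pixel-cluster) hit:
    pitch / sqrt(12), in micrometres."""
    return pitch_um / math.sqrt(12)

# Present transverse/longitudinal pitches vs the proposed smaller cell:
for pitch in (150, 100, 75):
    print(f"{pitch:3d} um pitch -> {binary_resolution(pitch):4.1f} um")

assert binary_resolution(75) < binary_resolution(100)   # ~21.7 vs ~28.9 um
```

Interpolation over clusters with shared charge brings the resolution well below this bound, consistent with the sub-10 $\mu$m transverse figure quoted above.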
The deterioration from irradiation will also be reduced. \begin{figure}[htbp] \begin{center} \includegraphics[height=5.2cm]{IP_smallpitch.pdf} \caption{\small{Longitudinal impact parameter resolution as a function of track momentum, for the baseline Phase 1 barrel detector (in black), and for a hypothetical setup with a first layer implementing smaller pixel cells and lower readout thresholds (in red). The improvement is about 40\% at high momentum. The result was obtained with full simulation.}} \label{IP_smallpitch} \end{center} \end{figure} The spatial resolution in the proximity of the primary interaction region is crucial for the measurement of the track impact parameter. A sizable improvement is thus expected, especially at high momentum, where the multiple scattering effect is negligible. Fig.\ref{IP_smallpitch} shows the impact parameter resolution for the baseline upgraded pixel barrel and for a scenario in which the first layer implements 75 $\times$ 100 $\mu$m$^{2}$ pixel cells, thinner sensors (220 $\mu$m) and lower readout thresholds (2000 electrons). The improvement is about 40\%. The implementation of smaller pixels will also enhance the performance of the $b$ and $\tau$ jet identification at high transverse momenta (above 200 GeV). \section{Conclusions} \label{conclusions} This article presents the plan for the Phase 1 upgrade of the CMS pixel detector, expected for 2016. The installation of a new detector with enhanced features will be needed, driven by the necessity of providing sufficiently good performance in the high luminosity environment. A modified geometrical layout, with a reduced amount of passive material within the active tracking region, a new cooling system and a new powering system will be implemented. The readout will be adapted to cope with the higher data rates and prevent data losses. These modifications will provide a significant improvement of the tracking, vertexing and $b$-tagging performance.
A review of the proposals currently under evaluation for further development of the innermost part of the pixel detector in late Phase 1 is also given.
\section{Introduction}\label{ssec:intro} Superconducting quantum systems are among the leading candidates for quantum information processing. However, microwave photons, which interact efficiently with superconducting qubits, are not well suited for transmitting quantum information over long distances. This is especially important for designing quantum repeaters that distribute entanglement between remote locations \cite{briegel1998quantum,sangouard2011quantum, kumar2019towards, asadi2018quantum, childress2005fault}. To overcome this problem, the use of microwave-to-optical transducers has been suggested \cite{kurizki2015quantum, lauk2020perspectives}. Several mediating systems can host transducers, including atomic ensembles \cite{imamouglu2009cavity, williamson2014magneto, o2014interfacing}, magnons \cite{everts2020ultrastrong, hisatomi2016bidirectional}, electro-optomechanical systems \cite{hill2012coherent,tian2012adiabatic}, and electro-optical systems \cite{soltani2017efficient}. Solid-state atomic ensembles such as rare-earth (RE) ions \cite{o2014interfacing, williamson2014magneto} and NV centers \cite{zhao2012scheme,li2017quantum}, in addition to atomic ensembles in gases \cite{hafezi2012atomic,gard2017microwave, petrosyan2009reversible}, represent one of the most promising systems for designing transducers, as they offer level structures with addressable optical and microwave transitions. Moreover, solid-state systems are attractive from the point of view of scalability. In rare-earth ions doped into a solid, the outer 5s and 5p shells insulate the 4f shell from the crystal environment. As a result, these ions are usually less subject to decoherence at low temperatures.
Among rare-earth ions with non-zero nuclear spins, ytterbium ($\mathrm{^{171}Yb}$), with a nuclear spin of $I=1/2$, has the simplest possible hyperfine energy structure, which makes the manipulation of its spin states straightforward \cite{tiranov2018spectroscopic, kindem2018characterization}. However, it does not have a telecom-wavelength transition. In general, telecom-wavelength photons are the best candidates to carry quantum information over long distances due to their minimal absorption in optical fibers. Erbium is a rare-earth ion that offers narrow homogeneous broadening and optical transitions in the telecom window. As a result, several transducer proposals have been developed based on $\mathrm{^{168}Er}$ doped into crystals. In particular, O’Brien et al. \cite{o2014interfacing} proposed the use of a controlled reversible inhomogeneous broadening (CRIB) quantum memory to absorb the incoming pulse in an Er-doped yttrium orthosilicate (YSO) crystal. The absorbed photon is then mapped onto either ground-state or optical excitations, depending on the direction of the signal conversion. To improve the efficiency of this protocol, Welinski et al. \cite{welinski2019electron} proposed to use excited-state spin levels instead of the ground states, as the former are less subject to dephasing mechanisms and therefore have a longer coherence time. On the other hand, there are also efforts to design transducers based on off-resonant approaches. In this regard, utilizing a Raman-like process, conversion of a microwave signal into an optical field at telecom wavelength has been demonstrated in $^{168}$Er:YSO \cite{fernandez2019cavity,fernandez2015coherent}. Most recently, the same group proposed the use of an erbium chloride hexahydrate ($^{168}$ErCl$_3\cdot$6H$_2$O) crystal without disorder to design a transducer with enhanced ion density but small optical and spin broadening \cite{everts2019microwave}. The use of off-resonant approaches is not limited to rare-earth ions.
Using a dark mode of the collective spin excitations, microwave to optical transfer of quantum states has also been discussed for nitrogen-vacancy centers \cite{li2017quantum}. Most off-resonant schemes are to some extent robust against decoherence mechanisms. Erbium has an odd isotope, $\mathrm{^{167}Er}$, with a non-zero nuclear spin of $I=7/2$. A key advantage of using $\mathrm{^{167}Er}$ instead of $\mathrm{^{168}Er}$ is that even at zero magnetic field, $\mathrm{^{167}Er}$:YSO offers around 5 GHz of hyperfine splitting. This is especially important when interacting with superconducting resonators such as superconducting coplanar waveguide cavities that suffer from energy dissipation due to Abrikosov vortex motion in the presence of magnetic fields \cite{song2009microwave}. Here, utilizing the dark state protocol, we propose the use of $\mathrm{^{167}Er}$ ions doped into YSO for microwave-to-optical transduction in a three-level system at zero external field. YSO is an attractive host crystal because of i) the small nuclear magnetic moments of yttrium ions, and ii) the low isotopic natural abundances of other constituent spins. We present a detailed analysis of the dark state transducer protocol and estimate the transfer efficiency and fidelity in Sec. \ref{protocol}. The implementation of the protocol is discussed in Sec. \ref{impl}. In this section, using the spin Hamiltonian, we investigate properties of the ground state microwave (MW) transitions of $\mathrm{^{167}Er}$:YSO at zero field, and we list some of the transitions in the GHz regime that can be used for the dark state protocol. 
Finally, we conclude and provide an outlook in Sec.~\ref{conclusion}. \section{Transduction}\label{protocol} \subsection{Dark state protocol} \begin{figure} \centering \includegraphics[width=7cm]{diagram.pdf} \caption{\textbf{(a)} Schematic design of the transducer, where the ensemble of $\mathrm{^{167}Er}$ ions doped into YSO is coupled to a microwave superconducting coplanar waveguide and an optical cavity. \textbf{(b)} Level diagram for the $j$th ion coupled to an optical cavity and a microwave cavity. This three-level system is driven by a classical field with Rabi frequency $\Omega$, and the transitions $\ket{g_1}-\ket{e}$ and $\ket{g_1}-\ket{g_2}$ are coupled to the optical and microwave photons, respectively. The detuning $\Delta_j$ is for the $j$th ion, set to be the same for both transitions.} \label{fig: dark} \end{figure} Inspired by work on optomechanical systems for transferring quantum states between two different frequencies \cite{wang2012using}, and on four-level nitrogen-vacancy centers in diamond for quantum transduction \cite{li2017quantum}, here we apply the dark state protocol to Er ions with a three-level structure. The main advantage of this protocol is that it is robust against spin decoherence, as the collective spin state is only virtually populated during the transfer time. Erbium is a Kramers ion, as it has an odd number of 4f electrons, with the ground state $^4I_{15/2}$ and the lowest excited state $^4I_{13/2}$. We define the three-level system using the states $\ket{g_1}$ and $\ket{g_2}$ from the $^4I_{15/2}$ ground state, and one of the energy levels of the excited state $^4I_{13/2}$ as $\ket{e}$. In Sec.~\ref{Microwave-transition}, we provide examples of energy levels of $\mathrm{^{167}Er}$:YSO that can be used as the ground states $\ket{g_1}$ and $\ket{g_2}$ even at zero external field. Before describing how this protocol works, we first introduce the system and its Hamiltonian.
An ensemble of Er ions is placed inside an optical cavity and a microwave superconducting coplanar waveguide (CPW) cavity. As shown in Fig.~\ref{fig: dark}, the optical transition $\ket{g_1}-\ket{e}$ is coupled to the optical cavity and the transition $\ket{g_1}-\ket{g_2}$ is coupled to the microwave cavity, while the transition $\ket{e}-\ket{g_2}$ is driven by a classical field with Rabi frequency $\Omega$. Here, for simplicity, we ignore the inhomogeneity in the coupling strength and define two average coupling strengths for the ions as $\Tilde{g}_1$ and $\Tilde{g}_2$ \cite{wesenberg2009quantum,amsuss2011cavity,li2017quantum} (for the effect of inhomogeneity in the coupling strength, see Ref.~\cite{kubo2010strong}). The detunings of the optical transition and of the transition $\ket{e}-\ket{g_2}$ are set to be the same, with $\Delta_j=\omega^j_{eg_1}-\omega_1=\omega^j_{eg_2}-\omega_\Omega$, where $\omega_1,\omega_\Omega$ are the frequencies of the optical cavity and the classical control field, and the index $j$ indicates the $j$th ion. We introduce the average detuning $\Delta=\Delta_j-\delta_j$, where $\delta_j$ is the inhomogeneous broadening for the $j$th spin in the excited state.
In the large detuning regime when $|\Delta|\gg |\Omega|, |\delta_j|, |\Tilde{g}_1|, |\Tilde{g}_2|$, the system Hamiltonian can be written as \cite{brion2007adiabatic, james2007effective}: \begin{equation} \begin{aligned} H_{\text{eff}}&=\frac{\Tilde{g}^2_1}{\Delta}\hat{a}^\dagger_1\hat{a}_1\hat{J}_{11}+\frac{\Omega^2}{\Delta}\hat{J}_{22}+(\Tilde{g}_2\hat{a}^\dagger_2+\frac{\Tilde{g}_1\Omega}{\Delta}\hat{a}^\dagger_1)\hat{J}_{12}\\ &+(\Tilde{g}_2\hat{a}_2+\frac{\Tilde{g}_1\Omega}{\Delta}\hat{a}_1)\hat{J}_{21}, \end{aligned} \label{eq:aeff} \end{equation} where $\hat{J}_{11}=\sum_{j=1}^{N}{\ket{g_1}_j\bra{g_1}}$, $\hat{J}_{22}=\sum_{j=1}^{N}{\ket{g_2}_j\bra{g_2}}$, $\hat{J}_{12}=\sum_{j=1}^{N}{\ket{g_1}_j\bra{g_2}}$, and $\hat{J}_{21}=\sum_{j=1}^{N}{\ket{g_2}_j\bra{g_1}}$ are the collective spin operators. In the low excitation regime, we can apply the Holstein-Primakoff approximation. Then, the above Hamiltonian can be further written as: \begin{equation} H_{\text{eff}}=\frac{\Tilde{g}_1\sqrt{N}\Omega}{\Delta}\hat{a}_1\hat{b}^\dagger+\Tilde{g}_2\sqrt{N}\hat{a}_2\hat{b}^\dagger+\text{H.c.}, \label{eq:eff} \end{equation} where we ignored the first two terms in Eq. (\ref{eq:aeff}) as they only give us a global energy shift which can be compensated later on. We also used the relations $\hat{J}_{12}\approx \sqrt{N}\hat{b}$ and $\hat{J}_{21}\approx\sqrt{N}\hat{b}^\dagger$ with the operator $\hat{b}$ satisfying the commutation relation $[\hat{b},\hat{b}^\dagger]=1$. Hence, we obtain a Hamiltonian that involves three different bosonic modes. Now, let us take several important imperfections into consideration: the optical cavity decay rate $\kappa_1$, the microwave cavity decay rate $\kappa_2$, the collective spin decay rate $\gamma_s$, and the collective spin dephasing rate $\gamma^*_s$. 
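As a quick consistency check of the beam-splitter structure of Eq.~(\ref{eq:eff}), the single-excitation coupling matrix in the basis $(\hat{a}_1,\hat{a}_2,\hat{b})$ can be diagonalized numerically. The sketch below uses arbitrary illustrative values for $G_1$ and $G_2$ (not the fitted parameters of this work) and verifies that the combination $(-G_2,\,G_1,\,0)/G$ is a zero-eigenvalue mode carrying no weight on the spin mode, while the two remaining eigenvalues are $\pm G$:

```python
import numpy as np

# Single-excitation coupling matrix of the effective Hamiltonian (eq:eff)
# in the basis (a1, a2, b).  G1, G2 are arbitrary illustrative values with
# G = sqrt(G1^2 + G2^2).
G1, G2 = 0.6, 0.8
M = np.array([[0.0, 0.0, G1],
              [0.0, 0.0, G2],
              [G1,  G2,  0.0]])

G = np.hypot(G1, G2)
v_dark = np.array([-G2, G1, 0.0]) / G    # candidate dark mode

print(bool(np.allclose(M @ v_dark, 0.0)))   # True: zero-eigenvalue mode
print(bool(np.allclose(np.sort(np.linalg.eigvalsh(M)), [-G, 0.0, G])))  # True
```

The zero mode is the dark mode exploited by the transfer protocol; the $\pm G$ bright modes carry spin weight and are the ones susceptible to spin decoherence.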
We use the master equation to describe the system dynamics, which is given by: \begin{equation} \begin{aligned} \Dot{\hat{\rho}}&=-i[\hat{H}_{\text{eff}},\hat{\rho}]+\kappa_1\mathcal{D}[\hat{a}_1]\hat{\rho}+\kappa_2\mathcal{D}[\hat{a}_2]\hat{\rho}+\gamma_s\mathcal{D}[\hat{b}]\hat{\rho}\\ &+\gamma^*_s\mathcal{D}[\hat{b}^\dagger\hat{b}]\hat{\rho}, \end{aligned} \label{eq:mas} \end{equation} where $\hat{H}_{\text{eff}}$ is given in Eq. (\ref{eq:eff}), and $\mathcal{D}[\hat{A}]\hat{\rho}=\hat{A}\hat{\rho}\hat{A}^\dagger-\hat{A}^\dagger\hat{A}\hat{\rho}/2-\hat{\rho}\hat{A}^\dagger\hat{A}/2$. The Hamiltonian in Eq. (\ref{eq:eff}) can be fully diagonalized, with three distinct eigenmodes $\hat{C}_d=\frac{-G_2\hat{a}_1+G_1\hat{a}_2}{\sqrt{G^2_1+G^2_2}}$ and $\hat{C}_{\pm}=1/\sqrt{2}(\frac{G_1\hat{a}_1+G_2\hat{a}_2}{\sqrt{G^2_1+G^2_2}}\pm\hat{b})$, with $G_1=\frac{\Tilde{g}_1\sqrt{N}\Omega}{\Delta}$ and $G_2=\Tilde{g}_2\sqrt{N}$. The mode $\hat{C}_d$ is referred to as the ``dark state'', as it decouples from the collective spin mode $\hat{b}$. The basic idea is to modulate the parameters $G_1(t)$ and $G_2(t)$ such that at $t=0$, $\hat{C}_d=-\hat{a}_1$, and at $t=t_f$, $\hat{C}_d=\hat{a}_2$. It has been shown that the optimal modulation can be obtained by setting $G^2_1(t)+G^2_2(t)=G^2$, where $G$ is a constant \cite{vasilev2009optimum,wang2012using}. Here we set $G_1(t)=G \sqrt{\text{tanh}(\alpha t)}$ and $G_2(t)=G \sqrt{1-\text{tanh}(\alpha t)}$, where $\alpha$ is the modulation strength parameter. See Appendix~\ref{Optimal-modulation} for more information on the role of $\alpha$. \subsection{Efficiency and fidelity}\label{Dark-EF} Let us first define the transduction efficiency and fidelity. Here we focus on a single-photon input.
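The adiabatic transfer itself can be illustrated by integrating the lossless single-excitation dynamics under the tanh modulation. This is only a sketch: decay and dephasing are omitted, $\alpha$ and $t_f$ are illustrative choices deeper in the adiabatic regime than the optimized values discussed below, and with this modulation the dark mode carries the excitation from $\hat{a}_1$ to $\hat{a}_2$ (the microwave-to-optical direction is obtained by exchanging the two ramps):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lossless single-excitation sketch of the dark-mode transfer: integrate
# i d(psi)/dt = M(t) psi in the basis (a1, a2, b) under the tanh modulation.
# Decay and dephasing are omitted; alpha and t_f are illustrative choices.
G = 2 * np.pi * 10e6           # rad/s, as in the text
alpha = 0.05 * G
t_f = 6.0 / alpha              # tanh(alpha * t_f) ~ 1, so C_d ~ a2 at the end

def rhs(t, psi):
    g1 = G * np.sqrt(np.tanh(alpha * t))
    g2 = G * np.sqrt(1.0 - np.tanh(alpha * t))
    M = np.array([[0, 0, g1], [0, 0, g2], [g1, g2, 0]], dtype=complex)
    return -1j * (M @ psi)

psi0 = np.array([1, 0, 0], dtype=complex)   # excitation starts in mode a1
sol = solve_ivp(rhs, (0.0, t_f), psi0, rtol=1e-8, atol=1e-10)
p1, p2, pb = np.abs(sol.y[:, -1]) ** 2

print(abs(p1 + p2 + pb - 1.0) < 1e-4)   # True: unitary evolution conserves norm
print(p2 > 0.85)                        # True: population ends mostly in a2
```

With the cavity decay rates and spin dephasing of Eq.~(\ref{eq:mas}) included, the transfer efficiency drops toward the values reported below.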
In the scenario where we attempt to convert a single microwave photon to an optical photon, we define the efficiency as \begin{equation} \eta=\text{Tr}[\hat{\rho}_f a^\dagger_1a_1], \end{equation} with $\hat{\rho}_f$ being the final state of the system. Here, the final state depends on the protocol time $t_f$, which is a parameter that can be optimized. If $t_f$ is too long, the transduction efficiency will be degraded by the decoherence in the system, but if it is too short, the transfer will be incomplete. Without loss of generality, we assume that the protocol time takes the form $t_f=r/\alpha$, with $r>0$ being a real positive number. In the simulation, we vary $t_f$ from $0.1/\alpha$ to $2.5/\alpha$, which results in a range of efficiencies for different $\alpha/G$ ratios. The optimal protocol time is found to be $1/\alpha$, which is used in all simulations in this work. In addition, for simplicity we assume the ions are located at the field maximum of both cavities. Therefore, we ignore the mode-mismatch factor. \begin{figure} \centering \includegraphics[scale=0.47]{MTO.png} \caption{Efficiency, fidelity and noise of the dark-state protocol as a function of $\alpha/G$ for microwave-to-optical transfer. Here for the parameters we assume $G/2\pi=10$ MHz, $\kappa_1=0.1 G$, $\kappa_2=0.001 G$, $\gamma_s=0.001 G$, $\omega_{g_1g_2}/2\pi=1.33$ GHz, $T=50$ mK, and $\gamma^*_s=0.0008 G$, which corresponds to a coherence time of $12~\mu$s.} \label{fig:F-E} \end{figure} Fidelity is often defined as the overlap between the density matrices at the beginning and end of the transfer process. Considering that differences in mode shape can be corrected by unitary transformations, here we instead focus on the role of noise due to thermal excitations (microwave photons), which is likely to be the most important challenge for quantum network implementations.
Therefore, to quantify the fidelity, we use the signal-to-noise ratio (SNR) and set: \begin{equation} F_\text{SNR}=\frac{1}{1+\text{SNR}^{-1}}. \end{equation} The SNR takes the form $\text{Tr}[\hat{\rho}_f a^\dagger_1a_1]/\text{Tr}[\hat{\rho}'_f a^\dagger_1a_1]$, where $\hat{\rho}'_f$ is the final state without any input. Here, the average number of thermal microwave photons is given by $\bar{n}_{\text{th}}=1/(e^{(\hbar\omega_{g_1g_2}/k_B T)}-1)$, where $\omega_{g_1g_2}$ is the frequency of the microwave transition $\ket{g_1}-\ket{g_2}$, and $T$ is the system temperature. In Fig. \ref{fig:F-E}, we set $t_f=1/\alpha$ and plot the transduction efficiency, fidelity and noise with respect to different values of $\alpha/G$. If $\alpha$ is too large (close to $G$ or even larger than $G$), adiabaticity can no longer be maintained, as the collective spin mode in the ground state will be occupied. This can degrade the transduction efficiency. On the other hand, if $\alpha$ is too small (close to the cavity decay rates), the transduction efficiency will be largely affected by the cavity decay rates. Therefore, we need to optimize this parameter. At around $\alpha/G=0.245$ and $T=50$ mK, the efficiency reaches its maximum value of $0.859$. At this efficiency, the corresponding fidelity is $0.788$. Note that here we used $T_2=12~\mu$s and estimated the dephasing rate using the relation $1/T_2=\gamma_s^\star+\gamma_s/2$, assuming a spin decay rate of $2\pi \times 10$ kHz. We justify the values used for the transition frequency and coherence time in Sec. \ref{Microwave-transition}. \begin{figure} \centering $\hspace{-70mm}\mathbf{(a)}$\\ \vspace{2mm} \includegraphics[scale=0.34]{fidelity.png}\\ $\hspace{-70mm}\mathbf{(b)}$\\ \vspace{2mm} \includegraphics[scale=0.34]{effi.png} \caption{(a) Fidelity as a function of temperature for two different dephasing rates.
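The thermal-occupation and fidelity formulas above are straightforward to evaluate; a minimal numerical sketch for the 1.33 GHz transition, using standard physical constants, reads:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def n_th(omega, T):
    """Mean thermal photon number at angular frequency omega (rad/s), temperature T (K)."""
    return 1.0 / np.expm1(hbar * omega / (kB * T))

def f_snr(snr):
    """Fidelity proxy F = 1 / (1 + SNR^-1) used in the text."""
    return 1.0 / (1.0 + 1.0 / snr)

omega = 2 * np.pi * 1.33e9           # the 1.33 GHz |g1>-|g2> transition
print(round(n_th(omega, 0.050), 2))  # 0.39 thermal photons at 50 mK
print(round(f_snr(4.0), 2))          # an SNR of 4 caps the fidelity at 0.8
```

Even at 50 mK the thermal occupation of a GHz transition is not negligible, which is why the fidelity in Fig.~\ref{fig:F-E} stays visibly below the efficiency.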
Here we set $G/2\pi=10$ MHz, $\alpha/G=0.245$, $\kappa_1=0.1 G$, $\kappa_2=0.001 G$, $\gamma_s=0.001 G$, and $\omega_{g_1g_2}/2\pi=1.33$ GHz. The whole protocol time is fixed to $t_f=1/\alpha$ with $\alpha=0.245 G$. (b) Efficiency as a function of time for three different dephasing rates. The other parameters are the same as those used in (a), and the temperature is $T=50$ mK.} \label{fig:Fidelity} \end{figure} In Fig. \ref{fig:Fidelity}(a), for a fixed $\alpha/G=0.245$ with the protocol time taken to be $1/\alpha$, we show the change in fidelity of the microwave-to-optical transfer with respect to temperature. By increasing the temperature, the average number of thermal microwave photons increases. As a result, the transfer fidelity decreases. Note that superconducting qubits require temperatures in the mK range. In this figure, we also show the fidelity for two different dephasing rates. Here, the dephasing rates $\gamma_s^\star=G$ and $\gamma_s^\star=0.0008 G$ correspond to coherence times of $16$ ns and $12~\mu$s, respectively. At low temperatures, increasing the dephasing rate does not significantly impact $F_\text{SNR}$; only at higher temperatures is the fidelity slightly reduced. The transduction fidelity thus appears extremely robust against the dephasing rate, but this figure of merit does not fully capture the robustness of the dark-state protocol. We therefore plot the transduction efficiency with respect to time in Fig.~\ref{fig:Fidelity}(b) for better illustration. As can be seen, the efficiency is also very robust against the dephasing rate. When the dephasing rate is $\gamma^*_s=0.1 G$, the efficiency is still around 80$\%$, slightly lower than that for $\gamma^*_s=0.0008 G$. However, this robustness has limits: when the dephasing rate is very large, e.g., $\gamma^*_s=G$, the efficiency is only 50$\%$.
\section{Experimental implementation} \label{impl} Our system is composed of an ensemble of $\mathrm{^{167}Er}$ ions in an optical cavity and a microwave superconducting coplanar waveguide (CPW) cavity. For the microwave side, the coupling of rare-earth spin ensembles to a microwave cavity has been demonstrated \cite{probst2013anisotropic, tkalvcec2014strong, staudt2012coupling, chen2016coupling}, and a coupling strength of $34$ MHz is reported in Ref. \cite{probst2013anisotropic}. Thus, it is reasonable to assume a collective coupling strength $\Tilde{g}_2\sqrt{N}\sim 2\pi\times 10$ MHz in our scheme. Furthermore, for the CPW cavity, a quality factor $Q\sim 10^6$ is achievable \cite{niemczyk2010circuit, xiang2013hybrid}, giving $\kappa_2\sim 2\pi\times 1.33$ kHz for $\omega_{g_1g_2}/2\pi=1.33$ GHz. In our simulation, for a CPW resonator coupled to a crystal at mK temperatures, we assume a higher decay rate of $\kappa_2\sim 2\pi\times 10$ kHz, corresponding to a quality factor of $1.3\times 10^5$. Here, the spin decay rate $\gamma_s$ is assumed to be $2\pi\times 10$ kHz. Considering the coherence time of $12~\mu$s, this corresponds to a dephasing rate of $\gamma^*_s\sim 2\pi\times 8$ kHz. So far, for Fabry-Perot cavities, quality factors of $Q\sim 10^9$ have been realized \cite{xiang2013hybrid,aoki2006observation,goto2010experimental}. Using toroid microcavities, a $Q$ factor of $4\times 10^8$ has also been measured at a wavelength of $1550$ nm \cite{kippenberg2004demonstration}. In addition, a quality factor exceeding $1.1 \times 10^7$ has been reported in photonic crystal nanocavities \cite{asano2017photonic}. Here, we consider a decay rate of $\kappa_1=2\pi \times 10^6$ Hz, corresponding to $Q \sim10^8$, for the optical cavity.
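The parameter budget of this section follows from simple arithmetic, which can be checked directly. The sketch below only verifies the numbers quoted in the text ($\kappa = \omega/Q$ and $1/T_2 = \gamma^*_s + \gamma_s/2$); no new parameters are introduced:

```python
import numpy as np

# Order-of-magnitude checks of the cavity and spin parameters quoted in
# this section.  All inputs are from the text; only the arithmetic is new.

# Microwave CPW cavity: kappa/2pi = nu/Q.
nu_mw = 1.33e9                       # Hz, the 1.33 GHz spin transition
print(round(nu_mw / 1e6))            # Q = 1e6 gives kappa2/2pi ~ 1330 Hz
print(round(nu_mw / 10e3 / 1e5, 2))  # kappa2/2pi = 10 kHz gives Q ~ 1.33e5

# Optical cavity at 1550 nm: kappa1/2pi = 1 MHz corresponds to Q ~ 1e8.
nu_opt = 2.998e8 / 1550e-9
print(f"{nu_opt / 1e6:.1e}")         # ~1.9e8

# Spin dephasing from the coherence time, 1/T2 = gamma*_s + gamma_s/2
# (angular rates), with T2 = 12 us and gamma_s = 2pi x 10 kHz:
T2 = 12e-6
gamma_s = 2 * np.pi * 10e3
gamma_star = 1.0 / T2 - gamma_s / 2.0
print(round(gamma_star / (2 * np.pi) / 1e3, 1))  # ~8.3 kHz, i.e. 2pi x 8 kHz
```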
The coupling strength between the $\mathrm{^{167}Er}$ ions and the optical cavity, $\Tilde{g}_1\sqrt{N}$, is taken to be $\sim 2\pi\times 500$ MHz, and the inhomogeneous broadening of the optical transitions is typically $\delta_j\sim 2\pi\times 1$ GHz \cite{thiel2011rare,baldit2005ultraslow}. Moreover, we take the average optical detuning $\Delta\sim 2\pi\times 10$ GHz, which satisfies $\Delta\gg\delta_j$. For the control Rabi frequency, we choose $\Omega\sim 2\pi\times 200$ MHz, which also satisfies $\Omega\ll\Delta$. With these values, we estimate $G_1=\Tilde{g}_1\sqrt{N}\Omega/\Delta\sim 2\pi\times 10$ MHz. \subsection{Microwave transitions of $^{167}\text{Er}$:YSO} \label{Microwave-transition} Erbium has eight Kramers doublets in the ground state and seven in the excited state. Due to the hyperfine and quadrupole interactions, each doublet is split into sixteen hyperfine sublevels. At low temperatures, only the lowest doublet is populated. The effective spin Hamiltonian of a Kramers ion with non-zero nuclear spin can be written as \cite{abragam2012electron} \begin{equation} H_{\text{eff}}=\beta_e \bold{B} \cdot \bold{g} \cdot \bold{S} + \bold{I} \cdot \bold{A} \cdot \bold{S}+\bold{I} \cdot \bold{Q} \cdot \bold{I} - \beta_n g_n \bold{B} \cdot \bold{I},\label{H} \end{equation} where $\beta_e$ ($\beta_n$) is the Bohr (nuclear) magneton, $\bold{B}$ is the external magnetic field, $\bold{A}$ is the hyperfine tensor, $\bold{Q}$ is the electric-quadrupole tensor, $\bold{S}$ ($\bold{I}$) is the vector of electronic (nuclear) spin operators, $\bold{g}$ is the $g$ tensor, and $g_n$ is the nuclear $g$-factor. The first term of the above Hamiltonian describes the electronic Zeeman interaction. The second and third terms describe the hyperfine and electric quadrupole (second-order hyperfine) interactions. Finally, the last term is the nuclear Zeeman interaction.
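The structure of the zero-field part of this spin Hamiltonian is easy to sketch numerically: for $S=1/2$ and $I=7/2$ the Hilbert space has $2\times 8 = 16$ dimensions, and at $B=0$ only the $\bold{I}\cdot\bold{A}\cdot\bold{S}$ and $\bold{I}\cdot\bold{Q}\cdot\bold{I}$ terms survive. The tensors below are hypothetical diagonal placeholders (in GHz), not the fitted $\mathrm{^{167}Er}$:YSO parameters of the cited works:

```python
import numpy as np

def spin_ops(j):
    """(Jx, Jy, Jz) for spin j, in the basis m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    off = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(off, 1)                  # raising operator
    return (Jp + Jp.T) / 2, (Jp - Jp.T) / 2j, np.diag(m)

Sx, Sy, Sz = spin_ops(0.5)                # effective electron spin S = 1/2
Ix, Iy, Iz = spin_ops(3.5)                # 167Er nuclear spin I = 7/2
Sv = [np.kron(op, np.eye(8)) for op in (Sx, Sy, Sz)]
Iv = [np.kron(np.eye(2), op) for op in (Ix, Iy, Iz)]

# Hypothetical diagonal A and Q tensors in GHz -- placeholders only, NOT
# the fitted 167Er:YSO parameters.
A = np.diag([0.5, 0.9, 1.3])
Q = np.diag([0.02, 0.03, -0.05])

# Zero-field Hamiltonian H = I.A.S + I.Q.I (Zeeman terms vanish at B = 0).
H = sum(A[i, j] * Iv[i] @ Sv[j] for i in range(3) for j in range(3))
H = H + sum(Q[i, j] * Iv[i] @ Iv[j] for i in range(3) for j in range(3))

levels = np.linalg.eigvalsh(H)
print(levels.size)                        # 16 hyperfine sublevels
print(bool(np.allclose(H, H.conj().T)))   # True: Hermitian by construction
```

Replacing the placeholder tensors with the measured $\bold{g}$, $\bold{A}$, and $\bold{Q}$ parameters reproduces the zero-field hyperfine spectrum discussed below.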
\begin{table*}[t] \caption{Ground state transition frequencies, transition strengths and coherence times for site 1 of $\mathrm{^{167}Er}$:YSO at zero magnetic field. Energy levels are labeled 1--16 from lowest to highest frequency. Here transition dipole moments are determined relative to the three orthogonal optical extinction axes defined by $D_1(X)$, $D_2(Y)$, and $b(Z)$.} \begin{ruledtabular} \begin{tabular}{c c c} Transition frequency (GHz) & $d(D_1,D_2,b)$ (GHz/T) & Coherence time ($\mu$s) \\[0.05cm] \hline \\[-0.2cm] 1.33 (7 $\longleftrightarrow$ 10) & (0.48,\,2.05,\,0.59) & 12.16\\[0.05cm] 2.374 (6 $\longleftrightarrow$ 12) & (3.66,\,6.43,\,1.66) & 4.56\\[0.05cm] 2.366 (5 $\longleftrightarrow$ 11) & (2.41,\,9.15,\,0.34) & 4.4\\[0.05cm] 1.821 (7 $\longleftrightarrow$ 11) & (3.35,\,12.83,\,7.3) & 1.61\\[0.05cm] 1.304 (8 $\longleftrightarrow$ 12) & (3.52,\,13.01,\,7.51) & 1.59\\[0.05cm] \end{tabular} \end{ruledtabular} \label{tab:dipole} \end{table*} The spin Hamiltonian parameters of $\mathrm{^{167}Er}$:YSO have been estimated from electron spin resonance experiments for the ground states \cite{chen2018hyperfine} and from a crystal-field model for the excited states \cite{horvath2019extending}. The spin Hamiltonian formalism reproduces the measured transition frequencies well: the difference is less than $\sim$40 ($\sim$100) MHz for the ground (excited) states \cite{rakonjac2020long, horvath2019extending} (see Appendix \ref{energy-levels} for the MW transitions in the ground state). In Table~\ref{tab:dipole}, we list the five transition frequencies in the GHz regime with the longest coherence times at zero magnetic field. Here, we calculated the transition dipole moments $d_{D_1,D_2,b}$ between energy levels in the ground hyperfine structure.
To do so, we consider the atom-field interaction Hamiltonian \begin{equation} H_{I}=\beta_e \bold{B_{ac}} \cdot \bold{g} \cdot \bold{S} - \beta_n g_n \bold{B_{ac}} \cdot \bold{I}, \end{equation} where the transition is driven by an ac magnetic field. Then, to estimate the magnetic dipole moment of a transition along the direction of the ac field, we use the probability amplitude scheme \cite{scully1999quantum} \begin{equation} d_{mn}=\bra{\psi(B)_m}\beta_e \bold{g} \cdot \bold{S} - \beta_n g_n \bold{I} \ket{\psi(B)_n}. \end{equation} In the absence of a magnetic field, the electronic and nuclear states are highly mixed. Hence, the transition moments are larger than the nuclear magneton. In rare-earth-ion doped crystals, spin flips can occur due to spin-spin (i.e., spin flip-flop) and spin-lattice relaxations. At low magnetic fields, which is the relevant regime here, spin flip-flops are the governing mechanism. Decreasing the flip-flop rate is possible by reducing the temperature to polarize the spins. Note that low temperatures are also required for operating superconducting qubits. Hence, we consider an ensemble of $\mathrm{^{167}Er}$:YSO at sub-kelvin temperatures. In this case, flipping of nearest-neighbour ions is the dominant perturbation mechanism contributing to the spin decoherence. Considering the distance to the nearest-neighbour yttrium ions, with a gyromagnetic moment of $2.09$ MHz/T, one can estimate the magnitude of the magnetic field fluctuations created at the Er site as $\Delta B=26~\mu$T \cite{thesis}. To estimate the coherence times in Table~\ref{tab:dipole}, we assume that decoherence occurs on a timescale much longer than that of the magnetic field fluctuations.
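A numerical sketch of the transition-moment estimate $d_{mn}$ can be built on the same toy zero-field Hamiltonian: diagonalize it, then sandwich the moment operator along one axis between two eigenstates. The $\bold{A}$ and $\bold{g}$ tensors and the nuclear term below are hypothetical placeholders, not the fitted $\mathrm{^{167}Er}$:YSO values; only the Bohr magneton in GHz/T is a physical constant:

```python
import numpy as np

def spin_ops(j):
    m = np.arange(j, -j - 1, -1)
    off = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(off, 1)
    return (Jp + Jp.T) / 2, (Jp - Jp.T) / 2j, np.diag(m)

Sv = [np.kron(op, np.eye(8)) for op in spin_ops(0.5)]   # S = 1/2
Iv = [np.kron(np.eye(2), op) for op in spin_ops(3.5)]   # I = 7/2

# Hypothetical tensors, NOT fitted 167Er:YSO values.
A = np.diag([0.5, 0.9, 1.3])      # hyperfine tensor, GHz
g = np.diag([2.0, 4.0, 8.0])      # g tensor
mu_B = 13.996                     # Bohr magneton in GHz/T
mu_n_gn = 0.002                   # nuclear term in GHz/T, placeholder

# Zero-field eigenstates of the toy hyperfine Hamiltonian.
H0 = sum(A[i, i] * Iv[i] @ Sv[i] for i in range(3))
evals, evecs = np.linalg.eigh(H0)

# Moment operator along one axis (say b): beta_e (g.S)_z - beta_n g_n I_z.
Mz = mu_B * sum(g[2, k] * Sv[k] for k in range(3)) - mu_n_gn * Iv[2]
d_01 = evecs[:, 0].conj() @ Mz @ evecs[:, 1]
print(bool(np.allclose(Mz, Mz.conj().T)))  # True: Hermitian moment operator
print(abs(d_01))   # element in GHz/T; its size depends on the placeholders
```

Because the zero-field eigenstates mix electronic and nuclear character, such elements are generically set by the electronic scale $\beta_e g$, not by the nuclear magneton, as stated above.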
In this case, the coherence time is given by \cite{zhong2015optically} \begin{equation} \frac{1}{\pi T_2}=S_1\cdot \Delta B+ \Delta B \cdot S_2 \cdot \Delta B, \label{coherenceT} \end{equation} where $S_1$ is the gradient and $S_2$ is the curvature of the transition of interest. To calculate $S_1$ and $S_2$, we define the Zeeman gradient and curvature tensor parameters as \cite{mcauslan2012reducing} \begin{equation} \begin{aligned} \nu_i^{mn}(B)&=\frac{\partial\,(\omega_m(B)-\omega_n(B))}{\partial B_i},\\ C_{ij}^{mn}&=\frac{\partial^2(\omega_m(B)-\omega_n(B))}{\partial B_i\partial B_j}, \end{aligned} \end{equation} where \begin{equation} \begin{aligned} &\frac{\partial\,\omega_m(B)}{\partial B_i} =\bra{\psi(B)_m}\!\zeta_{i}\!\ket{\psi(B)_m},\\ \frac{\partial^2\omega_m(B)}{\partial B_i\partial B_j} & =\!\!\sum\limits_{n\neq m}\!\!\frac{\bra{\psi(B)_m}\!\zeta_{i}\!\ket{\psi(B)_n}\!\bra{\psi(B)_n}\!\zeta_{j}\!\ket{\psi(B)_m}}{\omega_m(B)-\omega_n(B)}. \end{aligned} \end{equation} Here $\ket{\psi(B)}$ is the state of the system, $\omega(B)$ is its corresponding energy, $\omega_m(B)-\omega_n(B)$ is the frequency difference for the $m \longleftrightarrow n$ transition, and $\zeta_{i}=\beta_e \sum_j \bold{g}_{ij}\bold{S}_j-\beta_ng_n \bold{I}_i$, where $\bold{S}$ is the electronic spin operator and $\bold{I}$ is the nuclear spin operator. We use the maximum curvature of each transition to estimate the lowest coherence time obtainable from Eq.~(\ref{coherenceT}). Hence, $S_2$ is the largest absolute eigenvalue of $C$, and $S_1$ can be estimated from the magnitude of $\nu$. The energy levels given in Table~\ref{tab:dipole} can be considered as the ground states $\ket{g_1}$ and $\ket{g_2}$ shown in Fig.~\ref{fig: dark}(b).
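Along a single field direction, Eq.~(\ref{coherenceT}) reduces to a scalar expression that is easy to evaluate. In the sketch below only $\Delta B = 26~\mu$T is taken from the text; the gradient $S_1$ and curvature $S_2$ are hypothetical magnitudes chosen for illustration:

```python
import numpy as np

# Scalar evaluation of 1/(pi*T2) = S1*dB + dB*S2*dB along one field
# direction.  Only dB comes from the text; S1 and S2 are hypothetical.
dB = 26e-6       # T, field fluctuation from nearest-neighbour Y spin flips
S1 = 1.0e9       # Hz/T, hypothetical transition gradient
S2 = 20.0e9      # Hz/T^2, hypothetical transition curvature

T2 = 1.0 / (np.pi * (S1 * dB + dB * S2 * dB))
print(round(T2 * 1e6, 1))     # ~12.2 us: the gradient term dominates

# At a ZEFOZ point S1 = 0, so only the (much smaller) curvature term remains:
T2_zefoz = 1.0 / (np.pi * dB * S2 * dB)
print(bool(T2_zefoz > 100 * T2))   # True: orders of magnitude longer
```

The quadratic suppression of the curvature term in $\Delta B$ is what makes ZEFOZ transitions so much less sensitive to the field fluctuations, as discussed next.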
Note that, because the dark state protocol is to some extent robust against decoherence, the coherence time of $\mathrm{^{167}Er}$ at zero field is long enough to achieve a high transfer fidelity and efficiency. Generally speaking, one can determine the required coherence time from the microwave cavity-spin coupling strength. To date, coupling of rare-earth spin ensembles to a microwave cavity has been demonstrated by several groups \cite{probst2013anisotropic, tkalvcec2014strong, staudt2012coupling, chen2016coupling}. For an efficient transfer, the cavity should be operated in the strong coupling regime, and the system should remain coherent during the transfer time. Transitions with zero gradient with respect to the magnetic field (so-called zero first-order Zeeman (ZEFOZ) points) have a reduced sensitivity to field fluctuations. Even isotopes of erbium have a purely first-order dependence on the magnetic field, and therefore there is no ZEFOZ point for $^{168}$Er. However, an advantage of rare-earth ions that have an odd number of 4f electrons (i.e., Kramers ions) with non-zero nuclear spin (like $\mathrm{^{167}Er}$) is that the interaction between the nuclear levels and the electronic doublets can result in ZEFOZ transitions even at zero magnetic field. For $\mathrm{^{167}Er}$, ZEFOZ points (where $S_1=0$ in Eq.~(\ref{coherenceT})) at zero field are associated with transitions with sub-GHz frequencies. At zero field, the longest coherence time we have estimated is 388~$\mu$s, for the ZEFOZ transition frequency of 873 MHz. Although the transitions of interest for use in transducer protocols are those with frequencies of a few GHz, frequencies around $500$ MHz can still be used for interacting with fluxonium qubits \cite{nesterov2018microwave}. \section{Conclusion and outlook} \label{conclusion} One of the important applications of quantum transducers is to connect quantum processors in a quantum network.
In such a network, microwave-to-optical transducers can convert microwave photons into optical photons that are suitable for long-distance quantum communication. In this paper, using an optical and a microwave cavity, we proposed the use of $\mathrm{^{167}Er}$:YSO as an intermediary for a microwave-to-optical quantum transducer. We presented a theoretical study of the proposed transducer design and calculated the achievable efficiency and fidelity of the system in the absence of external magnetic fields. Operating at nearly zero field is important when interfacing with superconducting qubits. We have shown the robustness of the dark state protocol with respect to the dephasing rate. We then investigated the ground state MW transitions and estimated transition frequencies, coherence times and transition strengths. These results can also be used in other transducer protocols where a detailed knowledge of the MW transitions is required. Note that, using the spin Hamiltonian parameters obtained from the crystal-field model, one can also study the properties of the excited state energy levels. Looking forward, our investigation of the MW transitions may provide further motivation for designing MW memories that interact with superconducting systems \cite{gouzien2021factoring}. In addition, the use of $\mathrm{^{167}Er}$:YSO has already been suggested for quantum repeaters \cite{asadi2020protocols}. Therefore, using $\mathrm{^{167}Er}$:YSO, one can envision an integrated system for wavelength transduction and long-distance entanglement distribution. \section*{Acknowledgments} This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery Grant and CREATE programs, the National Research Council (NRC) of Canada through its High-Throughput Secure Networks (HTSN) challenge program, and by Alberta Innovates Technology Futures (AITF).
\section{Introduction}\label{ssec:intro} Superconducting quantum systems are among the leading candidates for quantum information processing. However, microwave photons, which interact efficiently with superconducting qubits, are not well suited for transmitting quantum information over long distances. This is especially important for designing quantum repeaters that distribute entanglement between remote locations \cite{briegel1998quantum,sangouard2011quantum, kumar2019towards, asadi2018quantum, childress2005fault}. To overcome this problem, the use of microwave-to-optical transducers has been suggested \cite{kurizki2015quantum, lauk2020perspectives}. There are several mediating systems to host transducers, including atomic ensembles \cite{imamouglu2009cavity, williamson2014magneto, o2014interfacing}, magnons \cite{everts2020ultrastrong, hisatomi2016bidirectional}, electro-optomechanical \cite{hill2012coherent,tian2012adiabatic}, and electro-optical systems \cite{soltani2017efficient}. Solid-state atomic ensembles such as rare-earth (RE) ions \cite{o2014interfacing, williamson2014magneto} and NV centers \cite{zhao2012scheme,li2017quantum}, in addition to atomic ensembles in gases \cite{hafezi2012atomic,gard2017microwave, petrosyan2009reversible}, represent one of the most promising systems for designing transducers, as they offer level structures with addressable optical and microwave transitions. Moreover, solid-state systems are attractive from the point of view of scalability. In rare-earth ions doped into a solid, the outer 5s and 5p shells insulate the 4f shell from the crystal environment. As a result, these ions are usually less subject to decoherence at low temperatures.
Therefore, rare-earth ion doped crystals are widely used in quantum optics, in particular for quantum information storage and signal processing \cite{de2008solid,thiel2011rare, lauritzen2010telecommunication}. Among rare-earth ions with non-zero nuclear spins, ytterbium ($\mathrm{^{171}Yb}$), with a nuclear spin of $I=1/2$, has the simplest possible hyperfine energy structure, which makes the manipulation of spin states straightforward \cite{tiranov2018spectroscopic, kindem2018characterization}. However, it does not have a telecom-wavelength transition. In general, telecom-wavelength photons are the best candidates to carry quantum information over long distances due to their minimal absorption in optical fibers. Erbium is a rare-earth ion that offers narrow homogeneous broadening and optical transitions in the telecom window. As a result, several transducer proposals have been developed based on $\mathrm{^{168}Er}$ doped into crystals. In particular, O’Brien et al. \cite{o2014interfacing} proposed the use of a controlled reversible inhomogeneous broadening (CRIB) quantum memory to absorb the incoming pulse in an Er-doped yttrium orthosilicate (YSO) crystal. The absorbed photon is then mapped onto either ground-state or optical excitations, depending on the direction of the signal conversion. To improve the efficiency of this protocol, Welinski et al. \cite{welinski2019electron} proposed to use excited-state spin levels instead of the ground states, as the former are less subject to dephasing mechanisms and therefore have a longer coherence time. On the other hand, there have also been efforts to design transducers based on off-resonant approaches. In this regard, utilizing a Raman-like process, conversion of a microwave signal into an optical field at telecom wavelength has been demonstrated in $^{168}$Er:YSO \cite{fernandez2019cavity,fernandez2015coherent}.
Most recently, the same group proposed the use of an erbium chloride hexahydrate ($^{168}$ErCl$_3\cdot$6H$_2$O) crystal without disorder to design a transducer with enhanced ion densities but small optical and spin broadening \cite{everts2019microwave}. The use of off-resonant approaches is not limited to rare-earth ions. Using a dark mode of the collective spin excitations, microwave-to-optical transfer of quantum states has also been discussed for nitrogen-vacancy centers \cite{li2017quantum}. Most off-resonant schemes are to some extent robust against decoherence mechanisms. Erbium has an odd isotope, $\mathrm{^{167}Er}$, with a non-zero nuclear spin of $I=7/2$. A key advantage of using $\mathrm{^{167}Er}$ instead of $\mathrm{^{168}Er}$ is that even at zero magnetic field, $\mathrm{^{167}Er}$:YSO offers around 5 GHz of hyperfine splitting. This is especially important when interacting with superconducting resonators such as superconducting coplanar waveguide cavities, which suffer from energy dissipation due to Abrikosov vortex motion in the presence of magnetic fields \cite{song2009microwave}. Here, utilizing the dark state protocol, we propose the use of $\mathrm{^{167}Er}$ ions doped into YSO for microwave-to-optical transduction in a three-level system at zero external field. YSO is an attractive host crystal because of i) the small nuclear magnetic moments of yttrium ions, and ii) the low isotopic natural abundances of other constituent spins. We present a detailed analysis of the dark state transducer protocol and estimate the transfer efficiency and fidelity in Sec.~\ref{protocol}. The implementation of the protocol is discussed in Sec.~\ref{impl}. In this section, using the spin Hamiltonian, we investigate properties of the ground-state microwave (MW) transitions of $\mathrm{^{167}Er}$:YSO at zero field, and we list some of the transitions in the GHz regime that can be used for the dark state protocol.
Finally, we conclude and provide an outlook in Sec.~\ref{conclusion}. \section{Transduction}\label{protocol} \subsection{Dark state protocol} \begin{figure} \centering \includegraphics[width=7cm]{diagram.pdf} \caption{\textbf{(a)} Schematic design of the transducer where the ensemble of $\mathrm{^{167}Er}$ ions doped into YSO is coupled to a microwave superconducting coplanar waveguide and an optical cavity. \textbf{(b)} Level diagram for the $j$th ion coupled to an optical cavity and a microwave cavity. This three-level system is driven by a classical field with Rabi frequency $\Omega$, and the transitions $\ket{g_1}-\ket{e}$ and $\ket{g_1}-\ket{g_2}$ are coupled to the optical and microwave photons respectively. The detuning $\Delta_j$ is for the $j$th ion, set to be the same for both transitions.} \label{fig: dark} \end{figure} Inspired by work on optomechanical systems to transfer quantum states between two different frequencies \cite{wang2012using}, and on four-level nitrogen-vacancy centers in diamond for quantum transduction \cite{li2017quantum}, here we apply the dark state protocol to Er ions with a three-level structure. The main advantage of this protocol is that it is robust against spin decoherence, as the collective spin state is only virtually populated during the transfer time. Erbium is a Kramers ion, as it has an odd number of 4f electrons; its ground state is $^4I_{15/2}$ and its lowest excited state is $^4I_{13/2}$. We define the three-level system using the states $\ket{g_1}$ and $\ket{g_2}$ from the $^4I_{15/2}$ ground state, and one of the energy levels of the excited state $^4I_{13/2}$ as $\ket{e}$. In Sec.~\ref{Microwave-transition}, we provide some examples of energy levels of $\mathrm{^{167}Er}$:YSO that can be used as ground states $\ket{g_1}$ and $\ket{g_2}$ even at zero external field. Before discussing how the protocol works, we first describe the system and its Hamiltonian.
An ensemble of Er ions is placed inside an optical cavity and a microwave superconducting coplanar waveguide (CPW) cavity. As shown in Fig.~\ref{fig: dark}, the optical transition $\ket{g_1}-\ket{e}$ is coupled to the optical cavity and the transition $\ket{g_1}-\ket{g_2}$ is coupled to the microwave cavity, while the transition $\ket{e}-\ket{g_2}$ is driven by a classical field with Rabi frequency $\Omega$. Here, for simplicity, we ignore the inhomogeneity in the coupling strength and define two average coupling strengths for the ions as $\Tilde{g}_1$ and $\Tilde{g}_2$ \cite{wesenberg2009quantum,amsuss2011cavity,li2017quantum} (for the effect of inhomogeneity in the coupling strength, see Ref.~\cite{kubo2010strong}). The detunings of the optical transition and the transition $\ket{e}-\ket{g_2}$ are set to be the same, with $\Delta_j=\omega^j_{eg_1}-\omega_1=\omega^j_{eg_2}-\omega_\Omega$, where $\omega_1,\omega_\Omega$ are the frequencies of the optical cavity and the classical control field, and the index $j$ indicates the $j$th ion. We introduce the average detuning $\Delta=\Delta_j-\delta_j$, where $\delta_j$ is the inhomogeneous broadening for the $j$th spin in the excited state.
In the large detuning regime when $|\Delta|\gg |\Omega|, |\delta_j|, |\Tilde{g}_1|, |\Tilde{g}_2|$, the system Hamiltonian can be written as \cite{brion2007adiabatic, james2007effective}: \begin{equation} \begin{aligned} H_{\text{eff}}&=\frac{\Tilde{g}^2_1}{\Delta}\hat{a}^\dagger_1\hat{a}_1\hat{J}_{11}+\frac{\Omega^2}{\Delta}\hat{J}_{22}+(\Tilde{g}_2\hat{a}^\dagger_2+\frac{\Tilde{g}_1\Omega}{\Delta}\hat{a}^\dagger_1)\hat{J}_{12}\\ &+(\Tilde{g}_2\hat{a}_2+\frac{\Tilde{g}_1\Omega}{\Delta}\hat{a}_1)\hat{J}_{21}, \end{aligned} \label{eq:aeff} \end{equation} where $\hat{J}_{11}=\sum_{j=1}^{N}{\ket{g_1}_j\bra{g_1}}$, $\hat{J}_{22}=\sum_{j=1}^{N}{\ket{g_2}_j\bra{g_2}}$, $\hat{J}_{12}=\sum_{j=1}^{N}{\ket{g_1}_j\bra{g_2}}$, and $\hat{J}_{21}=\sum_{j=1}^{N}{\ket{g_2}_j\bra{g_1}}$ are the collective spin operators. In the low excitation regime, we can apply the Holstein-Primakoff approximation. Then, the above Hamiltonian can be further written as: \begin{equation} H_{\text{eff}}=\frac{\Tilde{g}_1\sqrt{N}\Omega}{\Delta}\hat{a}_1\hat{b}^\dagger+\Tilde{g}_2\sqrt{N}\hat{a}_2\hat{b}^\dagger+\text{H.c.}, \label{eq:eff} \end{equation} where we ignored the first two terms in Eq. (\ref{eq:aeff}) as they only give us a global energy shift which can be compensated later on. We also used the relations $\hat{J}_{12}\approx \sqrt{N}\hat{b}$ and $\hat{J}_{21}\approx\sqrt{N}\hat{b}^\dagger$ with the operator $\hat{b}$ satisfying the commutation relation $[\hat{b},\hat{b}^\dagger]=1$. Hence, we obtain a Hamiltonian that involves three different bosonic modes. Now, let us take several important imperfections into consideration: the optical cavity decay rate $\kappa_1$, the microwave cavity decay rate $\kappa_2$, the collective spin decay rate $\gamma_s$, and the collective spin dephasing rate $\gamma^*_s$. 
We use the master equation to describe the system dynamics, which is given by: \begin{equation} \begin{aligned} \Dot{\hat{\rho}}&=-i[\hat{H}_{\text{eff}},\hat{\rho}]+\kappa_1\mathcal{D}[\hat{a}_1]\hat{\rho}+\kappa_2\mathcal{D}[\hat{a}_2]\hat{\rho}+\gamma_s\mathcal{D}[\hat{b}]\hat{\rho}\\ &+\gamma^*_s\mathcal{D}[\hat{b}^\dagger\hat{b}]\hat{\rho}, \end{aligned} \label{eq:mas} \end{equation} where $\hat{H}_{\text{eff}}$ is given in Eq. (\ref{eq:eff}), and $\mathcal{D}[\hat{A}]\hat{\rho}=\hat{A}\hat{\rho}\hat{A}^\dagger-\hat{A}^\dagger\hat{A}\hat{\rho}/2-\hat{\rho}\hat{A}^\dagger\hat{A}/2$. Eq. (\ref{eq:eff}) can be fully diagonalized with three distinct eigenmodes, which are $\hat{C}_d=\frac{-G_2\hat{a}_1+G_1\hat{a}_2}{\sqrt{G^2_1+G^2_2}}$, $\hat{C}_{\pm}=1/\sqrt{2}(\frac{G_1\hat{a}_1+G_2\hat{a}_2}{\sqrt{G^2_1+G^2_2}}\pm\hat{b})$ with $G_1=\frac{\Tilde{g}_1\sqrt{N}\Omega}{\Delta}$ and $G_2=\Tilde{g}_2\sqrt{N}$. The mode $\hat{C}_d$ is referred to as the ``dark state'' as it decouples from the collective spin mode $\hat{b}$. Here, the basic idea is to modulate the parameters $G_1(t)$ and $G_2(t)$ such that at $t=0$, $\hat{C}_d=-\hat{a}_1$, and at $t=t_f$, $\hat{C}_d=\hat{a}_2$. It has been shown that the optimal modulation can be obtained by setting $G^2_1(t)+G^2_2(t)=G^2$, where $G$ is a constant \cite{vasilev2009optimum,wang2012using}. Here we set $G_1(t)=G \sqrt{\text{tanh}(\alpha t)}$ and $G_2(t)=G \sqrt{1-\text{tanh}(\alpha t)}$, where $\alpha$ is the modulation strength parameter. See Appendix~\ref{Optimal-modulation} for more information on the role of $\alpha$. \subsection{Efficiency and fidelity}\label{Dark-EF} Let us first define the transduction efficiency and fidelity. Here we focus on a single-photon input.
In the scenario where we attempt to convert a single microwave photon to an optical photon, we define the efficiency as \begin{equation} \eta=\text{Tr}[\hat{\rho}_f a^\dagger_1a_1], \end{equation} with $\hat{\rho}_f$ being the final state of the system. Here, the final state depends on the protocol time $t_f$, which is a parameter that can be optimized. If $t_f$ is too long, the transduction efficiency is degraded by decoherence in the system, while if it is too short, the transfer remains incomplete. Without loss of generality, we assume that the protocol time takes the form $t_f=r/\alpha$ with $r>0$ being a real positive number. In the simulation, we vary $t_f$ from $0.1/\alpha$ to $2.5/\alpha$, which results in a range of efficiencies for different $\alpha/G$ ratios. The optimal protocol time is found to be $1/\alpha$, which is used in all simulations in this work. In addition, for simplicity we assume the ions are located at the field maxima of both cavity modes. Therefore, we ignore the mode mismatch factor. \begin{figure} \centering \includegraphics[scale=0.47]{MTO.png} \caption{Efficiency, fidelity and noise of the dark-state protocol as a function of $\alpha/G$ for microwave-to-optical transfer. Here for the parameters we assume $G/2\pi=10$ MHz, $\kappa_1=0.1 G$, $\kappa_2=0.001 G$, $\gamma_s=0.001 G$, $\omega_{g_1g_2}/2\pi=1.33$ GHz, $T=50$ mK, and $\gamma^*_s=0.0008 G$, which corresponds to a coherence time of $12~\mu$s.} \label{fig:F-E} \end{figure} Fidelity is often defined as the overlap between the density matrices at the beginning and end of the transfer process. Considering that differences in mode shape can be corrected by unitary transformations, here we instead focus on the role of noise due to thermal excitations (microwave photons), which is likely to be the most important challenge for quantum network implementations.
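The efficiency just defined can be estimated with a simple numerical sketch. The code below integrates the single-excitation dynamics of Eq.~(\ref{eq:eff}) with the modulated couplings; this is a simplified zero-temperature amplitude picture in which cavity and spin decay enter as non-Hermitian $-i\kappa/2$ terms, while pure dephasing and thermal photons (which require the full master equation, Eq.~(\ref{eq:mas})) are omitted, so the numbers are not expected to match Fig.~\ref{fig:F-E} exactly. The pulse shapes here are assigned so that the dark mode rotates from the microwave mode $\hat{a}_2$ at $t=0$ into the optical mode $\hat{a}_1$ at $t=t_f$, i.e., the microwave-to-optical direction.

```python
import numpy as np

def transfer_efficiency(G=1.0, alpha=0.245, kappa1=0.1, kappa2=0.001,
                        gamma_s=0.001, t_f=None, n_steps=4000):
    """Microwave-to-optical transfer in the single-excitation subspace.

    Amplitudes c = (c_a1, c_a2, c_b) evolve under the beam-splitter
    Hamiltonian G1(t) a1 b^dag + G2(t) a2 b^dag + h.c., with decay added
    as non-Hermitian -kappa/2 terms (T = 0, no pure dephasing).
    """
    if t_f is None:
        t_f = 1.0 / alpha  # optimal protocol time quoted in the text
    # G1^2 + G2^2 = G^2; the dark mode (-G2 a1 + G1 a2)/G rotates from
    # a2 (where G2 = 0, at t = 0) to -a1 (where G1 = 0, at large t).
    G1 = lambda t: G * np.sqrt(max(1.0 - np.tanh(alpha * t), 0.0))
    G2 = lambda t: G * np.sqrt(max(np.tanh(alpha * t), 0.0))

    def rhs(t, c):
        c1, c2, cb = c
        return np.array([
            -1j * G1(t) * cb - 0.5 * kappa1 * c1,
            -1j * G2(t) * cb - 0.5 * kappa2 * c2,
            -1j * (G1(t) * c1 + G2(t) * c2) - 0.5 * gamma_s * cb,
        ])

    c = np.array([0.0, 1.0, 0.0], dtype=complex)  # photon starts in a2 (MW)
    h = t_f / n_steps
    t = 0.0
    for _ in range(n_steps):  # classical fourth-order Runge-Kutta
        k1 = rhs(t, c)
        k2 = rhs(t + 0.5 * h, c + 0.5 * h * k1)
        k3 = rhs(t + 0.5 * h, c + 0.5 * h * k2)
        k4 = rhs(t + h, c + h * k3)
        c = c + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return abs(c[0]) ** 2  # population of the optical mode a1
```

With dissipation switched off and a slow modulation the transfer is nearly complete, while the Fig.~\ref{fig:F-E} parameters (in units of $G$) give an efficiency in the same broad range as the master-equation result.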
Therefore, to quantify the fidelity, we use the signal-to-noise ratio (SNR) and set: \begin{equation} F_\text{SNR}=\frac{1}{1+\text{SNR}^{-1}}. \end{equation} The SNR takes the form $\text{Tr}[\hat{\rho}_f a^\dagger_1a_1]/\text{Tr}[\hat{\rho}'_f a^\dagger_1a_1]$, where $\hat{\rho}'_f$ is the final state without any input. Here, the average number of thermal microwave photons is given by $\bar{n}_{\text{th}}=1/(e^{(\hbar\omega_{g_1g_2}/k_B T)}-1)$, where $\omega_{g_1g_2}$ is the frequency of the microwave transition $\ket{g_1}-\ket{g_2}$, and $T$ is the system temperature. In Fig. \ref{fig:F-E}, we set $t_f=1/\alpha$ and plot the transduction efficiency, fidelity and noise with respect to different values of $\alpha/G$. If $\alpha$ is too large (close to $G$ or even larger than $G$), adiabaticity can no longer be maintained, as the collective spin mode in the ground state will be occupied. This can degrade the transduction efficiency. On the other hand, if $\alpha$ is too small (close to the cavity decay rates), the transduction efficiency will be largely affected by the cavity decay rates. Therefore, we need to optimize this parameter. At around $\alpha/G=0.245$ and $T=50$ mK, the efficiency reaches its maximum value of $0.859$. At this efficiency, the corresponding fidelity is $0.788$. Note that here we used $T_2=12~\mu$s and estimated the dephasing rate using the relation $1/T_2=\gamma_s^*+\gamma_s/2$, assuming a spin decay rate of $2\pi \times 10$ kHz. We justify the values used for the transition frequency and coherence time in Sec. \ref{Microwave-transition}. \begin{figure} \centering $\hspace{-70mm}\mathbf{(a)}$\\ \vspace{2mm} \includegraphics[scale=0.34]{fidelity.png}\\ $\hspace{-70mm}\mathbf{(b)}$\\ \vspace{2mm} \includegraphics[scale=0.34]{effi.png} \caption{(a). Fidelity as a function of temperature for two different dephasing rates.
Here we set $G/2\pi=10$ MHz, $\alpha/G=0.245$, $\kappa_1=0.1 G$, $\kappa_2=0.001 G$, $\gamma_s=0.001 G$, and $\omega_{g_1g_2}/2\pi=1.33$ GHz. The whole protocol time is also fixed to be $t_f=1/\alpha$ with $\alpha=0.245 G$. (b). Efficiency as a function of time for three different dephasing rates. The other parameters are the same as those used in (a), and the temperature is $T=50$ mK.} \label{fig:Fidelity} \end{figure} In Fig. \ref{fig:Fidelity}(a), for a fixed $\alpha/G=0.245$ with the protocol time assumed to be $1/\alpha$, we show the change in fidelity of the microwave-to-optical transfer with respect to the temperature. By increasing the temperature, the average number of thermal microwave photons increases. As a result, the transfer fidelity decreases. Note that superconducting qubits require temperatures in the mK range. In this figure, we also show fidelity changes for two different dephasing rates. Here, the dephasing rates $\gamma_s^*=G$ and $\gamma_s^*=0.0008 G$ correspond to coherence times of $16$ ns and $12~\mu$s, respectively. At low temperatures, increasing the dephasing rate does not significantly impact $F_\text{SNR}$; only at higher temperatures does the fidelity slightly reduce. It may appear that the transduction fidelity is extremely robust against the dephasing rate, but this figure of merit does not fully capture the robustness of the dark-state protocol. Thus, for better illustration, we plot the transduction efficiency with respect to time in Fig.~\ref{fig:Fidelity}(b). As can be seen, the efficiency is also very robust against dephasing. When the dephasing rate is $\gamma^*_s=0.1 G$, the efficiency is still around 80$\%$, only slightly lower than that with $\gamma^*_s=0.0008 G$. However, this protection has limits: when the dephasing rate is very large, e.g. $\gamma^*_s=G$, the efficiency is only 50$\%$.
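The thermal occupation $\bar{n}_{\text{th}}$ and the SNR-based fidelity introduced above are simple closed-form expressions; the short sketch below evaluates them with CODATA constants, using the $\omega_{g_1g_2}/2\pi=1.33$ GHz and $T=50$ mK values from the figures above.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J / K

def n_thermal(f_hz, T):
    """Mean thermal photon number at frequency f (Hz) and temperature T (K)."""
    return 1.0 / math.expm1(hbar * 2.0 * math.pi * f_hz / (kB * T))

def fidelity_snr(snr):
    """SNR-based fidelity F_SNR = 1 / (1 + SNR^-1)."""
    return 1.0 / (1.0 + 1.0 / snr)

# At 1.33 GHz and 50 mK the thermal background is non-negligible:
nbar = n_thermal(1.33e9, 0.05)   # ~0.39 thermal photons
```

Since $F_\text{SNR}$ is monotone in the SNR, suppressing the thermal background (lower $T$ or higher transition frequency) directly improves the fidelity.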
\section{Experimental implementation} \label{impl} Our system is composed of an ensemble of $\mathrm{^{167}Er}$ ions in an optical and a microwave superconducting coplanar waveguide (CPW) cavity. For the microwave side, the coupling of rare-earth spin ensembles to a microwave cavity has been demonstrated \cite{probst2013anisotropic, tkalvcec2014strong, staudt2012coupling, chen2016coupling}, and a coupling strength of $34$ MHz has been reported in Ref. \cite{probst2013anisotropic}. Thus, it is reasonable to assume a collective coupling strength $\Tilde{g}_2\sqrt{N}\sim 2\pi\times 10$ MHz in our scheme. Furthermore, for the CPW cavity, a quality factor $Q\sim 10^6$ is possible to achieve \cite{niemczyk2010circuit, xiang2013hybrid}, giving $\kappa_2\sim 2\pi\times 1.33$ kHz for $\omega_{g_1g_2}/2\pi=1.33$ GHz. In our simulation, for a CPW resonator coupled to a crystal at mK temperatures, we assume a higher decay rate of $\kappa_2\sim 2\pi\times 10$ kHz, corresponding to a quality factor of $1.3\times 10^5$. Here, the spin decay rate $\gamma_s$ is assumed to be $2\pi\times 10$ kHz. Considering the coherence time of $12~\mu$s, this corresponds to a dephasing rate of $\gamma^*_s\sim 2\pi\times 8$ kHz. So far, for Fabry-Perot cavities, quality factors of $Q\sim 10^9$ have been realized \cite{xiang2013hybrid,aoki2006observation,goto2010experimental}. Using toroid microcavities, a $Q$ factor of $4\times 10^8$ has also been measured at a wavelength of $1550$ nm \cite{kippenberg2004demonstration}. In addition, a quality factor exceeding $1.1 \times 10^7$ has been reported in photonic crystal nanocavities \cite{asano2017photonic}. Here, we consider a decay rate of $\kappa_1=2\pi \times 10^6$ Hz, corresponding to $Q \sim10^8$, for the optical cavity.
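The quality-factor-to-decay-rate conversions quoted above all follow from $\kappa=\omega/Q$; a minimal sketch reproducing the microwave numbers:

```python
import math

def kappa_from_Q(f_hz, Q):
    """Cavity energy decay rate kappa = omega / Q (angular frequency, rad/s)."""
    return 2.0 * math.pi * f_hz / Q

# Microwave cavity: Q ~ 1e6 at 1.33 GHz gives kappa_2 ~ 2*pi x 1.33 kHz,
kappa2 = kappa_from_Q(1.33e9, 1e6)
# while the more conservative Q ~ 1.3e5 assumed in the simulation gives
# kappa_2 ~ 2*pi x 10 kHz.
kappa2_assumed = kappa_from_Q(1.33e9, 1.33e5)
```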
The coupling strength between the $\mathrm{^{167}Er}$ ions and the optical cavity, $\Tilde{g}_1\sqrt{N}$, is taken to be $\sim 2\pi\times 500$ MHz, and the inhomogeneous broadening of the optical transitions is typically $\delta_j\sim 2\pi\times 1$ GHz \cite{thiel2011rare,baldit2005ultraslow}. Moreover, we take the average optical detuning $\Delta\sim 2\pi\times 10$ GHz, which satisfies $\Delta\gg\delta_j$. For the control Rabi frequency, we choose $\Omega\sim 2\pi\times 200$ MHz, which also satisfies $\Omega\ll\Delta$. With all these values, we estimate $G_1=\Tilde{g}_1\sqrt{N}\Omega/\Delta\sim 2\pi\times 10$ MHz. \subsection{Microwave transitions of $^{167}\text{Er}$:YSO} \label{Microwave-transition} Erbium has eight Kramers doublets in the ground state and seven in the excited state. Due to the hyperfine and quadrupole interactions, each doublet is split into sixteen hyperfine sublevels. At low temperatures, only the lowest doublet is populated. The effective spin Hamiltonian of a Kramers ion with non-zero nuclear spin can be written as \cite{abragam2012electron} \begin{equation} H_{\text{eff}}=\beta_e \bold{B} \cdot \bold{g} \cdot \bold{S} + \bold{I} \cdot \bold{A} \cdot \bold{S}+\bold{I} \cdot \bold{Q} \cdot \bold{I} - \beta_n g_n \bold{B} \cdot \bold{I},\label{H} \end{equation} where $\beta_e$ ($\beta_n$) is the electronic Bohr (nuclear) magneton, $\bold{B}$ is the external magnetic field, $\bold{A}$ is the hyperfine tensor, $\bold{Q}$ is the electric-quadrupole tensor, $\bold{S}$ ($\bold{I}$) is the vector of electronic (nuclear) spin operators, $\bold{g}$ is the $g$ tensor, and $g_n$ is the nuclear g-factor. The first term of the above Hamiltonian describes the electronic Zeeman interaction. The second and third terms describe the hyperfine and electric quadrupole (second-order hyperfine) interactions. Finally, the last term is the nuclear Zeeman interaction.
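To illustrate how the sixteen hyperfine sublevels arise from Eq.~(\ref{H}), the sketch below builds the zero-field Hamiltonian for an effective electron spin $S=1/2$ coupled to the $I=7/2$ nuclear spin. For simplicity it uses a scalar (isotropic) hyperfine coupling and omits the quadrupole term; the actual anisotropic $\bold{A}$, $\bold{Q}$ and $\bold{g}$ tensors for $\mathrm{^{167}Er}$:YSO must be taken from the cited references. In this isotropic toy model the sixteen levels collapse into two zero-field multiplets, $F=3$ (7 states) and $F=4$ (9 states).

```python
import numpy as np

def spin_ops(s):
    """Spin matrices (Sx, Sy, Sz) for spin quantum number s, basis |s, m>."""
    m = np.arange(s, -s - 1.0, -1.0)
    d = len(m)
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((d, d), dtype=complex)          # raising operator S+
    for i in range(1, d):
        Sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    Sx = 0.5 * (Sp + Sp.conj().T)
    Sy = -0.5j * (Sp - Sp.conj().T)
    return Sx, Sy, Sz

S, I = 0.5, 3.5
Sx, Sy, Sz = spin_ops(S)
Ix, Iy, Iz = spin_ops(I)
A = 1.0                                           # toy isotropic hyperfine (arb. units)

# Zero-field Hamiltonian H = A * S . I on the 2 x 8 = 16-dimensional space
H = A * sum(np.kron(Se, In) for Se, In in [(Sx, Ix), (Sy, Iy), (Sz, Iz)])
levels = np.linalg.eigvalsh(H)
# F = I - 1/2 = 3 multiplet at -A(I+1)/2 = -2.25 A (7 states),
# F = I + 1/2 = 4 multiplet at  A*I/2    = +1.75 A (9 states).
```

Replacing the scalar $A$ by the full tensors (and adding the quadrupole term) lifts these degeneracies and yields the zero-field structure tabulated below.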
\begin{table*}[t] \caption{Ground state transition frequencies, transition strengths and coherence times for site 1 of $\mathrm{^{167}Er}$:YSO at zero magnetic field. Energy levels are labeled as 1 - 16 from lowest to highest frequency. Here transition dipole moments are determined relative to the three orthogonal optical extinction axes defined by $D_1(X)$, $D_2(Y)$, and $b(Z)$.} \begin{ruledtabular} \begin{tabular}{c c c} Transition frequency (GHz) & $d(D_1,D_2,b)$ (GHz/T) & Coherence time ($\mu$s) \\[0.05cm] \hline \\[-0.2cm] 1.33 (7 $\longleftrightarrow$ 10)\,\, & (0.48,\,2.05,\,0.59) & 12.16\\[0.05cm] 2.374 (6 $\longleftrightarrow$ 12)\,\, & (3.66,\,6.43,\,1.66) & 4.56\\[0.05cm] 2.366 (5 $\longleftrightarrow$ 11)\,\, & (2.41,\,9.15,\,0.34) & 4.4\\[0.05cm] 1.821 (7 $\longleftrightarrow$ 11)\,\, & (3.35,\,12.83,\,7.3) & 1.61\\[0.05cm] 1.304 (8 $\longleftrightarrow$ 12)\,\, & (3.52,\,13.01,\,7.51) & 1.59\\[0.05cm] \end{tabular} \end{ruledtabular} \label{tab:dipole} \end{table*} The spin Hamiltonian parameters have been estimated for $\mathrm{^{167}Er}$:YSO using electron spin resonance experiments and a crystal field model for the ground \cite{chen2018hyperfine} and excited \cite{horvath2019extending} states, respectively. Within the spin Hamiltonian formalism, the estimated transition frequencies agree well with experimental results, i.e., the difference is less than $\sim40$ ($\sim$100) MHz for the ground (excited) states \cite{rakonjac2020long, horvath2019extending} (see Appendix \ref{energy-levels} for the MW transitions in the ground state). In Table~\ref{tab:dipole}, we list the five GHz-regime transitions with the longest coherence times at zero magnetic field. Here, we calculated the transition dipole moments $d_{D_1,D_2,b}$ between energy levels in the ground hyperfine structure.
To do so, we consider the atom-field interaction Hamiltonian \begin{equation} H_{I}=\beta_e \bold{B_{ac}} \cdot \bold{g} \cdot \bold{S} - \beta_n g_n \bold{B_{ac}} \cdot \bold{I}, \end{equation} where the transition is driven by an ac magnetic field. Then, to estimate the magnetic dipole moment of a transition along the direction of the ac field, we use the probability amplitude scheme \cite{scully1999quantum} \begin{equation} d_{mn}=\bra{\psi(B)_m}\beta_e \bold{g} \cdot \bold{S} - \beta_n g_n \bold{I} \ket{\psi(B)_n}. \end{equation} In the absence of a magnetic field, the electronic and nuclear states are highly mixed. Hence, the transition moments are larger than the nuclear magneton. In rare-earth-ion doped crystals, spin flips can occur due to spin-spin (i.e., spin flip-flop) and spin-lattice relaxations. At low magnetic fields, which is the relevant regime here, spin flip-flops are the governing mechanism. Decreasing the flip-flop rate is possible by reducing the temperature to polarize the spins. Note that low temperatures are also required for operating superconducting qubits. Hence, we consider an ensemble of $\mathrm{^{167}Er}$:YSO at sub-kelvin temperatures. In this case, flipping of nearest-neighbour ions is the dominant perturbation mechanism contributing to the spin decoherence. Considering the distance to the nearest-neighbour yttrium ions, which have a gyromagnetic ratio of $2.09$ MHz/T, one can estimate the variance of the magnetic field fluctuations created at the Er site as $\Delta B=26~\mu$T \cite{thesis}. To estimate the coherence times in Table~\ref{tab:dipole}, we assume that decoherence occurs on a timescale much longer than that of the magnetic field fluctuations.
In this case, the coherence time is given by \cite{zhong2015optically} \begin{equation} \frac{1}{\pi T_2}=S_1\cdot \Delta B+ \Delta B \cdot S_2 \cdot \Delta B, \label{coherenceT} \end{equation} where $S_1$ is the gradient and $S_2$ is the curvature of the transition of interest. To calculate $S_1$ and $S_2$, we define the Zeeman gradient and curvature tensor parameters as \cite{mcauslan2012reducing} \begin{equation} \begin{aligned} \nu_i^{mn}(B)&=\frac{\partial\,(\omega_m(B)-\omega_n(B))}{\partial B_i},\\ C_{ij}^{mn}&=\frac{\partial^2(\omega_m(B)-\omega_n(B))}{\partial B_i\partial B_j}, \end{aligned} \end{equation} where \begin{equation} \begin{aligned} &\frac{\partial\,\omega_m(B)}{\partial B_i} =\bra{\psi(B)_m}\!\zeta_{ij}\!\ket{\psi(B)_m},\\ \frac{\partial^2\omega_m(B)}{\partial B_i\partial B_j} & =\!\!\sum\limits_{n\neq m}\!\!\frac{\bra{\psi(B)_m}\!\zeta_{ik}\!\ket{\psi(B)_n}\!\bra{\psi(B)_n}\!\zeta_{jl}\!\ket{\psi(B)_m}}{\omega_m(B)-\omega_n(B)}. \end{aligned} \end{equation} Here $\ket{\psi(B)}$ is the state of the system, $\omega(B)$ is its corresponding energy, $\omega_m(B)-\omega_n(B)$ is the frequency difference for the $m \longleftrightarrow n$ transition, and $\zeta_{ij}=\beta_e \bold{g_{ij}}\bold{S_j}-\beta_ng_n \bold{I_i}$, where $\bold{S}$ is the electronic spin operator and $\bold{I}$ is the nuclear spin operator. We use the maximum curvature of each transition to estimate the lowest coherence time obtained from Eq.~(\ref{coherenceT}). Hence, $S_2$ is the largest absolute eigenvalue of $C$, and $S_1$ can simply be estimated as the magnitude of $\nu$. The energy levels given in Table~\ref{tab:dipole} can be considered as the ground states $\ket{g_1}$ and $\ket{g_2}$ shown in Fig.~\ref{fig: dark}(b).
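Once $S_1$ and $S_2$ are known, Eq.~(\ref{coherenceT}) gives $T_2$ directly; a minimal sketch with $\Delta B = 26~\mu$T. The sensitivity values below are illustrative orders of magnitude consistent with the timescales quoted in this section, not the tensor values computed from the actual spin Hamiltonian.

```python
import math

def coherence_time(S1, S2, dB):
    """T2 from 1/(pi*T2) = S1*dB + S2*dB**2 (Eq. (coherenceT) of the text).
    S1 in Hz/T (gradient), S2 in Hz/T^2 (curvature), dB in T; T2 in s."""
    return 1.0 / (math.pi * (S1 * dB + S2 * dB ** 2))

dB = 26e-6                                     # field fluctuation, 26 uT
# A transition with a first-order sensitivity of ~1 GHz/T:
t2_first = coherence_time(1.0e9, 0.0, dB)      # ~12 us
# A ZEFOZ transition (S1 = 0), limited only by the curvature term:
t2_zefoz = coherence_time(0.0, 1.2e12, dB)     # hundreds of us
```

Because the second-order term scales as $\Delta B^2$, killing the gradient ($S_1=0$) extends $T_2$ by orders of magnitude, which is the essence of the ZEFOZ argument made below.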
Note that, because the dark state protocol is to some extent robust against decoherence, the coherence time of $\mathrm{^{167}Er}$ at zero field is long enough to achieve a high transfer fidelity and efficiency. Generally speaking, the required coherence time is determined by the microwave cavity-spin coupling strength. To date, coupling of rare-earth spin ensembles to a microwave cavity has been demonstrated by several groups \cite{probst2013anisotropic, tkalvcec2014strong, staudt2012coupling, chen2016coupling}. For an efficient transfer, the cavity should be operated in the strong coupling regime and the system should remain coherent during the transfer time. Transitions with zero gradient with respect to the magnetic field (so-called zero first-order Zeeman (ZEFOZ) points) have a reduced sensitivity to field fluctuations. Even isotopes of erbium have a purely first-order dependence on the magnetic field, and therefore there is no ZEFOZ point for $^{168}$Er. However, an advantage of rare-earth ions that have an odd number of 4f electrons (i.e., Kramers ions) with non-zero nuclear spin (like $\mathrm{^{167}Er}$) is that the interaction between the nuclear levels and the electronic doublets can result in ZEFOZ transitions even at zero magnetic field. For $\mathrm{^{167}Er}$, the ZEFOZ points at zero field (where $S_1=0$ in Eq.~(\ref{coherenceT})) are associated with transitions at sub-GHz frequencies. At zero field, the longest coherence time we estimate is $388~\mu$s, for the ZEFOZ transition at 873 MHz. Although the transitions of interest for transducer protocols are those with frequencies of a few GHz, frequencies around $500$ MHz can still be used for interacting with fluxonium qubits \cite{nesterov2018microwave}. \section{Conclusion and outlook} \label{conclusion} One of the important applications of quantum transducers is to connect quantum processors in a quantum network.
In such a network, microwave-to-optical transducers can shift the wavelength of microwave photons to optical photons that are suitable for long-distance quantum communication. In this paper, using an optical and a microwave cavity, we proposed the use of $\mathrm{^{167}Er}$:YSO as an intermediary for a microwave-to-optical quantum transducer. We presented a theoretical study of the proposed transducer design and calculated the achievable efficiency and fidelity of the system in the absence of external magnetic fields. Operating at nearly zero field is important when interfacing with superconducting qubits. We showed the robustness of the dark state protocol with respect to the dephasing rate. We then investigated the ground-state MW transitions and estimated transition frequencies, coherence times and transition strengths. These results can also be used in other transducer protocols where detailed knowledge of the MW transitions is required. Note that, using the spin Hamiltonian parameters obtained from the crystal field model, one can also study the properties of the excited-state energy levels. Looking forward, our investigation of the MW transitions may provide further motivation for designing MW memories that interact with superconducting systems \cite{gouzien2021factoring}. In addition, the use of $\mathrm{^{167}Er}$:YSO has already been suggested for quantum repeaters \cite{asadi2020protocols}. Therefore, using $\mathrm{^{167}Er}$:YSO, one can envision an integrated system for wavelength transduction and long-distance entanglement distribution. \section*{Acknowledgments} This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery Grant and CREATE programs, the National Research Council (NRC) of Canada through its High-Throughput Secure Networks (HTSN) challenge program, and by Alberta Innovates Technology Futures (AITF).
\setcounter{equation}{0} \setcounter{table}{0} \setcounter{figure}{0} \renewcommand{\theequation}{A\arabic{equation}} \renewcommand{\thefigure}{A\arabic{figure}} \renewcommand{\thetable}{A\arabic{table}}
\section{Introduction and statements of results} For a non-empty set $A \subseteq \mathbb{N}$, the {\it ratio set} of $A$ is defined by $R(A) := \{\frac{a}{b} \in \mathbb{Q} : a,b \in A\}$. One of the most fundamental results in real analysis, viz. $\mathbb{Q}$ is dense in $\mathbb{R}$, when rephrased in terms of ratio sets, reads as the ratio set of $\mathbb{N}$ is dense in $\mathbb{R}_{> 0}$. This reformulation of the denseness of $\mathbb{Q}$ in $\mathbb{R}$ has spurred a lot of research in recent times. In particular, the classification of subsets of $\mathbb{N}$ having dense ratio sets in $\mathbb{R}_{> 0}$ has been a central question of investigation. In what follows, we say that $A$ is {\it fractionally dense} in $\mathbb{R}_{> 0}$ if $R(A)$ is dense in $\mathbb{R}_{> 0}$. \smallskip One of the most natural choices for $A$ is the set $\mathbb{P}$ of prime numbers and it is known to be fractionally dense (cf. \cite{hs}, \cite{Salat}). Several generalizations of this result have been proven over the years and several interesting subsets of natural numbers have been shown to be fractionally dense (cf. \cite{gems} - \cite{Bukor-Toth 2}, \cite{dense-Gauss}, \cite{Dio} - \cite{hs}, \cite{Salat} - \cite{Salat3}, \cite{ps} - \cite{Toth2}). In \cite{CRS}, \cite{dense-Gauss} and \cite{Sittinger}, analogous questions have been dealt with in the set up of algebraic number fields. Very recently, the denseness of ratio sets in the $p$-adic completion $\mathbb{Q}_{p}$ have also been considered (cf. \cite{AB}, \cite{ABM}, \cite{Luca1}, \cite{Luca2}, \cite{Sanna2}, \cite{Sanna}). \smallskip Very recently, Leonetti and Sanna \cite{BAMS} introduced the notion of {\it direction sets}, which generalizes the notion of ratio sets as follows. 
For an integer $k \geq 2$ and $\emptyset \neq A \subseteq \mathbb{N}$, they considered the following sets: \begin{align*} \mathcal{S}^{k - 1} := \{\underline{x} \in [0,1]^{k} : ||\underline{x}|| = 1\}, ~ \mathcal{D}^{k}(A) := \{\rho(\underline{a}) : \underline{a} \in A^{k}\} ~ \mbox{ and } ~ \mathcal{D}^{\underline{k}}(A) := \{\rho(\underline{a}) : \underline{a} \in A^{\underline{k}}\}, \end{align*} where $\rho : \mathbb{R}_{\geq 0}^{k} \to \mathcal{S}^{k - 1}$ is the map defined by $\rho(\underline{x}) = \frac{\underline{x}}{||\underline{x}||}$ and $A^{\underline{k}} = \{\underline{a} \in A^{k} : a_{i} \neq a_{j} \mbox{ for all } i \neq j\}$. The sets $\mathcal{D}^{k}(A)$ and $\mathcal{D}^{\underline{k}}(A)$ are called the $k$-direction sets of $A$. We note that, for $k = 2$, we can identify $\mathcal{S}^{1}$ with $[0,+\infty]$ via a bijective map and thus the question of denseness in $\mathbb{R}_{> 0}$ can be translated into that in $\mathcal{S}^{1}$. Therefore, direction sets are indeed generalizations of ratio sets. Leonetti and Sanna \cite[Theorem 1.2]{BAMS} proved a necessary and sufficient criterion that determines whether a set $X \subseteq \mathcal{S}^{k - 1}$ can be realized as the set of accumulation points of $\mathcal{D}^{\underline{k}}(A)$ for some $A \subseteq \mathbb{N}$. Moreover, they proved a sufficient condition (cf. \cite[Theorem 1.5]{BAMS}) that asserts whether $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$. In this article, we further generalize the notion of direction sets and introduce generalized $k$-direction sets as follows. \begin{defn}\label{gen-defn} Let $k \geq 2$ be an integer and let $U_{1},\ldots,U_{k}$ be non-empty subsets of $\mathbb{N}$. We define the $k$-generalized direction set for the $k$-tuple $(U_{1},\ldots,U_{k})$ to be $\mathcal{D}^{k}(U_{1},\ldots,U_{k}) := \{\rho(u_{1},\ldots,u_{k}) : u_{j} \in U_{j} \mbox{ for } j = 1,\ldots,k\}$. 
Also, we define the distinct $k$-generalized direction set to be $\mathcal{D}^{\underline{k}}(U_{1},\ldots,U_{k}) := \{\rho(u_{1},\ldots,u_{k}) : u_{j} \in U_{j} \mbox{ for } j = 1,\ldots,k \mbox{ and } u_{i} \neq u_{j} \mbox{ for all } i \neq j\}$. \end{defn} Our first theorem is an analogue of Theorem 1.2 of \cite{BAMS} for distinct $k$-generalized direction sets. For any set $X \subseteq \mathcal{S}^{k - 1}$, we denote by $X^{\prime}$ the set of accumulation points of $X$. Also, we denote by $S_{k}$ the symmetric group on the set $\{1,\ldots,k\}$. For a permutation $\pi \in S_k$, we define $\pi(x_1,\dots,x_k):=(x_{\pi(1)},\dots,x_{\pi(k)})$ for all $\underline{x}=(x_1,\dots,x_k)$ in $\mathcal{S}^{k-1}.$ Also, for any subset $I$ of $\{1,\dots,k\}$, we define $\rho_I(\underline{x}):=\rho(\underline{y})$, where $\underline{y}=(y_1,\dots,y_k)$ is defined by $y_i:=x_i$ if $i\in I$ and $y_{i}:=0$ otherwise. We say that $I$ {\it meets} $\underline{x}$ if $x_i\neq0$ for some $i \in I$. We state our first theorem as follows. \begin{theorem}\label{THE-BIG-MAIN-TH} Let $k \geq 2$ be an integer. For subsets $U_{1},\ldots,U_{k}$ of $\mathbb{N}$, let $X=\mathcal{D}^{\underline{k}}(U_1,\dots,U_k)^{\prime}$. Then, we have: \begin{enumerate} \item[(i)] $X$ is a closed subset of $\mathcal{S}^{k - 1}$. \item[(ii)] If $U_{i_1}=\dots=U_{i_m}$ for some $\{i_1,\dots,i_m\}\subseteq\{1,\dots,k\}$, then for $\pi\in S_k $ with $\pi(j)=j$ for all $j\notin \{i_1,\dots,i_m\}$, we have $\pi(\underline{x})\in X$ for every $\underline{x}\in X$. \item[(iii)] If $|U_{i}| \geq k$ for each $i \in \{1,\ldots,k\}$, then for every $\underline{x}\in X$ and every $I\subseteq\{1,\dots,k\}$ that meets $\underline{x}$, we have $\rho_I(\underline{x})\in X$. \end{enumerate} \end{theorem} We recall that for a non-empty set $A \subseteq \mathbb{N}$, the natural density of $A$ is defined as $d(A) := \displaystyle\lim_{X \to \infty}\frac{\#\{n \in A : n \leq X\}}{X}$, provided the limit exists. 
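As a quick numerical sanity check (not part of the original argument), the fractional denseness of the primes recalled in the introduction can be illustrated by searching for a prime ratio close to an arbitrary target; the target values and search limits below are arbitrary choices.

```python
import bisect

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def best_prime_ratio(target, limit):
    """Return the ratio p/q of primes p, q <= limit closest to `target`."""
    primes = primes_up_to(limit)
    best = None
    for q in primes:
        # only the primes adjacent to target*q can improve the approximation
        idx = bisect.bisect_left(primes, target * q)
        for p in primes[max(0, idx - 1):idx + 1]:
            if best is None or abs(p / q - target) < abs(best - target):
                best = p / q
    return best
```

With primes up to $10^{4}$, the approximation error for a target such as $3.14159$ already drops below $10^{-2}$, consistent with $R(\mathbb{P})$ being dense in $\mathbb{R}_{>0}$.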
The next theorem provides a sufficient condition for $\mathcal{D}^{k}(U_1,\dots, U_k)$ to be dense in $\mathcal{S}^{k - 1}$. \begin{theorem}\label{TH-1} Let $k \geq 2$ be an integer and let $U_1,\dots, U_k\subseteq \mathbb{N}$ be such that $d(U_i)$ exists and equals $\delta_i>0$ $ \mbox{ for all } i=1,\dots,k$. Assume that $\displaystyle\bigcap_{i=1}^{k}U_i$ is an infinite set. Then $\mathcal{D}^{k}(U_1,\dots, U_k)$ is dense in $\mathcal{S}^{k-1}$. \end{theorem} The next theorem extends Theorem 1.5 of \cite{BAMS}, which asserts that if for a set $A \subseteq \mathbb{N}$, there exists an increasing sequence $\{a_{n}\}_{n = 1}^{\infty} \subseteq A$ with $\displaystyle\lim_{n \to \infty} \frac{a_{n}}{a_{n + 1}} = 1$, then $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$. We generalize this for $\mathcal{D}^{k}(U_1,\dots, U_k)$ as follows. \begin{theorem}\label{th-2} Let $k \geq 2$ be an integer and let $U_1, U_2,\dots, U_k$ be non-empty subsets of $\mathbb{N}$. If, for each $i \in \{1,\ldots,k\}$, there exists an increasing sequence $\{u_i^{(n)}\}_{n = 1}^{\infty} \subseteq U_i$ such that $\displaystyle\lim_{n \to \infty}\frac{u_{i}^{(n - 1)}}{u_i^{(n)}}=1$, then $\mathcal{D}^{k}(U_1,\dots, U_k)$ is dense in $\mathcal{S}^{k-1}$. \end{theorem} \begin{rmk} For an integer $k \geq 2$ and for each $i \in \{1,\ldots,k\}$, let $a_{i}$ and $m_{i}$ be integers with $\gcd(a_{i},m_{i}) = 1$. Let $\mathbb{P}_{m_{i}} := \{p \in \mathbb{P} : p \equiv a_{i} \pmod{m_{i}}\}$. For $U_{i} = \mathbb{P}_{m_{i}}$, using Dirichlet's theorem for primes in arithmetic progressions, we see that the hypotheses of Theorem \ref{th-2} are satisfied. Therefore, $\mathcal{D}^{k}(\mathbb{P}_{m_{1}},\ldots,\mathbb{P}_{m_{k}})$ is dense in $\mathcal{S}^{k - 1}$. 
\end{rmk} \begin{theorem}\label{TH--3} Let $k \geq 2$ be an integer and for each $i \in \{1,\ldots,k\}$, let $f_{i}(X_1,\dots, X_m)\in \mathbb{Z}[X_1,\dots, X_m]$ be polynomials of total degree $d_{i}$ such that the sum of the coefficients of degree $d_{i}$ terms is positive. Let $U_{i}:=\{f_{i}(n_1,\dots,n_m)|(n_1,\dots,n_m)\in \mathbb{N}^m\}\cap \mathbb{N}.$ Then $\mathcal{D}^k(U_{1},\ldots,U_{k})$ is dense in $\mathcal{S}^{k-1}.$ \end{theorem} In \cite{Toth-Salat}, it is proven that there is a $3$-partition of $\mathbb{N} = A \cup B \cup C$, such that none of $R(A), R(B)$ and $R(C)$ is dense in $\mathbb{R}_{> 0}$. That is, none of $\mathcal{D}^{2}(A), \mathcal{D}^{2}(B)$ and $\mathcal{D}^{2}(C)$ is dense in $\mathcal{S}^{1}$. In \cite{BAMS}, Leonetti and Sanna asked for a possible generalization of this result for $k \geq 3$ \cite[Question 1.9]{BAMS}. We give a partial answer to their question in the next theorem. \begin{theorem}\label{partition} Let $k \geq 3$ be an integer. Then there exists a $3$-partition $\mathbb{N} = A \cup B \cup C$ of $\mathbb{N}$ such that none of $\mathcal{D}^{k}(A), \mathcal{D}^{k}(B)$ or $\mathcal{D}^{k}(C)$ is dense in $\mathcal{S}^{k - 1}$. \end{theorem} \begin{rmk} In view of Theorem \ref{partition}, it remains to be seen whether for a $2$-partition $\mathbb{N} = A \cup B$, either $\mathcal{D}^{k}(A)$ or $\mathcal{D}^{k}(B)$ is dense in $\mathcal{S}^{k - 1}$ or not. We note that Theorem \ref{th-2} cannot be directly applied to address this issue. This can be seen by considering $A = \displaystyle\bigcup_{k = 0}^{\infty} [3^{k},2\cdot 3^{k})\cap \mathbb{N}$ and $B = \displaystyle\bigcup_{k = 0}^{\infty} [2\cdot 3^{k}, 3^{k + 1})\cap \mathbb{N}$. For, if $\{a_{n}\}_{n = 1}^{\infty} \subseteq A$ is an infinite sequence, then there are infinitely many indices $i$ for which $a_{i} \in [3^{k},2\cdot 3^{k})$ and $a_{i + 1} \in [3^{\ell},2\cdot 3^{\ell})$ for $k < \ell$. 
Then it follows that $\frac{a_{i}}{a_{i + 1}} < \frac{2\cdot 3^{k}}{3^{\ell}} \leq \frac{2}{3}$. Therefore, the elements of the sequence $\{\frac{a_{n}}{a_{n + 1}}\}_{n = 1}^{\infty}$ cannot get arbitrarily close to $1$. A similar argument works for $B$ as well. Thus there exists a $2$-partition of $\mathbb{N}$ in which neither part contains a sequence whose ratio of consecutive terms converges to $1$. \end{rmk} \smallskip One of the interesting questions in the literature on fractionally dense sets is to look for sets $A \subseteq \mathbb{N}$ such that the ratio set $R(A)$ is dense in $\mathbb{R}_{>0}$ but $A$ contains no $3$-term arithmetic progressions. One such set is $A = \{2^{m} : m \geq 2\} \cup \{3^{n} : n \geq 2\}$, which is known to be fractionally dense in $\mathbb{R}_{>0}$ yet contains no $3$-term arithmetic progressions (cf. \cite[Proposition 6]{gems}). In view of this, we may ask the following question. \begin{question}\label{quest-1} For an integer $k \geq 2$, does there exist a set $A \subseteq \mathbb{N}$ such that $A$ contains no $3$-term arithmetic progressions and $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$? \end{question} We answer Question \ref{quest-1} affirmatively in the following theorem. \begin{theorem}\label{prop-2} Let $k \geq 2$ be an integer. Then there exists a set $A \subseteq \mathbb{N}$ such that $A$ contains no $3$-term arithmetic progressions and $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$. \end{theorem} \begin{rmk} We shall see in the proof of Theorem \ref{prop-2} that we can obtain infinitely many sets $A \subseteq \mathbb{N}$ having no arithmetic progression of length $3$ such that $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$. \end{rmk} Next, we discuss the denseness of some particular types of sets whose properties have been recently considered in \cite{kfk}. For an arithmetic function $f : \mathbb{N} \to \mathbb{N}$ and a positive real number $X$, let $f_{X} := \#\{n \leq X : n = kf(k) \mbox{ for some } k \in \mathbb{N}\}$. 
Keeping this notation, we state the results of \cite{kfk} as follows. \begin{theorem} \cite{kfk}\label{kfk} (i) Let $\omega (n) = \displaystyle\sum_{\substack {p \mid n \\ p \in \mathbb{P}}}1$ be the prime divisor function. Then $$\omega_{X} = \frac{X}{\log \log X} + o\left(\frac{X}{\log \log X}\right).$$ \smallskip (ii) Let $\phi (n) = \#\{1 \leq k \leq n : \gcd (k,n) = 1\}$ be Euler's totient function. Then $$\phi_{X} = cX^{\frac{1}{2}} + o(X^{\frac{1}{2}}),$$ where $c = \displaystyle\prod_{p}\left(1 + \frac{1}{p(p - 1 + \sqrt{p^{2} - p})}\right) \approx 1.365\ldots$. \end{theorem} Now, we state our result as follows. \begin{theorem}\label{kfk-TH} Let $A = \{n\omega (n) : n \in \mathbb{N}\}$ and $B = \{n\phi (n) : n \in \mathbb{N}\}$. Then for any integer $k \geq 2$, we have that both $\mathcal{D}^{k}(A)$ and $\mathcal{D}^{k}(B)$ are dense in $\mathcal{S}^{k - 1}$. \end{theorem} \section{Proof of Theorems} In this section, we prove our theorems. We first prove Theorem \ref{THE-BIG-MAIN-TH}. \begin{proof}[Proof of Theorem \ref{THE-BIG-MAIN-TH}] Since $X$ is the set of accumulation points of a subset of $\mathcal{S}^{k - 1}$, we immediately conclude that $X$ is closed and (i) is satisfied. \smallskip Now, let $\underline{x}=(x_1,x_2,\dots,x_k)\in X=\mathcal{D}^{\underline{k}}(U_1,\dots,U_k)^{\prime}$. Then there exists a sequence $\rho(\underline{a}^{(n)})\in \mathcal{D}^{\underline{k}}(U_1,\dots,U_k)$ converging to $\underline{x}$ such that $\rho(\underline{a}^{(n)})\neq \underline{x}$ for infinitely many $n$, where $\underline{a}^{(n)} \in \displaystyle\prod_{i = 1}^{k}U_{i}$. For $\pi \in S_{k}$ with $\pi (j) = j$ for all $j \notin \{i_{1},\ldots, i_{m}\}$, we consider $\underline{b}^{(n)}:=\pi(\underline{a}^{(n)})$, which has pairwise distinct coordinates and lies in $U_1\times\dots\times U_k$ since $U_{i_1}=\dots=U_{i_m}$. Then $\rho(\underline{b}^{(n)})$ converges to $\pi(\underline{x}).$ Consequently, we have $\pi(\underline{x}) \in X$ for every $\underline{x} \in X$ and thus (ii) is satisfied. 
\smallskip Now, assume that $I$ is a non-empty subset of $\{1,\dots,k\}$ that meets $\underline{x}$. We can pass to a sub-sequence of $\underline{a}^{(n)}$ such that $a_i^{(n)}$ is non-decreasing in $n$ for each $i \in \{1,\ldots,k\}$. If $j\in \{1,\dots,k\}\setminus I$, then we can choose distinct $c_j\in U_j$ such that, for a sufficiently large positive integer $n_{0}$, a sequence $\underline{d}^{(n)}\in U_1\times\dots\times U_k$ with distinct coordinates can be defined for all $n \geq n_{0}$ with $d_i^{(n)}:=a_i^{(n)}$ for $i\in I$ and $d_i^{(n)}:=c_i$ for $i\notin I$. This choice is possible because of the assumption $|U_{i}| \geq k$ for each $i$. It then follows that $\rho(\underline{d}^{(n)})$ converges to $\rho_I(\underline{x})$. Thus (iii) holds. This completes the proof of Theorem \ref{THE-BIG-MAIN-TH}. \end{proof} \begin{proof}[Proof of Theorem \ref{TH-1}] Let $\underline{x} = (x_1,\dots, x_k)\in \mathcal{S}^{k-1}$ and let $I_i=(a_i,b_i)$ be open intervals such that $x_{i} \in I_{i}$ for each $i \in \{1,\ldots,k\}$. Then $\displaystyle\prod_{i=1}^{k} (a_i,b_i)\cap \mathcal{S}^{k-1}$ is a basic open set in $\mathcal{S}^{k - 1}$ containing $\underline{x}$. For a real number $X > 1$, let $U_i(X):=\#\{u_i\in U_i| u_i\leq X\}.$ By the hypothesis, we have that $\lim_{X \to \infty}\frac{U_i(X)}{X}=\delta_i>0$. This implies that $ U_i(X)=\delta_iX+o(X)$. Therefore, \[\lim_{X\rightarrow\infty}\frac{U_i(a_iX)}{U_i(b_iX)}= \lim_{X\rightarrow\infty}\frac{\delta_ia_iX+o(a_iX)}{\delta_ib_iX+o(b_iX)}=\frac{a_i}{b_i}<1.\] Thus, for every sufficiently large real number $X$, there exists $u_i\in U_i$ such that $a_iX<u_i\leq b_iX$. That is, $ a_i<\frac{u_i}{X}\leq b_i.$ Since $\displaystyle\bigcap_{i=1}^{k}U_i$ is an infinite set, we can choose a large enough element $u \in \displaystyle\bigcap_{i=1}^{k}U_i$ such that $a_iu<u_i\leq b_iu$ for all $i=1,\dots,k$. This, in turn, implies that $\frac{u_i}{u}\in (a_i,b_i)$. 
Using the fact that $\rho(\underline{\alpha})=\frac{\underline{\alpha}}{\lVert \underline{\alpha}\rVert}$ is a continuous function, we see that $\rho(u_1,\dots,u_k)\in \displaystyle\prod_{i=1}^kI_i\cap \mathcal{S}^{k-1}$. In other words, $\mathcal{D}^k(U_1,\dots,U_k)$ is dense in $\mathcal{S}^{k-1}$. \end{proof} We next prove Theorem \ref{th-2}, which extends Theorem 1.5 of \cite{BAMS}. \begin{proof}[Proof of Theorem \ref{th-2}] Let $\underline{x}=(x_1,\dots,x_k)\in \mathcal{S}^{k-1}$ with $x_i>0$ for all $i\in \{1,\dots,k\}$ (such points are dense in $\mathcal{S}^{k-1}$, so it suffices to approximate them). We pick an integer $m$ such that $m>\frac{u_i^{(1)}}{\min\{x_1,\dots,x_k\}}$ for all $i\in \{1,\dots,k\}$. Then there exist integers $m_i$ for each $i\in\{1,\dots,k\}$ such that $u_{i}^{(m_i-1)}\leq mx_i< u_{i}^{(m_i)}$. That is, $x_i<\frac{u_{i}^{(m_i)}}{m}\leq\frac{u_{i}^{(m_i)}}{u_{i}^{(m_i-1)}}x_i$. Since $m_i\rightarrow\infty$ as $m\rightarrow\infty$, it follows that $\displaystyle\lim_{m\rightarrow\infty}\frac{u_{i}^{(m_i)}}{m}=x_i$. Consequently, $\frac{1}{m}\underline{u}$, where $\underline{u}=(u_{1}^{(m_1)},\dots,u_{k}^{(m_k)})$, converges to $\underline{x}$ as $m\rightarrow\infty$. Since $\rho$ is a continuous map and $\rho(\underline{u})=\rho(\frac{1}{m}\underline{u})$, it follows that $\rho(\underline{u})$ converges to $\rho(\underline{x})=\underline{x}$. Consequently, $\mathcal{D}^k(U_1,\dots,U_k)$ is dense in $\mathcal{S}^{k-1}$. \end{proof} \begin{proof}[Proof of Theorem \ref{TH--3}] For a fixed integer $i \in \{1,\ldots,k\}$, we consider the polynomial $g_{i}(X)$ obtained by replacing all variables of $f_{i}$ by the single variable $X$. We get $g_{i}(X)=a_{d_{i}}X^{d_{i}}+a_{d_{i} - 1}X^{d_{i}-1}+\dots+a_0\in \mathbb{Z}[X]$. Since $a_{d_{i}} > 0$ (being the sum of the coefficients of the degree-$d_{i}$ terms of $f_{i}$), we conclude that for a sufficiently large positive real number $X$, we have $g_{i}(X) > 0$. Let $B_{i}:=\{g_{i}(n)|n\in \mathbb{N}\}\cap \mathbb{N}$. We have $\frac{g_{i}(X-1)}{g_{i}(X)}=\frac{a_{d_{i}}(X-1)^{d_{i}}+\dots+a_0}{a_{d_{i}}X^{d_{i}}+\dots+a_{0}}$ which tends to $1$ as $X$ tends to $\infty$. Also, since $g_{i}(X)$ is a polynomial in one variable, the sequence $\{g_{i}(n)\}_{n = 1}^{\infty}$ is eventually increasing. 
Therefore, by using Theorem \ref{th-2}, we obtain that $\mathcal{D}^k(B_{1},\ldots,B_{k})$ is dense in $\mathcal{S}^{k-1}.$ Since $B_{i} \subseteq U_{i}$ for each $i$, we conclude that $\mathcal{D}^k(U_{1},\ldots,U_{k})$ is dense in $\mathcal{S}^{k-1}$. \end{proof} We now prove Theorem \ref{partition} which gives a partial answer to \cite[Question 1.9]{BAMS}. \begin{proof}[Proof of Theorem \ref{partition}] We consider the following three sets as in \cite{Toth-Salat} (see also \cite{gems}). \begin{align*}A &:= \displaystyle\bigcup_{j = 0}^{\infty} [5^{j},2\cdot 5^{j})\cap \mathbb{N},\\ B &:= \displaystyle\bigcup_{j = 0}^{\infty} [2\cdot 5^{j},3\cdot 5^{j})\cap \mathbb{N},\\ C &:= \displaystyle\bigcup_{j = 0}^{\infty} [3\cdot 5^{j},5\cdot 5^{j})\cap \mathbb{N}. \end{align*} If $\mathcal{D}^{k}(A)$, $\mathcal{D}^{k}(B)$ or $\mathcal{D}^{k}(C)$ is dense in $\mathcal{S}^{k - 1}$, then by Theorem 1.4 of \cite{BAMS}, which states that if $\mathcal{D}^{k}(S)$ is dense in $\mathcal{S}^{k - 1}$ for some set $S \subseteq \mathbb{N}$, then $\mathcal{D}^{k - 1}(S)$ is dense in $\mathcal{S}^{k - 2}$, we see inductively that $\mathcal{D}^{2}(A)$ (or $\mathcal{D}^{2}(B)$ or $\mathcal{D}^{2}(C)$) is dense in $\mathcal{S}^{1}$, which is false (cf. \cite[Proposition 3]{gems}). Therefore, we get a $3$-partition of $\mathbb{N}$ such that none of $\mathcal{D}^{k}(A)$, $\mathcal{D}^{k}(B)$ or $\mathcal{D}^{k}(C)$ is dense in $\mathcal{S}^{k - 1}$. This completes the proof of Theorem \ref{partition}. \end{proof} \begin{proof}[Proof of Theorem \ref{prop-2}] In \cite{darmon}, it has been proven that the equation $x^{n} + y^{n} = 2z^{n}$ has no non-trivial solution in $\mathbb{Z}$ if $n \geq 3$. In other words, for a fixed integer $r \geq 3$, the set $A := \{m^{r}: m \in \mathbb{N}\}$ does not contain any $3$-term arithmetic progressions. Moreover, $\frac{m^{r}}{(m + 1)^{r}} \to 1$ as $m \to \infty$, so by Theorem 1.5 of \cite{BAMS}, we conclude that $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$. 
\end{proof} \begin{proof}[Proof of Theorem \ref{kfk-TH}] Let $\underline{x} = (x_{1},\ldots,x_{k}) \in \mathcal{S}^{k - 1}$ and let $\displaystyle\prod_{i = 1}^{k}(a_{i},b_{i})$ be a basic neighborhood of $\underline{x}$. Then by Theorem \ref{kfk}, we see that $$\displaystyle\lim_{X \to \infty} \frac{\omega_{a_{i}X}}{\omega_{b_{i}X}} = \displaystyle\lim_{X \to \infty}\frac{a_{i}X}{\log \log a_{i}X}\cdot \frac{\log \log b_{i}X}{b_{i}X} = \frac{a_{i}}{b_{i}} < 1 \mbox{ for all } i \mbox{ with } 1 \leq i \leq k.$$ Therefore, for sufficiently large $X$, there exists $\alpha_{i} \in A$ such that $a_{i}X < \alpha_{i} < b_{i}X$ for all $i$. That is, $\left(\frac{\alpha_{1}}{X},\ldots,\frac{\alpha_{k}}{X}\right) \in \displaystyle\prod_{i = 1}^{k}(a_{i},b_{i})$. Hence $\rho(\alpha_{1},\ldots,\alpha_{k}) = \rho\left(\frac{\alpha_{1}}{X},\ldots,\frac{\alpha_{k}}{X}\right)$ can be made arbitrarily close to $\underline{x}$, since $\rho$ is continuous. Consequently, $\mathcal{D}^{k}(A)$ is dense in $\mathcal{S}^{k - 1}$. \smallskip Similarly, for $\mathcal{D}^{k}(B)$, we note that $$\displaystyle\lim_{X \to \infty} \frac{\phi_{a_{i}X}}{\phi_{b_{i}X}} = \frac{\sqrt{a_{i}}}{\sqrt{b_{i}}} < 1 \mbox{ for all } i \mbox{ with } 1 \leq i \leq k$$ and the proof then proceeds along the same lines. \end{proof} \section{Concluding remarks: Case of algebraic number fields} Ratio sets have been studied in the context of algebraic number fields in \cite{CRS}, \cite{dense-Gauss} and \cite{Sittinger}. It is interesting to extend the notion of direction sets to the setting of number fields and to formulate analogous questions there. \smallskip Let $K \subsetneq \mathbb{R}$ be a number field of degree $d \geq 2$ and let $\mathcal{O}_{K}$ be its ring of integers. Let $\mathcal{O}_{K}^{0} := \{\alpha \in \mathcal{O}_{K} : {\rm{Tr}}_{K/\mathbb{Q}}(\alpha) = 0\}$ be the set of elements in $\mathcal{O}_{K}$ with trace $0$. 
Since $\mathcal{O}_{K}$ is a free $\mathbb{Z}$-module of rank $d$ and ${\rm{Tr}}$ is an additive group homomorphism from $\mathcal{O}_{K}$ to $\mathbb{Z}$, we see that $\mathcal{O}_{K} \cong \mathcal{O}_{K}^{0} \oplus \mathbb{Z}$. In particular, $\mathcal{O}_{K}^{0}$ is a free $\mathbb{Z}$-module of rank $d - 1$. Therefore, $\mathcal{O}_{K}^{0}$ itself is dense in $\mathbb{R}$ whenever $d \geq 3$. Also, for $d = 2$, we see that the ratio set of $\mathcal{O}_{K}^{0}$ is $\mathbb{Q}$. Consequently, the direction set of $\mathcal{O}_{K}^{0}$ is dense in $\mathcal{S}^{k - 1}$ for any integer $k \geq 2$. \smallskip We note that $\mathcal{O}_{K}^{0} \cap \mathbb{N} = \emptyset$. In view of this, we ask the following question. \begin{question} Let $d \geq 2$ and $k \geq 2$ be integers and let $K$ be a number field of degree $d$. Characterize the sets $\mathcal{A} \subseteq \mathcal{O}_{K}$ such that $\mathcal{A} \cap \mathbb{N}$ is finite and $\mathcal{D}^{k}(\mathcal{A})$ is dense in $\mathcal{S}^{k - 1}$. \end{question} \bigskip {\bf Acknowledgements.} We would like to thank IIT Guwahati for providing excellent facilities to carry out the research. The third author gratefully acknowledges the National Board of Higher Mathematics (NBHM) for the Post-Doctoral Fellowship (Order No. 0204/16(12)/2020/R \& D-II/10925).
\section{Introduction} A presentation of thermal instability is given in the classical articles \citep{Parker1953, Zanstra1955, Field1965}, in which different types of instability are derived within linear theory. Usually, in the study of the structure of the interstellar medium, the isobaric mode of thermal instability has been considered \citep{BarKrasn1977, KaplanPikelner1979, OsterbrockFerland2006}. For example, the result of the evolution of this mode was proposed to explain the observed two-phase structure (the co-existence of cold clouds and warm intercloud medium in pressure equilibrium) of the diffuse atomic interstellar medium \citep{Field1969, Wolfire1995, Wolfire2003}. The criterion for the isobaric mode is stated in terms of the derivative of the generalized heat-loss function $Q$ at constant pressure $p_0$: \begin{equation} \frac{\partial Q}{\partial T} \Bigg|_{p_0} =\Bigg( \frac{\partial Q}{\partial T} - \frac{\rho}{T} \frac{\partial Q}{\partial \rho} \Bigg) \Bigg|_{ \rho_0, T_0} > 0 , \label{eq:1} \end{equation} where $Q=\Gamma-\Lambda$ is defined as the energy gain $\Gamma$ minus energy loss $\Lambda$ (in erg g$^{-1}$ s$^{-1}$) in a static medium of density $\rho_0$ and temperature $T_0$ (i.e. $Q(\rho_0,T_0)=0$). Condition \ref{eq:1}, in the limit of small $Q$, corresponds to entropy perturbations. Significantly fewer articles are devoted to another type of thermal instability, the isentropic mode (also known as acoustic instability). This is due to the fact that special behaviour of the heat-loss function $Q$ is required to satisfy the condition for this mode (for more details, see Section \ref{sec:2.3}). 
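For a concrete heat-loss model, the isobaric criterion above reduces to evaluating the bracketed combination of partial derivatives at the equilibrium point. A minimal sketch (my illustration, not code from the article; the sample derivative values in the note below are arbitrary):

```python
def dQ_dT_isobaric(dQ_dT, dQ_drho, rho, T):
    """(dQ/dT)|_p = dQ/dT - (rho/T) * dQ/drho, evaluated at (rho_0, T_0)."""
    return dQ_dT - (rho / T) * dQ_drho

def isobaric_unstable(dQ_dT, dQ_drho, rho, T):
    """Isobaric (entropy-mode) instability criterion: the derivative above is positive."""
    return dQ_dT_isobaric(dQ_dT, dQ_drho, rho, T) > 0.0
```

For example, a medium with $\partial Q/\partial T = -1$ and $\partial Q/\partial\rho = -10$ at $\rho_0 = T_0 = 1$ (arbitrary units) is isobarically unstable, since $-1 - 1\cdot(-10) = 9 > 0$.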
The criterion for the isentropic mode is stated in terms of the derivative of $Q$ at constant entropy $s_0$: \begin{equation} \frac{\partial Q}{\partial T}\Bigg|_{s_0} = \Bigg( \frac{\partial Q}{\partial T} + \frac{\rho}{(\gamma-1) T} \frac{\partial Q}{\partial \rho} \Bigg) \Bigg|_{ \rho_0, T_0} > 0, \label{eq:2} \end{equation} where $\gamma$ is the adiabatic index. In the limit of small $Q$, condition \ref{eq:2} corresponds to nearly adiabatic acoustic waves (i.e. adiabatic perturbations). For the interstellar medium, acoustic instability was first studied in the article of \citet{Oppenheimer1977} for the molecular zone of photodissociation regions (PDRs). Further, this instability was discovered by \citet{Shchekinov1979} for the gas behind a radiating shock wave. The problems of non-linear evolution of isentropic perturbations were considered by \citet{KrasnobaevTarev1987}. They found that non-linear steepening of a wave occurs due to the growth of perturbations and is accompanied by the formation of a shock wave. The effects of non-linear steepening of a wave in magnetized plasmas were explored by \citet{Nakariakov2000}. Applying the Oppenheimer model, \citet{Krasnobaev1994} found that a sequence of self-sustained shock waves (also known as autowaves) is formed. \citet{Molevich2011} investigated analytically and numerically the non-linear evolution and structure of plane autowaves in the atomic surface layer of a PDR. However, they considered only one case, with density $n\sim10^3$ {\cmc} and incident far-ultraviolet flux $G_0=10^2$, and did not take into account cooling in the oxygen fine-structure lines, which becomes significant under these conditions \citep{Wolfire1995}. Moreover, observations of PDRs indicate that $n$ and $G_0$ vary within a very wide range of parameters \citep{HollenbachTielens1999, Okada2013}, which will be considered below. Thus, the structure of our article is as follows. We present a model of energy balance in the atomic zone of a PDR. 
The model includes fine-structure emission in the carbon and oxygen lines; see Section \ref{sec:2}. Based on this model, we define conditions under which the steady state $Q(\rho_0,T_0)=0$ satisfies criterion \ref{eq:2}. We analyse wide ranges of the far-ultraviolet field $10<G_0<10^6$, gas densities $10<n<10^6$ {\cmc} and temperatures $10<T<10^4$ K; see Section \ref{sec:3}. We use the results of previous sections to identify astrophysical objects with parameters corresponding to adiabatic perturbations and we analyse the possibility for instability to occur in them; see Section \ref{sec:4}. \section{Energy balance} \label{sec:2} Photodissociation regions are regions where the energy balance and gas chemistry are determined mainly by far-ultraviolet radiation (FUV) in the range $6$ -- $13.6$ eV. For example, a PDR is often formed at the surface of a neutral molecular cloud which is close to young O- or B-type stars. The general structure of a PDR has been studied in sufficient detail \citep{TielensHollenbach1985, Tielens2005} and can be described as follows. The medium around stars is ionized due to radiation with photon energies greater than $13.6$ eV; thereby a region of ionized hydrogen (\ion{H}{ii}) is formed. We consider the structure of a PDR assuming that the \ion{H}{ii} region has reached pressure equilibrium with the surrounding medium. Radiation with energy $<13.6$ eV penetrates beyond the ionization front into the interstellar medium, dissociates molecular hydrogen H$_2$ in the Lyman and Werner bands ($11.2$ -- $13.6$ eV) and ionizes carbon. A neutral zone of atomic hydrogen (\ion{H}{i}) is formed; it is characterized by small admixtures of heavy elements, mainly carbon ions (\ion{C}{ii}) and oxygen atoms (\ion{O}{i}). As the distance from the stars increases and the FUV flux decreases, \ion{C}{ii} transitions into carbon monoxide (CO) in the molecular cloud. At a greater distance, atomic oxygen transforms into molecular O$_2$. 
In this article, we will focus on the \ion{H}{i} zone in a PDR (it is located between the ionization and dissociation fronts). Heating of atomic gas can occur through the following main processes: the photoelectric effect on large molecules and small dust grains; photopumping of H$_2$ molecules followed by collisional de-excitation of the resulting vibrationally excited species; neutral carbon photoionization. The last process is usually negligible compared with photoelectric emission. However, the FUV-pumped H$_2$ emission at high densities ($n>10^{4-5}$ {\cmc}) can be important and is of the same order as the photoelectric effect when the Lyman and Werner radiation fields are absorbed by H$_2$ lines rather than by dust \citep{Burton1990}. A comparison of the H$_2$ line and dust absorption rates can be obtained from the steady-state H$_2$ formation-destruction equation, i.e. if we examine the ratio of the dissociation rate to the H$_2$ formation rate (or the atomic-to-molecular density ratio), which takes into account the attenuation of radiation. This ratio is expressed in the simplest approximation (\citealt{DraineBertoldi1996, HollenbachTielens1999}; for more details see \citealt{Sternberg2014}) as the ratio of the incident FUV flux $G_0$ (measured in units of $1.6\times10^{-3}$ \,erg\,cm$^{-2}$\,s$^{-1}$: \citealt{Habing1968}) to the density of hydrogen nuclei $n$. The critical value of $G_0/n$ is approximately equal to $0.04$ cm$^3$; it corresponds to atomic and molecular column densities of $N(\ion{H}{i})=N({\rm H}_2)\sim10^{21}$ cm$^{-2}$ at the dissociation front, or visual extinction $A_{\rm V}\sim1$. If $G_0/n$ exceeds the critical value, then dust opacity becomes important. Thus, when $G_0/n < 0.04$ cm$^3$ ($A_{\rm V}<1$), gas heating by H$_2$ pumping is significant and, conversely, this heating is unimportant for $G_0/n > 0.04$ cm$^3$ (the \ion{H}{i}/H$_2$ transition zone corresponds to $A_{\rm V}\sim1$--$2$). 
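The $G_0/n$ threshold above can be encoded in a small helper. This is an illustrative sketch (not from the article), with the critical value $0.04$ cm$^3$ taken from the text:

```python
G0_OVER_N_CRIT = 0.04  # cm^3, approximate critical value quoted in the text

def h2_pumping_significant(G0, n):
    """
    True when FUV pumping of H2 heats the gas at a level comparable to the
    photoelectric effect, i.e. when G0/n is below the critical value
    (the A_V < 1 regime). G0 is the incident FUV flux in Habing units;
    n is the density of hydrogen nuclei in cm^-3.
    """
    return G0 / n < G0_OVER_N_CRIT
```

For a typical PDR with $G_0/n\sim0.1$--$1$ cm$^3$ the helper returns False, matching the regime $G_0/n>0.04$ cm$^3$ adopted below.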
For typical PDRs we have the average value $G_0/n\sim0.1$--$1$ cm$^3$ \citep{Tielens2005}. Hereafter we will consider $G_0/n> 0.04$ cm$^3$. According to the concepts of PDR structure, the energy balance of the \ion{H}{i} zone is determined mainly by photoelectric heating from dust grains and gas cooling through infrared fine-structure lines of atoms and ions. In the next subsections, we consider the physical processes in detail. \subsection{Heating} Photoelectric emission from dust grains and polycyclic aromatic hydrocarbon molecules (PAH) dominates heating in the atomic zone of PDRs. Photoelectric heating from interstellar grains (for brevity, the PAH will be called grains) was first described by \citet{Spitzer1948}. This description was improved by \citet{TielensHollenbach1985, BakesTielens1994, Wolfire2003, WeingartnerDraine2001b}. We use the modification of the heating $\Gamma_{\rm pe}$ proposed by \citet{WeingartnerDraine2001b}, which takes dust grain-size distributions into account. We also consider the energy loss $\Lambda_{\rm pe}$ in the gas due to the accretion of charged particles on to the grains (it is significant for high temperature $T>10^3$ K). Heating and cooling are reproduced by the following functions \[ \begin{split} &\Gamma_{\rm pe}=10^{-26} \, \, {\rm \,ergs\,s^{-1}} \\ & \times \frac{G_0}{m_{\rm H}}\, \frac{C_0+C_1T^{C_4} }{1+C_2(G_0 \sqrt{T}/n_e)^{C_5} (1+C_3 (G_0 \sqrt{T} /n_e)^{C_6} )} \, , \end{split} \] \[ \begin{split} &\Lambda_{\rm pe}=10^{-28} \, \, {\rm \,ergs\,cm^3\,s^{-1}} \frac{n_e} {m_{\rm H}} \,T^{(D_0+D_1/\chi)} \\ & \times \exp(D_2+D_3 \chi - D_4 \chi^2 ) \, \, \, \, \, \, {\rm for} \, \, \, \, \chi=\ln(G_0 \sqrt{T}/n_e) \, , \end{split} \] where $n_e$ denotes the number density of electrons and $m_{\rm H}$ is the mass of the hydrogen atom. Almost all carbon near the surface of PDRs is ionized, hence $n_e=\xi_{\rm C}\,n$, where $n$ is the number density of hydrogen and $\xi_{\rm C}$ is the carbon abundance in the gas. 
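The fitting formulae above translate directly into code. The sketch below (my illustration) takes the $C$ and $D$ coefficient sets as inputs, since their values depend on $R_{\rm V}$, $b_{\rm C}$ and the radiation field and must be taken from the tables of \citet{WeingartnerDraine2001b}; any coefficients used to exercise this sketch are placeholders, not the published values.

```python
import math

M_H = 1.6726e-24  # g, mass of the hydrogen atom

def gamma_pe(G0, T, n_e, C):
    """Photoelectric heating rate per gram (erg g^-1 s^-1), per the fit above."""
    C0, C1, C2, C3, C4, C5, C6 = C
    psi = G0 * math.sqrt(T) / n_e  # grain charging parameter
    return (1e-26 * (G0 / M_H) * (C0 + C1 * T ** C4)
            / (1.0 + C2 * psi ** C5 * (1.0 + C3 * psi ** C6)))

def lambda_pe(G0, T, n_e, D):
    """Recombination cooling rate per gram (erg g^-1 s^-1), per the fit above."""
    D0, D1, D2, D3, D4 = D
    chi = math.log(G0 * math.sqrt(T) / n_e)
    return (1e-28 * (n_e / M_H) * T ** (D0 + D1 / chi)
            * math.exp(D2 + D3 * chi - D4 * chi ** 2))

def net_photoelectric(G0, T, n_e, C, D):
    """Net grain photoelectric heating, Gamma = Gamma_pe - Lambda_pe."""
    return gamma_pe(G0, T, n_e, C) - lambda_pe(G0, T, n_e, D)
```

With $n_e=\xi_{\rm C}\,n$ supplied by the caller, this mirrors the dependence of $\Gamma$ on the charging parameter $G_0\sqrt{T}/n_e$ discussed below.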
Coefficients $C_0,...,C_6$ and $D_0,...,D_4$ are given in \citet{WeingartnerDraine2001b} and depend on the dust properties (grain size, composition) and the radiation field spectrum. According to \citet{WeingartnerDraine2001a}, grain size distributions are consistent with the observed extinction of starlight, which varies depending on the environment through which light travels. Extinction variations can be parameterized by the ratio of visual extinction to reddening $R_{\rm V}=A_{\rm V}/E_{\rm B-V}$ \citep{CardelliClaytonMathis1989}. A diffuse interstellar medium with density $n\leqslant10^2$ {\cmc} corresponds to $R_{\rm V}\sim3.1$; higher values $R_{\rm V}\sim5$--$6$ are observed for dense clouds $n>10^4$ {\cmc} and intermediate-density regions correlate with $R_{\rm V}\sim4$. Moreover, \citet{WeingartnerDraine2001a} showed that grain size distributions reproduce the observed extinction better if the contribution of very small carbonaceous grains is considered. They constructed the size distributions for various combinations of $R_{\rm V}$ and $b_{\rm C}$, where $b_{\rm C}$ is the C abundance (per H nucleus) in very small grains (radius $\leqslant 100$ \AA). \citet{LiDraine2001} found that the emission observed from dust in the diffuse interstellar medium and the corresponding extinction curve agree better when $b_{\rm C}$ takes the maximum value among all possible variations at the given $R_{\rm V}$ (i.e. $b_{\rm C}=6\times10^{-5}$ at $R_{\rm V}=3.1$). \citet{WeingartnerDraine2001a} suggest that this assumption also holds in denser regions, therefore the largest allowed values $b_{\rm C}=4\times10^{-5}$ and $b_{\rm C}=3\times10^{-5}$ can be used for $R_{\rm V}=4$ and 5.5, respectively. Hereafter, for simplicity these combinations will be considered. However, when applying these results to observations, we also provide an example with $b_{\rm C}=0$ and $R_{\rm V}=5.5$ (see Section \ref{sec:4}, Carina N). In addition, we make the following assumptions. 
First, the grain size distributions are constructed so as to minimize the influence of carbon and silicate inclusions (case A by \citealt{WeingartnerDraine2001a}). Secondly, we adopt a blackbody radiation field with colour temperature $T_{\rm c}=3\times10^4$ K. As a result, the total photoelectric heating is represented as \[\Gamma(n,T,G_0,\xi_{\rm C},R_{\rm V},b_{\rm C})=\Gamma_{\rm pe}-\Lambda_{\rm pe} \, . \] The function $\Gamma$ is mainly dependent on the gas temperature $T$ and the grain charge parameter $G_0 \sqrt{T}/n_e$, which characterizes the ratio of ionization and recombination rates of grains. An increase in $G_0 \sqrt{T}/n_e$ leads to a higher grain charge and therefore the heating efficiency $\Gamma/G_0$ decreases \citep{BakesTielens1994}. The properties of the gas-dust medium yield a lower heating efficiency for dense regions characterized by $R_{\rm V}=5.5$ ($b_{\rm C}=3\times10^{-5}$) than for diffuse regions with $R_{\rm V}=3.1$ ($b_{\rm C}=6\times10^{-5}$) \citep{WeingartnerDraine2001b}. \subsection{Cooling} \label{subsec:2.2} The atomic gas of PDRs is cooled predominantly through the fine-structure excitation of ions and atoms by atomic hydrogen impact. The largest contribution to the gas cooling comes from the [\ion{C}{ii}] 158, [\ion{O}{i}] 63 and [\ion{O}{i}] 146 {\micron} lines \citep{TielensHollenbach1985, Hollenbach1991, Tielens2005}.
The radiative cooling rate due to the transition from upper level 2 to lower level 1 of some species is given by \begin{equation*} \begin{split} & \Lambda_{21}=\xi \, E_{21} \, A_{21} \, \beta(\tau_{21}) \\ & \times \frac{1 }{m_{\rm H}\left[1+{g_1}/{g_2} \, \exp({E_{21}}/{k_{\rm B} T}) (1+ \beta(\tau_{21}) \, {n_{\rm cr}}/{n})\right]} \, , \end{split} \end{equation*} where $E_{21}$ is the energy difference between the two levels, $A_{21}$ is the spontaneous transition probability, $g_2$ and $g_1$ are the statistical weights of the two levels, $k_{\rm B}$ is the Boltzmann constant and $\xi$ is the abundance ($\xi_{\rm C}$ for carbon and $\xi_{\rm O}$ for oxygen). The critical density for de-excitation processes is $n_{\rm cr}=A_{21}/\gamma_{21}$, i.e. roughly the density above which the levels thermalize collisionally. Here, $\gamma_{21}$ is the collisional de-excitation rate coefficient for atomic hydrogen collisions (Table~\ref{tab:1}). Parameter $\tau_{21}$ is the optical depth averaged over the line and $\beta(\tau_{21})$ is an escape probability at optical depth $\tau_{21}$ of the line. In the limit of small optical depth, $\beta\sim0.5$ (in a semi-infinite slab); at large optical depth, $\beta\sim1/\tau_{21}$ \citep{Jong1980}. \begin{table} \centering \caption{Parameters of the cooling} \label{tab:1} \begin{tabular}{lcccc} \hline Species & ${\lambda_{21}}^a$ & $E_{21}$ & $A_{21}$ & $\gamma_{21}$ \\ & ({\micron})& (K) & (s$^{-1}$) &(cm$^3$\,s$^{-1}$)\\ \hline \ion{C}{ii} & 158 & 92 & $2.4\times10^{-6}$&$ 8.12\times10^{-10} T^{0.02}$\\ \ion{O}{i} & 63 & 228 & $8.95\times 10^{-5}$&$ 4.2\times10^{-12} T^{0.67}$\\ \ion{O}{i} & 146 & 98 & $1.7\times 10^{-5}$&$1.45\times10^{-11} T^{0.44}$\\ \hline \end{tabular} \begin{tabular}{l} Note. $^a$ The wavelength of the $2\rightarrow1$ transition \end{tabular} \end{table} Next, we estimate approximately the relations between the optical depths of these lines.
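The two-level cooling expression and the Table 1 parameters can be transcribed directly. In the Python sketch below, the statistical-weight ratios $g_1/g_2$, which are not listed in the table, are the standard fine-structure values for these transitions (added here as an assumption of the sketch); as a consistency check, the recovered critical density of the [O I] 63 um line at $T=10^3$ K is close to the $n_{\rm cr}\sim10^5$ cm$^{-3}$ quoted in Section 2.3.

```python
import math

K_B = 1.3807e-16      # Boltzmann constant, erg K^-1
M_H = 1.6726e-24      # hydrogen mass, g

# (E21/k_B [K], A21 [s^-1], gamma21 prefactor, gamma21 T-exponent,
#  g1/g2).  The first four columns follow Table 1; the weight ratios
#  are the standard fine-structure values, not listed in the table.
LINES = {
    "CII158": (92.0, 2.4e-6, 8.12e-10, 0.02, 2.0 / 4.0),
    "OI63":   (228.0, 8.95e-5, 4.2e-12, 0.67, 5.0 / 3.0),
    "OI146":  (98.0, 1.7e-5, 1.45e-11, 0.44, 3.0 / 1.0),
}

def cooling_rate(line, n, T, xi, beta=0.5):
    """Radiative cooling per unit mass (erg g^-1 s^-1) for one
    fine-structure line, in the two-level escape-probability form
    used in the text."""
    E21_K, A21, g0, gexp, g_ratio = LINES[line]
    gamma21 = g0 * T**gexp                # de-excitation coefficient
    n_cr = A21 / gamma21                  # critical density
    E21 = E21_K * K_B                     # level separation in erg
    occ = 1.0 + g_ratio * math.exp(E21_K / T) * (1.0 + beta * n_cr / n)
    return xi * E21 * A21 * beta / (M_H * occ)
```

At $n\gg n_{\rm cr}$ the bracketed factor saturates, so the cooling rate per unit mass becomes nearly independent of $n$, which is the behaviour invoked in Section 2.3.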
Let $N_{\tau{21}}$ be the column density of hydrogen nuclei required for unit optical depth in the level $2\rightarrow$ level $1$ transition under the assumption that all atoms or ions of the corresponding element are in the lower level (an expression for $N_{\tau{21}}$ can be found in \,\citealt{TielensHollenbach1985}), i.e. $\sigma_{21} \xi N_{\tau{21}}=1$, where $\sigma_{21}$ is the absorption cross-section for the level $2\rightarrow$ level $1$ transition. For any $\tau_{21}$, we introduce $\tau_{21}= N/N_{\tau{21}}$, where $N$ is the column density of hydrogen nuclei. For the [\ion{C}{ii}] 158, [\ion{O}{i}] 63, and [\ion{O}{i}] 146 {\micron} lines, we denote the column densities $N_{\tau{21}}$ as $N_{\rm \tau{C}}$, $N_{\rm \tau{O63}}$ and $N_{\rm \tau{O146}}$, respectively, and consequently we have the optical depths $\tau_{21}$ as $\tau_{\rm C}=N/N_{\tau{\rm C}}$, $\tau_{\rm O63}=N/N_{\tau{\rm O63}}$, and $\tau_{\rm O146}=N/N_{\rm \tau{O146}}$, respectively. Therefore, $\tau_{\rm O63}/\tau_{\rm C}=N_{\rm \tau{C}}/N_{\rm \tau{O63}}=0.72\times \xi_{\rm O}/\xi_{\rm C}$ and $\tau_{\rm O146}/\tau_{\rm C}=N_{\rm \tau{C}}/N_{\rm \tau{O146}}=0.92\times \xi_{\rm O}/\xi_{\rm C}$. In the \ion{H}{i} zone of PDRs we usually have $\tau_{\rm C}<1$ \citep{Tielens2005}. The measured carbon and oxygen abundances vary between photodissociation regions. According to observations, the ratio $\xi_{\rm O}/\xi_{\rm C}$ is approximately equal to two. For example, values of $\xi_{\rm C}$ are assumed to be $\xi_{\rm C}=1.4\times 10^{-4}$ \citep{Cardelli1996} or $\xi_{\rm C}=1.6\times 10^{-4}$ \citep{Sofia2004}, while $\xi_{\rm O}=3.2\times 10^{-4}$ \citep{Meyer1998}. However, there are also higher estimates for the abundances: for instance, in the Orion Bar: $\xi_{\rm C}=3\times 10^{-4}$ and $\xi_{\rm O}=(4$ -- $5)\times 10^{-4}$ \citep{Wolfire1995, Shaw2009}.
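For the typical abundances just quoted, these ratios give a quick numerical check:

```python
# optical-depth ratios relative to the [CII] 158 um line, from the
# relations tau_O63/tau_C = 0.72 xi_O/xi_C and
# tau_O146/tau_C = 0.92 xi_O/xi_C, for the typical abundances
xi_C, xi_O = 1.4e-4, 3.2e-4
ratio_O63 = 0.72 * xi_O / xi_C    # tau_O63 / tau_C, ~1.6
ratio_O146 = 0.92 * xi_O / xi_C   # tau_O146 / tau_C, ~2.1
```

Since both ratios exceed unity, the oxygen lines become optically thick before the [C II] 158 um line does as the column density grows.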
Total energy losses in the lines considered are represented by \[ \Lambda(n,T, \xi_{\rm C}, \xi_{\rm O} ,\tau_{\rm C})= \Lambda_{\rm CII158}+ \Lambda_{\rm OI63}+\Lambda_{\rm OI146} \, . \] Radiative cooling has its largest value when the optical depth is small, i.e. $\tau_{21} \rightarrow 0$ ($\beta \sim0.5$). This may occur near the boundary of the PDR and \ion{H}{ii} region. Thus, we have shown that the generalized heat-loss function $Q=\Gamma-\Lambda$ depends on the parameters of the gas-dust medium and the radiation passing through it (i.e. $n, T, G_0, R_{\rm V}, b_{\rm C}$); also, $Q$ depends on the cooling line opacity ($\tau_{\rm C}$) and the abundances of heavy elements ($\xi_{\rm C}, \xi_{\rm O}$). \subsection{Isentropic criterion} \label{sec:2.3} For interstellar gas, acoustic instability was first demonstrated by \citet{Oppenheimer1977}. He noted that this instability can be understood as the preferential heating of the compressed regions of a sound wave. It happens if the heating rate (in ergs cm$^{-3}$ s$^{-1}$) is an increasing function of $n$ or $T$ under conditions where the cooling rate is relatively insensitive to $n$ or $T$ (see Fig. \ref{fig:1}(a) for our model of energy balance). Oppenheimer found such conditions in the molecular regions of PDRs, where the molecular transitions governing the cooling of the gas are thermalized (this occurs at high density) and strong heat sources are present. Here, the heating rate usually varies at least as rapidly as $n$ and the cooling rate is almost independent of density. Notice that at high density the sign of the derivative $\partial Q/\partial \rho$ determines the sign of the isentropic criterion \ref{eq:2}. We shall verify that similar conditions are satisfied for the atomic zone of PDRs at high density. Indeed, we can see that photoelectric heating is an increasing function of density $n$ \citep{BakesTielens1994} and the cooling rate depends weakly on $n$ when $n>n_{\rm cr}$.
The justification for the behaviour of the cooling rate can be as follows. The line [\ion{O}{i}] 63 $\micron$ becomes an important component of the total cooling rate with the increase of density $n$ and FUV field $G_0$ (where $G_0$ influences the steady-state temperature $T_0$) and it becomes dominant at high $n$ and $G_0$ \citep{TielensHollenbach1985}. For the cooling line [\ion{O}{i}] 63 $\micron$, we have the value $n_{\rm cr} \sim 10^5$ \cmc (where $n_{\rm cr}=A_{21}/\gamma_{21}$, see Table~\ref{tab:1}). At $n > n_{\rm cr}$, the total cooling rate depends weakly on $n$. This behaviour of the rates is shown in Fig. \ref{fig:1}(a). Thus, by analogy with \citet{Oppenheimer1977}, we assume that the isentropic type can arise in the dense atomic zone of photodissociation regions. However, this is quite a rough estimate. Exact knowledge of the conditions under which the isentropic mode will grow can be obtained through a direct application of the corresponding criterion, i.e. checking the positivity of the derivative $(\partial Q/\partial T)|_{s_0}>0$. The locus of the heat-loss function $Q$ satisfying this criterion is shown in Fig. \ref{fig:1}(b). Assume that we have a static, homogeneous gas in thermal equilibrium at some $n$ and $T$. We have constructed Fig. \ref{fig:1}(b) for typical parameters causing acoustic instability. The region above the curve of thermal balance corresponds to $Q<0$, because cooling exceeds heating if the temperature exceeds the equilibrium value for a given density. Conversely, the region below the curve corresponds to $Q>0$. We consider a small inhomogeneity embedded in this medium and perturb it away from the equilibrium curve along the locus $s\propto \ln(p/\rho^\gamma)={\rm constant}$ (where $p/\rho^\gamma \propto T/n^{\gamma-1}$). Let the inhomogeneity exist at point $A$; we displace it slightly to lower (higher) temperatures and lower (higher) densities along the locus $s={\rm constant}$.
According to the diagram, the inhomogeneity enters a region where $Q>0$ ($Q<0$), i.e. where the heating exceeds the cooling (or vice versa). Thus the inhomogeneity must heat up (cool down) again and return toward the point $A$. All gas located in the region $A$ is thermally stable. Now let us consider the case in which the inhomogeneity exists in the square region in Fig. \ref{fig:1}(b), e.g. at the point B. If we take a piece of such a medium and displace it toward lower (higher) temperatures and lower (higher) densities, it will now enter a region where $Q<0$ ($Q>0$), i.e. where cooling exceeds heating (or vice versa). Thus, maintaining the same entropy as its surroundings, such a medium would get cooler (hotter) and more rarefied (denser), until it makes a transition to the thermally stable state, e.g. to the region $A$ for $Q<0$, or until the heating and compression are stopped for some reason in the case $Q>0$. Gas placed in region $B$ is thermally unstable by the isentropic criterion. A medium placed in the unstable region would therefore co-exist in two states, cold rarefied gas and warm dense gas, at a common entropy $s$. Investigations of acoustic perturbations \citep{KrasnobaevTarev1987, Molevich2011} and also our calculations (see below) confirm these features. \begin{figure} \begin{minipage}{0.49\columnwidth} \center{\includegraphics[width=\columnwidth]{fig1a} \\ (a)} \end{minipage} \begin{minipage}{0.45\columnwidth} \center{\includegraphics[width=\columnwidth]{fig1b} \\ (b)} \end{minipage} \caption{Behaviour of heating and cooling functions ($G_0=10^5$, $R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$, $\xi_{\rm C}=1.4\times10^{-4}$, $\xi_{\rm O}=3.2\times10^{-4}$ and $\beta=0.5$): (a) total heating $\Gamma$ and cooling $\Lambda$ rates at $T=10^3$ K (where $\Gamma_{\rm pe}$ and $\Lambda_{\rm OI63}$ are dominant processes); (b) contour of thermal balance, $Q(n, T) = 0$ (solid curve), with the locus for constant entropy $s$ (dashed line).
Only the part of the locus inside the square (top right) is thermally unstable by the isentropic criterion.} \label{fig:1} \end{figure} Notice that the heat-loss function $Q$ depends not only on the variables $n$ and $T$ but also on the set of parameters that define the conditions in the interstellar medium (i.e. the gas-dust properties and the radiation passing through the medium represented by $R_{\rm V}$, $b_{\rm C}$ and $G_0$, the cooling-line opacity represented by $\tau_{\rm C}$, and the abundances of heavy elements represented by $\xi_{\rm C}$ and $\xi_{\rm O}$). Therefore, to find the conditions for isentropic instability growth, we calculate $(\partial Q/\partial T)|_{s_0}$ and find the conditions for its positivity (Section \ref{sec:3}). \section{PDR parameters causing instability} \label{sec:3} To study the instability evolution of travelling waves, we start with consideration of its general features. Thus in Section \ref{sec:unstable} we present a theoretical description of isentropic thermal instability, followed by a numerical simulation. As the PDR characteristics vary over a very wide range, in Section \ref{sec: detect} we provide a multivariable analysis to show that the instability criterion \ref{eq:2} is satisfied. \subsection{Evolution of unstable perturbations} \label{sec:unstable} To describe the gas motion in the atomic zone of a PDR, we consider the system of gas dynamics equations \begin{equation*} \begin{split} &\frac{d \rho}{dt}+\rho \, \mathrm{div} \, \textbf{v}=0 \, , \\ & \frac{d \textbf{v}}{dt} +\frac{1}{\rho} \mathrm{grad} \, p=0 \, , \\ &\frac{d}{dt} \Big(\frac{p}{(\gamma-1)\rho} \Big)+\frac{p}{\rho} \, \mathrm{div} \, \textbf{v}=Q \, . \end{split} \end{equation*} Here $\rho=n m_{\rm H}$, $p=\rho R T$, $t$ and ${\bf v}$ are the mass density, pressure, time and gas velocity, $R=k_{\rm B}/m_{\rm H}$ is the specific gas constant and $\gamma=5/3$ is the adiabatic index. We consider one-dimensional plane motion with velocity $u$ along the $x$ coordinate.
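The conservative form of this system can be integrated with a scheme of the kind used below in Section \ref{sec:unstable} (a TVD Lax-Friedrichs method). The following Python sketch is its first-order, non-TVD variant with the heat-loss source added to the energy equation, written in dimensionless units with $R=1$ and periodic boundaries; it illustrates the structure of one time step and is not the production scheme of the paper.

```python
import numpy as np

def lax_friedrichs_step(U, dx, dt, Q, gamma=5.0 / 3.0):
    """One first-order Lax-Friedrichs step for the 1D Euler equations
    with a heat gain/loss source Q(rho, T) (per unit mass) in the
    energy equation.  U rows: [rho, rho*u, E], with
    E = p/(gamma-1) + 0.5*rho*u**2, periodic boundaries, R = 1."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    F = np.array([mom, mom * u + p, (E + p) * u])   # physical fluxes
    # Lax-Friedrichs update: average neighbours, difference fluxes
    Up = np.roll(U, -1, axis=1)
    Um = np.roll(U, 1, axis=1)
    Fp = np.roll(F, -1, axis=1)
    Fm = np.roll(F, 1, axis=1)
    Unew = 0.5 * (Up + Um) - dt / (2.0 * dx) * (Fp - Fm)
    # heat source per unit volume is rho * Q (Q is per unit mass)
    T = p / rho                      # temperature in units where R = 1
    Unew[2] = Unew[2] + dt * rho * Q(rho, T)
    return Unew
```

With $Q=0$ a uniform state is an exact fixed point of the step, and a constant heating rate raises only the energy component, which makes the scheme easy to sanity-check before adding a realistic heat-loss function.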
The steady state is characterized by $\rho=\rho_0$ and $T=T_0$ at $u=0$ such that $\Gamma(\rho_0,T_0)=\Lambda(\rho_0,T_0)=\Lambda_0$ and $Q(\rho_0,T_0)=0$. We assume that the characteristic parameters of gas motion are the density $\rho_0$, temperature $T_0$, isothermal sound speed $u_0=\sqrt{RT_0}$, time of cooling $t_0=R T_0/\Lambda_0$ and length-scale $l_0=u_0 t_0$. {We study the short-wavelength regime of the wave mode of thermal instability found by \citet{Field1965}. In this case, the wave mode satisfies the isentropic criterion \ref{eq:2} and its growth rate is given by the expression \begin{equation*} \omega=\frac{(\gamma-1)^2}{2\gamma R}\Big(\frac{\partial Q}{\partial T}+\frac{\rho}{(\gamma-1)T} \frac{\partial Q}{\partial \rho}\Big) \Big|_{\rho_0, T_0} \, , \end{equation*} which is also similar to equation 1.8 in \citet{KrasnobaevTarev1987} (where they use $Q$ per unit volume and time, which differs from our notation). The characteristic time of perturbation growth for the isentropic type is $t_{\rm inst}=1/\omega$. The short-wavelength limit is satisfied when the time $t_{\rm inst}$ exceeds the sound-crossing time $t_{\rm s}=\lambda/(\sqrt{\gamma}\, u_0) \sim t_0 \lambda/l_0$ \citep{Vazquez2003}, where $\lambda$ is a wavelength. For typical parameters of PDRs, the cooling time $t_0 < 10^2$ yr and hence $t_{\rm s} < 10^2 \lambda/l_0$ yr, whereas usually the time of perturbation growth is $t_{\rm inst} > 10^2$ yr. Therefore, to satisfy the regime $t_{\rm inst} > t_{\rm s}$ we assume, for simplicity, that the wavelength $\lambda$ is of the same order as the characteristic length-scale $l_0$. The condition $t_{\rm inst} > t_{\rm s}$ permits us to use the weak non-linear theory of \citet{KrasnobaevTarev1987}. This theory allows us to study the propagation of non-linear stationary waves of finite amplitude and verify the simulation results. } The influence of dissipative processes, e.g.
thermal conductivity, on $t_{\rm inst}$ is seen in the existence of an upper limit on the wavenumber, above which the growth of perturbations is inhibited. In the general case, the damping of perturbations in the short-wavelength limit follows from the theory of travelling waves in a thermally conducting medium, which was investigated by \citet{Landau1987}. Applied to thermal instability, the damping effect was obtained by \citet{Field1965}. The influence of thermal conductivity in PDRs is discussed in Section \ref{sec:4}. Studies of the isentropic mode \citep{KrasnobaevTarev1987, Krasnobaev1994, Molevich2011} show that the growth of initially small perturbations at the non-linear evolution stage is accompanied by the formation of a sequence of self-sustained shock waves (autowaves). \citet{Krasnobaev2016} found numerically that the waves reach saturation and hence have a maximum amplitude that is determined by the heat-loss function $Q$ and depends weakly on the parameters of the initial perturbations (wavelength $\lambda$ and amplitude $a$). We assume that $\lambda=2 l_0$. Fig. \ref{fig:2} gives an example of perturbation evolution that begins with a single pulse described by $u/u_0=a \sin(\pi x/\lambda)$ at $a=0.1$, $n/n_0=1$ and $p/p_0=1$ for $0<x<\lambda$. The wave evolution is calculated by the total variation diminishing Lax-Friedrichs scheme. \begin{figure*} \includegraphics[width=2\columnwidth]{fig2} \caption{Evolution of velocity perturbations $u$ for $n_0=5\times10^5$ {\cmc}, $T_0=943$ K, $G_0=10^5$, $R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$, $\beta(\tau_{21})=0.5$, $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$. The characteristic time of perturbation growth is $t_{\rm inst}=344$ yr, $t_{\rm inst}/t_0=20$.
The distributions of density $n$ and pressure $p$ at $t/t_0=40$ are shown, as is the generalized heat-loss function $Q$ (dashed curve) at $t/t_0=150$ ($\Lambda_0=1.47\times10^2$ erg\,g$^{-1}$\,s$^{-1}$).} \label{fig:2} \end{figure*} Fig. \ref{fig:2} shows that, over a time of about $t_{\rm inst}/t_0\sim20$, the velocity perturbation grows (also, perturbations of $n$ and $p$ increase, which we can see at $t/t_0=40$) and then a shock wave forms. The gas state behind the initial perturbation is not steady and therefore a secondary wave arises. Consequently a sequence of shock waves is generated, which is shown at $t/t_0=150$. The function $Q$ in Fig. \ref{fig:2} shows typical properties of isentropic oscillations. In particular, perturbations are subject to a slight heating during the compression phase, which tends to increase the amplitude of the wave. We consider the distance $L$ between the source of the initial perturbation and the primary wave when the secondary wave begins to form (see Fig. \ref{fig:2} at $t/t_0=20$). We can estimate $L$ by the expression \begin{equation*} L \sim a_0 \, t_{\rm inst} \, \, \, \, \, \, \textrm{or} \, \, \, \,\, L/l_0 \sim \sqrt{\gamma} \, t_{\rm inst}/t_0 \end{equation*} {where $a_0=\sqrt{\gamma R T_0}$} is the adiabatic sound speed. Notice that the distance between the primary and secondary waves will increase with time, due to the difference between their velocities. \subsection{Detection of parameters causing instability} \label{sec: detect} We consider the following parameters causing instability: $n_0, T_0, G_0, R_{\rm V}, b_{\rm C}, \tau_{\rm C}, \xi_{\rm C}$, and $\xi_{\rm O}$, for which the heat-loss function $Q$ satisfies criterion \ref{eq:2}. The density $n$, temperature $T$ and FUV field $G_0$ of PDRs vary over wide ranges \citep{Tielens2005}: \begin{equation} 10<n<10^6 \textrm{{\cmc}} \, , \, \, \, 10<T<10^4 \, \textrm{K} \, , \, \, \, 10<G_0<10^6 \,.
\label{eq:3} \end{equation} We want to find the range of parameters causing instability within the intervals \ref{eq:3} and for the values of $R_{\rm V}, b_{\rm C}, \tau_{\rm C}, \xi_{\rm C}$, and $\xi_{\rm O}$ considered in Section \ref{sec:2}. First, we consider variations of $R_{\rm V}$ and $b_{\rm C}$, which characterize the dust properties for typical abundances of carbon $\xi_{\rm C}=1.4\times10^{-4}$ and oxygen $\xi_{\rm O}=3.2\times10^{-4}$. We also assume small optical depths for the cooling lines. Secondly, we investigate the influence of optical depths on the range of parameters causing instability. We vary $\tau_{\rm C}$ from 0 to 1, where $\tau_{\rm C}\sim0$ corresponds to the position of matter near the PDR surface, while $\tau_{\rm C}=1$ corresponds to a position further into the PDR. Thirdly, we study the contribution of carbon C and oxygen O to the variations of parameters causing instability. \subsubsection{$R_{\rm V}$ variations} \label{sec:Rv} As discussed in Section \ref{sec:2}, in diffuse regions the combination of $R_{\rm V}$ and $b_{\rm C}$ has the best agreement with observations of dust grain-size distributions when $b_{\rm C}$ attains its largest allowed values. Therefore, we consider three typical combinations: $R_{\rm V}=3.1$, $b_{\rm C}=6\times10^{-5}$ -- diffuse interstellar medium; $R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$ -- dense clouds; $R_{\rm V}=4$, $b_{\rm C}=4\times10^{-5}$ -- intermediate-density regions. Applying criterion \ref{eq:2} over the intervals \ref{eq:3} for $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$ shows that instability appears in dense regions with $10^5 \lesssim n_0<10^6$ {\cmc}. Such dense gas usually corresponds to high values of the ratios of visual extinction to reddening, for example $R_{\rm V}=5.5$. Smaller values, $R_{\rm V}=3.1$ and 4, are characterized by smaller density, $n_0 \lesssim 10^4$ {\cmc}, while instability can occur only when $n_0 \gtrsim10^5$ {\cmc} (see Fig.~\ref{fig:3}).
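The multivariable analysis described here amounts to a scan of the $(n_0, G_0)$ plane: solve $Q=0$ for the equilibrium temperature and evaluate the isentropic derivative $(\partial Q/\partial T)|_{s}$ by finite differences (using $\rho\,\partial/\partial\rho=n\,\partial/\partial n$). The following Python skeleton shows the structure of such a scan for a user-supplied heat-loss function; the toy $Q$ used in the check below is illustrative only and is not the PDR model of Section 2.

```python
def scan_instability(Q, n_grid, G0_grid, T_bounds=(10.0, 1e4),
                     gamma=5.0 / 3.0):
    """Scan the (n0, G0) plane: solve Q(n0, T0, G0) = 0 for the
    equilibrium temperature T0 by bisection, then flag points where
    the isentropic derivative (dQ/dT)|_s is positive.
    Q is a user-supplied heat-loss function per unit mass."""
    unstable = []
    for n0 in n_grid:
        for G0 in G0_grid:
            if G0 / n0 <= 0.04:         # photoelectric-heating regime
                continue
            lo, hi = T_bounds
            if Q(n0, lo, G0) * Q(n0, hi, G0) > 0:
                continue                # no equilibrium in this range
            for _ in range(60):         # bisection for T0
                mid = 0.5 * (lo + hi)
                if Q(n0, lo, G0) * Q(n0, mid, G0) <= 0:
                    hi = mid
                else:
                    lo = mid
            T0 = 0.5 * (lo + hi)
            # isentropic derivative by centred finite differences:
            # (dQ/dT)|_s = dQ/dT + n/((gamma-1) T) * dQ/dn
            dT, dn = 1e-4 * T0, 1e-4 * n0
            dQdT = (Q(n0, T0 + dT, G0) - Q(n0, T0 - dT, G0)) / (2 * dT)
            dQdn = (Q(n0 + dn, T0, G0) - Q(n0 - dn, T0, G0)) / (2 * dn)
            if dQdT + n0 / ((gamma - 1.0) * T0) * dQdn > 0:
                unstable.append((n0, G0, T0))
    return unstable
```

A toy $Q$ with density-proportional heating and density-independent cooling (the regime discussed in Section 2.3) is flagged unstable at every equilibrium point found, as expected.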
As a result, isentropic instability occurs at $R_{\rm V}=5.5$. However, perhaps there are objects in the interstellar medium with $R_{\rm V}>5$ for $n_0<10^4$ {\cmc} or with $R_{\rm V}<4$ for $n_0>10^5$ {\cmc}. In the case of $R_{\rm V}=5.5$, instability criterion \ref{eq:2} is satisfied when there are high intensities of the FUV fields $1.3\times10^4<G_0<10^6$ and high gas densities $1.8\times10^5<n_0<10^6$ {\cmc} at temperatures $3.7\times10^2 <T_0 <2.5\times10^3$ K. More detailed distributions of $1/t_{\rm inst}$, $T_0$ and $G_0$ depending on $n_0$ are shown in Fig. \ref{fig:3}. We obtain the following intervals: characteristic perturbation growth time $3.1\times10^2<t_{\rm inst}<10^5$ yr (here an average value of the upper limit is given, although theoretically one can have $t_{\rm inst} \rightarrow \infty$), cooling time $12<t_0 <34$ yr and distance covering the locations of primary and secondary waves $2.1\times10^2<L<10^5$ au (also length-scale $4<l_0<32$ au and $24<L/l_0<10^4$). These parameters are shown in Fig.~\ref{fig:4} for $\tau_{\rm C} \rightarrow 0$ ($\beta \sim 0.5$). \begin{figure*} \includegraphics[width=2\columnwidth]{fig3} \caption{ Examples where the isentropic instability criterion is satisfied ($R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$, $\tau_{\rm C}\sim0$, $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$).
{(a) Growth rate $1/t_{\rm inst}$}, (b) temperature $T_0$ and (c) FUV flux $G_0$ (we use the condition $G_0/n_0>0.04$ cm$^3$, for which photoelectric heating dominates in a dense gas).} \label{fig:3} \end{figure*} \begin{figure*} \begin{minipage}{1.3\columnwidth} \center{ \includegraphics[width=\columnwidth]{fig4a} \\ (a)} \end{minipage} \begin{minipage}{0.7\columnwidth} \center{\includegraphics[width=\columnwidth]{fig4b} \\ (b)} \caption{Functions (a) $t_{\rm inst}$, $T_0$, $t_0$, $L$ and (b) $G_0$ for the optical depth $0<\tau_{\rm C}\leqslant 1$ at $R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$, $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$. The logarithmic relation between $G_0$ and $n_0$ in (b) shows the boundaries of the parameters causing instability (the boundaries correspond to the case $1/t_{\rm inst} \rightarrow 0$ and conditions \ref{eq:3} and $G_0/n_0>0.04$ cm$^3$).} \label{fig:4} \end{minipage} \end{figure*} Thus, we find that acoustic thermal instability in the surface layers of PDRs can occur when the gas density and intensity of the incident FUV field are high. Next, we explore how the range of parameters causing instability changes if the opacity of the cooling lines and variations of element abundances are considered. \subsubsection{Opacity of the fine-structure lines} \label{sec:opacity} Strictly speaking, to consider the opacity effect consistently we should use the distribution of gas parameters in the atomic zone. As we know, the value $\tau_{21}$ depends on the depth $z$ of the plane-parallel layer (where $0<z<Z$, i.e. $z$ varies from the ionization (I) front to the dissociation (D) front), the level populations of the coolant element (which can be expressed through the density $n$ in all levels) and the temperature $T$ \citep{TielensHollenbach1985}.
The approximate structure of the \ion{H}{i} zone (the thickness $Z$, distributions of $n(z)$ and $T(z)$) is calculated by solving the problem of I-D front propagation depending on the incident FUV field, dust properties and abundances of elements. This is a complex problem even for one particular object with one set of parameters. For the purposes of our study, we need to consider a very wide range of PDRs, for which the structures of the atomic zones differ substantially from each other. Therefore, we simplify the estimate of the optical depth and do not calculate the \ion{H}{i} zone structure. We assume that, for any combination of $n$ and $T$, there exists a position $z$ for which the optical depth $\tau_{21}$ takes any value in a given interval (known from the studies of PDRs: \citealt{Tielens2005}). Presumably, $\tau_{21}$ can successively take all interval values independently of $n$ and $T$. This allows us to consider the optical depth as a parameter of the cooling function, with values within the allowable range. We suppose that this approach is acceptable as a first approximation for a wide variety of objects. The infrared fine-structure [\ion{C}{ii}] 158, [\ion{O}{i}] 63 and [\ion{O}{i}] 146 {\micron} lines in the atomic zone of PDRs are characterized by optical depths $\tau_{21}$ in the range 0--1 \citep{TielensHollenbach1985, Tielens2005}. When $\tau_{21}$ increases, the escape probability $\beta(\tau_{21})$ decreases. Consequently, the total cooling $\Lambda$ weakens and the heat-loss function $Q=\Gamma - \Lambda$ increases. As a result, the steady-state temperature $T_0$ rises when the density is constant \citep{TielensHollenbach1985}. This temperature behaviour can be seen in Fig.~\ref{fig:4}, where changes of all optical depths are expressed through variations of $\tau_{\rm C}$, the depth of the [\ion{C}{ii}] 158 {\micron} line.
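The qualitative effect of opacity can be reproduced with any escape-probability function having the limits quoted in Section \ref{subsec:2.2} ($\beta\to0.5$ as $\tau\to0$ and $\beta\sim1/\tau$ at large $\tau$). The Python sketch below uses a simple interpolation as an illustrative stand-in for the exact expression of de Jong et al. (1980), together with a toy thermal balance showing $T_0$ rising with opacity, as described above.

```python
import math

def beta_escape(tau):
    """Illustrative escape probability with the limits quoted in the
    text: beta -> 0.5 as tau -> 0 (semi-infinite slab) and
    beta ~ 1/tau at large tau.  A stand-in for the exact de Jong
    et al. (1980) expression, not the form used in the paper."""
    if tau < 1e-8:
        return 0.5
    return (1.0 - math.exp(-tau)) / (2.0 * tau)

# toy balance: constant heating H against cooling c * beta * T0,
# so the equilibrium temperature is T0 = H / (c * beta(tau)):
# as tau grows, beta falls, cooling weakens and T0 rises.
H, c = 1.0, 1e-3
T0_of_tau = {tau: H / (c * beta_escape(tau)) for tau in (0.0, 0.5, 1.0)}
```

The monotone rise of `T0_of_tau` with $\tau$ mirrors the behaviour of $T_0$ in Fig. 4.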
Fig.~\ref{fig:4} demonstrates that the inclusion of opacity in the cooling lines expands the range of PDR parameters causing instability. When $\tau_{\rm C}$ increases, criterion \ref{eq:2} is satisfied for a wider range of values of $G_0$, $n_0$ and $T_0$ and for higher values of the times $t_{\rm inst}$ and $t_0$. As a result, the largest depth $\tau_{\rm C}=1$ corresponds to the largest intervals of values $G_0, n_0$ and $T_0$. The lower and upper bounds of $t_{\rm inst}$, $t_0$ and $L$ correspond to the values $\tau_{\rm C} \rightarrow 0$ and $\tau_{\rm C}=1$, respectively. Within the interval $0<\tau_{21}\lesssim1$, we find the minimum and maximum values of the parameters causing instability. Thus, we obtain the total ranges: densities $4.5\times10^4<n_0<10^6$ {\cmc}, FUV fields $3\times10^3<G_0<10^6$ (we select cases for $G_0/n_0>0.04$ cm$^3$) and temperatures $360 <T_0 <10^4$ K. We also obtain the time intervals $3.1\times10^2<t_{\rm inst}<10^6$ yr, $12<t_0<2\times10^2$ yr and length-scales $2.1\times10^2<L<10^6$ au ($4<l_0<3.4\times10^2$ au, $23<L/l_0<10^4$). Consequently, the previous result (Section \ref{sec:Rv}), where isentropic instability occurs in dense PDRs and for high intensity of the radiation field, is preserved (but the lower bounds of the values $n_0$, $G_0$ and $T_0$ decrease slightly). \subsubsection{Carbon and oxygen abundances} The C and O abundances of PDRs have typical values $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$~\citep{Cardelli1996, Meyer1998}. To find the influence of $\xi_{\rm C}$ and $\xi_{\rm O}$ on the parameters causing instability, we consider variations of the abundances within the ranges used in earlier studies of PDRs, i.e. $\xi_{\rm C}=(1.4$--$3)\times10^{-4}$ and $\xi_{\rm O}=(3$--$5)\times10^{-4}$ \citep{TielensHollenbach1985, Wolfire1995}. The main results are shown in Fig.~\ref{fig:5}.
\begin{figure*} \begin{minipage}{2\columnwidth} \center{ \includegraphics[width=\columnwidth]{fig5a} \\ (a)} \end{minipage} \vfill \begin{minipage}{0.72\columnwidth} \center{\includegraphics[width=\columnwidth]{fig5b} \\ (b)} \end{minipage} \begin{minipage}{0.72\columnwidth} \center{\includegraphics[width=\columnwidth]{fig5c} \\ (c)} \end{minipage} \begin{minipage}{0.55\columnwidth} \center{\includegraphics[width=\columnwidth]{fig5text}} \end{minipage} \caption{Influence of carbon and oxygen abundances ($R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$, $\tau_{\rm C}\sim0$): (a) $t_{\rm inst}$, $T_0$ and $t_0$ for $G_0=10^5$ and $10^6$ (solid and dashed curves); (b) heating $\Gamma$ and cooling $\Lambda$ rates for $G_0=10^5$, $n_0=5\times10^5$ {\cmc}. Panel (c) shows the boundaries of parameters causing instability for $\xi_{\rm C}$, $\xi_{\rm O}$ as in panels (a) and (b) (solid curve), for the case $\tau_{\rm C}=1$ at $\xi_{\rm C}, \xi_{\rm O} (\times10^{4})$ equal to 1.4, 1.4 (short-dashed curve) and when cooling in the [\ion{O}{i}] lines is neglected ($R_{\rm V}=4$, $b_{\rm C}=4\times10^{-5}$ and $\tau_{\rm C}\sim0$: dash-dotted line).} \label{fig:5} \end{figure*} The carbon abundance influences gas cooling and heating (where $\xi_{\rm C}$ governs the electron density $n_e$). However, in a medium with high density ($n\gtrsim10^5$ {\cmc}), the cooling in the [\ion{O}{i}] 63 {\micron} line is significantly larger than that in the [\ion{C}{ii}] 158 and [\ion{O}{i}] 146 {\micron} lines \citep{TielensHollenbach1985, Burton1990}. Therefore, the contribution of carbon to the total cooling $\Lambda$ is very small. The dependence of photoelectric emission on the electron density $n_e$ is well known, i.e. a decrease of $n_e$ leads to a decrease in the total heating $\Gamma$. As a result, a reduction of $\xi_{\rm C}$ causes a decrease of the steady-state temperature $T_0$ obtained from the equation $\Gamma-\Lambda=0$.
This property is shown in Fig.~\ref{fig:5}(b), which presents a comparison of curves $\xi_{\rm C}$, $\xi_{\rm O}$ ($\times10^4$) between values 3, 5 and 1.4, 5. At the same time, the oxygen only influences the gas cooling. Therefore, a reduction of $\xi_{\rm O}$ leads to a decrease in the cooling $\Lambda$ and hence to an increase in $T_0$ (see Fig.~\ref{fig:5}(b) when $\xi_{\rm C}$, $\xi_{\rm O}$ ($\times10^4$) are equal to 3, 5 and 3, 3). We note that the influence of a general decrease of C and O abundances on $T_0$ is established by direct calculations of the $\Gamma$ and $\Lambda$ functions. The variations of $\xi_{\rm C}$ and $\xi_{\rm O}$ change the range of parameters causing instability (see Fig.~\ref{fig:5}). However, even if we take into account the opacity of the cooling lines, the orders of the values $n_0, G_0, T_0$, $t_{\rm inst}, t_0$, and $L$ are comparable with the corresponding orders for typical abundances $\xi_{\rm C}$ and $\xi_{\rm O}$ (see Section \ref{sec:opacity}). Thus, for $\xi_{\rm C}=(1.4$--$3)\times10^{-4}$ and $\xi_{\rm O}=(3$--$5)\times10^{-4}$ at $0<\tau_{21}\lesssim1$, we obtain the following total intervals: densities $2.2\times10^4<n_0<10^6$ {\cmc}, FUV fields $1.3\times10^3<G_0<10^6$ (when $G_0/n_0>0.04$ cm$^3$) and temperatures $322<T_0<10^4$ K. We also obtain the characteristic perturbation growth time $1.7\times10^2<t_{\rm inst}<10^6$ yr, cooling time $7<t_0<4.5\times10^2$ yr and distance covering the locations of primary and secondary waves $10^2<L<10^6$ au ($3<l_0<7.5\times10^2$ au, $23<L/l_0<10^4$). The greatest change in the range of parameters causing instability is induced by a significant reduction of the oxygen abundance. We considered the limiting situation, in which the fine-structure [\ion{O}{i}] 63 and [\ion{O}{i}] 146 {\micron} lines are neglected completely (see Fig.~\ref{fig:5}(c), where $\xi_{\rm C}$, $\xi_{\rm O}$ ($\times10^4$) are equal to 1.4 and 0).
In this case, the isentropic instability criterion is satisfied for intermediate densities $6\times10^2<n_0<2.5\times10^4$ {\cmc} and for a wide range of FUV fields $20<G_0<10^6$ at temperatures $1.1\times10^2<T_0<9\times10^3$ K. Nevertheless, a thermal balance model of the \ion{H}{i} zone in PDRs in which the oxygen fine-structure lines are ignored at intermediate density ($n>10^2$ {\cmc}) requires theoretical and observational justification. We could neglect the [\ion{O}{i}] 63 {\micron} emission compared with the [\ion{C}{ii}] 158 {\micron} line only for low-density PDRs, i.e. diffuse gas with $n<10^2$ {\cmc} \citep{Hollenbach1991}. However, for such low densities the isentropic instability criterion \ref{eq:2} is not satisfied. Diffuse clouds usually have another model of chemical and energy balance \citep{Wolfire1995, Wolfire2003}, which differs from the case of dense clouds. Moreover, thermal instability may also occur in diffuse gas, but in another mode, the isobaric instability (criterion \ref{eq:1}). \subsubsection{General results} We found the conditions for which the isentropic instability criterion \ref{eq:2} is satisfied in the surface layer of a PDR. We used a model of the energy balance with photoelectric heating from interstellar grains and cooling through the fine-structure [\ion{C}{ii}] 158, [\ion{O}{i}] 63 and [\ion{O}{i}] 146 {\micron} lines. For a wide range of parameters, which characterize the generalized heat-loss function $Q = \Gamma-\Lambda$, we obtained the following results. \begin{itemize} \item Isentropic thermal instability can occur if the gas density and intensity of the incident FUV field are high. We estimated ranges of the FUV field, density, and temperature when the opacity of the cooling lines ($0<\tau_{21}\lesssim1$) is taken into account and the C and O abundances are typical: $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$.
These intervals are \begin{equation} \begin{split} & 3\times10^3<G_0<10^6 \, , 4.5\times10^4<n_0<10^6 \textrm{{\cmc}} \, , \\ & 360<T<10^4 \, \textrm{K} \,. \label{eq:4} \end{split} \end{equation} We also obtained the ranges of the characteristic perturbation growth time $3.1\times10^2<t_{\rm inst}<10^6$ yr, the cooling time $12<t_0<2\times10^2$ yr and the distance that characterizes secondary wave formation $2.1\times10^2<L<10^6$ au (for an initial perturbation wavelength $4<\lambda<3.4\times10^2$ au, where $\lambda=l_0$). \item Variations of the carbon and oxygen abundances $\xi_{\rm C}=(1.4$--$3)\times10^{-4}$, $\xi_{\rm O}=(3$--$5)\times10^{-4}$ slightly change the ranges of the parameters causing instability, but the resulting ranges agree to within an order of magnitude with those for typical abundances ($\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$). If we take into account the opacity of the cooling lines, then we obtain the intervals \begin{equation} \begin{split} &1.3\times10^3<G_0<10^6 \, , 2.2\times10^4<n_0<10^6 \textrm{{\cmc}} \, , \\ & 322<T<10^4 \, \textrm{K} \,. \label{eq:5} \end{split} \end{equation} \item A significant decrease of the oxygen contribution to the gas cooling has the greatest impact on the change of the parameters causing isentropic instability. \end{itemize} \section{Examples of observed PDR{\small s} where instability can occur} \label{sec:4} {The assumption that turbulent motions in the atomic interstellar medium can be caused by thermal instability was discussed earlier by \citet{KritsukNorman2002, Brandenburg2007, IwasakiInutsuka2014}. These articles studied the isobaric mode of thermal instability and considered the heat-loss rate $Q$ for a diffuse atomic gas \citep{Wolfire1995}.
However, as we shall see below, turbulent motions in a dense PDR can also be caused by the isentropic type of instability.} The results obtained in the previous sections can be used to find out whether instability of travelling waves arises in some observed PDRs. Let us consider examples of these PDRs and discuss the corresponding estimates of the main parameters causing instability. The main parameters are the FUV field $G_0$, steady-state density of the atomic gas $n_0$ and abundances $\xi_{\rm C}$ and $\xi_{\rm O}$. The gas temperature $T_0$ is determined from the equation of energy balance and depends on the optical depths of cooling lines. The observed PDRs with parameters satisfying the ranges \ref{eq:4} and \ref{eq:5} are given in Table~\ref{tab:2}. \begin{table*} \caption{Examples of the observed PDRs.} \label{tab:2} \begin{tabular}{ccccccccc} \hline Object & PDR& $G_0$ & n & T & $\xi_{\rm C}$ & $\xi_{\rm O}$ & R & D \\ & & & {\cmc} & K & $\times10^4$ & $\times10^4$ & pc & pc \\ \hline 1 & Orion Bar& [1-4](4) & [0.5-1](5) & [0.5-1](3) & 3 & 5, 4$^{a}$ & 0.02$^{b}$ & 0.3\\ 2 & NGC 2023 S & [3-6](3) & [0.5-2](5) & [0.3-1](3) & 1.4 & 3.2 & 0.004$^{c}$ & 0.04\\ 3 & NGC 7023 NW & [2.6-7.7](3) & [0.5-2](5) & [3-5](2) & 1.6 & 3.2 & 0.02$^{d}$ & 0.1\\ 4 & Mon R2 & [0.5-1](5) & [0.4-4](5) & [3-6](2) & 1.6 & 3.2& 0.001$^e$ & -\\ 5 & Carina N$^f$ & [0.7-1.6](4) & [2-10](5) & [3-6](2) & 1.6 & 3.2 & - & - \\ \hline \end{tabular} \begin{tabular}{l} {\it Notes}. \\ Numbers in parentheses: [1-4](4) corresponds to the interval $10^4$ -- $4\times10^4$. \\ The last two columns are approximate sizes of the PDR atomic layers, where R and D are sizes in the radial and perpendicular directions. \\ \textbf{References}. Objects: \\ \textbf{1}. \citealt{Tauber1994, YoungOwl2000}; $^a$ \citealt{Pellegrini2009}; $^b$ \citealt{Bernard-Salas2012}. \\ \textbf{2}. $^c$ \citealt{Sheffer2011, Sandell2015}. \textbf{3}. 
\citealt{Joblin2010}; $^d$ \citealt{Pilleri2012, Okada2013}.\\ \textbf{4}. \citealt{Berne2009}; $^e$ \citealt{Pilleri2014, Okada2013}. \textbf{5}. \citealt{Brooks2003, Kramer2008}; \\ $^f$ according to \citealt{Okada2013} we assume the absence of the [\ion{O}{i}] 146 {\micron} emission and $b_{\rm C}=0$ at $R_{\rm V}=5.5$. \end{tabular} \end{table*} \begin{figure*} \begin{minipage}{2\columnwidth} \center{ \includegraphics[width=\columnwidth]{fig6a} \\ (a)} \end{minipage} \hfill \begin{minipage}{2\columnwidth} \center{\includegraphics[width=\columnwidth]{fig6b} \\ (b)} \end{minipage} \hfill \begin{minipage}{2\columnwidth} \center{\includegraphics[width=0.9\columnwidth]{fig6text}} \end{minipage} \caption{Functions $t_{\rm inst}$, $T_0$ and $L$ versus $\tau_{\rm C}$ on the surfaces of PDRs with parameters similar to the values from Table~\ref{tab:2}: (a) NGC 7023 NW, NGC 2023 S and the Orion Bar for $R_{\rm V}=5.5$, $b_{\rm C}=3\times10^{-5}$; (b) Carina N and Mon R2 for $R_{\rm V}=5.5$. Note the numbers in parentheses: 4(3)=$4\times10^{3}$.} \label{fig:6} \end{figure*} For each of these PDRs, Fig.~\ref{fig:6} shows the functions $T_0$, $t_{\rm inst}$ and $L$ in the region where criterion \ref{eq:2} is satisfied. We found typical values of the gas temperature $T_0\sim3\times10^2$--$2\times10^3$ K, characteristic perturbation growth time $t_{\rm inst}\sim10^3$--$10^4$ yr and distance characterizing the appearance of secondary waves $L\sim2\times10^2$--$10^4$ au $=10^{-3}$--$5\times10^{-2}$ pc {at the wavelength $\lambda \sim 6\times10^{-5}$--$2\times10^{-3}$ pc}. We see that the average scale $L$ is less than (or of the same order as) the atomic layer sizes $R$ and $D$ in Table~\ref{tab:2}. Since the amplitude of the waves for a propagation time $t\sim t_{\rm inst}$ is close to the saturation amplitude \citep{Krasnobaev2016}, we can expect a significant influence of autowaves on the velocity dispersion if $R\gtrsim L$ or $D \gtrsim L$ and $t\gtrsim t_{\rm inst}$.
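As a quick cross-check of the quoted length-scales, the au-to-pc conversion and the comparison of $L$ with the layer sizes from Table~\ref{tab:2} can be reproduced in a few lines (a sketch; only the standard conversion factor and the numbers quoted above are used, and the variable names are ours):

```python
# Sanity check of the quoted length-scales (a sketch; values from the text).
AU_PER_PC = 206265.0                 # 1 pc = 206265 au

L_min_au, L_max_au = 2e2, 1e4        # range of the secondary-wave distance L
L_min_pc = L_min_au / AU_PER_PC
L_max_pc = L_max_au / AU_PER_PC

# The text quotes L ~ 1e-3 -- 5e-2 pc; agreement to within a few per cent.
assert abs(L_min_pc - 1e-3) / 1e-3 < 0.05
assert abs(L_max_pc - 5e-2) / 5e-2 < 0.05

# Orion Bar atomic-layer sizes from Table 2: R = 0.02 pc, D = 0.3 pc.
# The lower end of the L range fits inside both, so R >~ L is attainable.
R, D = 0.02, 0.3
assert L_min_pc < R < D
```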
Next we consider in detail the influence of isentropic instability on the velocity field \textbf{v}, density $\rho$ and temperature $T$. As was shown in Section \ref{sec:unstable}, acoustic instability is characterized by the presence of multiple shock waves in the gas. The corresponding relative variations ${ u/u_0}$, $\rho/\rho_0$ and $T/T_0$ behind the shocks have amplitudes in the range 0.1--0.5, where the maximum value corresponds to the saturation amplitude \citep{Krasnobaev2016}. Consequently, the turbulent velocity $u_{\rm turb}$, which is of the same order as the gas velocity behind the shock wave, is of the order of several \kms (see below for details). Due to collisions of the shock waves with sharp boundaries, such as ionization and dissociation fronts, the turbulent velocity $u_{\rm turb}$ can be higher \citep{Chernyi1988}. These velocity variations are quite accessible to observations \citep{MieschBally1994, Yoshida2010}. Multiple shock waves can be observed morphologically as filamentary or reticulate structures, not only in an \ion{H}{i} zone but also in ionized gas (due to the penetration of perturbations into an \ion{H}{ii} region). If acoustic instability occurs, then the density and temperature in the filamentary structures are higher than those in the surrounding gas. Such structures are observed, for example, in RCW 120 \citep{Zavagno2007, Deharveng2009}. They could form on a time-scale shorter than the age of RCW 120. Note that the density and temperature distributions in RCW 120 are significantly inhomogeneous. Using RCW 120 estimates from the literature \citep{Zavagno2007, Torii2015}, we find that in dense clouds $n_0\sim10^5$ {\cmc} and $T_0\sim550$ K, whereas in a less dense medium $n_0\sim10^4$ {\cmc} and $T_0\sim140$ K. According to the PDR model of RCW 120 by \citet{Rodon2015}, the density is $n_0\sim2\times10^4$ {\cmc} and the FUV flux is $G_0\sim6\times10^2$.
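These literature values can be compared directly with the instability intervals \ref{eq:4} and \ref{eq:5} (a sketch; \texttt{in\_range} is an illustrative helper, and all numbers are taken from the text):

```python
# Compare literature estimates for RCW 120 with the instability ranges
# (a sketch; the numbers are quoted from the text, not computed from the model).

def in_range(val, lo, hi):
    return lo < val < hi

# Range (eq:5): opacity of the cooling lines included, varied C/O abundances.
G0_lo, G0_hi = 1.3e3, 1e6
n0_lo, n0_hi = 2.2e4, 1e6

# The Rodon et al. PDR model of RCW 120 sits just outside range (eq:5)...
assert not in_range(6e2, G0_lo, G0_hi)   # G0 ~ 6e2
assert not in_range(2e4, n0_lo, n0_hi)   # n0 ~ 2e4 cm^-3

# ...while the nearby values assumed in the text (n0 = 7e4 cm^-3, G0 = 3e3)
# fall inside it, so RCW 120 lies close to the unstable region.
assert in_range(3e3, G0_lo, G0_hi)
assert in_range(7e4, n0_lo, n0_hi)
```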
The RCW 120 parameters differ only slightly from the parameters causing isentropic instability (see ranges \ref{eq:4} and \ref{eq:5}). For example, if we assume $n_0=7\times10^4$ {\cmc} and $G_0=3\times10^3$ then, using our energy balance model (Section \ref{sec:2}) and criterion \ref{eq:2}, we find $T_0=5.2\times10^2$ K, $t_{\rm inst}\sim7\times10^3$ yr and $L\sim4\times10^3$ au $=2\times10^{-2}$ pc for $\tau_{\rm C}\sim1$, $\xi_{\rm C}=1.4\times10^{-4}$ and $\xi_{\rm O}=3.2\times10^{-4}$. The characteristic perturbation growth time $t_{\rm inst}$ is less than the estimated age of the \ion{H}{ii} region, which is greater than $4\times10^5$ yr, and the length-scale $L$ is less than the thickness of the surface layer $R\sim5\times10^{-2}$ pc \citep{Zavagno2007, Torii2015}. The presence of multiple shocks (autowaves) can also manifest itself as significant changes of the gas parameters (density, velocity and temperature) on very small spatial scales, of the same order as the thicknesses of the corresponding shock fronts $d_{\rm S} \sim 10^{15}/n$ cm \citep{Landau1987}, where $d_{\rm S} < 10^{13}$ cm $\sim 3 \times 10^{-6}$ pc at $n>10^2$ \cmc. The existence of similar fluctuations is indicated by the analysis of turbulent velocities in the Orion Nebula \citep{Ferland2012}. Observations of the atomic zone of this PDR give $u^{\rm Orion}_{\rm turb} \approx 5$ \kms \, at $T\approx10^3$ K. For such a gas temperature, the adiabatic sound speed is $a^{\rm Orion}_0\approx 3.7$ \kms (the mean mass per particle is equal to $1.3$). Since the turbulent velocity $u_{\rm turb}$ is of the same order as the gas velocity $u$ (moreover, it can be estimated as $2 u$, \citealt{Ferland2012}), its magnitude corresponds to $u_{\rm turb} \sim 2 u \sim 2 \times0.5 u_0=0.8 a_0$. Therefore, in the case of the possible growth of isentropic perturbations, we can obtain the turbulent velocity in this PDR as $u_{\rm turb} \sim 3$ \kms.
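The sound-speed and turbulent-velocity estimates above can be reproduced numerically (a sketch in cgs units; since the convention behind the quoted mean mass per particle is not fully explicit, we bracket it between $1.0$ and $1.3$, which brackets the quoted $a_0\approx3.7$ \kms):

```python
import math

# Adiabatic sound speed and turbulent-velocity estimate for the Orion PDR
# (a sketch; cgs constants, and mu is an assumption bracketed in [1.0, 1.3]).
K_B = 1.380649e-16   # Boltzmann constant, erg / K
M_H = 1.6726e-24     # hydrogen mass, g
GAMMA = 5.0 / 3.0
T = 1e3              # K, atomic zone of the Orion PDR

def sound_speed_kms(mu):
    # adiabatic sound speed a = sqrt(gamma * k_B * T / (mu * m_H)), in km/s
    return math.sqrt(GAMMA * K_B * T / (mu * M_H)) / 1e5

a_low, a_high = sound_speed_kms(1.3), sound_speed_kms(1.0)
# The text quotes a_0 ~ 3.7 km/s; both bracketing values are close to it.
assert 3.2 < a_low < a_high < 3.8

# u_turb ~ 2 u ~ 0.8 a_0, giving roughly 3 km/s, as quoted in the text.
u_turb = 0.8 * a_high
assert 2.5 < u_turb < 3.1
```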
Although the estimated $u_{\rm turb}$ is slightly less than the observed velocity $u^{\rm Orion}_{\rm turb}$, the agreement between these values is satisfactory. The analysis of the observational data in this section was carried out under the assumption that thermal conductivity can be neglected. This assumption is valid if $t_{\rm inst} \ll t_{\rm h}$, where the conductive time is $t_{\rm h}=\lambda^2 n k_ {\rm B} /[(\gamma-1)\kappa]$ and the coefficient of thermal conductivity for atomic gas \citep{Lang1974} is $\kappa={5 k_ {\rm B}}/{2 m_{\rm H}} \, 5.7 \times 10^{-5} \,\sqrt{T}$ (erg s$^{-1}$ K$^{-1}$ cm$^{-1}$). For typical PDR parameters such as $n \sim 10^5$ cm$^{-3}$ and $T \sim 10^3$ K at the average time $t_{\rm inst}\sim10^3$ yr (for example the Orion Bar: Table~\ref{tab:2}, Fig.~\ref{fig:6}), we find that $t_{\rm inst}<t_{\rm h}$ is satisfied when the wavelength $\lambda>\lambda_{\rm cr}=10^{-6}$ pc (where $\lambda_{\rm cr}^2 n k_ {\rm B} /[(\gamma-1)\kappa]=t_{\rm inst}$). Since the wavelengths of the adiabatic perturbations for the PDRs studied above are $\lambda \sim 6\times10^{-5}$--$2 \times10^{-3}$ pc, the influence of thermal conductivity is insignificant under such conditions. Notice that the critical wavelength $\lambda_{\rm cr}$ is similar to the length from Field's theory \citep{Field1965}, i.e. $\lambda_{\rm F}=2\pi/\sqrt{{\rho_0}(Q_T+{\rho_0 Q_\rho}/{(\gamma-1)T} )/{\kappa}}$. On the other hand, in a dense PDR, perturbations with a very small scale of the order of the shock-front thickness $d_{\rm S}$ will be damped under the influence of conductivity. We emphasize some limitations and uncertainties that appear in the development of our model. A consistent treatment of the opacity effect assumes that there are distributions of $n (z)$ and $T (z)$ in the atomic zone that correspond to one set of values $\tau_{12} (z, n, T)$ for the cooling lines.
The resulting values obtained by this approach ($n_0$, $T_0$ and $\tau_{12}$ throughout the atomic zone) are contained among the values found in our rough approximation (see Section \ref{sec:opacity}). In other words, a consistent treatment can yield a smaller number of resulting values (possibly none at all) satisfying the instability criteria compared with our approximation (Fig.~\ref{fig:6}). The results of the rough approximation give a larger number of combinations of parameters that characterize the medium in a state of thermal instability than is the case for real PDRs. However, our approach allows us to estimate the order and the approximate values of these parameters. Another significant limitation is the neglect of large-scale motions in PDRs. If we take these motions into account, then the energy balance and consequently the gas temperature and density can change. We cannot completely exclude the influence of the magnetic field, radiation pressure and cosmic rays \citep{Pellegrini2009} on the growth and structure of perturbations. However, detailed information about these processes is currently unavailable for most PDRs. \section{Conclusions} The general aim of this work was to determine whether isentropic thermal instability can develop in the atomic surface layers of PDRs. Our analysis confirms that it can. \begin{itemize} \item We proposed a model of energy balance on the surface of a PDR, in which gas is heated by photoelectron emission from dust grains and cooled through the fine-structure excitation of ions and atoms by atomic hydrogen impact. We have taken into account the intensity of the far-ultraviolet radiation penetrating into the PDR, the optical depths of the fine-structure lines and variations in the abundances of heavy elements. \item We found that, for typical abundances of the elements, the medium is thermally unstable for a dense PDR ($n_0>2\times10^4$ {\cmc}) and a high intensity of the far-ultraviolet field ($G_0>10^3$).
When the opacity of the cooling lines is taken into account, the intervals of the key parameters ($G_0$, $n_0$ and $T_0$) causing instability are expanded. We also found that the instability criterion depends significantly on the relative abundances of carbon and oxygen. \item We gave examples of observed dense PDRs that are affected by a high-intensity FUV flux and in which isentropic instability can occur. We found the characteristic perturbation growth time $t_{\rm inst}\sim10^3$--$10^4$ yr and the distance covering the locations of the primary and secondary waves $L\sim10^{-3}$--$5\times10^{-2}$ pc. For objects older than $t_{\rm inst}$ and with the scale of the atomic zone greater than $L$, we described the features of the instability (for example, RCW 120). These features include the presence of multiple shock waves and filamentary structures with higher density and temperature than the surrounding medium. \end{itemize} \bibliographystyle{mnras}
\section{Introduction} In quantum mechanics the time evolution of two noninteracting subsystems can be described by an operator $e^{itH} \otimes e^{itH'}$, where $H$ and $H'$ are the Hamiltonians of the subsystems (see e.g. chapters 2.2 and 3.1 in \cite{BP}). In applications, the unitary operator $e^{itH}$, which is \emph{a priori} complicated, is replaced by a random unitary matrix to make the model tractable. This powerful idea goes back to E. Wigner. Here by an $n \times n$ random unitary matrix we mean a matrix drawn according to the Haar measure on the unitary group $U(n)$. From this point of view it seems natural to study the asymptotic local properties of the spectrum of the tensor product $A_m \otimes B_n$ of two independent $m \times m$ and $n \times n$ random unitary matrices, to which this short note is devoted. The note, in a sense, continues the investigations commenced in \cite{T}. Some preliminaries are presented in the rest of this section, and the main result is stated. The proofs are provided in the next section. The last section is devoted to some concluding remarks concerning the tensor product of more than two matrices. \subsection{Background and notation} For a simple point process $\tau$ on $\mathbb{R}$ we denote its \emph{$k$-th correlation function}, when it exists, by $\rho_\tau^{(k)}$ (for the definitions see e.g. \cite{HKPV}). Let us introduce three point processes $\Pi$, $\Sigma$, and $\Xi_n$. By $\Pi$ we shall denote \emph{the Poisson point process} on $\mathbb{R}$ for which $\rho_{\Pi}^{(k)} \equiv 1$ for all $k$. By $\Sigma$ we shall denote \emph{the sine point process} on $\mathbb{R}$ which has the correlation functions \begin{equation} \label{eq:defsinecorr} \rho_{\Sigma}^{(k)}(x_1, \ldots, x_k) = \det\left[ Q(x_i,x_j) \right]_{i,j = 1}^k, \end{equation} where \emph{the sine kernel} is $Q(x,y) = q(x-y)$ with \begin{equation} \label{eq:defsinekernel} q(u) = \frac{\sin(\pi u)}{\pi u}.
\end{equation} Given an $n \times n$ random unitary matrix with eigenvalues $e^{i\xi_1}, \ldots, e^{i\xi_n}$, where $\xi_i \in [0,2\pi)$ are \emph{eigenphases}, we define the point process $\Xi_n = \{\xi_1, \ldots, \xi_n\}$. It is well known that this process is determinantal with the kernel $S_n(x,y) = s_n(x-y)$, where \begin{equation} \label{eq:defkerS_n} s_n(u) = \frac{1}{2\pi}\frac{\sin\left(\frac{nu}{2}\right)}{\sin\left( \frac{u}{2} \right)}, \end{equation} i.e., \begin{equation} \label{eq:defCUEcorr} \rho_{\Xi_n}^{(k)}(x_1, \ldots, x_k) = \det\left[ S_n(x_i, x_j) \right]_{i,j = 1}^k. \end{equation} Since $\frac{2\pi}{n}s_n\left( \frac{2\pi}{n}u \right) \xrightarrow[n \to \infty]{} q(u)$, for large $n$ the process $\frac{n}{2\pi}(\Xi_n-\pi)$ of the rescaled eigenphases of the $n \times n$ random unitary matrix locally behaves like the sine process $\Sigma$. By the \emph{superposition} of two simple point processes $\Psi = \{\psi_1, \ldots, \psi_M\}$, $\Phi = \{\phi_1, \ldots, \phi_N\}$, $M, N \leq \infty$ we mean the union $\Psi \cup \Phi = \{\psi_1, \ldots, \psi_M, \phi_1, \ldots, \phi_N\}$. \subsection{Results} Given two independent $m \times m$ and $n \times n$ random unitary matrices $A$ and $A'$, we get two independent point processes of their eigenphases, $\Xi_m = \{\xi_1, \ldots, \xi_m\}$ and $\Xi_n' = \{\xi'_1, \ldots, \xi'_n\}$ respectively. We define the point process $\Xi_m \otimes \Xi_n'$ of the eigenphases of the matrix $A \otimes A'$ as \[ \Xi_m \otimes \Xi_n' = \{\xi_i + \xi'_j \ \textrm{mod} 2\pi, \ i = 1,\ldots, m, \ j = 1, \ldots, n \}.\] It has recently been shown \cite[Theorem 1]{T} that the process $\frac{n^2}{2\pi}(\Xi_n \otimes \Xi_n')$ behaves locally as the Poisson point process on $\mathbb{R}_+$. We refine this result and investigate what happens when $n$ becomes large with $m$ fixed, or when both $m$ and $n$ become large but not necessarily $m = n$.
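The convergence of the rescaled kernel $\frac{2\pi}{n}s_n\left(\frac{2\pi}{n}u\right)$ to the sine kernel $q(u)$, which underlies the local sine-process behaviour, is easy to check numerically (a sketch; the function names are ours):

```python
import math

# Numerical check that the rescaled CUE kernel converges to the sine kernel:
# (2*pi/n) * s_n((2*pi/n) * u)  ->  q(u) = sin(pi u)/(pi u)  as n -> infinity.

def s_n(u, n):
    # s_n(u) = (1 / 2pi) * sin(n u / 2) / sin(u / 2)
    return math.sin(n * u / 2) / (2 * math.pi * math.sin(u / 2))

def q(u):
    # the sine kernel, with the removable singularity at u = 0 filled in
    return math.sin(math.pi * u) / (math.pi * u) if u != 0 else 1.0

n = 10**5
for u in (0.5, 1.0, 2.5):
    rescaled = (2 * math.pi / n) * s_n(2 * math.pi * u / n, n)
    assert abs(rescaled - q(u)) < 1e-6
```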
\begin{thm*} \label{thm:main} Let $\Xi_m$ and $\Xi_n'$ be point processes of eigenphases of two independent $m \times m$ and $n \times n$ random unitary matrices. Let $\Sigma_1, \ldots, \Sigma_m$ be independent sine processes and let $\Pi$ be a Poisson process on $\mathbb{R}$. Then for each $k \leq n$ the $k$-th correlation function of the process $\Xi_m \otimes \Xi_n'$ exists and \begin{enumerate}[(a)] \item\label{thm(a)} $\rho_{\frac{mn}{2\pi}(\Xi_m \otimes \Xi_n' - \pi)}^{(k)} \xrightarrow[n \to \infty]{} \rho_{m\Sigma_1 \cup \ldots \cup m\Sigma_m}^{(k)}$, \item\label{thm(b)} $\rho_{\frac{mn}{2\pi}(\Xi_m \otimes \Xi_n' - \pi)}^{(k)} \xrightarrow[m,n \to \infty]{} \rho_{\Pi}^{(k)}$, \end{enumerate} uniformly on all compact sets in $\mathbb{R}^k$. \end{thm*} \begin{remark}[Weak convergence]\label{rem:weakconv} According to \cite{HKPV}, by a point process on $\mathbb{R}$ we mean a random variable with values in the metric space $\mathcal{M}(\mathbb{R})$ of $\sigma$-finite Borel measures on $\mathbb{R}$ (counting measures correspond to locally finite subsets of $\mathbb{R}$) endowed with the topology generated by the functions $\mu \mapsto \int f\mathrm{d} \mu$ for continuous, compactly supported $f$. We say that a sequence of point processes $(\tau_n)$ converges \emph{in distribution} to a point process $\tau$ if the law $\nu_n$ of $\tau_n$ converges weakly to that of $\tau$, say $\nu$, in the space $\mathcal{M}_1(\mathcal{M}(\mathbb{R}))$ of probability measures on $\mathcal{M}(\mathbb{R})$, i.e. $\int f \mathrm{d} \nu_n \to \int f \mathrm{d} \nu$ for any bounded continuous function on $\mathcal{M}(\mathbb{R})$. Clearly, these integrals can be expressed using correlation functions, hence the theorem implies the convergence in distribution of the considered point processes. \end{remark} \begin{remark}[Heuristic behind (\ref{thm(a)})]\label{rem:heuristic} In view of the mentioned theorem from \cite{T}, result (\ref{thm(b)}) should not be surprising.
Neither is (\ref{thm(a)}): in the simplest case $m = 2$ we have \begin{align*} \Xi_2 \otimes \Xi_n' = &\{ \xi_1 + \xi'_1 \ \textrm{mod} 2\pi, \ldots, \xi_1 + \xi'_n \ \textrm{mod} 2\pi \} \\ &\cup \{ \xi_2 + \xi'_1 \ \textrm{mod} 2\pi, \ldots, \xi_2 + \xi'_n \ \textrm{mod} 2\pi \}. \end{align*} After shifting and rescaling we end up with two families of the rescaled eigenphases of an $n \times n$ random unitary matrix which differ roughly by a large shift $\frac{n}{2\pi}(\xi_1 - \xi_2)$ which is independent of the matrix. That makes the families independent, and in the limit, according to $\rho_{\frac{n}{2\pi}(\Xi_n - \pi)}^{(k)}\xrightarrow[n\to\infty]{}\rho_{\Sigma}^{(k)}$, they look like sine processes. \qedsymbol \end{remark} \begin{remark}[Superposition of many sine processes becomes a Poisson point process]\label{rem:superposition} Notice that for any independent copies $\Phi_1, \ldots, \Phi_m$ of a point process $\Phi$ we have \[ \rho_{\Phi_1\cup\ldots\cup\Phi_m}^{(k)}(x_1, \ldots, x_k) = \sum_{p=1}^{m \wedge k} \sum_{\pi \in \mathfrak{S}(k,p)} \frac{m!}{(m-p)!} \prod_{j=1}^p \rho_\Phi^{(\sharp \pi_j)} ((x_i)_{i \in \pi_j}),\] where $\mathfrak{S}(k, p)$ is the collection of all partitions into $p$ nonempty pairwise disjoint subsets of the set $\{1, \ldots, k\}$. By this we mean that if $\pi$ is such a partition then $\pi = \{\pi_1, \ldots, \pi_p\}$, where $\pi_q = \{\pi(q,1),\ldots, \pi(q,\sharp \pi_q)\}$ is the $q$-th block of the partition $\pi$. Along with the fact that if we rescale, $\rho_{\lambda \Phi}^{(k)}(x)$ becomes $\frac{1}{\lambda^k}\rho_\Phi^{(k)}\left(\frac{1}{\lambda}x\right)$, the previous observation yields \begin{equation} \label{eq:remsuperposofsines} \rho_{m\Sigma_1\cup\ldots\cup m \Sigma_m}^{(k)}(x) = \sum_{p=1}^{m \wedge k} \sum_{\pi \in \mathfrak{S}(k,p)} \frac{1}{m^k}\frac{m!}{(m-p)!} \prod_{j=1}^p \rho_\Sigma^{(\sharp \pi_j)} \left(\frac{1}{m}(x_i)_{i \in \pi_j}\right).
\end{equation} When $m$ goes to infinity, the factor $\frac{1}{m^k}\frac{m!}{(m-p)!}$ vanishes for $p<k$ and tends to $1$ for $p=k$, i.e. for the partition into singletons; we thus get \[ \lim_{m \to \infty} \rho_{m\Sigma_1\cup\ldots\cup m \Sigma_m}^{(k)}(x) = \lim_{m \to \infty} \prod_{j=1}^k \rho_\Sigma^{(1)} \left(\frac{x_j}{m}\right) = 1 = \rho_\Pi^{(k)}.\] This recovers a special case of a quite expected phenomenon put forward in \cite{CD}. Namely, the authors say ``[...] a Poisson process can be viewed as an infinite superposition of determinantal or permanental point processes'' (see Theorem 4 therein and the two preceding paragraphs). Combined with part (\ref{thm(a)}) of the theorem, this implies \[ \lim_{m\to\infty}\lim_{n\to\infty}\rho_{\frac{mn}{2\pi}(\Xi_m \otimes \Xi_n' - \pi)}^{(k)} = 1.\] Note that in the second part of the theorem we establish a stronger statement: letting the dimensions of the two independent random unitary matrices tend to infinity destroys all the correlations in their tensor product.\qedsymbol \end{remark} \section{Proofs} For the sake of convenience, let us recall a few basic facts which shall be frequently used. Note the following easy estimate (for the definition see \eqref{eq:defkerS_n}) \begin{equation} \label{eq:easyboundS_n} \sup_{x \in \mathbb{R}}\left|\frac{2\pi}{n}s_n(x)\right| = 1. \end{equation} Combined with Hadamard's inequality (see e.g. (3.4.6) in \cite{AGZ}), it allows us to bound the correlation functions, \begin{equation} \label{eq:corrbound} \sup_{x \in \mathbb{R}^k} \rho^{(k)}_{\Xi_n}(x) \leq k^{k/2}\| s_n \|_\infty^k = \frac{k^{k/2}}{(2\pi)^k}n^k. \end{equation} \subsection{Proof of Theorem (\ref{thm(a)})} Let $\Theta_{m,n} = \frac{mn}{2\pi}(\Xi_m \otimes \Xi_n' - \pi)$. Fix a natural number $k$. Since we will let $n$ go to infinity, we may assume that $k \leq n$.
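Before turning to the proof itself, the superposition formula \eqref{eq:remsuperposofsines} admits a quick numerical sanity check: for $k=2$ and $m\geq2$ it reduces to $\rho^{(2)}(x_1,x_2)=1-q\left((x_1-x_2)/m\right)^2/m$, which tends to the Poisson value $1$ as $m\to\infty$ (a sketch; \texttt{rho2} and \texttt{q} are our names):

```python
import math

# Check of eq. (remsuperposofsines) for k = 2 (a sketch): the second
# correlation function of m superposed, m-times rescaled sine processes is
#   rho2(x1, x2) = (m-1)/m + (1/m)*(1 - q(dx/m)^2) = 1 - q(dx/m)^2 / m,
# which tends to the Poisson value 1 as m -> infinity.

def q(u):
    # the sine kernel q(u) = sin(pi u)/(pi u), with q(0) = 1
    return math.sin(math.pi * u) / (math.pi * u) if u != 0 else 1.0

def rho2(dx, m):
    p1 = (1.0 / m) * (1.0 - q(dx / m) ** 2)   # partition {{1, 2}}
    p2 = (m - 1.0) / m if m >= 2 else 0.0     # partition {{1}, {2}}
    return p1 + p2

assert rho2(0.0, 1) == 0.0                    # one sine process: full repulsion
assert abs(rho2(1.0, 10_000) - 1.0) < 2e-4    # many processes: Poisson-like
```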
First we show that there exist functions $\fun{\rho^{(k)}_{\Theta_{m,n}}}{\mathbb{R}^k}{[0,\infty)}$ so that for any bounded and measurable function $\fun{f}{\mathbb{R}^k}{\mathbb{R}}$ we have \[ \mathbb{E} \sum f(\theta_1, \ldots, \theta_k) = \int_{\mathbb{R}^k} f(x)\rho^{(k)}_{\Theta_{m,n}}(x) \mathrm{d} x, \] where the summation is over all ordered $k$-tuples $(\theta_1, \ldots, \theta_k)$ of distinct points of $\Theta_{m,n}$. This will prove that $\rho^{(k)}_{\Theta_{m,n}}$ are the correlation functions of $\Theta_{m,n}$. Then we will deal with the limit when $n \to \infty$. Fix $f$. Since for each $s = 1, \ldots, k$, $\theta_s = \frac{mn}{2\pi}(\xi_{i_s} + \xi'_{j_s} \ \textrm{mod} 2\pi - \pi)$ for some $i_s \in \{1,\ldots, m\}$, $j_s \in \{1,\ldots,n\}$ we can write \[ \mathbb{E} \sum f(\theta_1, \ldots, \theta_k) = \mathbb{E}\sum_{\substack{i \in \{1,\ldots,m\}^k \\ j \in \{1,\ldots,n\}^k}} f\left( \left( \frac{mn}{2\pi}(\xi_{i_s} + \xi'_{j_s} \ \textrm{mod} 2\pi - \pi) \right)_{s=1}^k \right) ,\] where the second sum is subject to $k$-tuples $i$, $j$ such that the pairs $(i_1, j_1), \ldots, (i_k,j_k)$ are pairwise distinct. This certainly happens when all the $j_s$'s are distinct. Call these choices of $i$ and $j$ \emph{good} and the rest \emph{bad}. So \[ \mathbb{E} \sum_{i,j} f = \mathbb{E} \sum_{\textrm{good } i,j} f + \mathbb{E}\sum_{\textrm{bad } i,j}f.\] First we handle the \emph{good} sum. Some of the $i_s$'s may coincide, and we will control this using partitions of the set $\{1, \ldots, k\}$ into $p \leq k \wedge m$ nonempty pairwise disjoint subsets (see Remark \ref{rem:superposition} for the notation) so that $i_s = i_t$ whenever $s$ and $t$ belong to the same block of a partition.
We have \[ \mathbb{E}\sum_{\textrm{good } i,j} f = \sum_{p=1}^{k \wedge m} \sum_{\pi \in \mathfrak{S}(k,p)} \mathbb{E} \sum_{\substack{\textrm{distinct} \\ i_{\pi(1,1)}, \ldots, i_{\pi(p,1)}}} \sum_{\substack{ \textrm{distinct} \\ j_1, \ldots, j_k}} f.\] The sums over $i$'s and $j$'s have been separated. Therefore taking advantage of independence as well as recalling definitions of the $p$-th and $k$-th correlation functions of $\Xi_m$ and $\Xi_n'$ we find \begin{align*} \mathbb{E}\sum_{\textrm{good } i,j} f = \sum_{p, \pi} \int_{[0,2\pi]^p}\int_{[0,2\pi]^k}& f\left( \left( \frac{mn}{2\pi}(x_{\pi(s)} + y_s \ \textrm{mod} 2\pi - \pi) \right)_{s=1}^k \right) \\ &\rho^{(p)}_{\Xi_m}(x_1, \ldots, x_p)\rho^{(k)}_{\Xi_n'}(y_1,\ldots, y_k) \mathrm{d} x_1 \ldots \mathrm{d} x_p \mathrm{d} y_1 \ldots \mathrm{d} y_k, \end{align*} where we note $\pi(s) = q \ \Longleftrightarrow \ s \in \pi_q$. Finally, we need to address the technicality concerning the addition $\textrm{mod} 2\pi$. Keeping in mind that we integrate over $[0,2\pi]^p$ and $[0,2\pi]^k$ we consider for $\eta \in \{0,1\}^k$ the set \begin{align*} U_\eta = \bigg\{ x \in [0,2\pi]^p, y \in [0,2\pi]^k; \ \forall s \leq k \ &x_{\pi(s)} + y_s < 2\pi \textrm{ if } \eta_s = 0, \textrm{ and } \\ &x_{\pi(s)} + y_s \geq 2\pi \textrm { if } \eta_s = 1 \bigg\}. 
\end{align*} Then on $U_\eta$ we have $x_{\pi(s)} + y_s \textrm{ mod} 2\pi = x_{\pi(s)} + y_s - 2\pi\eta_s$, thus changing the variables on $U_\eta$ so that $z_s = \frac{mn}{2\pi}(x_{\pi(s)} + y_s - 2\pi\eta_s - \pi)$ we get \begin{align*} \mathbb{E}\sum_{\textrm{good } i,j} f = \int_{\mathbb{R}^k} f(z) \Bigg( \sum_{p,\pi,\eta} \textbf{1}_{W_\eta}(z)\int_{[0,2\pi]^p}\textbf{1}_{V_\eta}(x)\rho^{(p)}_{\Xi_m}(x)\left( \frac{2\pi}{mn} \right)^k\rho^{(k)}_{\Xi_n'}(y(z,x)) \mathrm{d} x \Bigg) \mathrm{d} z, \end{align*} where $y_s(z,x) = \frac{2\pi}{mn}z_s - x_{\pi(s)} + 2\pi\eta_s + \pi$, \[ V_\eta = \left\{ x \in \mathbb{R}^p; \ \forall s \leq k \ \frac{2\pi}{mn}z_s + 2\pi\eta_s - \pi \leq x_{\pi(s)} \leq \frac{2\pi}{mn}z_s + 2\pi\eta_s + \pi \right\},\] and \[ W_\eta = \left\{ z \in \mathbb{R}^k; \ \forall s \leq k \ z_s \leq mn/2 \textrm{ if } \eta_s = 0, \textrm{ and } z_s \geq -mn/2 \textrm { if } \eta_s = 1 \right\}.\] Summarizing, we have just seen that the correlation function $\rho^{(k)}_{\Theta_{m,n}}(z)$ takes on the form \begin{equation}\label{eq:rho} \rho^{(k)}_{\Theta_{m,n}}(z) = \sum_{p,\pi,\eta} \textbf{1}_{W_\eta}(z)\int_{[0,2\pi]^p}\textbf{1}_{V_\eta}(x)\rho^{(p)}_{\Xi_m}(x)\left( \frac{2\pi}{mn} \right)^k\rho^{(k)}_{\Xi_n'}(y(z,x)) \mathrm{d} x + B_{m,n}(z), \end{equation} where the term $B_{m,n}$ corresponds to the sum over bad indices $\mathbb{E} \sum_{\textrm{bad } i, j} f$. By the same kind of reasoning we show that roughly \begin{align*} B_{m,n}(z) = \sum_{p=1}^k\sum_{q=1}^{k-1}\sum_{\substack{\pi \in \mathfrak{S}(k,p) \\ \tau \in \mathfrak{S}(k,q)}} \sum_\eta \textbf{1}_{\tilde W_\eta}(z) \left(\frac{2\pi}{mn}\right)^k \int_{[0,2\pi]^{p+q-k}} &\textbf{1}_{\tilde V_\eta}(x) \rho^{(p)}_{\Xi_m}(\tilde x(z,x)) \\ &\rho^{(q)}_{\Xi_n'}(\tilde y(z,x)) \mathrm{d} x, \end{align*} where the sums are over appropriate partitions and $\tilde W_\eta$, $\tilde V_\eta$ are suitable sets which appear after changing the variables. 
Now, by \eqref{eq:corrbound}, \begin{equation} \label{eq:prodbound} \|\rho^{(p)}_{\Xi_m}\cdot \rho^{(q)}_{\Xi_n'}\|_\infty \leq \frac{p^{p/2}q^{q/2}}{(2\pi)^{p+q}}m^pn^q, \end{equation} so \[ B_{m,n}(z) \leq C_k\frac{1}{n},\] where the constant $C_k$ depends only on $k$ (roughly, it equals the number of summands times $k^k$). Hence, when taking $n \to \infty$ we will not have to worry about $B_{m,n}$. Let us look at \eqref{eq:rho} and now compute the limit of the first term as $n \to \infty$. We observe that $\textbf{1}_{W_\eta} \to 1$ pointwise on $\mathbb{R}^k$. Moreover, $\sum_\eta \textbf{1}_{V_\eta} \to \textbf{1}_{[0,2\pi)^p}$, and $\textbf{1}_{V_\eta} \to 0$ for $\eta$ such that $\eta_s \neq \eta_t$ but $\pi(s) = \pi(t)$ for some $s \neq t$. Thus we consider only $\eta$'s such that $\eta_s = \eta_t$ whenever $\pi(s) = \pi(t)$ and then the following simple observation \begin{equation} \label{eq:simpleobserv} \frac{2\pi}{mn}s_n\left( \frac{2\pi}{mn}u + v \right) \xrightarrow[n\to\infty]{} \begin{cases} 0, & v \neq 0 \\ \frac{1}{m}q\left(\frac{u}{m}\right), & v = 0 \end{cases} \end{equation} yields for all these $\eta$'s, \begin{align*} \left( \frac{2\pi}{mn} \right)^k\rho_{\Xi_n'}^{(k)}(y) &= \det \left[ \frac{2\pi}{mn}s_n\left( \frac{2\pi}{mn}(z_s - z_t) + 2\pi(\eta_s - \eta_t) + x_{\pi(t)} - x_{\pi(s)} \right) \right]_{s,t=1}^k \\ &\xrightarrow[n\to\infty]{} \prod_{j=1}^p \det\left[ \frac{1}{m}q\left( \frac{z_s - z_t}{m} \right) \right]_{s,t \in \pi_j} = \frac{1}{m^k}\prod_{j=1}^p \rho_\Sigma^{(\sharp \pi_j)}\left( \frac{1}{m}(z_i)_{i \in \pi_j} \right). \end{align*} By estimate \eqref{eq:corrbound}, $\left( \frac{2\pi}{mn} \right)^k\rho_{\Xi_n'}^{(k)}(y)$ is bounded by $k^{k/2}/m^k$, so the integrand in \eqref{eq:rho} can be simply bounded.
Thus by Lebesgue's dominated convergence theorem \begin{align*} \rho_{\Theta_{m,n}}^{(k)}(z) \xrightarrow[n\to\infty]{} \sum_{p,\pi} \frac{1}{m^k}\prod_{j=1}^p \rho_\Sigma^{(\sharp \pi_j)}\left( \frac{1}{m}(z_i)_{i \in \pi_j} \right) \cdot \int_{[0,2\pi]^p} \rho_{\Xi_m}^{(p)}(x) \mathrm{d} x. \end{align*} For any $p \leq m$ the integral $\int_{[0,2\pi)^p} \rho_{\Xi_m}^{(p)}(x) \mathrm{d} x$ just equals $m!/(m-p)!$. Consequently, we finally obtain \[ \rho_{\Theta_{m,n}}^{(k)}(z_1, \ldots, z_k) \xrightarrow[n\to\infty]{} \sum_{p,\pi} \frac{1}{m^k}\frac{m!}{(m-p)!}\prod_{j=1}^p\rho_\Sigma^{(\sharp \pi_j)}\left( \frac{1}{m}(z_i)_{i \in \pi_j} \right).\] In view of \eqref{eq:remsuperposofsines} this completes the proof. \qedsymbol \subsection{Proof of Theorem (\ref{thm(b)})} Fix a point $z = (z_1, \ldots, z_k) \in \mathbb{R}^k$. We let $m$ and $n$ tend to infinity and want to prove that $\rho_{\Theta_{m,n}}^{(k)}(z)$ tends to 1. Recall \eqref{eq:rho} and notice that due to estimate \eqref{eq:prodbound} all the terms with $p \leq k-1$ are bounded above by $C_k/m$, so we can write \begin{align*} \rho_{\Theta_{m,n}}^{(k)}(z) = O\left(\frac{1}{m}+\frac{1}{n}\right) + \sum_\eta \textbf{1}_{W_\eta}(z) \int_{[0,2\pi]^k}\textbf{1}_{V_\eta}(x) \left( \frac{2\pi}{mn} \right)^k \rho_{\Xi_m}^{(k)}(x) \rho_{\Xi_n'}^{(k)}\left( y(z,x) \right) \mathrm{d} x.
\end{align*} Using the formulas for the correlation functions and the permutational definition of the determinant, we can put the integrand in the following form \begin{align*} &\frac{\textbf{1}_{V_\eta}(x)}{(2\pi)^k}\cdot \det\left[ \frac{2\pi}{m}s_m(x_s - x_t)\right]_{s,t=1}^k \cdot \det\left[ \frac{2\pi}{n}s_n\left( y_s - y_t \right) \right]_{s,t=1}^k \\ &= \frac{\textbf{1}_{V_\eta}(x)}{(2\pi)^k}\Bigg(1 + \sum_{\sigma\neq\textrm{id} \ \textrm{or} \ \tau\neq\textrm{id}}\sgn \sigma \sgn \tau \prod_{i=1}^k \frac{2\pi}{m}s_m(x_i - x_{\sigma(i)}) \cdot \frac{2\pi}{n}s_n(y_i - y_{\tau(i)})\Bigg), \end{align*} where the second summation runs through permutations $\sigma$ and $\tau$ of $k$ indices. The point is that each term in this sum tends to zero with $m$ and $n$ going to infinity as we have $\frac{2\pi}{m}s_m(x_i - x_{\sigma(i)}) \xrightarrow[m\to \infty]{\textrm{a.e.}} 0$ for $i$ such that $i \neq \sigma(i)$, and $\frac{2\pi}{n}s_n(y_i - y_{\tau(i)}) \xrightarrow[n\to \infty]{\textrm{a.e.}} 0$ if $i \neq \tau(i)$ (see \eqref{eq:simpleobserv} and mind the fact that actually $y$ depends on $m$ and $n$). Recall also that $\textbf{1}_{W_\eta} \to 1$ and $\sum_\eta \textbf{1}_{V_\eta} \to \textbf{1}_{[0,2\pi)^k}$. Moreover, \eqref{eq:easyboundS_n} yields that the whole sum is bounded by $(k!)^2/(2\pi)^k$. Therefore by Lebesgue's dominated convergence theorem we conclude that \[ \rho_{\Theta_{m,n}}^{(k)}(z) \xrightarrow[m,n \to \infty]{} \int \textbf{1}_{[0,2\pi)^k}(x)\frac{1}{(2\pi)^k} \mathrm{d} x = 1, \] which finishes the proof. \qedsymbol \section{Concluding remarks} At the very end we shall discuss the tensor product of more than two matrices. We only briefly sketch what can be easily inferred looking at the proof of the main result. Let $\Xi_l$, $\Xi_m'$, $\Xi_n''$ be the point processes of eigenphases of independent $l \times l$, $m \times m$, and $n \times n$ random unitary matrices respectively.
Proceeding along the same lines as in the proof of Theorem (\ref{thm(a)}), we conclude that the point process $\frac{2\pi}{lmn}(\Xi_l \otimes \Xi_m' \otimes \Xi_n'' - \pi)$ locally behaves as the Poisson point process on $\mathbb{R}$ when $l$ is fixed but $m$ and $n$ tend to infinity. Indeed, the asymptotics of the $k$-th correlation function $\rho^{(k)}(z)$ of that process is governed by the integrals \[ \int_{[0,2\pi]^{p+k}\cap V_\eta} \left(\frac{2\pi}{lmn}\right)^{k}\rho^{(p)}_{\Xi_l}(x)\rho^{(k)}_{\Xi_m'}(y)\rho^{(k)}_{\Xi_n''}(w(x,y,z)) \mathrm{d} x \mathrm{d} y, \] which we then sum suitably. Expanding the determinantal correlation functions of $\Xi_m'$ and $\Xi_n''$ (see the proof of Theorem (\ref{thm(b)})) we find that the limit of $\rho^{(k)}(z)$ equals $\sum_{p, \pi} \frac{1}{l^k}\frac{l!}{(l-p)!} = 1$, where the last identity is due to the well-known combinatorial fact that $\sum_{p=1}^k \sharp \mathfrak{S}(k, p) x(x-1)\cdots(x-p+1) = x^k$. The same line of reasoning applies also when in addition $l \to \infty$. Then the asymptotics depends only on the integral \[ \int_{[0,2\pi]^{2k}} \left(\frac{2\pi}{lmn}\right)^{k} \rho^{(k)}_{\Xi_l}(x)\rho^{(k)}_{\Xi_m'}(y)\rho^{(k)}_{\Xi_n''}(w(x,y,z)) \mathrm{d} x \mathrm{d} y. \] Again, we carry on as in the proof of Theorem (\ref{thm(b)}). Let $A^{(i)}_{n_i}$, $i = 1, 2, \ldots$, be independent $n_i \times n_i$ random unitary matrices. The other cases of tensor products $\bigotimes_{i=1}^M A^{(i)}_{n_i}$, when for instance all but one of $n_i$'s are fixed, seem to be more delicate and we do not wish to go into detail here. Moreover, it looks challenging to consider the tensor products when the number of terms $M$ tends to infinity and $(n_i)_{i=1}^\infty$ is fixed. The simplest case of $n_i = 2$, $i \geq 1$ has been addressed in \cite{T}. \section*{Acknowledgements} Thanks to Prof. Neil O'Connell for his great help.
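The combinatorial identity invoked above is easy to check numerically. A small sketch, in which stirling2 computes $\sharp \mathfrak{S}(k, p)$, the number of partitions of a $k$-element set into $p$ blocks, via the standard recurrence:

```python
# Check the identity sum_{p=1}^k S(k,p) * x(x-1)...(x-p+1) = x^k, where
# S(k,p) is the number of partitions of a k-element set into p blocks.

def stirling2(k, p):
    # Standard recurrence: S(k,p) = p*S(k-1,p) + S(k-1,p-1)
    if k == 0 and p == 0:
        return 1
    if k == 0 or p == 0:
        return 0
    return p * stirling2(k - 1, p) + stirling2(k - 1, p - 1)

def falling_factorial(x, p):
    out = 1
    for i in range(p):
        out *= x - i
    return out

def lhs(k, x):
    return sum(stirling2(k, p) * falling_factorial(x, p)
               for p in range(1, k + 1))

# The identity holds exactly for all integer x and k:
assert all(lhs(k, x) == x**k for k in range(1, 8) for x in range(1, 10))
```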
\section{Introduction} \indent It is well known that an electron outside the liquid is attracted to the helium surface by a weak image force, while it is prevented from entering the liquid by a strong repulsion due to the Pauli exclusion principle. As a result, the surface-state electrons (SSE) are trapped in a potential well and their motion in the direction normal to the surface is restricted to the hydrogen-like bound states with energies $\epsilon_n=-\Delta/n^2$, where $n\geq 1$ is an integer and $\Delta$ is the effective Rydberg energy. The latter is about 4~K for electrons over $^3$He, and at typical experimental conditions $T\lesssim 1$~K almost all electrons are in the ground level. In each Rydberg level, the electrons are free to move along the surface and thus form a series of two-dimensional conductive subbands. At $T\gtrsim 0.5$~K, the intra-subband relaxation times and the inter-subband transition linewidths are determined by the interaction of the electrons with helium vapor atoms, while at $T\lesssim 0.3$~K they are limited by the interaction with ripplons because the vapor density becomes negligible. In both cases, the momentum relaxation rate differs for electrons occupying different subbands. The inter-subband transitions of the SSE above $^4$He were first directly observed by Grimes \textit{et al}~\cite{Grimes1} who measured the temperature dependence of the linewidth for the transition between the ground level and the first excited Rydberg level in the vapor-atom scattering regime. Recently Collin \textit{et al}~\cite{Collin} extended these measurements to lower temperatures to cover the ripplon scattering regime and found good agreement with theoretical predictions~\cite{Ando}. In these experiments the electrons are Stark tuned in resonance with the microwave (MW) field by varying the vertical electric field exerted on the electrons.
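The statement that almost all electrons occupy the ground level at $T\lesssim 1$~K follows directly from the Boltzmann factor for the hydrogen-like spectrum. A quick sketch, taking $\Delta = 4$~K (the approximate value quoted above for $^3$He) and working in units where $k_{\mathrm{B}} = 1$:

```python
import math

# Relative thermal population of the two lowest Rydberg levels,
# epsilon_n = -Delta/n**2 with Delta ~ 4 K (the approximate value quoted
# in the text for electrons above 3He; energies in kelvin, k_B = 1).
DELTA = 4.0  # K

def excited_fraction(T):
    # Boltzmann ratio rho_2/rho_1 for the gap epsilon_2 - epsilon_1 = 3*Delta/4
    gap = 0.75 * DELTA
    return math.exp(-gap / T)

print(excited_fraction(1.0))   # ~0.05: the ground level dominates already at 1 K
print(excited_fraction(0.5))   # ~0.0025
```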
The MW absorption due to electron transitions from the ground level to the excited levels is then detected as the variation of the power of the MW radiation passing through the cell. Recently, we performed a similar experiment~\cite{Isshiki} to directly observe the transitions to the excited states for electrons above $^3$He. The temperature dependence of the transition linewidth for the first excited state was also found to be in agreement with the theory~\cite{Ando}, although the absolute value of the linewidth was somewhat higher than the theoretical estimate. The direct detection of MW power absorption is not the only way to observe the inter-subband transitions of the SSE. Several experiments have shown changes in the mobility for electron motion parallel to the surface induced by resonant microwave absorption. First, Edel'man~\cite{Edel'man1} observed the increase in the resonant cyclotron absorption when the SSE were exposed to MW radiation, which caused the electron transitions from the ground to the excited subbands. Later, Volodin and Edel'man~\cite{Volodin} observed the resonant change in the electron conductivity as a result of the interaction with microwaves. Both experiments were done with electrons above $^4$He and $^3$He and in the temperature range 0.3$-$0.5~K, which corresponds to the ripplon and vapor-atom scattering regimes for $^4$He and $^3$He respectively. The authors immediately pointed out that the observed change in conductivity is most likely due to heating of the electrons by the absorbed MW power. They found that in the case of $^4$He the conductivity increases, which should be expected for hot electrons whose momentum relaxation time is limited by ripplon scattering~\cite{Saitoh_hot}. In the case of $^3$He, however, a rather complicated behaviour was observed: the conductivity decreased at small MW power and increased at higher power.
As far as we know, no satisfactory explanation for this complicated behaviour has yet been given. Below, we describe an experiment in which the microwave-resonance-induced change in the resistivity of the SSE above liquid $^3$He is investigated in the temperature range from 0.45 to 0.65~K. Contrary to the result obtained by Volodin and Edel'man~\cite{Volodin}, we find that the conductivity decreases at the resonance for all values of the input MW power, which was varied by more than two orders of magnitude in our experiment. We explain the observed change in the resistivity by the heating of the SSE under the conditions of the MW resonance, and propose a theory which adequately describes the heating process and its effect on the electron scattering rate. In our earlier publication~\cite{PRL}, we reported the first results of our resistivity measurements and illustrated the heating mechanism using a rather simple two-level model. In this paper, we give a detailed description of the experimental procedure and the obtained results, and perform a thorough analysis of the processes of MW absorption and the heating of the SSE, taking full account of the thermal population of the higher Rydberg states. In addition, we investigate the effect of the heating on the MW absorption linewidth and compare the experimental results with the predictions of our theory. The paper is organized as follows: in Sec.~2, previous experimental and theoretical studies of hot electrons are briefly reviewed and the important notations for the electron scattering and the energy relaxation rates are introduced. In Sec.~3, we describe the experimental setup and the measurement procedure, and present the results of our resistivity measurements.
In this section we also present our hot-electron theory: we describe the heating mechanism, calculate the electron temperature and the electron scattering rate as functions of microwave power, and make a comparison with the experimental results. The effect of SSE heating on the absorption linewidth is discussed in Sec.~4. In the last section, some conclusions are drawn and future plans are outlined. \section{Theoretical background} It has been realized that due to the high mobility and the slow energy relaxation rate the SSE can be easily heated up to temperatures much higher than the temperature of the liquid substrate. If the electron temperature is sufficiently high to cause an appreciable thermal population of the higher excited subbands, the electrons are said to be ``hot''. The occupation of the higher subbands has an important consequence: since the electron scattering rate depends on the subband index, the energy and the momentum relaxation rates, which are determined by the collisions of electrons with scatterers, depend strongly on the electron temperature and on the degree of higher subband population. This property makes the electron heating easily observable. In addition to the experiments mentioned in the previous section, the hot electron effect was seen in transport~\cite{Bridges}, plasmon-resonance~\cite{Grimes2} and cyclotron-resonance~\cite{Edel'man2} measurements.
Most recently, an interesting effect, at least partially attributed to the electron heating, was reported by Penning \textit{et al} who observed a large change of the magneto-conductivity of the SSE under cyclotron-resonance conditions~\cite{Penning}. The first theoretical studies of hot surface electrons were done by Crandall~\cite{Crandall} who took into account the occupation of higher subbands but ignored inter-subband scattering, and by Shikin~\cite{Shikin} who considered the population of the quasi-continuous spectrum by electrons heated under the cyclotron-resonance conditions. A rather complete theoretical treatment of the problem was given by Saitoh and Aoki~\cite{Saitoh_hot} for the SSE heated by the low-frequency in-plane driving field, and by Aoki and Saitoh~\cite{Aoki} for the SSE heated by the high-frequency cyclotron-resonance field. In Ref.~7, the authors employed an effective temperature approximation and calculated the electron temperature as a function of the driving field by solving the energy balance equation. This allowed them to calculate the electron scattering rate as a function of the driving field for both vapor-atom and ripplon scattering. An important result obtained by Saitoh and Aoki was that as the electron temperature goes up and electrons become distributed over many subbands, the inter-subband scattering events become particularly important, and that this leads to an increase of the electron scattering rate in the vapor-atom scattering regime. Keeping in mind the temperature range of our experiment, in this paper we will consider only the case of the scattering by the vapor atoms.
The electron motion perpendicular to the helium surface is described by the Schr\"{o}dinger equation \begin{equation} \Biggl [ -\frac{\hbar^2}{2m_{\textrm{e}}} \frac{\textrm{d}^2}{\textrm{d}z^2} - \frac{(\varepsilon -1)e^2}{4(\varepsilon +1)z} +eE_{\perp}z \Biggr ] \psi_n(z)=\epsilon_n \psi_n(z), \label{eq:wave} \end{equation} \noindent where $z>0$ is the distance from the surface, $\psi_n$ and $\epsilon_n$ are the eigenfunctions and eigenvalues of the above equation corresponding to the quantum number $n$, $m_{\textrm{e}}$ is the electron mass and $-e$($<$0) is the electron charge, $\varepsilon$ is the dielectric constant of helium, and $E_{\perp}$ is the electric field applied perpendicular to the surface. Here we assume an infinitely large surface barrier potential, and the above equation should be solved with the boundary condition $\psi_n(0)=0$. For further discussion, it is convenient to introduce a number of notations similar to those used in Ref.~7. First, let us define the electron scattering rates $\nu_{mn}$ from the subband with index $m$ to the subband with index $n$, which for $m\geq n$ are given by \begin{equation} \nu_{mn}=\frac{\pi \hbar N_{\textrm{G}} A}{m_{\textrm{e}} B_{mn}} , \label{eq:nu} \end{equation} \noindent where $N_{\textrm{G}}(T)$ is the saturated vapor density of the helium gas, $A$ is the cross-section of a helium atom, and the matrix element $B_{mn}$ is given by \begin{equation} \frac{1}{B_{mn}}=\int\limits_{0}^{\infty} \Bigl \{ \psi_m (z) \psi_n (z) \Bigr \}^2 dz. \label{eq:B} \end{equation} \noindent We note that for $m=n$, $\nu_{mn}$ coincides with the electron momentum relaxation rate for the corresponding intra-subband scattering.
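Equation (\ref{eq:wave}) is solved numerically later in the text; as an illustration of how such a solver can be set up, here is a minimal finite-difference sketch in dimensionless units, checked only against the zero-field hydrogen-like spectrum $E_n=-1/n^2$ (the grid parameters are illustrative assumptions, not the ones used in the paper):

```python
import numpy as np

# Minimal finite-difference sketch of a solver for eq. (1) with psi(0) = 0
# (infinite barrier).  Dimensionless units: the zero-field problem
# -psi'' - (2/z) psi = E psi has the hydrogen-like spectrum E_n = -1/n**2,
# which serves as a sanity check.  f is a dimensionless pressing field;
# the grid sizes below are illustrative choices.

def levels(f=0.0, L=150.0, N=2000, k=4):
    h = L / (N + 1)
    z = h * np.arange(1, N + 1)              # interior grid points
    V = -2.0 / z + f * z                     # image potential + Stark term
    H = (np.diag(2.0 / h**2 + V)
         + np.diag(-np.ones(N - 1) / h**2, 1)
         + np.diag(-np.ones(N - 1) / h**2, -1))
    return np.sort(np.linalg.eigvalsh(H))[:k]

E = levels()   # E[0] ~ -1, E[1] ~ -1/4 in the zero-field limit
```

Adding a positive field term $fz$ raises the ground level, reproducing qualitatively the Stark tuning used later in the experiment.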
In addition, the following relation is valid for arbitrary indices $m$ and $n$ \begin{equation} \nu_{nm}=\nu_{mn} \exp \biggl( -\frac{\epsilon_{m}-\epsilon_{n}}{T_{\textrm{e}}} \biggr), \label{eq:relation} \end{equation} \noindent where $\epsilon_{n}$ is given (in units of temperature) by (\ref{eq:wave}), and $T_{\textrm{e}}$ is the electron temperature. In order to calculate the electron temperature and to describe the electron transport properties, we need to know the expressions for the momentum and the energy relaxation rates of the SSE, which are distributed over many subbands. In the case when the distribution function is given by the Boltzmann distribution, such expressions were obtained by Saitoh and Aoki~\cite{Saitoh_hot}. However, in the case of the MW absorption, when the transitions between the two lowest subbands are induced by the MW radiation, it is necessary to generalize their equations for an arbitrary distribution function. In this case, the momentum relaxation rate $\nu$ is given by~\cite{Monarkha} \begin{equation} \nu=\sum\limits_{m,n} \rho_{m} \nu_{mn} \exp \biggl( -\frac{\mid \epsilon_{m}-\epsilon_{n}\mid +\epsilon_{n}-\epsilon_{m}}{2T_{\textrm{e}}} \biggr) \biggl[ 1+\frac{\mid \epsilon_{m}-\epsilon_{n}\mid +\epsilon_{n}-\epsilon_{m}}{2T_{\textrm{e}}} \biggr], \label{eq:rate} \end{equation} \noindent where $\rho_n$ is the fractional occupancy of the subband of index $n$.
The corresponding expression for the energy relaxation rate $\tilde{\nu}$ is given by \begin{equation} \tilde{\nu}=\frac {m_{\textrm{e}}} {M} \sum\limits_{m,n} \rho_{n} \nu_{mn} \exp \biggl( -\frac{\mid \epsilon_{m}-\epsilon_{n} \mid +\epsilon_{m}-\epsilon_{n}}{2T_{\textrm{e}}} \biggr) \biggl[ 2+\frac{\mid \epsilon_{m}-\epsilon_{n} \mid}{T_{\textrm{e}}}+\frac{\hbar^2 B_{mn}}{2m_{\textrm{e}} C_{mn} T_{\textrm{e}}} \biggr], \label{eq:loss} \end{equation} \noindent where $M$ is the mass of the helium atom, $B_{mn}$ is given by (\ref{eq:B}), and $C_{mn}$ is given by \begin{equation} \frac{1}{C_{mn}}=\int\limits_{0}^{\infty} \biggl \{ \frac{\textrm{d}}{\textrm{d}z}[\psi_m (z) \psi_n (z)] \biggr \}^2 dz. \label{eq:C} \end{equation} \noindent Note that both rates reduce to the corresponding rates given in Ref.~7 in the case of the Boltzmann distribution $\rho_n\propto \exp(-\epsilon_n/T_{\textrm{e}})$. \section{Resonance-induced resistivity} \subsection{Experiment} The experiment was done in the same experimental cell that was used for our direct MW absorption measurements \cite{Isshiki}. The cell is attached to the mixing chamber of the dilution refrigerator and the temperature of the mixing chamber is measured with a calibrated germanium thermometer. The cell temperature is assumed to be the same as that of the mixing chamber. A layer of the SSE is trapped on a vapor-liquid interface placed approximately midway between two parallel metal plates separated by 3~mm. By measuring the capacitance between the plates, the liquid level can be adjusted with an accuracy of about 0.05~mm~\cite{Isshiki}. Electrons are pressed toward the surface by an electric field created between the plates by applying a positive voltage $V_\textrm{B}$ to the bottom plate. By varying the potential of the bottom plate, the energy difference between the first excited and the ground states can be adjusted to match the frequency of the microwave field.
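The matrix elements (\ref{eq:B}) and (\ref{eq:C}) are simple quadratures once the wave functions are known. As a sketch, for the zero-field hydrogenic ground state $\psi_1(z)=2ze^{-z}$ (in the same dimensionless units) they can be evaluated numerically and checked against the analytic values $1/B_{11}=3/8$ and $1/C_{11}=1/2$:

```python
import numpy as np

# Sketch of the matrix elements (3) and (7) for the zero-field hydrogenic
# ground state psi_1(z) = 2 z exp(-z) (dimensionless units).  For this
# state the integrals are known analytically: 1/B_11 = 3/8, 1/C_11 = 1/2.

def trapz(f, z):
    # trapezoidal quadrature on a (possibly non-uniform) grid
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * (z[1:] - z[:-1])))

z = np.linspace(1e-6, 40.0, 400001)
psi1 = 2.0 * z * np.exp(-z)

inv_B11 = trapz((psi1 * psi1) ** 2, z)                  # eq. (3), m = n = 1
inv_C11 = trapz(np.gradient(psi1 * psi1, z) ** 2, z)    # eq. (7), m = n = 1

print(inv_B11)  # ~0.375
print(inv_C11)  # ~0.5
```

The same quadratures, applied to the numerical eigenfunctions of (\ref{eq:wave}) at finite $E_\perp$, give the $B_{mn}$ and $C_{mn}$ entering the relaxation rates.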
MW radiation at a fixed frequency of 130~GHz from a MW source (Gunn oscillator) is coupled into the cell by a dielectric waveguide of rectangular cross section. The microwave power dissipated in the cell can be estimated from the cooling power of the refrigerator and was usually $\sim$10~$\%$ of the input power $P$, which is measured at the output of the MW source. The variation of the SSE resistivity is detected by a capacitive-coupling method~\cite{Shirahama}. For this purpose the top plate contains a concentric-ring copper electrode pair, which is known as a Corbino disk. The diameters of the inner and outer electrodes are 14 and 20~mm respectively, and the gap between them is less than 0.2~mm. To charge the surface, a positive voltage $V_\textrm{B}$ in a range from 3 to 25~V is applied to the bottom electrode. The electrons, which are produced at 0.65~K by thermionic emission from a tungsten filament located approximately 1.5~mm above the surface, accumulate on the surface until the charge layer completely screens the electric field above the surface. This condition determines the charge density $n_e=\varepsilon V_\textrm{B} /(4\pi e d)$, where $d$ is the depth of the liquid. An ac voltage of 1~MHz and 3$-$5~mV$_\textrm{RMS}$ is applied to the inner electrode. This produces an in-plane electric field $E_{\parallel}$, which drives electrons parallel to the liquid surface. The current induced in the SSE is picked up at the outer electrode, and the amplified signal is synchronously demodulated at 1~MHz by means of a lock-in amplifier. From the value of this current the magnitude of $E_{\parallel}$ was estimated to be less than $10^{-3}$~V/cm, which should produce no significant heating of the SSE in the temperature range of this experiment~\cite{Saitoh_warm}. The resistivity of the SSE is found from the values of the phase shift of the output signal with respect to the driving signal using a fitting procedure described in detail elsewhere~\cite{Shirahama}.
To observe the resonance-induced change of the resistivity, the voltage $V_\textrm{B}$ is varied to tune the SSE in resonance with MW radiation, and the in-phase and quadrature components of the output Corbino signal are recorded. The input MW power $P$ is varied in a range from about 10~$\mu$W, at which the Corbino signal can be distinguished from the noise (we note that our direct absorption setup described in Ref.~4 allows much more sensitive detection of MW absorption than the method described in this section), to about 1500~$\mu$W, which is the maximum output power of our MW source. \subsection{Results} \begin{figure}[t] \begin{center} \includegraphics[width=12cm]{fig1.eps} \end{center} \caption{\label{fig:exp2} Variation of the resistivity $\sigma^{-1}$ of the SSE taken at (a): $T$=0.5~K, $n$=1.9$\times10^7$~cm$^{-2}$ and (b): $T$=0.6~K, $n$=9.8$\times10^7$~cm$^{-2}$. The resistivity curves in (a) correspond (from bottom to top) to 20, 40, 95, 190, 400, 670, and 1085~$\mu$W of the input MW power, while the curves in (b) correspond (from bottom to top) to 50, 95, 120, 190, 300, 400, 670, 1080, and 1500~$\mu$W.} \end{figure} The resistivity measurements were done at temperatures from 0.65~K down to 0.45~K, i.e. in the vapor-atom scattering regime. The experimental procedure described in the previous section allowed us to plot the variation of resistivity $\sigma^{-1}$ with $V_\textrm{B}$ at different values of the input MW power. Examples of such plots for two different temperatures are shown in Fig.~\ref{fig:exp2}. The data at 0.5 and 0.6~K were taken at electron densities of $1.9\times 10^7$ and $9.8\times 10^7$~cm$^{-2}$ respectively.
The resonant change of the resistivity is due to the resonant excitation of the SSE from the ground to the first excited subband, with the peak value at $V_\textrm{B}\approx$ 28.6~V at small MW power (there is a shift of about 0.4~V in the resonance voltage at high electron densities, which is attributed to the effect of the image charges induced in the metal plates~\cite{shift}). The relation between the pressing electric field $E_{\perp}$ and the transition frequency $\omega_{12}/2\pi$ can be calculated from the numerical solution of (\ref{eq:wave}) and at the typical values of $E_{\perp}$ used in our experiment can be approximated by $\omega_{12}/2\pi=82.2+0.5E_{\perp}$, where the frequency is in GHz and the electric field in V/cm. This estimate is in good agreement with the value of $V_\textrm{B}$ at the resonance observed in our experiment. The excitation of the inter-subband transition causes $\sigma^{-1}$ to increase, as was observed at all temperatures between 0.45 and 0.65~K and at all values of $P$. The increase of $\sigma^{-1}$ far from the resonance with increasing $P$, which can be seen in Fig.~\ref{fig:exp2} as the vertical shift of signals toward higher values of $\sigma^{-1}$, is attributed to the heating of the experimental cell by MW radiation. The increase of the cell temperature can be estimated from the measured temperature dependence of the SSE conductivity and is $\sim10^{-2}$~K at the highest value of $P$. It is important to mention that the MW radiation entering our experimental cell is not vertically polarized, therefore there should be a component of the MW electric field which drives electrons parallel to the surface. The heating of the SSE due to this field can be estimated as follows. The average power absorbed by the SSE from the driving field of amplitude $E_{\parallel}$ is about $\sigma E_{\parallel}^2$.
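The linear Stark relation above immediately fixes the resonant pressing field for the 130~GHz source:

```python
# Stark tuning relation quoted in the text:
# omega_12/2pi = 82.2 + 0.5*E_perp  (frequency in GHz, E_perp in V/cm)
f_mw = 130.0                      # GHz, frequency of the Gunn oscillator
E_res = (f_mw - 82.2) / 0.5       # resonant pressing field, V/cm
print(E_res)  # ~95.6 V/cm
```

This is consistent with the value $E_{\perp}\cong 95$~V/cm quoted later in the paper.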
In the case of the low-frequency field, the heating is expected to become noticeable at $E_{\parallel}\gtrsim 10^{-2}$~V/cm at $T\approx 0.5$~K~\cite{Saitoh_warm}. However, for such a high-frequency field that $\omega\tau\gg 1$, where $\tau$ is the electron scattering time, the conductivity of electrons is reduced by a factor of $(\omega\tau)^2$ compared with the dc value. For a typical scattering time of the order of $10^{-9}$~s at $T\approx 0.5$~K, this factor is more than $10^5$ for the MW frequency of 130~GHz. Therefore, in the temperature range of the experiment, the heating of the SSE due to the parallel field is estimated to be negligible even at the highest values of $P$. At high MW power, the resonance line shape becomes slightly asymmetric with the left-hand side being slightly steeper than the right-hand side. Also, the position of the peak amplitude of the resistivity curves shifts toward the lower values of $V_\textrm{B}$ with increasing input power. The absolute value of the frequency shift was found to increase with the electron density. For example, for two sets of data shown in Fig.~\ref{fig:exp2}, which correspond to density values of $1.9\times 10^7$ and $9.8\times 10^7$~cm$^{-2}$, the shifts are estimated to be about 0.4 and 1.9~GHz respectively. Variation of the liquid level due to thermal expansion of the liquid could be one of the possible explanations of this shift. MW radiation heats up the cell, and variation of the liquid level can cause a change in the magnitude of the pressing field due to image charges induced in the metal plates. However, using the values of the thermal expansion coefficient of $^3$He~\cite{Roach}, the effect is estimated to be several orders of magnitude smaller. Another exciting possibility is the shift of the resonance frequency due to Coulomb interaction between electrons~\cite{Lea}.
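The suppression factor quoted above follows directly from the numbers in the text:

```python
import math

# High-frequency suppression of the in-plane conductivity:
# sigma(omega) ~ sigma_dc / (omega*tau)**2 for omega*tau >> 1.
tau = 1e-9                       # s, typical scattering time at T ~ 0.5 K
omega = 2 * math.pi * 130e9      # rad/s, 130 GHz MW field
factor = (omega * tau) ** 2
print(factor)  # ~6.7e5, i.e. "more than 1e5" as stated
```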
For the SSE, the average distance to the surface increases with the subband index, and the energy levels of an electron shift when the neighbour electron is excited, because the distance between electrons also changes. As a result, the transition frequency of the ground state electron increases with increasing population of the excited levels. The shift is larger near the resonance than far from it, thus introducing the asymmetry into the line shape, which is indeed observed in the experiment. The absolute value of the frequency shift is expected to increase with electron density, which is in agreement with the above observation. The value of this shift can be easily estimated for two isolated electrons. Using a formula similar to the one in Ref.~21, for two electrons above $^3$He in zero field, and separated by 1~$\mu$m, we obtain a frequency shift of about 0.9~GHz, which is in order-of-magnitude agreement with our observation. For a more realistic estimate, the interaction between many electrons has to be considered, and the long-range Coulomb interaction between distant electrons might need to be taken into account. This is a rather complicated problem; therefore we postpone any further discussion of this effect until future publications. \subsection{Discussion} \subsubsection{Hot electron theory} Let us suppose that as a result of the MW excitation a fraction $\rho_2$ of the SSE occupy the first excited state. First, let us assume that the SSE are in thermal equilibrium with the vapor, i.e. have the same temperature $T$ as the vapor and liquid. Since $T$ is much less than the energy difference between the ground and the excited levels, we can ignore the thermal population of the higher index subbands, i.e. consider only scattering of electrons within the two lowest subbands.
From (\ref{eq:rate}), the momentum relaxation rate $\nu$ in this case is given by \begin{equation} \nu=\rho_1 \nu_{11}+\rho_2 \nu_{22}+\rho_2 \nu_{21}+\rho_1 \nu_{12} \biggl( 1+\frac{\epsilon_2-\epsilon_1}{T_{\textrm{e}}} \biggr), \label{eq:2level} \end{equation} \noindent where $\rho_1=1-\rho_2$, and $\nu_{mn}$ are given by (\ref{eq:relation}). Since $\nu_{12}\ll \nu_{21}$, the last term in (\ref{eq:2level}) can be safely neglected. A numerical estimate shows that the relaxation rate monotonically decreases with increasing $\rho_2$. For example, at $T=0.5$~K and the pressing field $E_{\perp }\cong 95$~V/cm, at which $\nu_{11}$, $\nu_{22}$ and $\nu_{21}$ are about $2.0\times 10^9$, $1.1\times 10^9$ and $0.4\times 10^9$ s$^{-1}$ respectively, the relaxation rate decreases by about $3\%$ and $12\%$ at the populations $\rho_2$ of 0.1 and 0.5 respectively. Thus, if we neglect the heating of the SSE, the resistivity is expected to drop at the resonance. This is opposite to the behaviour observed in the experiment. However, the assumption that the SSE are in thermal equilibrium with the vapor cannot be justified if the electrons absorb a large amount of energy from an external source. On the one hand, due to the very large (of order $10^{11}$~s$^{-1}$) electron-electron collision rate the SSE can rapidly re-distribute the energy among themselves, thus quickly attaining an equilibrium with an effective electron temperature $T_{\textrm{e}}$. On the other hand, due to the very weak coupling of the SSE to the environment, their temperature can be much higher than the temperature $T$ of the vapor and liquid. When electrons become hot and thermally populate many subbands, the inter-subband scattering among the occupied levels contributes significantly to the scattering rate. In other words, the scattering rate of the electron increases with the density of states to which it can be scattered, and the latter increases rapidly as more and more subbands become populated.
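The quoted decrease of the relaxation rate can be reproduced directly from (\ref{eq:2level}) with the rates given above (the small $\nu_{12}$ term neglected, as in the text):

```python
# Two-subband momentum relaxation rate, eq. (8), with the rates quoted in
# the text for T = 0.5 K and E_perp ~ 95 V/cm (units of 1e9 s^-1); the
# small nu_12 term is neglected, as in the text.
nu11, nu22, nu21 = 2.0, 1.1, 0.4

def nu(rho2):
    return (1 - rho2) * nu11 + rho2 * nu22 + rho2 * nu21

def drop(rho2):
    # fractional decrease of the rate relative to rho_2 = 0
    return 1 - nu(rho2) / nu(0.0)

print(drop(0.1))  # ~0.025, i.e. about 3% as quoted
print(drop(0.5))  # ~0.125, i.e. about 12% as quoted
```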
As a result, in the gas atom scattering regime the momentum relaxation rate, given by equation (\ref{eq:rate}), is an almost monotonically increasing function of the electron temperature $T_{\textrm{e}}$. Thus, the significant rise of the electron resistivity observed in our experiment can be explained if we assume that the SSE become hot. The heating of the SSE can be caused by the absorbed MW power and is due to the very slow energy relaxation rate. The lifetime of the excited electron is mainly limited by the elastic collisions with helium vapor atoms. The energy exchange between the electron and the helium atom during one collision is negligible due to the large difference in masses. As a result, the electron returns to the ground state carrying large kinetic energy, which is quickly distributed between all other electrons in the ground subband. The cooling of electrons is a slow process and the temperature of electrons can rise well above that of the helium bath. In order to find $T_{\textrm{e}}$ as a function of the MW power in the cell, we write the energy balance equation by equating the average energy absorbed by the electron from the MW field in the cell (per unit time) to the average energy lost in the collisions with helium atoms, i.e. \begin{equation} \frac{0.5\hbar \omega \gamma \Omega^2}{(\omega_{12} -\omega)^2+\gamma^2} ( \rho_1 -\rho_2 ) = ( T_{\textrm{e}}-T )\tilde{\nu}, \label{eq:bal} \end{equation} \noindent where $\omega_{12}$ is the transition frequency, which is a function of the pressing field $E_{\perp}$, $\omega$ is the MW frequency, $\gamma$ is the temperature-dependent transition linewidth, $\Omega$ is the Rabi frequency defined below, and the energy relaxation rate $\tilde{\nu}$ is given by equation (\ref{eq:loss}). In this equation, we neglect the frequency shift observed in the experimental data and discussed at the end of Sec.~3.2.
In the general case, the linewidth $\gamma$ depends not only on the ambient temperature $T$, which determines the vapor density $N_{\textrm{G}}$, but also on the electron temperature $T_{\textrm{e}}$. The expression for $\gamma$, which takes into account the scattering of the SSE from the two lowest subbands into higher index ($n>2$) subbands, was found by Ando~\cite{Ando}. Using the notations adopted earlier, it can be written as \begin{equation} \gamma=\frac{1}{2}\bigl( \nu_{11}+\nu_{22}-\nu_{21}+\nu_{12}+\sum\limits_{n>2}\nu_{1n}+\sum\limits_{n>2}\nu_{2n} \bigr). \label{eq:gamma} \end{equation} \noindent In Fig.~\ref{fig:gamma} we show the plots of $\gamma$ as a function of $T_{\textrm{e}}$ calculated for $T$=0.5~K (solid line) and $T$=0.6~K (dashed line). At $T$=0.5~K we find that $\gamma$ increases from about 210~MHz at $T_{\textrm{e}}$=0.5~K to about 520~MHz at $T_{\textrm{e}}$=28.7~K. \begin{figure}[t] \centering \includegraphics[width=11cm]{fig2} \caption{\label{fig:gamma} Variation of the linewidth $\gamma$ with $T_{\textrm{e}}$ calculated for $T$=0.5~K (solid line) and $T$=0.6~K (dashed line).} \end{figure} \begin{figure}[t] \centering \includegraphics[width=11cm]{fig3} \caption{\label{fig:popul} Comparison between subband occupancies $\rho_n$ given by the Boltzmann distribution (open circles) and those found by numerical solution of the rate equations (\ref{eq:popul}) (solid circles) at $T=0.5$~K, $T_{\textrm{e}}=28.7$~K and $\Omega=10$~GHz.} \end{figure} The fractional occupancies $\rho_{n}$ are determined by the balance between the rate of the MW excitation and the scattering rates $\nu_{mn}$. At the stationary state, i.e.
$\partial \rho_{n}/\partial t$=0, they can be found from the following equations \begin{subequations} \begin{eqnarray} \frac{0.5\gamma \Omega^2}{(\omega_{12} -\omega)^2+\gamma^2} ( \rho_2 -\rho_1 )-\sum\limits_{n\neq 1}\nu_{1n}\rho_1+\sum\limits_{n\neq 1}\nu_{n1}\rho_n=0,\label{eq:popula} \\ \frac{0.5\gamma \Omega^2}{(\omega_{12} -\omega)^2+\gamma^2} ( \rho_1 -\rho_2 )-\sum\limits_{n\neq 2}\nu_{2n}\rho_2+\sum\limits_{n\neq 2}\nu_{n2}\rho_n=0,\label{eq:populb} \\ -\sum\limits_{n\neq m}\nu_{mn}\rho_m+\sum\limits_{n\neq m}\nu_{nm}\rho_n=0,\label{eq:populc} \end{eqnarray} \label{eq:popul} \end{subequations} \noindent where equation (\ref{eq:populc}) is valid for all $m\geq 3$. From the above equations, the occupancies $\rho_{n}$ can be calculated as a function of $T_{\textrm{e}}$ and the MW power. As in Ref.~7, in our calculations we assume that the electron temperature $T_{\textrm{e}}$ is the same for all subbands of index $n$ (effective temperature approximation). The dependence on the MW power can be expressed in terms of the Rabi frequency. The latter is defined as $\Omega=ez_{12}E_{\textrm{RF}}/\hbar$, where $z_{12}=\langle \psi_1 |z| \psi_2 \rangle$ and $E_{\textrm{RF}}$ is the amplitude of the MW field in the cell, and is proportional to the square root of the MW power in the cell. In order to obtain $\rho_{n}$, equations (\ref{eq:popul}) were solved numerically for a large number of subbands $n\leq n_{\textrm{max}}$. We have checked the convergence of the result by taking $n_{\textrm{max}}$=200 and 400, and the result for $n_{\textrm{max}}$=400 is presented here. The thermal population of subbands with indexes $n>n_{\textrm{max}}$ was neglected. To determine the energies $\epsilon_n$ and the wave functions $\psi_n(z)$ of the electron in the non-zero pressing field, we solved the corresponding Schr\"odinger equation numerically for quantum numbers $n\leq$10. In this equation, the surface barrier potential due to the liquid was taken to be infinite.
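A stationary solution of rate equations of the type (\ref{eq:popul}) can be sketched for a small number of levels. The toy model below is only illustrative: it assumes three levels with hypothetical energies, a single constant downward scattering rate, upward rates fixed by detailed balance at $T_{\textrm{e}}$, and a symmetric MW pumping rate $W$ between the two lowest levels; none of these numbers are the paper's microscopic rates.

```python
import math

# Minimal stationary solution of three-level rate equations in the spirit
# of eqs. (popul).  All parameter values are hypothetical.
EPS = [0.0, 1.0, 2.5]      # subband energies epsilon_n (assumed)
NU0 = 1.0                  # downward scattering rate scale (assumed)
T_E = 1.0                  # electron temperature (assumed)

def rate(m, n):
    """Scattering rate from level m to level n (detailed balance at T_E)."""
    if EPS[n] <= EPS[m]:
        return NU0
    return NU0 * math.exp(-(EPS[n] - EPS[m]) / T_E)

def stationary(W):
    """Solve the stationary master equation with sum_n rho_n = 1."""
    N = len(EPS)
    M = [[0.0] * N for _ in range(N)]
    for m in range(N):
        for n in range(N):
            if n != m:
                M[m][n] += rate(n, m)      # gain into level m
                M[m][m] -= rate(m, n)      # loss out of level m
    # MW pumping between the two lowest levels (indices 0 and 1).
    M[0][0] -= W; M[0][1] += W
    M[1][1] -= W; M[1][0] += W
    # Replace the last rate equation by the normalization condition.
    A = [row[:] + [0.0] for row in M[:-1]] + [[1.0] * N + [1.0]]
    for c in range(N):                      # Gauss-Jordan with pivoting
        p = max(range(c, N), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(N):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][N] / A[i][i] for i in range(N)]

rho_dark = stationary(0.0)     # no MW: Boltzmann occupancies recovered
rho_pump = stationary(50.0)    # strong pumping: rho_2 approaches rho_1
```

With $W=0$ the detailed-balance rates reproduce the Boltzmann distribution exactly, which is a useful check on the solver; strong pumping drives $\rho_2\rightarrow\rho_1$, as discussed in the text.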
For $n>$10 the image potential was neglected and the solutions given by Airy functions were used. The examination of the obtained solutions of equations (\ref{eq:popul}) showed that at $\Omega \gtrsim 10$~MHz, the fractional occupancies $\rho_{n}$ for $n \geq 3$ deviate very little (less than $4\%$) from those given by the Boltzmann statistics, i.e. \begin{equation} \rho_{n}=\frac{1}{Z}\textrm{exp} \biggr( -\frac{\epsilon_{n}}{T_{\textrm{e}}} \biggl), \label{eq:Boltz} \end{equation} \noindent where $Z$ is the partition function. At the same time, the populations of the two lowest levels depend on the MW power and reach saturation, i.e. $\rho_2\rightarrow \rho_1$, at high values of $\Omega$. As an example, in Fig.~\ref{fig:popul} we show the comparison between $\rho_{n}$ calculated from (\ref{eq:popul}) and those given by (\ref{eq:Boltz}) at some values of $T_{\textrm{e}}$ and $\Omega$. \begin{figure}[t] \centering \includegraphics[width=11cm]{fig4} \caption{\label{fig:temp} Electron temperature versus Rabi frequency calculated for electrons above $^3$He in the pressing field $E_{\perp }$=95.33~V/cm for $T$=0.5~K (solid line) and $T$=0.6~K (dash-dotted line). Short-dashed line and dashed line are the calculations of $T_{\textrm{e}}$ which assume the Boltzmann distribution for $n\geq 3$ and for $n\geq 1$ (``pure'' Boltzmann distribution) respectively. Both lines are calculated for $T$=0.5~K.} \end{figure} The electron temperature $T_{\textrm{e}}$ can be calculated as a function of the Rabi frequency $\Omega$ from equations (\ref{eq:bal}) and (\ref{eq:loss}) taking the sum over $n\leq n_{\textrm{max}}$. The plot of $T_{\textrm{e}}$ versus $\Omega$ calculated for $T$=0.5~K (solid line) is shown in Fig.~\ref{fig:temp}. At small values of $\Omega$, the heating of the SSE is not significant, and the electron temperature stays close to that of the liquid.
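The truncated Boltzmann distribution (\ref{eq:Boltz}) and the $n_{\textrm{max}}$ convergence check can be sketched as follows. The spectrum used here is an assumed Airy-like scaling $\epsilon_n\propto n^{2/3}$ for a linear pressing potential, not the paper's numerically computed energies; the point is only that the occupancies change negligibly between $n_{\textrm{max}}$=200 and 400.

```python
import math

# Boltzmann occupancies of eq. (Boltz) truncated at n_max, with an assumed
# Airy-like spectrum eps_n = a * n**(2/3) (illustrative, not the paper's).
def occupancies(T_e, n_max, a=1.0):
    eps = [a * n ** (2.0 / 3.0) for n in range(1, n_max + 1)]
    # Shift by the ground-state energy to avoid exponent overflow.
    w = [math.exp(-(e - eps[0]) / T_e) for e in eps]
    Z = sum(w)                      # truncated partition function
    return [x / Z for x in w]

# Convergence check in the spirit of the text: n_max = 200 vs 400.
r200 = occupancies(T_e=3.0, n_max=200)
r400 = occupancies(T_e=3.0, n_max=400)
```

For any spectrum that grows with $n$ the truncated partition function converges, so doubling $n_{\textrm{max}}$ leaves the low-level occupancies essentially unchanged.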
At $\Omega \gtrsim 2$~MHz, the power absorbed by the SSE increases, the cooling rate is too small to keep the electrons in thermal equilibrium with the vapor, and $T_\textrm{e}$ rises quickly. At $T_\textrm{e} \gtrsim 2$~K, the thermal population of the higher levels becomes appreciable, the cooling rate increases, and the electron temperature rises more slowly. At very high values of $\Omega$, $\rho_2$ approaches $\rho_1$ as was shown above, the power absorption saturates, and $T_{\textrm{e}}$ reaches a maximum value of about 28.7~K and becomes almost power independent. The plot calculated for $T$=0.6~K (dash-dotted line) is also shown in Fig.~\ref{fig:temp}. At this temperature the vapor density $N_{\textrm{G}}$, which determines the scattering rates, is higher than at $T$=0.5~K, therefore the cooling of the SSE is more efficient. As a result, $T_{\textrm{e}}$ starts to rise at a higher MW power ($\Omega \gtrsim 10$~MHz), and stays lower than the $T_{\textrm{e}}$ calculated for $T=0.5$~K. However, at saturation it reaches almost the same maximum value (about 28.6~K) as for $T=0.5$~K. This result is not surprising and can be expected from equations (\ref{eq:populb}) and (\ref{eq:bal}) if we take $\omega_{12}=\omega$. At saturation, the fractional occupancies $\rho_n$ reach constant values for all $n$. In this case, it is easily seen that $(\rho_1-\rho_2)$ is proportional to the square of the temperature-dependent vapor density, $N_{\textrm{G}}^2(T)$. Since both the linewidth $\gamma$ and the energy relaxation rate $\tilde{\nu}$ are proportional to $N_{\textrm{G}}$, the dependence on the ambient temperature $T$ cancels out from equation (\ref{eq:bal}) at $T_{\textrm{e}}\gg T$. Thus, it gives the same $T_{\textrm{e}}$ regardless of the value of $T$.
\begin{figure}[t] \centering \includegraphics[width=11cm]{fig5} \caption{\label{fig:rate} The momentum relaxation rate (solid line) due to scattering by helium vapor atoms versus the Rabi frequency calculated for the SSE above $^3$He in the pressing field $E_{\perp }$=95.33~V/cm. The short-dashed line is the calculation of $\nu$ which assumes the Boltzmann distribution for $n\geq 3$. Both lines are calculated for $T$=0.5~K.} \end{figure} In order to see the effect of the heating on the electron resistivity, we have calculated the momentum relaxation rate $\nu$ of the SSE as a function of $T_{\textrm{e}}$ and $\Omega$ using equation (\ref{eq:rate}). The plot of $\nu$ versus $\Omega$ calculated at $T$=0.5~K is shown in Fig.~\ref{fig:rate}. At small $\Omega$, that is at small MW power, when the increase of $T_{\textrm{e}}$ is negligible, the relaxation rate slightly decreases, as shown in the inset of Fig.~\ref{fig:rate}. This result is consistent with the condition considered under equation (\ref{eq:2level}), that is $\nu_{12}\ll \nu_{21}$. At $\Omega \gtrsim 5$~MHz, the electrons start to thermally populate the higher excited states, the inter-subband scattering increases, and the relaxation rate quickly rises. At large values of $\Omega$, when $T_{\textrm{e}}$ becomes power independent (see Fig.~\ref{fig:temp}), the relaxation rate reaches a constant value of about $5.5\times 10^{9}$~s$^{-1}$. Therefore, if we neglect the small decrease (less than 0.001$\%$) at $\Omega \lesssim 2$~MHz, the relaxation rate increases monotonically with the MW power, which qualitatively agrees with our experimental result. At $T$=0.6~K (plot is not shown) the relaxation rate is about $6\times 10^{9}$~s$^{-1}$ for cold electrons and reaches about $14\times 10^{9}$~s$^{-1}$ for hot electrons at saturation. As was shown earlier, at high enough values of $T_{\textrm{e}}$ the populations of the levels with $n\geq 3$ are close to those given by the Boltzmann statistics.
Therefore, to simplify the calculations and to save computation time, one can use (\ref{eq:Boltz}) to calculate $\rho_n$ for all $n\geq 3$, while determining the populations of the two lowest levels from one of the rate equations, e.g. (\ref{eq:popula}), and the condition that $\sum\limits_{n=1}^{\infty} \rho_n=1$. In this case, the sums appearing in equations (\ref{eq:bal}) and (\ref{eq:rate}) can be taken over all values of the index $n$ using the asymptotic formulas given in the Appendix of Ref.~7. The results of these calculations for $T$=0.5~K are shown as short-dashed lines in Figs.~\ref{fig:temp} and \ref{fig:rate}. In this case $T_{\textrm{e}}$ and $\nu$ reach maximum values of about 27~K and $5.3\times 10^{9}$~s$^{-1}$ respectively. It is also instructive to calculate $T_{\textrm{e}}$ assuming a ``pure'' Boltzmann distribution of the SSE over the subbands, i.e. assuming that $\rho_n$ are given by (\ref{eq:Boltz}) for all indexes $n\geq 1$. This is the situation considered by Saitoh and Aoki\cite{Saitoh_hot} and, most recently, by Ryvkine \textit{et al}.\cite{Ryvkine} For the pure Boltzmann distribution, the plot of $T_{\textrm{e}}$ versus $\Omega$ is also shown in Fig.~\ref{fig:temp} (dashed line). As should be expected for the Boltzmann distribution, for large $\Omega$ the occupancy $\rho_2$ of the first excited state approaches the occupancy $\rho_1$ of the ground state much more slowly than the corresponding solutions of the rate equations (\ref{eq:popul}). As a result, the MW absorption does not saturate, and $T_{\textrm{e}}$ monotonically increases with the power, which is similar to the result obtained for a different heating mechanism in Ref.~7.
\begin{figure}[t] \centering \includegraphics[width=11cm]{fig6} \caption{\label{fig:res} Variation of $\sigma^{-1}$ (solid line) and $T_{\textrm{e}}$ (dashed line) with the transition frequency $\omega_{12}$ calculated for $T$=0.5~K, $n_\textrm{e}=1.9 \times 10^{7}$ cm$^{-2}$ and for three different values of $\Omega$: 14, 30 and 50~MHz. Experimental data taken at $T=0.5$~K and $P=1085$~$\mu$W are plotted by symbols as a function of the voltage $V_{\textrm{B}}$.} \end{figure} \subsubsection{Comparison with experiment} In order to make a quantitative comparison between the experimental curves shown in Fig.~\ref{fig:exp2} and the predictions of our model, we have calculated the variation of $\sigma^{-1}$ with the transition frequency $\omega_{12}$ at different values of the Rabi frequency, and compared the calculated curves with the experimental ones. First, we calculate the variation of $T_{\textrm{e}}$ with the frequency $\omega_{12}$ at fixed values of $\Omega$ and $\omega$ from equations (\ref{eq:bal}) and (\ref{eq:loss}). Once the electron temperature is known, the corresponding variation of the resistivity $\sigma^{-1}(\omega_{12})$ and the change of resistivity $\Delta \sigma^{-1}(\omega_{12})$ can be calculated using (\ref{eq:rate}) and the relation $\sigma^{-1}=m_{\textrm{e}} \nu/(n_{\textrm{e}}e^2)$. As an example, in Fig.~\ref{fig:res} we show the variation of $\sigma^{-1}$ and $T_{\textrm{e}}$ with $\omega_{12}-\omega$ calculated for $T$=0.5~K, $n_\textrm{e}=1.9 \times 10^{7}$ cm$^{-2}$ and several values of $\Omega=$14, 30 and 50~MHz. As expected, $\sigma^{-1}$ monotonically increases as $\omega_{12}$ approaches the MW frequency $\omega$, reaching its maximum value at the resonance. All curves calculated for $\Omega \gtrsim 5$~MHz and $T=0.5$~K show similar behaviour.
For comparison, the experimental data for the variation of $\sigma^{-1}$ with $\Delta V_{\textrm{B}}=V_{\textrm{B}}-{V_{\textrm{B}}}^0$, where ${V_{\textrm{B}}}^0$ is the bottom electrode voltage at which $\sigma^{-1}$ reaches its maximum, are also plotted in Fig.~\ref{fig:res}. \begin{figure}[t] \centering \includegraphics[width=11cm]{fig7} \caption{\label{fig:comp} Peak amplitude of the resistivity curves plotted in Fig.~\ref{fig:exp2}(a,b) vs the input power $P$ indicated on the bottom axis (squares for $T$=0.5~K and circles for $T$=0.6~K), and theoretical calculations of $\Delta \sigma^{-1}$ vs $\Omega^2$ indicated on the top axis (solid lines).} \end{figure} The comparison between the peak amplitudes of the experimental and the theoretical resistivity curves calculated for $T$=0.5 and 0.6~K is given in Fig.~\ref{fig:comp}. The squares and circles represent the peak amplitudes of the curves shown in Figs.~\ref{fig:exp2}(a) and \ref{fig:exp2}(b) respectively, plotted as a function of the input MW power $P$, while the calculated lines are plotted as a function of $\Omega^2$. Although the magnitude of the MW power inside the cell is not measured, we find it reasonable to assume that it is proportional to $P$. Because the proportionality coefficient between $\Omega$ and $\sqrt{P}$ is not known, the range of the horizontal axis for the theoretical line is adjusted to give the best fit between the experimental and the calculated plots for $T$=0.5~K. The vertical axis is the same for both the experimental and theoretical plots. From this procedure the proportionality coefficient between $\Omega$ and $\sqrt{P}$ is found to be about 1.26$\times 10^3$ MHz/$\sqrt{\textrm{W}}$. Then, $\Omega$ is estimated to be about 40~MHz at the highest input power for the data shown in Fig.~\ref{fig:exp2}(a). As seen in Fig.~\ref{fig:comp}, this estimate also gives a reasonable agreement between the experimentally observed and theoretically calculated resistivity change at $T$=0.6~K.
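The quoted calibration can be checked by direct arithmetic: applying $\Omega = c\sqrt{P}$ with $c\approx 1.26\times 10^3$~MHz/$\sqrt{\textrm{W}}$ to the highest input power $P=1085$~$\mu$W reproduces the $\Omega\approx 40$~MHz estimate.

```python
import math

# Arithmetic check of the fitted conversion Omega = c * sqrt(P).
C_MHZ_PER_SQRT_W = 1.26e3   # MHz / sqrt(W), from the fit in the text
P_WATT = 1085e-6            # highest input power, 1085 microwatt

omega_mhz = C_MHZ_PER_SQRT_W * math.sqrt(P_WATT)
# omega_mhz is about 41.5 MHz, consistent with the "about 40 MHz" estimate.
```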
From this result we conclude that at an input power of about 1000~$\mu$W the SSE are heated up to about 9~K and to about 5~K for $T$=0.5 and 0.6~K respectively. \section{Absorption line broadening} \subsection{Introduction} In addition to the electrical resistivity, MW absorption-induced heating is also expected to affect the line shape of the absorption signal. The power absorption $P_{\textrm{A}}$ depends on the fractional occupancies $\rho_1$ and $\rho_2$ of the two lowest subbands and in the general case is given by \begin{equation} P_{\textrm{A}}=\frac{0.5\hbar \omega \gamma \Omega^2}{(\omega_{12} -\omega )^2+\gamma^2} ( \rho_1 -\rho_2 ). \label{eq:abs} \end{equation} \noindent For cold electrons ($T_{\textrm{e}}\approx T$), the thermal population of higher excited subbands can be neglected. Taking into account the occupation of only the two lowest levels, i.e. $\rho_1+\rho_2=1$, it is straightforward to show from (\ref{eq:popula}) that equation (\ref{eq:abs}) reduces to~\cite{Collin} \begin{equation} P_{\textrm{A}}=\frac{0.5\hbar \omega \gamma \Omega^2}{(\omega_{12} -\omega )^2+\gamma^2+\gamma\tau\Omega^2}. \label{eq:absCOLD} \end{equation} \noindent Here $\tau=\nu_{21}^{-1}$ is the lifetime of an electron in the excited state. For cold electrons, $\gamma\tau$ is temperature independent and is about 3.08 at $E_{\perp}=95.33$~V/cm for the SSE above $^3$He. According to (\ref{eq:absCOLD}), the absorption line shape is Lorentzian with half-width at half-maximum given by $(\gamma^2+\gamma\tau\Omega^2)^{1/2}$. For $\Omega\ll \gamma/(\gamma\tau)^{1/2}$, the absorption increases linearly with the input MW power at all values of $\omega_{12}-\omega$, and the line shape is independent of the input power $P$. In the opposite limit of large $\Omega$, the line shape becomes power broadened, and the half-width increases proportionally to $\sqrt{P}$.
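The two limits of the cold-electron half-width can be made explicit with a one-line sketch. Here $\gamma\tau=3.08$ is the dimensionless product quoted above, while the particular values of $\gamma$ and $\Omega$ fed to the function below are arbitrary illustrations.

```python
import math

# Half-width at half-maximum of the cold-electron line (eq. absCOLD):
# HWHM = sqrt(gamma**2 + (gamma*tau) * Omega**2), with gamma*tau = 3.08
# as quoted for E_perp = 95.33 V/cm.  Frequencies in arbitrary units.
GAMMA_TAU = 3.08

def hwhm(gamma, omega_rabi):
    return math.sqrt(gamma ** 2 + GAMMA_TAU * omega_rabi ** 2)

# Weak drive: the width is just gamma.  Strong drive: the width grows as
# sqrt(GAMMA_TAU) * Omega, i.e. proportionally to sqrt(P).
```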
The reason for the power broadening is that at high MW excitation rates $\rho_2\rightarrow \rho_1$, as illustrated in Fig.~\ref{fig:rho} where we plot the fractional occupancies as a function of $\Omega$ (solid lines) calculated for $T=0.6$~K. In the limit of large $\Omega$, $(\rho_1-\rho_2)$ becomes proportional to $\Omega^{-2}$. As a result, the dependence on $\Omega$ cancels out from equation (\ref{eq:abs}) and $P_{\textrm{A}}$ saturates. Since the absorbed power increases more slowly near the resonance than far from the resonance, this leads to the broadening of the line. \begin{figure}[t] \centering \includegraphics[width=11cm]{fig8} \caption{\label{fig:rho} Fractional occupancies $\rho_1$ and $\rho_2$ versus $\Omega$ calculated at $T=0.6$~K for cold electrons (solid lines) and for hot electrons (dashed lines) using the hot electron theory developed in Sec.~5.1.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=11cm]{fig9} \caption{\label{fig:bleach} Power absorption versus the transition frequency $\omega_{12}$ calculated for $T$=0.6~K and for three different values of $\Omega$=15, 19 and 30~MHz. Solid lines are calculated ignoring heating and using (\ref{eq:absCOLD}), while dashed lines are for hot electrons using (\ref{eq:abs}).} \end{figure} However, the situation becomes quite different if the electron heating is taken into account. In this case, $(\rho_1-\rho_2)$ starts decreasing with $\Omega$ due to the thermal excitation of the electrons from the ground state to the excited levels long before any noticeable MW excitation takes place. This is clearly seen in Fig.~\ref{fig:rho} where we plot the fractional occupancies for hot electrons (dashed lines) calculated using the hot electron theory developed in Sec.~5.1. As a result, the absorption line is expected to broaden long before the saturation condition is reached.
This effect is illustrated in Fig.~\ref{fig:bleach} where we plot the absorption lines calculated for $\Omega$=15, 19 and 30~MHz and $T$=0.6~K using the hot electron theory described in the previous section. The solid lines are calculated for cold electrons using (\ref{eq:absCOLD}), while the dashed lines are for hot electrons using the more general expression (\ref{eq:abs}). As seen in Fig.~\ref{fig:bleach}, at small values of $\Omega$ the heating is negligible, and the absorption is well described by (\ref{eq:absCOLD}). At higher values of $\Omega$, however, the absorption lines calculated for cold and hot electrons differ significantly in the vicinity of the resonance. The difference comes from the fact that near the resonance, where the heating is most prominent, the reduction of $\rho_1$ and the enhancement of $\rho_2$ due to the thermal excitation lead to the decrease of $P_{\textrm{A}}(\omega_{12})$ in accordance with (\ref{eq:abs}). Far from the resonance, the heating is weak and $P_{\textrm{A}}(\omega_{12})$ is close to that given by (\ref{eq:absCOLD}). This leads to the additional broadening of the absorption line. We note that at $T$=0.6~K, the linewidth $\gamma$ calculated using Ando's theory~\cite{Ando} is about 630~MHz (see Fig.~\ref{fig:gamma}), therefore the maximum value of $\Omega\cong 40$~MHz found in the experiment is too small to cause absorption saturation. \begin{figure}[t] \centering \includegraphics[width=11cm]{fig10} \caption{\label{fig:deriv} Absorption derivative signals taken at $n$=10$^8$~cm$^{-2}$, $P$=5~$\mu$W and $T$=0.55, 0.65 and 0.73~K (solid lines). The dotted lines are the fits obtained as described in the text.} \end{figure} \subsection{Experiment} In order to see the effect of heating on the absorption linewidth, we conducted an experiment in which we measured the absorption linewidth as a function of the input power.
The experimental procedure for direct absorption measurements was similar to that described in Ref.~4, although the experiment was done in a new experimental cell designed for magneto-resistivity measurements. In the new cell, SSE were accumulated on the liquid $^3$He surface placed halfway between two parallel round conducting electrodes separated by 2.6~mm. As in the previous setup, the top plate consisted of a Corbino disk used for magneto-resistivity measurements. In addition, circular conducting guard rings were placed around each electrode. MW power at a fixed frequency of 130~GHz was passed through the cell and was detected upon exiting by means of an InSb bolometer mounted at the still of the dilution refrigerator. Before the measurements, the parallelism between the helium surface and the electrodes was adjusted to within about 1~mrad by tilting the cryostat and obtaining the Corbino signal which corresponded to the maximum of the magneto-resistivity. To tune the SSE into resonance with the MW field, the potential difference $V_{\textrm{B-T}}$ between the bottom and the top electrodes was swept in such a way as to keep the potential at the helium surface constant and equal to +10~V. In addition, a constant negative voltage of $-30$~V was applied to the guard rings to make a sharper density profile at the edge of the electron sheet. The absorption line shape was obtained by recording the bolometer signal as the voltage was swept through the resonance. As in Ref.~4, the signal proportional to $dP_{\textrm{A}}/dE_{\perp}$, the derivative of the power absorption with respect to the pressing electric field, was obtained by applying a small modulation to $V_{\textrm{B-T}}$. Examples of experimental traces obtained using low MW input power and taken at $T$=0.55, 0.65 and 0.73~K are shown in Fig.~\ref{fig:deriv}.
\subsection{Results and discussion} To see the effect of heating on the absorption linewidth and to make a comparison with theory, the experimental traces were recorded at different values of the input MW power $P$, and the linewidth $\gamma_{\textrm{fit}}$ was obtained by fitting the traces to an analytical formula using $\gamma_{\textrm{fit}}$ as an adjustable parameter. To account for the line asymmetry and to obtain better fitting in the whole temperature and input power ranges, we employed fitting with an asymmetric Fano-like line given by~\cite{Isshiki} \begin{equation} F(\omega_{12})=A\frac{(\omega_{12} - \omega -B\gamma_{\textrm{fit}})^2}{(\omega_{12} -\omega)^2+\gamma_{\textrm{fit}} ^2}, \label{eq:Fano} \end{equation} \noindent with $A$, $B$, $\omega$ and $\gamma_{\textrm{fit}}$ being fitting parameters. Here $A$ is the amplitude of the resonance, $B$ is a parameter which produces the asymmetry, and $\omega$ is the MW frequency, which was allowed to vary to account for the frequency shift discussed in Sec.~3.2. Examples of such fitting are shown as dotted lines in Fig.~\ref{fig:deriv}. At high temperatures and low values of $P$, the experimental traces could be fitted well with a single asymmetric line given by equation (\ref{eq:Fano}). However, at high power the line shape broadened in such a way that it was no longer possible to fit it satisfactorily with a single line. In this case, the experimental data were fitted with the sum of two lines (\ref{eq:Fano}) sharing the same parameter $\gamma_{\textrm{fit}}$. The power dependence of $\gamma_{\textrm{fit}}$ extracted from the fitting of the experimental traces taken at different values of $T$ is plotted as symbols in Fig.~\ref{fig:width}. The conversion factor to frequency of 2.03~GHz/V was found experimentally by changing the frequency of the microwaves and observing the shift of the resonance line.
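The role of the asymmetry parameter $B$ in the Fano-like fitting function (\ref{eq:Fano}) can be seen by evaluating it directly; the parameter values below are hypothetical, chosen only to illustrate how $B$ skews the line.

```python
# Asymmetric Fano-like line of eq. (Fano).  A sets the amplitude, B the
# asymmetry, gamma_fit the width; all values below are hypothetical.
def fano(omega12, omega, A, B, gamma_fit):
    return A * (omega12 - omega - B * gamma_fit) ** 2 / (
        (omega12 - omega) ** 2 + gamma_fit ** 2)

# B = 0 gives a line symmetric about omega; B != 0 skews it.
sym_l  = fano(-1.0, 0.0, 1.0, 0.0, 1.0)   # left of resonance, symmetric
sym_r  = fano(+1.0, 0.0, 1.0, 0.0, 1.0)   # right of resonance, symmetric
asym_l = fano(-1.0, 0.0, 1.0, 0.5, 1.0)   # left of resonance, skewed
asym_r = fano(+1.0, 0.0, 1.0, 0.5, 1.0)   # right of resonance, skewed
```

In a least-squares fit $A$, $B$, $\omega$ and $\gamma_{\textrm{fit}}$ would all be free parameters, as described in the text.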
To compare with the hot electron theory, in the same figure we plot the half-width of the power absorption curves calculated using (\ref{eq:abs}). The half-width is shown by lines and plotted as a function of $\Omega^2$. As in the comparison between the experimental and theoretical resistivity change described earlier in this paper, the horizontal axis for the theoretical curves was adjusted to give the best fit between the experimental results and the theoretical curves. Both methods give comparable estimates for $\Omega$. The values of $\Omega$ for the microwaves in the cell can also be estimated from the magnitude of the power absorption signal at the InSb bolometer. These values are also consistent with the above estimate. The absolute value of the experimental linewidth turned out to be only about 30$\%$ higher than the theoretically calculated one. This deviation from the theory is significantly smaller than previously reported.~\cite{Isshiki} A number of factors can account for this discrepancy. Following Ando~\cite{Ando}, in previous calculations of the theoretical linewidth $\gamma$ we used an approximate variational method to determine the eigenfunctions of the ground and the first excited states. This gave a linewidth about 15$\%$ lower than that obtained in the present calculations, which use numerical solutions of the Schr\"odinger equation for the electron eigenfunctions. Also, the conversion factor, which is used to express the experimental linewidth in frequency units, was previously obtained from calculations and turned out to be about 10$\%$ lower than the experimentally determined value used in the present work. The rest can probably be accounted for by the better fitting procedure and by the improved alignment of the experimental setup.
\begin{figure}[t] \centering \includegraphics[width=11cm]{fig11} \caption{\label{fig:width} Experimental linewidth $\gamma_{\textrm{fit}}$ plotted as a function of the input MW power $P$ and the calculated half-width plotted as a function of $\Omega^2$ for $T$=0.5~K ($\bullet$ and dash-double-dotted line), 0.55~K ($\circ$ and dash-dotted line), 0.6~K ($\blacksquare$ and dashed line), 0.65~K ($\square$ and short-dashed line), 0.7~K ($\blacktriangle$ and short-dash-dotted line) and 0.73~K ($\triangledown$ and dotted line).} \end{figure} In general, we find rather good agreement between the experimentally observed and theoretically expected behaviours. At low temperatures ($T\leq 0.5$~K), the heating becomes significant even at the lowest power, and the linewidth rises quickly with $P$. At intermediate temperatures ($T=0.5-0.65$~K), the heating is not strong at small values of $P$, and the linewidth has a weak power dependence. As the power increases, the heating becomes appreciable, and the line broadens. At high temperatures ($T\geq 0.7$~K), the heating is negligible for all values of $P$, and the linewidth is almost power independent. The deviation between theory and experiment was observed only for the data taken at $T$=0.5~K and $P\gtrsim500~\mu$W. However, we found that at $T\leq 0.5$~K and high MW power a different phenomenon appeared. Under these conditions, the line shape became more asymmetric, and the integrated absorption line showed an offset. In addition, at $T\leq 0.3$~K a quadrature component of the modulated absorption signal was recorded. A very similar effect was also observed by Glasson \textit{et~al}. in their MW absorption experiment~\cite{Glasson} with SSE over liquid $^4$He and was attributed to absorption hysteresis due to the Coulomb interaction between electrons. It is not completely clear at the moment what role electron heating plays in the hysteresis, and further discussion of this effect is beyond the scope of the present report.
\section{Conclusions} In conclusion, we observed a resonant increase in the electron resistivity upon irradiation of the SSE with resonant MW radiation. This effect is caused by the heating of the SSE by the absorbed MW power and is due to the increase of the inter-subband scattering rate of the hot electrons. It is also demonstrated, both theoretically and experimentally, that the electron heating results in the broadening of the absorption line long before absorption saturation can be reached. Our results indicate that under the typical conditions of a MW absorption experiment, the electron heating cannot be ignored, and a proper account of this effect is necessary for an adequate analysis of the experimental results. It would be interesting to observe the resistivity change due to electron heating at low $T$, where the momentum relaxation of the SSE is limited by the interaction with ripplons. Unlike in the vapor-atom scattering regime, in this case the inter-subband scattering rates decrease rapidly with increasing subband index. In addition, at high pressing fields the intra-subband relaxation rates also decrease with increasing $T_\textrm{e}$. Therefore, $\sigma^{-1}$ is expected to decrease rapidly with $T_\textrm{e}$. A new experiment to investigate this effect is currently in progress. In addition, a model which adequately takes into account the electron--ripplon interaction is being developed. \section*{Acknowledgement} The work is partly supported by the Grants-in-Aid for Scientific Research from Monka-sho and JSPS. One of the authors (D.K.) thanks JSPS for a postdoctoral fellowship. We appreciate valuable discussions with M.~Dykman.
\section{Introduction} Given a compact K\"ahler manifold $X$, foundational works of Simpson and Corlette, \cite{Si1}, \cite{Co} establish a natural equivalence between the category of local systems over $X$ and the category of certain analytical objects called {\it Higgs bundles} that consist of a holomorphic vector bundle $V$ over $X$ together with a holomorphic section $ \theta\, \in\, H^0(X,\, \text{End}(V)\otimes\Omega^1_X)$ such that the section $\theta\bigwedge\theta \, \in\, H^0(X,\, \text{End}(V)\otimes\Omega^2_X)$ vanishes identically (see also \cite{Si2}). The section $\theta$ is called a {\it Higgs field} on $V$. While local systems are topological objects on $X$, which correspond to flat vector bundles (or, equivalently, to equivalence classes of representations of the fundamental group of $X$), the Higgs bundles on $X$ are holomorphic objects. There are notions of stability and polystability for Higgs bundles which are analogous to the corresponding notions for holomorphic vector bundles, while restricting the class of subsheaves to only those that are invariant by the Higgs field (see Section \ref{s2}). This stability condition generalizes the stability of holomorphic vector bundles introduced by Mumford in the context of geometric invariant theory which he developed. The above mentioned equivalence of categories exhibits a natural correspondence between the completely reducible local systems on $X$ and the polystable Higgs bundles on $X$ with vanishing rational Chern classes. It may be mentioned that for polystable Higgs bundles, the vanishing of the first two Chern classes implies the vanishing of all Chern classes. This correspondence is constructed via a Hermitian metric on $V$ that satisfies the Yang--Mills--Higgs equation for a polystable $(V,\, \theta)$, \cite{Si1}, and a harmonic metric on a vector bundle on $X$ equipped with a completely reducible flat connection \cite{Co}. 
The construction of these canonical metrics can be seen as a vast generalization of the Hodge theorem on the existence of harmonic forms, and for this reason the above correspondence is also called a ``nonabelian Hodge theorem''. The aim here is to extend this Corlette--Simpson (nonabelian Hodge) correspondence to the more general context of compact Fujiki class $\mathcal C$ manifolds; see Theorem \ref{thm2}. Recall that a manifold $M$ is in Fujiki class $\mathcal C$ if it is the image of a compact K\"ahler manifold under a holomorphic map \cite{Fu2}, or, equivalently, $M$ is bimeromorphic to a compact K\"ahler manifold \cite{Va} (see Section \ref{s2.2}). The proof of Theorem \ref{thm2} uses a well--known functoriality property of the Corlette--Simpson correspondence (see Theorem \ref{thm1}) and a descent result (see Proposition \ref{propositionpullback}) which is inspired by Theorem 1.2 in \cite{GKPT}. \section{Representations, Higgs bundles and Fujiki class $\mathcal C$ manifolds} \subsection{Nonabelian Hodge theory}\label{s2} Let $X$ be a compact connected complex manifold. Fix a base point $x_0\, \in\, X$ to define the fundamental group $\pi_1(X,\, x_0)$ of $X$. Take a positive integer $r$, and consider any homomorphism $$ \rho\, :\, \pi_1(X,\, x_0)\, \longrightarrow\, \text{GL}(r,{\mathbb C})\, . $$ The homomorphism $\rho$ is called \textit{irreducible} if the standard action of $\rho(\pi_1(X,\, x_0))$ on ${\mathbb C}^r$ does not preserve any nonzero proper subspace of ${\mathbb C}^r$. The homomorphism $\rho$ is called \textit{completely reducible} if it is a direct sum of irreducible representations. Two homomorphisms $\rho_1,\, \rho_2\, :\, \pi_1(X,\, x_0)\, \longrightarrow\, \text{GL}(r,{\mathbb C})$ are called \textit{equivalent} if there is an element $g\, \in\, \text{GL}(r,{\mathbb C})$ such that $$ \rho_1(\gamma)\,=\, g^{-1} \rho_2(\gamma)g $$ for all $\gamma\, \in\, \pi_1(X,\, x_0)$. Clearly, this equivalence relation preserves irreducibility and complete reducibility.
The space of equivalence classes of completely reducible homomorphisms from $\pi_1(X,\, x_0)$ to $\text{GL}(r,{\mathbb C})$ has the structure of an affine scheme defined over $\mathbb C$, which can be seen as follows. Since $X$ is compact, $\pi_1(X,\, x_0)$ is a finitely presented group; $\text{GL}(r,{\mathbb C})$ being an affine algebraic group, the space of all homomorphisms $\text{Hom}(\pi_1(X,\, x_0),\, \text{GL}(r,{\mathbb C}))$ is a complex affine scheme. The adjoint action of $\text{GL}(r,{\mathbb C})$ on itself produces an action of $\text{GL}(r,{\mathbb C})$ on $\text{Hom}(\pi_1(X,\, x_0),\, \text{GL}(r,{\mathbb C}))$. The geometric invariant theoretic quotient $\text{Hom}(\pi_1(X,\, x_0),\, \text{GL}(r,{\mathbb C}))/\!\!/\text{GL}(r,{\mathbb C})$ is the moduli space of equivalence classes of completely reducible homomorphisms from $\pi_1(X,\, x_0)$ to $\text{GL}(r,{\mathbb C})$; see \cite{Si3}, \cite{Si4}. Let ${\mathcal R}(X, r)$ denote this moduli space of equivalence classes of completely reducible homomorphisms from $\pi_1(X,\, x_0)$ to $\text{GL}(r,{\mathbb C})$; it is known as the Betti moduli space. A homomorphism $\rho\, :\, \pi_1(X,\, x_0)\, \longrightarrow\, \text{GL}(r,{\mathbb C})$ produces a holomorphic vector bundle $E$ on $X$ of rank $r$ equipped with a flat holomorphic connection, together with a trivialization of the fiber $E_{x_0}$. Equivalence classes of such homomorphisms correspond to holomorphic vector bundles of rank $r$ equipped with a flat holomorphic connection; this is an example of the Riemann--Hilbert correspondence. A connection $\nabla$ on a vector bundle $E$ is called irreducible if there is no subbundle $0\, \not=\, F\, \subsetneq E$ such that $\nabla$ preserves $F$. A connection $\nabla$ on a vector bundle $E$ is called completely reducible if $$ (E,\,\nabla)\,=\, \bigoplus_{i=1}^N(E_i,\,\nabla^i)\, , $$ where each $\nabla^i$ is an irreducible connection on $E_i$.
We note that irreducible (respectively, completely reducible) flat connections of rank $r$ on $X$ correspond to equivalence classes of irreducible (respectively, completely reducible) homomorphisms from $\pi_1(X,\, x_0)$ to $\text{GL}(r,{\mathbb C})$. A \textit{Higgs field} on a holomorphic vector bundle $V$ on $X$ is a holomorphic section $$ \theta\, \in\, H^0(X,\, \text{End}(V)\otimes\Omega^1_X) $$ such that the section $\theta\bigwedge\theta \, \in\, H^0(X,\, \text{End}(V)\otimes\Omega^2_X)$ vanishes identically \cite{Si1}, \cite{Si2}. If $(z_1, \,\ldots,\, z_d)$ are local holomorphic coordinates on $X$ with respect to which the local expression of the section $\theta$ is $\sum_i \theta_i\otimes dz_i$, with $\theta_i$ being locally defined holomorphic endomorphisms of $V$, the above integrability condition $\theta\bigwedge\theta \,=\,0$ is equivalent to the condition that $[\theta_i,\, \theta_j]\,=\, 0$ for all $i,\, j$. A \textit{Higgs bundle} on $X$ is a holomorphic vector bundle on $X$ together with a Higgs field on it. A homomorphism of Higgs bundles $(V_1,\, \theta_1)\, \longrightarrow\, (V_2,\, \theta_2)$ is a holomorphic homomorphism $$ \Psi\, :\, V_1\, \longrightarrow\, V_2 $$ such that $\theta_2\circ\Psi\,=\, (\Psi\otimes \text{Id}_{\Omega^1_X})\circ\theta_1$ as homomorphisms from $V_1$ to $V_2\otimes\Omega^1_X$. Assume now that $X$ is K\"ahler, and fix a K\"ahler form $\omega$ on $X$. The \textit{degree} of a torsionfree coherent analytic sheaf $F$ on $X$ is defined to be $$ \text{degree}(F)\, :=\, \int_X c_1(\det F)\wedge \omega^{d-1}\, \in\, \mathbb R\, , $$ where $d\,=\, \dim_{\mathbb C} X$; see \cite[Ch.~V, \S~6]{Ko} (also Definition 1.34 in \cite{Br}) for the determinant line bundle $\det F$. The number $$ \mu(F)\, :=\, \frac{\text{degree}(F)}{\text{rank}(F)}\, \in\, \mathbb R $$ is called the \textit{slope} of $F$.
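For example, when $d\,=\,1$, that is, when $X$ is a compact Riemann surface, the factor $\omega^{d-1}$ is the constant function $1$, and one recovers the classical degree of a vector bundle on a curve:
$$
\text{degree}(F)\,=\, \int_X c_1(\det F)\, \in\, \mathbb Z\, .
$$
In this case the degree is an integer and does not depend on the K\"ahler form $\omega$.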
A Higgs bundle $(V,\, \theta)$ on $X$ is called \textit{stable} (respectively, \textit{semistable}) if for every coherent analytic subsheaf $F\, \subset\, V$ with $0\, <\, \text{rank}(F)\,<\, \text{rank}(V)$ and $\theta(F)\, \subset\, F\otimes\Omega^1_X$, the inequality $$ \mu(F)\, <\, \mu(V) \ \ {\rm (respectively,}\ \mu(F)\, \leq\, \mu(V){\rm )} $$ holds. A Higgs bundle $(V,\, \theta)$ is called \textit{polystable} if it is semistable and a direct sum of stable Higgs bundles. To verify the stability (or semistability) condition it suffices to consider coherent analytic subsheaves $F\, \subsetneq\, V$ such that the quotient $V/F$ is torsionfree \cite[Ch.~V, Proposition 7.6]{Ko}. These subsheaves are reflexive (see \cite[Ch.~V, Proposition 5.22]{Ko}). \begin{theorem}[{\cite{Si1}, \cite{Co}, \cite{Si2}}]\label{thm0} There is a natural equivalence of categories between the following two: \begin{enumerate} \item The objects are completely reducible flat complex connections on $X$, and morphisms are connection preserving homomorphisms. \item Objects are polystable Higgs bundles $(V,\, \theta)$ on $X$ such that $c_1(V)\,=\, 0\, =\, c_2(V)$, where $c_i$ denotes the $i$--th Chern class with coefficients in $\mathbb Q$; the morphisms are homomorphisms of Higgs bundles. \end{enumerate} \end{theorem} In \cite{Si2}, the conditions on the Chern classes of the polystable Higgs bundle $(V,\, \theta)$ are $\text{degree}(V)\,=\,0\, =\, (ch_2(V)\cup [\omega^{d-2}])\cap [X]$, instead of the above conditions $c_1(V)\,=\, 0\, =\, c_2(V)$. However, since the existence of a flat connection on a complex vector bundle implies that all its rational Chern classes vanish, these two sets of conditions are equivalent in the given context. \begin{remark}\label{rem1} Recall that the notion of (poly)stability depends on the choice of the K\"ahler class of $\omega$. When the first two Chern classes vanish, these notions are actually independent of the class of $\omega$.
In fact, given the equivalence class of a completely reducible homomorphism $\rho\, :\, \pi_1(X,\, x_0)\, \longrightarrow\, \text{GL}(r,{\mathbb C})$, the Higgs bundle $(V,\, \theta)$ of rank $r$ associated to $\rho$ by the equivalence of categories in Theorem \ref{thm0} is in fact independent of the choice of $\omega$. Indeed, the local system $\rho$ is obtained from $(V,\, \theta)$ by constructing a Hermitian metric $h$ on $V$ that satisfies the Yang--Mills--Higgs equation $K(\nabla^h) + \lbrack \theta ,\, \theta_h^* \rbrack\,=\,0$, where $K(\nabla^h)$ is the curvature of the Chern connection $\nabla^h$ on $V$ corresponding to $h$, and $\theta_h^*$ is the adjoint of $\theta$ with respect to $h$. Since this Yang--Mills--Higgs equation does not depend on the K\"ahler form $\omega$, the flat connection corresponding to $(V,\, \theta)$ is independent of $\omega$. \end{remark} We recall a basic property of the equivalence of categories in Theorem \ref{thm0}. \begin{theorem}[{\cite{Si2}}]\label{thm1} Let $X$ and $X_1$ be compact connected K\"ahler manifolds, and let $$\beta\, :\, X_1\, \longrightarrow\, X$$ be any holomorphic map. Let $(E,\, \nabla)$ be a completely reducible flat connection on $X$, and let $(V,\, \theta)$ be the corresponding polystable Higgs bundle on $X$ with $c_1(V)\,=\, 0\, =\, c_2(V)$. Then the pulled back Higgs bundle $(\beta^*V,\, \beta^*\theta)$ is polystable, and, moreover, the flat connection corresponding to it coincides with $(\beta^*E,\, \beta^*\nabla)$. \end{theorem} To explain Theorem \ref{thm1}, take any completely reducible homomorphism $\rho\, :\, \pi_1(X,\, x_0)\, \longrightarrow\,\text{GL}(r, {\mathbb C})$. Let $\alpha\, :\, \widetilde{X}\, \longrightarrow\, \text{GL}(r, {\mathbb C})/{\rm U}(r)$ be the $\rho$-equivariant harmonic map defined on the universal cover of the K\"ahler manifold $(X,\, x_0)$.
Then $\alpha\circ\widetilde{\beta}$ is a $\beta^*\rho$-equivariant harmonic map, where $\widetilde \beta$ is the lift of the map $\beta$ in Theorem \ref{thm1} to the universal cover of $X_1$. Also, as noted in Remark \ref{rem1}, the Yang--Mills--Higgs equation for $(V,\,\theta)$ does not depend on the K\"ahler form $\omega$. Theorem \ref{thm1} follows from these facts. Note that Remark \ref{rem1} can be seen as a particular case of Theorem \ref{thm1} by setting $\beta$ to be the identity map of $X$ equipped with two different K\"ahler structures. A useful particular case of Theorem \ref{thm1} is the following: \begin{corollary}\label{cor1} Let the map $\beta$ in Theorem \ref{thm1} be such that the corresponding homomorphism of fundamental groups $$ \beta_*\, :\, \pi_1(X_1,\, x_1)\, \longrightarrow\, \pi_1(X,\, \beta(x_1)) $$ is trivial. For any polystable Higgs bundle $(V,\, \theta)$ on $X$ of rank $r$ with $c_1(V)\,=\, 0\, =\, c_2(V)$, \begin{enumerate} \item $\beta^*V\,=\, {\mathcal O}^{\oplus r}_{X_1}$, and \item $\beta^*\theta\,=\, 0$. \end{enumerate} \end{corollary} The equivalence of categories in Theorem \ref{thm0}, between the equivalence classes of completely reducible flat connections on $X$ and the polystable Higgs bundles $(V,\, \theta)$ on $X$ such that $c_1(V)\,=\, 0\, =\, c_2(V)$, extends to the context of principal $G$--bundles, where $G$ is any complex affine algebraic group \cite{BG}. Let ${\mathcal H}(X,r)$ denote the moduli space of polystable Higgs bundles $(V,\, \theta)$ of rank $r$ on $X$ such that $c_1(V)\,=\, 0\, =\, c_2(V)$, where $c_i$ denotes the rational $i$--th Chern class. It is canonically homeomorphic to the earlier defined moduli space ${\mathcal R}(X, r)$ (of equivalence classes of completely reducible homomorphisms from $\pi_1(X,\, x_0)$ to $\text{GL}(r,{\mathbb C})$). However, the complex structures of these two moduli spaces are different in general. Now assume that $X$ is a smooth projective variety defined over $\mathbb C$.
Simpson proved the following two results in \cite{Si2}. \begin{theorem}[{\cite{Si2}}]\label{thms} There is an equivalence of categories between the following two: \begin{enumerate} \item The objects are flat complex connections on $X$, and morphisms are connection preserving homomorphisms. \item Objects are semistable Higgs bundles $(V,\, \theta)$ on $X$ such that $c_1(V) \,=\, 0\, =\, c_2(V)$, where $c_i$ denotes the $i$--th Chern class with coefficients in $\mathbb Q$; the morphisms are homomorphisms of Higgs bundles. \end{enumerate} \end{theorem} The equivalence of categories in Theorem \ref{thms} extends the one in Theorem \ref{thm0}, at the cost of imposing the condition that $X$ is complex projective. \begin{proposition}[{\cite{Si2}}]\label{props} Take $X$ and $X_1$ in Theorem \ref{thm1} and Corollary \ref{cor1} to be smooth complex projective varieties. Then Theorem \ref{thm1} and Corollary \ref{cor1} remain valid if polystability is replaced by semistability. \end{proposition} \subsection{Fujiki class $\mathcal C$ manifolds}\label{s2.2} A compact complex manifold is said to be in {\it the Fujiki class $\mathcal C$} if it is the image of a compact K\"ahler space under a holomorphic map \cite{Fu2}. A result of Varouchas, \cite[Section\,IV.3]{Va}, asserts that a compact complex manifold $M$ belongs to Fujiki class $\mathcal C$ if and only if there is a holomorphic map $$ \phi\, :\, X\, \longrightarrow\, M $$ such that \begin{itemize} \item $X$ is a compact K\"ahler manifold, and \item $\phi$ is bimeromorphic. \end{itemize} In other words, $M$ lies in class $\mathcal C$ if and only if it admits a compact K\"ahler modification.
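For orientation, these classes of compact complex manifolds sit as follows; Moishezon manifolds (which appear in the next sections) are bimeromorphic to smooth projective varieties and hence lie in class $\mathcal C$, while compact K\"ahler manifolds lie in class $\mathcal C$ by definition:
$$
\{\text{projective}\}\, \subset\, \{\text{Moishezon}\}\, \subset\, \text{class}\ {\mathcal C}\, , \qquad \{\text{compact K\"ahler}\}\, \subset\, \text{class}\ {\mathcal C}\, .
$$
All of these inclusions are strict.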
\section{A descent result for vector bundles} A holomorphic line bundle $L$ on a compact complex Hermitian manifold $(Y,\, \omega_Y)$ is called numerically effective if for every $\epsilon \, >\, 0$, there is a Hermitian structure $h_\epsilon$ on $L$ such that $\text{Curv}(L, h_\epsilon) \, \geq\, -\epsilon \omega_Y$, where $\text{Curv}(L, h_\epsilon)$ is the curvature of the Chern connection on $L$ for $h_\epsilon$; since $Y$ is compact, this condition does not depend on the choice of the Hermitian metric $\omega_Y$ \cite[Definition 1.2]{DPS}. A holomorphic vector bundle $\mathcal E$ on $Y$ is called numerically effective if the tautological line bundle ${\mathcal O}_{{\mathbb P}({\mathcal E})}(1)$ on ${\mathbb P}({\mathcal E})$ is numerically effective \cite[p.~305, Definition 1.9]{DPS}. A holomorphic vector bundle ${\mathcal E}$ on $Y$ is called numerically flat if both ${\mathcal E}$ and ${\mathcal E}^*$ are numerically effective \cite[p.~311, Definition 1.17]{DPS}. \begin{proposition}\label{propositionpullback} Let $\phi\, :\, X\,\longrightarrow\, M$ be a proper bimeromorphic morphism between complex manifolds, and let $V\, \longrightarrow \,X$ be a holomorphic vector bundle such that for every $x\, \in\, M$, the restriction $V\vert_{\phi^{-1}(x)}$ is a numerically flat vector bundle. Then there exists a holomorphic vector bundle $W$ on $M$ such that $V \,\simeq\, \phi^* W$. \end{proposition} \begin{proof} Set $r\,:=\,\text{rank}(V)$. We shall prove the proposition in three steps. {\em Step 1. Assume that $\phi$ is the blow-up of $M$ along a smooth center $Z$.}\, This is well-known; for the convenience of the reader we give an argument in the spirit of the method of \cite{GKPT} (Sections 4 and 5). Let $E \,\subset\, X$ be the exceptional divisor of the blow-up. The restriction $V|_E$ is a vector bundle such that the restriction to every fiber of $$\phi|_E\,:\, E\,\longrightarrow \,Z$$ is numerically flat.
As the fibers are projective spaces, it can be shown that the vector bundle $V|_E$ is trivial on the fibers of $\phi|_E$. Indeed, a numerically flat bundle admits a filtration by holomorphic subbundles such that each successive quotient admits a unitary flat connection \cite[p.~311, Theorem 1.18]{DPS}. Since a projective space is simply connected, each successive quotient is actually trivial. On the other hand, an extension of a trivial bundle on ${\mathbb C}{\mathbb P}^k$ by a trivial bundle is also trivial, because $H^1({\mathbb C}{\mathbb P}^k,\, {\mathcal O}_{{\mathbb C}{\mathbb P}^k})\,=\, 0$. Since $\phi|_E$ is locally trivial, we see that there exists a vector bundle $W_Z$ on $Z$ such that $V|_E \,\simeq\, (\phi|_E)^* W_Z$. Consider now the projectivised vector bundle $\pi\, :\, \ensuremath{\mathbb{P}}(V)\,\longrightarrow\, X$. Then $$\pi^* E \,\simeq \,\ensuremath{\mathbb{P}}(V|_E) \,\simeq\, \ensuremath{\mathbb{P}}((\phi|_E)^* W_Z) \,\simeq\, (\phi|_E)^*\ensuremath{\mathbb{P}}(W_Z)$$ is a divisor that admits a fibration onto $\ensuremath{\mathbb{P}}(W_Z)$. In fact, for any point $z \,\in\, Z$, we have $$ \ensuremath{\mathbb{P}}(V|_{\phi^{-1}(z)}) \,\simeq\, \phi^{-1}(z) \times \ensuremath{\mathbb{P}}(W_{Z, z}) $$ and the fibration is given by projection onto the second factor. Since the restriction of the divisor $E$ to $\phi^{-1}(z)$ is anti-ample, this also holds for the restriction of $\pi^* E$ to the fibers of $\pi^* E\,\longrightarrow\, \ensuremath{\mathbb{P}}(W_Z)$. Now we can apply a theorem of Fujiki, \cite[p.~495, Theorem 2]{Fu1}, to see that there exists a variety $T$ and a bimeromorphic morphism $\widetilde{\phi}\, :\, \ensuremath{\mathbb{P}}(V)\, \longrightarrow\, T$ such that $\widetilde{\phi}|_{\pi^* E}$ is the fibration $\pi^* E \,\longrightarrow\, \ensuremath{\mathbb{P}}(W_Z)$ and the restriction of it to $\ensuremath{\mathbb{P}}(V) \setminus \pi^* E$ is an isomorphism.
By construction the variety $T$ admits a morphism onto $M$ such that all the fibers are isomorphic to $\ensuremath{\mathbb{P}}^{r-1}$; in particular, $T$ is smooth and $\ensuremath{\mathbb{P}}(V)$ is the blowup of $T$ along $\ensuremath{\mathbb{P}}(W_Z)$. The push-forward of $c_1(\mathcal O_{\ensuremath{\mathbb{P}}(V)}(1))$ onto $T$ defines a Cartier divisor on $T$ such that the restriction to the fibers of $T \,\longrightarrow\, M$ is the hyperplane class. Thus the corresponding direct image sheaf defines a vector bundle $W \,\longrightarrow\, M$ satisfying the condition that $V \,\simeq\, \phi^* W$. {\em Step 2. Assume that $\phi$ is the composition of smooth blowups.}\, Set $X_0\,:=\,X$ and $X_k\,:=\,M$, and for $i \,\in \,\{1,\, \cdots, \,k\}$, let $\nu_i\, :\, X_{i-1}\,\longrightarrow\, X_i$ be blowups such that $$ \phi \,= \,\nu_k \circ \cdots \circ \nu_1. $$ Since every $\nu_1$-fiber is contained in a $\phi$-fiber, it is evident that the restriction of $V$ to every $\nu_1$-fiber is numerically flat. Thus, by Step 1, there exists a vector bundle $V_1$ on $X_1$ such that $V \,\simeq\, \nu_1^* V_1$. We shall now proceed by induction on $i \, \in\, \{1,\, \cdots,\, k\}$ and assume that we have found a vector bundle $V_i \,\longrightarrow \,X_i$ such that its pull-back to $X$ is isomorphic to $V$. We have to check that $V_i$ satisfies the numerical flatness condition with respect to the morphism $\nu_{i+1}$: let $G \,\subset\, X_i$ be any $\nu_{i+1}$-fiber and let $Z \,\subset\, (\nu_i \circ \ldots \circ \nu_1)^{-1}(G)$ be an irreducible component that surjects onto $G$. Since $G$ is contracted by $\nu_{i+1}$, the variety $Z$ is contained in a $\phi$-fiber. Consequently, $V|_{Z}$ is numerically flat. Since $$ V|_{Z} \,\simeq \,((\nu_i \circ \ldots \circ \nu_1)|_Z)^* (V_i|_G)\, , $$ this shows that $V_i|_G$ is numerically flat. Thus we can apply Step 1 to $\nu_{i+1}$. {\em Step 3.
The general case.}\, If $\nu\,:\, X'\,\longrightarrow\, X$ is another bimeromorphic morphism, then the pull-back $\nu^* V$ is a vector bundle on $X'$ that satisfies the assumption with respect to the morphism $\phi \circ \nu$. Thus it suffices to prove the statement for $\phi \circ \nu$. Since any bimeromorphic morphism between manifolds is dominated by a sequence of blowups with smooth centers, we can assume without loss of generality that $\phi$ is a composition of blowups with smooth centers. Therefore, the proof is completed using Step 2. \end{proof} \section{Nonabelian Hodge theory for Moishezon manifolds} Let $M$ be a compact Moishezon manifold. Recall that Moishezon manifolds, defined as manifolds of maximal algebraic dimension, were introduced and studied by Moishezon in \cite{Mo}, where he proved that they are birational to smooth complex projective manifolds (see also \cite[p.~26, Theorem~3.6]{Ue}). \begin{definition}\label{def1} Take a Higgs bundle $(V,\, \theta)$ on $M$ such that $c_1(V)\,=\, 0\,=\, c_2(V)$. The Higgs bundle $(V,\, \theta)$ is called \textit{semistable} (respectively, \textit{polystable}) if for every pair $(C,\, \tau)$, where $C$ is a compact connected Riemann surface and $\tau\, :\, C\, \longrightarrow\, M$ is a holomorphic map, the pulled back Higgs bundle $(\tau^*V,\, \tau^*\theta)$ is semistable (respectively, polystable). Clearly, it is enough to consider only nonconstant maps $\tau$. \end{definition} When $M$ is a smooth complex projective variety, semistability and polystability according to Definition \ref{def1} coincide with those described in Section \ref{s2}. Indeed, from Proposition \ref{props} (respectively, Theorem \ref{thm1}) we know that a semistable (respectively, polystable) Higgs bundle $(V,\, \theta)$ on $M$ with $c_1(V)\,=\, 0\,=\, c_2(V)$ is semistable (respectively, polystable) in the sense of Definition \ref{def1}.
Conversely, if $(V,\, \theta)$ is a Higgs bundle on $M$ with $c_1(V)\,=\, 0\,=\, c_2(V)$ such that it is semistable (respectively, polystable) in the sense of Definition \ref{def1}, then it is straightforward to deduce that $(V,\, \theta)$ is semistable (respectively, polystable). \begin{theorem}\label{thmfm} Let $M$ be a compact Moishezon manifold. There is an equivalence of categories between the following two: \begin{enumerate} \item The objects are flat complex connections on $M$, and morphisms are connection preserving homomorphisms. \item Objects are Higgs bundles $(V,\, \theta)$ on $M$ satisfying the following conditions: $c_1(V)\,=\, 0\, =\, c_2(V)$, and $(V,\, \theta)$ is semistable. The morphisms are homomorphisms of Higgs bundles. \end{enumerate} \end{theorem} \begin{proof} Fix a holomorphic map $$\phi\, :\, X\, \longrightarrow\, M$$ from a smooth complex projective variety $X$ such that $\phi$ is bimeromorphic. Let $(E,\, \nabla)$ be a flat complex connection on $M$. Consider the flat complex connection $(\phi^*E,\, \phi^*\nabla)$ on $X$. Let $(V,\, \theta_V)$ be the semistable Higgs bundle on $X$ that corresponds to it by Theorem \ref{thms}. We shall show that there is a holomorphic vector bundle $W$ on $M$ such that $\phi^*W\, =\, V$. Let $\beta\,:\, F' \,\longrightarrow\, X$ be the desingularization of a subvariety $F \,\subset\, X$ that is contained in a $\phi$-fiber. Using the fact that $F$ is contracted by the map $\phi$ we conclude that the homomorphism $$\beta_*\,:\, \pi_1(F') \,\longrightarrow\, \pi_1(X)$$ induced by $\beta$ is trivial. Since Corollary \ref{cor1} remains valid when polystability is replaced by semistability (see Proposition \ref{props}), from Corollary \ref{cor1}(1) we know that $\beta^* V$ is a trivial holomorphic vector bundle. In particular, the restriction $V|_F$ is numerically flat. Therefore, by Proposition \ref{propositionpullback}, there is a holomorphic vector bundle $W$ on $M$ such that $\phi^*W\, =\, V$.
Let $U\, \subset\, M$ be the open subset over which $\phi$ is an isomorphism. The Higgs field $\theta_V$ produces a Higgs field on $W\vert_U$. Now by Hartogs' extension theorem, this Higgs field on $W\vert_U$ extends to a Higgs field on $W$ over $M$; this extended Higgs field on $W$ will be denoted by $\theta_W$. We have $c_1(W)\,=\, 0\, =\, c_2(W)$, because $c_1(V)\,=\, 0\, =\, c_2(V)$. We shall show that the Higgs bundle $(W,\, \theta_W)$ on $M$ is semistable. Take any pair $(C,\, \tau)$, where $C$ is a compact connected Riemann surface and $\tau\, :\, C\, \longrightarrow\, M$ is a nonconstant holomorphic map. Then there is a triple $(\widetilde{C},\, \psi,\, \widetilde{\tau})$ such that \begin{itemize} \item $\widetilde C$ is a compact connected Riemann surface, \item $\psi\, :\, \widetilde{C}\, \longrightarrow\, C$ is a surjective holomorphic map, \item ${\widetilde\tau}\, :\, {\widetilde C}\, \longrightarrow\, X$ is a holomorphic map, and \item $\phi\circ{\widetilde\tau}\,=\, \tau\circ\psi$. \end{itemize} {}From Theorem \ref{thm1} and Proposition \ref{props} we know that the Higgs bundle $({\widetilde\tau}^*V,\, {\widetilde\tau}^*\theta_V)$ is semistable. Combining this with the two facts that $\phi\circ{\widetilde\tau}\,=\, \tau\circ\psi$ and $(V,\, \theta_V)\,=\, (\phi^*W,\, \phi^*\theta_W)$, we conclude that the Higgs bundle $(\psi^*\tau^*W,\, \psi^*\tau^*\theta_W)$ is semistable. But this implies that $(\tau^*W,\, \tau^*\theta_W)$ is semistable. Indeed, if a subbundle $W'\, \subset\, \tau^*W$ contradicts the semistability condition for $(\tau^*W,\, \tau^*\theta_W)$, then $\psi^*W'\, \subset\, \psi^*\tau^*W$ contradicts the semistability condition for $(\psi^*\tau^*W,\, \psi^*\tau^*\theta_W)$. Therefore, the Higgs bundle $(W,\, \theta_W)$ is semistable. To prove the converse, let $(V,\, \theta)$ be a Higgs bundle on $M$ satisfying the following conditions: $c_1(V)\,=\, 0\, =\, c_2(V)$, and $(V,\, \theta)$ is semistable. 
Consider the Higgs bundle $(\phi^*V,\, \phi^*\theta)$ on $X$. We evidently have $c_1(\phi^* V)\,=\, 0\, =\, c_2(\phi^* V)$. We shall prove that $(\phi^*V,\, \phi^*\theta)$ is semistable. Take any pair $(C,\, \tau_1)$, where $C$ is a compact connected Riemann surface and $\tau_1\, :\, C\, \longrightarrow\, X$ is a holomorphic map. Set $$ \tau\,=\, \phi\circ\tau_1\, . $$ Then the given condition that $(\tau^*V,\, \tau^*\theta)$ is semistable implies that $(\tau^*_1\phi^*V,\, \tau^*_1\phi^*\theta)$ is semistable. But this implies that $(\phi^*V,\, \phi^*\theta)$ is semistable with respect to any polarization on $X$. Let $(E',\, \nabla')$ be the flat complex connection on $X$ that corresponds to $(\phi^*V,\, \phi^*\theta)$ by Theorem \ref{thms}. Since the map $\phi$ is bimeromorphic, the homomorphism $\phi_*\,:\, \pi_1(X) \,\longrightarrow \,\pi_1(M)$ induced by it is an isomorphism. Consequently, $(E',\, \nabla')$ produces a flat complex connection on $M$. It is straightforward to check that the above two constructions, namely from flat connections on $M$ to Higgs bundles on $M$ and vice versa, are inverses of each other. \end{proof} \begin{proposition}\label{propfm} Let $M$ be a compact Moishezon manifold. There is an equivalence of categories between the following two: \begin{enumerate} \item The objects are completely reducible flat complex connections on $M$, and morphisms are connection preserving homomorphisms. \item Objects are Higgs bundles $(V,\, \theta)$ on $M$ such that $c_1(V)\,=\, 0\, =\, c_2(V)$, and $(\phi^*V,\, \phi^*\theta)$ is polystable; the morphisms are homomorphisms of Higgs bundles. \end{enumerate} \end{proposition} \begin{proof} The proof is very similar to the proof of Theorem \ref{thmfm}. Let $(V,\, \theta)$ be a Higgs bundle on $M$ such that $c_1(V)\,=\, 0\, =\, c_2(V)$ and $(\phi^*V,\, \phi^*\theta)$ is polystable.
Take any pair $(C,\, \tau_1)$, where $C$ is a compact connected Riemann surface and $\tau_1\, :\, C\, \longrightarrow\, X$ is a holomorphic map. Setting $\tau\,=\, \phi\circ\tau_1$ we conclude that $(\tau^*V,\, \tau^*\theta)\,=\, (\tau^*_1\phi^*V,\, \tau^*_1\phi^*\theta)$ is polystable. This implies that $(\phi^*V,\, \phi^*\theta)$ is semistable. Let $(E,\, \nabla)$ be the complex flat connection on $X$ that corresponds to $(\phi^*V,\, \phi^*\theta)$ by Theorem \ref{thms}. If $\tau_1(C)$ is an intersection of very ample hypersurfaces on $X$, then, by the Lefschetz hyperplane theorem, the homomorphism of fundamental groups induced by $\tau_1$ $$ \tau_{1*}\, :\, \pi_1(C,\, x_0)\, \longrightarrow\, \pi_1(X,\, x_0) $$ is surjective. Since $(\tau^*_1\phi^*V,\, \tau^*_1\phi^*\theta)$ is polystable, the restriction of $(E,\, \nabla)$ to $\tau_1(C)$ is completely reducible. Now from the surjectivity of $\tau_{1*}$ it follows immediately that $(E,\, \nabla)$ is completely reducible on $X$. Since the homomorphism $\phi_*\,:\, \pi_1(X) \,\longrightarrow \,\pi_1(M)$ induced by $\phi$ is an isomorphism, the completely reducible complex flat connection $(E,\, \nabla)$ on $X$ produces a completely reducible complex flat connection on $M$. To prove the converse, let $(E,\, \nabla)$ be a completely reducible complex flat connection on $M$. Since the homomorphism $\phi_*\,:\, \pi_1(X) \,\longrightarrow \,\pi_1(M)$ induced by $\phi$ is an isomorphism, the pulled back flat connection $(\phi^*E,\, \phi^*\nabla)$ is completely reducible. Let $(V,\, \theta_V)$ be the polystable Higgs bundle on $X$ corresponding to $(\phi^*E,\, \phi^*\nabla)$. As in the proof of Theorem \ref{thmfm}, there is a Higgs bundle $(W,\, \theta_W)$ on $M$ such that $$ (\phi^*W,\, \phi^*\theta_W)\, =\, (V,\, \theta_V) $$ and $c_1(W)\,=\, 0\,=\, c_2(W)$. In the proof of Theorem \ref{thmfm} it was shown that $(W,\, \theta_W)$ is semistable. To complete the proof we need to show that $(W,\, \theta_W)$ is polystable.
Take any pair $(C,\, \tau)$, where $C$ is a compact connected Riemann surface and $\tau\, :\, C\, \longrightarrow\, M$ is a nonconstant holomorphic map. There is a triple $(\widetilde{C},\, \psi,\, \widetilde{\tau})$ such that \begin{itemize} \item $\widetilde C$ is a compact connected Riemann surface, \item $\psi\, :\, \widetilde{C}\, \longrightarrow\, C$ is a surjective holomorphic map, \item ${\widetilde\tau}\, :\, {\widetilde C}\, \longrightarrow\, X$ is a holomorphic map, and \item $\phi\circ{\widetilde\tau}\,=\, \tau\circ\psi$. \end{itemize} We know that $({\widetilde\tau}^*\phi^*W,\, {\widetilde\tau}^*\phi^*\theta_W)$ is polystable and the corresponding flat connection, namely $({\widetilde\tau}^*\phi^*E,\, {\widetilde\tau}^*\phi^*\nabla)$, is completely reducible. For the homomorphism of fundamental groups induced by $\psi$ $$ \psi_*\, :\, \pi_1(\widetilde{C})\, \longrightarrow\, \pi_1(C) $$ the image is a finite index subgroup. This implies that the flat connection $(\tau^*E,\, \tau^*\nabla)$ is completely reducible. Therefore, the Higgs bundle $(\tau^*W,\, \tau^*\theta_W)$ is polystable, so $(W,\, \theta_W)$ is polystable. This completes the proof. \end{proof} \section{Nonabelian Hodge theory for Fujiki class $\mathcal C$ manifolds} Let $M$ be a compact connected complex manifold lying in Fujiki class $\mathcal C$. Fix a bimeromorphic map \begin{equation}\label{xf} \phi\, :\, X\, \longrightarrow\, M \end{equation} such that $X$ is compact K\"ahler. Let $(V, \,\theta)$ be a Higgs bundle on $M$ such that $c_1(V)\,=\, 0\,=\, c_2(V)$. Further assume that the pulled back Higgs bundle $(\phi^*V,\, \phi^*\theta)$ is polystable. As noted in Remark \ref{rem1}, this condition is independent of the choice of the K\"ahler form on $X$. \begin{lemma}\label{lem1} Let $f\, :\, Y\, \longrightarrow\, M$ be a holomorphic map from a compact K\"ahler manifold $Y$ such that $f$ is bimeromorphic. Then the pulled back Higgs bundle $(f^*V,\, f^*\theta)$ is also polystable. 
\end{lemma} \begin{proof} Consider the irreducible component of the fiber product $Y\times_M X$ that dominates $M$. Let $Z$ be a desingularization of it. Let $p_Y$ and $p_X$ be the natural projections of $Z$ to $Y$ and $X$ respectively. Since $(\phi^*V,\, \phi^*\theta)$ is polystable with $c_1(\phi^*V)\,=\, 0\, =\, c_2(\phi^*V)$, from Theorem \ref{thm1} we know that $(p^*_X\phi^*V,\, p^*_X\phi^*\theta)$ is polystable; as before, this condition is independent of the choice of the K\"ahler form on $Z$. The Higgs bundle $(p^*_Y f^*V,\, p^*_Y f^*\theta)$ is polystable, because $$ (p^*_X\phi^*V,\, p^*_X\phi^*\theta)\,=\, (p^*_Y f^*V,\, p^*_Y f^*\theta)\, . $$ Let $(W,\, \nabla)$ be the completely reducible flat complex connection on $Z$ corresponding to the polystable Higgs bundle $(p^*_X\phi^*V,\, p^*_X\phi^*\theta)$. The homomorphism $p_{Y*}\, :\, \pi_1(Z,\, z_0)\, \longrightarrow\, \pi_1(Y,\, p_Y(z_0))$ induced by $p_Y$ is an isomorphism, because the map $p_Y$ is bimeromorphic. So $(W,\, \nabla)$ gives a completely reducible flat complex connection $(W',\, \nabla')$ on $Y$. The Higgs bundle on $Y$ corresponding to $(W',\, \nabla')$ is isomorphic to $(f^*V,\, f^*\theta)$. In particular, $(f^*V,\, f^*\theta)$ is polystable. \end{proof} From Lemma \ref{lem1} it follows that the second category in the following theorem is independent of the choice of the pair $(X,\, \phi)$. \begin{theorem}\label{thm2} Let $M$ be a compact connected complex manifold lying in Fujiki class $\mathcal C$. There is an equivalence of categories between the following two: \begin{enumerate} \item The objects are completely reducible flat complex connections on $M$, and morphisms are connection preserving homomorphisms. \item Objects are Higgs bundles $(V,\, \theta)$ on $M$ such that $c_1(V)\,=\, 0\, =\, c_2(V)$, and $(\phi^*V,\, \phi^*\theta)$ is polystable; the morphisms are homomorphisms of Higgs bundles.
\end{enumerate} \end{theorem} \begin{proof} The homomorphism of fundamental groups induced by $\phi$ $$ \phi_*\, :\, \pi_1(X,\, x_0)\, \longrightarrow\, \pi_1(M,\, \phi(x_0)) $$ is an isomorphism, because $\phi$ is bimeromorphic. Consequently, the operation of pullback, to $X$, of flat vector bundles on $M$ identifies the flat bundles on $M$ with those on $X$. Also, connection preserving homomorphisms between two flat bundles on $M$ coincide with connection preserving homomorphisms between their pullbacks to $X$. Let $(V_1,\, \theta_1)$ be a polystable Higgs bundle on $X$ with $c_1(V_1)\,=\,0\,=\, c_2(V_1)$. Then, as shown in the proof of Theorem \ref{thmfm}, using Proposition \ref{propositionpullback} the vector bundle $V_1$ descends to a holomorphic vector bundle on $M$, meaning there exists a bundle $W_1$ on $M$ such that $V_1\,=\, \phi^*W_1$. Since $c_1(V_1)\,=\,0\,=\, c_2(V_1)$, this implies that $c_1(W_1)\,=\,0\,=\, c_2(W_1)$. Let $U\, \subset\, X$ be the open subset over which $\phi$ is an isomorphism. The Higgs field $\theta_1$ defines a Higgs field on $W_1\vert_{\phi(U)}$. Again using Hartogs' extension theorem this Higgs field on $W_1\vert_{\phi(U)}$ extends to a Higgs field on $W_1$; this extended Higgs field on $W_1$ will be denoted by $\theta'$. It is evident that $(\phi^*W_1,\, \phi^*\theta')\,=\, (V_1,\, \theta_1)$. If $(W_2,\, \theta'')$ is a Higgs bundle on $M$ with $c_1(W_2)\,=\,0\,=\, c_2(W_2)$ such that $(\phi^*W_2,\, \phi^*\theta'')$ is polystable, then it can be shown that \begin{equation}\label{st} H^0(M,\, \text{Hom}((W_1,\, \theta'),\, (W_2,\, \theta'')))\,=\, H^0(X,\, \text{Hom}((V_1,\, \theta_1),\, (\phi^*W_2,\, \phi^*\theta'')))\, . \end{equation} To prove this, let $(E_1,\, \nabla_1)$ (respectively, $(E_2,\, \nabla_2)$) be the completely reducible flat complex connection on $M$ corresponding to $(W_1,\, \theta')$ (respectively, $(W_2,\, \theta'')$).
Then $$ H^0(X,\, \text{Hom}((V_1,\, \theta_1),\, (\phi^*W_2,\, \phi^*\theta'')))\,=\, H^0(X,\, \text{Hom}((\phi^*E_1,\, \phi^*\nabla_1),\, (\phi^*E_2,\, \phi^*\nabla_2)))\, . $$ But $$ H^0(X,\, \text{Hom}((\phi^*E_1,\, \phi^*\nabla_1),\, (\phi^*E_2,\, \phi^*\nabla_2)))\,=\, H^0(M,\, \text{Hom}((E_1,\, \nabla_1),\, (E_2,\, \nabla_2)))\, . $$ Hence from the isomorphism $$ H^0(M,\, \text{Hom}((E_1,\, \nabla_1),\, (E_2,\, \nabla_2)))\,=\, H^0(M,\, \text{Hom}((W_1,\, \theta'),\, (W_2,\, \theta''))) $$ we conclude that \eqref{st} holds. This completes the proof. \end{proof} It is rather routine to check that the results of \cite{BG} extend to the context of Fujiki class $\mathcal C$ manifolds. \section*{Acknowledgements} We are grateful to Andreas H\"oring who helped us by providing the proof of Proposition \ref{propositionpullback}. We also thank Yohan Brunebarbe and Carlos Simpson for useful conversations on the subject. This work has been supported by the French government through the UCAJEDI Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR2152IDEX201. The first-named author is partially supported by a J. C. Bose Fellowship, and the School of Mathematics, TIFR, is supported by 12-R$\&$D-TFR-5.01-0500.
\section{Introduction} \label{Intro} The European Space Agency {\it Gaia } \ mission was successfully launched on December 19, 2013 from the French Guiana Space Centre in Kourou. Gaia is an ambitious astrometric, photometric and spectroscopic survey of a large part of the Milky Way: about 1\% of the Galactic stellar content down to $V \sim 20^{th}$ magnitude will be observed several tens of times. {\it Gaia } will thus revolutionize our view of the Galaxy together with our understanding of its formation and evolution history. Apart from the astrometric instrument, two low-resolution spectrophotometers (Blue and Red Photometers, BP/RP) and the Radial Velocity Spectrometer (RVS) are in operation onboard {\it Gaia } . Recent summaries of these instrument characteristics, their main goals, and expected scientific impact can be found in \citet{deBruijne12,BailerJones13}\footnote{see also http://www.cosmos.esa.int/web/gaia/science-performance}. The analysis of the {\it Gaia } data is performed by the {\it Data Processing and Analysis Consortium} (DPAC) composed of nine {\it Coordination Units} (CU). One of them, CU8 {\it Astrophysical Parameters}, is in charge of the classification and the parametrization of the observed targets \citep[see][]{BailerJones13}. The CU8/{\it Working Group} that is responsible for the main parametrization of the RVS spectra is the {\it Global Stellar Parametrizer - Spectroscopy} ({\it GSP-Spec} ). To complement this {\it GSP-Spec} \ parametrization, other CU8 modules ({\it Extended Stellar Parametrizer, ESP}) will also estimate atmospheric parameters from RVS spectra of more specific types of stars, such as emission-line stars ({\it ESP-ELS}), hot stars ({\it ESP-HS}), cool stars ({\it ESP-CS}) and ultra-cool stars ({\it ESP-UCD}) \citep[see][]{BailerJones13}. {\it GSP-Spec} \ is composed of three research groups with different and complementary expertise in automated stellar classification from spectral data.
In this article, we present the parametrization performed within this working group for RVS spectra. The RVS is a high-resolution integral-field spectrograph that will survey the whole sky at a rate of about 100 spectra per second, producing about 15~billion spectra during the mission. Based on its preliminary in-flight performance, the RVS will collect spectra with a signal-to-noise ratio (S/N) large enough to derive radial velocities for stars brighter than $G_{\rm RVS}$ $\la 16$ (i.e. about 150 million stars, $G_{\rm RVS}$ \ being the {\it Gaia } magnitude of the targets through the RVS filter). This limiting magnitude corresponds to $V \la 17.3$ for a solar-type star (for the corresponding magnitudes in other photometric Gaia bands, see Tab.~\ref{Tab_GRVS} and Fig.~\ref{FigColorColor} presented in Sect.~\ref{GridRandom}). Several tens of millions of stars will be observed by the RVS down to a magnitude of $G_{\rm RVS}$ $\la 13$, and about 5 million stars down to $G_{\rm RVS}$ $\la 12$. The RVS will provide spectra in the CaII IR triplet region (from 847 to 871~nm\footnote{The RVS red edge has been shifted from 874 to 871~nm with respect to the original configuration, following a change in the RVS filter effective transmission. In the present work, we have thus adopted this 871~nm cut-off for the simulated spectra.}) at a spectral resolution of $\sim$11\,200\footnote{The design specification of R$\sim$10\,500 being exceeded.}. In addition to the strong CaII lines, the RVS spectral range contains, for late-type star spectra, weak lines of Si~I, Ti~I, Fe~I, etc. In hotter (A-F type) star spectra, weak lines of N~I, S~I, and Si~I appear around the Ca~II lines and the strong Paschen hydrogen lines (see Fig.~\ref{Spec1} \& \ref{Spec2}). Even hotter ($T_{\rm eff}$ $\ga$ 15\,000~K) stellar spectra contain lines of N~I, Al~II, Ne~I, and Fe~II whereas the Ca~II lines start to decrease, and some He~I lines appear.
On the other hand, the {\it Gaia } commissioning phase has revealed that the RVS suffers from (i) a level of scattered light higher than expected and variable with time and CCD position (mainly sunlight scattered around the sunshield), and thus an increased noise level for part of the spectra, together with (ii) a time-variable throughput loss due to mirror contamination that reduces the collected signal by a few tenths of a magnitude. This last issue is regularly corrected thanks to decontamination campaigns that reduce the loss to acceptable levels. Moreover, the {\it Gaia } DPAC has put in place a new version of the on-board software which results in a data collection scheme that is more robust against the stray light. This is mainly realised by reducing the width of the read-out windows in the RVS (and possibly the astrometric field) such that less noise due to stray light is accumulated. The new video processing unit software that makes this possible has already been uploaded to the satellite \citep{Fleitas2015}. In addition, following the actual RVS performances revealed by the commissioning phase, it has also been decided that all RVS spectra will be provided in the nominal high-resolution mode to minimize the background contamination, which is a function of the window width. Initially, a binning by a factor of three was planned for stars fainter than $G_{\rm RVS}$ = 10 \citep[e.g.][]{BailerJones13}, decreasing the effective resolution to around 7\,500. This possibility has now been definitely abandoned. In this paper, the above-mentioned post-launch RVS characteristics are taken into account and the new S/N-magnitude relation recently published by the European Space Agency has been adopted\footnote{See http://www.cosmos.esa.int/web/gaia/sn-rvs}. In addition, the expected final {\it GSP-Spec} \ performances are given for the actual RVS resolution of $\sim$11\,200.
Nevertheless, the influence of the effective resolution change, at a constant S/N value, is studied in Sect.~\ref{Res}. The main goal of the RVS is to measure the radial velocity of the stars in order to get their full 3D space motions when combined with the {\it Gaia } proper motions. However, the RVS data will also be very useful for parametrizing the brighter stars observed by {\it Gaia }, complementing the parametrization performed independently from the two more sensitive low-resolution spectrophotometers \citep[see][for an estimate of the expected parametrization performances with BP/RP data together with the performance improvements presented in \citet{BailerJones13} and Andrae et al. (2015, in preparation)]{Liu12}. We point out that, in the present work, we do not consider for {\it GSP-Spec} \ any ({\it a-priori}) input from the BP/RP parametrization although this is one of the alternatives implemented in the {\it Gaia } global data processing pipeline developed by CU8 and called the Astrophysical parameters inference system \citep[Apsis, see][]{BailerJones13}. Since {\it Gaia } scans the whole sky, each target will be observed several tens of times depending on its location on the sky (with an average of 40 epochs per star for the RVS at the end of the mission, assuming the nominal 5-year mission). As a consequence, the S/N of the combined spectra will increase with time during the mission and we will have to re-parametrize the better-quality spectra delivered by the successive releases. In the following, we consider RVS end-of-mission spectra that are a combination of the successive individual observations. It is expected that any star brighter than $G_{\rm RVS}$ $\la$~14 (i.e. several tens of millions) will be parametrized by {\it GSP-Spec} .
The estimated stellar parameters will be the effective temperature ($T_{\rm eff}$ ), the surface gravity (log($g$) ), the global metallicity ([M/H] ), and the abundance of $\alpha$ -elements versus iron ([$\alpha$/Fe] ). In a second step and whenever possible (depending on spectral type, metallicity, S/N ratio, radial velocity shifts...), the individual chemical abundances of several species such as Fe, Ca, Mg, Ti, and Si will also be measured. This should be performed for about five million sources with an expected accuracy of about 0.1~dex for $G_{\rm RVS}$ $\la$ 12 owing to a specific optimization method that is under development \citep{Guiglion14}. The number of spectra to parametrize requires the use of automated methods that have to be fast enough and able to manage different types of stars. Therefore, {\it GSP-Spec} \ is currently composed of different independent codes based on different algorithms. The advantage of having different codes is to get independent parameter estimates and to get the best parameters all across the studied space (a given algorithm could provide excellent results for some parameter combinations and/or S/N ratios but rather poor results for others). Therefore, all RVS spectra will be parametrized by the different {\it GSP-Spec} \ codes, and most stars will be assigned several parameter estimates with quality flags. In this paper, we describe the {\it GSP-Spec} \ codes and their expected performances for different types of stellar populations as implemented at the {\it Gaia } launch epoch. Increasingly optimized versions of the {\it GSP-Spec} \ module of Apsis \citep[see][]{BailerJones13} are delivered at each operations cycle. {\it GSP-Spec} \ is expected to be running in operations cycle~4 in 2017, with a possible contribution to the third {\it Gaia } \ data release (planned around 2018).
In the following, we first present in Sect.~\ref{Algo} the codes specifically developed or adapted to the RVS stellar spectra by {\it GSP-Spec} \ in order to estimate their stellar atmospheric parameters together with their enrichment in $\alpha$ -elements with respect to iron. We then detail in Sect.~\ref{Tests} the methodology adopted for testing and comparing these different codes. Their performances on simulated spectra at different RVS magnitudes are investigated and discussed in Sect.~\ref{Perf}. The end-of-mission {\it GSP-Spec} \ expected parametrization is described in Sect.~\ref{final} for different types of stars and S/N ratios. We then provide a comparison with the expected performances for the parametrization from BP/RP photometry (Sect.~\ref{GSPPhot}) and from some ground-based spectroscopic surveys (Sect.~\ref{Surveys}). We finally conclude this work in Sect.~\ref{Conclu}. \section{The {\it GSP-Spec} \ automated parametrization codes} \label{Algo} In this work, four different codes for estimating the atmospheric stellar parameters and the overall [$\alpha$/Fe] \ chemical index (the latter for cool FGK-spectral type stars only) have been tested for {\it GSP-Spec} : FERRE, GAUGUIN, MATISSE, and Artificial Neural Networks (ANN). These codes have already been extensively described in previous papers, and we refer to the indicated references for their detailed description. Below, we provide only a brief summary of these codes and mainly focus on the specific developments made for {\it GSP-Spec} . Furthermore, we point out that most of these codes have already been separately used to parametrize real stellar spectra collected at a similar resolution and over the same wavelength region as those of the RVS (see references below). However, the present work is the first one to illustrate and compare their performances in the {\it Gaia } context.
Among the different data-mining approaches developed so far, the {\it GSP-Spec} \ parametrization algorithms belong to the class that maps observations onto reference models in a continuously variable parameter space. In addition, they belong to the three main families of parametrization: optimization methods, projection methods, and pattern recognition. \begin{itemize} \item The optimization codes (FERRE and GAUGUIN) perform a distance minimization between the full input spectrum and the reference spectra grid. FERRE\footnote{The code is now public and available at: http://hebe.as.utexas.edu/ferre} \citep{Allende06} is a FORTRAN90 code that uses the $\chi^2$ between models and observations to identify the optimal set of atmospheric parameters (and/or abundances) of a star. The search is performed using the Nelder-Mead (1965) algorithm. The model evaluation is based on linear interpolation in a pre-computed grid, which is held in memory for speed. The observed and (interpolated) model spectra are forced to have the same mean value. Multiple searches are performed for each spectrum, initialized at 100 random positions in the grid. The code is parallelized over multiple cores using OpenMP. GAUGUIN \citep{Bijaoui12} is a classical local optimization method implementing a Gauss-Newton algorithm. It is based on a local linearisation around a given set of parameters that is associated with a reference synthetic spectrum (via linear interpolation of the derivatives). A few iterations are carried out through linearisation around the new solutions, until the algorithm converges towards the minimum distance. In practice, and in order to avoid being trapped in secondary minima, GAUGUIN is initialized with parameters independently determined by other algorithms such as MATISSE (noted hereafter MATISSE$_G$, see below).
GAUGUIN is part of the analysis pipeline of the {\it Gaia }-ESO Survey \citep[GES,][]{Gilmore12} for the GIRAFFE spectra (Recio-Blanco et al., 2015, in preparation). \item The MATISSE algorithm \citep{Recio-Blanco06} is a projection method in the sense that the full input spectra are projected onto a set of vectors derived during a learning phase, based on the noise-free reference grids (see Sect.~\ref{Grids}). These vectors are a linear combination of reference spectra and can be roughly viewed as the derivatives of these spectra with respect to the different stellar parameters. MATISSE is thus a local multi-linear regression method. Furthermore, a two-step procedure in the parameter estimation has been implemented within MATISSE to tackle non-linearity problems. Other recent applications of MATISSE to very large amounts of real observed spectra are the {\it AMBRE} project \citep{deLaverny13, Worley12}, the fourth RAVE Data Release \citep{Kordo13}, and the {\it Gaia }-ESO Survey (see Recio-Blanco et al., 2015, in preparation). \item Finally, the ANN code uses a pattern recognition approach to parametrise the spectra. The implementation of the ANN method for {\it GSP-Spec} \ has already been presented in \citet{Manteiga2010}. In short, the architecture is a feed-forward network with three layers, trained with the online error back-propagation algorithm. We generate one network for each stellar parameter, where the number of neurons in the input layer matches the number of points of the adopted signal representation. The output layer consists of the parameter to be predicted. The activation function for the hidden and output neurons of the network is a sigmoidal function. The number of hidden neurons is set to the minimum between half the dimensionality of the input signal and 200 units, in order to reduce the computational burden of the training process.
To deal with the initialization dependence and possible local minima, a series of training procedures is performed until a near-optimal value is reached. In addition, early stopping is used by means of a validation dataset to avoid overfitting, so that the network state that best generalizes is kept. Furthermore, to generalize the application to random spectra, we use 100 noised reference grid spectra during the training phase (see below). Finally, we make use of the wavelet decomposition to obtain new signal representations (low-pass filtering), which are used as the network inputs in both the training and testing phases. In practice, this means that the results of this code are obtained by adopting the approximations of the first, second or third level in the wavelet pyramid (depending on the S/N value) as input for the ANN instead of the full spectra. \end{itemize} As outputs, these four codes provide estimates of the three atmospheric parameters ($T_{\rm eff}$ , log($g$) , and [M/H] ) and of the [$\alpha$/Fe] \ chemical index (for FGK stars only), together with their associated uncertainties (except for the ANN). For some of them (FERRE, GAUGUIN, MATISSE), a quality control parameter based on the goodness of the fit between the input spectrum and an interpolated one at the estimated parameters is also provided. We report in Tab.~\ref{Tab_codes} more details on some technical aspects of these codes.
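The Gauss-Newton iteration at the core of GAUGUIN can be illustrated with a minimal, self-contained sketch. The toy model below (a single Gaussian absorption line whose depth and width play the role of two atmospheric parameters) and its finite-difference derivatives are hypothetical stand-ins for the interpolation of spectra and derivative spectra in a reference grid; only the loop itself (linearise locally, solve for an update, iterate until convergence) reflects the scheme described above.

```python
import numpy as np

def model_spectrum(params, wl):
    """Toy stand-in for grid interpolation: one Gaussian absorption
    line whose depth and width mimic two atmospheric parameters."""
    depth, width = params
    return 1.0 - depth * np.exp(-0.5 * (wl / width) ** 2)

def jacobian(params, wl, eps=1e-6):
    """Finite-difference derivatives of the model with respect to each
    parameter (standing in for grid-interpolated derivative spectra)."""
    base = model_spectrum(params, wl)
    J = np.empty((wl.size, len(params)))
    for i in range(len(params)):
        shifted = np.array(params, dtype=float)
        shifted[i] += eps
        J[:, i] = (model_spectrum(shifted, wl) - base) / eps
    return J

def gauss_newton(observed, wl, start, n_iter=20, tol=1e-10):
    """Iterate local linearisations until the parameter update becomes
    negligible, i.e. the chi-square distance has reached a minimum."""
    params = np.array(start, dtype=float)
    for _ in range(n_iter):
        residual = observed - model_spectrum(params, wl)
        step, *_ = np.linalg.lstsq(jacobian(params, wl), residual,
                                   rcond=None)
        params += step
        if np.linalg.norm(step) < tol:
            break
    return params

wl = np.linspace(-5.0, 5.0, 200)
obs = model_spectrum((0.6, 1.3), wl)           # noise-free "observation"
fit = gauss_newton(obs, wl, start=(0.3, 1.0))  # converges to ~(0.6, 1.3)
```

Initializing the iteration from parameters provided by another method, as done with MATISSE$_G$, simply amounts to supplying a better `start` value.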
\begin{table} \sidecaption \caption{Summary of technical aspects of the tested codes or algorithms.} \label{Tab_codes} \centering \begin{tabular}{lccc} & & & \\ \hline & & & \\ & Training & Programming & Already implemented \\ & phase & language & in Apsis \\ & & & \\ \hline & & & \\ FERRE & No & Fortran & No \\ GAUGUIN & No & Fortran \& Java & Yes \\ MATISSE & Yes & Fortran \& Java & Yes \\ ANN & Yes & Java & No \\ & & & \\ \hline \end{tabular} \end{table} \section{Adopted methodology for quantifying the code performances} \label{Tests} We present here the homogeneous tests, including the adopted data and analysis methodology, implemented to estimate and compare the expected performances of the different {\it GSP-Spec} \ codes. The ultimate goal is to determine the optimal application fields of the different codes in terms of stellar types and S/N. This will lead to the definition of the quality flags that will be assigned to the different parameter estimates within the {\it GSP-Spec} \ pipeline. In this Section, the data used for the tests and the methodology adopted for the analysis of the results are described. The different subsections present the steps of the adopted procedure. In particular: \begin{itemize} \item The codes have been trained (when necessary, depending on the algorithm strategy, see Sect.~\ref{Algo}) using large grids of noise-free simulated RVS synthetic spectra that are described in Sect.~\ref{Grids}\footnote{We however point out that the non-trained codes also use the spectra grid for their parametrization based on a fitting process.} \item Then, in order to perform a homogeneous comparison of the methods, parametrisation tests have been performed using noised random grids. The six adopted S/N values are 350, 150, 125, 40, 20 and 10. Those test grids contain interpolated spectra with arbitrary parameter values (see Sect.~\ref{GridRandom}) that span the whole grid parameter range.
\item A subsample of the previously noised random spectra, excluding non-physical parameter combinations and correctly populating the Hertzsprung-Russell diagram, has been selected (cf. Sect.~\ref{RandomSamples}). \item Finally, we have tackled the problem of the recently abandoned rebinning of RVS spectra for stars with $G_{\rm RVS}$$>$10 (cf. Sect.~\ref{Res}). The influence of this change on the {\it GSP-Spec} \ performances is quantified thanks to specific tests with one of the tested algorithms. \end{itemize} We would like to point out that, in this work, we favoured the use of synthetic spectra instead of real ground-based observations (RVS data not being available yet) in order to explore any possible combination of the four parameters over a wide range of possible values and to keep a good homogeneity between the tests. We are fully aware that synthetic spectra may not be perfectly realistic when compared to observed stars for some parameter combinations, but this will not affect the code comparison. Of course, the application to real observed spectra will lead to larger errors ({\it external ones}) mostly due to the possible mismatches that could exist between synthetic and observed spectra. These effects will be quantified (and possibly corrected) during the mission owing to calibration with {\it reference stars} that will be (or already are for some of them) accurately parametrized \citep[see][]{BailerJones13}. Finally, we neglect in the following any effects that could be induced by wrong normalization or radial velocity corrections of the input spectra. Within the {\it Gaia } analysis pipeline, such problems will be examined, in collaboration with {\it GSP-Spec}, by the CU6 in charge of providing RVS spectra to CU8. The synthetic spectra described below are thus all at the rest frame and normalized.
We, however, point out that, as the estimation of the atmospheric parameters and individual abundances can be quite sensitive to the pseudo-continuum normalization, an automated renormalization of the input spectra has already been implemented within {\it GSP-Spec} . Indeed, the RVS spectra can be renormalized through an iterative procedure coupled with the atmospheric parameter estimates. Such an iterative procedure has already been shown to be very successful when automatically applied to real spectra \citep[][and the GES pipeline, Recio-Blanco et al., 2015, in preparation]{Kordo11a, Worley12, Kordo13}. More specifically, we also point out that \cite{Kordo11a} have already shown that parametrization errors induced by normalization uncertainties as large as 3\% of the continuum level for RVS-like spectra can be neglected if such an iterative renormalisation is implemented. This is, however, not considered in the following tests since we focus on perfectly normalized input synthetic spectra. \subsection{Grids of reference spectra} \label{Grids} \begin{figure*}[ht!] \sidecaption \includegraphics[height=12.cm,width=14.cm]{SpectresTeff.pdf} \caption{Noise-free simulated RVS spectra for late B to K-spectral type metal-rich dwarf stars, included in the {\it reference} grids. The adopted effective temperatures are 11\,000~K, 9\,000~K, 7\,500~K, 6\,500~K, 5\,500~K, and 4\,500~K from bottom to top, respectively. The other stellar parameters are kept constant at log($g$) = 4.5~cm/s$^2$ , [M/H] = +0.0~dex, and [$\alpha$/Fe] = +0.0~dex. The identified lines refer to most of the strongest atomic transitions that are present in the different spectra. } \label{Spec1} \end{figure*} \begin{figure*}[ht!] \sidecaption \includegraphics[height=12.cm,width=14.cm]{SpectresCompar.pdf} \caption{Same as Fig.~\ref{Spec1} but for cool stars only.
Taking as reference a Solar spectrum defined with 5\,750~K, log($g$) = 4.5~cm/s$^2$ , [M/H] = +0.0~dex, [$\alpha$/Fe] = +0.0~dex and R$\sim$11\,000 (top panel), the following panels show the effect of (from top to bottom): a change in surface gravity to log($g$) = 2.0~cm/s$^2$ \ (second panel); and changes in the chemical composition (third, fourth and bottom panels showing respectively [M/H] = +0.5~dex and [$\alpha$/Fe] = +0.2~dex; [M/H] = -1.0~dex and [$\alpha$/Fe] = +0.4~dex; [M/H] = -2.0~dex and [$\alpha$/Fe] = +0.4~dex).} \label{Spec2} \end{figure*} \begin{figure*}[t!] \includegraphics[height=8cm, width=18.cm]{GridReference.pdf} \caption{Distribution of the {\it reference} synthetic spectra in the atmospheric parameter space ($T_{\rm eff}$ , log($g$) , [M/H] \ and [$\alpha$/Fe] \ are in K, cm/s$^2$ , dex, and dex, respectively). The cool-star and hot-star grids are shown with open circles and filled triangles, respectively. Any combination of the shown stellar parameters has been considered when building these grids, except for the hot-grid in which no variations in [$\alpha$/Fe] \ have been considered (see filled triangles in the right panel).} \label{FigGridReference} \end{figure*} The reference grids are a collection of noise-free high-resolution normalized synthetic spectra that have been computed from Kurucz model atmospheres. We favoured these Kurucz models, in contrast to previous works based on MARCS model atmospheres \citep[see, for instance,][]{deLaverny12}, because they allow us to consider consistent spectra for cool FGK and hot BA-spectral type stars. The originally computed spectra preserved the continuum slope; we then continuum-normalized each spectrum by (i) iteratively fitting a straight line, $\sigma$-clipping 10 times the points falling more than 0.1$\sigma$ below or 3$\sigma$ above the fit, and (ii) dividing the original spectrum by this final fit.
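The asymmetric $\sigma$-clipping normalisation just described can be sketched in a few lines; the sloped continuum and the two absorption lines in the example are arbitrary illustration values, not taken from the actual grids.

```python
import numpy as np

def normalize_continuum(wl, flux, n_iter=10, low=0.1, high=3.0):
    """Pseudo-continuum normalisation: fit a straight line, clip points
    more than `low` sigma below or `high` sigma above it (absorption
    lines fall below the continuum), iterate, then divide by the fit."""
    keep = np.ones(flux.size, dtype=bool)
    coeffs = np.polyfit(wl, flux, 1)
    for _ in range(n_iter):
        coeffs = np.polyfit(wl[keep], flux[keep], 1)
        fit = np.polyval(coeffs, wl)
        sigma = np.std((flux - fit)[keep])
        keep = (flux > fit - low * sigma) & (flux < fit + high * sigma)
        if keep.sum() < 2:            # safety net for degenerate cases
            break
    return flux / np.polyval(coeffs, wl)

# Sloped synthetic continuum with two Gaussian absorption lines.
wl = np.linspace(847.0, 871.0, 1125)
continuum = 2.0 - 0.01 * (wl - 847.0)
lines = (0.5 * np.exp(-0.5 * ((wl - 854.2) / 0.1) ** 2)
         + 0.4 * np.exp(-0.5 * ((wl - 866.2) / 0.1) ** 2))
norm = normalize_continuum(wl, continuum * (1.0 - lines))
# Continuum pixels are recovered close to 1.0; line cores stay below.
```

The strongly asymmetric thresholds matter here: clipping aggressively below the fit removes the absorption lines from the continuum estimate, while the loose upper threshold keeps the continuum points themselves.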
We refer in the following to such {\it normalized} spectra, in which the slope in the spectrum, within the RVS wavelength domain, is not conserved. In practice, thanks to the estimates of stellar distance, and after accurate flux calibration, it would also be possible to analyse {\it Gaia }/RVS absolute flux spectra. This possibility is however not considered in this work. The grids were computed using \citet{Castelli03} model atmospheres and a line list from Kurucz\footnote{from his website kurucz.harvard.edu} enhanced with damping constants from \citet{Barklem00} for atomic transitions. The calculations were done using the ASSET synthesis code \citep{Koesterke08, Koesterke09}, sampling the spectra with steps corresponding to 1~km/s. The values of the solar reference abundances in the synthesis are from \citet{Asplund05}, while those of the corresponding atmospheric structures are from \citet{Grevesse98}. The abundances of the $\alpha$ -elements were changed for the synthesis, but not in the model atmospheres. Four parameters were considered for building the FGK-spectral type grid. These parameters are the effective temperature $T_{\rm eff}$ \ that varies from 4\,000 to 8\,000~K (with a step of 250~K), the surface gravity log($g$) \ varying from 2.0 to 5.0~cm/s$^2$ \ (step of 0.5~dex), the mean metallicity varying from $10^{-2.5}$ to $10^{+0.5}$ times the solar metallicity (with a step of 0.5~dex), and variations of $\pm 0.4$~dex in the enrichment in the $\alpha$ -chemical species with respect to iron (step of 0.2~dex). This cool-star reference grid contains 5\,831 spectra. For hotter stars (from late B to F-spectral types), only the first three parameters were considered since the spectra become almost metal line-free with increasing $T_{\rm eff}$ . The effective temperature for this hot-star grid varies from 7\,000 to 11\,500~K (step of 500~K).
The surface gravity and mean stellar metallicity ranges together with their variation steps are identical to those of the cool-star grid (without variations in [$\alpha$/Fe] ). We end up with a reference grid of 490 hot stellar spectra. We point out that any possible combination of the above-mentioned stellar parameters has been considered to build these {\it square} cool and hot grids (i.e. without gaps in any of these three or four parameters). In addition, these very high-resolution grids have then been degraded in order to mimic the RVS instrumental effects by convolution with a Gaussian profile for the spectral resolution (R~$\sim$11\,200) and adopting a sampling of $\sim$0.024~nm/pixel (1\,125~pixels in total). Additionally, a sampling of $\sim$0.072~nm/pix (375~pixels) has been considered to produce two low-resolution grids (R~$\sim$7\,500). They have been used to analyse the influence of the post-launch effective resolution change (cf. Sect.~\ref{Res}). In the following, we will refer to these two RVS synthetic spectra grids as the {\it reference} grids. For illustration, some spectra corresponding to the late B- to K-spectral types are shown in Fig.~\ref{Spec1} together with the identification of some of their strongest lines. Moreover, Fig.~\ref{Spec2} shows some cool-star spectra representative of different Galactic populations. Finally, the distribution of the atmospheric parameters and the [$\alpha$/Fe] \ chemical index of the reference grids is shown in Fig.~\ref{FigGridReference}. \subsection{Grid of noised random spectra} \label{GridRandom} From the {\it reference} high-resolution (R~$\sim$11\,200) and low-resolution (R~$\sim$7\,500) noise-free grids described above, interpolated noised synthetic spectra have been generated for the code tests. The interpolations were performed at random combinations of the four parameters. The S/N has then been simulated by adding noise (see below) to these interpolated spectra.
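This degradation step can be sketched as follows, under the simplifying assumption of a single Gaussian kernel evaluated at the band centre (the actual line spread function varies across the field of view); the narrow test line is an arbitrary illustration, not a grid spectrum.

```python
import numpy as np

def degrade_to_rvs(wl_hires, flux_hires, R=11200, dx=0.024):
    """Convolve a high-resolution spectrum with a Gaussian profile of
    FWHM = lambda/R and resample it on a uniform dx nm/pixel grid."""
    lam0 = 0.5 * (wl_hires[0] + wl_hires[-1])
    sigma = (lam0 / R) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    step = wl_hires[1] - wl_hires[0]
    half = int(5 * sigma / step)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) * step / sigma) ** 2)
    kernel /= kernel.sum()
    # Edge-pad so the convolution stays normalised at the band borders.
    padded = np.pad(flux_hires, half, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    wl_out = np.arange(wl_hires[0], wl_hires[-1], dx)
    return wl_out, np.interp(wl_out, wl_hires, smoothed)

# A very narrow line is broadened to the instrumental width: its depth
# is strongly diluted while its equivalent width is preserved.
wl = np.arange(847.0, 871.0, 0.001)
flux = 1.0 - 0.8 * np.exp(-0.5 * ((wl - 859.0) / 0.005) ** 2)
wl_rvs, flux_rvs = degrade_to_rvs(wl, flux)
```

Producing the low-resolution grids amounts to widening the kernel to the effective R~$\sim$7\,500 and setting `dx` to 0.072.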
The six adopted S/N values are 350, 150, 125, 40, 20 and 10. They correspond to $G_{\rm RVS}$ \ magnitudes of 8.4, 10, 10.3, 11.8, 12.6, and 13.4, according to the most recent RVS performances, including instrumental effects studied post-launch such as stray light contamination, actual line spread function profiles, window decentering, and light loss. We again point out that this added noise will allow us to estimate the parameter uncertainties caused by internal and instrumental errors and not the external errors, mainly dominated by possible synthetic spectra mismatches. Moreover, we report in Tab.~\ref{Tab_GRVS} the magnitudes in the $G$ and $V$-bands corresponding to $G_{\rm RVS}$ = 13.5 for different stellar types and in Fig.~\ref{FigColorColor} the ($G$-$G_{\rm RVS}$ ) versus ($V$-$G$) relation for the range of $T_{\rm eff}$ \ studied in the present work. These estimated $V$, $G$ and $G_{\rm RVS}$ -magnitudes and associated colours are based on the colour transformations provided by \cite{Jordi10} together with the photometric colours -- $T_{\rm eff}$ \ relations of \cite{Ramirez05} and \cite{Boyajian12} for cool and hot stars, respectively.
\begin{table} \sidecaption \caption{$V$ and $G$-magnitudes corresponding to $G_{\rm RVS}$ = 13.5 for some stellar types as defined in Sect.~\ref{final}.} \label{Tab_GRVS} \centering \begin{tabular}{lcc} & &\\ \hline & &\\ Stellar type & $V$ (mag) & $G$ (mag)\\ & &\\ \hline & &\\ B dwarf & 13.69 & 13.65\\ A dwarf & 13.83 & 13.77\\ & &\\ F metal-poor dwarf & 14.25 & 14.12 \\ F metal-rich dwarf & 14.32 & 14.18 \\ G metal-poor dwarf & 14.54 & 14.33 \\ G metal-rich dwarf & 14.76 & 14.48 \\ K metal-poor dwarf & 15.06 & 14.66 \\ K metal-rich dwarf & 15.37 & 14.82 \\ & &\\ G metal-poor giant & 14.54 & 14.33 \\ G metal-rich giant & 14.76 & 14.48 \\ K metal-poor giant & 15.14 & 14.70 \\ K metal-rich giant & 15.52 & 14.89 \\ & &\\ \hline \end{tabular} \end{table} \begin{figure}[ht] \includegraphics[height=4.4cm, width=8.4cm]{PlotColorColor.pdf} \caption{($G$-$G_{\rm RVS}$ ) versus ($V$-$G$) relation for the stellar types (colour-coded) defined in Sect.~\ref{final}. For clarity, the dotted lines (metal-poor stars) have been slightly (+0.02) horizontally shifted and the dwarf and giant regimes separated. The ($G$-$G_{\rm RVS}$ ) and ($V$-$G$) indices only vary with the ($B$-$V$) colour index but the ranges covered on both axes depend on the metallicity. The crosses along the curves refer to ($B$-$V$) varying from 0.0 to 1.5 (step of 0.25) from bottom to top.} \label{FigColorColor} \end{figure} Practically speaking, we have simulated end-of-mission RVS spectra based on the instrument performance information available to us. The CCD windows for the spectra, and therefore their noise properties, depend mainly on the source brightness.
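The signal and noise model used for these end-of-mission simulations (detailed next) can be sketched as follows. The zero point and the number of binned pixels follow the $G_{\rm RVS}$--signal relation quoted in this section; the fixed 40 visits $\times$ 3 CCDs co-addition is a simplified stand-in for the actual sky-position-dependent number of transits. For a flat spectrum at $G_{\rm RVS} = 13.4$, the recovered S/N per pixel comes out close to the value of 10 adopted for that magnitude.

```python
import numpy as np

ZP = 22.5866    # zero point of the G_RVS - signal relation in the text
Q_HR = 1260     # number of binned pixels, high-resolution mode

def mean_signal(grvs, q=Q_HR):
    """Average electrons per binned pixel for a single CCD transit,
    inverted from G_RVS = -2.5 log10(I * q) + ZP."""
    return 10.0 ** ((ZP - grvs) / 2.5) / q

def end_of_mission_spectrum(norm_flux, grvs, n_visits=40, ccds=3,
                            read_noise=4.0, seed=0):
    """Simulate single-transit spectra (Poisson shot noise plus Gaussian
    readout noise) and co-add them into one end-of-mission spectrum,
    renormalised back to the continuum level."""
    rng = np.random.default_rng(seed)
    expected = mean_signal(grvs) * norm_flux   # e- per pixel per transit
    n_obs = n_visits * ccds
    total = np.zeros_like(norm_flux)
    for _ in range(n_obs):
        total += rng.poisson(expected)
        total += rng.normal(0.0, read_noise, expected.size)
    return total / (n_obs * mean_signal(grvs))

flat = np.ones(1125)                 # flat continuum, for the S/N check
combined = end_of_mission_spectrum(flat, grvs=13.4)
snr = 1.0 / np.std(combined - 1.0)   # close to the adopted S/N at 13.4
```

At this faint magnitude the readout noise dominates the few electrons of signal per transit, which is why co-adding the roughly 120 transits is essential to reach a usable end-of-mission S/N.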
On the other hand, we take into account that the binning in the spectral direction corresponds to one RVS pixel.\footnote{Note that the RVS CCDs, as well as the other {\it Gaia } instruments, are operated in TDI mode, and hence talking about pixels (or {\it sample} as often used in {\it Gaia } \ literature) is not accurate, since the signal in any sample of a spectrum has been accumulated over many pixels as the star crosses the focal plane, but we will use this term for analogy.} The average signal per binned pixel in one RVS spectrum $I$ (single visit and single CCD) is approximately defined by the brightness in that band as $G_{\rm RVS} = -2.5\,\log_{10}(I \times q) + 22.5866$, where $q$, the number of binned pixels, is equal to 1\,260 for the high resolution and $q = 420$ for the low one. We account for Poisson shot noise in the data, and the CCD readout noise, assumed to be 4~e$^-$. Since the final RVS spectra will be accumulated from a variable number of visits, depending mainly on the location of a source on the sky, and since typically objects cross three CCDs per visit, we simulate individual observations (spectra acquired per CCD per visit), and then combine them to produce an end-of-mission spectrum for each source. The final high and low resolution noised grids contain 20\,000 random spectra at each selected $G_{\rm RVS}$ \ magnitude. They cover a wide range of atmospheric parameter values, including some non-physical combinations (unreal stars). Actually, 10\,000 random spectra have been interpolated in each high/low resolution and cool/hot {\it reference} grid, with stellar parameters independent of the $G_{\rm RVS}$ \ values. Moreover, an additional 100 random noised spectra were generated to independently train the ANN code (see Sect.~\ref{Algo}). These grids are made available in FITS format upon request to the authors. \subsection{Final definition of noised random spectra samples} \label{RandomSamples} \begin{figure}[t!]
\includegraphics[height=8.4cm, width=8.4cm]{GridRandom.pdf} \caption{Hertzsprung-Russell diagram of the random spectra (black dots) produced for the code performance tests ($T_{\rm eff}$ \ and log($g$) \ are in K and cm/s$^2$ , respectively). The isochrones in long-dashed, dotted, and dash-dotted lines correspond, respectively, to Z=0.06 and 13~Gyr, Z=$10^{-4}$ and 1~Gyr, and Z=0.019 and 100~Myr. The higher density of {\it random} spectra between 7\,000 and 8\,000~K is a consequence of the temperature overlap between the two different {\it reference} grids, for hot and cool stars (see Sect.~\ref{Grids}), from which the random spectra have been interpolated. } \label{FigGridRandom} \end{figure} For the determination of the code performances, we selected a subsample of the previously described 20\,000 random spectra, based on the following criteria: \begin{itemize} \item In order to correctly populate the Hertzsprung-Russell diagram (in terms of stellar parameter combinations, and not of stellar lifetimes) as expected for a galaxy like the Milky Way with different stellar populations (assuming a standard initial mass function over a long period of star formation), we first retrieved PARSEC v1.1 isochrones from \citet{Bressan12}. The selected isochrones correspond to ages of 1 and 13~Gyr and metal contents $Z = 10^{-4}$ and 0.06 (i.e. [M/H] = -2.2 and +0.7~dex). \item We then selected, among the 20\,000 interpolated spectra described in Sect.~\ref{GridRandom}, those having a ($T_{\rm eff}$ , log($g$) ) combination located between these two isochrones. \item Finally, for any selected ($T_{\rm eff}$ , log($g$) ) combination, we chose all the available interpolated spectra (whatever their [M/H] \ and [$\alpha$/Fe] \ values). These criteria led to the selection of 9\,067 random spectra.
\item In addition, and to consider the few hot giant stars that could be present in the RVS surveyed volume, we also retrieved an isochrone that is representative of younger stars (100~Myr) with solar metallicity ($Z = 0.019$). We then selected all the giants having (i) $T_{\rm eff}$ \ varying between 9\,000 and 11\,000~K and (ii) a surface gravity lying between 3.5~cm/s$^2$ \ and the isochrone gravity values. This procedure added 898 stars to the total selected sample. \end{itemize} The final high and low resolution {\it random} grids (as called hereafter) are composed of 9\,965 noised random spectra at each of the six selected $G_{\rm RVS}$ \ or S/N values. Their location in the Hertzsprung-Russell diagram is shown in Fig.~\ref{FigGridRandom}. Their distribution in metallicity and $\alpha$-enrichment is perfectly flat over the ranges [-2.5, +0.5] and [-0.4, +0.8], respectively, because of their random nature. \subsection{Influence of the post-launch abandonment of the RVS spectra rebinning on {\it GSP-Spec} \ performances} \label{Res} \begin{figure*}[t!] \includegraphics[height=18cm, width=18.cm]{LR_HR_Change.pdf} \caption{Left panels: MATISSE$_G$ results for cool random spectra of S/N$\sim$20 at the nominal high resolution (HR) compared to those for the lower resolution (LR) spectra of the same stars (and same S/N value). Right panels: normalized distributions of residuals for the high-resolution spectra (red curve) and for the low-resolution ones (black curve). The ratio of the 68\% quantile values of both distributions is given for each stellar parameter.} \label{LRHR} \end{figure*} As explained in the Introduction, following the actual RVS performances revealed by the commissioning phase, it has been decided that all RVS spectra will be provided in the nominal high-resolution mode to minimize the background contamination.
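As an illustrative aside, the per-CCD noise model described in Sect.~\ref{GridRandom} (mean signal set by the $G_{\rm RVS}$ \ magnitude, Poisson shot noise, a 4~e$^-$ readout noise, and typically three CCD transits per visit) can be sketched in a few lines. This is our own minimal reconstruction for illustration only; the function names and the continuum-normalized input spectrum are assumptions, and this is not DPAC code:

```python
import numpy as np

# Illustrative constants taken from the text; the names are ours.
RVS_ZEROPOINT = 22.5866    # zero point of G_RVS = -2.5*log10(I*q) + 22.5866
READOUT_NOISE = 4.0        # assumed CCD readout noise (electrons)
Q_HIGH, Q_LOW = 1260, 420  # number of binned pixels (high / low resolution)

def mean_signal_per_pixel(g_rvs, q=Q_HIGH):
    """Average signal I per binned pixel implied by the G_RVS magnitude,
    obtained by inverting G_RVS = -2.5*log10(I*q) + 22.5866."""
    return 10.0 ** (-0.4 * (g_rvs - RVS_ZEROPOINT)) / q

def noisy_ccd_spectrum(clean_spectrum, g_rvs, q=Q_HIGH, rng=None):
    """One single-visit, single-CCD observation of a continuum-normalized
    spectrum: Poisson shot noise plus Gaussian readout noise."""
    rng = np.random.default_rng(rng)
    electrons = np.asarray(clean_spectrum) * mean_signal_per_pixel(g_rvs, q)
    return rng.poisson(electrons) + rng.normal(0.0, READOUT_NOISE, electrons.shape)

def end_of_mission_spectrum(clean_spectrum, g_rvs, n_visits,
                            q=Q_HIGH, ccds_per_visit=3, rng=None):
    """Co-add the individual CCD observations accumulated over all visits."""
    rng = np.random.default_rng(rng)
    obs = [noisy_ccd_spectrum(clean_spectrum, g_rvs, q, rng)
           for _ in range(n_visits * ccds_per_visit)]
    return np.mean(obs, axis=0)
```

Averaging the per-CCD observations in this way reproduces the qualitative behaviour described above: the co-added end-of-mission spectrum is much less noisy than a single-transit one, and the faintest magnitudes are dominated by the shot and readout noise.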
Before launch, a binning by a factor of three was planned for stars fainter than $G_{\rm RVS}$ = 10 \citep[e.g.][]{BailerJones13}, decreasing the effective resolution of their spectra to around 7\,500. The parametrization algorithms tested for {\it GSP-Spec} \ have therefore had to evaluate the impact of this recent change in effective resolution on their performances. In this work, we have decided to analyse and report the influence of this post-launch revision, while providing up-to-date performance expectations in agreement with the actual nominal RVS configuration. The resolution change issue has been tackled through the following steps: \begin{itemize} \item One of the codes, MATISSE$_G$ (already integrated in the Apsis {\it Gaia } DPAC pipeline), was run both on all the noised random spectra with the nominal resolution of R~$\sim$11\,200, and on their corresponding rebinned spectra with an effective resolution of R~$\sim$7\,500. The four left panels of Fig.~\ref{LRHR} show the MATISSE$_G$ results for cool random spectra of S/N$\sim$20 at the nominal high resolution (HR) compared to those for the rebinned lower resolution (LR) spectra of the same stars and same S/N value. No particular trends are found between the two sets of solutions, justifying the possibility of correcting the parametrization performances for this rebinning change in the input data. \item The influence of the resolution on the parametrization precision was quantified by estimating the variation of the 68\% quantile of the error (residual) distributions (see also Sect.~\ref{MAR}) due to the resolution change only. The four right panels of Fig.~\ref{LRHR} present the normalized distributions of residuals for the S/N~$\sim$20 high-resolution spectra results (red curve) and for the low-resolution ones (black curve) of cool random stars. The ratio of the 68\% quantile values of both distributions is given for each stellar parameter.
Similar ratios have been estimated for all the S/N values considered in this paper (cf. Tab.~\ref{Tab_LRHR}). In general, we can see that, at constant S/N, the gain in parameter precision when passing from the low resolution rebinned spectra to the nominal high resolution ones is about one third. \item The results of the FERRE and ANN models, which had been trained on rebinned low-resolution spectra for S/N 125, 40, 20 and 10 (fainter stars, as expected before launch), could thus be rescaled to what we would expect if they had been trained on high-resolution spectra. The performances of those codes for the fainter stars, quantified through the 68\% quantile of the residual distribution, have been corrected by multiplying by the high-to-low resolution ratios derived in the previous step for each stellar parameter and each S/N value. \end{itemize} In summary, thanks to the above described synthetic spectra samples, which adopt the most up-to-date S/N--magnitude relation and properly account for the actual RVS configuration and resolution, the parametrization tests presented here can be used to confidently estimate the future {\it GSP-Spec} \ performances. \begin{table}[t!] \caption{Ratio of the 68\% quantile values of the error distributions for nominal high resolution data and low effective resolution data (in the sense Q68$_{\rm HR}$ divided by Q68$_{\rm LR}$), as estimated with MATISSE$_G$.
The results for different parameters and S/N values are presented.} \label{Tab_LRHR} \centering \begin{tabular}{c cccc} & & & &\\ \hline & & & &\\ S/N & 125 & 40 & 20 & 10 \\ $G_{\rm RVS}$ & 10.3 & 11.8 & 12.6 & 13.4 \\ & & & &\\ \hline $T_{\rm eff}$ & 0.62 & 0.66 & 0.62 & 0.62 \\ log($g$) & 0.44 & 0.48 & 0.55 & 0.51 \\ [M/H] & 0.62 & 0.62 & 0.57 & 0.68 \\ [$\alpha$/Fe] & 0.72 & 0.73 & 0.68 & 0.83 \\ & & & &\\ \hline \end{tabular} \end{table} \section{Performance comparison of the different parametrization codes} \label{Perf} We present in this section the performances of the tested methods for the sample of noised random FGK-type and BA-type stars defined in Sect.~\ref{RandomSamples}. In the following, the reported and discussed performances are those obtained by the FERRE and ANN codes together with those of MATISSE locally improved by GAUGUIN (noted MATISSE$_G$ hereafter). In particular, the degradation of these codes' performances as the information contained in the spectra decreases (e.g. for increasing noise or a lack of spectral signatures) is analysed here, in order to understand the behaviour of each method and its best applicability domain. \subsection{Distribution of residuals} \begin{figure*}[ht!] \includegraphics[height=15.cm,width=18.cm]{PlotDistribution1.pdf} \caption{Distributions of the residuals in the recovered atmospheric parameters ($\Delta \theta = \theta _{rec} - \theta _{real}$) for a subsample of cool random spectra with $G_{\rm RVS}$ = 8.4, 10.3 and 12.6 (i.e. S/N values of 350, 125 and 20, from top to bottom, respectively), defined by 4\,000~$<$~$T_{\rm eff}$~$<$~8\,000~K. The different colours refer to the different tested methods: FERRE, ANN and MATISSE$_G$ in red, green and blue, respectively.}
\label{PerfDistrib1} \end{figure*} \begin{figure*}[ht] \sidecaption \includegraphics[height=15.cm,width=13.cm]{PlotDistributionHot1.pdf} \caption{Same as Fig.~\ref{PerfDistrib1} but for a subsample of hot random spectra defined by $T_{\rm eff}$ $>$ 8\,000~K.} \label{PerfDistribHot} \end{figure*} To evaluate the performance of each code, we first computed the differences between the recovered (i.e. {\it estimated}) and real ({\it input}) atmospheric parameters, $\Delta \theta = \theta _{rec} - \theta _{real}$, with $\theta$ referring to $T_{\rm eff}$ , log($g$) , [M/H] \ and [$\alpha$/Fe]. The $\Delta \theta$ will be referred to as the {\it residuals} hereafter. Fig.~\ref{PerfDistrib1} and Fig.~\ref{PerfDistribHot} show the distribution of these $\Delta \theta$ residuals obtained with the three tested methods, for cool and hot star random spectra, respectively. Both figures illustrate the results obtained at different $G_{\rm RVS}$ \ magnitudes ($G_{\rm RVS}$ = 8.4, 10.3 and 12.6, corresponding to S/N values of 350, 125 and 20). It can be seen that the residual distributions are always very peaked and depart from a perfect Gaussian-like distribution only at the faintest magnitudes (note that both figures are in logarithmic scale). Almost no outliers (spectra that are parametrized with an error well outside the main distribution) are seen. Moreover, these distributions show no bias, except for the faintest hot star spectra, where small biases appear only for some methods. Furthermore, we can notice that all the methods, at a given magnitude, recover the four atmospheric parameters with a rather similar quality. The performances are particularly excellent for the best-quality spectra ($G_{\rm RVS}$ = 8.4; this also holds as long as $G_{\rm RVS}$ $<$~10, see Fig.~\ref{PerfClass1} for instance).
The residual distributions get wider as the noise increases, although the large majority of the spectra (see, for instance, the discussion on the Q68$_\theta$ below) are always recovered with an acceptable accuracy, even at $G_{\rm RVS}$ = 12.6. As expected, the parametrisation in $T_{\rm eff}$ \ and [M/H] \ of the hottest and faintest stars is of poorer quality, since these spectra lack sensitive spectroscopic signatures (see Fig.~\ref{Spec1}, where few atomic lines are seen, except the \ion{Ca}{II} triplet, when $T_{\rm eff}$ $\ga$~8\,000~K). However, the appearance of the broad Paschen lines still allows a very good estimate of the stellar surface gravity, even for very low quality spectra. In contrast, the surface gravity of the late-type stars is always the most difficult parameter to recover, as already shown in several previous studies. \subsection{Quantification of method performances} \label{MAR} To quantify the performance of each method, the 68\% quantile of the $\Delta \theta$ distributions (Q68$_\theta$, hereafter) was adopted. This quantile can be viewed as the 1-$\sigma$ error of the parameter recovery in the case of a perfect Gaussian distribution of $\Delta \theta$. This is, however, not always the case, in particular for low S/N ratios. Another statistical estimate of the performances is the systematic error (or bias) that corresponds to the mean of the differences between the recovered and real parameters ($<\Delta \theta>$). In our case, the biases are almost always very small compared to the Q68$_\theta$ quantiles, for every method (see previous subsection). They can thus be neglected for our purpose and, as a consequence, they will not be discussed hereafter.
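To make the statistical indicators used in this section concrete, the sketch below (our own illustration, operating on hypothetical residuals rather than the actual {\it GSP-Spec} \ test data) shows how Q68$_\theta$, the bias, the MAR and the rms are computed from a set of residuals $\Delta\theta$:

```python
import numpy as np

def performance_stats(residuals):
    """Indicators used to quantify the parameter recovery from the
    residuals Delta-theta = theta_rec - theta_real:
    Q68  -- 68% quantile of |residuals| (close to 1 sigma for a Gaussian),
    bias -- mean residual (systematic error),
    MAR  -- mean absolute residual,
    rms  -- root mean square of the residuals."""
    r = np.asarray(residuals, dtype=float)
    return {"Q68": np.quantile(np.abs(r), 0.68),
            "bias": r.mean(),
            "MAR": np.abs(r).mean(),
            "rms": np.sqrt(np.mean(r ** 2))}
```

For purely Gaussian residuals, the Q68 is close to the standard deviation; outliers pull the rms towards quantiles higher than Q68, while at low S/N the MAR drops towards lower quantiles, which is the behaviour reported in Tab.~\ref{Tab_MAR}.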
We first point out that, in the following, we favoured these Q68 quantiles over other possible statistical indicators (such as the root mean square, rms, or the Mean Absolute Residual, MAR) since we believe that the Q68 are more representative of the real performances of the methods (in terms of the bulk of the tested data), particularly at low S/N ratios where the parametrization is more difficult to perform. As an illustration of this, we compared (see Tab.~\ref{Tab_MAR}) the MAR$_\theta$ and rms$_\theta$ with the Q68$_\theta$ for all the random FGK-spectral type stars (about 6\,000 spectra) and the three $G_{\rm RVS}$ \ magnitudes considered in the present article. In this table, the reported numbers are obtained with the FERRE code, but our conclusions are independent of the adopted method. Although the MAR$_\theta$ and Q68$_\theta$ are very similar for the best quality spectra ($G_{\rm RVS}$ $<$ 10.3), it can be seen that the MAR$_\theta$ are always smaller than the Q68$_\theta$ for every atmospheric parameter and that this departure increases with decreasing S/N. For instance, the MAR$_\theta$ gets close to the 60\% quantiles when $G_{\rm RVS}$ \ = 13.4 (S/N$=$10). This results from the fact that, for fainter spectra, the $\Delta \theta$ distributions start to depart from a pure Gaussian distribution (see Fig.~\ref{PerfDistrib1} \& ~\ref{PerfDistribHot}). In contrast, the rms$_\theta$ tend to correspond to higher quantiles than the Q68$_\theta$. As is well known, this is caused by the high sensitivity of the rms to outliers. However, for the faintest magnitudes, these two statistical indicators become closer to each other and the reported errors come into agreement. \begin{table*}[t!] \sidecaption \caption{Comparison between the statistical performance indicators Q68$_\theta$, MAR$_\theta$ and rms$_\theta$ for the FERRE code.
We report in parentheses in the last six columns the quantiles corresponding to the MAR$_\theta$ and the rms$_\theta$.} \label{Tab_MAR} \centering \begin{tabular}{c ccc c ccc c ccc} & & & & & & & & & & &\\ \hline & & & & & & & & & & &\\ & \multicolumn{3}{c}{Q68$_\theta$} & & \multicolumn{3}{c}{MAR$_\theta$} & & \multicolumn{3}{c}{rms$_\theta$} \\ & & & & & & & & & & &\\ \cline{2-4} \cline{6-8} \cline{10-12} & & & & & & & & & & &\\ S/N & 125 & 40 & 10 & & 125 & 40 & 10 & & 125 & 40 & 10\\ $G_{\rm RVS}$ & 10.3 & 11.8 & 13.4 & & 10.3 & 11.8 & 13.4 & & 10.3 & 11.8 & 13.4\\ & & & & & & & & & & &\\ $T_{\rm eff}$ \ (K) & 32.1 &103.3 &381.8& & 33.4 (Q69) &101.4 (Q68) &327.0 (Q62) & & 56.1 (Q84)&153.7 (Q80)&436.0 (Q73)\\ log($g$) \ (dex) & 0.05 & 0.15 & 0.49& & 0.05 (Q68) & 0.15 (Q67) & 0.40 (Q61) & & 0.08 (Q82)& 0.22 (Q79)& 0.57 (Q71)\\ [M/H] \ (dex) & 0.05 & 0.14 & 0.36& & 0.05 (Q68) & 0.13 (Q65) & 0.29 (Q60) & & 0.08 (Q82)& 0.18 (Q76)& 0.38 (Q71)\\ [$\alpha$/Fe] \ (dex) & 0.04 & 0.14 & 0.35& & 0.04 (Q68) & 0.12 (Q66) & 0.27 (Q58) & & 0.09 (Q83)& 0.18 (Q78)& 0.35 (Q68)\\ & & & & & & & & & & &\\ \hline \end{tabular} \end{table*} \begin{figure*}[ht] \includegraphics[height=11.cm,width=18.cm]{PerfClass1.pdf} \caption{Variation of the code performances (quantified by the 68\% quantile) as a function of the magnitude, for the subsample of random cool stars defined by 4\,000~$<$~$T_{\rm eff}$~$<$~8\,000~K and [M/H] $\geq$ -1.0~dex (2\,951 spectra in total).} \label{PerfClass1} \end{figure*} \begin{figure} \includegraphics[height=5.3cm,width=8.4cm]{PerfHotClass1Teff.pdf} \includegraphics[height=5.3cm,width=8.4cm]{PerfHotClass1Logg.pdf} \includegraphics[height=5.3cm,width=8.4cm]{PerfHotClass1Meta.pdf} \caption{Same as Fig.~\ref{PerfClass1} but for the subsample of hot stars, defined by $T_{\rm eff}$ $>$ 8\,000~K and [M/H] $\geq$ -1.0~dex (1\,457 spectra in total).} \label{PerfHotClass1} \end{figure} \subsubsection{General cool and hot random
temperature effects} As a first step of the comparison, it is important to understand how the parametrization codes react to i) S/N degradation and ii) the general {\it palette} of spectral types (and therefore stellar effective temperatures) that the {\it GSP-Spec} \ module will have to deal with. To this purpose, Figs.~\ref{PerfClass1} \& \ref{PerfHotClass1} illustrate the degradation of each code's performance with increasing noise for the late-type and early-type star samples, respectively. \begin{figure*}[ht] \includegraphics[height=11.cm,width=18.cm]{PlotScatterMethod1.pdf} \caption{Method-to-method comparison of the results, at S/N~$\sim$125 ($G_{\rm RVS}$$\sim$10.3), for the subsample of random cool stars defined in Fig.~\ref{PerfDistrib1}.} \label{PlotScatter1} \end{figure*} \begin{figure*}[ht] \includegraphics[height=11.cm,width=18.cm]{PlotScatterMethod2.pdf} \caption{Same as Fig.~\ref{PlotScatter1} but at S/N~$\sim$20 ($G_{\rm RVS}$$\sim$12.6).} \label{PlotScatter2} \end{figure*} In addition, Figs.~\ref{PlotScatter1} and \ref{PlotScatter2} show the scatter plots of the method-to-method comparisons at S/N~$\sim$125 ($G_{\rm RVS}$$\sim$10.3) and S/N~$\sim$20 ($G_{\rm RVS}$$\sim$12.6), respectively. No important trends are found between the results of the different methods, confirming the consistency (with a higher or lower degree of agreement) between the three sets of results. From the above mentioned plots, several conclusions can be derived: \begin{itemize} \item Most of the stars are well parametrized in $T_{\rm eff}$ , log($g$) \ and [M/H] \ (together with [$\alpha$/Fe] \ for the cool stars) by the three methods, working completely independently. This reinforces the idea that our estimates are robust. \item In the good to intermediate quality regime (for $G_{\rm RVS}$ $\lesssim 12.5$) and for cool and hot stars, two completely independent methods (FERRE and MATISSE$_G$) produce very compatible results, with no significant differences between the two codes.
\item In the low quality regime (for $G_{\rm RVS}$ $\gtrsim 12.5$) and cool stars, the three methods (FERRE, ANN and MATISSE$_G$) give similar results (see also Fig.~\ref{PerfDistrib1}, bottom panel), although the ANN method seems to perform slightly better for very low S/N spectra ($G_{\rm RVS}$ $\sim 13.5$). \item For $G_{\rm RVS}$ $\gtrsim 12.5$ and hot stars, the MATISSE$_G$ solutions seem slightly more robust. \end{itemize} \subsubsection{Gravity and metallicity effects} To complete the robustness evaluation of the tested codes, and to understand their optimal applicability domains, we need to analyse their performances as a function of two additional parameters: stellar metallicity and surface gravity. To this purpose, we have chosen to illustrate two particular cases, concentrating on G- and K-spectral type stars: rather metal-rich giants (Fig.~\ref{PerfClass4}) and dwarf stars with intermediate to low metallicities (Fig.~\ref{PerfClass6}). These two stellar types correspond to extreme cases of the {\it GSP-Spec} \ performances since (i) late-type giants are more easily parametrized than the corresponding dwarfs and (ii) metal-rich star spectra exhibit many more spectral lines, which eases their parametrization (see Sect.~\ref{FGK}). The results show the following tendencies: \begin{itemize} \item In the good to intermediate quality regime (for $G_{\rm RVS}$ $\lesssim 12.5$), the FERRE and MATISSE$_G$ codes are confirmed as the two methods providing the best results, independently of the metallicity and the gravity. \item In the low quality regime (for $G_{\rm RVS}$ $\gtrsim 12.5$) and for metal-rich stars, the three methods have similar performances, with the ANN and MATISSE$_G$ codes being slightly better for log($g$). \item In the low quality regime (for $G_{\rm RVS}$ $\gtrsim 12.5$) and for metal-poor stars, the ANN method again provides the best results.
\end{itemize} \subsubsection{Summary of each code's performance} In summary, from the above performance comparison, we can infer the following characteristics of the application of each method to the RVS data: \begin{itemize} \item The FERRE parametrization is always very satisfactory, with good results for all parameters of any type of star. However, for the faintest spectra, the performances degrade strongly, leading to rather badly classified cool metal-poor dwarfs. \item The MATISSE$_G$ method performs rather similarly to FERRE, with a satisfactory parametrisation in every situation. MATISSE$_G$ actually produces slightly better results when $G_{\rm RVS}$ $\la$~10.5 but slightly worse ones for lower S/N ratios. In addition, MATISSE$_G$ can sometimes produce the best estimates when the physical parameter information is low, although not yet completely degraded, at $G_{\rm RVS}$ =~13.5 (as for metal-rich cool giants and early-type stars). \item The ANN code always provides the best results for late-type stars when $G_{\rm RVS}$ $\simeq$~13.5 (see Fig.~\ref{PerfDistrib2}). We stress that such stars will represent the largest sample collected by the {\it Gaia } RVS. We point out, however, that for early-type stars MATISSE$_G$ sometimes performs better. \item The present versions of FERRE and MATISSE$_G$ clearly seem very sensitive to the Gaussian-dominated noise simulated in this work. This will be improved in the near future with the development of optimized versions of these codes once real RVS spectra become available, together with a precise knowledge of their noise properties. The filtering adopted for the ANN method will also have to be adapted accordingly. \end{itemize} \begin{figure*}[ht!]
\includegraphics[height=11.cm,width=18.cm]{PerfClass4.pdf} \caption{Same as Fig.~\ref{PerfClass1} but for a subsample of random cool giant stars defined by $T_{\rm eff}$ $<$~6\,000~K, log($g$) ~$<$~3.5~cm/s$^2$ \ and [M/H] ~$\geq$~-0.5~dex (557 stars in total).} \label{PerfClass4} \end{figure*} \begin{figure*}[ht!] \includegraphics[height=11.cm,width=18.cm]{PerfClass6.pdf} \caption{Same as Fig.~\ref{PerfClass1} but for a subsample of random cool dwarf stars defined by $T_{\rm eff}$ $<$~6\,000~K, log($g$) $\geq$~3.5~cm/s$^2$ \ and -1.25$\leq$[M/H] $<$-0.5~dex (376 stars in total).} \label{PerfClass6} \end{figure*} \begin{figure*}[ht!] \includegraphics[height=5.cm,width=18.cm]{PlotDistribution2.pdf} \caption{Same as Fig.~\ref{PerfDistrib1} but for the subsample of Fig.~\ref{PerfClass6} and $G_{\rm RVS}$ = 13.4.} \label{PerfDistrib2} \end{figure*} \section{Expected parametrization performances for {\it Gaia } RVS end-of-mission data} \label{final} From the previous examination of the different parametrisation codes, we have derived the final (end-of-mission) {\it GSP-Spec} \ expected results by choosing the optimal method for each applicability domain. We first point out that the same code solution was adopted for all four (or three for the early-type stars) atmospheric parameters, to avoid a mix of physically inconsistent parameters. This selection was performed through the following main rules, based on the conclusions of Sect.~\ref{Perf}: \begin{itemize} \item For $G_{\rm RVS}$ $<12.6$ (S/N$> 20$), the average of the FERRE and MATISSE$_G$ solutions has been adopted as the final {\it GSP-Spec} \ performance, since no significant differences appear between the two methods. \item For $G_{\rm RVS}$ $\geq 12.6$ (S/N $\leq 20$) and FGK stars, the average of FERRE, MATISSE$_G$ and ANN is chosen for metal-rich and intermediate-metallicity stars, while the ANN method is favoured for late-type metal-poor spectra.
\item For $G_{\rm RVS}$ $\geq 12.6$ (S/N $\leq 20$) and hot stars, the MATISSE$_G$ solutions have been selected. \end{itemize} In any case, the {\it GSP-Spec} \ pipeline will also provide the individual results of the different codes, in order to avoid any possible discontinuities between the different parameter and/or S/N regimes. Such discontinuities could be accidentally produced by the adopted rules. We point out, however, that they should be avoided thanks to an accurate validation phase based on the analysis of benchmark stars (see an example of such a procedure within GES in Recio-Blanco et al., 2015, in preparation). First of all, in order to describe the code performances for specific types of stars (in terms of spectral type, luminosity and metallicity), we have defined different stellar classes characterized by the following atmospheric parameter ranges: \begin{enumerate} \item Metallicity ranges: \begin{itemize} \item Metal-poor stars: $-2.25 \leq$ [M/H] \ $< -1.25$~dex, roughly corresponding to Halo stars. \item Intermediate-metallicity stars: $-1.25 \leq$ [M/H] \ $< -0.5$~dex, typical of the Galactic thick disc. \item Metal-rich stars: $-0.5 \leq$ [M/H] \ $\leq 0.25$~dex, roughly corresponding to the Galactic thin disc.
\end{itemize} \item Gravity ranges: \begin{itemize} \item Giant stars: 2.5 $\leq$ log($g$) \ $<$ 3.5~cm/s$^2$ \item Dwarf stars: 3.5 $\leq$ log($g$) \ $\leq$ 4.5~cm/s$^2$ \end{itemize} \item Effective temperature ranges: \begin{itemize} \item B-type stars: 10\,000 $\leq$ $T_{\rm eff}$ \ $\leq$ 11\,500~K \item A-type stars: 7\,500 $\leq$ $T_{\rm eff}$ \ $\leq$ 9\,500~K \item F-type stars: 6\,000 $\leq$ $T_{\rm eff}$ \ $\leq$ 7\,000~K \item G-type stars: 5\,000 $\leq$ $T_{\rm eff}$ \ $<$ 6\,000~K \item K-type stars: 4\,000 $\leq$ $T_{\rm eff}$ \ $<$ 5\,000~K \end{itemize} \end{enumerate} This led to 30 stellar classes (15 for dwarfs and 15 for giants)\footnote{Some of these stellar classes correspond to rather infrequent real stars, particularly the hot ones.}. The end-of-mission {\it GSP-Spec} \ parametrization performances for these different classes and $G_{\rm RVS}$ \ magnitudes are presented in Tab.~\ref{TabPerfGSPspec}. The following subsections analyse and discuss the obtained results.
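The adopted rules listed above can be summarized by a short decision function. The sketch below is our own illustration only: the hot/cool boundary {\tt teff\_hot} is an assumption (the cool and hot reference grids overlap between 7\,000 and 8\,000~K), and the code solutions are treated as scalars for simplicity, whereas the real pipeline adopts the same code for all atmospheric parameters jointly:

```python
def adopted_solution(g_rvs, teff, mh, ferre, matisse_g, ann,
                     teff_hot=7500.0, mh_poor=-1.25):
    """Illustrative reconstruction of the GSP-Spec selection rules.
    ferre, matisse_g and ann are the estimates of one atmospheric
    parameter by the three codes (scalars here for simplicity)."""
    if g_rvs < 12.6:                        # S/N > 20: FERRE and MATISSE_G agree
        return 0.5 * (ferre + matisse_g)
    if teff >= teff_hot:                    # faint hot stars: MATISSE_G more robust
        return matisse_g
    if mh < mh_poor:                        # faint cool metal-poor stars: ANN
        return ann
    return (ferre + matisse_g + ann) / 3.0  # faint cool, metal-rich/intermediate
```

For example, a faint ($G_{\rm RVS}$ $\geq 12.6$), cool, metal-poor star falls back on the ANN solution, while any bright star receives the FERRE/MATISSE$_G$ average.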
\begin{sidewaystable*} \caption{Expected end-of-mission performances for {\it GSP-Spec} \ (quantified by the 68\% quantile) for the different stellar classes defined in Sect.~\ref{final}.} \label{TabPerfGSPspec} \centering \begin{tabular}{l rrrrrr cccccc cccccc cccccc} \hline \hline & &&&&&& &&&&& &&&&& \\ & \multicolumn{6}{c}{$T_{\rm eff}$ \ (K)} & \multicolumn{6}{c}{log($g$) \ (dex)} & \multicolumn{6}{c}{[M/H] \ (dex)} & \multicolumn{6}{c}{[$\alpha$/Fe] \ (dex)} \\ & &&&&&& &&&&& &&&&& \\ \hline & &&&&&& &&&&& &&&&& \\ S/N & 350 & 150 & 125 & 40 & 20 & 10 & 350 & 150 & 125 & 40 & 20 & 10 & 350 & 150 & 125 & 40 & 20 & 10 & 350 & 150 & 125 & 40 & 20 & 10\\ $G_{\rm RVS}$ (mag) & 8.4&10.0&10.3&11.8&12.6&13.4 & 8.4&10.0&10.3&11.8&12.6&13.4 & 8.4&10.0&10.3&11.8&12.6&13.4 & 8.4&10.0&10.3&11.8&12.6&13.4 \\ & &&&&&& &&&&& &&&&& \\ \hline & &&&&&& &&&&& &&&&& \\ DWARFS & &&&&&& &&&&& &&&&& \\ B metal-rich & 35 & 89 & 138 & 382 & 478 & 744 & 0.01 & 0.01 & 0.01 & 0.02 & 0.04 & 0.11 & 0.01 & 0.03 & 0.05 & 0.13 & 0.32 & 0.51 &&&&& \\ B interm. met. & 36 & 105 & 144 & 420 & 490 & 808 & 0.01 & 0.01 & 0.01 & 0.02 & 0.05 & 0.12 & 0.02 & 0.05 & 0.07 & 0.19 & 0.38 & 0.58 &&&&&\\ B metal-poor & 42 & 108 & 149 & 429 & 499 & 784 & 0.01 & 0.01 & 0.02 & 0.03 & 0.05 & 0.12 & 0.02 & 0.06 & 0.10 & 0.27 & 0.45 & 0.65 &&&&&\\ & &&&&&& &&&&& &&&&& \\ A metal-rich & 8 & 21 & 31 & 101 & 210 & 323 & 0.01 & 0.01 & 0.01 & 0.03 & 0.07 & 0.12 & 0.01 & 0.01 & 0.02 & 0.07 & 0.14 & 0.18 &&&&& \\ A interm. met. & 8 & 21 & 35 & 104 & 226 & 353 & 0.01 & 0.01 & 0.01 & 0.03 & 0.07 & 0.12 & 0.01 & 0.02 & 0.03 & 0.11 & 0.26 & 0.34 &&&&& \\ A metal-poor & 9 & 24 & 35 & 104 & 232 & 393 & 0.01 & 0.01 & 0.01 & 0.04 & 0.07 & 0.14 & 0.01 & 0.03 & 0.05 & 0.17 & 0.33 & 0.45 &&&&& \\ & &&&&&& &&&&& &&&&& \\ F metal-rich & 7 & 19 & 34 & 98 & 179 & 336 & 0.01 & 0.03 & 0.04 & 0.10 & 0.15 & 0.26 & 0.01 & 0.02 & 0.02 & 0.08 & 0.12 & 0.20 & 0.01 & 0.01 & 0.02 & 0.06 & 0.14 & 0.26\\ F interm. met. 
& 10 & 23 & 40 & 134 & 198 & 394 & 0.01 & 0.03 & 0.05 & 0.12 & 0.16 & 0.27 & 0.01 & 0.03 & 0.06 & 0.14 & 0.20 & 0.39 & 0.01 & 0.02 & 0.05 & 0.14 & 0.24 & 0.27 \\ F metal-poor & 13 & 30 & 51 & 134 & 198 & 425 & 0.02 & 0.05 & 0.06 & 0.14 & 0.17 & 0.29 & 0.02 & 0.05 & 0.08 & 0.21 & 0.23 & 0.43 & 0.02 & 0.05 & 0.09 & 0.22 & 0.25 & 0.29\\ & &&&&&& &&&&& &&&&& \\ G metal-rich & 7 & 19 & 26 & 99 & 166 & 385 & 0.02 & 0.05 & 0.06 & 0.17 & 0.21 & 0.27 & 0.01 & 0.01 & 0.02 & 0.08 & 0.13 & 0.20 & 0.01 & 0.01 & 0.02 & 0.07 & 0.15 & 0.22\\ G interm. met. & 14 & 33 & 62 & 178 & 295 & 460 & 0.03 & 0.08 & 0.12 & 0.18 & 0.25 & 0.28 & 0.01 & 0.03 & 0.06 & 0.16 & 0.25 & 0.32 & 0.01 & 0.02 & 0.04 & 0.10 & 0.20 & 0.27 \\ G metal-poor & 23 & 69 & 105 & 200 & 326 & 487 & 0.05 & 0.13 & 0.15 & 0.22 & 0.28 & 0.32 & 0.02 & 0.06 & 0.11 & 0.22 & 0.31 & 0.38 & 0.01 & 0.04 & 0.06 & 0.14 & 0.18 & 0.25\\ & &&&&&& &&&&& &&&&& \\ K metal-rich & 4 & 7 & 22 & 44 & 143 & 255 & 0.01 & 0.04 & 0.06 & 0.12 & 0.22 & 0.28 & 0.01 & 0.01 & 0.02 & 0.06 & 0.11 & 0.20 & 0.01 & 0.01 & 0.02 & 0.04 & 0.15 & 0.19\\ K interm. met. & 8 & 16 & 27 & 74 & 219 & 305 & 0.02 & 0.05 & 0.06 & 0.13 & 0.25 & 0.30 & 0.01 & 0.02 & 0.02 & 0.06 & 0.14 & 0.28 & 0.01 & 0.01 & 0.03 & 0.06 & 0.16 & 0.19 \\ K metal-poor & 19 & 47 & 80 & 183 & 250 & 422 & 0.04 & 0.15 & 0.15 & 0.20 & 0.28 & 0.33 & 0.02 & 0.05 & 0.08 & 0.17 & 0.27 & 0.36 & 0.01 & 0.03 & 0.04 & 0.15 & 0.19 & 0.22 \\ & &&&&&& &&&&& &&&&& \\ \hline & &&&&&& &&&&& &&&&& \\ GIANTS & &&&&&& &&&&& &&&&& \\ B metal-rich & 11 & 31 & 32 & 107 & 365 & 450 & 0.01 & 0.01 & 0.01 & 0.04 & 0.06 & 0.12 & 0.01 & 0.02 & 0.03 & 0.12 & 0.28 & 0.38 &&&&& \\ B interm. met. 
&13 & 32 & 45 & 139 & 390 & 473 & 0.01 & 0.01 & 0.01 & 0.04 & 0.06 & 0.12 & 0.01 & 0.03 & 0.06 & 0.19 & 0.40 & 0.45 &&&&& \\ B metal-poor & 16 & 33 & 52 & 194 & 411 & 493 & 0.01 & 0.01 & 0.01 & 0.04 & 0.06 & 0.12 & 0.02 & 0.05 & 0.09 & 0.27 & 0.43 & 0.48 &&&&& \\ & &&&&&& &&&&& &&&&& \\ A metal-rich & 7 & 27 & 33 & 97 & 209 & 369 & 0.01 & 0.01 & 0.01 & 0.04 & 0.07 & 0.12 & 0.01 & 0.02 & 0.02 & 0.07 & 0.13 & 0.17 &&&&& \\ A interm. met. & 9 & 27 & 34 & 102 & 229 & 382 & 0.01 & 0.01 & 0.01 & 0.04 & 0.07 & 0.14 & 0.01 & 0.02 & 0.04 & 0.12 & 0.22 & 0.30 &&&&& \\ A metal-poor & 11 & 27 & 36 & 110 & 235 & 403 & 0.01 & 0.01 & 0.01 & 0.04 & 0.08 & 0.14 & 0.01 & 0.04 & 0.04 & 0.17 & 0.31 & 0.43 &&&&& \\ & &&&&&& &&&&& &&&&& \\ F metal-rich & 5 & 10 & 18 & 63 & 134 & 253 & 0.01 & 0.02 & 0.03 & 0.08 & 0.16 & 0.22 & 0.01 & 0.01 & 0.02 & 0.06 & 0.13 & 0.16 & 0.01 & 0.01 & 0.02 & 0.07 & 0.14 & 0.25\\ F interm. met. & 6 & 16 & 22 & 69 & 147 & 265 & 0.01 & 0.03 & 0.03 & 0.11 & 0.17 & 0.22 & 0.01 & 0.03 & 0.04 & 0.11 & 0.17 & 0.22 & 0.01 & 0.02 & 0.04 & 0.12 & 0.22 & 0.27\\ F metal-poor & 6 & 17 & 24 & 71 & 148 & 280 & 0.01 & 0.04 & 0.04 & 0.11 & 0.18 & 0.32 & 0.02 & 0.05 & 0.09 & 0.21 & 0.20 & 0.24 & 0.02 & 0.05 & 0.10 & 0.21 & 0.28 & 0.26\\ & &&&&&& &&&&& &&&&& \\ G metal-rich & 5 & 12 & 22 & 92 & 177 & 350 & 0.01 & 0.03 & 0.04 & 0.15 & 0.22 & 0.36 & 0.01 & 0.01 & 0.02 & 0.08 & 0.14 & 0.16 & 0.01 & 0.01 & 0.02 & 0.06 & 0.13 & 0.23\\ G interm. met. & 10 & 26 & 47 & 166 & 254 & 373 & 0.02 & 0.06 & 0.10 & 0.21 & 0.30 & 0.44 & 0.01 & 0.03 & 0.05 & 0.16 & 0.23 & 0.25 & 0.01 & 0.02 & 0.04 & 0.12 & 0.19 & 0.24\\ G metal-poor & 15 & 43 & 54 & 170 & 265 & 383 & 0.04 & 0.11 & 0.12 & 0.24 & 0.34 & 0.44 & 0.02 & 0.06 & 0.07 & 0.18 & 0.27 & 0.31 & 0.01 & 0.04 & 0.06 & 0.17 & 0.21 & 0.26\\ & &&&&&& &&&&& &&&&& \\ K metal-rich & 5 & 11 & 21 & 64 & 147 & 237 & 0.02 & 0.04 & 0.06 & 0.17 & 0.29 & 0.43 & 0.01 & 0.01 & 0.02 & 0.06 & 0.12 & 0.17 & 0.01 & 0.01 & 0.02 & 0.04 & 0.10 & 0.20 \\ K interm. 
met. & 6 & 18 & 29 & 91 & 211 & 333 & 0.02 & 0.06 & 0.08 & 0.21 & 0.33 & 0.50 & 0.01 & 0.02 & 0.03 & 0.11 & 0.19 & 0.28 & 0.01 & 0.01 & 0.03 & 0.09 & 0.17 & 0.24\\ K metal-poor & 13 & 39 & 65 & 200 & 289 & 444 & 0.04 & 0.10 & 0.13 & 0.28 & 0.34 & 0.52 & 0.02 & 0.05 & 0.07 & 0.19 & 0.25 & 0.31 & 0.01 & 0.04 & 0.05 & 0.14 & 0.19 & 0.26\\ & &&&&&& &&&&& &&&&& \\ \hline \hline & &&&&&& &&&&& &&&&& \\ \end{tabular} \end{sidewaystable*} \begin{figure*}[ht!] \sidecaption \includegraphics[height=9.cm,width=14.cm]{PerfFinale_Gdwarf.pdf} \caption{Variation of the end-of-mission {\it GSP-Spec} \ performances (quantified by the 68\% quantile of the residuals) as a function of increasing magnitudes for the G-dwarf stars defined in Sect.~\ref{final}.} \label{FigPerfGSPspecDwarf} \end{figure*} \begin{figure*}[ht!] \sidecaption \includegraphics[height=9.cm,width=14.cm]{PerfFinale_Kgiant.pdf} \caption{Same as Fig.~\ref{FigPerfGSPspecDwarf} but for the K-giant stars defined in Sect.~\ref{final}.} \label{FigPerfGSPspecGiant} \end{figure*} \begin{figure*}[ht!] \sidecaption \includegraphics[height=8.cm,width=14.cm]{PlotErreur.pdf} \caption{Variation of the 68\% quantile of the residuals for $T_{\rm eff}$ , log($g$) , [M/H] \ for early- and late-type stars (end-of-mission {\it GSP-Spec} \ performances) as a function of the real atmospheric parameters for $G_{\rm RVS}$ = 10.3 and 11.8 (S/N $=$ 125 and 40) in solid red and dashed blue line, respectively. The adopted bins in $T_{\rm eff}$ , log($g$) , [M/H] \ are 750~K, 0.5~dex \ and 0.5~dex, respectively.} \label{FigErreur} \end{figure*} \begin{figure}[ht!] 
\includegraphics[height=4.cm,width=8.7cm]{PlotErreurCool.pdf} \includegraphics[height=4.cm,width=8.7cm]{PlotErreurHot.pdf} \caption{Same as Fig.~\ref{FigErreur} but for the residuals in log($g$) , for cool-FGK and hot-BA stars separately (upper and lower panels, respectively).} \label{FigErreurlogg} \end{figure} \subsection{Performances for late-type stars} \label{FGK} FGK-spectral type stars will represent the majority of the {\it Gaia } RVS targets and, therefore, special attention has to be given to the {\it GSP-Spec} \ parametrisation capabilities for their spectra. Figures~\ref{FigPerfGSPspecDwarf} and \ref{FigPerfGSPspecGiant} illustrate the expected errors (defined as the 68\% quantile of the $\Delta \theta$ distributions, cf. Sect.~\ref{Perf}) for G-type dwarfs and K-type giants, respectively. The different curves on each panel correspond to the three metallicity intervals defined above and reported in Tab.~\ref{TabPerfGSPspec}. For stars with $G_{\rm RVS}$ $\la$12.5, the FGK star parametrisation is accurate enough to precisely characterize the stellar properties (typical errors are smaller than 0.1~dex in [M/H] \ and [$\alpha$/Fe]) and, therefore, to conduct Galactic population studies as already performed from ground-based Galactic archaeology surveys \citep[see][for instance]{Recio-Blanco14}. Such accuracy in the stellar atmospheric parameters will allow, in a second step, quite accurate determinations of individual chemical abundances \citep[see, for instance,][]{Guiglion14}. This is especially true for metal-rich and intermediate-metallicity stars, which will be the most abundant ones in the magnitude volume probed by the RVS. For the faintest stars at $G_{\rm RVS}$ $\geq 12.5-13.0$, where the noise amplitude becomes too strong with respect to the available stellar spectroscopic signatures, the accuracy of the parameter estimation degrades (with metallicity errors in the range 0.2 to 0.5~dex, for instance).
On the other hand, the dependence of the parameter accuracy on the stellar metallicity is illustrated by the clear separation of the continuous (metal-rich), dashed (metal-intermediate) and dotted (metal-poor) curves. As expected, metal-rich stars are more easily parametrized than the cool metal-poor ones, whatever the S/N ratio. This is evidently caused by the number of spectroscopic signatures (lines sensitive to the atmospheric parameters and abundances) available to perform the spectral analysis, which dramatically decreases below [M/H] $\la$-0.5~dex. This is also illustrated in Figures~\ref{FigErreur} and \ref{FigErreurlogg}, showing the dependences of the errors in $T_{\rm eff}$ , log($g$) \ and [M/H] \ on the three atmospheric parameters. First of all, the three right panels of Fig.~\ref{FigErreur} show how the errors in $T_{\rm eff}$ \ (right upper panel), log($g$) \ (right middle panel) \ and [M/H] \ (right bottom panel) depend on the stellar metallicity. In each case, two different curves are shown: the evolution of the 68\% quantile for $G_{\rm RVS}$ = 10.3 (red continuous line, S/N$=125$) and for $G_{\rm RVS}$ =11.8 (blue dashed line, S/N$=40$). As expected, the tendencies are more clearly appreciated at $G_{\rm RVS}$ =11.8 as the parametrization is more sensitive to the loss (or gain) of information at lower S/N ratios. First, it can be appreciated that the errors in the three parameters increase as the metallicity decreases, with a higher sensitivity for the metallicity error itself. In addition, the errors in $T_{\rm eff}$ \ increase as the metallicity decreases down to about [M/H]=$-1.5$~dex. For lower stellar metallicities, the error in $T_{\rm eff}$ \ remains almost constant and practically independent of [M/H]. This is because, for those metal-poor stars, the only useful temperature indicator that remains is the CaII triplet, which is present even at very low metallicities (see Fig.~\ref{Spec2}).
A similar behaviour can be appreciated for the log($g$) \ errors as a function of [M/H] \ (right middle panel). However, in order to distinguish possible differences between FGK-type and early-type stars, Fig.~\ref{FigErreurlogg} shows the evolution of the errors in log($g$) \ and [M/H] \ on the same two parameters, but separating FGK and early-type stars. The surface gravity error shows in fact a clear metallicity dependence (right upper panel of Fig.~\ref{FigErreurlogg}) down to [M/H]=$-1.5$~dex, and no dependence for lower metallicity stars. Another important physical parameter influencing the parametrization performances is the effective temperature. This is illustrated in the three left panels of Fig.~\ref{FigErreur}. In the range concerning FGK-type stars ($T_{\rm eff}$ \ approximately between 4\,000 and 7\,000~K), the behaviour of errors in $T_{\rm eff}$ , log($g$) \ and [M/H] \ shows a maximum around $\sim$5\,500~K. From that point, the errors decrease in both directions, that is, for lower and higher $T_{\rm eff}$. In the first case (for lower $T_{\rm eff}$ \ stars), molecular signatures start to be visible in the spectra, being more abundant as the temperature decreases. Those molecular signatures are sensitive to both $T_{\rm eff}$ \ and log($g$), as molecule formation is favoured for lower temperatures and higher gas pressure (and therefore log($g$)). In the second case (for stars with $T_{\rm eff}$ \ higher than about 6\,000~K), the appearance of the hydrogen Paschen lines and their rapid change with $T_{\rm eff}$ \ brings a precious gravity indicator that reduces the errors in log($g$) \ and breaks the $T_{\rm eff}$-log($g$) \ degeneracy (cf. Sect.~\ref{hot} and Sect.~\ref{correlations}). As a consequence, the derivations of $T_{\rm eff}$ \ and [M/H] \ are also improved for those hot stars. Finally, the gravity influence on the stellar parametrization is illustrated by the middle panels of Fig.~\ref{FigErreur}.
The $T_{\rm eff}$ \ (upper middle panel) and the [M/H] \ (bottom middle panel) derivation seem more difficult for dwarf stars than for giants, while the behaviour seems different for the log($g$) \ estimation (central panel). In practice, the left panels of Fig.~\ref{FigErreurlogg}, showing the residuals of log($g$) \ as a function of log($g$) \ for FGK-type (upper left panel) and early-type stars (bottom left one), clarify the situation. For FGK stars, the gravity determination is in fact also more difficult for dwarfs than for giants, following the same tendency as the temperature and metallicity derivations. More generally, in both cases (FGK dwarfs and giants), the gravity is the most difficult parameter to estimate. This problem with the surface gravity is mostly caused by the lack of neutral and ionized lines of the same element in the RVS spectral domain. In any case, even for the lowest quality RVS spectra, the dichotomy between dwarf and giant stars will still be distinguishable. In summary, the {\it Gaia } RVS data of FGK-type stars will allow accurate studies of the Galactic disc and halo populations. In particular, for stars with metallicity higher than around $-0.5$~dex, which will constitute the majority of the RVS survey, the metallicity and $\alpha$ -enrichment estimates will be very accurate, with typical errors smaller than 0.1~dex in [$\alpha$/Fe] \ down to $G_{\rm RVS}$ $\sim$12.5 (a few tens of millions of stars). In addition, K-giants will make it possible to perform Galactic studies out to distances of $\sim$5~kpc (for $G_{\rm RVS}$=12, and low extinction regions), or even $\sim$12~kpc (for $G_{\rm RVS}$=13.5). \subsection{Performances for early-type stars} \label{hot} \begin{figure}[ht!]
\sidecaption \includegraphics[height=4.cm,width=7.cm]{PerfFinale_Adwarf1.pdf} \includegraphics[height=4.cm,width=7.cm]{PerfFinale_Adwarf2.pdf} \includegraphics[height=4.cm,width=7.cm]{PerfFinale_Adwarf3.pdf} \caption{Same as Fig.~\ref{FigPerfGSPspecDwarf} but for the A-dwarf stars defined in Sect.~\ref{final}.} \label{FigPerfGSPspecHot} \end{figure} Figure~\ref{FigPerfGSPspecHot} shows the expected errors (defined again as the 68\% quantile of the $\Delta \theta$ distributions) for A-type stars. As for FGK stars, three different curves are reported for each stellar parameter, illustrating the behaviour for metal-rich, metal-intermediate and metal-poor stars. First of all, we can conclude that the parametrisation of hot stars with $G_{\rm RVS}$ $\la$12.5 is expected to be very good (and actually excellent for the stellar surface gravity). As an example, the typical error in [M/H] \ for hot metal-rich stars will be smaller than 0.1~dex down to that magnitude. In fact, this results from the pressure sensitivity of the Paschen lines, which is a classical luminosity indicator for early-type stars, especially for $T_{\rm eff}$ $\ga$9\,000~K. This comes from the pressure dependence of the Stark effect. As a consequence, thanks to this important gravity indicator, hot stars show no dependence of the gravity estimation accuracy on any atmospheric parameter (cf. Fig.~\ref{FigErreur} left middle panel, and Fig.~\ref{FigErreurlogg} bottom panels). Only a small degradation with the S/N is detected. In contrast, both the estimation of the effective temperature and the metallicity are very sensitive to $T_{\rm eff}$. In fact, the number of metallic lines drastically decreases in hot-star spectra. Finally, even for the faintest stars, the parameter accuracy is high enough to allow their classification into the main stellar classes (spectral subtypes, dwarf/subgiant/giant luminosity classes, with errors in log($g$) \ lower than approximately 0.2~dex).
The stellar metallicity is expected to be recovered with an error smaller than 0.8~dex for the faintest early-type stars (being smaller than 0.3~dex for A-type metal-rich stars). We recall that A-type dwarfs are bright stars that will make it possible to extend the volume sounded by the RVS out to distances of 5~kpc from the Sun (for $G_{\rm RVS}$=14). \subsection{Error correlations and parameter degeneracies} \label{correlations} \begin{figure*}[t!] \includegraphics[height=5.1cm,width=18.cm]{PlotErreurCorrel.pdf} \caption{Correlations between the residuals of the main atmospheric parameters for spectra at $G_{\rm RVS}$ = 10.3 (S/N$=125$). Cool FGK-dwarf, cool K-giant, and hot BA-spectral type stars are plotted in red, green and blue, respectively. The main shape of these correlations does not change with $G_{\rm RVS}$ , only their amplitude varies.} \label{FigErreurCorrel} \end{figure*} One important aspect to be considered in any parametrization exercise is the existence of error correlations. They inform not only about the robustness of the results, but also about the possible physical sources of parameter degeneracies. Figure~\ref{FigErreurCorrel} shows the parameter error correlations at a given magnitude, $G_{\rm RVS}$ = 10.3 (S/N$=125$), chosen as representative of the high-quality regime, while still having error amplitudes large enough for this analysis. Different colours have been assigned to FGK-dwarfs (red), K-giants (green) and BA-type stars (blue). First of all, a strong correlation is visible between the errors in surface gravity and effective temperature for the cool FGK-dwarfs. This comes from a known degeneracy between these two parameters \citep{Kordo11a}. On one hand, the wings of the CaII lines, carrying much of the information in the RVS domain, grow proportionally to $g^{1/3}$ for cool main sequence stars, but they also strongly depend on the $T_{\rm eff}$.
This implies that differences in the spectra with rather different parameters are very small, causing the error correlations seen in Fig.~\ref{FigErreurCorrel}. This degeneracy is more important in the low metallicity regime, for which fewer metallic lines, carrying additional information on log($g$) , remain in the spectra. Moreover, as a consequence of the $T_{\rm eff}$-log($g$) \ degeneracy, the third atmospheric parameter, [M/H] , is also more difficult to constrain, showing also error correlations with the other two. On the other hand, K-giants do not suffer as much as dwarf stars from this $T_{\rm eff}$-log($g$) \ error correlation, as shown by the green points of Fig.~\ref{FigErreurCorrel}. This is because, as already discussed in Sect.~\ref{FGK}, the parameterisation is easier for giant stars than for dwarfs, with more uncorrelated parameter variations. Finally, BA-type stars (blue points in Fig.~\ref{FigErreurCorrel}), thanks to the presence of a strong and sensitive surface gravity indicator (the Paschen lines), are not affected by $T_{\rm eff}$-log($g$) \ or [M/H]-log($g$) \ degeneracies. This is illustrated by the flat behaviour of the log($g$) \ errors as a function of the $T_{\rm eff}$ \ ones (left panel of Fig.~\ref{FigErreurCorrel}), and the absence of a relation between the [M/H] \ errors and the log($g$) \ ones (right panel). Only a small correlation of the metallicity errors with the temperature ones (middle panel) seems to exist, in agreement with the discussions of Sect.~\ref{hot}. \section{Comparison with the expected performances from {\it Gaia } spectrophotometric data} \label{GSPPhot} In a study rather similar to the present one, \citet{Liu12} reported the expected performances of stellar parameterisation from {\it Gaia } BP/RP spectrophotometry. \citet{Liu12} analysed the results of different tested methods within the context of the DPAC/{\it Working Group: Global Stellar Parametrizer - Photometry} ({\it GSP-Phot}).
Some of the end-of-mission {\it GSP-Phot} \ expected performances have then been recently updated in \citet[][see their Tab.~4; see also Andrae et al., 2015, in preparation]{BailerJones13}. The present section compares the expected end-of-mission parametrisation performances of RVS ({\it GSP-Spec}) and BP/RP ({\it GSP-Phot}) data for stars brighter than $G_{\rm RVS}$ $\la$ 15. We recall that stars fainter than $G_{\rm RVS}$$\sim$15 will have only BP/RP based parameters. Moreover, it should also be pointed out that the present {\it GSP-Spec} \ analysis is performed for simulated RVS spectra that are not affected by any interstellar extinction, contrary to the \citet{Liu12} results that rely on BP/RP spectra showing a large range of a priori unknown interstellar extinction. As a consequence, we will therefore assume in the following discussion that {\it GSP-Spec} \ is insensitive to interstellar extinction and we will only consider the \citet{Liu12} results related to the smallest extinctions. Finally, although the tested random samples, the adopted statistical criteria and the detailed performances for different types of stars and magnitudes are not exactly the same in our study and in the \citet{Liu12} or the \citet{BailerJones13} ones, a rough quantification of the expected differences can still be performed. We point out that, in the following (as in the core of all these papers), the reported uncertainties in the recovered stellar atmospheric parameters refer to internal errors only, i.e. relative star-to-star uncertainties. For the purpose of this {\it GSP-Phot} \ and {\it GSP-Spec} \ comparison, we adopted the relationship between the $G$-band ({\it Gaia } white light) and $G_{\rm RVS}$ \ magnitudes already presented in Tab.~\ref{Tab_GRVS} and Fig.~\ref{FigColorColor}.
It is then found that the $G_{\rm RVS}$ \ magnitude is brighter than the $G$-band one by about 0.3~mag for A-type stars, by 0.6 to 0.7~mag for F-type stars (with the magnitude range reporting the metallicity effect from metal-poor to metal-rich stars), by 0.8 to 1.0~mag for G-type stars and by 1.2 to 1.4~mag for the K spectral type. The variation between cool giants and cool dwarf stars is weak and has been neglected. First, the study of \citet{Liu12} reveals that the {\it GSP-Phot} \ stellar parametrisation is performed with almost the same efficiency as long as $G \la 15$ and starts to degrade only for fainter stars. This is also confirmed by Tab.~4 of \cite{BailerJones13}, in which the performances at $G$=9 and $G$=15 are almost identical. On the contrary, the {\it GSP-Spec} \ parametrization degrades earlier, for $G_{\rm RVS}$ fainter than $\sim$11 or 12~mag, depending on the metallicity. As pointed out in Sect.~\ref{MAR}, the Q68$_\theta$ and rms$_\theta$ statistical indicators can be assumed to be almost identical for low S/N RVS spectra. This allows us to compare our Tab.~\ref{TabPerfGSPspec} and Tab.~4 of \citet[][]{BailerJones13} to roughly deduce performance differences. Of course, this comparison can only show the tendencies suggested by tests on simulated data, neglecting external errors, mismatches between real data and models, parametrization methods optimisation, etc.: \begin{itemize} \item Bright dwarf and giant A-type stars ($G_{\rm RVS}$ $\la$ 12.5) should probably always be better parametrized in $T_{\rm eff}$ , log($g$) \ and [M/H] \ from their RVS spectra. We recall that very good surface gravities and global metallicities (with an accuracy better than 0.1~dex) will be available for such type of stars from their {\it GSP-Spec} \ parametrisation.
For A-stars fainter than $G_{\rm RVS}$ $\sim$ 12.5, although log($g$) \ and [M/H] \ are still better estimated from their RVS spectra, the effective temperature derived from BP/RP data should be more accurate. \item The {\it GSP-Spec} \ effective temperature of F-type stars should probably be favoured for $G_{\rm RVS}$ $<$ 12.5. Their {\it GSP-Spec} \ surface gravity and global metallicity should also be adopted as long as $G_{\rm RVS}$ $\la$ 13. Their accuracy should be better than 0.1~dex when $G_{\rm RVS}$ $<$ 12. \item {\it GSP-Spec} \ stellar parameters of GK-spectral type stars should be adopted as long as $G_{\rm RVS}$ $\la$ 12.5. \item The global [$\alpha$/Fe] \ chemical index will be estimated from the {\it GSP-Spec} \ pipeline only for FGK-spectral type stars. Uncertainties of the order of 0.1~dex (or even smaller) are expected as long as $G_{\rm RVS}$ $\la$ 12-12.5, depending on the metallicity. \end{itemize} In summary, it can be concluded that, for all the considered stellar types, the stars brighter than $G_{\rm RVS}$ $\sim$ 12.5 (S/N$=20$) will be very efficiently parametrized by the {\it GSP-Spec} \ pipeline, including good estimations of the [$\alpha$/Fe] \ chemical index. From these stellar atmospheric parameters, individual chemical abundances (such as Fe, Ca, Ti, Si,...) will be derived with an expected uncertainty smaller than 0.1~dex for most of the RVS sample with about $G_{\rm RVS}$ $\la$ 12 (S/N$\ga 35$), i.e. for a few million targets. For the faintest stars, which are better parametrised from their BP/RP photometry, a $T_{\rm eff}$ \ input from {\it GSP-Phot} \ as an initial condition for {\it GSP-Spec} \ will allow the improvement of its final log($g$) , [M/H] \ and [$\alpha$/Fe] \ estimates. Such a {\it GSP-Phot} / {\it GSP-Spec} \ link is already implemented in the Apsis processing system developed by the CU8, and combined performance tests are under way.
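The $G$-to-$G_{\rm RVS}$ offsets quoted above can be collected into a small helper; the numerical values are transcribed from the text, while the dictionary and function are only an illustrative convenience of ours, not part of any pipeline:

```python
# G - G_RVS offsets in mag, as quoted in the text; the (min, max) pair
# spans the metallicity effect from metal-poor to metal-rich stars.
G_MINUS_GRVS = {
    "A": (0.3, 0.3),
    "F": (0.6, 0.7),
    "G": (0.8, 1.0),
    "K": (1.2, 1.4),
}

def grvs_range(g_mag, spectral_type):
    """Approximate (bright, faint) G_RVS interval for a given G magnitude."""
    lo, hi = G_MINUS_GRVS[spectral_type]
    return (round(g_mag - hi, 2), round(g_mag - lo, 2))
```

For instance, a K-type star at $G = 15$ corresponds to $G_{\rm RVS} \simeq 13.6$-$13.8$, which is why the RVS reaches relatively deeper for cool stars.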
Finally, we also stress that the spectral parametrization of extincted stars should be easier with the {\it GSP-Spec} \ pipeline, since a $T_{\rm eff}$-extinction degeneracy appears in the parametrisation of BP/RP low-resolution spectra for too large line-of-sight interstellar extinctions (assuming that their brightness in the RVS band is not too faint to collect high enough S/N spectra). In those cases, a feedback from {\it GSP-Spec} \ to {\it GSP-Phot}, in a second iteration of the analysis cycle, will also improve the final parameter estimations. \section{Comparison with other spectroscopic surveys} \label{Surveys} A suite of vast ground-based stellar spectroscopic surveys mapping the Milky Way is revolutionizing the observational information about Galactic stellar populations. Their synergy with the Gaia mission relies not only on the sounded spatial volume, but also on their spectral resolution and covered wavelength domain. These two characteristics primarily determine their corresponding performances in the estimation of stellar parameters and chemical abundances. The Sloan Digital Sky Survey project, in its series of operations (SDSS I, II and III) has published about 250 000 spectra (R=1800) from the Sloan Extension for Galactic Understanding and Exploration \citep[SEGUE;][]{YannyRockosi09}. SDSS spectra have provided only limited information on the structures revealed in the SDSS photometry, but they produced $T_{\rm eff}$ \ and log($g$) \ estimates to 250~K and 0.5~dex \ respectively, and [Fe/H] abundances to 0.3~dex for stars with 14$<$r$<$19~mag \citep[][]{Schlesinger12}. SEGUE data overlap the RVS targets only in the fainter RVS magnitude domain. Due to the larger wavelength coverage of the SEGUE data, its $T_{\rm eff}$ \ estimations are generally better than the expected {\it GSP-Spec} \ ones. However, the higher RVS resolution should allow more precise measurements of log($g$) \ and [M/H].
The RAdial Velocity Experiment \citep[RAVE;][]{Steinmetz06} is obtaining accurate radial velocities ($<$ 5 km/s) and global metallicities for 5$\cdot$10$^5$ stars with J$<$12 from spectra with R$=$7500. RAVE is also estimating the individual abundances of some elements for several thousand stars. This project, due to its rather bright magnitude limit, corresponding to about $G_{\rm RVS}$$=$12 for solar type stars, is probing essentially the Galactic disc populations. In terms of parameter estimations, the RAVE internal errors at SNR$=10$ (about $G_{\rm RVS}$$=$12) are 350~K for $T_{\rm eff}$, 0.5~dex \ for log($g$) \ and 0.3~dex for [M/H] \ for solar-type stars \citep[cf.][Table~1]{Kordo13}. More generally, the {\it GSP-Spec} \ performances should always be better than the RAVE ones, as expected from the RVS fainter magnitude limit and higher resolution. More recently, the Large sky Area Multi-Object fiber Spectroscopic Telescope \citep[LAMOST;][]{Zhao12} project has implemented a survey dedicated to Galactic exploration \citep[LEGUE;][]{Deng12}. The LEGUE survey plan includes spectra for 2.5 million stars with $r<19$ and an additional 5 million stars with $r < 17$. The magnitude distribution depends on the telescope throughput, and the survey resolution (R$=$1800) is much lower than the RVS one. \cite{Xiang15} estimate the uncertainties of the LAMOST stellar parameter pipeline to be about 150~K in $T_{\rm eff}$, 0.25~dex in log($g$) \ and 0.15~dex in [Fe/H]. For red giant stars, \cite{Liu14} report typical errors in metallicity in the range 0.15 to 0.30~dex. In addition, similarly to the SEGUE survey, LEGUE data mainly overlap the RVS observations in the faint magnitude domain, for which BP/RP data will also be available.
The results of the first low-resolution surveys revealed the key role of stellar chemical information in disentangling the Milky Way stellar population puzzle, motivating a new era of ground-based high-resolution spectroscopic surveys. Three of them will be active during the period 2015-2019: the Gaia-ESO Survey \citep[GES;][]{Gilmore12}, the SDSS Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE;][]{Eisenstein11} and the Galactic Archaeology with AAO HERMES \citep[GALAH;][]{Zucker12} survey. All these surveys, thanks to their larger wavelength coverage and resolution, will provide more accurate parameters than Gaia/RVS for a subsample of stars. Nevertheless, only the GALAH survey, targeting about one million stars with V$<$14, is expected to have an important overlap with the RVS. This overlap will correspond, in any case, to less than one tenth of the RVS targets with $G_{\rm RVS}$$<$13. The GES survey is mainly targeting faint stars (14$<$V$<$19) thanks to the Very Large Telescope FLAMES/GIRAFFE facility (R$\sim$20\,000) and it will primarily complement the Gaia/BP/RP parameter estimations. In the RVS magnitude domain, only a small GES sample of 10$^4$ G-stars within 2 kpc of the Sun (12$<$V$<$14.5, corresponding to about 11$<$$G_{\rm RVS}$$<$13.5) is being observed with the FLAMES/UVES spectrograph (R$=$40\,000). Finally, the APOGEE survey is preferentially targeting high extinction regions of the disc and the bulge in the range 8$<$H$<$13.8. Although the magnitude coverage overlaps the RVS one, the APOGEE targeted fields are characterized by high stellar crowding, which limits the RVS observations. Therefore, APOGEE will mostly complement the RVS survey near the Galactic plane, rather than overlapping it.
In conclusion, the RVS based stellar parameters will provide precious information about the Galactic populations in the bright part of the Gaia volume, for a number of stars tens of times larger than what will be provided by currently ongoing and planned spectroscopic surveys from the ground. Those surveys, especially those at high spectral resolution, will nevertheless be crucial for the Gaia/RVS parameters validation and to complement them with precise chemical abundances for a subsample of stars. \section{Conclusion} \label{Conclu} In this work, after having analysed the results of different independent methods, we have estimated the end-of-mission expected parametrization performances of the {\it Gaia } \ DPAC pipeline ({\it GSP-Spec}) in charge of deriving the atmospheric parameters and chemical abundances from the RVS stellar spectra. The estimated accuracies, as a function of stellar types and magnitudes, are summarized in Tab.~\ref{TabPerfGSPspec} and in Figs.~\ref{FigPerfGSPspecDwarf} to \ref{FigPerfGSPspecHot}. The reported uncertainties in the recovered stellar atmospheric parameters refer to internal errors only, i.e. relative star-to-star uncertainties. Total errors will be, in many cases, dominated by external ones (partly caused by the possible synthetic spectra mismatches with respect to real observed ones) and they will be estimated from the analysis of real Gaia RVS spectra of benchmark reference stars during a results validation phase. Nevertheless, the internal errors reported here make it possible to clearly identify, and quantify in detail, the enormous variety of science cases that will be addressed through the interpretation of pure {\it Gaia } \ data (without any need for references to external catalogues). The {\it GSP-Spec} \ pipeline will be optimised in the light of the first analysed real RVS spectra over the next year. Increasingly improved versions of the Apsis {\it GSP-Spec} \ module are delivered at each operations cycle.
{\it GSP-Spec} \ is expected to be running in operations cycle~4 in 2017, with a possible contribution to the third {\it Gaia } data release. The current {\it GSP-Spec} \ version, integrated in the general Apsis chain, already meets the tight requirements in processing speed (17~Mflops per source) needed to repeatedly treat tens of millions of spectra. Our tests, including first estimations of the impact caused by the stray light contamination detected on board, show that the contribution of the RVS based stellar parameters will be unique for stars with $G_{\rm RVS}$$\la$12.5 (a few tens of millions of stars). On the one hand, the {\it GSP-Spec} \ parameters will probably be more accurate than the majority of the parameters derived from the spectrophotometry in that magnitude range. This will allow, thanks to the use of the {\it Gaia } parallaxes, a better estimation of the stellar evolution phase and, as a consequence, of the isochrone-based age estimations (for which the effective temperature accuracy is a dominant source of error). Accurate stellar ages will be one of the revolutions in Milky Way astrophysics that the {\it Gaia } mission will accomplish, and the RVS data will strongly contribute to it, sharpening our view of the Galactic history in a volume of very precise measurements (up to $\sim$8~kpc from the Sun for K-giants and $\sim$1~kpc for G-dwarfs). On the other hand, accurate metallicity and chemical abundance measurements such as the [$\alpha$/Fe] \ content are today recognized as crucial information for the understanding of the highly complex evolution of Galactic stellar populations. As an example, the classical kinematically-based definitions of the thin and the thick disc populations blurred our comprehension of the Galactic disc substructure \citep[cf.][]{Bovy2012, Recio-Blanco14}.
The RVS chemical abundance estimations, with accuracies better than 0.1~dex, will therefore be a unique and precious sample of several million stars for which many pieces of the Milky Way history puzzle will be available, with unprecedented precision and statistical relevance. \begin{acknowledgements} We thank the Centre National d'Etudes Spatiales (CNES, France) and the French CNRS/INSU for continuous support for the preparation of the {\it Gaia } mission. This work benefited from travel supports from the European Science Foundation through the GREAT Research Network Program. Part of the computations have been done on the 'Mesocentre SIGAMM' machine, hosted by the Observatoire de la C\^ote d'Azur. The first two authors would like to thank Naia for her (too) numerous (and efficient) attempts to postpone the revision of this paper. We warmly thank C.A.L. Bailer-Jones for his constructive remarks that helped to improve this article. We are also sincerely grateful to D. Katz for his help concerning the in-flight RVS characteristics. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} It is now becoming increasingly clear that the dominant contribution to low energy CP violation arises from the complex CKM matrix which parameterizes the weak quark current coupling to the W-boson. Indeed the recent measurement \cite{gamma} of the angle $\gamma=-Arg(V_{ud}V_{cb}V^*_{cd}V^*_{ub})$ provides evidence \cite{botella} for a complex CKM matrix even if one allows for New Physics (NP) contributions to $B_d-\bar{B}_d$ and $B_s-\bar{B}_s$ mixings. However, this cannot be the full story of CP violation in elementary particle interactions \cite{book}, since it is believed that the explanation of the only cosmic manifestation of CP nonconservation, i.e. the asymmetry between matter and anti-matter, must come from sources other than the CKM CP violation; similarly, the solution to the QCD $\theta$ problem may also imply new forms of CP violating interactions. Moreover, there is the fundamental question of the origin and nature of CP violation and its relation to other constituents and forces. Even before the full story of CP violation is clear, one can ask whether the observed CKM CP violation is spontaneous in origin \cite{lee} or intrinsic to the Yukawa couplings of the theory. This question has nontrivial cosmological implications, since spontaneous CP violation leads to domain walls and, in order to avoid conflict with observations such as the WMAP data, the scale of this breaking must lie above the inflationary reheating scale, thus imposing constraints on both the cosmological and the particle physics aspects of such models. In practical constructions of models with spontaneous CP breaking, one or more Higgs fields must acquire complex vevs \cite{lee}. Implementing this obviously requires extending the standard model, either with more Higgs fields or with extra fermions plus Higgs fields, because gauge invariance leaves no room for the Higgs vev to be complex in the standard model.
Furthermore, since spontaneous CP violation (SCPV) imposes nontrivial constraints on realistic gauge models, it is not surprising that the process of implementing it can lead to unpleasant side effects. One such effect is the plethora of flavor changing neutral current (FCNC) effects induced in the process of obtaining spontaneous CP breaking. Therefore, the challenge in constructing realistic models with spontaneous CP violation is twofold: i) One should achieve genuine spontaneous CP violation and ensure that the vacuum phase does lead to a non-trivially complex CKM matrix. This is not an easy task, since CP invariance of the Lagrangian requires the Yukawa couplings to be real. ii) One should find a natural suppression mechanism for FCNC in the Higgs sector. Again, this is a challenging task, since there is in general a close connection \cite{gcb} between the appearance of FCNC and the possibility of generating a complex CKM matrix through CP violating vacuum phases. The above link between SCPV and FCNC can be seen by considering a two-Higgs-doublet extension $(\phi_{1,2})$ of the standard model to implement SCPV. It is well known (and we repeat the derivation in sec.~II and in Appendix A) that general two Higgs models have FCNC mediated by neutral Higgs fields. In order to suppress these FCNC effects one may consider two possibilities. One consists of the introduction of extra symmetries which eliminate FCNC and guarantee natural flavour conservation (NFC) \cite{glashow} in the Higgs sector. It is well known that the introduction of such symmetries in the two Higgs doublet framework eliminates the possibility of having spontaneous CP violation \cite{gcb}. With three Higgs doublets one can have NFC and yet achieve spontaneous CP violation, but the resulting CKM matrix is real, in contradiction with recent data. Above we have considered the case where FCNC are avoided through the introduction of extra symmetries, not by fine-tuning.
It has been shown that, even if one considers the elimination of FCNC through fine-tuning, for three generations one cannot generate a realistic complex CKM matrix \cite{grimus}. The other possibility for suppressing FCNC effects is to choose a large mass for the neutral Higgs bosons which violate flavour. Indeed, the strength of FCNC effects is proportional to $1/M^2_{H}$, where $H$ denotes the new neutral Higgs field (we will denote the standard model Higgs by $h$). So clearly, suppression of FCNC effects requires that $M_H$ be very large. On the other hand, as we show below, the magnitude of the CP phase (denoted by $\delta$ in the text) in this model is given by $\delta \sim\frac{M_{W}}{M_H}$, so that as $M_H$ becomes very large, $\delta\to 0$ and the theory becomes almost CP conserving. Note that to obtain CKM CP violation, we need $\delta \sim 1$. We will thus show that, in the context of models with SCPV at the electroweak scale, it is not possible to obtain a complex CKM matrix while suppressing FCNC effects. In this class of SCPV models, obtaining a large CP phase and having significant FCNC seem to go together. In this paper, we discuss the conditions under which this connection can be avoided. We point out that the crucial point is to have CP broken at a high energy scale. We present two classes of models: one where the extension involves only the Higgs sector of the standard model and another one which involves the fermion sector as well. In the latter case, there is a small departure from unitarity of the CKM matrix. Several of the models we discuss have already been considered in the literature. We present a systematic classification of these models, adding some new ones and sharpening the connection between SCPV and FCNC. In particular, we present criteria for constructing realistic SCPV models free of FCNC constraints. This paper is organized as follows: in sec. II, we discuss the connection between SCPV and FCNC in a two-Higgs-doublet extension of the SM. In sec.
III, we discuss spontaneous CP breaking at high scale in a pure Higgs extension and show how one can avoid the FCNC effects in this case. In sec. IV, we present a fermionic extension of the SM with spontaneous CP breaking at high scale. In sec. V, we embed these two classes of models into left-right models and discuss two examples, one of which has the interesting property that spontaneous CP violation is triggered by spontaneous P violation. In sec. VI, we briefly comment on how our ideas can be extended to supersymmetric models, and finally in sec. VII, we present our conclusions. In appendices A and B we present detailed demonstrations of the results of secs. II and III. \section{Two Higgs Doublet Model for SCPV and FCNC} The simplest extension of the standard model that can accommodate spontaneous CP violation is the two Higgs doublet model. If we denote the two Higgs doublets as $\phi_{1,2}$, and define $V_0(x,y)=-\mu^2_1 x-\mu^2_2 y+\lambda_1 x^2 + \lambda_2 y^2+\lambda_3 xy$, we can write the potential as follows: \begin{eqnarray} V(\phi_{1,2})~=~V_0(\phi^\dag_1\phi_1, \phi^\dagger_2\phi_2)+V_{12} \label{eq1} \end{eqnarray} where \begin{eqnarray} V_{12}(\phi_1, \phi_2)~=~\mu^2_{12}\phi^\dag_1\phi_2 +~\lambda_4(\phi^\dag_1\phi_2)^2 +\lambda_5 \phi^\dag_1\phi_2 \phi^\dag_1\phi_1 +\lambda_6 \phi^\dag_1\phi_2 \phi^\dag_2\phi_2 ~+~h.c.\\ \nonumber +~\lambda'_3\phi^\dag_1\phi_2 \phi^\dag_2\phi_1~ \label{eq2} \end{eqnarray} We can now write down the potential in terms of the electrically neutral components of the doublets. It looks exactly the same as the above potential, as long as the various fields are understood to be the neutral components of the doublets.
In order to discuss spontaneous CP violation\cite{haber}, we look for a minimum of the form: \begin{eqnarray} <\phi_1>~=~\pmatrix{0\cr \frac{1}{\sqrt{2}}v_1}; \qquad <\phi_2>~=~\pmatrix{0\cr \frac{1}{\sqrt{2}}v_2e^{i\delta}}. \end{eqnarray} The potential at this minimum is \begin{eqnarray} V(v_1,v_2,\delta)~=~V_0\left(\frac{v^2_1}{2},\frac{v^2_2}{2}\right)~+~\frac{1}{4}\lambda'_3 v^2_1v^2_2\\ \nonumber +\mu^2_{12}v_1v_2 \cos\delta+\frac{1}{2}\lambda_4v^2_1v^2_2\cos 2\delta~+~\frac{1}{2} (\lambda_5v^2_1+\lambda_6 v^2_2)v_1v_2 \cos\delta \end{eqnarray} The three extremum equations are: \begin{eqnarray} \left[-\mu^2_1+\lambda_1v^2_1+\frac{1}{2}(\lambda_3+\lambda'_3)v^2_2+ \lambda_4v^2_2\cos 2\delta \right]v_1+ v_2\cos\delta\left[\mu^2_{12} +\frac{1}{2}(3\lambda_5 v^2_1+\lambda_6v^2_2)\right] =0\\ \left[-\mu^2_2+\lambda_2v^2_2+\frac{1}{2}(\lambda_3+\lambda'_3)v^2_1+ \lambda_4v^2_1\cos 2\delta \right]v_2+ v_1\cos\delta\left[\mu^2_{12} +\frac{1}{2}(\lambda_5 v^2_1+3\lambda_6v^2_2)\right] =0\\ -\sin\delta \left[\mu^2_{12}v_1v_2+2\lambda_4 v^2_1v^2_2\cos\delta+\frac{1}{2}v_1v_2 (\lambda_5 v^2_1+\lambda_6 v^2_2)\right]~=~0 \end{eqnarray} Now let us study the implications of the extremum equations for SCPV and FCNC. Writing the Yukawa couplings as ${\cal L}_Y~=~\sum_{a,b;i} h^{u,i}_{ab}(\bar{Q}_{La}\phi_{i}u_{R,b}+ u\rightarrow d) + h.c.$, it is straightforward to see that in general there will be FCNC mediated by the neutral Higgs bosons. We next consider two possibilities for suppressing these FCNC. One involves the introduction of extra symmetries in order to implement Natural Flavour Conservation (NFC) \cite{glashow} in the Higgs sector; the other considers the possibility of making the neutral Higgs bosons which mediate FCNC very heavy. We will see that neither possibility works as far as generating a viable complex CKM matrix is concerned, but the discussion is useful in order to motivate the breaking of CP at a high energy scale, which will be considered in sections 3 and 4.
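One can cross-check stationarity conditions of this type against a numerical gradient of the vacuum energy; the sketch below does this for the potential as parametrized here (neutral vevs $v_i/\sqrt{2}$; all couplings are arbitrary trial numbers of our choosing, not fits), which is a useful guard against sign and factor slips:

```python
import numpy as np

# Finite-difference cross-check of the three extremum equations against
# the vacuum potential V(v1, v2, delta).  Couplings are illustrative.
mu1s, mu2s, mu12s = 1.3, 0.7, 0.4          # mu_1^2, mu_2^2, mu_12^2
l1, l2, l3, l3p = 0.9, 1.1, 0.3, 0.2       # lambda_1,2,3 and lambda'_3
l4, l5, l6 = 0.25, 0.15, 0.1

def V(v1, v2, d):
    V0 = -mu1s*v1**2/2 - mu2s*v2**2/2 + l1*v1**4/4 + l2*v2**4/4 + l3*v1**2*v2**2/4
    return (V0 + 0.25*l3p*v1**2*v2**2 + mu12s*v1*v2*np.cos(d)
            + 0.5*l4*v1**2*v2**2*np.cos(2*d)
            + 0.5*(l5*v1**2 + l6*v2**2)*v1*v2*np.cos(d))

def extremum_eqs(v1, v2, d):
    # the three stationarity conditions (dV/dv1, dV/dv2, dV/ddelta)
    e1 = (v1*(-mu1s + l1*v1**2 + 0.5*(l3 + l3p)*v2**2 + l4*v2**2*np.cos(2*d))
          + v2*np.cos(d)*(mu12s + 0.5*(3*l5*v1**2 + l6*v2**2)))
    e2 = (v2*(-mu2s + l2*v2**2 + 0.5*(l3 + l3p)*v1**2 + l4*v1**2*np.cos(2*d))
          + v1*np.cos(d)*(mu12s + 0.5*(l5*v1**2 + 3*l6*v2**2)))
    e3 = -np.sin(d)*(mu12s*v1*v2 + 2*l4*v1**2*v2**2*np.cos(d)
                     + 0.5*v1*v2*(l5*v1**2 + l6*v2**2))
    return np.array([e1, e2, e3])

# compare with a central finite-difference gradient at a generic point
p, h = np.array([0.8, 0.6, 0.5]), 1e-6
num_grad = np.array([(V(*(p + h*np.eye(3)[i])) - V(*(p - h*np.eye(3)[i])))/(2*h)
                     for i in range(3)])
max_err = np.max(np.abs(num_grad - extremum_eqs(*p)))
```

The analytic expressions and the numerical gradient agree to roundoff, confirming the internal consistency of the parametrization used here.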
\subsection{Eliminating FCNC through extra symmetries} It is well known that it is possible to avoid FCNC by introducing, for example, a $Z_2$ symmetry which restricts the Yukawa couplings so that only one Higgs doublet gives mass to the down quarks while the other doublet gives mass to the up quarks. However, it has been shown \cite{gcb} that the same symmetry which leads to these selective Yukawa couplings prevents the occurrence of spontaneous CP breaking. A possible way out of this difficulty involves the introduction of a third Higgs doublet. In this case it is possible to obtain a CP violating vacuum \cite{gcb}, but the CKM matrix is real, in conflict with the recent experimental findings. The reason why the CKM matrix is real in this case is that, due to the selective Yukawa couplings, the vacuum phase which appears in the quark mass matrices can be eliminated by rephasing the right handed quark fields. \subsection{Suppressing FCNC effects through large Higgs masses} It is straightforward to see that we could diagonalize one set of Yukawa couplings $h^{u,d,1}$ so that the neutral Higgs ($h$) coming from the doublet $\phi_1$ has flavor conserving couplings, whereas that from $\phi_2$ ($H$) has flavor violating couplings. In general, of course, the two neutral Higgs fields mix, and therefore the $h^{u,2}$ coupling, which in the symmetry limit involves only the $H$ Higgs field, will have an admixture of the light Higgs $h$; but this mixing is always proportional to the mass ratio $m^2_h/M^2_H$, assuming $M_H\gg m_h$. Thus FCNC processes will arise via the tree level exchange of the $H$ boson, proportional to $M^{-2}_H$, together with a contribution from the mixing term which has the same kind of power dependence on $M_H$. Therefore, in order to suppress FCNC interactions, we must demand that $M_H$ be very large. This can be achieved by making $-\mu^2_2 > 0$ and $|\mu^2_2|\gg v^2_{wk}$. Let us now study Eq.
(6): this equation tells us the scale of the vev $v_2$, which depends on the scale of the mixing term $\mu^2_{12}$. (Note that getting the correct weak scale fixes $\mu^2_1$ to be of order $v^2_{wk}$, and suppressing FCNC tells us that $|\mu^2_2|\gg v^2_{wk}$, but so far $\mu^2_{12}$ remains a free parameter.) We have two cases: (i) $\mu^2_{12}\sim v^2_{wk}$ and (ii) $\mu^2_{12}\sim M^2_H \sim |\mu^2_2|\gg v^2_{wk}$. In case (i), it is easy to see using the middle equation above that: \begin{eqnarray} v_2\sim \lambda_5\frac{v^3_1}{|\mu^2_2|}\ll v_1 \end{eqnarray} i.e. the vev of $\phi_2$ is highly suppressed in the limit of no FCNC. Note that the mass of the second neutral Higgs is not of order $v_2$, since in this case the vev is induced by a tadpole like diagram. Substituting this small value of $v_2$ in Eq.(7), we then see that for natural values of the parameters ($\lambda_i$), the only solution for the CP violating phase is $\delta=0,\pi, ...$ . On the other hand, in case (ii), $v_2\sim v_{wk}$, but equation (7) above tells us that the expression in the bracket cannot vanish, since $\mu^2_{12}v_1v_2\gg 2\lambda_4 v^2_1v^2_2$; this means that $\sin\delta=0$ and hence there is no SCPV. We therefore conclude that in this simple model, the requirement of suppression of the neutral current effects implies no SCPV. The main point is that to get a large enough SCPV phase, Eq.(7) tells us that $v_2$ must be comparable in magnitude to $v_1$. For this to happen, we must have $|\mu^2_2|\sim v^2_{wk}$, which again means that there must be large FCNC effects at low energies.
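The conclusion of case (i) can also be illustrated by brute force. The sketch below (illustrative couplings, units where $v_{wk}\sim 1$; a toy scan, not a proof) minimises the vacuum energy on a grid with $-\mu^2_2>0$ and $|\mu^2_2|\gg v^2_{wk}$, and indeed finds a strongly suppressed $v_2$ together with a CP-conserving phase:

```python
import numpy as np

# Grid minimisation of the vacuum potential for case (i):
# mu_12^2 ~ v_wk^2 but |mu_2^2| >> v_wk^2.  All numbers illustrative.
mu1s, mu2s, mu12s = 1.0, -100.0, 0.5       # -mu_2^2 > 0, |mu_2^2| >> v_wk^2
l1, l2, l3, l3p, l4, l5, l6 = 0.9, 1.0, 0.3, 0.2, 0.25, 0.15, 0.1

def V(v1, v2, d):
    V0 = -mu1s*v1**2/2 - mu2s*v2**2/2 + l1*v1**4/4 + l2*v2**4/4 + l3*v1**2*v2**2/4
    return (V0 + 0.25*l3p*v1**2*v2**2 + mu12s*v1*v2*np.cos(d)
            + 0.5*l4*v1**2*v2**2*np.cos(2*d)
            + 0.5*(l5*v1**2 + l6*v2**2)*v1*v2*np.cos(d))

v1g, v2g, dg = np.meshgrid(np.linspace(0.8, 1.3, 51),
                           np.linspace(0.0, 0.05, 51),
                           np.linspace(0.0, np.pi, 201), indexing="ij")
i = np.unravel_index(np.argmin(V(v1g, v2g, dg)), v1g.shape)
v1m, v2m, dm = v1g[i], v2g[i], dg[i]       # location of the minimum
```

With these numbers the minimum sits at $v_2\sim 10^{-2}v_1$ and $\sin\delta=0$, in line with the tadpole estimate $v_2\sim \lambda_5 v_1^3/|\mu_2^2|$ above.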
The above result can also be seen as follows: In a two Higgs doublet theory, one can change the basis of Higgs bosons and pass to a basis where the new doublets are $\Phi_1~=~(v_2e^{i\delta}\phi_1-v_1\phi_2)/\sqrt{v^2_1+v^2_2}$ and $\Phi_2$ is the combination orthogonal to $\Phi_1$, where we have anticipated the vevs of the fields in the original basis, as discussed above. Now we see that $<\Phi_1>=0$ while $<\Phi_2>\neq 0$, and this basis leads to the same mass matrices for the quarks as before. We can then choose the parameters of the Higgs potential such that the mass of $\Phi_1$ is very large, so as to avoid FCNC effects. In this case, the effective theory below the mass $M_{\Phi_1}$ of $\Phi_1$ is the same as the standard model up to zeroth order in $M_W/M_{\Phi_1}$. Therefore, to this order, the vev of $\Phi_2$ (which is the equivalent of the standard model Higgs) will be real, and there will be no spontaneous CP violation in the theory (to order $M_W/M_{\Phi_1}$). This again proves that in the limit of zero FCNC, there will be no SCPV. In appendix A, we give explicit calculations in the mass basis that substantiate this conclusion. This result can be generalized to the case of an arbitrary number of Higgs doublets. For example, for the case of three doublets, the argument is that as long as all the doublets couple to quark fields, at least two of the neutral Higgs bosons, i.e. $H_{1,2}$, must be heavy in order to avoid large FCNC effects, and this implies that $|\mu^2_{2,3}|\gg v^2_{wk}$; in that case their vevs must be suppressed, of order $\frac{v^3_{wk}}{|\mu^2_{2,3}|}$, and therefore small. The potential will then be forced to choose the minimum such that all SCPV phases are zero. \section{High Scale Spontaneous CP violation leading to complex CKM while avoiding FCNC: Model with Extra Higgs Only} In this section, we show how the FCNC problem is avoided if spontaneous violation of CP symmetry arises at a high scale.
First we discuss this using a model with two $SU(2)_L\times U(1)_Y$ Higgs doublets $\phi_{1,2}$ as before and a complex singlet $\sigma$. The potential for this case can be written as follows: \begin{eqnarray} V(\phi_1,\phi_2,\sigma)~=~V(\phi_{1,2})~+~V(\sigma)~+~V(\phi,\sigma) \end{eqnarray} where $V(\phi_{1,2})$ is defined in Eqs.(\ref{eq1}), (\ref{eq2}) and the other two terms are given by \begin{eqnarray} V(\sigma)~=~-M^2_0\sigma^*\sigma + M^2_1\sigma^2 +\lambda_\sigma (\sigma^*\sigma)^2+\lambda'_\sigma \sigma^4 +\lambda''_\sigma \sigma^3\sigma^* +h.c. \end{eqnarray} and \begin{eqnarray} V(\phi,\sigma)~=~M_{2,ab}\phi^\dagger_a\phi_b\sigma~+~\kappa_{1,ab} \phi^\dagger_a\phi_b\sigma^2+ \kappa_{2,ab} \phi^\dagger_a\phi_b\sigma^*\sigma+ h.c.\label{sigma} \end{eqnarray} It is clear that the minimum of the potential $V(\sigma)$ corresponds to $<\sigma>=\Lambda e^{i\alpha}$, where $\Lambda\sim M_{0,1,2}\gg v_{wk}$ and $\alpha$ can be large. Substituting this vev in the potential, we can write the effective tree level potential for the $\phi_{1,2}$ fields at low energies as: \begin{eqnarray} V_{eff}(\phi_1,\phi_2)~=~V(\phi_{1,2})+V_{new} \end{eqnarray} where $V_{new}~=~(M_{2,ab}\Lambda e^{i\alpha}+\kappa_{1,ab}\Lambda^2e^{2i\alpha}+ \kappa_{2,ab}\Lambda^2) \phi^\dagger_a\phi_b +h.c.\equiv \Lambda^2(\lambda_{11} \phi^\dagger_1\phi_1+\lambda_{22} \phi^\dagger_2\phi_2+\lambda_{12} e^{i\beta} \phi^\dagger_1\phi_2) + h.c.$ If we keep only the neutral components of the Higgs doublets, then the form of the potential is \begin{eqnarray} V_{eff}~=~\Lambda^2(\lambda_{11} \phi^\dagger_1\phi_1+\lambda_{22} \phi^\dagger_2\phi_2+\lambda_{12} e^{i\beta} \phi^\dagger_1\phi_2+h.c.) +\sum \lambda_{abcd} \phi^\dagger_a\phi_b\phi^\dagger_c\phi_d+h.c. \end{eqnarray} where $\Lambda \gg v_{wk}$. It is clear that although CP is spontaneously broken at a high scale $\Lambda$, at low energies one has CP explicitly but softly broken \cite{rebelo} by the bilinear terms proportional to $\lambda_{12}$.
Note that both fields $\phi_{1,2}$ have Yukawa couplings, and we can make a redefinition of the phase of one of the doublet fields (say $\phi_2$), i.e. $\phi_2\rightarrow e^{-i\beta}\phi_2$, so that all the bilinear $O(\Lambda^2)$ terms in the potential become phase independent but the Yukawa couplings become complex. Thus the effective theory at low energies naively looks like hard CP violation, even though CP violation is spontaneous at a very high scale. The Yukawa coupling Lagrangian looks like \begin{eqnarray} {\cal L}_Y~=~\bar{Q}_{La}(h^{u,1}_{ab}\phi_{1}~+~ h^{u,2}_{ab}e^{-i\beta}\phi_{2})u_{R,b}+ \bar{Q}_{La}(h^{d,1}_{ab}\tilde{\phi}_{1}~+ h^{d,2}_{ab}e^{i\beta}\tilde{\phi}_{2})~d_{R,b}+ h.c. \end{eqnarray} This still does not imply a viable complex CKM matrix; to achieve that, we must show that the vev of $\phi_2$, where the phase resides, does not become very tiny when we demand the suppression of FCNC. In order to show this, let us write down the extremum conditions of the potential as in section 2. For simplicity, we keep only the $\lambda_{1111}$, $\lambda_{2222}$ and $\lambda_{1122}$ terms in the potential, but our results hold in general: \begin{eqnarray} (-\mu^2_1+\Lambda^2\lambda_{11}+\lambda_{1111}v^2_1+\lambda_{1122} v^2_2)v_1+ v_2(\Lambda^2\lambda_{12}) =0\\ (-\mu^2_2+\Lambda^2\lambda_{22}+\lambda_{2222}v^2_2+\lambda_{1122} v^2_1)v_2+ v_1(\Lambda^2\lambda_{12}) =0. \end{eqnarray} From these two equations, we find that $v_{1}$ and $v_2$ are in general of the same order, regardless of what the neutral Higgs masses are. This gives the CKM CP violation. As far as the masses of the neutral Higgs fields go, we can fine tune one set of parameters to keep one Higgs field light, i.e. with mass $\ll \Lambda$, while the other remains heavy, thus suppressing the FCNC effects. Of course, we do not need to make the rephasing $\phi_2\rightarrow e^{-i\beta}\phi_2$ and eliminate the phase from the bilinear terms.
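The statement that $v_1\sim v_2$ survives the decoupling of the heavy Higgs can be illustrated with a two-field toy model (entirely schematic: a real two-component field with an $O(\Lambda^2)$ bilinear form tuned to have one light direction, plus a single quartic coupling; all numbers are our own illustrative choices):

```python
import numpy as np

# Toy model: the bilinear form is O(Lambda^2), but one direction, oriented
# at a generic angle theta, is fine-tuned light.  The vev then points along
# that direction, so v1 and v2 are comparable however heavy the orthogonal
# (FCNC-mediating) combination is made.
Lam = 1.0e4                    # high scale in units where v_wk ~ 1
theta = 0.6                    # generic orientation of the light direction
n = np.array([np.cos(theta), np.sin(theta)])
P = np.outer(n, n)
A = -1.0*P + Lam**2*(np.eye(2) - P)   # eigenvalues: -1 along n, +Lambda^2 across

lam = 0.5                      # representative quartic coupling
phis = np.linspace(1e-4, np.pi/2 - 1e-4, 20001)
us = np.stack([np.cos(phis), np.sin(phis)], axis=1)
quad = np.einsum('pi,ij,pj->p', us, A, us)   # u^T A u along each direction
# along direction u:  V(r u) = quad r^2 + lam r^4, minimised at r^2 = -quad/(2 lam)
Vmin = np.where(quad < 0, -quad**2/(4*lam), 0.0)
k = int(np.argmin(Vmin))
r = np.sqrt(-quad[k]/(2*lam))
v1, v2 = r*us[k]               # both components come out O(1)
```

The minimum tracks the light direction, so $v_1/v_2\simeq\cot\theta$ is an $O(1)$ number fixed by the fine-tuned direction, not by $\Lambda$.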
If we do not do the rephasing, the extremum equation for the phase of the Higgs potential reads: \begin{eqnarray} -\Lambda^2\lambda_{12}v_1v_2 \sin(\beta+\delta)-\sin\delta \left[2\lambda_4 v^2_1v^2_2 \cos\delta+\frac{1}{2}v_1v_2 (\lambda_5v^2_1+\lambda_6 v^2_2)\right]=0 \end{eqnarray} Since $\Lambda^2\gg v^2$, it is clear that to an excellent approximation one has: \begin{eqnarray} \beta~=~-\delta. \end{eqnarray} The phase $\delta$ then appears in the quark mass matrices, which will be nontrivially complex, thus leading to a complex CKM matrix. In Appendix B, we discuss how the fine tuning needed to keep the standard model Higgs at the electroweak scale does not prevent the components of the extra Higgs from becoming superheavy in order to suppress the FCNC effects. \section{SCPV without FCNC problem in fermionic extensions of standard model} In this section, we briefly review the model in \cite{branco}, in which an extension of the standard model with an $SU(2)_L$ singlet quark and a singlet Higgs field was presented, where one can have spontaneous violation of CP at high scale without an FCNC problem but with a complex CKM. In this case, one extends the standard model by the introduction of one singlet vector like fermion of down type, $(D_{L,R})$, with $U(1)_Y$ quantum number $-2/3$, and a complex singlet Higgs field $\sigma$ as in sec. 3. The potential for the $\sigma$ field is the same as in equation (\ref{sigma}). As a result, the $\sigma$ field has a complex vev leading to high scale spontaneous CP violation (since $|<\sigma>|=\Lambda \gg v_{wk}$). The CP violation is transmitted to the weak scale via its couplings given below: \begin{eqnarray} {\cal L}_{\sigma}~=~\sum_a \bar{D}_Ld_{a,R}(g_a \sigma+g'_a\sigma^*)+ (f \sigma+f'\sigma^*)\bar{D}_LD_R+h.c. \end{eqnarray} where $g_{a},g'_a,f,f'$ are real due to CP conservation. But after symmetry breaking, the mass matrix contains terms mixing the heavy $D$ quark with the light $d$ quarks \cite{branco}.
This can be seen by writing down the full down quark mass matrix (in the notation $\bar{\psi}_LM_{dD}\psi_R$): \begin{eqnarray} M_{dD}~=~\pmatrix{m_d & 0\cr \Lambda(ge^{i\delta}+g'e^{-i\delta}) & \Lambda(fe^{i\delta}+f'e^{-i\delta})} \end{eqnarray} where $g$ and $g'$ denote the row vectors $(g_1,g_2,g_3)$ and $(g'_1,g'_2,g'_3)$. Diagonalizing $M_{dD}M^\dagger_{dD}$, we can get the generalized $4\times 4$ CKM matrix, which indeed has a complex phase in the $3\times 3$ sector involving the standard model quarks, even in the limit of heavy $D$ quark masses. This is an example of a breakdown of the decoupling theorem \cite{branco}. Clearly, since there is only one neutral Higgs boson coupling to the effective down quark mass matrix, there are no FCNC effects at the tree level, as in the case of the standard model. Conversely, if the masses of the vectorlike quarks were at the weak scale, the mixing between the light $d$ quarks and $D$ would be significant and would lead to large FCNC effects at low energies. This provides a second way to introduce spontaneous CP violation without simultaneously having flavor changing neutral current effects. Note that the common thread between the examples in secs. 3 and 4 is the fact that CP is violated spontaneously at a high scale, which highlights the main point of this paper. In the remainder of this paper, we show how these ideas can be embedded into extended models on the way towards a possible grand unified scheme where spontaneous CP violation occurs at the GUT scale.
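The decoupling-theorem breakdown is easy to see numerically. The sketch below (all real couplings and the phase $\delta$ are illustrative choices, with the up sector taken trivial so that the light left rotation plays the role of the CKM matrix) diagonalizes $M_{dD}M^\dagger_{dD}$ and tracks a rephasing-invariant CP quantity of the light $3\times 3$ block as $\Lambda$ is raised:

```python
import numpy as np

# Diagonalize the 4x4 matrix M M^dagger; the Jarlskog-type invariant of the
# light 3x3 mixing block stays finite as Lambda grows, while the unitarity
# violation of that block shrinks.  All input numbers are illustrative.
md = np.array([[1.0, 0.2, 0.1],
               [0.0, 2.0, 0.3],
               [0.1, 0.0, 3.0]])
g  = np.array([0.3, 0.5, 0.2])
gp = np.array([0.1, 0.4, 0.6])
f, fp, delta = 1.0, 0.5, 0.7

def light_mixing(Lam):
    M = np.zeros((4, 4), dtype=complex)
    M[:3, :3] = md
    M[3, :3] = Lam*(g*np.exp(1j*delta) + gp*np.exp(-1j*delta))
    M[3, 3]  = Lam*(f*np.exp(1j*delta) + fp*np.exp(-1j*delta))
    w, U = np.linalg.eigh(M @ M.conj().T)     # ascending: 3 light, 1 heavy
    K = U[:3, :3]          # flavor components of the light mass eigenstates
    J = float(np.imag(K[0, 0]*K[1, 1]*np.conj(K[0, 1])*np.conj(K[1, 0])))
    uni = np.linalg.norm(K.conj().T @ K - np.eye(3))
    return J, uni

J1, uni1 = light_mixing(1.0e4)
J2, uni2 = light_mixing(1.0e7)
```

The invariant $J$ is essentially $\Lambda$-independent, while the departure of the light block from unitarity falls like $(m_d/\Lambda)^2$, exactly the pattern described in the text.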
\section{Embedding high scale SCPV into left-right symmetric models} The left-right symmetric models are based on the gauge group $SU(2)_L\times SU(2)_R\times U(1)_{B-L}\times SU(3)_c$, with fermions assigned in a left-right symmetric manner\cite{lrs} and Higgs fields consisting of a bidoublet $\Phi(2,2,0)$ and a pair of fields of either $(\chi_L(2,1,-1)\oplus \chi_R(1,2,-1))$ type (called $\chi$-type below) or $(\Delta_L(3,1,+2)\oplus\Delta_R(1,3,+2))$ type (called $\Delta$-type below). The left-right symmetric models are ideally suited to embed the first class of high scale SCPV models, since the bidoublet Higgs field already contains the necessary two standard model doublet Higgs fields in it. All we have to do is to embed the high scale singlet field into a left-right Higgs field. We present two different ways to do this embedding in the two subsections below. \subsection{Left-Right SCPV: Model I} The first way to implement high scale SCPV is by choosing two pairs of $\chi$-type or $\Delta$-type fields. Two pairs are needed since, with a single pair, the constraint that the $W_R$ scale must be much higher than the $W_L$ scale suppresses the SCPV phase by a factor $M_{W_L}/M_{W_R}$\cite{masiero}. The two $\Delta$ type model has been discussed in \cite{basecq}, where at the high scale the $\Delta_R$'s have vevs as follows: $<\Delta^0_{1,R}>~=~v_{1,R}$ and $<\Delta^0_{2,R}>~=~v_{2,R}e^{i\delta}$. The coupling of the form $Tr(\Phi^\dagger\tau_2\Phi^*\tau_2) Tr(\Delta^\dagger_{1,R}\Delta_{2,R})$ then induces the term $\lambda_{12}e^{i\delta}\Lambda^2\phi^*_1\phi_2$ at low energies, and the rest of the discussion is as in section 3 above. Let us now turn our attention to the embedding of the model of Ref.\cite{branco} into the left-right model. We consider the left-right model without the bidoublet but with the $(\chi_L(2,1,-1)\oplus \chi_R(1,2,-1))$ pair and three pairs of $SU(2)_L\times SU(2)_R$ singlet vector-like quarks $(P_{L,R}(1, 1, 4/3)$ and $N_{L,R}(1,1,-2/3))$.
Such models were extensively studied in the early 90's, but not from the point of view of spontaneous CP violation\cite{babu}. We take a complex singlet Higgs field $\sigma$ as before and assume the theory to be CP conserving prior to symmetry breaking, so that all couplings in the theory are real. Again, we assume the potential for the $\sigma$ field to be as in Eq.(\ref{sigma}), so that its minimum corresponds to a complex vev $<\sigma>= \Lambda e^{i\delta}$ as before. The vevs of the fields $\chi_{L,R}$ are real. To study the implications of the theory for low energy quark mixings, let us write down the quark Yukawa couplings: \begin{eqnarray} {\cal L}_Y~=~h^u_{ab}[\bar{Q}_{L,a}\chi_LP_{R,b}+ \bar{Q}_{R,a}\chi_RP_{L,b}]+ h^d_{ab}[\bar{Q}_{L,a}\tilde{\chi}_LN_{R,b}+ \bar{Q}_{R,a}\tilde{\chi}_RN_{L,b}] + h.c.\\ \nonumber +[f^u_{ab}\sigma +f^{u'}_{ab}\sigma^*]\bar{P}_{L,a}P_{R,b}+ [f^d_{ab}\sigma +f^{d'}_{ab}\sigma^*]\bar{N}_{L,a}N_{R,b} + h.c. \end{eqnarray} After spontaneous symmetry breaking we have $<\sigma>=\Lambda e^{i\delta}$, $<\chi^0_{L,R}>= v_{L,R}$ with $v_R\sim \Lambda \gg v_L$. This leads to mass matrices of the form: \begin{eqnarray} {\cal M}_{uP}~=~\pmatrix{0 & h^u_{ab}v_L\cr h^u_{ba}v_R & M_P}\\ {\cal M}_{dN}~=~\pmatrix{0 & h^d_{ab}v_L\cr h^d_{ba}v_R & M_N} \end{eqnarray} Left-right symmetry requires that $M_{P,N}=M^\dagger_{P,N}$, whereas the matrices $h^{u,d}$ are real. After diagonalization, the effective up and down quark mass matrices become: \begin{eqnarray} M_{u,d}\simeq v_Lv_R\,h^{u,d\,T}M^{-1}_{P,N}h^{u,d} \end{eqnarray} These matrices are hermitian and therefore lead to equal left and right handed CKM matrices, as in the usual left-right models with bi-doublets, and they lead to complex CKM matrices.
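The seesaw-type diagonalization quoted above can be checked numerically. In the sketch below (illustrative real couplings of our choosing; the heavy mass matrix is taken real symmetric, consistent with $M_N=M_N^\dagger$, and the index placement follows the block matrix as built in the code), the three light singular values of the full $6\times 6$ matrix reproduce the Schur-complement formula:

```python
import numpy as np

# Light singular values of [[0, vL h],[vR h^T, M_N]] versus the
# seesaw (Schur-complement) estimate vL vR h M_N^{-1} h^T.
h = np.array([[0.8, 0.1, 0.0],
              [0.2, 1.1, 0.3],
              [0.0, 0.1, 1.5]])
MN = 1e6*np.array([[2.0, 0.3, 0.1],
                   [0.3, 1.5, 0.2],
                   [0.1, 0.2, 3.0]])     # heavy, symmetric
vL, vR = 1.0, 1.0e3

full = np.block([[np.zeros((3, 3)), vL*h],
                 [vR*h.T, MN]])
sv = np.sort(np.linalg.svd(full, compute_uv=False))     # ascending
light = sv[:3]
seesaw = np.sort(np.linalg.svd(vL*vR*(h @ np.linalg.inv(MN) @ h.T),
                               compute_uv=False))
rel_err = np.max(np.abs(light - seesaw)/seesaw)
```

The agreement is at the level of the expansion parameter squared, $(h\,v_R/M_N)^2$, as expected for a seesaw-type block diagonalization.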
In fact, one can write the rotation matrices for both the up and down sectors as follows, in a basis where the couplings $h^{u,d}$ are diagonal: \begin{eqnarray} V^{u,d}~=~M^{-1/2}_{u,d}h^{u,d}U_{P,N}{M^{diag}}^{-1/2}_{P,N}\sqrt{v_Lv_R} \end{eqnarray} Clearly, since $U_{P,N}$ is a unitary matrix with complex phases, $V^{u,d}$ will lead to a complex CKM matrix, i.e. $U_{CKM}=V^uV^{d,\dagger}$. As far as the FCNC effects are concerned, they arise only at order $m_{u,d}/M_{P,N}$ and are therefore suppressed as $\Lambda\to$ large values. Note however that the quark mixing effects arise at zeroth order in this parameter. \subsection{Left-Right SCPV Model II: Connecting the CP violation and seesaw scales} In this subsection, we present a more economical left-right embedding of the high scale spontaneous CP violation with suppressed FCNC. The model consists of the usual left-right assignment of the fermions\cite{lrs}, and the Higgs system consists of a single bidoublet $\phi(2,2,0)$ and the pair $\chi_L(2,1,-1)\oplus\chi_R(1,2,-1)$. Here spontaneous CP violation is implemented via the vev of a CP and P odd real singlet scalar field $\eta$\cite{cmp}. The CP invariant Higgs potential for the theory can be written as: \begin{eqnarray} V(\chi_{L,R},\eta,\phi)~=~V_0(\phi)+ i\mu \eta Tr(\phi^\dagger_1{\phi}_2)+ M'\chi^\dagger_L\phi\chi_R+V_2(\eta, \chi_{L,R}) \end{eqnarray} where \begin{eqnarray} V_0(\phi)~=~-\mu^2_{ab}Tr(\phi^\dagger_a\phi_b)+ \sum\kappa_{abcd}Tr(\phi^\dagger_a\phi_b\phi^\dagger_c\phi_d)+ \kappa'_{abcd}Tr(\phi^\dagger_a\phi_b)Tr(\phi^\dagger_c\phi_d)+h.c. \end{eqnarray} with $(a,b)$ going over $(1,2)$, where $\phi_1=\phi$ and $\phi_2=\tau_2\phi^*\tau_2$.
\begin{eqnarray} V_2(\eta, \chi_{L,R})~=~M^2_\eta \eta^2+\lambda_\eta \eta^4 -M^2_\chi (\chi^\dagger_L\chi_L+ \chi^\dagger_R\chi_R)+ \lambda_\chi (\chi^\dagger_L\chi_L+ \chi^\dagger_R\chi_R)^2+\\ \nonumber \lambda'_\chi(\chi^\dagger_L\chi_L- \chi^\dagger_R\chi_R)^2+ M'_\eta\eta (\chi^\dagger_L\chi_L- \chi^\dagger_R\chi_R) \end{eqnarray} We have assumed that under a CP transformation $\eta\rightarrow -\eta$, $\chi_L\rightarrow \chi^\dagger_R$ and $\phi\rightarrow \phi^\dagger$. Invariance under this transformation requires that all parameters in the potential be real (except for the one imaginary coupling shown explicitly in the above equation). Note now that if the term in the potential connecting the $\eta$ and $\chi$ fields were absent, we would have $<\eta>=0$ since $M^2_\eta >0$. However, as soon as the $SU(2)_R$ symmetry is broken by $<\chi^0_R>\neq 0$, the $M'_\eta$ term in the potential introduces a tadpole term for $\eta$, thereby generating \begin{eqnarray} <\eta>~\simeq \frac{+M'_\eta v^2_R}{2M^2_\eta}. \end{eqnarray} Since $\eta$ is CP odd, this breaks CP spontaneously. The way it manifests itself is that the $i\mu<\eta> Tr(\phi^\dagger_1\phi_2)$ term now combines with the $\mu^2_{12}Tr\phi^\dagger_1\phi_2$ term to generate at low energies an effective soft CP breaking term as in Eq. (13), where $\phi_{1,2}$ are the two doublets contained in the bidoublet $\phi$ of the left-right model. The same arguments as in Appendix B then guarantee that in this model the FCNC can be suppressed by making one of the left-right Higgs doublets superheavy. This can also be seen in an alternative manner by minimizing the potential, noting that there is a range of values of the parameters in the potential for which we have $<\chi^0_R>~=v_R\neq 0; <\eta>\neq 0; <\chi_L>=0$, provided $M'_\eta <\eta> > 2\lambda'_\chi u^2$. The vevs of the $\chi_R$ and $\eta$ fields are much larger than the weak scale.
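For orientation, one can plug GUT-scale numbers (our own illustrative choice, not a fit) into the tadpole formula above and into the standard Planck-suppressed seesaw estimate $v_R^2/M_{Pl}$:

```python
# Back-of-the-envelope scales, all inputs illustrative:
# a GUT-scale right-handed vev and the reduced-ish Planck mass in GeV.
v_R  = 2.0e16                    # GeV, assumed GUT-scale vev
M_Pl = 1.2e19                    # GeV
M_seesaw = v_R**2/M_Pl           # ~ 3 x 10^13 GeV

# tadpole-induced CP-odd vev, taking both eta-sector masses ~ v_R
M_eta, Mp_eta = v_R, v_R
eta_vev = Mp_eta*v_R**2/(2*M_eta**2)     # ~ v_R/2
```

With these inputs the CP violating vev $\langle\eta\rangle$ comes out at the parity breaking scale itself, while the seesaw scale lands a few orders of magnitude below it, both controlled by the single input $v_R$.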
An interesting point worth stressing is that in this model, the scale of CP violation and the seesaw\cite{seesaw1} scale for neutrino masses are connected. To see this, note that the right handed neutrino masses come from the higher dimensional term $(L_R\bar{\chi}_R)^2/M_{Pl}$, leading to right handed neutrino masses given by $M_{seesaw}\simeq\frac{v^2_R}{M_{Pl}}$; from Eq.(29), the CP violating scale $<\eta>$ and $M_{seesaw}$ owe their origin to the same scale $v_R$, i.e. to the violation of parity. Since in grand unified theories $v_R$ can be identified with the GUT scale, one would therefore relate several scales of the theory, i.e. $M_{SCPV}$, $M_{seesaw}$ and $M_{GUT}$. \section{Possible extensions to supersymmetry and SUSY CP problem} As is well known, generic minimal supersymmetric extensions of the standard model (MSSM) are plagued with the SUSY CP problem. There have been many solutions suggested to solve this problem\cite{cures}. A simple solution would of course be to have CP spontaneously broken. However, in the MSSM, CP cannot be spontaneously broken. Furthermore, it has also been pointed out \cite{masip} that it is particularly hard to have spontaneous CP breaking by considering multi-Higgs generalizations of the MSSM. A possibility for achieving spontaneous CP breaking within SUSY involves the introduction of singlet chiral fields\cite{teixeira}. As far as the FCNC effects are concerned, in these models one may fine tune the $\mu$ terms to make some of the extra Higgs doublets heavy, thereby eliminating large FCNC effects. However, the early versions of these models are no longer viable, since they had a real CKM matrix, in contradiction with recent experimental data. Therefore, the ideas described in this paper may be particularly useful if one wants to solve the SUSY CP problem by spontaneous CP violation in a viable scenario, where vacuum phases do lead to a complex CKM matrix, while at the same time suppressing FCNC effects.
In fact, recently one such model has been suggested, which includes two singlet Higgs superfields and adds an extra vector like singlet fermion to the MSSM\cite{romao}, to break CP spontaneously and generate a complex CKM matrix. One can embed this scheme into the SUSY left-right model. A detailed analysis of SUSY models that exploit the ideas of this paper is under way and will be taken up in a forthcoming publication. \section{Conclusion} We have emphasized the close connection between spontaneous CP violation and FCNC effects in theories where the CP breaking vev is at the weak scale. We have also shown that, in order to avoid FCNC effects while at the same time generating a complex CKM matrix through vacuum phases, one is naturally led to have spontaneous CP breaking at a high energy scale, well above the electroweak scale. We then describe two classes of models, one without and one with extra heavy fermions, where a high scale vev breaking CP spontaneously leads to a complex CKM matrix, as given by experiment, without simultaneously having large FCNC effects. We then show how these models can be embedded into high scale left-right models where parity violation and neutrino mass are connected via the seesaw mechanism. We find one particular model where spontaneous parity violation triggers the spontaneous CP violation, thus connecting three scales: the seesaw scale for neutrino masses and the scales of spontaneous parity and CP violation. In conclusion, if our view on the origin of CP violation is correct, then small neutrino masses and CP violation at low energies would have in common the fact that they are both manifestations of physics occurring at a very high energy scale. \newpage \begin{center} {\bf Appendix A} \end{center} In this appendix, we elaborate on the connection between SCPV, FCNC and a complex CKM in the two Higgs doublet model.
For this purpose, we write the Yukawa Lagrangian as: \begin{eqnarray} {\cal L}_{Y}~=~\sum_{a,b} (h^{u,i}_{ab}\bar{Q}^0_{La}\phi_iu^0_{R,b}+ u\rightarrow d)~+~h.c. \end{eqnarray} It can be readily seen \cite{lavoura}, \cite{gcb} that in the quark mass eigenstate basis, the scalar couplings can be written as: \begin{eqnarray} {\cal L}_{scalar}~=~\left[\bar{u}D_uu+\bar{d}D_dd\right]\frac{H}{v} -\left[\bar{u}(N_uP_R+N^\dagger_uP_L)u+\bar{d}(N_dP_R+N^\dagger_dP_L)d \right] \frac{R}{v}\\ \nonumber +i\left[\bar{u}(N_uP_R-N^\dagger_uP_L)u-\bar{d}(N_dP_R-N^\dagger_dP_L)d \right] \frac{I}{v} \end{eqnarray} where \begin{eqnarray} H~=~\frac{1}{v}\left[v_1R_1+v_2R_2\right]\\ \nonumber R~=~\frac{1}{v}\left[v_2R_1-v_1R_2\right]\\ \nonumber I~=~\frac{1}{v}\left[v_2I_1-v_1I_2\right] \end{eqnarray} with $\phi^0_1~=~\frac{1}{\sqrt{2}}\left[v_1+R_1+iI_1\right]$ and $\phi^0_2~=~\frac{1}{\sqrt{2}}e^{i\delta}\left[v_2+R_2+iI_2\right]$. Here \begin{eqnarray} N_d~=~U^\dagger_{dL}\left[\frac{v_2}{\sqrt{2}}Y^d_1-\frac{v_1}{\sqrt{2}} e^{i\delta}Y^d_2\right]U_{dR} \end{eqnarray} where $U_{d_{L,R}}$ are the unitary matrices which diagonalize the down quark mass matrix $M_d$. Analogous expressions hold for $N_u$. It is clear that $N_{d,u}$ are in general not diagonal, and therefore $R$ and $I$ mediate FCNC. The quark mass matrices are of the form \begin{eqnarray} M_dM^\dagger_d~=~H_{real}+ 2iv_1v_2\sin\delta(Y^d_2{Y^d_1}^T-Y^d_1{Y^d_2}^T) \end{eqnarray} where $H_{real}$ is a symmetric real matrix. It is clear that $M_dM^\dagger_d$ (and similarly $M_uM^\dagger_u$) is in general a complex hermitian matrix, and therefore the CKM matrix is complex. If one fine tunes such that $Y^d_1 \propto Y^d_2$, $N_d$ would be diagonal and FCNC would be eliminated. But in that case, Eq.(34) implies that $M_dM^\dagger_d$ becomes real. This illustrates the connection between FCNC and the possibility of generating a complex CKM by a vacuum phase.
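This trade-off can be exhibited numerically. The sketch below (illustrative real Yukawa matrices of our own choosing, with $M_d=(v_1Y^d_1+v_2e^{i\delta}Y^d_2)/\sqrt{2}$) computes the off-diagonal part of $N_d$ in the mass basis, once for generic couplings and once in the fine-tuned case $Y^d_2\propto Y^d_1$:

```python
import numpy as np

# N_d in the down-quark mass basis: generically non-diagonal (tree-level
# FCNC), but exactly diagonal when Y^d_1 and Y^d_2 are proportional.
v1, v2, delta = 1.0, 0.8, 0.6
Y1 = np.array([[1.0, 0.3, 0.1],
               [0.2, 2.0, 0.0],
               [0.0, 0.4, 1.5]])

def offdiag_Nd(Y2):
    Md = (v1*Y1 + v2*np.exp(1j*delta)*Y2)/np.sqrt(2)
    U, s, Vh = np.linalg.svd(Md)          # Md = U diag(s) Vh, so U_dL = U
    X  = (v2*Y1 - v1*np.exp(1j*delta)*Y2)/np.sqrt(2)
    Nd = U.conj().T @ X @ Vh.conj().T     # couplings rotated to mass basis
    return np.max(np.abs(Nd - np.diag(np.diag(Nd))))

Y2_generic = np.array([[0.5, 0.0, 0.2],
                       [0.1, 1.0, 0.3],
                       [0.2, 0.0, 0.8]])
fcnc_generic      = offdiag_Nd(Y2_generic)
fcnc_proportional = offdiag_Nd(0.7*Y1)    # fine-tuned case Y^d_2 = c Y^d_1
```

In the proportional case $N_d$ collapses to a (complex) multiple of the diagonal singular-value matrix, so the flavor-violating couplings vanish identically, at the price of a real $M_dM_d^\dagger$, as stated in the text.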
\begin{center} {\bf Appendix B} \end{center} In this appendix, we discuss how the extra neutral Higgs fields in the model of section III, which are potential mediators of FCNC effects, can be made heavy, while at the same time the SM Higgs can be kept light by a single fine tuning. We will work with the potential in Eqs.(13) and (14). Clearly, the minimum of this potential corresponds to: \begin{eqnarray} <\phi_1>~=~\pmatrix{0\cr v_1}; <\phi_2>~=~\pmatrix{0\cr v_2e^{i\delta}} \end{eqnarray} Let us work in a basis in which \begin{eqnarray} \pmatrix{H_1 \cr H_2}~=~\frac{1}{v}\pmatrix{v_1 & v_2\cr v_2 & -v_1}\pmatrix{\phi_1 \cr e^{-i\beta}\phi_2} \end{eqnarray} The potential in Eq.(13) then looks as follows: \begin{eqnarray} V(H_{1,2})~=~\Lambda^2\left(\lambda_{11}H^\dagger_1H_1+ \lambda_{22}H^\dagger_2H_2+(\lambda_{12}H^\dagger_1H_2+ h.c.) \right) +\lambda_1 (H^\dagger_1 H_1)^2 +\lambda_2 (H^\dagger_2 H_2)^2\\ \nonumber +\lambda_3 (H^\dagger_1 H_1)(H^\dagger_2H_2) +\lambda_4 (H^\dagger_2 H_1)(H^\dagger_1H_2)+\left[\lambda_5 H^\dagger_1H_2+\lambda_6 H^\dagger_1H_1+\lambda_7H^\dagger_2H_2\right]H^\dagger_1H_2 + h.c. \end{eqnarray} Even though we use the same $\lambda$'s in both Eq.(13) and here, they are different; in fact, now $\lambda_{12}$ and $\lambda_{5,6,7}$ are in general complex, while the other $\lambda$'s are real. Now we can write the $H_{1,2}$ in terms of their components: \begin{eqnarray} H_1~=~\pmatrix{G^+\cr \frac{1}{\sqrt{2}}(v+H+iG)}; H_2~=~\pmatrix{C^+\cr \frac{S+iP}{\sqrt{2}}} \end{eqnarray} As already discussed in \cite{book}, the stability of the vacuum demands that the coefficients of the linear terms in $(H, S, P)$ vanish, which gives \begin{eqnarray} \Lambda^2 \lambda_{11}+2\lambda_1 v^2 =~0\\ \nonumber \Lambda^2 \lambda_{12}+\lambda_6 v^2 =~0 \end{eqnarray} These are the fine tuning conditions in the $(H_{1,2})$ basis needed to keep the SM Higgs field light and to obtain the correct electroweak symmetry breaking.
We can now write down the mass matrix for the other neutral Higgs fields $(H, S, P)$ as follows\cite{book}: \begin{eqnarray} {\cal M}_{H,S,P}~=~\pmatrix{4v^2\lambda_1 & 2v^2Re\lambda_6 & -2v^2Im \lambda_6\cr 2v^2Re\lambda_6 & \lambda_2 \Lambda^2+(\lambda_3 +\lambda_4+2 Re \lambda_5)v^2 & -2v^2Im\lambda_5\cr -2v^2Im \lambda_6 & -2v^2Im\lambda_5 & \lambda_2\Lambda^2 +\lambda_3 v^2+(\lambda_4-2Re\lambda_5)v^2} \end{eqnarray} From this expression, we can explicitly see that the beyond-the-standard-model neutral Higgs particles $(S,P)$ have masses of order $\Lambda$, whereas the SM Higgs field has a mass of order the electroweak scale. Also, the mixings of the SM Higgs which can generate FCNC effects are of order $v^2/\Lambda^2$ and hence very small as $(S,P)$ are made heavy. Furthermore, $\lambda_2\Lambda^2 +\lambda_3 v^2$ gives the mass squared of the charged Higgs field contained in the second Higgs doublet $H_2$. Thus we have a complex CKM from SCPV while at the same time suppressing the FCNC effects. The work of R. N. M. is supported by the National Science Foundation grant no. Phy-0354401 and the work of G. C. B. is supported by Fundacao para a Ciencia e a Tecnologia (FCT, Portugal), through the projects POCTI/FNU/44409/2002, PDCT/FP/FNU/50250/2003, POCI/FP/63415/2005, POCTI/FP/FNU/50167/2003, which are partially funded through POCTI (FEDER). Both authors are very grateful for the Alexander von Humboldt Senior Research Award which made this collaboration possible. They are also grateful to A. Buras and M. Lindner at TUM, and R. N. M. to H. Fritzsch at LMU, for kind hospitality while this work was done.
\section{Introduction} Perelman \cite{P1} proved the no shrinking, steady, and expanding breather theorems on compact manifolds by applying the monotonicity formulas of his $\mathcal{W}$-functional, $\mathcal{F}$-functional, and normalized $\mathcal{F}$-functional, respectively: breathers of these types must also be gradient Ricci solitons of the corresponding types, which evolve by self-diffeomorphism and scaling. In fact, in the compact category, the steady and expanding gradient Ricci solitons must also be Einstein manifolds. Let us first of all recall the definitions of breathers and solitons. \begin{defn} Let $(M,g(t))$ be a complete Ricci flow. If there exist two time instants $t_1<t_2$, a constant $\alpha>0$, and a self-diffeomorphism $\phi: M\rightarrow M$, such that \begin{eqnarray*} g(t_1)=\alpha \phi^*g(t_2), \end{eqnarray*} then $(M,g(t))$ is called a breather. If $\alpha=1$, $\alpha>1$, or $\alpha<1$, then the breather is called steady, shrinking, or expanding, respectively. \end{defn} \begin{defn} A Ricci soliton is a tuple $(M, g, X, \lambda)$, where $(M, g)$ is a complete Riemannian manifold, $X$ is a smooth vector field on $M$, and $\lambda\in\mathbb{R}$ is a constant, satisfying \begin{eqnarray*} Rc +\frac{1}{2}\mathcal{L}_Xg =\frac{\lambda}{2}g. \end{eqnarray*} If $\lambda=0$, $\lambda>0$, or $\lambda<0$, then the soliton is called steady, shrinking, or expanding, respectively. The soliton is called \emph{complete} if the vector field $X$ is complete. If there exists a smooth function $f$ on $M$ such that $X=\nabla f$, then $(M, g, f, \lambda)$ is called a gradient Ricci soliton. \end{defn} A complete Ricci soliton $(M,g,X,\lambda)$ always generates a canonical form, that is, a Ricci flow $g(t)=\tau(t)\phi_t^*g$ which moves by self-diffeomorphism and scaling, where $ \tau(t)=1-\lambda t$ and $\frac{\partial}{\partial t}\phi_t(x)=\frac{1}{ \tau(t)}X(\phi_t(x))$.
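For completeness, here is the standard one-line verification (our addition, not part of the original text) that the canonical form solves the Ricci flow; it also pins down the sign convention for $\tau$:

```latex
\frac{\partial}{\partial t}\big(\tau(t)\,\phi_t^*g\big)
 =\tau'(t)\,\phi_t^*g
  +\tau(t)\,\phi_t^*\Big(\mathcal{L}_{\frac{1}{\tau(t)}X}\,g\Big)
 =\phi_t^*\big(\tau'(t)\,g+\mathcal{L}_Xg\big)
 =\phi_t^*\big((\tau'(t)+\lambda)\,g-2Rc\big),
```

where the last step uses the soliton equation in the form $\mathcal{L}_Xg=\lambda g-2Rc$. Since $Rc(g(t))=\phi_t^*Rc(g)$ by the diffeomorphism and scale invariance of the Ricci tensor, the Ricci flow equation $\frac{\partial}{\partial t}g(t)=-2Rc(g(t))$ holds precisely when $\tau'(t)=-\lambda$; in particular, an expanding soliton ($\lambda<0$) produces an immortal canonical form.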
As indicated by Perelman \cite{P1}, if one views a Ricci flow as an orbit in the space $\operatorname{Met}(M)/ \operatorname{Diff}(M)$, where $\operatorname{Met}(M)$ is the space of all smooth Riemannian metrics and $ \operatorname{Diff}(M)$ stands for the group of self-diffeomorphisms and scalings, then a breather is a periodic orbit and a soliton is a static one. Therefore, the no breather theorem is tantamount to saying that the periodic orbits must also be static. Perelman's proofs of the no breather theorems require the existence of minimizers for the $\mathcal{W}$-functional, the $\mathcal{F}$-functional, and the normalized $\mathcal{F}$-functional, respectively. When the manifold is compact, such existence results are straightforward applications of variational methods. A natural question to ask is under what conditions the no breather theorems can be established for noncompact manifolds, and how to carry out the proof. One may certainly attempt to find the minimizers of these functionals, and this is possible under certain geometric conditions. For results of this type, the reader may refer to \cite{RV} and \cite{Zhang1}. We remark here that, in the noncompact category, this approach is almost impossible for the no steady or expanding breather theorems. The reason is that the $\mathcal{F}$-functional and the normalized $\mathcal{F}$-functional are generally not finite on noncompact steady and expanding gradient Ricci solitons, respectively, when these functionals are evaluated using the potential functions of the corresponding solitons. (One may think of the Bryant soliton, for example.) Another approach was initiated by the result of Lu-Zheng \cite{LZ}, where they constructed a Type I ancient solution using a shrinking breather, and proved, under certain geometric conditions, that the backward blow-down limit of this ancient solution must be the breather itself, which must also be a shrinking gradient Ricci soliton by Naber \cite{N}.
This method was refined, and the conclusion improved, by the second author \cite{Zh}, where the condition for the no shrinking breather theorem is reduced to bounded curvature alone. The authors \cite{CZhang} recently further reduced the condition for the no shrinking breather theorem to a lower bound on the Ricci curvature alone. In this paper, we continue our study in \cite{CZhang} and apply our method to noncompact expanding breathers. Recall that Feldman-Ilmanen-Ni \cite{FIN} established some forward monotonicity formulas for the Ricci flow as the dual version of Perelman's \cite{P1}, whose equalities are attained on expanding gradient Ricci solitons. We will be implementing Feldman-Ilmanen-Ni's forward reduced geometry in this paper in the same way as we have applied Perelman's reduced geometry in \cite{CZhang}. However, since the forward reduced geometry does not behave as nicely as Perelman's reduced geometry (intuitively, the reason is that on steady or expanding solitons, just as in the case of a shrinking soliton, the forward reduced volume should coincide with the $\mathcal{F}$-functional or the normalized $\mathcal{F}$-functional, respectively, evaluated using the corresponding potential function, whereas the latter two are generally infinite in the noncompact case), we will need to impose some strong curvature conditions. \begin{thm}\label{main} Let $(M,g(t))$ be a complete and noncompact expanding breather of the Ricci flow. Assume $g(t)$ has bounded curvature on each time-slice and either one of the following is true. \begin{enumerate}[(1)] \item $g(t)$ satisfies the weak $\operatorname{PIC}_2$ condition. \item $(M,g(t))$ is K\"ahler with nonnegative bisectional curvature. \end{enumerate} Then $(M,g(t))$ is the canonical form of an expanding gradient Ricci soliton.
\end{thm} \begin{rem} Our proof depends heavily on Hamilton's Harnack estimate (\cite{brendle}, \cite{cao}, and \cite{RH2}), and this is why we assume the above curvature conditions. The reader may easily verify that, if Hamilton's trace Harnack is assumed to be valid, then a nonnegative Ricci curvature assumption is sufficient for the proof. Lott \cite{L} gave an example of a complete but nongradient expanding soliton on a noncompact manifold (see page 635 therein). This soliton has bounded curvature but does not have nonnegative Ricci curvature. Therefore, Theorem \ref{main} cannot be proved in general without any curvature positivity condition. \end{rem} Before we conclude our introduction, a word is to be said about our method. Similarly to \cite{CZhang}, we construct a Type III immortal solution starting from the given expanding breather and consider the monotonicity of the quantity \begin{eqnarray*} \tilde{\theta}_+(t)=\frac{1}{(4\pi t)^{\frac{n}{2}}}\int_M e^{-\ell_+}d\mu_t, \end{eqnarray*} where $\ell_+$ is the forward reduced distance constructed in \cite{FIN}. Note that the monotonicity of this quantity relies on Hamilton's Harnack, as proved in Theorem 4.3 of \cite{LNi}. However, the forward reduced volume defined in \cite{FIN} \begin{eqnarray*} \theta_+(t)=\frac{1}{(4\pi t)^{\frac{n}{2}}}\int_M e^{\ell_+}d\mu_t, \end{eqnarray*} while automatically decreasing along any Ricci flow on a compact manifold, is generally infinite on noncompact manifolds. The organization of the paper is as follows. In Section 2, we review the basic $\mathcal{L}_+$-geometry introduced by Feldman-Ilmanen-Ni \cite{FIN}. In Section 3, we estimate the $\ell_+$-distance on Type III immortal Ricci flows. In Section 4, we prove the main theorem. \section{Preliminaries} Let $g(t)$ be a metric evolving by the Ricci flow equation on $M\times[0,T]$. We always assume that either $M$ is compact or $g(t)$ has bounded curvature at each time.
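Before the precise definitions are recalled, it may help to record the flat calibration case (this computation is our addition and is not needed for the proofs): on Euclidean $\mathbb{R}^n$ with the static flow, $R\equiv 0$, and the substitution $s=\sqrt{\eta}$ turns the weighted length of a curve from $(x,0)$ to $(y,t)$ into half the standard energy on $[0,\sqrt{t}]$, so

```latex
\ell_+(y,t)=\frac{1}{2\sqrt{t}}\inf_{\gamma}\frac{1}{2}\int_0^{\sqrt{t}}
\Big|\frac{d\gamma}{ds}\Big|^2ds
=\frac{1}{2\sqrt{t}}\cdot\frac{|y-x|^2}{2\sqrt{t}}
=\frac{|y-x|^2}{4t},
\qquad
\tilde{\theta}_+(t)=\frac{1}{(4\pi t)^{\frac{n}{2}}}
\int_{\mathbb{R}^n}e^{-\frac{|y-x|^2}{4t}}\,dy\equiv 1,
```

by the Gaussian integral; thus on the flat model the forward reduced distance is the parabolic distance and $\tilde{\theta}_+$ is constant, realizing the equality case of the monotonicity in Theorem 4.3 of \cite{LNi}.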
Fixing a point $x$, Feldman-Ilmanen-Ni \cite{FIN} defined the following dual version of Perelman's reduced distance, called the \emph{forward reduced distance function}: \begin{align}\label{l_+length} \ell_+(y,t)=\frac{1}{2\sqrt{t}}\inf\limits_{ \gamma}\int^t_0\sqrt{\eta}\left(R(\gamma(\eta),\eta)+|\gamma'(\eta)|_{g(\eta)}^2\right)d\eta, \end{align} where $(y,t)\in M\times(0,T]$, and the infimum is taken over all piecewise smooth curves $\gamma:[0,t]\rightarrow M$ satisfying $\gamma(0)=x$ and $\gamma(t)=y$. The point $(x,0)$ is called the \emph{base point} of $\ell_+$. We remark that since the minimizing curve in (\ref{l_+length}) also satisfies an $\mathcal{L}_+$-geodesic equation, whose form is very similar to that of an $\mathcal{L}$-geodesic equation, one may easily modify the arguments in, say, Chapter 7 of \cite{CCGGI} to verify that $\ell_+$ is locally Lipschitz under our assumption. We then summarize some equations and inequalities satisfied by $\ell_+$. The reader may note their similarity to the case of Perelman's $\mathcal{L}$-geometry. \begin{thm}[Corollary 2.1 in \cite{FIN}]\label{variation2} The $\ell_+$ function satisfies the following equalities for almost every $(y,t)\in M\times(0,T]$: \begin{align} \frac{\partial \ell_+}{\partial t}=R-\frac{\ell_+}{t}-\frac{K}{2t^{ \frac{3}{2}}},\label{eq_l_1}\\ |\nabla \ell_+|^2=\frac{\ell_+}{t}-R+\frac{K}{t^{ \frac{3}{2}}}.\label{eq_l_2} \end{align} Moreover, $\ell_+$ satisfies the following inequalities in the barrier sense or in the sense of distributions.
\begin{align} \Delta \ell_+ \leq R+\frac{n}{2t}-\frac{K}{2t^{ \frac{3}{2}}},\label{eq_l_3}\\ \frac{\partial \ell_+}{\partial t}+\Delta \ell_+ +|\nabla \ell_+|^2-R-\frac{n}{2t}\leq 0,\label{eq_1_4}\\ 2\Delta \ell_+ +|\nabla \ell_+|^2-R-\frac{\ell_++n}{t}\leq 0,\label{eq_1_5} \end{align} where \begin{align}\label{K} K=\int^t_0 \eta^{\frac{3}{2}}H(X)d\eta, \end{align} $X$ is the velocity of the minimizing $\mathcal{L}_+$-geodesic connecting $(x,0)$ and $(y,t)$, and $$H(X)=\frac{\partial R}{\partial t}+2\langle\nabla R,X\rangle+2Rc(X,X)+\frac{R}{t}$$ is Hamilton's trace Harnack. Furthermore, $\nabla \ell_+(y,t)=X(t)$ whenever the minimizing $\mathcal{L}_+$-geodesic connecting $(x,0)$ and $(y,t)$ is unique. \end{thm} \begin{thm}[Theorem 4.3 in \cite{LNi}]\label{Monotonicity} Let $(M,g(t))$ be a Ricci flow with bounded curvature at each time slice. Assume Hamilton's trace Harnack is nonnegative; then the quantity \begin{eqnarray*} \tilde{\theta}_+(t)=\frac{1}{(4\pi t)^{\frac{n}{2}}}\int_M e^{-\ell_+}d\mu_t \end{eqnarray*} is monotonically decreasing in $t$, where $\mu_t$ is the Riemannian measure of $g(t)$ and $\ell_+$ is the forward reduced distance based at some fixed point on $M\times\{0\}$. \end{thm} \begin{proof}[Sketch of proof] Combining (\ref{eq_l_1}), (\ref{eq_l_2}), and (\ref{eq_l_3}), we have \begin{eqnarray}\label{distribution} \frac{\partial \ell_+}{\partial t}-\Delta \ell_+ +|\nabla \ell_+|^2+R+\frac{n}{2t}\geq \frac{K}{t^{\frac{3}{2}}}\geq 0 \end{eqnarray} in the barrier sense or in the sense of distributions. Then, taking for granted the integration by parts at infinity, we compute \begin{align*} &\frac{d}{d t}\int_{M} (4\pi t)^{-\frac{n}{2}}e^{-\ell_+(x,t)}d\mu_t\\ =&-\int_{M}\left( \frac{\partial \ell_+}{\partial t}-\Delta \ell_+ +|\nabla \ell_+|^2+R+\frac{n}{2t}\right)(4\pi t)^{-\frac{n}{2}}e^{-\ell_+(x,t)}d\mu_t\\ \le&-\int_{M}\frac{K}{t^{\frac{3}{2}}} (4\pi t)^{-\frac{n}{2}}e^{-\ell_+(x,t)}d\mu_t\le 0.
\end{align*} Note that the above computation is valid in our case according to the estimates in the next section. \end{proof} \section{Estimates for the $\ell_+$ function on Type III Ricci flows} In this section, we use methods similar to those in \cite{N} to derive some estimates for the forward reduced distance on a Type III immortal Ricci flow. Since Theorem \ref{TypeIII_Estimate}(2)--(4) are already well established in \cite{N}, we shall be relatively brief in their proofs. Recall that an immortal solution $(M,g(t))_{t\in[0,\infty)}$ is called Type III if there exists a constant $C_0>0$ such that \begin{eqnarray}\label{T_III} |Rm|\leq\frac{C_0}{t} &\text{ everywhere on } &M\times(0,\infty). \end{eqnarray} \begin{thm}\label{TypeIII_Estimate} Let $(M^n,g(t))_{t\in[0,\infty)}$ be an $n$-dimensional Type III immortal Ricci flow with nonnegative Ricci curvature everywhere. Let $ \ell_+$ be the forward reduced distance function based at a fixed point $(x,0)$. Furthermore, assume that there exists a sequence of space-time points $\{(x_j,t_j)\}_{j=1}^\infty$ such that $t_j\nearrow\infty$ and \begin{equation}\label{key_assumption} \ell_+(x_j,t_j)\le A<\infty \ \text{ for all }\ j\geq 1. \end{equation} Then, there exists a positive constant $Q$ depending only on $A$, $\alpha\in(0,1)$, $n$, and the Type III constant $C_0$ in (\ref{T_III}), such that the following inequalities hold for all $(y,t)\in M^n\times [\alpha,\alpha^{-1}]$ and for all $j\geq 1$, understood in the barrier sense if any differentiation is involved.
\begin{enumerate} \item $\displaystyle |K^j|(y,t)\leq Q\sqrt{t}\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2$, \ \ $\displaystyle |\nabla K^j|(y,t)\leq Q\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2$, \\$\displaystyle \left|\frac{\partial}{\partial t} K^j\right|(y,t)\leq \frac{Q}{\sqrt{t}}\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2$, \item $\displaystyle\frac{1}{Q}\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2-Q\leq \ell^j_+(y,t)\leq Q\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2$, \item $\displaystyle|\nabla \ell^j_+|(y,t)\leq \frac{Q}{\sqrt{t}}\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)$ , \item $\displaystyle\left|\frac{\partial \ell^j_+}{\partial t}\right|(y,t)\leq \frac{Q}{t}\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2$, \end{enumerate} where $g_j(t):=t_j^{-1}g(tt_j)$ is the Ricci flow sequence obtained by Type III scaling, $K^j$ is as defined in (\ref{K}) for the scaled Ricci flow $g_j$, and $\ell_+^j(\cdot,t):=\ell_+(\cdot,tt_j)$ is the forward reduced distance based at $(x,0)$ and with respect to the Ricci flow $g_j(t)$. \end{thm} \begin{proof} In this proof, we will use the capital letter $C$ to denote a general estimation constant, which depends on $\alpha$, $n$, $A$, and $C_0$ as indicated in the statement of this theorem, and could vary from line to line. Since the Ricci flow is Type III, by Shi's gradient estimates we have \begin{eqnarray*}|R|(y,t)\le \frac{C}{t}, \ \ \ |\nabla R|(y,t)\leq \frac{C}{t^{\frac{3}{2}}}, \ \ \ \left|\frac{\partial R}{\partial t}\right|(y,t)\leq \frac{C}{t^2}, \ \ \ \left|\nabla\frac{\partial}{\partial t}R\right|\leq\frac{C}{t^{\frac{5}{2}}}, \end{eqnarray*} for all $(y,t)\in M\times (0,\infty)$. Let $(y,t)\in M\times (0,\infty)$ be such that the minimizing $\mathcal{L}_+$-geodesic $\gamma$ connecting $(x,0)$ and $(y,t)$ is unique.
We denote $X:=\gamma'$ and calculate that \begin{align}\label{ineq_K} |K| &=\left|\int^t_0 \eta^{\frac{3}{2}}\left(\frac{\partial R}{\partial \eta}+\frac{R}{\eta}+2\langle\nabla R,X\rangle+2Rc(X,X)\right)d\eta\right| \leq \int^t_0 \eta^{\frac{3}{2}}\left(\frac{C}{\eta^2}+\frac{C}{\eta^{\frac{3}{2}}}|X|+\frac{C}{\eta}|X|^2\right)d\eta\nonumber\\ &\leq \int_0^t\frac{C}{\eta^{\frac{1}{2}}}d\eta+\int_0^t\sqrt{\eta}|X|^2d\eta\leq 2C\sqrt{t}+C\int_0^t\sqrt{\eta}(|X|^2+R)d\eta=2C\sqrt{t}(1+\ell_+(y,t)). \end{align} Since the Type III scaling process does not alter the Type III constant in (\ref{T_III}), we obtain from (\ref{eq_l_1}), (\ref{eq_l_2}), and (\ref{ineq_K}) that \begin{align}\label{ineq_partial_t} \left|\frac{\partial{\ell^j_+}}{\partial t}\right|(y,t)&\leq \frac{C}{t}(1+\ell^j_+(y,t)),\\ |\nabla \ell^j_+|^2(y,t)&\leq \frac{C}{t}(1+\ell^j_+(y,t)).\label{ineq_gradient} \end{align} In view of the fact that \begin{align*} \ell^j_+(x_j,1)=\ell_+(x_j,t_j)\le A, \end{align*} we may integrate (\ref{ineq_partial_t}) to obtain \begin{align}\label{ineq_l_up} \ell^j_+(x_j,t)\le C \ \ \text{ for all } \ \ t\in [\alpha,\alpha^{-1}]. \end{align} Integrating (\ref{ineq_gradient}) in space and applying (\ref{ineq_l_up}), we have \begin{align}\label{ineq_l_up_b} \ell^j_+(y,t) \leq Q\left(1+\frac{d_{g_j(t)}(x_j,y)}{\sqrt{t}}\right)^2 \ \ \text{ for all }\ \ (y,t)\in M\times[\alpha,\alpha^{-1}]. \end{align} Conclusions (3) and (4), the second inequality of conclusion (2), and the first inequality of conclusion (1) now follow from (\ref{ineq_K}), (\ref{ineq_partial_t}), (\ref{ineq_gradient}), and (\ref{ineq_l_up_b}). To obtain the first inequality of conclusion (2), we fix $(y,t)\in M\times[\alpha,\alpha^{-1}]$ and let $\gamma_1(s)$ and $\gamma_2(s)$ be minimizing $\mathcal{L}_+$-geodesics from $(x,0)$ to $(x_j,t)$ and to $(y,t)$, respectively, both with respect to $g_j(t)$. We denote $f(s):=d_{g_j(s)}(\gamma_1(s),\gamma_2(s))$.
Then we have \begin{align*} \frac{d^-}{d s}f(s) &=\langle\nabla d_{g_j(s)},\gamma_1'(s)\rangle+\langle\nabla d_{g_j(s)},\gamma_2'(s)\rangle+\left(\frac{d^-}{d \tau}d_{g_j(\tau)}(\gamma_1(s),\gamma_2(s))\right)\Bigg|_{\tau=s}\\ &=\langle\nabla d_{g_j(s)},\nabla \ell^j_+(\gamma_1(s),s)\rangle+\langle\nabla d_{g_j(s)},\nabla \ell^j_+(\gamma_2(s),s)\rangle +\left(\frac{d^-}{d \tau}d_{g_j(\tau)}(\gamma_1(s),\gamma_2(s))\right)\Bigg|_{\tau=s}\\ &\leq |\nabla \ell^j_+(\gamma_1(s),s)|+|\nabla \ell^j_+(\gamma_2(s),s)|+ \left(\frac{d^-}{d \tau}\int_{\sigma}\sqrt{g_j(\tau)(\sigma',\sigma')}\right)\Bigg|_{\tau=s}, \end{align*} where $\sigma$ is a unit speed minimizing geodesic from $\gamma_1(s)$ to $\gamma_2(s)$ with respect to the metric $g_j(s)$, and $\frac{d^{-} f}{d s} = \liminf\limits _{h \rightarrow 0^{+}} \frac{f(s+h)-f(s)}{h}$ is the lower forward Dini derivative. By Lemma 18.1 in \cite{CCGGI}, we have \begin{align*} \left(\frac{d^-}{d \tau}\int_{\sigma}\sqrt{g_j(\tau)(\sigma',\sigma')}\right)\Bigg|_{\tau=s}=-\min\limits_{\eta\in\mathcal{Z}\left(s\right)}\int_{\eta}Rc_{g_j(s)}(\eta',\eta')\leq 0, \end{align*} where $\mathcal{Z}\left(s\right)$ denotes the set of all unit speed minimizing geodesics from $\gamma_1(s)$ to $\gamma_2(s)$ with respect to the metric $g_j(s)$. Then, by (\ref{ineq_gradient}), we get \begin{align}\label{ineq_2.4_1} \frac{d^-}{ds} f(s)\leq \sqrt{\frac{C}{s}(1+\ell^j_+(\gamma_1(s),s))}+\sqrt{\frac{C}{s}(1+\ell^j_+(\gamma_2(s),s))}. \end{align} On the other hand, for all $s\in(0, t]$ we have \begin{align*} \ell^j_+(\gamma_2(s),s)&=\frac{1}{2\sqrt{s}}\int^s_0\sqrt{\eta}(R+|X|^2) d\eta \leq \frac{1}{2\sqrt{s}}\int^t_0\sqrt{\eta}(R+|X|^2) d\eta=\frac{\sqrt{t}}{\sqrt{s}} \ell^j_+(y,t), \\ \ell^j_+(\gamma_1(s),s)&\leq\frac{\sqrt{t}}{\sqrt{s}}\ell^j_+(x_j,t)\leq C\frac{\sqrt{t}}{\sqrt{s}}, \end{align*} where in the latter formula we have applied (\ref{ineq_l_up}).
Then, (\ref{ineq_2.4_1}) becomes \begin{align*} \frac{d^-}{d s} f(s)\leq C\frac{t^{\frac{1}{4}}}{s^{\frac{3}{4}}}\left(1+\sqrt{1+\ell^j_+(y,t)}\right)\ \ \text{ for all }\ \ s\in(0,t]. \end{align*} Integrating the above inequality from $0$ to $t$, we obtain the first inequality of conclusion (2). Finally, to obtain the last two inequalities of conclusion (1), we let $\gamma$ be a minimizing $\mathcal{L}_+$-geodesic with respect to $g$, which connects $(x,0)$ and $(y,t)$, and $Y$ an $\mathcal{L}_+$-Jacobi field along $\gamma$, satisfying $[X,Y]=0$ and \begin{eqnarray} \nabla_XY=\ric(Y)+\frac{1}{2\eta}Y,\ \ \ |Y(\eta)|^2=\frac{\eta}{t}|Y(t)|^2=\frac{\eta}{t},\ \ \ |\nabla_X Y|\leq \frac{C}{t^{\frac{1}{2}}\eta^{\frac{1}{2}}}. \end{eqnarray} Then, we may compute \begin{align}\label{K_g} |\delta_YK|&=\Bigg|\int_0^t\eta^{\frac{3}{2}}\bigg(\Big\langle\nabla\frac{\partial}{\partial\eta}R, Y\Big\rangle+\frac{1}{\eta}\langle\nabla R,Y\rangle+2\langle\nabla^2R,X\otimes Y\rangle+2\langle\nabla R,\nabla_XY\rangle\\\nonumber &\quad +2\nabla\ric(Y,X,X)+2\ric(X,\nabla_XY)\bigg)d\eta\Bigg|\leq C\int_0^t\eta^{\frac{3}{2}}\left(\frac{1}{t^{\frac{1}{2}}\eta^2}+\frac{1}{t^{\frac{1}{2}}\eta^{\frac{3}{2}}}|X|+\frac{1}{t^{\frac{1}{2}}\eta}|X|^2\right)d\eta \\\nonumber &\leq \frac{C}{\sqrt{t}}\int_0^t\frac{1}{\eta^{\frac{1}{2}}}d\eta+\frac{C}{\sqrt{t}}\int_0^t\sqrt{\eta}(|X|^2+R)d\eta\leq C(1+\ell_+(y,t)). \end{align} Since the Type III constant in (\ref{T_III}) is not affected by the Type III scaling, we obtain the second inequality of conclusion (1) by (\ref{K_g}).
Next, we observe that \begin{align*} \left|\frac{\partial}{\partial t}K\right|(y,t)&=\left|\frac{d}{d\eta}\Big|_{\eta=t}K-\langle\nabla K, X\rangle(y,t)\right|\leq t^{\frac{3}{2}}\left|\frac{\partial R}{\partial t}+\frac{R}{t}+2\langle\nabla R,X\rangle+2Rc(X,X)\right|+|\nabla K||X| \\\nonumber &\leq\frac{C}{\sqrt{t}}+C|X|+C\sqrt{t}|X|^2+C(1+\ell_+(y,t))|X|\leq \frac{C}{\sqrt{t}}(1+\ell_+(y,t))+C\sqrt{t}|\nabla \ell_+|^2(y,t) \\\nonumber &\leq\frac{C}{\sqrt{t}}(1+\ell_+(y,t)). \end{align*} Here we have used $X(t)=\nabla\ell_+(y,t)$ and formula (\ref{ineq_gradient}). Again, by the scaling invariance of the Type III constant, we obtain the third inequality of conclusion (1). \end{proof} \section{The proof of the main theorem} In this section, we prove the main theorem by implementing the method of \cite{CZhang}, \cite{LZ}, and \cite{Zh}. Since our techniques and arguments are very similar to those of \cite{CZhang}, we do not include obvious modifications, and the reader is referred to that paper for a more detailed treatment. Let $(M,g_0(t))$ be an expanding breather as described in the statement of Theorem \ref{main}. Note that the assumptions therein guarantee the validity of Hamilton's Harnack estimate (\cite{brendle}, \cite{cao} and \cite{RH2}). After rescaling and translating in time, we may assume that there exists $\alpha\in(0,1)$ and a diffeomorphism $\phi:M\to M$, such that \begin{equation}\label{breather} \alpha g_0(1)=\phi^*g_0(0). \end{equation} Similarly to \cite{CZhang}, for each $j\geq 0$ we define \begin{eqnarray*} \displaystyle t_j&=&\sum^j_{k=0}\alpha^{-k},\ \ t_0=1, \\ g_j(t)&=&\alpha^{-j}(\phi^j)^*g_0(\alpha^{j}( t- t_{j-1})),\ \ t\in [ t_{j-1}, t_j]. \end{eqnarray*} Obviously, $t_j\nearrow\infty$. We may then define the spliced immortal solution \begin{eqnarray}\label{defined_ancient_solution} g( t)=\left\{ \begin{array}{ll} g_0( t), \quad & t\in [0,1], \\\nonumber g_j( t), \quad & t\in [ t_{j-1}, t_j]. \end{array} \right.
\end{eqnarray} It is straightforward to check that $ g_j(t_{j-1})=g_{j-1}(t_{j-1}) $ and $ |Rm_{g_{j}( t)}| \le \frac{C}{ t} $ for all $j\ge 1$, where the constant $C$ depends only on $\alpha$ and the curvature bound of the original breather. Then the immortal solution $g(t)$ is of Type III and is smooth by the uniqueness of the Ricci flow (cf. \cite{uniqueness1} and \cite{uniqueness2}). Next, we fix a point $p_0\in M$ and define $x_j=\phi^{-{(j+1)}}(p_0)$ for $j\ge 0$. Let $\ell_+$ be the forward reduced distance based at $(p_0,0)$, and we shall proceed to show \begin{eqnarray}\label{nonsense} \limsup_{j\rightarrow\infty}\ell_+(x_{j},t_{j})<\infty. \end{eqnarray} Let $\sigma:[0,1]\to M^n$ be a smooth curve satisfying $\sigma(0)=p_0$ and $\sigma(1)=x_0=\phi^{-1}(p_0)$. We define $\sigma_j:[t_j,t_{j+1}]\rightarrow M$ and $\gamma_j:[0, t_{j+1}]\to M$ as \begin{align} &\sigma_j( t)=\phi^{-{(j+1)}}\circ\sigma(\alpha^{j+1}( t- t_{j})),\quad t\in [ t_{j}, t_{j+1}], \\ &\gamma_j( t)=\left\{ \label{gamma} \begin{array}{ll} \sigma( t), \quad & t\in [0,1], \\ \sigma_i( t), \quad & t\in [ t_{i}, t_{i+1}],0\le i\le j. \end{array} \right. \end{align} Since \begin{equation*} \sigma_j( t_{j})=\phi^{-(j+1)}\circ\sigma(0)=\phi^{-j}\circ\sigma(1)=\phi^{-j}\circ\sigma(\alpha^{j}( t_{j}- t_{j-1}))=\sigma_{j-1}( t_{j}), \end{equation*} we have that $\gamma_j(t)$ defined in (\ref{gamma}) is a piecewise smooth $C^0$ curve satisfying $\gamma_j(0)=p_0$, $\gamma_j( t_j)=x_{j}$.
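The time bookkeeping behind the splicing and the finiteness claim (\ref{nonsense}) can be checked numerically. The following sketch is our addition, with an arbitrary test value of $\alpha\in(0,1)$; it verifies that each piece $g_j$ reparametrizes the unit time interval of the breather, and that the ratio $\alpha^{-(j+1)/2}/\sqrt{t_{j+1}}$ appearing in the estimate stays bounded.

```python
# Numeric bookkeeping for the spliced Type III solution; alpha is an
# arbitrary test value in (0, 1), and this check is not part of the proof.
alpha = 0.4
N = 40
t = [sum(alpha**(-k) for k in range(j + 1)) for j in range(N)]  # t_j

for j in range(1, N):
    # The piece g_j lives on [t_{j-1}, t_j], and alpha^j (t - t_{j-1})
    # sweeps exactly [0, 1], the time interval of the original breather.
    assert abs(alpha**j * (t[j] - t[j - 1]) - 1.0) < 1e-9
    # t_j >= alpha^{-j}: the lower bound used to conclude the limsup estimate.
    assert t[j] >= alpha**(-j)

# The quantity controlling the limsup: alpha^{-(j+1)/2} / (2 sqrt(t_{j+1})).
ratios = [alpha**(-(j + 1) / 2) / (2.0 * t[j + 1] ** 0.5) for j in range(N - 1)]
print(max(ratios))  # bounded; in fact < 1/2 since t_{j+1} > alpha^{-(j+1)}
```

This is exactly the mechanism by which the geometric growth $\alpha^{-(j+1)/2}$ of the $\mathcal{L}_+$-length is absorbed by the normalization $2\sqrt{t_{j+1}}$.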
We then compute \begin{align*} 2\sqrt{ t_{j+1}}\ell_+(x_{j+1}, t_{j+1})&\le\mathcal{L}_+(\sigma)+\sum\limits_{i=1}^{j} \int\limits_{ t_{i}}^{ t_{i+1}}\sqrt{ t}\left(R(\sigma_i( t), t)+|\sigma_i'( t)|^2_{g( t)}\right)d t\\ &\le D+\sum\limits_{i=1}^{j} \int\limits_{ t_{i}}^{ t_{i+1}}\sqrt{ t}\left(\frac{C}{ t}+A\alpha^{i+1}\right)d t\le D+C\sum\limits_{i=1}^{j} \alpha^{-\frac{i+1}{2}} \\ &\le D+ C\alpha^{-\frac{j+1}{2}}, \end{align*} for all $j\geq 0$, where $A:=\max\limits_{ t\in [0,1]}|\sigma'( t)|^2_{g_0( t)}$, $C$, and $D$ are all constants independent of $j$. Since $t_{j+1}\ge \alpha^{-(j+1)}$, we obtain (\ref{nonsense}) from the above computation. Theorem \ref{TypeIII_Estimate} is now applicable to $(M,g(t))_{t\in[0,\infty)}$ along the space-time sequence $\{(x_j,t_j)\}_{j=1}^\infty$. From the construction of $g(t)$, we easily observe that $$ \Big(M, t_j^{-1}g( t_j t),x_j\Big)_{t\in \left[1,\frac{ t_{j+1}}{ t_{j}}\right]}\rightarrow \Big(M,g_\infty(t),p_0\Big)_{t\in[1,\alpha^{-1}]}, $$ where $g_\infty$ and $g_0$ differ only by a scaling and a time shift. Furthermore, by Theorem \ref{TypeIII_Estimate}, we have that there exists a function $\ell_+^\infty: M\times[1,\alpha^{-1}]\rightarrow \mathbb{R}$, such that \begin{eqnarray*} \ell_+^j\rightarrow\ell_+^\infty \end{eqnarray*} in the $C^{0,\alpha}_{\operatorname{loc}}$ sense and the weak-$\ast$ $W^{1,2}_{\operatorname{loc}}$ sense, where $\ell_+^j(\cdot,t)=\ell_+(\cdot,tt_j)$. Our next goal is to show that $\ell_+^\infty$ gives rise to an expander structure on $(M,g_\infty(t))$. Arguing as in section 4 of \cite{N} or section 6 of \cite{CZhang} and applying the monotonicity in Theorem \ref{Monotonicity} in the same way as one has applied Perelman's reduced geometry, we have that $\ell_+^\infty: M\times[1,\alpha^{-1}]\rightarrow \mathbb{R}$ is a smooth function, satisfying \begin{eqnarray}\label{linfty} \frac{\partial \ell_+^\infty}{\partial t}-\Delta \ell_+^\infty +|\nabla \ell_+^\infty|^2+R_\infty+\frac{n}{2t}=0.
\end{eqnarray} Furthermore, by Theorem \ref{TypeIII_Estimate}(1), we may find a function $K^\infty: M\times[1,\alpha^{-1}]\rightarrow\mathbb{R}$, such that $K^j\rightarrow K^\infty$ in the $C^{0,\alpha}_{\operatorname{loc}}$ sense and the weak-$\ast$ $W^{1,2}_{\operatorname{loc}}$ sense. Fixing arbitrary $1<s_1<s_2<\alpha^{-1}$ and a nonnegative, smooth, time-independent cut-off function $\varphi$ compactly supported on $M$, we obtain from (\ref{distribution}) \begin{align*} 0&\leq\int_{s_1}^{s_2}\int_M\frac{1}{t^{\frac{3}{2}}}K^j\varphi (4\pi t)^{-\frac{n}{2}}e^{-\ell_+^j}d\mu_t^jdt \\ &\leq\int_{s_1}^{s_2}\int_M\left(\varphi\frac{\partial}{\partial t}\ell_+^j+\langle\nabla\varphi,\nabla\ell_+^j\rangle+R_j\varphi+\frac{n}{2t}\varphi\right) (4\pi t)^{-\frac{n}{2}}e^{-\ell_+^j}d\mu_t^jdt \\ &\rightarrow\int_{s_1}^{s_2}\int_M\left(\varphi\frac{\partial}{\partial t}\ell_+^\infty+\langle\nabla\varphi,\nabla\ell_+^\infty\rangle+R_\infty\varphi+\frac{n}{2t}\varphi\right) (4\pi t)^{-\frac{n}{2}}e^{-\ell_+^\infty}d\mu_t^\infty dt \\ &=0. \end{align*} This shows that \begin{eqnarray} K^\infty\equiv 0 &\text{ everywhere on }& M\times[1,\alpha^{-1}]. \end{eqnarray} Since equations (\ref{eq_l_1}) and (\ref{eq_l_2}) are both carried over to the limit in the sense of distributions, and since $\ell_+^\infty$ is smooth, the following hold on $M\times[1,\alpha^{-1}]$: \begin{eqnarray*} \frac{\partial}{\partial t}\ell_+^\infty=R_\infty-\frac{\ell_+^\infty}{t},\ \ \ |\nabla \ell_+^\infty|^2=\frac{\ell_+^\infty}{t}-R_\infty. \end{eqnarray*} In combination with (\ref{linfty}), we then have \begin{align*} \frac{\partial \ell_+^{\infty}}{\partial t}+\Delta \ell_+^{\infty} +|\nabla \ell_+^{\infty}|^2-R_{{\infty}}-\frac{n}{2t}=0, \\ 2\Delta \ell_+^{\infty} +|\nabla \ell_+^{\infty}|^2-R_{{\infty}}-\frac{\ell_+^{\infty}+n}{t}=0.
\end{align*} Then, by Theorem 1.2 in \cite{FIN}, \begin{align*} &\left(\frac{\partial}{\partial t}+\Delta-R_{\infty}\right) \left(t(2\Delta \ell_+^{\infty} +|\nabla \ell_+^{\infty}|^2-R_{\infty})-\ell_+^{\infty}-n\right)(4\pi t)^{-\frac{n}{2}}e^{\ell_+^{\infty}}\\ =&-2t \left|Rc_{\infty}-\nabla \nabla \ell_+^{\infty}+\frac{g_{\infty} }{2t}\right|^{2}(4\pi t)^{-\frac{n}{2}}e^{\ell_+^{\infty}}=0. \end{align*} Hence $\ell_+^{\infty}$ satisfies the following gradient expanding soliton equation \begin{align*} Rc_{{\infty}}-\nabla \nabla \ell_+^{\infty}= -\frac{1}{2t }g_{\infty}. \end{align*} Note that Theorem \ref{TypeIII_Estimate}(2) guarantees that $\ell_+^{\infty}$ is finite everywhere, so that the factor $\displaystyle (4\pi t)^{-\frac{n}{2}}e^{\ell_+^{\infty}}$ is strictly positive everywhere. This finishes the proof.
\section{Introduction} \label{sec:introduction} The branching problem for affine Lie algebras emerges in conformal field theory, for example, in the construction of modular-invariant partition functions \cite{difrancesco1997cft}. Recently, the problem of conformal embeddings was considered in the paper \cite{coquereaux2008conformal}. There are different approaches to the computation of branching coefficients. Some of them use the BGG resolution \cite{bernstein1975differential} (for Kac-Moody algebras the algorithm is described in \cite{kac1990idl},\cite{wakimoto2001idl}), the Schur function series \cite{fauser2006new}, the BRST cohomology \cite{Hwang:1994yr}, the Kac-Peterson formulas \cite{kac1990idl,quella2002branching}, or the combinatorial methods applied in \cite{feigin707principal}. In this paper we prove that for an arbitrary reductive subalgebra the branching coefficients are subject to recurrent properties that can be formulated explicitly, and that there exists an effective and simple algorithm to solve these recurrent relations step by step. The basic idea is similar to the one used in \cite{ilyin812pbc} for maximal embeddings. In our case the algorithm is essentially different: new properties of singular weights are established in order to deal with an arbitrary reductive injection $\frak{a} \rightarrow \frak{g}$. The principal point is to consider the subalgebra $\af$ together with its counterpart $\afb$ orthogonal to $\af$. For any reductive algebra $\af$ the subalgebra $\afb \subset \frak{g} $ is regular and reductive.
For a highest weight module $L^{\left( \mu \right)}$ and an orthogonal pair of subalgebras $\left( \af, \afb \right)$ we consider the so-called singular element $\Psi^{\left( \mu \right)}$ (the numerator in the Weyl character formula $ch\left( L^{\mu }\right) =\frac{\Psi ^{\left( \mu \right) }}{\Psi ^{\left( 0\right) }}$, see for example \cite{humphreys1997introduction}), the Weyl denominator $\Psi ^{\left( 0\right) }_{\afb}$, and the projection $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)} =\pi_{\af}\frac{\Psi ^{\left( \mu \right) }_{\frak{g}}}{\Psi ^{\left( 0\right) }_{\afb}}$. We prove that for any highest weight $\hf$-diagonalizable module $L^{\left( \mu \right)}$ and orthogonal pair $\left( \af, \afb \right)$ the element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$ has a decomposition with respect to the set of Weyl numerators $\Psi ^{\left( \mu \right) }_{ \afb }$ of $\afb$. This decomposition makes it possible to construct the recurrent property for the branching coefficients corresponding to the injection $\frak{a} \rightarrow \frak{g} $. The property is formulated in terms of a specific element $\Gamma_{\af \rightarrow \gf}$ of the group algebra $\mathcal{E}\left( \frak{g} \right)$ called ``the injection fan''. Using this tool we formulate a simple and explicit algorithm for the computation of branching coefficients, applicable to arbitrary (maximal or nonmaximal) subalgebras of finite-dimensional or affine Lie algebras. In the case of a maximal embedding the corresponding fan is unsubtracted, the singular element becomes trivial, $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}=\Psi ^{\left( \mu \right) }_{\left( \gf\right)}$, and the relations described earlier in \cite{ilyin812pbc} are recovered. We demonstrate that our algorithm is effective and can be used in studies of conformal embeddings and coset constructions in rational conformal field theory. The paper is organized as follows.
In subsection \ref{sec:notation} we fix the general notation. In Section \ref{sec:recurr-form-branch} we derive the decomposition formula based on recurrent properties of anomalous branching coefficients and describe the decomposition algorithm for integrable highest weight modules $L_{\mathfrak{g}}$ with respect to a reductive subalgebra $\mathfrak{a}\subset \mathfrak{g}$ (subsection \ref{sec:algorithm}). In Section \ref{sec:finite-dimens-lie} we present several simple examples for finite-dimensional Lie algebras. The affine Lie algebras and their applications in CFT models are considered in Section \ref{sec:phys-appl}. The general properties of the proposed algorithm and possible further developments are discussed in Section \ref{sec:conclusion}. \subsection{Notation} \label{sec:notation} Consider affine Lie algebras $\frak{g}$ and $\af$ with the underlying finite-dimensional subalgebras $\go$ and $\ao$, and an injection $\af\longrightarrow \frak{g}$ such that $\af$ is a reductive subalgebra $\frak{a}\subset \frak{g}$ with correlated root spaces: $\frak{h}_{\af}^{\ast }\subset \frak{h}_{\frak{g}}^{\ast }$ and $\frak{h}_{\ao}^{\ast }\subset \frak{h}_{\go}^{\ast }$. We use the following notations: $L^{\mu }$ $\left( L_{\af}^{\nu }\right) $ --- the integrable module of $\frak{g}$ with the highest weight $\mu $ (resp. the integrable $\af$-module with the highest weight $\nu $); $r$, $\left( r_{\af}\right) $ --- the rank of the algebra $\frak{g}$ $\left( \mbox{resp. }\af\right) $; $\Delta $ $\left( \Delta _{\af}\right) $ --- the root system; $\Delta ^{+} $ $\left( \mbox{resp. }\Delta _{\af}^{+}\right) $ --- the positive root system (of $\frak{g}$ and $\af$, respectively); $\mathrm{mult}\left( \alpha \right) $ $\left( \mathrm{mult}_{\af}\left( \alpha \right) \right) $ --- the multiplicity of the root $\alpha$ in $\Delta $ (resp.
in $\Delta _{\af}$); $\co{\Delta}$ $\left( \co{\Delta _{\af}} \right)$ --- the finite root system of the subalgebra $\co{\frak{g}}$ (resp. $\co{\af}$); $\mathcal{N}^{\mu }$ $\left( \mathcal{N}_{\af}^{\nu }\right) $ --- the weight diagram of $L^{\mu }$ $\left( \mbox{resp. }L_{\af}^{\nu }\right) $; $W$ $\left( W_{\af}\right) $ --- the corresponding Weyl group; $C$ $\left( C_{\af}\right) $ --- the fundamental Weyl chamber; $\bar{C}$ $\left(\bar{C}_{\mathfrak{a}}\right)$ --- the closure of the fundamental Weyl chamber; $\rho $ $\left( \rho _{\af}\right) $ --- the Weyl vector; $\epsilon \left( w\right) :=\det \left( w\right) $; $\alpha _{i}$ $\left( \beta _{j}\right) $ --- the $i$-th (resp. $j$-th) basic root for $\frak{g}$ $\left( \mbox{resp. }\af \right) $, $i=0,\ldots ,r$ $\left( j=0,\ldots ,r_{\af}\right) $; $\delta $ --- the imaginary root of $\frak{g}$ (and of $\af$ if any); $\alpha _{i}^{\vee }$ $\left( \alpha _{\left( \af\right) j}^{\vee }\right) $ --- the basic coroot for $\frak{g}$ $\left( \mbox{resp. }\af \right) $, $i=0,\ldots ,r$ $\left( j=0,\ldots ,r_{\af}\right) $; $\co{\xi }$ $\left( \co{\xi _{\left( \af\right) }}\right)$ --- the finite (classical) part of the weight $\xi \in P$ $\left( \mbox{resp. }\xi _{\left( \af\right) }\in P_{\af}\right) $; $\lambda =\left( \co{\lambda };k;n\right) $ --- the decomposition of an affine weight indicating the finite part $\co{\lambda }$, level $k$ and grade $n$; $P$ $\left( \mbox{resp. } P_{\af}\right) $ --- the weight lattice; $m_{\xi }^{\left( \mu \right) }$ $\left( m_{\xi }^{\left( \nu \right) }\right) $ --- the multiplicity of the weight $\xi \in P$ $\left( \mbox{resp. }\xi \in P_{\af}\right) $ in the module $L^{\mu }$ (resp. $\xi \in L_{\af}^{\nu } $); $ch\left( L^{\mu }\right) $ $\left( \mbox{resp. }ch\left( L_{\af}^{\nu }\right) \right) $ --- the formal character of $L^{\mu }$ $\left( \mbox{resp. }
L_{\af}^{\nu }\right) $; $ch\left( L^{\mu }\right) =\frac{\sum_{w\in W}\epsilon (w)e^{w\circ (\mu +\rho )-\rho }}{\prod_{\alpha \in \Delta ^{+}}\left( 1-e^{-\alpha }\right) ^{\mathrm{mult}\left( \alpha \right) }}$ --- the Weyl-Kac formula; $R:=\prod_{\alpha \in \Delta ^{+}}\left( 1-e^{-\alpha }\right) ^{\mathrm{mult}\left( \alpha \right) }\quad $ $\left( \mbox{resp. }R_{\af}:=\prod_{\alpha \in \Delta _{\af}^{+}}\left( 1-e^{-\alpha }\right) ^{\mathrm{mult}_{\af}\left( \alpha \right) }\right) $ --- the Weyl denominator. \section{Recurrent relations for branching coefficients.} \label{sec:recurr-form-branch} Consider the integrable module $L^{\mu }$ of $\frak{g}$ with the highest weight $\mu $ and let $\af\subset \frak{g}$ be a reductive subalgebra of $\frak{g}$. With respect to $\af$ the module $L^{\mu }$ is completely reducible, \begin{equation*} L_{\frak{g}\downarrow \af}^{\mu }=\bigoplus \limits_{\nu \in P_{\af}^{+}}b_{\nu }^{\left( \mu \right) }L_{\af}^{\nu }. \end{equation*} Using the projection operator $\pi_{\af}$ (to the weight space $\frak{h}_{\af}^{\ast}$) one can rewrite this decomposition in terms of formal characters: \begin{equation} \label{branching1} \pi _{\af}\circ ch\left( L^{\mu }\right) =\sum_{\nu \in P_{\af}^{+}}b_{\nu }^{(\mu)}ch\left( L_{\af}^{\nu }\right) . \end{equation} We are interested in the branching coefficients $b^{(\mu)}_{\nu}$. \subsection{Orthogonal subalgebra and injection fan.} \label{subsec:branching-orthog-pair} In this subsection we introduce some simple constructions that will be used in our study of branching, in particular the ''orthogonal partner'' $\afb$ for a reductive subalgebra $\af$ in $\gf$. In the Weyl-Kac formula both the numerator and the denominator can be considered as formal elements containing the singular weights of the Verma modules $V^{\xi}$ with the highest weights $\xi=\mu$ and $\xi=0$ \cite{humphreys1997introduction}.
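The Weyl-Kac formula can be checked directly in the simplest case. The following Python sketch (ours, not part of the original exposition) recovers the character of an $A_1$ module by dividing the singular element $\Psi^{(\mu)}$ by the Weyl denominator $R$; formal exponentials are stored in an ad hoc dictionary representation, with weights measured in units of the fundamental weight:

```python
from collections import defaultdict

def a1_character(mu):
    """Character of the A1 module L^mu via ch = Psi^(mu) / Psi^(0).

    Weights are measured in units of the fundamental weight omega,
    so rho = 1, Psi^(mu) = e^{mu} - e^{-mu-2} and R = 1 - e^{-2};
    a formal element sum_k c_k e^{k omega} is stored as {k: c_k}.
    """
    numerator = {mu: 1, -mu - 2: -1}      # singular element Psi^(mu)
    denominator = {0: 1, -2: -1}          # Weyl denominator R = Psi^(0)
    quotient = defaultdict(int)
    remainder = dict(numerator)
    while any(remainder.values()):
        # eliminate the current highest weight of the remainder;
        # the leading coefficient of the denominator (at weight 0) is 1
        k = max(w for w, c in remainder.items() if c != 0)
        c = remainder[k]
        quotient[k] += c
        for w, d in denominator.items():
            remainder[k + w] = remainder.get(k + w, 0) - c * d
    return {w: c for w, c in quotient.items() if c != 0}

# L^3: weights 3, 1, -1, -3, each with multiplicity one
print(a1_character(3))
```

The division terminates because the numerator is divisible by the denominator in the formal group algebra, and the resulting weight multiplicities are exactly those of $L^{\mu}$.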
We attribute singular elements to the corresponding integrable modules $L^{\mu }$ and $L_{\af}^{\nu }$: \begin{equation*} \Psi ^{\left( \mu \right) }:=\sum\limits_{w\in W}\epsilon (w)e^{w\circ (\mu +\rho )-\rho }, \end{equation*} \begin{equation*} \Psi _{\af}^{\left( \nu \right) }:= \sum\limits_{w\in W_{\af}}\epsilon (w)e^{w\circ (\nu +\rho _{_{\af}})-\rho _{_{\af}}}, \end{equation*} and use the Weyl-Kac formula in the form \begin{equation} \label{Weyl-Kac2} ch\left( L^{\mu }\right) =\frac{\Psi ^{\left( \mu \right) }} {\Psi ^{\left( 0 \right) }}=\frac{\Psi ^{\left( \mu \right) }}{R}. \end{equation} Applying formula (\ref{Weyl-Kac2}) to the branching rule (\ref{branching1}) we get the relation connecting the singular elements $\Psi ^{\left( \mu \right) }$ and $\Psi _{\af}^{\left( \nu \right) }$: \begin{eqnarray} \nonumber \pi _{\af}\left( \frac{\sum_{w \in W}\epsilon (w )e^{w (\mu +\rho )-\rho }}{\prod_{\alpha \in \Delta ^{+}}(1-e^{-\alpha })^{\mathrm{mult}(\alpha )}}\right) &=&\sum_{\nu \in P_{\af}^{+}}b_{\nu }^{(\mu )}\frac{\sum_{w \in W_{\af}}\epsilon (w )e^{w (\nu +\rho _{\af})-\rho _{\af}}}{\prod_{\beta \in \Delta _{\af}^{+}}(1-e^{-\beta })^{\mathrm{mult}_{\af}(\beta )}}, \label{eq:4} \\ \pi _{\af}\left( \frac{\Psi ^{\left( \mu \right) }}{R}\right) &=&\sum_{\nu \in P_{\af}^{+}}b_{\nu }^{(\mu )}\frac{\Psi _{\af}^{\left( \nu \right) }}{R_{\af}}. \end{eqnarray} Here $\Delta _{\af}^{+}$ is the set of positive roots of the subalgebra $\af$ (without loss of generality we consider them as vectors from the positive root space $\frak{h}^{\ast +}$ of $\frak{g}$).
Consider the root subspace $\frak{h}_{\perp \af}^{\ast }$ orthogonal to $\af$, \begin{equation*} \frak{h}_{\perp \af}^{\ast }:=\left\{ \eta \in \frak{h}^{\ast } |\forall h \in \hf_{\af}; \eta\left(h \right)=0 \right\} , \end{equation*} and the roots (correspondingly --- positive roots) of $\frak{g}$ orthogonal to $\af$, \begin{eqnarray*} \Delta _{\af_{\perp }} &:&=\left\{ \beta \in \Delta _{\frak{g}}| \forall h \in \hf_{\af}; \beta\left(h \right)=0 \right\} , \\ \Delta _{\af_{\perp }}^{+} &:&=\left\{ \beta ^{+}\in \Delta _{\frak{g}}^{+}|\forall h \in \hf_{\af}; \beta^{+}\left(h \right)=0 \right\} . \end{eqnarray*} Let $W_{\af_{\perp }}$ be the subgroup of $W$ generated by the reflections $w _{\beta }$ for the roots $\beta \in \Delta _{\af_{\perp }}^{+}$. The subsystem $\Delta _{\af_{\perp }}$ determines the subalgebra $\af_{\perp }$ with the Cartan subalgebra $\frak{h}_{\af_{\perp }}$. Let \begin{equation*} \frak{h}_{\perp }^{\ast }:=\left\{ \eta \in \frak{h}_{\perp \af}^{\ast }|\forall h \in \hf_{\af\oplus \af_{\perp}}; \eta \left( h \right)=0 \right\} \end{equation*} and consider the subalgebras \begin{eqnarray*} \widetilde{\af_{\perp }} &:&=\af_{\perp }\oplus \frak{h}_{\perp }, \\ \widetilde{\af} &:&=\af\oplus \frak{h}_{\perp }. \end{eqnarray*} The algebras $\af$ and $\af_{\perp }$ form the ''orthogonal pair'' $\left( \af,\af_{\perp}\right) $ of subalgebras in $\frak{g}$. For the Cartan subalgebras we have the decomposition \begin{equation} \frak{h}=\frak{h}_{\af}\oplus \frak{h}_{\af_{\perp }}\oplus \frak{h}_{\perp }=\frak{h}_{\widetilde{\af}}\oplus \frak{h}_{\af_{\perp }}=\frak{h}_{\widetilde{\af_{\perp }}}\oplus \frak{h}_{\af}.
\end{equation} For the subalgebras of an orthogonal pair $\left( \af,\af_{\perp }\right) $ we consider the corresponding Weyl vectors, $\rho _{\af}$ and $\rho _{\af_{\perp }}$, and form the so-called ''defects'' $\mathcal{D}_{\af}$ and $\mathcal{D}_{\af_{\perp }}$ of the injection: \begin{equation} \mathcal{D}_{\af}:=\rho _{\af}-\pi _{\af}\rho , \end{equation} \begin{equation} \label{defect-perp} \mathcal{D}_{\af_{\perp }}:=\rho _{\af_{\perp }}-\pi _{\af_{\perp }}\circ\rho . \end{equation} For the highest weight module $L_{\frak{g}}^{\mu }$ consider the singular weights $\left\{\left( w(\mu +\rho )-\rho \right)|w \in W \right\}$ and their projections to $\frak{h}_{\widetilde{\af_{\perp }}}^{\ast }$ (additionally shifted by the defect $-\mathcal{D}_{\af_{\perp }}$): \begin{equation*} \mu _{\widetilde{\af_{\perp }}}\left( w\right) :=\pi _{\widetilde{\af_{\perp }}}\circ\left[ w(\mu +\rho )-\rho \right] -\mathcal{D}_{\af_{\perp }},\quad w\in W. \end{equation*} Among the weights $\left\{\mu _{\widetilde{\af_{\perp }}}\left( w\right) |w\in W\right\}$ choose those located in the fundamental chamber $\overline{C_{\widetilde{\af_{\perp }}}}$ and let $U$ be the set of representatives $u$ for the classes $W/W_{\af_{\perp }}$ such that \begin{equation} U:=\left\{ u\in W|\quad \mu _{\widetilde{\af_{\perp }}}\left( u\right) \in \overline{C_{\widetilde{\af_{\perp }}}}\right\} \quad . \label{U-def} \end{equation} For the same set $U$ introduce the weights \begin{equation*} \mu _{\af}\left( u\right) :=\pi _{\af}\circ\left[ u(\mu +\rho )-\rho \right] +\mathcal{D}_{\af_{\perp }}. \end{equation*} To simplify the relations we shall from now on omit the sign ''$\circ$'' in projected weights. To describe the recurrent properties of the branching coefficients $b_{\nu }^{(\mu )}$ we shall use the technique elaborated in \cite{ilyin812pbc}. One of the main tools is the set of weights $\Gamma _{\af\rightarrow \frak{g}}$ called the injection fan.
As we consider the general situation (where the injection is not necessarily maximal) the notion of the injection fan is modified: \begin{definition} \label{fan-definition} For the product \begin{equation} \prod_{\alpha \in \Delta ^{+}\setminus \Delta _{\afb }^{+}}\left( 1-e^{-\pi _{\af}\alpha }\right) ^{\mathrm{mult}(\alpha )-\mathrm{mult}_{\af}(\pi _{\af}\alpha )}=-\sum_{\gamma \in P_{\af}}s(\gamma )e^{-\gamma } \label{eq:6} \end{equation} consider the carrier $\Phi _{\af\subset \frak{g}}\subset P_{\af}$ of the function $s(\gamma )$: \begin{equation} \Phi _{\af\subset \frak{g}}=\left\{ \gamma \in P_{\af}|s(\gamma )\neq 0\right\} . \label{eq:37} \end{equation} The ordering of the roots in $\co{\Delta _{\af}}$ induces a natural ordering of the weights in $P_{\af}$. Denote by $\gamma _{0}$ the lowest vector of $\Phi _{\af\subset \frak{g}}$. The set \begin{equation} \Gamma _{\af\rightarrow \frak{g}}=\left\{ \xi -\gamma _{0}|\xi \in \Phi _{\af\subset \frak{g}}\right\} \setminus \left\{ 0\right\} \label{fan-defined} \end{equation} is called the \textit{injection fan}. \end{definition} In the next subsection we shall see how the injection fan determines the recurrent properties of branching coefficients. Note that the injection fan is a universal instrument: it depends only on the injection and not on the properties of a particular module.
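To make the definition concrete, here is a small Python sketch (our own illustration; the tuple representation of roots and all the names are ours) that computes the carrier $\Phi_{\af\subset\frak{g}}$, the lowest vector $\gamma_0$ and the fan $\Gamma$ for the regular embedding $A_1\to B_2$ treated in Section \ref{sec:finite-dimens-lie}; projected weights are measured in units of the fundamental weight of $\af$:

```python
from collections import defaultdict

# Regular embedding A1 -> B2 in the orthonormal basis {e1, e2}:
# positive roots of B2, the A1 simple root beta = e1 + e2, and
# Delta_perp^+ = {e1 - e2}.  Projections onto beta are measured in
# units of the fundamental weight omega = beta/2.
pos_roots = [(1, -1), (0, 1), (1, 0), (1, 1)]
beta = (1, 1)

def proj_in_omega_units(alpha):
    # coefficient c with pi_a(alpha) = c * omega, where omega = beta/2
    return round(2 * (alpha[0] * beta[0] + alpha[1] * beta[1])
                 / (beta[0] ** 2 + beta[1] ** 2))

# product over Delta^+ \ Delta_perp^+ of (1 - e^{-pi(alpha)})^(mult - mult_a);
# here mult(alpha) = 1, and mult_a(pi(alpha)) = 1 only when pi(alpha) = beta
poly = {0: 1}                               # the constant term 1
for alpha in pos_roots:
    c = proj_in_omega_units(alpha)
    if c == 0:                              # alpha in Delta_perp^+: excluded
        continue
    exponent = 1 - (1 if c == 2 else 0)     # 2*omega = beta is an A1 root
    for _ in range(exponent):
        new = defaultdict(int)
        for w, k in poly.items():           # multiply by (1 - e^{-c omega})
            new[w] += k
            new[w + c] -= k
        poly = dict(new)

# product = -sum_gamma s(gamma) e^{-gamma}, so s(gamma) = -coefficient
s = {w: -k for w, k in poly.items() if k != 0}
gamma0 = min(s)                             # lowest vector of the carrier
fan = {w - gamma0: s[w] for w in s if w != gamma0}
print(s, gamma0, fan)
```

The output reproduces $\gamma_0=0$, $s(\gamma_0)=-1$ and the fan $\Gamma_{A_1\to B_2}=\{(1;2),(2;-1)\}$ computed by hand in Section \ref{sec:regul-embedd-a_1}.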
\subsection{Decomposing the singular element.} \label{subsec:decomp-sing-element} Now we shall prove that the Weyl-Kac character formula (in terms of singular elements) describes a particular case of a more general relation: \begin{lemma} \label{lemma} Let $\left( \af,\afb \right)$ be an orthogonal pair of reductive subalgebras in $\frak{g}$, with $\widetilde{\af_{\perp }}=\af_{\perp }\oplus \frak{h}_{\perp }$ and $\widetilde{\af}=\af\oplus \frak{h}_{\perp }$, let $L^{\mu }$ be the highest weight module with the singular element $\Psi ^{\left(\mu \right)}$ and $R_{\af_{\perp }}$ be the Weyl denominator for $\af_{\perp }$. Then the element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)} =\pi _{\af}\left( \frac{\Psi _{\frak{g}}^{\mu }}{R_{\af_{\perp }}}\right) $ can be decomposed into the sum over $u\in U$ (see (\ref{U-def})) of the singular weights $e^{\mu _{\af}\left( u\right) }$ with the coefficients $\epsilon (u)\mathrm{\dim }\left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) $: \begin{equation} \Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}=\quad \pi _{\af}\left( \frac{\Psi^{\mu }}{R_{\af_{\perp }}}\right) =\sum_{u\in U}\;\epsilon (u)\mathrm{\dim } \left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) e^{\mu _{\af}\left( u \right) }.
\end{equation} \end{lemma} \begin{proof} With $u\in U $ and $v\in W_{\afb}$ perform the decomposition \begin{equation*} u(\mu +\rho )=\pi _{\left( \af\right) } u(\mu +\rho )+\pi _{\left( \widetilde{\af_{\perp }}\right) } u(\mu +\rho ) \end{equation*} for the singular weight $vu(\mu +\rho )-\rho$: \begin{equation} \label{sing-decomp-1} \begin{array}{lcl} vu(\mu +\rho )-\rho &=&\pi _{\left( \af\right) }\left( u(\mu +\rho )\right) -\rho +\rho _{\af_{\perp }}+\pi _{\left( \hfb\right) }\rho \\ && + \ v\left( \pi _{\left( \widetilde{\af_{\perp }}\right) }u(\mu +\rho )-\rho _{\af_{\perp }}+\rho _{\af_{\perp }}\right) -\rho _{\af_{\perp }} -\pi _{\left( \hfb\right) }\rho. \end{array} \end{equation} Use the defect $\mathcal{D}_{\afb}$ (\ref{defect-perp}) to simplify the first summand in (\ref{sing-decomp-1}): \begin{equation*} \begin{array}{r} \pi _{\left( \af\right) }\left( u(\mu +\rho )\right) -\rho +\rho _{\af_{\perp }}+\pi _{\left( \hfb\right) }\rho = \\ \pi _{\left( \af\right) }\left( u(\mu +\rho )\right) -\pi _{\af}\rho -\pi _{\afb}\rho +\rho _{\afb}= \\ =\pi _{\left( \af\right) }\left( u(\mu +\rho )-\rho \right) +\mathcal{D}_{\afb}, \end{array} \end{equation*} and the second one: \begin{equation*} \begin{array}{c} v\left( \pi _{\left( \widetilde{\af_{\perp }}\right) }u(\mu +\rho )-\rho _{\af_{\perp }}+\rho _{\af_{\perp }}\right) -\rho _{\af_{\perp }}-\pi _{\left( \hfb\right) }\rho=\\ v\left( \pi _{\left( \widetilde{\afb}\right) }u(\mu +\rho ) - \mathcal{D}_{\afb} - \pi _{\left( \afb\right) }\rho-\pi _{\left( \hfb\right) }\rho +\rho _{\afb}\right) -\rho _{\afb}=\\ =v\left( \pi _{\left( \widetilde{\afb}\right) }\left[ u(\mu +\rho )-\rho\right] - \mathcal{D}_{\afb} +\rho _{\afb}\right) -\rho _{\afb}.
\end{array} \end{equation*} These expressions provide a kind of factorization of the anomalous element $\Psi^{\mu }$, and we find in it a combination of the anomalous elements $\Psi _{\widetilde{\af_{\perp }}}^{\eta }$ of the $\widetilde{\af_{\perp }}$-modules $L_{\widetilde{\af_{\perp }}}^{\eta }$: \begin{equation*} \begin{array}{l} \Psi^{\mu }=\sum_{u\in U}\sum_{v\in W_{\af_{\perp }}} \epsilon (v)\epsilon (u)e^{vu(\mu +\rho )-\rho }= \\ =\sum_{u\in U}\epsilon (u)e^{\pi _{\af}\left[ u(\mu +\rho )-\rho \right] +\mathcal{D}_{\af_{\perp }}}\sum_{v\in W_{\af_{\perp }}}\epsilon (v)e^{v\left( \pi _{\left( \widetilde{\af_{\perp }}\right) }\left[ u(\mu +\rho )-\rho \right] -\mathcal{D}_{\af_{\perp }}+\rho _{\af_{\perp }}\right) -\rho _{\af_{\perp }}}= \\ =\sum_{u\in U}\;\epsilon (u)e^{\pi _{\left( \af\right) }\left[ u(\mu +\rho )-\rho \right] +\mathcal{D}_{\af_{\perp }}}\Psi _{\widetilde{\af_{\perp }}}^{\pi _{\left( \widetilde{\af_{\perp }}\right) }\left[ u(\mu +\rho )-\rho \right] -\mathcal{D}_{\af_{\perp }}}. \end{array} \end{equation*} Dividing both sides by the Weyl denominator $R_{\af_{\perp }}=\prod_{\beta \in \Delta _{\af_{\perp }}^{+}}(1-e^{-\beta })^{\mathrm{mult}(\beta )}$ and projecting them to the weight space $\frak{h}_{\af}^{\ast }$ we obtain the desired relation: \begin{eqnarray*} \Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)} &=&\sum_{u\in U}\;\epsilon (u)e^{\pi _{\af}\left[ u(\mu +\rho )-\rho \right] }\pi _{\af}\left( \frac{\Psi _{\widetilde{\af_{\perp }}}^{\pi _{\left( \widetilde{\af_{\perp }}\right) }\left[ u(\mu +\rho )-\rho \right] -\mathcal{D}_{\af_{\perp }}}}{\prod_{\beta \in \Delta _{\af_{\perp }}^{+}}(1-e^{-\beta })^{\mathrm{mult}(\beta )}}\right) \\ &=&\sum_{u\in U}\;\epsilon (u)\mathrm{\dim }\left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) e^{\pi _{\af}\left[ u(\mu +\rho )-\rho \right] }.
\end{eqnarray*} \end{proof} \begin{remark} This relation can be considered a generalized form of the Weyl formula for the singular element $\Psi _{\frak{g}}^{\mu }$: the vectors $\mu _{\af}\left( u\right) $ play the role of singular weights, while instead of the determinants $\epsilon (u)$ we have the products $\epsilon (u)\mathrm{\dim }\left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) $. In fact, when $\frak{a}=\frak{g}$ both $\af_{\perp }$ and $\frak{h}_{\perp }$ are trivial, $U=W$, and the original Weyl formula is easily reobtained. \end{remark} \subsection{Constructing recurrent relations.} \label{subsec:Construct-recurrent-rel} Consider the right-hand side of relation (\ref{eq:4}). The numerator there describes the branching in terms of singular elements and it is reasonable to expand it as an element of $\mathcal{E}\left( \frak{g} \right)$: \begin{equation} \label{eq:21} \sum_{\nu \in \bar{C_{\mathfrak{a}}}}b_{\nu }^{\left( \mu \right) }\Psi _{\left( \af\right) }^{\left( \nu \right) }=\sum_{\lambda \in P_{\af}}k_{\lambda }^{\left( \mu \right) }e^{\lambda }. \end{equation} Here the coefficients $k_{\lambda}^{\left( \mu \right) }$ are integers and their signs depend on the lengths (see \cite{humphreys1997introduction}) of the Weyl group elements in $\Psi _{\left( \frak{a}\right) }^{\left( \nu \right) }$. The important property of the $k_{\lambda}^{\left( \mu \right) }$'s is that they coincide with the branching coefficients for all weights $\nu$ inside the main Weyl chamber: \begin{equation} b^{(\mu)}_{\nu}=k^{(\mu)}_{\nu} \; \mbox{for} \; \nu\in \bar{C}_{\mathfrak{a}}. \label{eq:21-1} \end{equation} We call the coefficients $k_{\lambda}$ the anomalous branching coefficients (see also \cite{ilyin812pbc}). Now we can state the main theorem which gives us an instrument for the recurrent computation of branching coefficients.
\begin{theorem} For the anomalous branching coefficients $k^{(\mu)}_{\nu}$ (\ref{eq:21}) the following relation holds \begin{equation} \label{recurrent-relation} \begin{array}{c} k_{\xi }^{\left( \mu \right) }=-\frac{1}{s\left( \gamma _{0}\right) }\left( \sum_{u\in U} \epsilon(u)\; \dim \left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) \delta_{\xi-\gamma_0,\pi_{\af}(u(\mu+\rho)-\rho)}+ \right.\\ \left. +\sum_{\gamma \in \Gamma _{\af \rightarrow \gf}}s\left( \gamma +\gamma _{0}\right) k_{\xi +\gamma }^{\left( \mu \right) }\right). \end{array} \end{equation} \end{theorem} \begin{proof} Rewrite the relation (\ref{eq:4}) for the element $ \frac{\Psi _{\frak{g}}^{\mu }}{R_{\af_{\perp }}}$ using the definition (\ref{eq:37}) of the carrier $\Phi _{\af\subset \frak{g}}$, \begin{equation*} \begin{array}{l} \Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)} =\pi _{\af}\left( \frac{\Psi _{\frak{g}}^{\mu }}{R_{\af_{\perp }}}\right) = \\[2mm] =\prod\limits_{\alpha \in \Delta ^{+}\setminus \Delta _{\afb }^{+}}\left( 1-e^{-\pi _{\af}\alpha }\right) ^{\mathrm{mult}(\alpha )-\mathrm{mult}_{\af}(\pi _{\af}\alpha )}\left( \sum\limits_{\nu \in P_{\af}^{+}}b_{\nu }^{(\mu )}\sum\limits_{w\in W_{\af}}\epsilon (w)e^{w(\nu +\rho _{\af})-\rho _{\af}}\right) = \\[5mm] =-\sum\limits_{\gamma \in \Phi _{\af\subset \frak{g}}}s(\gamma )e^{-\gamma }\left( \sum\limits_{\nu \in P_{\af}^{+},w\in W_{\af}}\epsilon (w)b_{\nu }^{(\mu )}e^{w(\nu +\rho _{\af})-\rho _{\af}}\right) .
\end{array} \end{equation*} Then expand the sum in brackets (with respect to the formal basis in $\mathcal{E}$): \begin{equation*} \Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)} =-\sum_{\gamma \in \Phi _{\af\subset \frak{g}}}s(\gamma )e^{-\gamma }\sum_{\lambda \in P_{\af}}k_{\lambda }^{(\mu )}e^{\lambda }=-\sum_{\gamma \in \Phi _{\af\subset \frak{g}}}\sum_{\lambda \in P_{\af}}s(\gamma )k_{\lambda }^{(\mu )}e^{\lambda -\gamma }. \end{equation*} Substitute in the left-hand side the expression obtained in Lemma \ref{lemma}, \begin{eqnarray*} \Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)} &=&\sum_{u\in U}\;\epsilon (u)e^{\pi _{\af}\left( \mu _{\af}\left( u\right) \right) }\dim \left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) \label{anom modules 2} \\ &=&\sum_{u\in U}\;\epsilon (u)e^{\pi _{\af}\left[ u(\mu +\rho )-\rho \right] }\dim \left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) \\ &=&-\sum_{\gamma \in \Phi _{\af\subset \frak{g}}}\sum_{\lambda \in P_{\af}}s(\gamma )k_{\lambda }^{(\mu )}e^{\lambda -\gamma }. \end{eqnarray*} The immediate consequence of this equality is: \begin{equation} \sum_{u\in U}\epsilon (u)\dim \left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) \delta _{\xi ,\pi _{\af}\left[ u(\mu +\rho )-\rho \right] }+\sum_{\gamma \in \Phi _{\af\subset \frak{g}}}s(\gamma )\;k_{\xi +\gamma }^{(\mu )}=0,\quad \xi \in P_{\af}. \label{eq:17} \end{equation} The obtained formula means that the coefficients $k_{\xi +\gamma }^{(\mu )}$ for $\gamma \in \Phi _{\af\subset \frak{g}}$ are not independent: they are subject to linear relations, and the form of these relations changes when the tested weight $\xi $ coincides with one of the ''singular weights'' $\left\{ \pi _{\af}\left[ u(\mu +\rho )-\rho \right] |u\in U\right\} $.
To conclude the proof we extract the lowest weight $\gamma _{0}\in \Phi _{\af\subset \frak{g}}$ and pass to the summation over the vectors of the injection fan $\Gamma _{\af\rightarrow \frak{g}}$ (see Definition \ref{fan-definition}). Thus we get the desired recurrent relation (\ref{recurrent-relation}). \end{proof} \subsection{Embeddings and orthogonal pairs in simple Lie algebras} \label{sect-embeddings} In this subsection we discuss some properties of ''orthogonal pairs'' of subalgebras in simple Lie algebras of the classical series. When both $\frak{g}$ and $\af$ are finite-dimensional, all the regular embeddings can be obtained by a successive elimination of nodes in the extended Dynkin diagram of $\frak{g}$ (and $\Delta _{\bot }^{+}=\emptyset $ if $\af$ is maximal). For the classical series $A$, $C$ and $D$, when the regular injection $\af\rightarrow \frak{g}$ is thus fixed, the Dynkin diagram for $\af_{\bot }$ is obtained from the extended diagram of $\frak{g}$ by eliminating the subdiagram of $\af$ and the adjacent nodes: \begin{table}[tbh] \label{tab:diagrams} \noindent \centering{\ \begin{tabular}{|l|l|l|} \hline $\frak{g}$ & Extended diagram of $\frak{g}$ & Diagrams of the subalgebras $\af,\; \afb$ \\ \hline $A_n$ & \includegraphics{table1_1_l_.eps} & \includegraphics{table1_1_r_.eps} \\ \hline $C_n$ & \includegraphics{table1_3_l_.eps} & \includegraphics{table1_3_r_.eps} \\ \hline $D_n$ & \includegraphics{table1_4_l_.eps} & \includegraphics{table1_4_r_.eps} \\ \hline \end{tabular} } \caption{Subalgebras $\af,\;\af_{\bot }$ for the classical series} \end{table} In the case of the $B$ series the situation is different. The reason is that here the subalgebra $\af_{\bot }$ may be larger than the one obtained by elimination of the subdiagram of $\af$ and the adjacent nodes. The subalgebras of the orthogonal pair, $\af$ and $\af_{\bot }$, need not form a direct sum in $\frak{g}$.
It can be directly checked that when $\frak{g}=B_{r}$ and $\af=B_{r_{\af}}$ the orthogonal subalgebra is $\af_{\bot }=B_{r-r_{\af}}$. Consider the injection $B_{r_{\af}}\rightarrow B_{r},\quad 1<r_{\af}<r$. By eliminating the simple root $\alpha _{r_{\af}-1}=e_{r_{\af}-1}-e_{r_{\af}}$ one splits the extended Dynkin diagram of $B_{r}$ into the disjoint diagrams for $\af=B_{r_{\af}}$ and $D_{r-r_{\af}}$. But the system $\Delta _{\af_{\perp }}$ contains not only the simple roots $\left\{ e_{1}-e_{2},e_{2}-e_{3},\ldots ,e_{r_{\af}-2}-e_{r_{\af}-1},e_{1}+e_{2}\right\} $ but also the root $e_{r_{\af}-1}$. Thus $\Delta _{\af_{\perp }}$ forms a subsystem of the type $B_{r-r_{\af}}$ and the orthogonal pair for the injection $B_{r_{\af}}\rightarrow B_{r}$ is $\left( B_{r_{\af}},B_{r-r_{\af}}\right) $. In the next Section a particular case of such an orthogonal pair is presented for the injection $B_{2}\rightarrow B_{4}$ (see Figure \ref{fig:dynkin}). The complete classification of regular subalgebras for affine Lie algebras can be found in the recent paper \cite{1751-8121-41-36-365204}. From the complete classification of maximal special subalgebras in classical Lie algebras \cite{dynkin1952semisimple} we can deduce the following list of pairs of orthogonal subalgebras $\af,\;\af_{\bot }$: \begin{equation*} \begin{array}{lll} su(p)\oplus su(q) & \subset su(pq) & \\ so(p)\oplus so(q) & \subset so(pq) & \\ sp(2p)\oplus sp(2q) & \subset so(4pq) & \\ sp(2p)\oplus so(q) & \subset sp(2pq) & \\ so(p)\oplus so(q) & \subset so(p+q) & \mathrm{for}\;p\;\mathrm{and}\;q\;\mathrm{odd}. \end{array} \end{equation*} \subsection{Algorithm for recursive computation of branching coefficients} \label{sec:algorithm} The recurrent relation (\ref{recurrent-relation}) allows us to formulate an algorithm for the recursive computation of branching coefficients. In this algorithm there is no need to construct the module $L^{(\mu)}_{\frak{g}}$ or any of the modules $L^{(\nu)}_{\af}$.
It contains the following steps: \begin{enumerate} \item Construct the root system $\Delta _{\af}$ for the embedding $\af\rightarrow \frak{g}$. \item Select the positive roots $\alpha \in \Delta ^{+}$ orthogonal to $\af$, i.e. form the set $\Delta _{\afb }^{+}$. \item Construct the set $\Gamma _{\af\rightarrow \frak{g}}$. The relation (\ref{eq:6}) defines the sign function $s(\gamma)$ and the set $\Phi_{\af\subset \frak{g}}$, where the lowest weight $\gamma_0$ is to be subtracted to get the fan (\ref{fan-defined}): $\Gamma _{\af\rightarrow \frak{g}}=\left\{ \xi -\gamma _{0}|\xi \in \Phi _{\af\subset \frak{g}}\right\} \setminus \left\{ 0\right\}$. \item Construct the set $\widehat{\Psi ^{(\mu )}}=\left\{ w (\mu +\rho )-\rho ;\;w \in W\right\} $ of singular weights for the $\frak{g}$-module $L^{(\mu )}$. \item Select the weights $\left\{ \mu _{\widetilde{\af_{\perp }}}\left( w\right) =\pi _{\widetilde{\af_{\perp }}}\left[ w(\mu +\rho )-\rho \right] -\mathcal{D}_{\af_{\perp }}\in \overline{C_{\widetilde{\af_{\perp }}}}\right\} $. Since the set $\Delta _{\bot }^{+}$ is fixed we can easily check whether the weight $\mu _{\widetilde{\af_{\perp }}}\left( w\right) $ belongs to the main Weyl chamber $\overline{C_{\widetilde{\af_{\perp }}}}$ (by computing its scalar products with the fundamental weights of $\afb$). \item For the weights $\mu _{\widetilde{\af_{\perp }}}\left( w\right) $ calculate the dimensions of the corresponding modules $\mathrm{\dim }\left( L_{\widetilde{\af_{\perp }}}^{\mu _{\widetilde{\af_{\perp }}}\left( u\right) }\right) $ using the Weyl dimension formula and construct the singular element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$. \item Calculate the anomalous branching coefficients using the recurrent relation (\ref{recurrent-relation}) and select among them those corresponding to the weights in the main Weyl chamber $\overline{C_{\af}}$.
\end{enumerate} We can speed up the algorithm by a one-time computation of the representatives of the classes $W/W_{\afb }$. The next section contains examples illustrating the application of this algorithm. \section{Branching for finite dimensional Lie algebras} \label{sec:finite-dimens-lie} \subsection{Regular embedding of $A_1$ into $B_2$} \label{sec:regul-embedd-a_1} Consider the regular embedding $A_1\to B_2$. The simple roots $\alpha_1, \alpha_2$ of $B_2$ are presented as the dashed vectors in Figure \ref{fig:B2_A1}. We denote the corresponding Weyl reflections by $w_1, w_2$. The simple root $\beta = \alpha_1+2\alpha_2$ of $A_1$ is indicated as the grey vector. \begin{figure}[p] \noindent\centering{ \includegraphics[width=80mm]{figure1.eps} } \caption{Regular embedding of $A_1$ into $B_2$. Simple roots $\alpha_1, \alpha_2$ of $B_2$ are presented as the dashed vectors. The simple root $\beta = \alpha_1+2\alpha_2$ of $A_1$ is indicated as the grey vector. The highest weight of the fundamental representation $L^{(1,0)=\omega_1}_{B_2}$ is shown by the black vector. The weights of the singular element $\Psi^{(\omega_1)}$ are marked by circles with superscripts indicating the corresponding determinants $\epsilon(w)$.} \label{fig:B2_A1} \noindent\centering{ \includegraphics[width=80mm]{figure2.eps} } \caption{Here, in addition to the diagram presented in Figure \ref{fig:B2_A1}, the weights of the $\left( \afb=A_1 \right)$-modules $L_{\af_{\perp }}^{\mu_{\af_{\perp }}\left( u\right) }$ originating in the points $\pi _{\af}\left[ u(\mu +\rho )-\rho \right] $ are shown by dotted lines. The superscripts over the highest weights $\mu_{\af_{\perp }}\left( u\right)$ are now the products $\epsilon(u)\dim\left(L_{\af_{\perp }}^{\mu_{\af_{\perp }}\left( u\right) }\right)$. Coordinates along the root $\beta$ are counted in terms of the fundamental weight of $\af$.
} \label{fig:B2_A1_2} \end{figure} Let us perform the reduction of the fundamental representation $L^{(1,0)=\omega_1}_{B_2}$ ($\omega_1$ --- the black vector in Figure \ref{fig:B2_A1}). The root $\alpha_1$ is orthogonal to $\beta$, so we have $\Delta_{\perp}^+ = \left\{ \alpha_1 \right\}$. According to Definition \ref{fan-definition} the fan $\Gamma_{A_1\to B_2}$ consists of two weights: \begin{equation*} \label{eq:22} \Gamma_{A_1\to B_2}=\left\{ (1;2),\; (2;-1) \right\}, \end{equation*} where the second component is the value of the sign function $s(\gamma)$. The singular weights $\left\{ w (\omega_1 +\rho)-\rho ;\;w \in W\right\}$ are indicated by circles with the superscript $\epsilon\left( w \right)$. The set $U$ represents the factor $W/W_{\afb}$ where $W_{\afb}=\left\{e,w_1\right\}$. This means that the singular weights located above the line generated by $\beta$ belong to the Weyl chamber $\overline{C_{\widetilde{\af_{\perp }}}}$. According to formula (\ref{defect-perp}), in our case we have $\mathcal{D}_{\af_{\perp }}=0$ and $\hf_{\perp }=0$, thus $\left\{ \mu _{\af_{\perp }}\left( w\right) =\pi _{\af_{\perp }}\left[ w(\mu +\rho)-\rho \right]\right\}$. We obtain four highest weights for $\af_{\perp }$-modules. In terms of the $\af_{\perp }$-fundamental weight $\frac{1}{2} \alpha_1$ these highest weights $\left\{ \mu _{\af_{\perp }}\left( u\right) =\pi _{\af_{\perp }}\left[ u(\mu +\rho)-\rho \right]| u \in U \right\}$ are $\left\{ \left( 1\right) \left( 2\right) \left( 2\right) \left( 1\right) \right\}$. To visualize the procedure we indicate explicitly in Figure \ref{fig:B2_A1_2} how the corresponding weight diagrams $\left\{ \mathcal{N}_{\af_{\perp }}^{\mu _{\af_{\perp }}\left( u\right) }\right\} $ are attached to the set of $\af$-weights $\left\{ \mu _{\af}\left( u\right)\right\} =\left\{\pi _{\af}\left[ u(\mu +\rho )-\rho \right]\right\} =\left\{ \left( 1\right) \left( 0\right) \left( -4\right) \left( -5\right) \right\}$.
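At this point all the data for the recursion are in place and the computation can be mechanized. A Python sketch (ours; it hardcodes the numbers just obtained: the attachment points $\{1,0,-4,-5\}$ with the products $\epsilon(u)\dim(\cdot)=\{2,-3,3,-2\}$ read off from Figure \ref{fig:B2_A1_2}, the fan $\{(1;2),(2;-1)\}$, and $\gamma_0=0$ with $s(\gamma_0)=-1$):

```python
# Singular element Psi^(mu)_(a, a_perp): weight -> epsilon(u)*dim(...)
singular = {1: 2, 0: -3, -4: 3, -5: -2}
fan = {1: 2, 2: -1}         # gamma -> s(gamma + gamma0), here gamma0 = 0
s_gamma0 = -1

k = {}                      # anomalous branching coefficients k_xi
top = max(singular)
for xi in range(top, -6, -1):          # sweep downwards from the highest weight
    total = singular.get(xi, 0)        # delta-term (with gamma0 = 0)
    total += sum(s * k.get(xi + g, 0) for g, s in fan.items())
    k[xi] = -total // s_gamma0         # the recurrent relation
branching = {xi: c for xi, c in k.items() if xi >= 0 and c != 0}
print(branching)
```

Sweeping downwards yields $k_1=2$ and $k_0=1$, and the dimensions check out: $2\cdot\dim L^{\omega}_{A_1}+\dim L^{0}_{A_1}=2\cdot 2+1=5=\dim L^{\omega_1}_{B_2}$.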
In fact we do not need the weight diagrams but only the dimensions of the corresponding modules $L_{\af_{\perp }}^{\mu_{\af_{\perp }}\left( u\right) }$ multiplied by $\epsilon \left( u\right) $. The obtained values are to be attributed to the points $\left\{ \left( 1\right) \left( 0\right) \left( -4\right) \left( -5\right) \right\}$ in $P_{\af}$. The corresponding element of ${\cal E}_{\af}$ is the singular element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$ with the set of weights having anomalous multiplicities: \begin{equation} \label{eq:25} \left\{(1;2),\; (0;-3),\; (-4;3),\; (-5;-2)\right\}. \end{equation} Applying formula (\ref{recurrent-relation}) with the fan $\Gamma_{A_1\to B_2}$ to the set (\ref{eq:25}) we get zeros for the weights greater than the highest anomalous vector $(1;2)$ and $k^{(1,0)}_1=2$ for the vector $(1;2)$ itself. For the anomalous weight $(0;-3)$ on the boundary of $\bar{C}^{(0)}_{\af}$ the recurrent relation gives \begin{equation*} \label{eq:23} k^{(1,0)}_{0}=-1\cdot k^{(1,0)}_2 +2\cdot k^{(1,0)}_1 - 3\cdot \delta_{0,0} = 1, \end{equation*} and the branching is completed: $L_{B_2\downarrow A_1}^{\omega_1}= 2L_{A_1}^{\omega_{\left(A_1\right)} } \bigoplus L_{A_1}^{0}$ (the dimensions agree: $5=2\cdot 2+1$). \subsection{Embedding $B_2$ into $B_4$} \label{sec:someth-high-dimens} Consider the regular embedding $B_2 \rightarrow B_4$. The corresponding Dynkin diagrams are presented in Figure \ref{fig:dynkin}. \begin{figure}[h] \centering \includegraphics[width=50mm]{figure3.eps} \caption{The regular embedding $B_2 \rightarrow B_4$ described by dropping the node from the Dynkin diagram.
Remember that here $\afb$ is equal to $B_2$ while the diagram shows only $A_1\oplus A_1$ (see Subsection \ref{sect-embeddings}).} \label{fig:dynkin} \end{figure} \begin{figure}[pt] \centering \includegraphics[width=100mm,height=90mm]{figure4.eps} \caption{The singular element $e^{\gamma_0}\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$ displayed in the weight subspace $P_{\af}$ for $\af=B_2$ with the basis $\left\{e_3,e_4\right\}$. We see the projected singular weights $\left\{\pi _{\af}\left[ u(\mu +\rho )-\rho \right] +\gamma_0 | u \in U \right\}$ shifted by $\gamma_0$ and supplied with the multipliers $\epsilon(u)\dim\left(L_{\af_{\perp }}^{\mu_{\af_{\perp }}\left( u\right) }\right)$.} \label{fig:B4B2anom} \centering \includegraphics[height=80mm]{figure5.eps} \caption{The fan $\Gamma$ for $B_2\rightarrow B_4$ and the values of $s(\gamma+\gamma_0)$ for the weights $\gamma$.} \label{fig:B4B2Fan} \end{figure} In the orthonormal basis $\left\{e_1,\dots,e_4\right\}$ the simple roots and positive roots of $B_4$ are \begin{eqnarray*} \label{eq:19} S_{B_4}= \{e_1 - e_2,\; e_2 - e_3,\; e_3 - e_4,\; e_4\},\\[2mm] \Delta^+_{B_4}=\left\{ e_1 - e_2,\; e_2 - e_3,\; e_3 - e_4,\; e_4,\; e_1 - e_3,\; e_2 - e_4,\; e_3 + e_4,\; e_3,\; e_1 - e_4,\;\right.\\ \left. e_2 + e_4,\; e_2,\; e_1 + e_4,\; e_2 + e_3,\; e_1,\; e_1 + e_3,\; e_1 + e_2\right\}. \end{eqnarray*} The subalgebra $\af=B_2$ is fixed by the simple roots \begin{equation*} \label{eq:26} S_{B_2}=\{e_3-e_4,e_4\}. \end{equation*} Its orthogonal counterpart $\afb=B_2$ has \begin{eqnarray*} \label{eq:27} S_{\afb}=\{e_1-e_2,e_2\},\\ \Delta^{+}_{\afb}= \left\{e_1-e_2,e_1+e_2,e_1,e_2\right\}. \end{eqnarray*} Once the set $\Delta^+_{B_4} \setminus \Delta^{+}_{\afb}$ is fixed, the injection fan $\Gamma_{B_2 \to B_4}$ can be constructed using Definition \ref{fan-definition}. Since for this injection $s\left( \gamma_0\right)=-1$, in the recursion formula we need only the factor $s\left(\gamma + \gamma_0\right)$.
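The module dimensions entering the construction are computed with the Weyl dimension formula, $\dim L^{\mu }=\prod_{\alpha \in \Delta ^{+}}\left\langle \mu +\rho ,\alpha \right\rangle / \left\langle \rho ,\alpha \right\rangle$. As an illustration, the following sketch (the helper names are ours, not part of the text) evaluates this formula over the positive roots of $B_4$ listed above and reproduces the dimension $2772$ of the module $L^{\left[0,1,0,2\right]}_{B_4}$ considered below:

```python
from fractions import Fraction
from itertools import combinations

def weyl_dim(mu, rho, positive_roots):
    """Weyl dimension formula: product over positive roots of
    <mu+rho, a> / <rho, a> (standard Euclidean inner product)."""
    dot = lambda u, v: sum(Fraction(x) * y for x, y in zip(u, v))
    lam = [m + r for m, r in zip(mu, rho)]
    d = Fraction(1)
    for a in positive_roots:
        d *= dot(lam, a) / dot(rho, a)
    return d

# Positive roots of B_4 in the orthonormal basis {e_1,...,e_4}:
# e_i - e_j, e_i + e_j (i < j) and e_i.
n = 4
roots = []
for i, j in combinations(range(n), 2):
    for sign in (+1, -1):
        a = [0] * n
        a[i], a[j] = 1, sign
        roots.append(a)
for i in range(n):
    a = [0] * n
    a[i] = 1
    roots.append(a)

rho = [Fraction(2 * n - 1 - 2 * i, 2) for i in range(n)]  # (7/2, 5/2, 3/2, 1/2)
mu = [2, 2, 1, 1]  # highest weight 2e_1 + 2e_2 + e_3 + e_4
print(weyl_dim(mu, rho, roots))  # → 2772
```

The same routine, fed the positive roots of $\afb$, produces the dimensions of the $\afb$-modules needed below.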
The resulting fan is presented in Figure \ref{fig:B4B2Fan}. Consider the $B_4$-module $L^{\mu}$ with the highest weight $\mu=2e_1 + 2 e_2 + e_3 + e_4$; \, $\mathrm{dim}(L^{\left[0,1,0,2\right]})=2772$. The set of singular weights for $B_4$ contains 384 vectors. Here the defect is nontrivial, $\mathcal{D}_{\af_{\perp }}=-2\left( e_1 + e_2 \right)$, while $\hf_{\perp}=0$. Taking this into account, we find among the singular weights 48 vectors with the property $\left\{ \mu _{\af_{\perp }}\left( u\right) =\pi _{\af_{\perp }}\left[ u(\mu +\rho )-\rho \right] -\mathcal{D}_{\af_{\perp }}\in \overline{C_{\af_{\perp }}}\right\}$, i.e., the scalar products of these weights with all the roots in $\Delta^{+}_{\afb}$ are nonnegative. The set $U=\left\{ u \right\}$ is thus fixed. Compute the dimensions of the corresponding $\afb$-modules with the highest weights $ \mu _{\af_{\perp }}\left( u\right)$ (using the Weyl dimension formula) and multiply them by $\epsilon\left( u \right)$. The result is the singular element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$ shown in Figure \ref{fig:B4B2anom}. Now one can place the fan $\Gamma$ from Figure \ref{fig:B4B2Fan} at the highest of the weights presented in Figure \ref{fig:B4B2anom} and start the recursive determination of the branching coefficients (using relation (\ref{recurrent-relation})): \begin{eqnarray*} \label{eq:24} \pi_{\af} \left(ch L^{\left[0,1,0,2\right]}_{B_4}\right) = 6 \; ch L^{\left[0,0\right]}_{B_2}+ 60 \; ch L_{B_2}^{\left[0,2\right]}+ 30 \; ch L_{B_2}^{\left[1,0\right]}+ 19 \; ch L_{B_2}^{\left[2,0\right]}+\\ 40 \; ch L_{B_2}^{\left[1,2\right]}+ 10 \; ch L_{B_2}^{\left[0,4\right]}.
\end{eqnarray*} \section{Applications to conformal field theory} \label{sec:phys-appl} \subsection{Conformal embeddings} \label{sec:conformal-embeddings} Branching coefficients for an embedding of one affine Lie algebra into another can be used to construct modular invariant partition functions for Wess-Zumino-Novikov-Witten models in conformal field theory (\cite{difrancesco1997cft}, \cite{Walton:1999xc}, \cite{walton1989conformal}, \cite{schellekens1986conformal}). In these models the current algebras are affine Lie algebras. The modular invariant partition function is crucial for the conformal theory to be valid on the torus and on higher genus Riemann surfaces. It is important for the applications of CFT to string theory and to the description of critical phenomena. The simplest modular-invariant partition function has the diagonal form: \begin{equation} \label{eq:34} Z(\tau)=\sum_{ \mu\in P^{+}_{\mathfrak{g}}} \chi_{\mu}(\tau)\bar \chi_{\mu}(\bar \tau). \end{equation} Here the sum is over the set of the highest weights of integrable modules in a WZW-model and $\chi_{\mu}(\tau)$ are the normalized characters (see \cite{difrancesco1997cft}) of these modules. Constructing nondiagonal modular invariants is not an easy problem, although for some models the complete classification of modular invariants is known \cite{1994hepthGannon,1995JMPGannon}. Consider the Wess-Zumino-Witten model with the affine Lie algebra $\af$. Nondiagonal modular invariants for this model can be constructed from the diagonal invariant if there exists an affine algebra $\mathfrak{g}$ such that $\af\subset\mathfrak{g}$. Then we can replace the characters of the $\mathfrak{g}$-modules in the diagonal modular invariant partition function (\ref{eq:34}) by the decompositions \begin{equation*} \label{eq:32} \sum_{\nu \in P^{+}_{\af}}b^{(\mu)}_{\nu} \chi_{\nu} \end{equation*} containing normalized characters $\chi_{\nu}$ of the corresponding $\af$-modules.
Thus we obtain a nondiagonal modular-invariant partition function for the theory with the current algebra $\af$, \begin{equation} \label{eq:36} Z_{\af}(\tau)=\sum_{ \nu,\lambda\in P^{+}_{\af}} \chi_{\nu}(\tau)M_{\nu\lambda}\bar \chi_{\lambda}(\bar \tau). \end{equation} The effective reduction procedure is crucial for this construction. The embedding is required to preserve the conformal invariance. Let $X^{\alpha_j}_{-n_j}$ and $\tilde{X}^{\alpha'_j}_{-n_j}$ be the lowering generators for $\mathfrak{g}$ and for $\af\subset\mathfrak{g}$ respectively. Let $\pi_{\af}:\mathfrak{g}\longrightarrow \af$ be the projection operator. In the theory attributed to $\mathfrak{g}$ with the vacuum $\left|\lambda\right>$ the states can be described as \begin{equation*} \label{eq:109} X^{\alpha_1}_{-n_1}X^{\alpha_2}_{-n_2}\dots\left|\lambda\right>\quad n_1\geq n_2\geq \dots>0. \end{equation*} And for the subalgebra $\af$ the corresponding states are \begin{equation*} \label{eq:110} \tilde{X}^{\alpha'_1}_{-n_1}\tilde{X}^{\alpha'_2}_{-n_2}\dots\left|\pi_{\af}(\lambda)\right>. \end{equation*} The $\mathfrak{g}$-invariance of the vacuum entails its $\af$-invariance, but this is not the case for the energy-momentum tensor. So the energy-momentum tensor of the larger theory should contain only the generators $\tilde{X}$. Then the relation \begin{equation} \label{eq:2} T_{\mathfrak{g}}(z)=T_{\af}(z) \end{equation} leads to the equality of central charges \begin{equation*} \label{eq:33} c(\mathfrak{g})=c(\af) \end{equation*} and to the relation \begin{equation} \label{eq:111} \frac{k\;\mathrm{dim}\,\mathfrak{g}}{k+g}=\frac{x_e k\; \mathrm{dim}\,\af}{x_ek+a}. \end{equation} Here $x_e$ is the so-called ``embedding index'': $x_e=\frac{\left|\pi_{\mathfrak{a}} \Theta\right|^2}{\left|\Theta_{\mathfrak{a}}\right|^2}$ with $\Theta$, $\Theta_{\mathfrak{a}}$ being the highest roots of $\mathfrak{g}$ and $\mathfrak{a}$ while $g$ and $a$ are the corresponding dual Coxeter numbers.
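Condition (\ref{eq:111}) is the equality of the Sugawara central charges $c=k\,\mathrm{dim}\,\mathfrak{g}/(k+g)$ of the two theories and is easy to test numerically. The sketch below (helper names are ours) checks it for the special embedding $A_1\rightarrow A_2$ with $x_e=4$ considered below: at level $k=1$ both sides give $c=2$.

```python
from fractions import Fraction

def central_charge(k, dim_g, h_dual):
    """Sugawara central charge c = k*dim(g)/(k + g), g the dual Coxeter number."""
    return Fraction(k * dim_g, k + h_dual)

# A_2 at level k = 1: dim = 8, dual Coxeter number g = 3.
c_g = central_charge(1, 8, 3)
# The embedded A_1 at level x_e * k = 4: dim = 3, dual Coxeter number a = 2.
c_a = central_charge(4, 3, 2)
print(c_g, c_a)  # both sides of (111) give c = 2, so the embedding is conformal
```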
It can be demonstrated that solutions of equation (\ref{eq:111}) exist only for the level $k=1$ \cite{difrancesco1997cft}. The complete classification of conformal embeddings is given in \cite{schellekens1986conformal}. The relation (\ref{eq:111}) and the asymptotics of the branching functions can be used to prove the finite reducibility theorem \cite{kac1988modular}. It states that for a conformal embedding $\af\longrightarrow\mathfrak{g}$ only a finite number of branching coefficients have nonzero values. \begin{mynote} The orthogonal subalgebra $\afb$ is always trivial for conformal embeddings $\af\longrightarrow \mathfrak{g}$. \begin{proof} Consider the mode expansion of the energy-momentum tensor \begin{equation*} \label{eq:47} T(z)=\sum_n z^{-n-2}L_n. \end{equation*} The modes $L_n$ are constructed as combinations of normally-ordered products of the generators of $\mathfrak{g}$, \begin{equation*} \label{eq:48} L_n=\frac{1}{2(k+h^{\vee})}\sum_{\alpha}\sum_m:X^{\alpha}_m X^{\alpha}_{n-m}: \; . \end{equation*} In the case of a conformal embedding the energy-momentum tensors $T_{\mathfrak{g}}(z)$ and $T_{\af}(z)$ are equal (see (\ref{eq:2})). Substituting the generators of $\af$, expressed in terms of the generators of $\mathfrak{g}$, into these combinations, we must obtain the energy-momentum tensor $T_{\mathfrak{g}}$. But if the set of generators attributed to $\Delta_{\afb}$ is not empty this is not possible, since $T_{\mathfrak{g}}$ contains the generators $X^{\alpha}_n$ for $\alpha\in \Delta_{\afb}$ while $T_{\af}$ does not. \end{proof} \end{mynote} \subsubsection{Special embedding $\hat{A}_1\rightarrow\hat{A}_2$.} \label{sec:spec-embedd-hata_1s} Consider the case where both $\gf$ and $\af$ are affine Lie algebras: $\hat{A}_1 \rightarrow \hat{A}_2$ and the injection is the affine extension of the special injection $A_1 \rightarrow A_2$ with the embedding index $x_e=4$. Since the $\gf$-modules to be considered are of level one, the $\af$-modules will be of level $\tilde{k}=kx_e=4$.
There exist three level-one fundamental weights of $\hat{A}_2$. It is easy to see that the set $\Delta_{ \afb }$ is empty and the subalgebra $\afb=0$. Then ${\cal D}_{\afb}=0$, $\hf_{\perp}$ is a one-dimensional Abelian subalgebra and the modules of $\tilde\afb=\afb\oplus \hf_{\perp}$ are one-dimensional. It is convenient to choose the classical root for $\hat{A}_1$ to be $\beta=\frac{1}{2}(\alpha_1+\alpha_2)$. Using Definition (\ref{fan-definition}) we construct the fan $\Gamma_{\hat A_1\to\hat A_2}$. In this case $\gamma_0 =0$ and its sign is $s\left( 0 \right)=-1$; thus we are to use the sign function $s(\gamma)$ (see Figure \ref{fig:AffineA2A1Fan}). \begin{figure}[h!bt] \centering \includegraphics[width=125mm]{figure6.eps} \caption{The fan $\Gamma_{\hat{A_1}\rightarrow \hat{A_2}}$ for $\hat{A_1}\rightarrow \hat{A_2}$ in the basis $\left\{\beta,\delta \right\}$. Notice that $\gamma_0 =0$, so the values of $s(\gamma)$ are prescribed to the weights $\gamma\in \Gamma_{\hat{A_1}\rightarrow \hat{A_2}}$.} \label{fig:AffineA2A1Fan} \end{figure} Consider the module $L^{\omega_0=(0,0;1;0)}$. Here we use the (finite part; level; grade) presentation of the highest weight and the finite part coordinates are the Dynkin indices (see Section \ref{sec:notation}). The set $\widehat{\Psi^{(\omega_0)}}$ is displayed in Figure \ref{fig:affine_A2_anom_point} up to the sixth grade. \begin{figure}[h!tb] \hspace*{-1.5cm} \includegraphics[width=180mm]{figure7.eps} \caption{The singular weights of the module $L_{\hat{A_2}}^{\omega_0}=L^{(0,0;1;0)}_{\hat{A_2}}$. The classical (grade zero) cross-section of the diagram is shown separately in the right part of the figure. We use the orthogonal basis with the unit vector equal to $\alpha_1$. The weights $w (\omega_0+\rho)-\rho$ are marked by crosses when $\epsilon(w)=1$ and by boxes when $\epsilon(w)=-1$.
Simple roots of the classical subalgebra $A_2$ are grey and the grey diagonal plane corresponds to the Cartan subalgebra of the embedded algebra $\hat{A}_1$.} \label{fig:affine_A2_anom_point} \end{figure} The next step is to project the anomalous weights to $P_{\hat A_1}$. The result is the element $\Psi ^{\left( \omega_0 \right) }_{\left( \hat A_1\, , \, \afb=0 \right)}$ presented in Figure \ref{fig:AffineA2_A1_anom_proj} up to the twelfth grade. \begin{figure}[h!tb] \centering \includegraphics[width=130mm]{figure8.eps} \caption{The singular element $\Psi ^{\left( \omega_0 \right) }_{\left( \hat A_1\, , \, \afb=0 \right)}$ displayed in $P_{\hat A_1}$ with the basis $\left\{\beta,\delta \right\}$.} \label{fig:AffineA2_A1_anom_proj} \end{figure} Using the recurrent relation (\ref{recurrent-relation}) with the fan $\Gamma_{\hat{A_1}\rightarrow \hat{A_2}}$ and the singular weights in $\Psi ^{\left( \omega_0 \right) }_{\left( \hat A_1\, , \, \afb=0 \right)}$ we get the anomalous branching coefficients presented in Figure \ref{fig:AffineA2_A1_branching}. \begin{figure}[h!tb] \centering \includegraphics[width=130mm]{figure9.eps} \caption{Anomalous branching coefficients for $\hat{A_1}\subset \hat{A_2}$. The boundaries of the main Weyl chamber $\bar{C}_{\hat{A}_1}$ are indicated by black lines. Two anomalous highest weights located in the main Weyl chamber are marked by stars. Both have multiplicity 1, so the branching coefficients for them are equal to 1.} \label{fig:AffineA2_A1_branching} \end{figure} Inside the Weyl chamber $\bar{C}_{\hat{A}_1}$ (its boundaries are indicated in Figure \ref{fig:AffineA2_A1_branching}) there are only two nonzero anomalous weights and both have multiplicity 1. These are the highest weights of $\af$-submodules and the multiplicities are their branching coefficients. Thus we get the decomposition \begin{equation*} \label{eq:43} L^{(0,0;1;0)}_{\hat{A_2}\downarrow \hat{A_1}}= L_{\hat{A_1}}^{(0;4;0)}\oplus L_{\hat{A_1}}^{(4;4;0)}.
\end{equation*} The finite reducibility theorem holds. The same fan $\Gamma_{\hat{A_1}\rightarrow \hat{A_2}}$ can be used for any other highest weight module $L^{\mu}_{\hat{A_2}}$. In particular, for the irreducible modules of level one we get the trivial branching: \begin{equation*} \label{eq:44} L^{(1,0;1;0)}_{\hat{A_2}\downarrow \hat{A_1}}= L_{\hat{A_1}}^{(2;4;0)},\\ L^{(0,1;1;0)}_{\hat{A_2}\downarrow \hat{A_1}}= L_{\hat{A_1}}^{(2;4;0)}. \end{equation*} Using these results, the modular-invariant partition function is easily found, \begin{equation*} \label{eq:45} Z=\left|\chi_{(4;4;0)}+\chi_{(0;4;0)}\right|^2+2\chi_{(2;4;0)}^2. \end{equation*} \subsection{Coset models} \label{sec:coset-models} Coset models \cite{Goddard198588}, tightly connected with the gauged WZW-models, are actively studied in string theory, especially in string models on anti-de-Sitter space \cite{Maldacena:2000hw,Maldacena:2000kv,Maldacena:2001km,Maldacena:2001ky,Aharony:1999ti}. The characters in coset models are proportional to branching functions, \begin{equation} \label{eq:31} \chi^{(\mu)}_{\nu}(\tau)=e^{2\pi i \tau (m_{\mu}-m_{\nu})} b^{(\mu)}_{\nu}(\tau), \end{equation} with \begin{equation*} \label{eq:46} m_{\mu}=\frac{\left|\mu+\rho\right|^2}{2(k+g)}-\frac{\left|\rho\right|^2}{2g}. \end{equation*} The problem of constructing branching functions in coset models was considered in \cite{Dunbar:1992gh}, \cite{Hwang:1994yr}, \cite{lu1994branching}. Let us return to the example of Subsection \ref{sec:regul-embedd-a_1} and consider the affine extension of the injection $A_1 \rightarrow B_2$. Since this embedding is regular and $x_e=1$, the subalgebra modules and the initial module are of the same level. The set of positive roots with zero projection on the root space of the subalgebra $\hat{A_1}$ is the same as in the finite-dimensional case: $\Delta^{+}_{\afb}=\left\{ \alpha_1 \right\}$ and $\afb=A_1$. It is easy to see that $\hf_{\perp}$ is trivial in this case and also ${\cal D}_{\afb}=0$.
Using Definition (\ref{fan-definition}) we obtain the fan $\Gamma_{\hat{A_1} \longrightarrow \hat{B_2} }$. Notice that here the lowest weight $\gamma_0$ of the fan is zero and $s\left( \gamma_0 \right)=-1$. The values of the sign function $s(\gamma)$ for $ \gamma \in \Gamma_{\hat{A_1} \longrightarrow \hat{B_2} }$ are presented in Figure \ref{fig:AffineB2A1Fan}. We restricted the computation to the twelfth grade. \begin{figure}[h!bt] \centering \includegraphics[width=135mm]{figure10.eps} \caption{The fan $\Gamma_{\hat{A_1}\rightarrow \hat{B_2}}$ for $\hat{A_1}\rightarrow \hat{B_2}$ in the basis $\left\{\beta,\delta \right\}$. Values of $s(\gamma)$ are shown for the weights $\gamma\in \Gamma_{\hat{A_1}\rightarrow \hat{B_2}}$.} \label{fig:AffineB2A1Fan} \end{figure} Consider the level-one module $L^{\left( 1,0;1;0 \right)}_{\hat{B_2}}$ with the highest weight $\omega_1=(1,0;1;0)$, where the finite part coordinates are in the orthogonal basis $e_1,e_2$. The set of anomalous weights for this module up to the sixth grade is presented in Figure \ref{fig:affine_B2_anom_point}. \begin{figure}[h!tb] \includegraphics[width=140mm]{figure11.eps} \caption{Singular weights for $L^{(1,0;1;0)}_{\hat B_2 }$. The standard basis $\{e_1,e_2\}$ is used for the classical cross-section. The weights in the zero grade are the same as in Figure \ref{fig:B2_A1}. The weights $w (\omega_1+\rho)-\rho$ are marked by crosses if $\epsilon(w)=1$ and by boxes for $\epsilon(w)=-1$. Simple roots of the classical subalgebra $B_2$ are grey and the grey diagonal plane corresponds to the Cartan subalgebra of the embedded algebra $\hat{A}_1$.} \label{fig:affine_B2_anom_point} \end{figure} According to the algorithm of Section \ref{sec:algorithm} we project the anomalous weights to $P_{\hat{A_1}}$ and find the dimensions of the corresponding $\afb$-modules $L^{\pi_{\afb}(w(\mu+\rho))-\rho_{\afb}}_{\afb}$.
In grade zero this projection gives exactly the set $\Psi ^{\left( \mu \right) }_{\left( A_1, A_1 \right)}$ for the embedding of the classical Lie algebra $A_1\rightarrow B_2$. To see this, compare Figure \ref{fig:B2_A1} with Figure \ref{fig:AffineB2_A1_anom_proj}, where the singular element $\Psi ^{\left( \mu \right) }_{\left( \widehat{A_1}, A_1 \right)}$ for the affine embedding $\hat{A_1}$ is presented up to the twelfth grade. \begin{figure}[h!tb] \centering \includegraphics[width=120mm]{figure12.eps} \caption{The singular element $\Psi ^{\left( \omega_1 \right) }_{\left( \widehat{A_1}, A_1 \right)}$ in the basis $\{\beta,\delta\}$. The dimensions of the corresponding $\afb=A_1$-modules with the signs $\epsilon(u)$ are indicated.} \label{fig:AffineB2_A1_anom_proj} \end{figure} \begin{figure}[h!bt] \centering \includegraphics[width=120mm]{figure13.eps} \caption{Anomalous branching coefficients for $\hat{A_1}\rightarrow \hat{B_2}$. The basis $\{\beta,\delta\}$ is used. The boundaries of the main Weyl chamber $\bar{C}_{\hat{A}_1}$ are indicated by the black lines.
The anomalous branching coefficients inside the main Weyl chamber are equal to the branching coefficients of the embedding $\hat{A_1}\rightarrow \hat{B_2}$.} \label{fig:AffineB2_A1_branching} \end{figure} The multiplicities of the highest weights inside the Weyl chamber $\bar{C}^{\left( 0 \right)}_{\hat{A_1}}$ define the following branching coefficients (up to the twelfth grade), \begin{eqnarray*} \label{eq:28} L^{\omega_1}_{\hat{B_2}\downarrow \hat{A_1}} &=&2 L_{\hat{A_1}}^{\omega_1}\oplus 1 L_{\hat{A_1}}^{\omega_0}\oplus 4 L_{\hat{A_1}}^{\omega_0-\delta}\oplus\\ &&2 L_{\hat{A_1}}^{\omega_1-\delta}\oplus 8 L_{\hat{A_1}}^{\omega_0-2\delta}\oplus 8 L_{\hat{A_1}}^{\omega_1-2\delta}\oplus 15 L_{\hat{A_1}}^{\omega_0-3\delta}\oplus\\ &&12 L_{\hat{A_1}}^{\omega_1-3\delta}\oplus 26 L_{\hat{A_1}}^{\omega_1-4\delta}\oplus 29 L_{\hat{A_1}}^{\omega_0-4\delta}\oplus 51 L_{\hat{A_1}}^{\omega_0-5\delta}\oplus\\ &&42 L_{\hat{A_1}}^{\omega_1-5\delta}\oplus 78 L_{\hat{A_1}}^{\omega_1-6\delta}\oplus 85 L_{\hat{A_1}}^{\omega_0-6\delta}\oplus 120 L_{\hat{A_1}}^{\omega_1-7\delta}\oplus\\ &&139 L_{\hat{A_1}}^{\omega_0-7\delta}\oplus 202 L_{\hat{A_1}}^{\omega_1-8\delta}\oplus 222 L_{\hat{A_1}}^{\omega_0-8\delta}\oplus 306 L_{\hat{A_1}}^{\omega_1-9\delta}\oplus\\ &&346 L_{\hat{A_1}}^{\omega_0-9\delta}\oplus 530 L_{\hat{A_1}}^{\omega_0-10\delta}\oplus 482 L_{\hat{A_1}}^{\omega_1-10\delta}\oplus 714 L_{\hat{A_1}}^{\omega_1-11\delta}\oplus\\ &&797 L_{\hat{A_1}}^{\omega_0-11\delta}\oplus 1080 L_{\hat{A_1}}^{\omega_1-12\delta}\oplus 1180 L_{\hat{A_1}}^{\omega_0-12\delta}\oplus \dots \end{eqnarray*} This result can be presented as the set of branching functions: \begin{eqnarray*} \label{eq:29} \begin{array}{cc} b^{(\omega_1)}_{0}= & 1 + 4\,q^{1}+ 8\,q^{2}+ 15\,q^{3}+ 29\,q^{4}+ 51\,q^{5}+ 85\,q^{6}+ 139\,q^{7}+\\ &222\,q^{8}+ 346\,q^{9}+ 530\,q^{10}+ 797\,q^{11}+ 1180\,q^{12}+\dots\\ \end{array}\\ \begin{array}{cc} b^{(\omega_1)}_{1}= 
&2+2\,q^{1}+8\,q^{2}+12\,q^{3}+26\,q^{4}+42\,q^{5}+78\,q^{6}+120\,q^{7}+\\ & 202\,q^{8}+306\,q^{9}+482\,q^{10}+714\,q^{11}+1080\,q^{12}+\dots \end{array} \end{eqnarray*} Here $q=\exp (2\pi i \tau)$ and the lower index enumerates the branching functions according to their highest weights in $P^+_{\hat{A_1}}$. These are the fundamental weights $\omega_0=\lambda_0=(0;1;0),\; \omega_1=\alpha/2=(1;1;0)$. Now we can use the relation (\ref{eq:31}), \begin{equation*} \label{eq:35} \begin{array}{cc} \chi^{(\omega_1)}_{1}(q)= & q^{\frac{7}{12}}\left( 2+2\,q^{1}+8\,q^{2}+12\,q^{3}+26\,q^{4}+42\,q^{5}+78\,q^{6}+120\,q^{7}+\right. \\ & \left. 202\,q^{8}+306\,q^{9}+482\,q^{10}+714\,q^{11}+1080\,q^{12}+\dots \right),\\ \chi^{(\omega_1)}_{0}(q) = & q^{\frac{5}{6}}\left(1 + 4\,q^{1}+ 8\,q^{2}+ 15\,q^{3}+ 29\,q^{4}+ 51\,q^{5}+ 85\,q^{6}+ 139\,q^{7}+\right. \\ &\left. 222\,q^{8}+ 346\,q^{9}+ 530\,q^{10}+ 797\,q^{11}+ 1180\,q^{12}+\dots\right), \end{array} \end{equation*} and thus obtain the expansion of the $B_2/A_1$-coset characters. \section{Conclusion} \label{sec:conclusion} We have demonstrated that the injection fan technique can be used to deal with an arbitrary reductive subalgebra (maximal as well as nonmaximal). It was shown that the branching problem for $\af \subset \gf$ is tightly connected with the properties of the orthogonal partner $ \af_{\perp } $ of $\af$. The subalgebra $\afb$ corresponds to the subset $\Delta^{+}_{\afb}$ of positive roots in $\Delta_{\mathfrak{g}}^{+}$ that have zero projection onto the root space of $\af$. Both the injection fan and the sets of singular weights for the highest weight $\gf$-modules depend substantially on the structure of $\afb$ and its submodules. For the fan $\Gamma_{\af\rightarrow \gf}$ this dependence is almost obvious: in the element $\Phi_{\af\rightarrow \gf}$ the factors corresponding to the roots of $\Delta^{+}_{\afb}$ are eliminated. The transformation in the set of projected singular weights is more interesting.
We have found out that in the new singular element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$ the coefficients depend on the $\afb$-submodules (their highest weights $\mu _{\widetilde{\af_{\perp }}}\left( u\right)$ are fixed by the injection and by the weights of the initial element $\Psi^{\mu}$). Fortunately, no information about the submodules $L^{\mu _{\widetilde{\af_{\perp }}}\left( u\right)} _{\afb}$ is needed beyond their dimensions. In the new singular element $\Psi ^{\left( \mu \right) }_{\left( \af, \afb \right)}$ the weight multiplicities are equal to the dimensions $\dim\left(L^{\mu _{\widetilde{\af_{\perp }}}\left( u\right)}_{ \afb }\right)$ of the corresponding $\afb$-modules multiplied by the values $\epsilon (u)$. As a result, the highest weights of $\af$-submodules and their multiplicities are subject to the set of linear equations (\ref{eq:17}). These properties are valid for any reductive subalgebra $\af\rightarrow \gf$ and the set can be recast in the form of recurrent relations to be solved step by step. The efficiency of the obtained algorithm was illustrated in various examples. In particular we considered the construction of modular-invariant partition functions in the framework of the conformal embedding method and the coset construction in rational conformal field theory. This construction is useful in the study of WZW-models emerging in the context of the AdS/CFT correspondence \cite{Maldacena:2000hw,Maldacena:2000kv,Maldacena:2001km}. Further improvement of the algorithm can be achieved by using the folded fan technique \cite{il2010folded}. It must be mentioned that even in the case of string functions the explicit solution of the corresponding recurrent relations is a difficult problem (see \cite{il2010folded} for details). Nevertheless we hope that by developing the procedure of folding one could obtain explicit solutions for at least some of the branching functions and the corresponding coset characters.
\section{Acknowledgements} The work was supported in part by RFFI grant N 09-01-00504 and the National Project RNP.2.1.1./1575. \section*{References}
\section{Introduction} \label{sec:intro} Polar codes are a breakthrough in the field of channel coding as they can achieve the capacity of any binary-input memoryless symmetric channel with efficient encoding and decoding algorithms \cite{arikan}. Successive cancellation (SC) and belief propagation (BP) decoding algorithms were introduced in \cite{arikan} to decode polar codes. Although SC decoding can provide a low-complexity implementation, its serial nature prevents the decoder from reaching a high decoding throughput. Furthermore, the error-correction performance of SC decoding for short to moderate-length polar codes does not satisfy the requirements of the fifth generation of cellular mobile communications (5G) standard. To improve the error-correction performance of SC decoding, SC list (SCL) decoding was introduced in \cite{tal_list} and it was shown that SCL can provide a significant error-correction performance improvement if the polar code is concatenated with a cyclic redundancy check (CRC). Based on this observation, polar codes have been selected to be used in the enhanced mobile broadband (eMBB) control channel of 5G together with a CRC \cite{3gpp_report}. Unlike SC-based decoders, the iterative message passing process of BP decoding can be executed in parallel, hence enabling the decoder to reach a high decoding throughput. However, the conventional BP decoding algorithm suffers from poor error-correction performance. It has been shown that if polar codes are concatenated with a CRC, their error-correction performance under BP decoding can be significantly improved by exploiting the extrinsic information between the factor graphs of the polar code and the CRC \cite{Doan_ICC19, CABPList}. In addition, by using multiple independent permutations of the factor-graph of polar codes, their error-correction performance under BP decoding is significantly improved \cite{hussami2009performance, elkelesh2018belief, Doan_GLOBECOM, CABPList, LoopSimp}.
However, the selection of good factor-graph permutations of polar codes that result in a correctly decoded codeword given a specific channel output realization remains an open research problem. In this paper, we first formalize the selection of factor-graph permutations of polar codes under the CRC-aided (CA) BP (CABP) decoder in \cite{Doan_ICC19} as a multi-armed bandit problem in reinforcement learning (RL). We then utilize state-of-the-art algorithms designed for the multi-armed bandit problem to select the factor-graph permutations of polar codes that work best under CABP decoding. Unlike existing approaches, such as using a genetic algorithm \cite{CABPList} or Monte Carlo-based methods \cite{Doan_GLOBECOM, LoopSimp}, in which the mechanism for the selection of factor-graph permutations requires off-line training, the proposed approach treats the CABP-based decoding of polar codes as an online-learning agent that learns to select good factor-graph permutations during the course of decoding. We show that for a 5G polar code of length $128$ with $64$ information bits and concatenated with a $16$-bit 5G CRC, the proposed RL-aided CABP (RL-CABP) decoding algorithm has an error-correction performance gain of around $0.125$ dB, at the target frame error rate (FER) of $10^{-4}$, compared to the approach that selects the factor-graph permutations of polar codes randomly. The remainder of this paper is organized as follows. Section~\ref{sec:polar} provides background on polar codes and BP-based decoding algorithms. Section~\ref{sec:bandit} summarizes the multi-armed bandit problem and its state-of-the-art algorithms. Section~\ref{sec:RL-CABP} introduces the proposed decoding algorithm, followed by the experimental results provided in Section~\ref{sec:exp}. Finally, concluding remarks are drawn in Section~\ref{sec:conclude}.
\section{Polar Codes} \label{sec:polar} \subsection{Polar Encoding} A polar code $\mathcal{P}(N,K)$ of length $N$ with $K$ information bits is constructed by applying a linear transformation to the binary message word $\bm{u} = \{u_0,u_1,\ldots,u_{N-1}\}$ as $\bm{x} = \bm{u}\bm{G}^{\otimes n}$ where $\bm{x} = \{x_0,x_1,\ldots,x_{N-1}\}$ is the codeword, $\bm{G}^{\otimes n}$ is the $n$-th Kronecker power of the polarizing matrix $\bm{G}=\bigl[\begin{smallmatrix} 1&0\\ 1&1 \end{smallmatrix} \bigr]$, and $n = \log_2 N$. The vector $\bm{u}$ contains a set $\mathcal{I}$ of $K$ information bit indices and a set $\mathcal{I}^c$ of $N-K$ frozen bit indices. The positions of the frozen bits are known to the encoder and the decoder and their values are set to $0$. The codeword $\bm{x}$ is then modulated and sent through the channel. In this paper, binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channel model are considered. Therefore, the soft vector of the transmitted codeword received by the decoder is written as ${\bm{y}=(\mathbf{1}-2\bm{x})+\bm{z}}$, where $\mathbf{1}$ is an all-one vector of size $N$, and $\bm{z} \in \mathbbm{R}^N$ is a Gaussian noise vector with variance $\sigma^2$ and zero mean. In the log-likelihood ratio (LLR) domain, the LLR vector of the transmitted codeword is given as $ {\bm{L} = \ln{\frac{\text{Pr}(\bm{x}=0|\bm{y})}{\text{Pr}(\bm{x}=1|\bm{y})}}=\frac{2\bm{y}}{\sigma^2}} $. \subsection{Belief Propagation Decoding of Polar Codes} \label{sec:polar:BPD} Fig.~\ref{fig:BPGraph}a illustrates BP decoding on a factor graph representation of $\mathcal{P}(8,5)$. The messages are iteratively propagated through the processing elements (PEs) \cite{arikan2010polar}. An update iteration starts with a right-to-left message pass that propagates the LLR values from the channel stage (right-most stage), to the information bit stage (left-most stage), and ends with the left-to-right message pass occurring in the reverse order. 
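Before turning to the PE update rules, note that the encoding operation $\bm{x}=\bm{u}\bm{G}^{\otimes n}$ introduced above can be sketched in a few lines (function names and the example information bits are ours; the frozen set follows $\mathcal{P}(8,5)$ from Fig.~\ref{fig:BPGraph}a):

```python
def kron(A, B):
    """Kronecker product of two binary matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def polar_encode(u):
    """Encode a message word u of length N = 2^n as x = u * G^{kron n} over GF(2)."""
    G = [[1, 0], [1, 1]]           # the polarizing matrix
    Gn = [[1]]
    n = len(u).bit_length() - 1
    for _ in range(n):
        Gn = kron(Gn, G)           # build the n-th Kronecker power of G
    return [sum(ui * Gn[i][j] for i, ui in enumerate(u)) % 2
            for j in range(len(u))]

# P(8,5) with frozen set I^c = {0, 1, 2}: the frozen positions carry 0.
u = [0, 0, 0, 1, 0, 1, 1, 0]       # example information bits at indices 3..7
x = polar_encode(u)                # x = [1, 0, 0, 1, 0, 1, 1, 0]
```

Since $\bm{G}^{\otimes n}$ is an involution over GF(2), encoding the codeword again recovers $\bm{u}$, which is a convenient self-check.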
Fig.~\ref{fig:BPGraph}b shows a PE with its corresponding messages, where $r_{s,i}$ denotes a left-to-right message, and $l_{s,i}$ denotes a right-to-left message of the $i$-th bit index at stage $s$. The update rule for the right-to-left messages of a PE is \cite{arikan2010polar} \begin{align} \label{PolarPE_left} \begin{split} \begin{cases} l_{s,i} &= f(l_{s+1,i},r_{s,i+2^s} + l_{s+1,i+2^s})\text{,}\\ l_{s,i+2^s} &= f(l_{s+1,i},r_{s,i}) + l_{s+1,i+2^s}\text{,}\\ \end{cases} \end{split} \end{align} and for the left-to-right messages is \begin{align} \label{PolarPE_right} \begin{split} \begin{cases} r_{s+1,i} &= f(r_{s,i},l_{s+1,i+2^s} + r_{s,i+2^s})\text{,}\\ r_{s+1,i+2^s} &= f(r_{s,i},l_{s+1,i}) + r_{s,i+2^s}\text{,} \end{cases} \end{split} \end{align} where $f(\cdot)$ is the scaled min-sum function \cite{yuan2014early}: \begin{equation} \label{minsum} f(x,y) = 0.9375\times\sgn(x)\sgn(y)\min(|x|,|y|)\text{.} \end{equation} BP decoding performs a predetermined number of iterations $I_{\max}$, in which the messages are propagated through all PEs in accordance with (\ref{PolarPE_left}) and (\ref{PolarPE_right}). The LLR values at stage $0$, denoted as $\bm{r}_0$, are initialized as \begin{equation} r_{0,i} = \begin{cases} 0 \text{,} & \text{if } i \in \mathcal{I} \text{,}\\ +\infty \text{,} & \text{if } i \in \mathcal{I}^c \text{,} \end{cases} \end{equation} and the LLR values at stage $n$, denoted as $\bm{l}_n$, are initialized as $\bm{l}_n=\bm{L}$. In addition, all the other left-to-right and right-to-left messages of the PEs at the first iteration are set to $0$.
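The scaled min-sum function and the update of a single PE can be written compactly; the sketch below uses our own function names, with $(l_a,l_b)$ the incoming right-to-left messages of the PE and $(r_a,r_b)$ the incoming left-to-right messages:

```python
def f(x, y, alpha=0.9375):
    """Scaled min-sum approximation of the boxplus operation."""
    sgn = lambda v: -1.0 if v < 0 else 1.0
    return alpha * sgn(x) * sgn(y) * min(abs(x), abs(y))

def pe_update(l_a, l_b, r_a, r_b):
    """One processing element update.
    l_a, l_b are the right-to-left inputs l_{s+1,i}, l_{s+1,i+2^s};
    r_a, r_b are the left-to-right inputs r_{s,i}, r_{s,i+2^s}."""
    l_i  = f(l_a, r_b + l_b)   # l_{s,i}
    l_i2 = f(l_a, r_a) + l_b   # l_{s,i+2^s}
    r_i  = f(r_a, l_b + r_b)   # r_{s+1,i}
    r_i2 = f(r_a, l_a) + r_b   # r_{s+1,i+2^s}
    return l_i, l_i2, r_i, r_i2
```

For instance, `f(2.0, -3.0)` returns `-1.875`: the magnitude is the scaled minimum $0.9375\times 2$ and the sign is the product of the input signs.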
After running $I_\text{max}$ iterations, the decoder makes a hard decision on the LLR values of the $i$-th bit at the information bit stage to obtain the estimated message word as \begin{equation} \label{hardDec} \hat{u}_i= \begin{cases} 0 \text{,} & \text{if } r_{0,i} + l_{0,i} \geq 0 \text{,}\\ 1 \text{,} & \text{otherwise.} \end{cases} \end{equation} In this paper, we consider the case where a CRC is concatenated to the polar code as in the 5G standard. After running BP decoding on the factor-graph of polar codes for $I_\text{min}$ iterations ${(0 < I_\text{min} < I_\text{max})}$, a CRC verification is performed to early-terminate the decoding process. In addition, the factor-graph of the CRC is utilized to further improve the error-correction performance of polar codes under BP decoding: after the $I_\text{min}$-th iteration, BP decoding is run on both factor-graphs so that extrinsic information is exchanged between the factor-graphs of the CRC and the polar code \cite{Doan_ICC19}. We refer to this algorithm as CABP decoding. \begin{figure}[t] \centering \input{Figures/PolarFactorGraph.tikz} \vspace*{5pt} \caption{(a) Factor-graph representation of $\mathcal{P}(8,5)$ with $\mathcal{I}^c = \{0,1,2\}$, (b) a PE for BP decoding.} \label{fig:BPGraph} \vspace*{-1\baselineskip} \end{figure} \subsection{Decoding Polar Codes on Factor-Graph Permutations} The error-correction performance of polar codes under different decoding algorithms can significantly improve if the decoding is performed independently on multiple factor-graph permutations \cite{hussami2009performance, elkelesh2018belief, Doan_GLOBECOM, CABPList, LoopSimp}. A factor-graph permutation, denoted as $\pi_p$ $(0 \leq p < n! )$, is constructed by permuting the PE stages of the polar code factor graph \cite{hussami2009performance}. For instance, Fig.~\ref{fig:BPGraph}a shows the original factor graph of $\mathcal{P}(8,5)$, denoted as $\pi_0=\{s_0,s_1,s_2\}$.
Permuting the PEs in stages $s_1$ and $s_2$ in Fig.~\ref{fig:BPGraph}a forms another factor-graph permutation, $\pi_1=\{s_0,s_2,s_1\}$. It was shown that there is a one-to-one mapping between the factor-graph permutations and the bit-index permutations of the original factor-graph \cite{Doan_GLOBECOM}. Thus, the decoding of polar codes on their permuted factor graphs can be performed by running the decoder on the permuted bit-indices of the original factor graph. This keeps the architecture of the decoder unchanged \cite{Doan_GLOBECOM}. In this paper, given $\pi_p$ and $\bm{L}$, we use the technique presented in \cite{Doan_GLOBECOM} to form the corresponding permuted bit-indices of the channel LLR values, $\bm{L}_{\pi_p}$. We then apply CABP decoding on $\bm{L}_{\pi_p}$ using the original factor-graph. Note that the permuted soft messages of the information bit stage, $\bm{l}_{0_{\pi_p}}$, are permuted back to $\bm{l}_0$ before running BP decoding on the CRC factor-graph. Given $\bm{L}$ and $\pi_p$, we consider CABP decoding as a function and denote its output as $\bm{\hat{u}}=\CABP(\bm{L}, \pi_p)$. In addition, throughout this paper, we refer to $\pi_0$ as the permutation corresponding to the original factor-graph. \section{Multi-Armed Bandit Problem} \label{sec:bandit} A multi-armed bandit problem, or a $k$-armed bandit problem $(k>1)$, is an RL problem where an agent has to repeatedly make a choice among $k$ different actions (options). After each action is performed, the agent receives a numerical reward that is drawn from a distribution that depends on the selected action. The agent's objective is to maximize the expected cumulative reward over a time period \cite{Sutton}. Let ${\mathcal{A}=\{a_1,a_2,\ldots,a_k\}}$ be the set of actions and $q^*(a_j)$ $(1\leq j \leq k)$ be the corresponding expected reward of an action $a_j$. $q^*(a_j)$ is called the value function and its value is unknown to the agent.
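As a sketch of this equivalence, and assuming the stage permutation acts by permuting the bits of each index's binary expansion (our illustrative reading of the mapping in \cite{Doan_GLOBECOM}; the function name is ours), the permuted bit-indices can be generated as follows:

```python
def permuted_bit_indices(perm):
    """Given a stage permutation perm (a reordering of 0..n-1), return the
    corresponding permutation of the bit indices 0..2**n - 1 obtained by
    permuting the bits of each index's binary expansion."""
    n = len(perm)
    out = []
    for i in range(1 << n):
        bits = [(i >> s) & 1 for s in range(n)]   # bit s of index i
        out.append(sum(bits[perm[s]] << s for s in range(n)))
    return out
```

Applying the resulting index permutation to the channel LLR vector $\bm{L}$ yields $\bm{L}_{\pi_p}$, and applying the inverse permutation recovers the original ordering.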
In this paper, we consider three state-of-the-art algorithms designed for the multi-armed bandit problem, namely, $\varepsilon$-greedy, upper confidence bound (UCB), and Thompson sampling (TS). \subsection{$\varepsilon$-Greedy and UCB Algorithms} Let $n_{a_j}$ be the number of times that an action $a_j$ is selected up to the $t$-th time step. If $a_j$ is selected at the $t$-th time step, $n_{a_j}$ is updated as ${n_{a_j}:=n_{a_j}+1}$ \cite{Sutton}. Then, the value function $q^*(a_j)$ is estimated as $Q_{a_j}$ in accordance with ${Q_{a_j} := Q_{a_j} + \frac{1}{n_{a_j}}\left[R_t-Q_{a_j}\right]}$, where $R_t$ is the reward received by selecting action $a_j$ at the $t$-th time step \cite{Sutton}. Initially, $Q_{a_j}$ and $n_{a_j}$ are set to $0$ $(\forall j, 1 \leq j \leq k)$. Given the estimated expected rewards $Q_{a_j}$, an exploitation occurs when the agent selects an action that has the largest expected reward value \cite{Sutton}. On the other hand, an exploration occurs when the agent selects any action that does not have the largest expected reward value \cite{Sutton}. Let $a_{j^*}$ be the action selected by the agent at the $t$-th time step. Under the $\varepsilon$-greedy algorithm $a_{j^*}$ is selected as \cite{Sutton} \begin{equation} a_{j^*} = \begin{cases} \displaystyle\argmax_{\forall {a_j}} Q_{a_j} &\text{with probability $1-\varepsilon$,}\\ a_\text{random} &\text{with probability $\varepsilon$,}\\ \end{cases} \end{equation} where $a_\text{random}$ is a random action drawn i.i.d. from $\mathcal{A}$. On the other hand, under the UCB algorithm $a_{j^*}$ is selected as \begin{equation} a_{j^*}=\argmax_{\forall a_j} \left[Q_{a_j}+c\sqrt{\frac{\ln t}{n_{a_j}}}\right], \end{equation} where $n_{a_j}\neq0$ and $c \in \mathbb{R}^+$. If $n_{a_j}=0$, $a_j$ is considered a maximizing action and is selected first. Note that $\varepsilon$ and $c$ control the degree of exploration of the $\varepsilon$-greedy and UCB algorithms, respectively.
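Both selection rules and the incremental value update above can be sketched compactly in Python; the helper names are ours and the sketch is for illustration only.

```python
import math
import random

def eps_greedy(Q, eps):
    # With probability eps explore uniformly; otherwise exploit argmax Q.
    if random.random() < eps:
        return random.randrange(len(Q))
    return max(range(len(Q)), key=lambda j: Q[j])

def ucb(Q, n_sel, t, c):
    # Unvisited actions are selected first; otherwise maximize the bound.
    for j, n in enumerate(n_sel):
        if n == 0:
            return j
    return max(range(len(Q)),
               key=lambda j: Q[j] + c * math.sqrt(math.log(t) / n_sel[j]))

def update_value(Q, n_sel, j, reward):
    # Incremental sample-mean estimate of q*(a_j).
    n_sel[j] += 1
    Q[j] += (reward - Q[j]) / n_sel[j]
```

The incremental update avoids storing past rewards: `Q[j]` is always the running mean of the rewards received for action `j`.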
\subsection{Thompson Sampling} Instead of estimating the expected reward value $q^*(a_j)$ as in the $\varepsilon$-greedy and UCB algorithms, the TS algorithm directly estimates the distribution of the reward value associated with each action. In this paper, as $R_t\in\{0,1\}$, a Beta distribution is used to estimate the reward distribution \cite{TS}. A Beta distribution has two shape parameters, $\alpha,\beta \in \mathbb{R}^+$, and a different set of shape parameters is used for each action. We denote a random sample from the estimated reward distribution of the $j$-th action as $\upsilon_{a_j}=\Beta(\alpha_{a_j}, \beta_{a_j})$. At the $t$-th time step, the TS algorithm first draws a random sample from each of the estimated reward distributions. The agent then selects the action $a_{j^*}$ as $a_{j^*} = \argmax_{\forall {a_j}} \upsilon_{a_j}$. The shape parameters corresponding to the selected action $a_{j^*}$ are then updated as $\alpha_{a_{j^*}}:=\alpha_{a_{j^*}}+R_t$ and $\beta_{a_{j^*}}:=\beta_{a_{j^*}}+1-R_t$ \cite{TS}. Initially, ${\alpha_{a_j}=\beta_{a_j}=1}$ ${(\forall j, 1 \leq j \leq k)}$ \cite{TS}. \section{Selection of Factor-Graph Permutations with Reinforcement Learning} \label{sec:RL-CABP} This section first formalizes the selection of factor-graph permutations for polar decoding as a $k$-armed bandit problem. It then introduces the proposed decoding method that utilizes the multi-armed bandit algorithms in Section~\ref{sec:bandit} to select the factor-graph permutations under CABP decoding. \subsection{Problem Formulation} Under BP decoding of polar codes, the original factor-graph permutation $\pi_0$ is empirically observed to have the best error-correction performance compared to other factor-graph permutations \cite{hussami2009performance}.
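In code, one TS step amounts to sampling from each Beta posterior and applying the standard Bernoulli update (success increments $\alpha$, failure increments $\beta$). The sketch below is a minimal illustration with our own naming:

```python
import random

def ts_select(alpha, beta):
    # Draw one sample per action from Beta(alpha_j, beta_j); pick the argmax.
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    return max(range(len(samples)), key=lambda j: samples[j])

def ts_update(alpha, beta, j, reward):
    # Bernoulli reward: a success grows alpha, a failure grows beta.
    alpha[j] += reward
    beta[j] += 1 - reward
```

With the uniform prior $\alpha=\beta=1$, the posterior mean for each action converges to its empirical success rate as rewards accumulate.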
However, there are cases in which a specific channel output realization that cannot be decoded using the original factor-graph permutation can be decoded using another factor-graph permutation \cite{hussami2009performance}. As the number of permutations, $n!$, is large, running BP decoding on all of the permutations is not feasible in practical applications. Instead, the decoding is performed on a small set of $M$ factor-graph permutations, including the original factor-graph permutation \cite{hussami2009performance,elkelesh2018belief,Doan_GLOBECOM,CABPList}. Let an action $a_j \in \mathcal{A}$ $(1\leq j \leq k)$ be a random selection of $M-1$ $(M>1)$ factor-graph permutations that do not include the original factor-graph permutation. Suppose that the CRC verification is not successful when CABP decoding is performed on the original factor-graph permutation $\pi_0$. The proposed decoder then selects an action $a_j$ from the set $\mathcal{A}$. If one of the factor-graph permutations in $a_j$ results in a successful CRC verification, a reward of $1$ is given to the decoder. Otherwise, if none of the permutations in $a_j$ results in a successful CRC verification under CABP decoding, a reward of $0$ is given to the decoder. Therefore, among $k$ sets of predefined factor-graph permutations, i.e., $k$ different actions, the proposed decoding algorithm decides which set of factor-graph permutations maximizes the reward during the course of decoding. The selection of factor-graph permutations for CABP decoding can thus be formalized as a $k$-armed bandit problem as defined in Section~\ref{sec:bandit}.
\subsection{Reinforcement Learning-Aided CABP} \label{sec:RL-CABPB} \begin{algorithm}[t] \DontPrintSemicolon \caption{Forming the action set} \label{alg1} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$n,k,M$} \Output{$\mathcal{A}$} \tcp{Define the original permutation} $\pi_0\leftarrow\{s_0,s_1,\cdots,s_{n-1}\}$\\ \tcp{Select $M-1$ random permutations for each action} $\mathcal{A} \leftarrow \emptyset$\\ \For{$j \leftarrow 1$ \KwTo $k$} { $a_j \leftarrow \emptyset$\\ \For{$t \leftarrow 1$ \KwTo $M-1$} { $\pi_{j,t} \leftarrow \text{RandShuffle}(\pi_0)$\\ $a_j \leftarrow a_j \cup \pi_{j,t}$\\ } $\mathcal{A} \leftarrow \mathcal{A} \cup a_j$ } \Return $\mathcal{A}$ \end{algorithm} The proposed decoding algorithm starts with the construction of $\mathcal{A}$, the set of $k$ different actions, which is outlined in Algorithm~\ref{alg1}. Each action $a_j \in \mathcal{A}$ contains $M-1$ random factor-graph permutations. Formally, $a_j = \{\pi_{j,1},\pi_{j,2},\cdots,\pi_{j,M-1}\}$, ${\pi_{j,t} \neq \pi_0}$ $\forall j,t$, where $1\leq j \leq k$, and $1 \leq t \leq M-1$. A random factor-graph permutation is formed by randomly permuting the PE stages of the original factor graph $\pi_0$, which is obtained by the $\text{RandShuffle}$ function in Algorithm~\ref{alg1}. The number of all possible actions is \begin{equation} \label{equ:k_max} k_{\max}={{n!-1}\choose{M-1}}=\frac{(n!-1)!}{(M-1)!(n!-M)!}, \end{equation} which is generally intractable for practical values of $n$ and $M$. Therefore, only the subset $\mathcal{A}$ of all the possible actions is considered. In fact, $\mathcal{A}$ is constructed by randomly sampling from the complete set of actions as shown in Algorithm~\ref{alg1}. Note that after $\mathcal{A}$ is formed, the set of actions in $\mathcal{A}$ remains unchanged during the course of decoding. 
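Algorithm~\ref{alg1} can be rendered in Python as below. Note that we additionally reject $\pi_0$ and duplicate permutations inside the sampling loop, a detail that the text requires ($\pi_{j,t}\neq\pi_0$) but the pseudocode leaves implicit; the function name is ours.

```python
import random

def form_action_set(n, k, M):
    """Build k actions, each a set of M-1 distinct random stage
    permutations, none equal to the original permutation pi_0."""
    pi_0 = tuple(range(n))
    actions = []
    for _ in range(k):
        a_j = set()
        while len(a_j) < M - 1:
            p = list(pi_0)
            random.shuffle(p)
            if tuple(p) != pi_0:     # exclude the original permutation
                a_j.add(tuple(p))    # set membership rejects duplicates
        actions.append(a_j)
    return actions
```

As in Algorithm~\ref{alg1}, the returned set of actions is built once and then kept fixed for the whole decoding run.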
Algorithm~\ref{alg2} outlines the proposed RL-CABP decoding algorithm, given the predefined set of actions $\mathcal{A}$ constructed in Algorithm~\ref{alg1}. The proposed RL-CABP decoder first initializes the parameters of the multi-armed bandit algorithm depending on its type, which is defined by the parameter $\Algo$ in Algorithm~\ref{alg2}. If $\Algo$ indicates the $\varepsilon$-greedy or UCB algorithms, the parameters of the multi-armed bandit algorithm are initialized as $Q_{a_j}=n_{a_j}=0$ $\forall j$, $1 \leq j \leq k$. If the TS algorithm is used, the set of parameters is initialized as $\alpha_{a_j}=\beta_{a_j}=1$ $\forall j$, $1 \leq j \leq k$. Note that the initialization process is only carried out once in the course of decoding. Then, the proposed RL-CABP decoding applies CABP decoding over the original factor-graph permutation $\pi_0$. If the CRC verification, which is obtained by the $\VerifyCRC$ function in Algorithm~\ref{alg2} is successful, the proposed decoder outputs the estimated message word $\bm{\hat{u}}$ and the decoding process is terminated. Otherwise, the RL-CABP decoder selects an action $a_{j^*}$ from $\mathcal{A}$, which contains a set of $M-1$ random factor-graph permutations as described in Algorithm~\ref{alg1}. Depending on the type of the algorithm, the function $\SelectAction$ implements the selection criteria of the considered multi-armed bandit algorithms as introduced in Section~\ref{sec:bandit}. Note that the $\SelectAction$ function can be performed in parallel with the first CABP decoding attempt as there is no dependency between them. Therefore, the selected action $a_{j^*}$ can be obtained in advance without adding a latency overhead to the proposed decoding algorithm. Moreover, if the first CABP decoding attempt over $\pi_0$ is successful, the selected action $a_{j^*}$ is discarded. 
If the first CABP decoding attempt fails in the proposed RL-CABP decoding algorithm, additional CABP decoding attempts are sequentially carried out over the factor-graph permutations specified by $a_{j^*}$. As soon as the CRC verification is successful after CABP decoding on one of the factor-graph permutations in $a_{j^*}$, a reward of $1$ is given to $a_{j^*}$, and the decoding outputs the estimated message word that satisfies the CRC verification. On the other hand, if running CABP on all of the permutations in $a_{j^*}$ does not result in a successful CRC test, a reward of $0$ is given to $a_{j^*}$ and the decoding is declared unsuccessful. Finally, after each action selection, the parameters associated with the selected action $a_{j^*}$ are updated using the $\UpdateBandit$ function. Note that the parameter update process is based on the received reward and the type of the multi-armed bandit algorithm as provided in Section~\ref{sec:bandit}. \begin{algorithm}[t] \DontPrintSemicolon \caption{RL-CABP Decoding} \label{alg2} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$\bm{L}, \mathcal{A}, k, M, \Algo$} \Output{$\bm{\hat{u}}$} \tcp{Initialize the bandit parameters} $\InitBandit(k,\Algo)$\\ \tcp{Apply CABP decoding on $\pi_0$} $\bm{\hat{u}} \leftarrow \CABP(\bm{L},\pi_0)$\\ $isCorrect_{\pi_0} \leftarrow \VerifyCRC(\bm{\hat{u}})$\\ \tcp{Select an action in advance} $a_{j^*} \leftarrow \SelectAction (\mathcal{A},\text{Algo})$\\ \tcp{If applicable, apply CABP decoding on the permutations specified by $a_{j^*}$} \If{$(isCorrect_{\pi_0} = 0)$}{ $isCorrect_{a_{j^*}} \leftarrow 0$\\ \For{$t \leftarrow 1$ \KwTo $M-1$}{ $\bm{\hat{u}} \leftarrow \CABP(\bm{L},\pi_{j^*,t})$\\ $isCorrect_{a_{j^*}} \leftarrow \VerifyCRC(\bm{\hat{u}})$\\ \If{$(isCorrect_{a_{j^*}} = 1)$}{ \textbf{break}\\ } } \tcp{Update the bandit parameters associated with $a_{j^*}$} $R_t \leftarrow isCorrect_{a_{j^*}}$\\ $\UpdateBandit(R_t,a_{j^*},\Algo)$ } \Return $\bm{\hat{u}}$ \end{algorithm}
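The control flow of Algorithm~\ref{alg2} can be summarized by the following sketch, in which `cabp_decode` and `verify_crc` are placeholders standing in for the actual CABP decoder and CRC check, and `select_action`/`update_bandit` wrap whichever bandit algorithm is in use:

```python
def rl_cabp_decode(L, actions, pi_0, cabp_decode, verify_crc,
                   select_action, update_bandit):
    u_hat = cabp_decode(L, pi_0)
    j_star = select_action()        # can run in parallel with the first attempt
    if verify_crc(u_hat):
        return u_hat                # success on pi_0: a_j* is discarded
    reward = 0
    for pi in actions[j_star]:      # further attempts on the chosen permutations
        u_hat = cabp_decode(L, pi)
        if verify_crc(u_hat):
            reward = 1
            break
    update_bandit(j_star, reward)   # parameters updated only when pi_0 failed
    return u_hat
```

The early return on $\pi_0$ mirrors the early-termination behavior described above: the bandit state is touched only when the first attempt fails.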
\section{Experimental Results} \label{sec:exp} In this section, the performance of various multi-armed bandit algorithms used by the proposed RL-CABP decoding is numerically evaluated. In addition, the error-correction performance of the proposed RL-CABP decoding in terms of FER is compared with that of other polar decoding techniques. A complexity comparison of different multi-armed bandit algorithms in the proposed RL-CABP decoding is also given. We use $\mathcal{P}(128,64)$ selected for the eMBB control channel of the 5G standard \cite{3gpp_report}. Furthermore, the polar code is concatenated with a CRC of length $16$, which is also used in 5G \cite{3gpp_report}. The total number of factor-graph permutations used by all BP-based decoders is set to $7$. We set $I_{\max}=100$ and $I_{\min}=50$ for all BP-based decoding algorithms. Fig.~\ref{fig:param} illustrates the dependence of the average reward on the parameters in $\varepsilon$-greedy and UCB algorithms for $\mathcal{P}(128,64)$. The simulation is carried out at $E_b/N_0 = 3.0$~dB and we set $k=500$ for all multi-armed bandit algorithms. In this figure, the average reward of the first $10000$ time steps received by the RL-CABP decoder is plotted against the parameter value. Note that a time step is increased by $1$ when the multi-armed bandit algorithm is required for the action selection, i.e., when CABP decoding has failed on the original factor-graph permutation $\pi_0$. As seen from Fig.~\ref{fig:param}, at $\varepsilon=2^{-4}$ and $c=2^{-3}$, RL-CABP decoding has the highest average reward value for $\varepsilon$-greedy and UCB algorithms, respectively. The TS algorithm does not require parameter tuning since $\alpha$ and $\beta$ parameters associated with each action are optimized during the decoding process. \begin{figure}[t!] \centering \input{Figures/param_study.tikz} \hspace*{20pt} \ref{legend_param_study} \caption{A parameter study of the $\varepsilon$-greedy and UCB algorithms. 
The average reward is obtained for the first $10000$ time steps with $k=500$ at $E_b/N_0=3.0$ dB.} \label{fig:param} \end{figure} Fig.~\ref{fig:k} illustrates the performance of multi-armed bandit algorithms used by RL-CABP decoding with different values of $k$. This simulation is also carried out at $E_b/N_0 = 3.0$~dB. We set $\varepsilon=2^{-4}$ for the $\varepsilon$-greedy algorithm and $c=2^{-3}$ for the UCB algorithm as those configurations provide the best performance in Fig.~\ref{fig:param}. It can be observed that for all the bandit algorithms, $k=500$ provides the largest cumulative reward after the first $10000$ time steps. Thus, we set $k=500$ for the rest of the paper. \begin{figure}[t!] \centering \input{Figures/k_selection.tikz} \hspace*{20pt} \ref{legend_k} \caption{The impact of $k$ on the performance of different multi-armed bandit algorithms used by RL-CABP decoding for $\mathcal{P}(128,64)$, obtained for the first $10000$ time steps.} \label{fig:k} \vspace*{-1\baselineskip} \end{figure} Fig.~\ref{fig:reward} illustrates the average cumulative reward over the first $10000$ time steps for all the multi-armed bandit algorithms. The simulation is performed at $E_b/N_0=3.0$ dB with $k=500$, $\varepsilon=2^{-4}$, and $c=2^{-3}$. It can be seen that the $\varepsilon$-greedy algorithm has the best performance in terms of the average cumulative reward. In addition, the UCB algorithm performs slightly better than the TS algorithm. Note that the spikes in the early part of the curves are caused by the small value of the time step, which makes the calculation of the average cumulative reward unreliable at the initial phases of the algorithm. \begin{figure}[t!] \centering \includegraphics[height=0.45\columnwidth,width=\columnwidth]{Figures/reward.pdf} \hspace*{20pt} \includegraphics[height=0.55cm,width=4.5cm]{Figures/reward_legend.pdf} \caption{Performance comparison of various multi-armed bandit algorithms used by RL-CABP decoding. 
The simulation is obtained at $E_b/N_0=3.0$ dB with $k=500$, $\varepsilon=2^{-4}$, and $c=2^{-3}$. } \label{fig:reward} \vspace*{-1\baselineskip} \end{figure} Fig.~\ref{fig:fer:1} compares the FER of different factor-graph permutation selection schemes under the CABP decoding algorithm. In this figure, CABP denotes the CABP decoding algorithm performed only on the original factor-graph permutation. CP-CABP and RP-CABP denote the cyclically-shifted and random factor-graph permutations selection schemes proposed in \cite{hussami2009performance} and \cite{elkelesh2018belief}, respectively. Note that as there are $n=7$ cyclically-shifted permutations for $\mathcal{P}(128,64)$, we set the number of additional random permutations used by RP-CABP to $6$, and $M=7$ for the proposed RL-CABP decoder for a fair comparison. It can be seen that the proposed RL-CABP decoder under various multi-armed bandit algorithms has a similar FER performance. When compared with CP-CABP and RP-CABP, an error-correction performance gain of at least $0.125$~dB is obtained at the target FER of $10^{-4}$. In addition, an FER gain of around $0.62$~dB is obtained when the proposed RL-CABP decoding algorithm is compared with the baseline CABP decoder at the FER of $10^{-4}$. Fig.~\ref{fig:fer:2} compares the error-correction performance of the proposed RL-CABP decoding with BP decoding and CA-SCL decoding of polar codes. In Fig.~\ref{fig:fer:2}, CA-SCL$L$ indicates the CA-SCL decoder with a list size of $L$. It can be observed that at the target FER of $10^{-4}$, the FER performance of the proposed RL-CABP decoder is around $0.92$~dB better than that of the BP decoding algorithm in \cite{yuan2014early}. At the same target FER, CA-SCL$4$ provides a better error-correction performance in comparison with the proposed RL-CABP decoder. However, compared with CA-SCL$2$ at the same target FER, the proposed decoder has a performance gain of around $0.12$~dB, under different multi-armed bandit algorithms. 
\begin{figure}[t!] \centering \input{Figures/fer_PBP.tikz} \hspace*{10pt} \ref{legend_fer_comp1} \caption{Error-correction performance of different factor-graph permutation selection schemes for $\mathcal{P}(128,64)$.} \label{fig:fer:1} \vspace*{-0.5\baselineskip} \end{figure} \begin{figure}[t!] \centering \input{Figures/fer_PBP_SCL.tikz} \hspace*{10pt} \ref{legend_fer_comp2} \caption{Error-correction performance of RL-CABP decoding and other decoding algorithms of polar codes.} \label{fig:fer:2} \vspace*{-1\baselineskip} \end{figure} Table~\ref{tab:complx} shows the maximum number of computations required by various permutation selection schemes used in Fig.~\ref{fig:fer:1}. Among all the multi-armed bandit algorithms, the $\varepsilon$-greedy algorithm in general has the lowest computational complexity. This is because the TS algorithm requires a sampling process for $k$ different $\Beta$ distributions, which in general requires higher computational complexity than drawing an i.i.d. sample from the interval $(0,1)$ and performing a multiplication as required by the $\varepsilon$-greedy algorithm. In addition, although using the cyclically-shifted factor-graph permutations does not incur any additional complexity for the factor-graph permutation selection, this technique is not applicable when more than $n$ different permutations are required. It can also be observed that the main drawback of the multi-armed bandit algorithms is the sorting operations required to identify the exploitation action. However, as described in Section~\ref{sec:RL-CABPB}, the action selection process can be performed in parallel with the first CABP decoding attempt. Therefore, there is no additional latency overhead. Furthermore, the approaches in \cite{hussami2009performance} and \cite{elkelesh2018belief} come with the cost of error-correction performance degradation when compared with the proposed RL-CABP decoder as illustrated in Fig.~\ref{fig:fer:1}. \begin{table}[t!]
\caption{Computational complexity of different permutation selection schemes in terms of the maximum number of operations performed} \centering \begin{tabular}{l c c c c c} \toprule Operations & \cite{hussami2009performance} & \cite{elkelesh2018belief} & $\varepsilon$-greedy & UCB & TS \\ \midrule $+$ & 0 & 0 & 2 & 2 + $k$& 2 \\ $-$ & 0 & 0 & 1 & 1 & 0 \\ $\times$ & 0 & 0 & 1 & 1+$k$ & 0 \\ $\divisionsymbol$ & 0 & 0 & 0 & $k$ & 0 \\ $\sqrt{\color{white}{.}}$ & 0 & 0 & 0 & $k$ & 0 \\ $\ln$ & 0 & 0 & 0 & $k$ & 0 \\ Random sampling & 0 & $M-1$ & 1 & 0 & $k$ \\ Sorting & 0 & 0 & $k$ & $k$ & $k$\\ \bottomrule \end{tabular} \label{tab:complx} \vspace*{-1\baselineskip} \end{table} \section{Conclusions} \label{sec:conclude} In this paper, we first showed that the selection of factor-graph permutations for polar decoding can be formalized as a multi-armed bandit problem in RL. We then proposed an RL-CABP decoding algorithm that utilizes the state-of-the-art algorithms for the multi-armed bandit problem to select the factor-graph permutations under CABP decoding of polar codes. We showed that for a 5G polar code of length $128$, with $64$ information bits and concatenated with a $16$-bit 5G CRC, the FER of the proposed decoder is around $0.125$~dB better than that of the technique that selects the factor-graph permutations randomly, at the target FER of $10^{-4}$. In addition, we showed that there is no additional latency overhead for the selection of factor-graph permutations of the proposed decoder compared with the approach that selects the factor-graph permutations at random. \section*{Acknowledgment} S. A. Hashemi is supported by a Postdoctoral Fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC).
\section*{1. Introduction} Pure Chern-Simons field theory in 3 dimensions is a fascinating topic from many points of view. It can be used to give a path-integral representation of knot and link invariants \cite{wi} and to understand many properties of 2-dimensional conformal field theories \cite{wi,conf}. Being a topological field theory, the model has no propagating degrees of freedom. In fact, canonical quantization \cite{can} yields a Hilbert space with only finitely many physical states which can be related to the conformal blocks of (rational) conformal field theories. Perturbative covariant quantization \cite{pir,alv,gia,fal,mar,shif} shows that the theory is not only renormalizable but even ultraviolet finite. It is remarkable that despite this high degree of ``triviality'' the theory produces nontrivial radiative corrections. Pisarski and Rao \cite{pir} and Witten \cite{wi} showed that one-loop effects lead to a renormalization of the parameter $\kappa$ which multiplies the Chern-Simons 3-form in the action, \begin{equation} S_{\mbox{\rm \scriptsize CS}}[A]=i\kappa \ \frac{g^2}{8\pi} \int d^3\!x ~\varepsilon_{\alpha\beta\gamma} \: [A^a_\alpha\, \partial_{\beta}\! A^a_\gamma + \frac{1}{3}g f^{abc} A^a_{\alpha} A^b_{\beta} A^c_{\gamma}] \end{equation} A variety of gauge invariant regularization methods, including spectral flow arguments based upon the $\eta$-invariant, predict a finite difference between the bare and the renormalized value of $\kappa$: \begin{equation} \kappa_{\mbox{\rm \scriptsize ren}}=\kappa_{\mbox{\rm \scriptsize bare}}+\mbox{\rm sign}(\kappa)~T(G) \end{equation} Here $T(G)$ denotes the value of the quadratic Casimir operator of the gauge group $G$ in the adjoint representation. It is normalized such that $T(SU\!(N))=N$. The shift of $\kappa$ has a natural relation to similar shifts in the Sugawara construction of 2-dimensional conformal field theories.
On the other hand, in standard renormalization theory a relation of the type (2) is rather unusual. In a generic renormalizable but not necessarily finite theory the divergent parts of the counterterms are fixed by the requirement that the renormalized quantities should be finite. Their finite parts are not fixed by any general principle but rather depend on the renormalization scheme. It was argued that, as there are no ultraviolet divergences in Chern-Simons theory, there exists a distinguished natural renormalization scheme which leads to $\kappa_{\mbox{\rm \scriptsize ren}}=\kappa_{\mbox{\rm \scriptsize bare}}$ \cite{gia}. This contradicts the relation (2) favored by conformal field theory, but it is clear that any argument in favor of one of the two possibilities must come from considerations which lie outside the standard framework of renormalized perturbation theory. \vspace*{3mm} In this paper we investigate Chern-Simons theory along the lines of the Wilsonian renormalization group approach by using an exact evolution equation for gauge theories which was introduced recently \cite{ex,qcd}. It describes the scale dependence of the effective average action $\Gamma_k$ which can be thought of as a continuous interpolation between the classical action $S \equiv \Gamma_{k \rightarrow \infty}$ and the conventional effective action $\Gamma \equiv \Gamma_{k \rightarrow 0}$. It depends on the infrared cutoff scale $k$ in such a way that the functional $\Gamma_k$ evolves out of the classical action by integrating out only those quantum fluctuations which have momenta larger than $k$. When $k$ is lowered from infinity to zero, $\Gamma_k$ follows a certain trajectory in the space of all actions. 
This trajectory is a solution of the exact renormalization group equation \cite{ex} \begin{eqnarray} k\frac{d}{dk} \Gamma_k[A,\bar{A}] & = & \frac{1}{2} \mbox{\rm Tr} \left[\left(\Gamma_k^{(2)}[A,\bar{A}] +R_k(\Delta[\bar{A}])\right)^{-1} k \frac{d}{dk}R_k(\Delta[\bar{A}])\right] \\ && -\mbox{\rm Tr}\left[\left(-D_{\mu}[A]\, D_{\mu}[\bar{A}] + R_k(-D^2(\bar{A}))\right)^{-1} k \frac{d}{dk} R_k(-D^2[\bar{A}])\right] \nonumber \end{eqnarray} We use the background gauge fixing technique \cite{abb}. Therefore $\Gamma_k$ depends on two gauge fields: the usual classical average field $A^a_\mu$ and the background field $\bar{A}^a_\mu$. Eq.(3) has to be solved subject to the initial condition \begin{equation} \Gamma_{\infty}[A,\bar{A}] =S[A]+\frac{1}{2\alpha} \int d^d\!x~ \left(D^{ab}_{\mu}[\bar{A}]~(A_{\mu}^a-\bar{A}_{\mu}^a)\right)^2 \end{equation} where the classical action is augmented by the background gauge fixing term. Furthermore, $\Gamma^{(2)}_k[A,\bar{A}]$ denotes the matrix of the second functional derivatives of $\Gamma_k$ with respect to $A$. The function $R_k$ specifies the precise form of the infrared cutoff. It has to satisfy $\lim_{u \rightarrow 0} R_k(u)=k^2$, but is arbitrary otherwise. A convenient choice is \begin{equation} R_k(u)=u~[\exp{(u/k^2)}-1]^{-1} \end{equation} but in some cases even a simple constant $R_k=k^2$ is sufficient. Observable quantities will not depend on the form of $R_k$. A similar remark applies to the precise form of the operator $\Delta[\bar{A}] \equiv -D^2 [\bar{A}]+...$ which is essentially the covariant laplacian, possibly with additional nonminimal terms \cite{ex,qcd}. The r\^{o}le of $\Delta$ is to distinguish ``high momentum modes'' from ``low momentum modes''. If one expands all quantum fluctuations in terms of the eigenmodes of $\Delta$, then it is the modes with eigenvalues larger than $k^2$ which are integrated out in $\Gamma_k$.
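The limits that make (5) a useful infrared cutoff are easy to check numerically; the snippet below is a plain illustration of $R_k$ and is not part of any derivation here.

```python
import math

def R_k(u, k):
    """Exponential infrared cutoff R_k(u) = u / (exp(u/k^2) - 1) of Eq. (5)."""
    x = u / k**2
    if x < 1e-12:
        return k**2 * (1.0 - x / 2.0)   # series expansion: R_k(u) -> k^2 as u -> 0
    return u / math.expm1(x)            # expm1 avoids cancellation for small x
```

As required, $R_k(u)\to k^2$ for $u\to 0$ (a mass-like cutoff for the low-momentum modes), while contributions with $u\gg k^2$ are exponentially suppressed.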
The solution $\Gamma_k[A,\bar{A}]$ of (3) with (4) is gauge invariant under simultaneous gauge transformations of $A$ and $\bar{A}$. In practice solutions can be found by truncating the space of actions to a finite dimensional subspace. If one makes an ansatz for $\Gamma_k$ which contains only finitely many parameters (depending on $k$) and inserts it into (3), the functional differential equation reduces to a set of coupled ordinary differential equations for the parameter functions \cite{qcd,ahm}. The effective average action $\Gamma_k$ is closely related to a continuum version of the block-spin action of lattice systems\footnote{Also in ref.\cite{shif} a version of the Wilsonian effective action was used.}. Block-spin transformations can be iterated, and when we have already constructed $\Gamma_{k_1}$ at a certain scale $k_1$ we may view $\Gamma_{k_1}$ as the ``classical'' action for the next step of the iteration, in which an integral over $\exp{(-\Gamma_{k_1})}$ has to be performed. If we now apply this machinery to Chern-Simons field theory and try to understand the shift (2) from a renormalization group point of view, we are immediately confronted with the following puzzle. Because $S_{\mbox{\rm \scriptsize CS}}$ is not invariant under large gauge transformations, $\exp{(-S_{\mbox{\rm \scriptsize CS}})}$ is single valued only if $\kappa \in {\bf Z}$. In the renormalization group language $\kappa_{\mbox{\rm \scriptsize bare}}$ has to be identified with $\kappa(k=\infty)$ and $\kappa_{\mbox{\rm \scriptsize ren}}$ with $\kappa(k=0)$, where $\kappa=\kappa(k)$ is the scale-dependent prefactor of the Chern-Simons term. If there is a smooth interpolation between $\kappa(\infty)$ and $\kappa(0)$ a nontrivial shift (2) implies that there are intermediate scales at which $\kappa$ cannot be integer.
Hence it seems that there should be an inconsistency if we try to do the next block-spin transformation starting from a scale $k_1$ where $\kappa$ is non-integer, because we would have to integrate over a multivalued ``Boltzmann factor'' $\exp{(-\Gamma_{k_1})}$. Thus one is led to believe that any sensible solution to the evolution equation should have $\kappa_{\mbox{\rm \scriptsize ren}}=\kappa_{\mbox{\rm \scriptsize bare}}$. In the following we shall see that this ``no-go theorem'' is actually wrong: a non-zero shift is not in contradiction with a well-defined (albeit somewhat unusual) renormalization group trajectory. \section*{2. Truncating the Evolution Equation} Let us try to solve the initial value problem (3) with (4) for the classical Chern-Simons action (1). We work on flat euclidean space and allow for an arbitrary semi-simple, compact gauge group $G$. Our strategy for finding solutions of the evolution equation is to restrict the infinite dimensional space of all actions to a finite dimensional subspace by means of an appropriate ansatz for $\Gamma_k$. In the case at hand the essential physics is captured by a $\Gamma_k$ of the form \begin{eqnarray} \Gamma_k[A,N,\bar{A}] &=& i\kappa(k)~\frac{g^2}{4\pi}~I[A]+ \kappa(k)~\frac{g^2}{8\pi} \int d^3\!x\,\Bigl\{iN^a D_{\mu}^{ab}[\bar{A}]\, (A^b_{\mu}-\bar{A}^b_{\mu}) \\ \nonumber &&-i(A^a_{\mu}-\bar{A}^a_{\mu})D^{ab}_{\mu}[\bar{A}]~N^b +\alpha\, \kappa(k) \frac{g^2}{4\pi}N^aN^a\Bigr\} \end{eqnarray} with \begin{equation} I[A] \equiv \frac{1}{2} \int d^3\!x~\varepsilon_{\alpha\beta\gamma} ~[A^a_{\alpha}\,\partial_\beta\! A^a_\gamma + \frac{1}{3}g f^{abc} A^a_{\alpha}A^b_{\beta}A^c_{\gamma}] \end{equation} The first term on the RHS of (6) is the Chern-Simons action, but with a scale-dependent prefactor. In the second term we introduced an auxiliary field $N^a(x)$ in order to linearize the gauge fixing term.
By eliminating $N^a$ one recovers the classical, $k$-independent background gauge fixing term $\frac{1}{2\alpha}(D_{\mu}[\bar{A}](A_{\mu}-\bar{A}_{\mu}))^2$. In principle also the gauge fixing term could change its form during the evolution, but this effect is neglected here. The ansatz (6) is motivated by the success of similar truncations in 4 dimensions. Apart from the gauge fixing term we keep only the dimension-3 operator and neglect all terms which are ``irrelevant'' according to their canonical dimension. It was demonstrated already that in QCD \cite{ex,qcd} and in the abelian Higgs model \cite{ahm} the approximation of keeping only the relevant and the marginal terms can lead to rather accurate results which go well beyond a one-loop calculation. \vspace*{3mm} For $k\rightarrow \infty$, and upon eliminating $N^a$, the ansatz (6) reduces to (4) with the identification $\kappa(\infty) \equiv \kappa_{\mbox{\rm \scriptsize bare}}$. We shall insert (6) into the evolution equation and from the solution for the function $\kappa(k)$ we shall be able to determine the renormalized parameter $\kappa(0) \equiv \kappa_{\mbox{\rm \scriptsize ren}}$. We have to project the traces on the RHS of (3) on the subspace spanned by the truncation (6). In practice this means that we have to extract only the term proportional to $I[A]$ and to compare the coefficients of $I[A]$ on both sides of the equation. In the formalism with the auxiliary field $N^a\!,$ $\Gamma^{(2)}_k$ in (3) denotes the matrix of second functional derivatives with respect to both $A^a_\mu$ and $N,$ but with $\bar{A}^a_\mu$ fixed \cite{ex}. As we are only interested in the coefficient of $I[A]$, it is computationally advantageous to set $\bar{A}=A$ after the derivatives have been performed. Then the second variation of (6) becomes \begin{eqnarray} \delta^2\Gamma_k[A,N,A]&= & i\kappa(k) \frac{g^2}{4\pi} \int d^3\!x~\Big\{\delta\! A^a_{\mu}\, \varepsilon_{\mu \nu \alpha} D^{ab}_{\alpha}\, \delta\! 
A^b_{\nu}+\delta\! N^aD^{ab}_{\mu}\,\delta\! A^b_{\mu} \nonumber \\ && -\delta\! A^a_{\mu}D^{ab}_{\mu}N^b\Big\} +\alpha \ (\kappa(k) \frac{g^2}{4\pi})^2 \int d^3x~\delta\! N^a~\delta\! N^a \end{eqnarray} In order to facilitate the calculations we introduce three 4$\times$4 matrices $\gamma_\mu$ with matrix elements $(\gamma_\mu)_{mn}$, $m$=($\mu$,4)=1,...,4, etc., in the following way\cite{shif}: \begin{eqnarray} (\gamma_\mu)_{\alpha\beta}=\varepsilon_{\alpha\mu\beta}, & (\gamma_\mu)_{4\alpha}=-(\gamma_\mu)_{\alpha 4} =\delta_{\mu\alpha} \nonumber \\ (\gamma_\mu)_{44}=0 \hspace*{8mm} & \end{eqnarray} If we combine the gauge field fluctuation and the auxiliary field into a 4-component object $\Psi^a_m \equiv (\delta A^a_{\mu},\delta N^a)$ and choose the gauge $\alpha=0$, then (8) assumes the form \begin{equation} \delta^2\Gamma_k[A,N,A] = i\kappa(k) \frac{g^2}{4\pi} \int d^3\!x ~\Psi^a_m(\gamma_\mu)_{mn} D^{ab}_{\mu} \Psi^b_n \end{equation} so that in matrix notation \begin{equation} \Gamma^{(2)}_k=i\kappa(k) \ \frac{g^2}{4\pi} \not\!\!D \end{equation} Clearly $\not\!\!D \equiv \gamma_{\mu}D_{\mu}$ is reminiscent of a Dirac operator. In fact, the algebra of the $\gamma$-matrices is similar to the one of the Pauli matrices: $\gamma_{\mu}\gamma_{\nu}=-\delta_{\mu\nu} +\varepsilon_{\mu\nu\alpha}\gamma_{\alpha}$. Because $\gamma^+_{\mu}=-\gamma_{\mu}, \ \ \not\!\!D$ is hermitian. Its square reads \begin{equation} \not\!\!D^2=-D^2-ig\ ^*\!F_\mu\gamma_{\mu} \end{equation} where $^*\!F_{\mu} \equiv \frac{1}{2} \varepsilon_{\mu\alpha\beta}F_{\alpha\beta}$ is the dual of the field strength tensor. (In equations such as (11) and (12) $A_\mu$ and $F_{\mu\nu}$ are matrices in the adjoint representation.) Because $\not\!\!D^2$ is `almost' equal to the covariant laplacian, it is the natural candidate for the cutoff operator $\Delta$. 
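The stated algebra of the $\gamma$-matrices is easy to verify mechanically. The following Python sketch is an auxiliary check, not part of the derivation; it uses 0-based indices, with index 3 playing the role of the label 4 in (9). It builds the matrices of (9) and confirms the product rule $\gamma_{\mu}\gamma_{\nu}=-\delta_{\mu\nu}+\varepsilon_{\mu\nu\alpha}\gamma_{\alpha}$, the antihermiticity $\gamma^+_{\mu}=-\gamma_{\mu}$, and the trace identity $tr(\gamma_{\alpha}\gamma_{\mu}\gamma_{\nu})=-4\varepsilon_{\alpha\mu\nu}$ quoted in Section 3.

```python
def eps(i, j, k):
    # Levi-Civita symbol on the indices 0, 1, 2
    if {i, j, k} != {0, 1, 2}:
        return 0
    return 1 if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1

def gamma(mu):
    # matrix elements of eq. (9): (g)_{ab} = eps_{a mu b}, (g)_{3a} = -(g)_{a3} = delta_{mu a}
    g = [[0] * 4 for _ in range(4)]
    for a in range(3):
        for b in range(3):
            g[a][b] = eps(a, mu, b)
        g[3][a] = 1 if a == mu else 0
        g[a][3] = -g[3][a]
    return g

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

G = [gamma(mu) for mu in range(3)]
I = [[int(i == j) for j in range(4)] for i in range(4)]

# product rule: gamma_mu gamma_nu = -delta_{mu nu} + eps_{mu nu a} gamma_a
for mu in range(3):
    for nu in range(3):
        rhs = [[-int(mu == nu) * I[i][j]
                + sum(eps(mu, nu, a) * G[a][i][j] for a in range(3))
                for j in range(4)] for i in range(4)]
        assert matmul(G[mu], G[nu]) == rhs

# gamma_mu is real and antisymmetric, hence antihermitian
for g in G:
    assert all(g[i][j] == -g[j][i] for i in range(4) for j in range(4))

# trace identity used in Section 3: tr(g_a g_m g_n) = -4 eps_{a m n}
for a in range(3):
    for m in range(3):
        for n in range(3):
            t = sum(matmul(matmul(G[a], G[m]), G[n])[i][i] for i in range(4))
            assert t == -4 * eps(a, m, n)
```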
With this choice the evolution equation (3) reads at $\bar{A}=A$: \begin{eqnarray} ic ~k \frac{d}{dk}\kappa(k)~I[A] &=& \frac{1}{2} \mbox{\rm Tr} \left[\left(ic\kappa \not\!\!D+R_k(\not\!\!D^2)\right)^{-1} k \frac{d}{dk}R_k(\not\!\!D^2)% \right] \nonumber \\ &&- \mbox{\rm Tr} \left[\left(-D^2+R_k(-D^2)\right)^{-1} k \frac{d}{dk}R_k(-D^2)% \right] \end{eqnarray} Here $c \equiv g^2/4\pi$. The equality sign in (13) is to be understood in the sense that the term $\sim i I[A]$ has to be extracted from the RHS and all other terms have to be discarded. In particular, the second trace on the RHS of (13) is manifestly real, so it cannot match the purely imaginary $i I[A]$ and can be omitted therefore. For the same reason we may replace the first trace by $i$ times its imaginary part: \begin{equation} k \frac{d}{dk} \kappa(k)\, I[A] = - \frac{1}{2} \kappa(k)\,\mbox{\rm Tr}\left[\, \not\!\!D\ \left(c^2 \kappa^2% \not\!\!D^2 +R^2_k(\not\!\!D^2)\right)^{-1} k \frac{d}{dk}R_k(\not\!\!D^2)\right] +\cdots \end{equation} The trace in (14) involves an integration over spacetime, a summation over adjoint group indices, and a ``Dirac trace". We shall evaluate it explicitly in the next section. Before turning to that let us first look at the general structure of eq.(14). In terms of the (real) eigenvalues $\lambda$ of $\not\!\!D$ eq.(14) reads \begin{equation} \frac{d\kappa(k)}{dk^2}~I[A]=-\frac{1}{2}\kappa(k) \sum_{\lambda}\frac{\lambda}{c^2\kappa^2(k) \lambda^2+R^2_k(\lambda^2)} \cdot \frac{dR_k(\lambda^2)}{dk^2} \end{equation} where we switched from $k$ to $k^2$ as the independent variable. We observe that the sum in (15) is related to a regularized form of the spectral asymmetry of $\not\!\!\!D$. We emphasize at this point that the evolution equation (3), and therefore also (15), is well-defined, both in the infrared and the ultraviolet, without any further regularization. 
If one employs a cutoff function $R_k(u)$ which vanishes exponentially fast for $u \rightarrow \infty$ (such as (5) for example) only eigenvalues of $\Delta$ in a small neighborhood of $\lambda \approx k$ contribute significantly to the trace \cite{ex}. \vspace*{3mm} An approximate solution for $\kappa(k)$ can be obtained by integrating both sides of eq.(15) from a low scale $k^2_0$ to a higher scale $\Lambda^2$ and approximating $\kappa(k) \simeq \kappa(k_0)$ on the RHS. (In more conventional theories \cite{ahm} this type of approximation amounts to neglecting anomalous dimensions.) This yields \begin{equation} [\kappa(k_0)-\kappa(\Lambda)]~I[A] = \frac{1}{2} \kappa(k_0) \sum_{\lambda} \int^{\Lambda^2}_{k_0^2} dk^2 \ \frac{dR_k(\lambda^2)}{dk^2} \cdot \frac{\lambda}{c^2\kappa^2(k_0) \lambda^2+R^2_k(\lambda^2)} \end{equation} Upon using $R_k$ as the variable of integration one arrives at \begin{equation} [\kappa(k_0)-\kappa(\Lambda)]~I[A] = \frac{1}{2c}\ \mbox{\rm sign}(\kappa(k_0)) \sum_{\lambda}~ \mbox{\rm sign}(\lambda)\, G(\lambda;k_0,\Lambda) \end{equation} with \begin{equation} G(\lambda;k_0,\Lambda) \equiv \arctan\left[ c\,|\kappa(k_0)\lambda|\, \frac{R_\Lambda(\lambda^2)-R_{k_0}(\lambda^2)} {c^2\kappa(k_0)^2\lambda^2 + R_{\Lambda}(\lambda^2)~R_{k_0}(\lambda^2)} \right] \end{equation} Recalling the properties of $R_k$ we see that in the spectral sum (17) the contributions of eigenvalues $|\lambda| \ll k_0$ and $|\lambda|\gg\Lambda$ are strongly suppressed, and only the eigenvalues with $k_0 < |\lambda| < \Lambda$ contribute effectively. Ultimately we would like to perform the limits $k_0 \rightarrow 0$ and $\Lambda \rightarrow \infty$. In this case the sum over $\lambda$ remains without IR and UV regularization. This means that if we want to formally perform the limits $k_0 \rightarrow 0$ and $\Lambda \rightarrow \infty$ in eq.(17), we have to introduce an alternative regulator. 
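The passage from (16) to (17) and (18) can be checked numerically for a single eigenvalue. The Python sketch below is an illustrative check only, with arbitrarily chosen sample values of $c$, $\kappa(k_0)$ and $\lambda$; it evaluates the $k^2$-integral in (16) for the constant cutoff $R_k=k^2$ by a midpoint rule and compares the result with the closed form $\frac{1}{2c}\,\mbox{sign}(\kappa)\,\mbox{sign}(\lambda)\,G(\lambda;k_0,\Lambda)$ of (17), (18).

```python
import math

def lhs_numeric(c, kappa, lam, k0sq, Lsq, n=400000):
    # contribution of one eigenvalue lam to the RHS of eq. (16),
    # with R_k = k^2 so that dR_k/dk^2 = 1; midpoint rule in k^2
    h = (Lsq - k0sq) / n
    s = sum(lam / ((c * kappa * lam) ** 2 + (k0sq + (i + 0.5) * h) ** 2)
            for i in range(n))
    return 0.5 * kappa * s * h

def rhs_closed(c, kappa, lam, k0sq, Lsq):
    # the same contribution in the closed form of eqs. (17)-(18)
    G = math.atan(c * abs(kappa * lam) * (Lsq - k0sq)
                  / ((c * kappa * lam) ** 2 + Lsq * k0sq))
    return (1.0 / (2.0 * c)) * math.copysign(1.0, kappa) \
        * math.copysign(1.0, lam) * G

c, kappa, lam = 0.3, 2.0, 1.5      # arbitrary test values
assert abs(lhs_numeric(c, kappa, lam, 0.1, 50.0)
           - rhs_closed(c, kappa, lam, 0.1, 50.0)) < 1e-6
```

The agreement simply reflects that, per eigenvalue, the $R_k$-integration is an exact arctangent; the particular numbers carry no physical meaning.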
In order to make contact with the standard spectral flow argument \cite{wi} let us briefly describe this procedure. We avoid IR divergences by putting the system in a finite volume and imposing boundary conditions such that there are no zero modes. In the UV we regularize with a zeta-function-type convergence factor $|\lambda/\mu|^{-s}$ where $\mu$ is an arbitrary mass parameter. Thus the spectral sum becomes \begin{equation} \lim_{s \rightarrow 0}\ \sum_{\lambda} \mbox{\rm sign}(\lambda)\, \left|\frac{\lambda}{\mu}\right|^{-s} G(\lambda; k_0,\Lambda) \end{equation} Now we interchange the limits $k_0 \rightarrow 0$, $\Lambda \rightarrow \infty$ and $s \rightarrow 0$. By construction, only finite $(|\lambda| \leq \mu)$ and nonzero eigenvalues contribute in (19). For such $\lambda$'s we have $G(\lambda; 0,\infty)=\pi/2$ irrespective of the precise form of $R_k$. Therefore (17) becomes \begin{equation} [\kappa(0)-\kappa(\infty)]~I[A]= \frac{2\pi^2}{g^2}~\mbox{\rm sign}(\kappa(0)) ~\eta[A] \end{equation} where $\eta[A] \equiv \lim_{s \rightarrow 0} \frac{1}{2} \sum_{\lambda} \mbox{\rm sign}(\lambda)~|\lambda/\mu|^{-s} $ is the eta-invariant. If we insert the known result \cite{wi} $\eta[A]=(g^2/2\pi^2)~T(G)~I[A]$ we find that in agreement with eq.(2) \begin{equation} \kappa(0)=\kappa(\infty)+\mbox{\rm sign}(\kappa(0))~T(G) \end{equation} We see that at least at the formal level the function $R_k$ has dropped out of the calculation. In this sense the shift of the parameter $\kappa$ is universal: it does not depend on the form of the IR cutoff. \section*{3. Explicit Calculation} Next we turn to the evaluation of the trace in eq.(14). The derivation in this section does not rely on formal manipulations of spectral sums, and it will keep the full $k$-dependence of $\kappa$ on the RHS. It is precisely this $\kappa(k)$-dependence on the RHS of the evolution equation which implements the ``renormalization group improvement" \cite{ex,qcd}. 
To start with we use the constant cutoff $R_k=k^2$ for which eq.(14) assumes the form\footnote{Even with $R_k=k^2$ there are no convergence problems for $\lambda \rightarrow \infty$ in eq.(15). The extraction of the term $\sim I[A]$ from the spectral sum involves derivatives which improve the convergence, see eq.(25) below. } \begin{equation} \frac{d}{dk^2}\kappa(k)~I[A] =-\frac{1}{2c^2\kappa(k)}% \,\mbox{\rm Tr}\left[\, \not\!\!D\left(\not\!\!D^2+l(k)^2\right)^{-1}\right] \end{equation} where \begin{equation} l(k) \equiv \frac{k^2}{c~|\kappa(k)|} \end{equation} (Note that in 3 dimensions $c \equiv g^2/4\pi$ and hence also $l$ has the dimension of a mass.) Our strategy is to extract from the trace the term quadratic in $A$ and linear in the external momentum, and to equate the coefficients of the $A\,\partial \! A$-terms on both sides. (Using the $A^3$-term instead leads to the same answer.) Using $tr(\gamma_{\alpha}\gamma_{\mu}\gamma_{\nu}) =-4\varepsilon_{\alpha\mu\nu}$, \ \ $f^{acd}f^{bcd}=T(G)\,\delta^{ab}$ and similar identities one obtains after some algebra \begin{equation} \frac{d\kappa(k)}{dk^2} \int d^3\!x ~\varepsilon_{\alpha\beta\gamma} ~A^a_{\alpha}\,\partial_{\beta}\!A^a_{\gamma}= -\frac{g^2T(G)}{c^2\kappa(k)} \int d^3\!x\, \varepsilon_{\alpha\beta\gamma}\, A^a_{\alpha}\Pi_k(-\partial^2) \partial_{\beta}\!A^a_{\gamma}+O(A^3) \end{equation} The function $\Pi_k$ is given by the Feynman parameter integral \begin{equation} \Pi_k(q^2)=8 \int_0^1 dx~x(1-x) \int \frac{d^3p}{(2\pi)^3} \, \frac{q^2}{[p^2+l^2+x(1-x)q^2]^3} \end{equation} Expanding $\Pi_k(-\partial^2)= \Pi_k(0)-\Pi'_k(0)\partial^2+ ...$, we see that only for the term with $\Pi_k(0)$ the number of derivatives on both sides of eq.(24) coincides. Therefore one concludes that \begin{equation} \frac{d\kappa(k)}{dk^2}= - \frac{g^2 T(G)}{c^2\kappa(k)} \ \Pi_k(0) \end{equation} where $\Pi_k(0)$ depends on $\kappa(k)$ via (23). 
Equation (26) is the renormalization group equation for $\kappa(k)$ which we wanted to derive. Formally it is similar to the evolution equations which we derived for QCD\cite{ex} and for the abelian Higgs model \cite{ahm}. The very special features of Chern-Simons theory, reflecting its topological character, become obvious when we take a closer look at the function $\Pi_k(q^2)$. Assume we fix a non-zero value of $k$ $(l\neq 0)$ and let $q^2 \rightarrow 0$ in (25). Because the $l^2$-term prevents the $p$-integral from becoming IR divergent, we may set $q^2=0$ in the denominator, and we conclude that the integral vanishes $\sim q^2$. This means that the RHS of (26) is zero and that $\kappa(k)$ keeps the same value for all strictly positive values of $k$. One might be tempted to take this result as a confirmation of the ``no-go theorem" mentioned in the introduction and to conclude that $\kappa_{\mbox{\rm \scriptsize ren}}=\kappa_{\mbox{\rm \scriptsize bare}}$. This is premature, however, because $\Pi_k(0)$ really vanishes only for $k>0$. If we set $l=0$ in (25) we can no longer conclude that $\Pi_k \sim q^2$, because in the region $p^2 \rightarrow 0$ the term $x(1-x)q^2$ provides the only IR cutoff and may not be set to zero in a naive way. In fact, $\Pi_k(0)$ has a $\delta$-function-like peak at $k=0$. To see this, we first perform the integrals in (25): \begin{equation} \Pi_k(q^2)=\frac{1}{\pi}\left[ \frac{1}{2|q|} \arctan \left(\frac{|q|}{2|l|}\right)-\frac{|l|}{q^2+4l^2} \right] \end{equation} As $q^2$ approaches zero, this function develops an increasingly sharp maximum at $l=0$. Integrating (27) against a smooth test function $\Phi(l)$ it is easy to verify that \begin{equation} \lim_{q^2 \rightarrow 0} \int_0^{\infty} dl~ \Phi(l)~\Pi_k(q^2) = \frac{1}{4\pi} \Phi(0) \end{equation} This means that on the space of even test functions $\lim_{q^2 \rightarrow 0}\Pi_k(q^2)=\delta(l)/2\pi$.
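Both statements can be confirmed numerically. The following Python sketch is an auxiliary illustration with arbitrary sample values of $q$ and $l$; in (25) the angular part of the momentum integral has been carried out, $d^3p/(2\pi)^3 \rightarrow p^2\,dp/(2\pi^2)$. It first spot-checks the closed form (27) against the integral representation (25), and then verifies the smearing property (28) for the even test function $\Phi(l)=e^{-l^2}$.

```python
import math

def Pi_closed(q, l):
    # eq. (27), for q, l > 0
    return (math.atan(q / (2.0 * l)) / (2.0 * q)
            - l / (q * q + 4.0 * l * l)) / math.pi

def Pi_integral(q, l, pmax=60.0, n_p=4000, n_x=200):
    # eq. (25) by midpoint rules in x and in the radial momentum p
    hx, hp = 1.0 / n_x, pmax / n_p
    total = 0.0
    for i in range(n_x):
        x = (i + 0.5) * hx
        m2 = l * l + x * (1.0 - x) * q * q
        inner = sum(((j + 0.5) * hp) ** 2
                    / (((j + 0.5) * hp) ** 2 + m2) ** 3
                    for j in range(n_p)) * hp
        total += x * (1.0 - x) * inner * hx
    return 8.0 * q * q * total / (2.0 * math.pi ** 2)

# spot-check: (25) and (27) agree at a generic point
assert abs(Pi_integral(1.0, 0.5) - Pi_closed(1.0, 0.5)) < 1e-3

def smeared(q, phi, L=40.0, n=80000):
    # the l-integral of eq. (28) against a test function phi
    h = L / n
    return sum(phi((i + 0.5) * h) * Pi_closed(q, (i + 0.5) * h)
               for i in range(n)) * h

# smearing property (28): the limit is phi(0)/(4 pi)
assert abs(smeared(0.02, lambda l: math.exp(-l * l))
           - 1.0 / (4.0 * math.pi)) < 1e-3
```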
Even though the value of $\kappa(k)$ does not change during almost the whole evolution from $k=\infty$ down to very small scales, it performs a finite jump in the very last moment of the evolution, just before reaching $k=0$. This jump can be calculated in a well-defined manner by integrating (26) from $k^2=0$ to $k^2=\infty$: \begin{equation} \kappa(0)-\kappa(\infty)= 4\pi~T(G)~\lim_{q^2 \rightarrow 0} \int_0^{\infty} dl~\mbox{\rm sign}(\kappa(l)) \cdot \left[1-c \, l \frac{d}{dk^2} |\kappa(k)|\right]^{-1} \Pi_k(q^2) \end{equation} The term $\sim d|\kappa|/dk^2$ is a Jacobian factor which is due to the fact that $l$ depends on $\kappa(k)$. This factor is the only remnant of the $\kappa(k)$-dependence of the RHS of the evolution equation. We mentioned already that, in more conventional theories, this dependence of the RHS on the running couplings is the origin of the renormalization group improvement. Chern-Simons theory is special also in this respect. If we use (28) in (29), $l~d|\kappa|/dk^2$ is set to zero and we find \begin{equation} \kappa(0)=\kappa(\infty)+\mbox{\rm sign}(\kappa(0))~T(G), \end{equation} which is precisely the 1-loop result. It is straightforward to check that the shift (30) is independent of the choice for $R_k$. For a generic cutoff the momentum space integral (25) becomes more complicated and depends on $R_k$ nontrivially. Nevertheless, by an argument similar to the one following eq.(16) the relation (30) can be seen to hold for any $R_k$. \section*{4. Conclusion} We used an exact and manifestly gauge invariant evolution equation in order to study the renormalization of the Chern-Simons parameter. The method of truncating the space of actions allows us to obtain nonperturbative solutions which require neither an expansion in the number of loops nor in the gauge coupling. The approximation involved here is that during the evolution the mixing of the Chern-Simons term with other operators is neglected. 
This approach has already been tested in the abelian Higgs model \cite{ahm} and in QCD\cite{ex,qcd}. The results obtained for Chern-Simons theory are strikingly different in at least two respects. \vspace*{3mm} Like $\kappa$, the gauge coupling of QCD$_4$, for instance, is a universal quantity. Its running is governed by an $R_k$-independent $\beta$-function which leads to a logarithmic dependence on the scale $k$. The Chern-Simons parameter $\kappa$, on the other hand, does not run at all between $k=\infty$ and any infinitesimally small value of $k$. Only at the very end of the evolution, when $k$ is very close to zero, does $\kappa$ jump by a universal, unambiguously calculable amount $\pm T(G)$. Though surprising in comparison with non-topological theories, this feature is precisely what one would expect if one recalls the topological origin of a non-vanishing $\eta$-invariant \cite{wi}. If $\eta[A]\neq 0$ for a fixed gauge field $A$, some of the low-lying eigenvalues of $\not\!\!D[A]$ must have crossed zero during the interpolation from $A=0$ to $A$. However, this spectral flow involves only that part of the spectrum which, in the infinite volume limit, is infinitesimally close to zero. It is gratifying to see that even without an artificial discretization of the spectrum (by a finite volume) the spectral flow is correctly described by the evolution equation. A jump in $\kappa$, rather than a continuous evolution, resolves the puzzle mentioned in the introduction: at $k>0$ the iterated block-spin transformations are all well-defined, but their limit is nontrivial. It is also remarkable that the evolution equation by itself is well-defined even for noninteger $\kappa$. The quantization condition follows only if we require the limit $\lim_{k\rightarrow 0} \exp(-\Gamma_k)$ to be a single-valued functional\footnote{A similar phenomenon occurs in stochastic quantization \cite{sto}.}.
\vspace*{3mm} The second unusual feature of Chern-Simons theory is the absence of any renormalization group improvement beyond the 1-loop result. This situation has to be contrasted with the running of $g$ in QCD$_4$, for instance, where a truncation similar to the one used here leads to a nonperturbative $\beta$-function involving arbitrarily high powers of $g$. We emphasize that our exact evolution equation with the truncation (6) potentially goes far beyond a 1-loop calculation. It is therefore quite remarkable that in Chern-Simons theory all higher contributions vanish. It is not possible to translate such a ``nonrenormalization theorem" for a given truncation into a statement about the nonrenormalization at a given number of loops. Nevertheless, our results point in the same direction as ref.~\cite{gia} where the absence of 2-loop corrections was proven. As there are gauge-invariant regularizations which do not produce the shift (2) \cite{pro} it remains an open question whether more complicated truncations could modify the above picture. \noindent Acknowledgement: I would like to thank E.~Gozzi and C.~Wetterich for helpful discussions.
\section{Introduction} The motivating problem for this article is the characterization of maximal non-Hamiltonian (MNH) graphs. Skupien and co-authors give the first broad family of MNH graphs in~\cite{SkupienMNH} and describe all MNH graphs with 10 or fewer vertices in~\cite{SkupienCat}. The latter paper also includes three constructions---types $A1$, $A2$, $A3$---with a similar structure. Zelinka gave two constructions of graphs that are maximal non-traceable; that is, they have no Hamiltonian path, but the addition of any edge gives a Hamiltonian path. The join of such a graph with a single vertex gives an MNH graph. Zelinka's first family produces, under the join with $K_1$, the Skupien MNH graphs from \cite{SkupienMNH}. Zelinka's second family is a broad generalization of the type $A1$, $A2$, and $A3$ graphs of \cite{SkupienCat}. Bullock et al.~\cite{Bullock} provide further examples of infinite families of maximal non-traceable graphs. In this article we work with two closely related invariants of a graph~$G$, $\check{\mu}(G)$ and $\mu(G)$. The $\mu$-invariant, introduced by Ore \cite{Ore}, is the minimum number of disjoint paths in $G$ required to cover the vertex set of $G$. We show that $\check{\mu}(G)= \mu(G)$ unless $G$ is Hamiltonian, when $\check{\mu}(G)=0$. Maximal non-Hamiltonian graphs are maximal with respect to $\check{\mu}(G)=1$, and maximal non-traceable graphs are maximal with respect to $\check{\mu}(G)=2$. It is useful to broaden the perspective to study, for arbitrary $t$, graphs that are maximal with respect to $\check{\mu}(G) = t$, which we call maximal $t$-path traceable graphs. In Section~\ref{s:traceability} we show how the $\check{\mu}$ and $\mu$ invariants behave with respect to disjoint union of graphs and the join with a complete graph.
Section~\ref{s:decomposition} derives the main result, a decomposition theorem that reduces the problem of characterizing maximal $t$-path traceable graphs to characterizing those that have no universal vertex, which we call trim. Section~\ref{s:family} presents a generalization of the Zelinka construction to $t$-path traceable graphs. \section{Traceability and Hamiltonicity} \label{s:traceability} It will be notationally convenient to say that the complete graphs $K_1$ and $K_2$ are Hamiltonian. As justification for this view, consider an undirected graph as a directed graph with each edge having a conjugate edge in the reverse direction. This perspective does not affect the Hamiltonicity of a graph with more than~2 vertices, but it does give $K_2$ a Hamiltonian cycle. Similarly, adding loops to any graph with more than~2 vertices does not alter the Hamiltonicity of the graph, but $K_1$, with an added loop, has a Hamiltonian cycle. Let $G$ be a graph. A vertex $v \in V(G)$ is called a {\em universal vertex} if $\deg(v) = |V(G)|-1$. Let $\overline{G}$ denote the {\em graph complement} of $G$, having vertex set $V(G)$ and edge set $E(K_n) \setminus E(G)$. We will use the disjoint union of two graphs, $G \sqcup H$, and the join of two graphs, $G\ast H$. The latter is $G \sqcup H$ together with the edges $\set{vw \vert v \in V(G) \text{ and } w \in V(H)}$. \begin{definition} A set of $s$ disjoint paths in a graph $G$ that includes every vertex in $G$ is an {\em $s$-path covering} of $G$. Define the following invariants. $\displaystyle \mu(G) := \min_{s\in \mathbb{N}} \{ \exists s \text{-path covering of } G \}$. $\displaystyle \check{\mu}(G) := \min_{l\in \mathbb{N}_0} \{ K_l \ast G \text{ is Hamiltonian } \}$ $\displaystyle i_H(G) := \begin{cases} 1 & \text{ if } G \text{ is Hamiltonian}\\ 0 & \text{ otherwise} \end{cases}$ We will say $G$ is {\em $t$-path traceable} when $\mu(G) = t$.
A set of $t$ disjoint paths that cover a $t$-path traceable graph $G$ is a {\em minimal path covering}. \end{definition} Note that $K_r*(K_s*G) = K_{r+s}*G$. If $G$ is Hamiltonian then so is $K_r*G$ for $r\geq 0$. (In particular this is true for $G = K_1$ and $G=K_2$.) We now have a series of lemmas that lead to the main result of this section, which is a formula showing how the $\mu$-invariant and $\check{\mu}$-invariant behave with respect to disjoint union and the join with a complete graph. \begin{lemma} \label{l:alpha} $\displaystyle \check{\mu}(G) = \min_{l\in \mathbb{N}_0} \{ \overline{K_l} \ast G \text{ is Hamiltonian }\}$ \end{lemma} \begin{proof} Since $\overline{K_l}\ast G$ is a subgraph of $K_l \ast G$, a Hamiltonian cycle in $\overline{K_l}\ast G$ would also be one in $K_l \ast G$. Let $\check{\mu}(G)= a$. Suppose $C$ is a Hamiltonian cycle in $K_a \ast G$ and write $C$ as $v \sim P_1 \sim Q_1 \sim \ldots \sim P_s \sim Q_s \sim v$, where $v$ is a vertex in $G$ and the paths $P_i \in G$ and $Q_i \in K_a$. If any $Q_i$ contains 2 or more vertices, say $u$ and $w_1, \ldots, w_k$ with $k \geq 1$, then we may simply remove the vertices $w_1, \ldots, w_k$ and end up with a Hamiltonian cycle in $K_{a-k} \ast G$. This contradicts the minimality of $a=\check{\mu}(G)$. Therefore each $Q_i$ must consist of a single vertex of $K_{a}$, and any Hamiltonian cycle on $K_a \ast G$ is also a Hamiltonian cycle on $\overline{K_a} \ast G$. \end{proof} \begin{lemma} {\label{l:ami}} $\check{\mu}(G) = \mu(G) -i_H(G)$ \end{lemma} \begin{proof} If $G$ is Hamiltonian (including $K_1$ and $K_2$) then $\check{\mu}(G)=0$ and $\mu(G)= 1$, so the equality holds. Suppose $G$ is non-Hamiltonian with $\mu(G)= t$ and $t$-path covering $P_1, \dots, P_t$. Let $K_t$ have vertices $u_1, \dots, u_t$. In the graph $K_t*G$, there is a Hamiltonian cycle: $u_1\sim P_1\sim u_2\sim P_2 \sim\cdots\sim u_t \sim P_t\sim u_1$. Thus $\check{\mu}(G) \leq t = \mu(G)$.
Let $\check{\mu}(G)= a$, so there is a Hamiltonian cycle in $K_a*G$. Removing the vertices of $K_a$ breaks the cycle into at most $a$ disjoint paths covering $G$. Thus $\mu(G) \leq \check{\mu}(G)$. \end{proof} \begin{lemma} \label{l:disjoint} $\mu(G \sqcup H) = \mu(G) + \mu(H)$ and $\check{\mu}(G \sqcup H) = \check{\mu}(G)+ \check{\mu}(H) + i_H(G)+i_H(H)$. \end{lemma} \begin{proof} A path covering of $G$ may be combined with a path covering of $H$ to create one for $G \sqcup H$. Conversely, paths in a $t$-path covering of $G\sqcup H$ can be partitioned into those contained in $G$ and those contained in $H$, giving a path covering of $G$ and one of $H$. Consequently \[\mu(G\sqcup H) = \mu(G)+ \mu(H)\] Since $G\sqcup H$ is not Hamiltonian we have \begin{align*} \check{\mu}(G \sqcup H) &= \mu(G \sqcup H) - i_H(G \sqcup H)\\ & = \mu(G) + \mu(H) \\ & = \check{\mu}(G) + i_H(G) + \check{\mu}(H) + i_H(H) \end{align*} \end{proof} \begin{lemma}{\label{l:star}} For any graph $G$, \begin{align*} \mu(K_s \ast G) &= \max \{1, \mu(G) -s \}\\ \check{\mu}(K_s \ast G) &= \max \{0, \check{\mu}(G) -s \} \end{align*} In particular, if $K_s*G$ is Hamiltonian then $\mu(K_s*G) = 1$ and $\check{\mu}(K_s*G) = 0$; otherwise, $\mu(K_s \ast G) =\mu(G) -s $ and $\check{\mu}(K_s \ast G) = \check{\mu}(G) -s$. \end{lemma} \begin{proof} The formula for $\check{\mu}$ is immediate when $G$ is Hamiltonian since we have observed that this forces $K_s*G$ to be Hamiltonian. Otherwise, it follows from $K_r*(K_s*G)= K_{r+s}*G$: if $\check{\mu}(G)= a$, then $K_r*(K_s*G) $ is Hamiltonian if and only if $r+s\geq a$. The formula for $\mu$ may be derived from the result for $\check{\mu}$ using Lemma~\ref{l:ami}. We may also prove it directly. Observe that it is enough to prove $\mu(K_1*G)= \max\{1,\mu(G)-1\}$. Let $u$ be the vertex of $K_1$. Let $\mu(G)= t$ and let $P_1, \dots, P_t$ be a $t$-path covering of $G$. If $t=1$ then $u$ can be connected to the initial vertex of $P_1$ to create a 1-path covering of $K_1*G$.
For $t\geq 2$, the path $P_1\sim u \sim P_2$ along with $P_3, \dots, P_t$ gives a $(t-1)$-path covering of $K_1*G$. Thus for $t>1$, $\mu(K_1*G) \leq t-1$. Suppose $Q_1, \dots, Q_d$ were a minimal $d$-path covering of $K_1*G$, with $u$ a vertex of $Q_1$. Removing $u$ gives at most a $(d+1)$-path covering of $G$. Thus $\mu(K_1*G) +1 \geq t$. This shows $\mu(K_1*G)=\mu(G)-1$ for $\mu(G) \geq 2$. \end{proof} The main result of this section is the following two formulas for the $\mu$ and $\check{\mu}$ invariants for the disjoint union of graphs, and the join with a complete graph. \begin{proposition}{\label{p:lemmy}} Let $\displaystyle \{G_j\}_{j=1}^m$ be graphs. $\displaystyle \mu\big(\bigsqcup_{j=1}^m G_j\big) =\sum_{j=1}^m\mu(G_j)$ and $\displaystyle \check{\mu}\big(\bigsqcup_{j=1}^m G_j\big) =\sum_{j=1}^m\check{\mu}(G_j) + \sum_{j=1}^mi_H(G_j)$. Furthermore, $\displaystyle \check{\mu} \big( (\bigsqcup_{j=1}^m G_j)~\ast~K_r \big) =\max \big\{ 0 , \sum_{j=1}^m\check{\mu}(G_j) + \sum_{j=1}^mi_H(G_j) -r \big\}$. \end{proposition} \begin{proof} We proceed by induction. The base case of two graphs is exactly Lemma~\ref{l:disjoint}. Assume the formula holds for $k$ graphs; we will prove it for $k+1$ graphs.
\begin{align*} \mu\big(\bigsqcup_{j=1}^{k+1} G_j\big) &= \mu\big( (\bigsqcup_{j=1}^{k} G_j) \sqcup G_{k+1} \big) \\ &= \mu\big(\bigsqcup_{j=1}^{k} G_j\big) + \mu\big( G_{k+1} \big) \\ &= \sum_{j=1}^k \mu(G_j) + \mu\big( G_{k+1} \big) \\ &=\sum_{j=1}^{k+1} \mu(G_j) \end{align*} By Lemma~\ref{l:ami} and the fact that disjoint graphs are not Hamiltonian, we have \begin{align*} \check{\mu}\big(\bigsqcup_{j=1}^m G_j\big) &= \mu\big(\bigsqcup_{j=1}^{m} G_j\big) - i_H\big(\bigsqcup_{j=1}^{m} G_j\big) \\ &= \sum_{j=1}^m \mu(G_j) +0 \\ &=\sum_{j=1}^{m} (\check{\mu}(G_j) + i_H(G_j)) \\ &=\sum_{j=1}^m\check{\mu}(G_j) + \sum_{j=1}^mi_H(G_j) \end{align*} Therefore, we have by Lemma~\ref{l:star}, \begin{align*} \check{\mu}\big( (\bigsqcup_{j=1}^{m} G_j) \ast K_r\big) &= \max \{ 0, \check{\mu}\big(\bigsqcup_{j=1}^{m} G_j\big) - r \} \\ &= \max \{ 0, \sum_{j=1}^{m} \check{\mu}(G_j) + \sum_{j=1}^{m} i_H(G_j) -r \} \end{align*} \end{proof} The following lemma will be useful in the next section. To express it succinctly we introduce the following Boolean condition. For a graph $G$ and vertex $v\in G$, $T(v,G)$ is true if and only if $v$ is a terminal vertex in some minimal path covering of $G$. \begin{lemma} Let $v \in G$ and $w \in H$. \begin{align*} \mu \Big( ( G \sqcup H) + vw \Big) = \begin{cases} \mu(G\sqcup H) -1 & \text{ if } T(v,G) \text{ and } T(w,H)\\ \mu(G\sqcup H) & \text{ otherwise} \end{cases} \end{align*} \end{lemma} \begin{proof} Let $\mu(G) = c$, $\mu(H)= d$ and $\mu \Big( (G \sqcup H )+ vw\Big) = t$. Clearly, $t \leq c+d$. Let $R_1, \dots, R_t$ be a minimal path cover of $(G\sqcup H) + vw$. If no $R_i$ contains $vw$ then this is also a minimal path cover of $(G\sqcup H)$, so $t= c+d$. Suppose $R_1$ contains $vw$ and note that $R_1$ is the only path with vertices in both $G$ and $H$. Removing $vw$ gives two paths $P\subseteq G$ and $Q\subseteq H$. Paths $P$ and $Q$ along with $R_2, \dots, R_t$ cover $G\sqcup H$, so $t+1 \geq c+d$.
Thus, $t$ can either be $c+d$ or $c+d -1$. If $t = c+d -1$, then we have the minimal $(t+1)$-path covering $P,Q, R_2, \ldots, R_t$ of $G \sqcup H$, as above. We note that $v$ must be a terminal point of $P$ and $w$ must be a terminal point of $Q$, by construction. This path covering may be partitioned into a $c$-path covering of $G$ containing $P$ and a $d$-path covering of $H$ containing $Q$. Thus, $T(v,G)$ and $T(w,H)$ hold. Conversely, suppose $T(v,G)$ and $T(w,H)$ both hold. Let $P_1, \dots, P_c$ be a minimal path covering of $G$ with $v$ a terminal vertex of $P_1$ and let $Q_1, \dots, Q_d$ be a minimal path cover of $H$ with $w$ a terminal vertex of $Q_1$. The edge $vw$ knits $P_1$ and $Q_1$ into a single path and $P_1\sim Q_1, P_2, \dots, P_c, Q_2, \dots, Q_d$ is a $(c+d-1)$-path covering of $(G\sqcup H)+ vw$. Consequently, $t \leq c+d-1$. Thus, $T(v,G)$ and $T(w,H)$ both hold if and only if $t = c+d -1$. Otherwise, $t= c+d$. \end{proof} \begin{corollary} \label{c:disjointaddedge} Let $v \in G$ and $w \in H$. \begin{align*} \check{\mu} \Big( ( G \sqcup H) + vw \Big) = \begin{cases} \check{\mu}(G\sqcup H) -2 & \text{ if } G=H=K_1\\ \check{\mu}(G\sqcup H) -1 & \text{ if } T(v,G) \text{ and } T(w,H)\\ \check{\mu}(G\sqcup H) & \text{ otherwise} \end{cases} \end{align*} \end{corollary} \begin{proof} Let $\delta = 1$ if $T(v,G)$ and $T(w,H)$ are both true and $\delta = 0$ otherwise. Then \begin{align*} \check{\mu}\Big ( (G \sqcup H) + vw \Big) &= \mu\Big ( (G \sqcup H) + vw \Big) - i_H\Big ( (G \sqcup H) + vw \Big) \\ &= \mu(G \sqcup H) - \delta - i_H\Big ( (G \sqcup H) + vw \Big) \end{align*} The final term is $-1$ if and only if $G=H=K_1$.
\end{proof} \section{Decomposing Maximal $t$-path traceable graphs} \label{s:decomposition} In this section we prove our main result: a maximal $t$-path traceable graph may be uniquely written as the join of a complete graph and a disjoint union of graphs that are also maximal with respect to traceability and are each either complete or have no universal vertex. We work with the families of graphs $\mathscr{M}_t$ for $t\geq0$ and $\mathscr{N}_t$ for $t\geq 1$. \begin{align*} \mathscr{M}_t & := \{ G \vert \check{\mu}(G) = t \text{ and } \check{\mu}(G + e) < t, \forall e \in E(\overline{G}) \}\\ \mathscr{N}_t &:= \{ G \in \mathscr{M}_t \vert G \text{ is connected and has no universal vertex } \} \end{align*} The set $\mathscr{M}_0$ is the set of complete graphs. The set $\mathscr{M}_1$ is the set of graphs that are maximal with respect to having a Hamiltonian path but no Hamiltonian cycle, that is, the maximal non-Hamiltonian graphs. For $t > 1$, $\mathscr{M}_t$ is also the set of graphs $G$ such that $\mu(G)=t$ and $\mu(G+e) = t-1$ for any $e \in E(\overline{G})$. We will call these {\em maximal $t$-path traceable graphs}. A graph in $\mathscr{N}_t$ will be called {\em trim}. \begin{proposition} \label{p:max1} For $0 \leq s <t$, $G \in \mathscr{M}_t$ if and only if $K_s*G \in \mathscr{M}_{t-s}$. \end{proposition} \begin{proof} We have $\check{\mu}(K_s*G) = \check{\mu}(G)-s$, so we just need to show that $K_s \ast G$ is maximal if and only if $G$ is maximal. The only edges that can be added to $K_s*G$ are those between vertices of $G$, that is, $E(\overline{K_s*G}) = E(\overline{G})$. For such an edge $e$, \begin{align} \label{e:max1} \check{\mu}\Big( (K_s*G) + e\Big) &= \check{\mu}\Big( K_s* (G + e) \Big) \notag\\ &= \check{\mu}(G + e) -s \end{align} Consequently, $\check{\mu}(G+ e) = \check{\mu}(G)-1$ if and only if $\check{\mu}\Big( (K_s*G) + e\Big) = \check{\mu}(K_s*G)-1$.
\end{proof} Note that the proposition is false for $s=t>0$ since $K_s*G$ will not be a complete graph and $\mathscr{M}_0$ is the set of complete graphs. The proof breaks down in \eqref{e:max1}. \begin{proposition} \label{p:cot} Let $G \in \mathscr{M}_c$ and $H \in \mathscr{M}_d$. The following are equivalent. \begin{enumerate} \item $G \sqcup H \in \mathscr{M}_{c+d + i_H(G) + i_H(H)}$ \item Each of $G$ and $H$ is either complete or has no universal vertex. \end{enumerate} \end{proposition} \begin{proof} We have already shown that $\check{\mu}(G \sqcup H) = c+d + i_H(G) + i_H(H)$. We have to consider whether adding an edge to $G\sqcup H$ reduces the $\check{\mu}$-invariant. There are three cases to consider: the extra edge may be in $E(\overline{G})$, it may be in $E(\overline{H})$, or it may join a vertex in $G$ to one in $H$. Since $G$ is maximal, adding an edge to $G$ is either impossible, when $G$ is complete, or it reduces the $\check{\mu}$-invariant of $G$. This edge would also reduce the $\check{\mu}$-invariant of $G \sqcup H$ by Lemma~\ref{l:disjoint}. The case for adding an edge of $H$ is the same. Consider the edge $vw$ for $v \in V(G)$ and $w \in V(H)$. By Corollary~\ref{c:disjointaddedge} the $\check{\mu}$-invariant will drop if and only if $v$ is the terminal point of a path in a minimal path covering of $G$ and similarly for $w$ in $H$, that is, $T(v,G)$ and $T(w,H)$. Clearly this holds for all vertices in a complete graph. The following lemma shows that $T(v,G)$ holds for $G\in \mathscr{M}_c$ with $c>0$ if and only if $v$ is not a universal vertex in $G$. Thus, in order for $G \sqcup H$ to be maximal, $G$ must either be complete or have no universal vertex, and similarly for $H$. \end{proof} As a key step before the main theorem, the next lemma shows that in a maximal graph, each vertex is either universal or a terminal vertex in a minimal path covering. \begin{lemma} Let $c \geq 1$ and $G\in \mathscr{M}_c$.
For any two non-adjacent vertices $v,w$ in $G$ there is a $c$-path covering of $G$ in which both $v$ and $w$ are terminal points of paths. Moreover, a vertex $v\in G$ is a terminal point in some $c$-path covering if and only if $v$ is not universal. \end{lemma} \begin{proof} Suppose $c>1$ and let $v,w$ be non-adjacent in $G$. Since $G$ is maximal $G + vw$ has a $(c-1)$-path covering, $P_1, \dots, P_{c-1}$. The edge $vw$ must be contained in some $P_i$ because $G$ has no $(c-1)$-path covering. Removing that edge gives a $c$-path covering of $G$ with $v$ and $w$ as terminal vertices. The special case $c=1$ is well known: adding the edge $vw$ gives a Hamiltonian cycle, and removing it leaves a path with endpoints $v$ and $w$. A consequence is that any non-universal vertex is the terminal point of some path in a $c$-path covering. Suppose $P_1, \dots, P_c$ is a $c$-path covering of $G\in \mathscr{M}_c$ with $v$ a terminal point of $P_i$. Then $v$ is not adjacent to any of the terminal points of $P_j$ for $j\ne i$, for otherwise two paths could be combined into a single one. In the case $c=1$, $v$ cannot be adjacent to the other terminal point of $P_1$, otherwise $G$ would have a Hamiltonian cycle. Consequently a universal vertex is not a terminal point in a $c$-path covering of $G$. \end{proof} \begin{theorem} \label{t:decomposition} For any $G \in \mathscr{M}_t$, $t >0$, $G$ may be uniquely decomposed as $\displaystyle K_s \ast ( G_1 \sqcup \ldots \sqcup G_r)$, where $s$ is the number of universal vertices of $G$, and each $G_j$ is either complete or $G_j\in \mathscr{N}_{t_j}$ for some $t_j > 0$. Furthermore $\displaystyle t = \sum_{j=1}^r t_j + \sum_{j=1}^ri_H(G_j) - s$. \end{theorem} \begin{proof} Suppose $G \in \mathscr{M}_t$ and let $s$ be the number of universal vertices of $G$. Let $r$ be the number of components in the graph obtained by removing the universal vertices from $G$, let $G_1, \dots, G_r$ be the components and let $\check{\mu}(G_j)= t_j$.
Proposition~\ref{p:lemmy} shows that $\displaystyle t = \sum_{j=1}^r t_j + \sum_{j=1}^ri_H(G_j) - s$. By Proposition~\ref{p:max1}, we have that $G \in \mathscr{M}_t$ if and only if $G_1 \sqcup \ldots \sqcup G_r \in \mathscr{M}_{t+s}$. Furthermore, each $G_j$ must be in $\mathscr{M}_{t_j}$, for otherwise we could add an edge to some $G_j$ without reducing $\check{\mu}(G)$, contradicting the maximality of $G$. Indeed, without loss of generality, if we add an edge $e$ to $G_1$ such that $\check{\mu}(G_1 + e) < t_1$, then \begin{align*} \check{\mu}(G + e) &= \check{\mu}(G_1 + e) + \sum_{j=2}^r t_j + \sum_{j=1}^r i_H(G_j) -s \\ &< \sum_{j=1}^r t_j + \sum_{j=1}^r i_H(G_j) -s \\ &= t \end{align*} By Proposition~\ref{p:cot}, $\displaystyle G_1 \sqcup \ldots \sqcup G_r \in \mathscr{M}_{t+s}$, where $\displaystyle t+s =\sum_{j=1}^r t_j + \sum_{j=1}^ri_H(G_j)$, if and only if each $G_j$ is either trim or complete. In other words, $G_j \in \mathscr{N}_{t_j}$ for $t_j > 0$ or $G_j \in \mathscr{M}_0$ for $t_j = 0$. \end{proof} \section{Trim maximal $t$-path traceable graphs} \label{s:family} Skupien~\cite{SkupienMNH} discovered the first family of maximal non-Hamiltonian graphs, that is, graphs in $\mathscr{M}_1$. These graphs are formed by taking the join with $K_r$ of the disjoint union of $r+1$ complete graphs. The smallest graph in $\mathscr{N}_2$ is shown in Figure~\ref{f:triangle}. Chv\'atal identified its join with $K_1$ as the smallest maximal non-Hamiltonian graph that is not 1-tough, that is, not one of the Skupien family. Jamrozik, Kalinowski and Skupien~\cite{SkupienCat} generalized this example to three different families. Family $A1$ replaces each edge $u_iv_i$ with an arbitrary complete graph containing $u_i$ and replaces the $K_3$ formed by the $u_i$ with an arbitrary complete graph. The result has four cliques, the first three disjoint from each other but each intersecting the fourth clique in a single vertex. This graph is also in $\mathscr{N}_2$ and its join with $K_1$ gives a maximal non-Hamiltonian graph.
Family $A2$ is formed by taking the join with $K_2$ of the disjoint union of a complete graph and the graph in $\mathscr{N}_2$ just described. Theorem~\ref{t:decomposition} shows that the resulting graph is in $\mathscr{M}_1$. Family $A3$ is a modification of the $A1$ family based on the graph in Figure~\ref{f:A3}, which is in $\mathscr{N}_2$. Bullock, Frick, Singleton and van Aardt~\cite{Bullock} recognized that two constructions of Zelinka~\cite{Zelinka} gave maximal non-traceable graphs, that is, elements of $\mathscr{M}_2$. Zelinka's first construction is like the Skupien family: it is formed by taking the join with $K_{r-1}$ of the disjoint union of $r+1$ complete graphs. The Zelinka Type~II family contains graphs in $\mathscr{N}_2$ that are a significant generalization of the graphs in Figures~\ref{f:triangle}~and~\ref{f:A3}. In this section we generalize this family further to get graphs in $\mathscr{N}_t$ for arbitrary $t$. Our starting point is the graph in Figure~\ref{f:whirligig}, which is in $\mathscr{N}_3$.
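For small graphs these membership claims can be verified mechanically. The following Python sketch (function and variable names are ours) checks that the graph of Figure~\ref{f:triangle} satisfies $\mu(G)=2$ and $\mu(G+e)=1$ for every $e \in E(\overline{G})$, and has no universal vertex; it uses the fact that every path covering arises from a vertex ordering split at non-adjacent consecutive pairs:

```python
from itertools import permutations, combinations

def min_path_cover(vertices, edges):
    # mu(G): minimum number of vertex-disjoint paths covering V(G).
    # Any vertex ordering, split at non-adjacent consecutive pairs,
    # yields a path cover, and every path cover arises this way.
    adj = {frozenset(e) for e in edges}
    return min(
        1 + sum(frozenset(p) not in adj for p in zip(order, order[1:]))
        for order in permutations(vertices)
    )

# Smallest graph in N_2 (Figure f:triangle): a triangle u1 u2 u3
# with a pendant vertex v_i attached to each u_i.
V = ["u1", "u2", "u3", "v1", "v2", "v3"]
E = [("u1", "u2"), ("u2", "u3"), ("u3", "u1"),
     ("u1", "v1"), ("u2", "v2"), ("u3", "v3")]

assert min_path_cover(V, E) == 2                      # mu(G) = 2: not traceable
non_edges = [e for e in combinations(V, 2)
             if frozenset(e) not in {frozenset(x) for x in E}]
assert all(min_path_cover(V, E + [ne]) == 1 for ne in non_edges)  # maximality
assert all(sum(v in e for e in E) < len(V) - 1 for v in V)        # no universal vertex
```

The brute force is only feasible for a handful of vertices, but it makes the defining properties of $\mathscr{M}_2$ and $\mathscr{N}_2$ directly checkable.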
\begin{figure} \begin{center} \begin{tikzpicture} [scale=2,auto=left,every node/.style={circle,fill=white!20}] \node[draw, circle] (n1) at (1.2,1.5) {$v_1$}; \node[draw, circle] (n2) at (3,3.8) {$v_2$}; \node[draw, circle] (n3) at (4.8,1.5) {$v_3$}; \node[draw, circle] (n4) at (2,2) {$u_1$}; \node[draw, circle] (n5) at (3,3) {$u_2$}; \node[draw, circle] (n6) at (4,2) {$u_3$}; \foreach \from/\to in {n1/n4,n4/n5,n5/n6,n6/n4,n2/n5,n3/n6} \draw (\from) -- (\to); \end{tikzpicture} \end{center} \caption{Smallest graph in $\mathscr{N}_2$} \label{f:triangle} \end{figure} \begin{figure} \begin{center} \begin{tikzpicture} [scale=2,auto=left,every node/.style={circle,fill=white!20}] \node[draw, circle] (n1) at (1.3,1.6) {$v_1$}; \node[draw, circle] (n2) at (2,2.3) {$u_1$}; \node[draw, circle] (n3) at (2,3.5) {$v_2$}; \node[draw, circle] (n4) at (3,3) {$u_2$}; \node[draw, circle] (n5) at (3,4.3) {$u_3$}; \node[draw, circle] (n6) at (4,2.3) {$u_4$}; \node[draw, circle] (n7) at (4,3.5) {$v_3$}; \node[draw, circle] (n8) at (4.7,1.6) {$v_4$}; \foreach \from/\to in {n2/n4,n2/n5,n2/n6,n4/n5,n4/n6,n5/n6, n1/n2,n3/n4,n3/n5,n7/n4,n7/n5,n8/n6} \draw (\from) -- (\to); \end{tikzpicture} \end{center} \caption{The join of this graph with $K_1$ is the smallest graph in the $A3$ family.} \label{f:A3} \end{figure} \begin{example} \label{ex:whirligig} Consider $K_m$ with $m\geq 2t-1$ and vertices $u_1, \dots, u_m$. Let $G$ be the graph containing $K_m$ along with vertices $v_1, \dots, v_{2t-1}$ and edges $u_iv_i$. The case with $t=3$ and $m=5=2t-1$ is shown in Figure~\ref{f:whirligig}. We claim $G \in \mathscr{N}_t$. One can readily check that this graph is $t$-path covered using $v_{2i-1} \sim u_{2i-1} \sim u_{2i} \sim v_{2i}$ for $i= 1, \dots, t-1$ and $v_{2t-1}\sim u_{2t-1} \sim u_{2t} \sim \cdots \sim u_m$. We check that $G$ is maximal. By the symmetry of the graph, we need only consider the addition of the edges $v_1u_m$ and $v_1u_2$.
In either case, the last and the first paths listed above may be combined into one, either \begin{align*} &v_{2t-1} \sim u_{2t-1} \sim \cdots \sim u_m \sim v_1 \sim u_1 \sim u_2 \sim v_2, \quad \text{ or } \quad \\ &v_{2t-1} \sim u_{2t-1} \sim \cdots \sim u_m \sim u_1 \sim v_1 \sim u_2 \sim v_2 \end{align*} Thus, adding an edge creates a $(t-1)$-path covered graph, proving maximality. \end{example} \begin{figure} \begin{center} \rot{270}{ \begin{tikzpicture} [scale=1.5,auto=left,every node/.style={circle,fill=white!20}] \node[draw, circle] (n1) at (2.2,3) {\rot{90}{$u_1$}}; \node[draw, circle] (n2) at (3,4) {\rot{90}{$u_2$}}; \node[draw, circle] (n3) at (4,3.7) {\rot{90}{$u_3$}}; \node[draw, circle] (n4) at (4,2.3) {\rot{90}{$u_4$}}; \node[draw, circle] (n5) at (3,2) {\rot{90}{$u_5$}}; \node[draw, circle] (n6) at (1,3) {\rot{90}{$v_1$}}; \node[draw, circle] (n7) at (3,5) {\rot{90}{$v_2$}}; \node[draw, circle] (n8) at (5,4.5) {\rot{90}{$v_3$}}; \node[draw, circle] (n9) at (5,1.5) {\rot{90}{$v_4$}}; \node[draw, circle] (n10) at (3,1) {\rot{90}{$v_5$}}; \foreach \from/\to in {n1/n2,n1/n3,n1/n4,n1/n5,n2/n4,n2/n5,n3/n4,n3/n5,n2/n3,n4/n5, n1/n6,n2/n7,n3/n8,n4/n9,n5/n10} \draw (\from) -- (\to); \end{tikzpicture} } \end{center} \caption{Whirligig in $\mathscr{N}_3$.} \label{f:whirligig} \end{figure} The next proposition shows that the previous example is the only way to have a trim maximal $t$-path covered graph with $2t-1$ degree-one vertices. We start with a technical lemma. \begin{lemma} \label{l:degone} Let $G$ be a connected graph and let $u, v_1, v_2, v_3 \in G$ with $\deg(v_i) = 1$, and $u$ adjacent to $v_1$ and $v_2$ but not $v_3$. Then $\mu(G) = \mu(G + uv_3 )$. \end{lemma} \begin{proof} Let $P_1, \dots, P_r$ be a minimal path covering of $G +uv_3$; it is enough to show that there is an $r$-path covering of $G$. If the covering doesn't include $uv_3$, then $P_1,\dots, P_r$ also give a minimal path covering of $G$, establishing the claim of the lemma.
Otherwise, suppose $uv_3$ is an edge of $P_1$. We consider two cases. Suppose $P_1$ contains the edge $uv_1$ (or similarly $uv_2$). Then $P_1$ has $v_1$ as a terminal point and one of the other paths, say $P_2$, must be a length-$0$ path containing simply $v_2$. Let $Q$ be obtained by removing $uv_1$ and $uv_3$ from $P_1$. Then $v_1\sim u\sim v_2, Q, P_3, \dots, P_r$ gives an $r$-path covering of $G$. Suppose $P_1$ contains neither $uv_1$ nor $uv_2$. Then each of $v_1$ and $v_2$ must be on a length-$0$ path in the covering, say $P_2$ and $P_3$ are these paths. Furthermore $u$ must not be a terminal point of $P_1$, for, if it were, the path could be extended to include $v_1$ or $v_2$, reducing the number of paths required to cover $G$. Removing $u$ from $P_1$ yields two paths, $Q_1, Q_2$. Then $v_1\sim u\sim v_2, Q_1, Q_2, P_4, \dots, P_r$ gives an $r$-path cover of $G$. This proves the lemma. \end{proof} \begin{proposition} Let $G\in \mathscr{N}_t$. The number of degree-one vertices in $G$ is at most $2t-1$. This occurs if and only if the $2t-1$ vertices of degree-one have distinct neighbors and removing the degree-one vertices leaves a complete graph. \end{proposition} \begin{proof} Each degree-one vertex must be a terminal point in a path covering. So any graph $G$ covered by $t$ paths can have at most $2t$ degree-one vertices. Aside from the case $t=1$ and $G=K_2$, we can see that a graph with $2t$ degree-one vertices cannot be maximal $t$-path traceable as follows. It is easy to check that a star with $2t$ leaves is not $t$-path traceable (it is also not trim). A $t$-path traceable graph with $2t$ degree-one vertices must therefore have an interior vertex $w$ that is not adjacent to one of the degree-one vertices $v$. Such a graph is not maximal because the edge $vw$ can be added, leaving $2t-1$ degree-one vertices; since these still force at least $t$ paths, the resulting graph cannot be $(t-1)$-path covered. Suppose that $G\in \mathscr{N}_t$ with $2t-1$ degree-one vertices, $v_1, \dots, v_{2t-1}$.
Lemma~\ref{l:degone} shows that no two of the $v_i$ can be adjacent to the same vertex, for that would violate maximality of $G$. So, the $v_i$ have distinct neighbors. Furthermore, all the vertices except the $v_i$ can be connected to each other, and a path covering will still require at least $t$ paths since there remain $2t-1$ degree-one vertices. This proves the necessity of the structure claimed in the proposition. Example~\ref{ex:whirligig} showed that such a graph is indeed in $\mathscr{N}_t$. \end{proof} We can now generalize the Zelinka family. \begin{construction} \label{C:whirligig} Let $U_0, U_1, \dots, U_{2t-1}$ be disjoint sets and $\displaystyle U= \bigsqcup_{i=0}^{2t-1} U_i$. Let $m_i = \abs{U_i}$ and assume that for $i>0$ the $U_i$ are non-empty, so $m_i >0$. For $i=1,\dots, 2t-1$ (but not $i=0$) and $j = 1, \dots, m_i$, let $V_{ij}$ be disjoint from each other and from $U$. Form the graph with vertex set $\displaystyle U \sqcup \Big( \bigsqcup_{i=1}^{2t-1} \big(\bigsqcup_{j=1}^{m_i} V_{ij}\big)\Big)$ and edges $uu'$ for $u,u' \in U$ and $uv$ for any $u \in U_i$ and $v \in V_{ij}$ with $i= 1,\dots, 2t-1$ and $j=1,\dots,m_i$. The cliques of this graph are $K_U$ and $K_{U_i\sqcup V_{ij}}$ for each $i= 1,\dots, 2t-1$ and $j=1,\dots,m_i$. \end{construction} The graph in Figure~\ref{f:A3} has $m_0=0$, $m_1=m_2=1$ and $m_3=2$, and the graph in Figure~\ref{f:general} indicates the general construction.
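As a concrete check of Construction~\ref{C:whirligig}, the following Python sketch (vertex names ours) builds the instance with $t=2$, $U_0=\{w\}$, $U_1=\{a\}$, $U_2=\{b,c\}$, $U_3=\{d\}$ and singleton sets $V_{ij}$, and verifies a $2$-path covering of the resulting graph:

```python
from itertools import combinations

# Small instance of Construction C:whirligig with t = 2 (so 2t-1 = 3 sets U_i):
# U_0 = {w}, U_1 = {a}, U_2 = {b, c}, U_3 = {d},
# V_{1,1} = {x}, V_{2,1} = {y1}, V_{2,2} = {y2}, V_{3,1} = {z}.
U = ["w", "a", "b", "c", "d"]
adj = {frozenset(e) for e in combinations(U, 2)}        # the clique K_U
attach = [(["a"], [["x"]]), (["b", "c"], [["y1"], ["y2"]]), (["d"], [["z"]])]
for u_set, v_sets in attach:
    for v_set in v_sets:
        adj |= {frozenset((u, v)) for u in u_set for v in v_set}  # U_i -- V_{ij} edges

def is_path(seq):
    # consecutive vertices of seq must be adjacent in the graph
    return all(frozenset(p) in adj for p in zip(seq, seq[1:]))

# A 2-path covering: x~a~c~y2~b~y1 (P_1 followed by reversed P_2) and z~d~w (P_3 ~ R).
p1 = ["x", "a", "c", "y2", "b", "y1"]
p2 = ["z", "d", "w"]
assert is_path(p1) and is_path(p2)                      # both are genuine paths
assert sorted(p1 + p2) == sorted(U + ["x", "y1", "y2", "z"])  # they partition V(W)
```

The two listed paths are exactly of the form used in the proof of the theorem below, with the clique $K_U$ supplying the connecting edges.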
\begin{figure} \begin{tikzpicture} [scale=1.6,auto=left,every node/.style={circle,fill=white!20}] \node (n1) at (1.3,3) {$U_0$}; \node (n2) at (2.3,4) {$U_1$}; \node (n3) at (2.3,2) {$U_2$}; \node (n4) at (3.5,3) {$\iddots$}; \node (n6) at (4.7,4) {$U_{2t-3}$}; \node (n7) at (4.7,2) {$U_{2t-2}$}; \node (n8) at (5.7,3) {$U_{2t-1}$}; \node (n9) at (1,5) {$V_{1,1}$}; \node (n10) at (2,5) {$\ldots$}; \node (n11) at (3,5) {$V_{1,m_1}$}; \node (n12) at (4,5) {$V_{2t-3,1}$}; \node (n13) at (5,5) {$\ldots$}; \node (n14) at (6,5) {$V_{2t-3,m_{2t-3}}$}; \node (n15) at (1,1) {$V_{2,1}$}; \node (n16) at (2,1) {$\ldots$}; \node (n17) at (3,1) {$V_{2,m_2}$}; \node (n18) at (4,1) {$V_{2t-2,1}$}; \node (n19) at (5,1) {$\ldots$}; \node (n20) at (6,1) {$V_{2t-2,m_{2t-2}}$}; \node (n21) at (7,4) {$V_{2t-1,1}$}; \node (n22) at (7,3) {$\vdots$}; \node (n23) at (7,2) {$V_{2t-1,m_{2t-1}}$}; \foreach \from/\to in {n2/n9,n2/n10,n2/n11, n3/n15,n3/n16,n3/n17, n6/n12,n6/n13,n6/n14, n7/n18,n7/n19,n7/n20, n8/n21,n8/n22,n8/n23} \draw (\from) -- (\to); \draw (1.3,3.7) to[out=-45,in=45] (1.3,2.3); \draw (1.65,4.05) to[out=-70,in=180] (2.3,3.5) to[out=0,in=-60] (2.9,4.45); \draw (1.65,1.95) to[out=70,in=180] (2.3,2.5) to[out=0,in=60] (2.9,1.55); \draw (5.7,3.7) to[out=200,in=160] (5.7,2.3); \draw (5.35,4.05) to[out=250,in=0] (4.7,3.5) to[out=180,in=240] (4.1,4.45); \draw (5.35,1.95) to[out=110,in=0] (4.7,2.5) to[out=180,in=120] (4.1,1.55); \draw (3.5,3) ellipse (2.5cm and 1.5cm); \end{tikzpicture} \caption{Generalization of the Whirligig, $W$} \label{f:general} \end{figure} \begin{theorem} The graph $W$ in Construction~\ref{C:whirligig} is a trim, maximal $t$-path traceable graph. \end{theorem} \begin{proof} We must show that $W$ is $t$-path covered and not $(t-1)$-path covered, and that the addition of any edge yields a $(t-1)$-path covered graph. The argument is analogous to the one in Example~\ref{ex:whirligig}. Let $R$ be a Hamiltonian path in $U_0$. 
For each $i=1,\dots,2t-1$ write $U_i = \{u_{i1},\dots,u_{im_i}\}$, and for $j=1,\dots,m_i$ let $Q_{ij}$ be a Hamiltonian path in $K_{V_{ij}}$. Let $P_i$ be the path \[ P_i: Q_{i1} \sim u_{i1} \sim \cdots \sim Q_{im_i} \sim u_{im_i} \] and let $\overleftarrow{P_i}$ be the reversal of $P_i$. Since there is an edge $u_{im_i}u_{jm_j}$ there is a path $P_i\sim \overleftarrow{P}_j$ for any $i\ne j\in \set{1,\dots, 2t-1}$. Therefore the graph $W$ has a $t$-path covering consisting of $P_{2i-1} \sim \overleftarrow{P}_{2i}$ for $i=1,\dots,t-1$, along with $P_{2t-1} \sim R$. We leave to the reader the argument that there is no $(t-1)$-path cover. To show $W$ is maximal we show that after adding an edge $e$, we can join two paths in the $t$-path cover above, with a bit of rearrangement. There are three types of edges to consider: the edge $e$ might join $V_{ij}$ to $U_{i'}$ for $i \ne i'$; or $V_{ij}$ to $V_{ij'}$ for $j\ne j'$; or $V_{ij}$ to $V_{i'j'}$ for $i\ne i'$. Because of the symmetry of $W$, we may assume $i=1$ and $j=1$ and that the vertex chosen from $V_{ij}$ is the initial vertex of $Q_{ij}$. Other simplifications due to symmetry will be evident in what follows. In the first case there are two subcases---determined by whether $i'=0$---and after permutation, we may consider the edge $e$ from the initial vertex of $Q_{11}$ to the terminal vertex of $R$, or to the terminal vertex of $P_{2t-1}$. We can then join two paths in the $t$-path cover: either $P_{2t-1} \sim R\stackrel{e}{\sim} P_1\sim \overleftarrow{P}_2$ or $P_2 \sim R \sim \overleftarrow{P}_1\stackrel{e}{\sim}\overleftarrow{P}_{2t-1}$. Suppose next that we join the initial vertex of $Q_{11}$ with the terminal vertex of $Q_{12}$. We then rearrange $P_1$ and join two paths in the $t$-path cover to get \[P_{2t-1}\sim R \sim u_{11} \sim \overleftarrow{Q}_{11}\stackrel{e}{\sim} \overleftarrow{Q}_{12} \sim u_{12}\sim \cdots \sim Q_{1m_1}\sim u_{1m_1}\sim \overleftarrow{P}_2. \] Finally, suppose that we join the initial vertex of $Q_{11}$ with the initial vertex of $Q_{2t-1,1}$.
Then we rearrange to $\overleftarrow{R}\sim\overleftarrow{P}_{2t-1}\stackrel{e}{\sim}P_1\sim\overleftarrow{P}_2$. \end{proof}
\section{Introduction} A crucial and long-standing problem in the theory and practice of portfolio optimization is the choice of an effective and transparent performance criterion that balances risk and return. In this paper, we propose a novel portfolio optimization criterion that aims to combine to some extent the respective strengths of the classical criteria considered in the literature. This literature originates in the theory of decision making under uncertainty, for which \citet{vonNeumann1944} proposed the expected utility approach, in which investment preferences are captured by a utility function. The shortcomings of this approach include the abstract nature of utility functions, which can make them impractical, and its omission of several practical aspects of actual decision making, as identified by \citet{Tversky1992}'s cumulative prospect theory; see for example \citet{Barberis2012}. The mean-variance framework of \citet{Markowitz1952}, which uses variance to measure risk, closely approximates the quadratic utility case. When asset returns are assumed to be normally distributed, many other risk measures have been found equivalent to variance (for example, the equivalence to the first and second-order lower partial moments has been proved by \citet*{Klebaner2017}), but the mean-variance framework greatly benefits from its simple quadratic formulation. Some may argue that variance is an inadequate measure of portfolio risk, as asset returns usually exhibit the so-called leptokurtic property, meaning that higher moments may need to be incorporated into the optimization. We refer to \citet{Lai1991} and \citet*{Konno1993} for the skewness component and \citet{Davis1990} for both skewness and kurtosis. Another approach to address the issue of non-normality of asset returns is to use a downside risk measure.
The most common downside risk measures are the lower-partial moments (e.g., semivariance, introduced in \citet{Markowitz1959}), Value at Risk (VaR, \citealt{Longerstaey1996}) and Conditional Value at Risk (CVaR, \citealt{Rockafellar2000}, a.k.a. expected shortfall). These measures can replace variance to form a mean-downside risk approach; see \citet{Harlow1991} for a mean-lower-partial moment framework, \citet{Alexander2002} for the mean-VaR framework and \citet{Agarwal2004} for the mean-CVaR framework. The last main strand of literature corresponds to target-based strategies that aim to track a prespecified investment target. A popular target-based strategy is to maximize the probability of achieving a return target, see \citet{Browne1999a} for a fixed absolute target and \citet{Browne1999b}, \citet{Pham2003}, \citet*{Gaivoronski2005} and \citet*{Morton2006} for relative benchmark targets. Alternatively, one can minimize the probability of an undesirable outcome, see for example \citet*{Hata2010}, \citet{Nagai2012} and \citet*{Milevsky2006}. Using an explicitly specified investment target makes a portfolio optimization strategy easier to understand and monitor in practice. However, choosing a suitable investment target that properly balances risk and return remains a challenging task. Building upon these classical investment criteria, we propose in this paper the so-called \textit{Skewed Target Range Strategy} (STRS), which maximizes the expected portfolio value bounded within a prespecified target range, composed of a conservative lower target representing a need for capital protection and a desired upper target corresponding to an ideal return level the investor wishes to achieve. Implicitly, the optimization can be described as maximizing the probability that the realized return lies within the targeted range, as close to the upper target as possible. There are three main motivations behind the proposed STRS.
The first motivation traces back to the primary purpose of an investment objective function, which is to carve a desirable shape for the probability distribution of returns. The STRS, seeking a desirable expected return while chopping off most of the tails of the distribution beyond the targeted range, restrains the entire return distribution. The second motivation comes from the difficulty of specifying a single return target for classical target-based strategies, which cannot simultaneously serve the pursuit of a desired investment target and downside protection. The STRS solves this dilemma by using an upper target which accounts for return-seeking preference, combined with a lower target which accounts for loss-aversion preference. Finally, performance criteria such as utility functions depending on abstract parameters with unforeseeable practical effects are unlikely to be adopted by investors. Our proposition of two explicit targets labeled in terms of returns, with intuitive purposes (capital protection for the lower target and desired investment return for the upper target), serves as a more practical investment criterion. To test the effectiveness of the proposed STRS (formulated in Section \ref{sec:objective}), we study a multi-period portfolio optimization problem with proportional transaction costs. To do so, we modify the classical Least Squares Monte Carlo (LSMC) algorithm to use a two-stage regression technique, which makes the problem of approximating the abrupt STRS objective function (equation \eqref{eq:objective}) as easy as approximating a linear function. The LSMC literature and the details of the proposed two-stage LSMC method are further discussed in Section \ref{sec:LSMC}. We show that this two-stage LSMC method is numerically more stable than the classical LSMC method for both the smooth constant relative risk aversion (CRRA) utility approach and the abrupt STRS. 
We find that an appropriate level for the lower target is the initial portfolio value, as it marginally minimizes the standard deviation and the downside risk of the terminal portfolio value. Importantly, we show that the STRS criterion does behave as expected from its design: the portfolio value is well targeted within the specified range, and the downside risk is robust with respect to the choice of the upper target. We numerically show that the STRS achieves a similar mean-variance efficient frontier while delivering a better downside risk-return trade-off when compared to the CRRA utility optimization approach. We also provide two simple extensions of the STRS, described in Section \ref{sec:Extensions}. The first extension, dubbed \textit{Flat Target Range Strategy} (FTRS), corresponds to the pure maximization of the probability of achieving a targeted range, without any further attempt to pursue a higher return. The FTRS is useful for problems where maintaining solvency is more important than seeking high returns, for example for long-term pension schemes, retirement funds and life-cycle management. The second extension, dubbed \textit{Relative Target Range Strategy} (RTRS), focuses on relative returns: it involves a return target range defined in terms of excess return over a stochastic benchmark, such as a stock market index, an interest rate or an inflation rate. All the numerical results are presented in Section \ref{sec:Numerical}. \section{Skewed Target Range Strategy\label{sec:objective}} In this section, we define the skewed target range strategy (STRS) for portfolio optimization problems and discuss potential benefits of this strategy. We consider a portfolio optimization problem with $d$ risky assets available over a finite time horizon $T$. Let $\boldsymbol{\alpha}_{t}=\{\alpha_{t}^{i}\}_{1\leq i\leq d}$ be the portfolio weight in each risky asset at time $t$, and denote by $W_{t}$ the portfolio value (or wealth).
Assume that the investor aims to maximize the expectation of some function of the terminal portfolio value $\mbox{\ensuremath{\mathbb{E}}}\left[f(W_{T})\right]$. Then, the objective function simply reads \begin{equation} \sup_{\boldsymbol{\alpha}}\mbox{\ensuremath{\mathbb{E}}}\left[f\left(W_{T}\right)\right],\label{eq:objective} \end{equation} where the investment preference is characterized by the function $f\left(\cdot\right)$ . In this paper, we propose the following parametric shape: \begin{equation} f(w)=(w-L_{{\scriptscriptstyle \!W}})\mathbbm{1}\{L_{\!{\scriptscriptstyle W}}\leq w\leq U_{{\scriptscriptstyle \!W}}\},\label{eq:f} \end{equation} where $L_{{\scriptscriptstyle \!W}}\in\mathbb{R}$ represents a conservative lower target, $U_{{\scriptscriptstyle \!W}}\in\mathbb{R}$ represents a desired upper target, and the indicator function $\mathbbm{1}\{L_{{\scriptscriptstyle \!W}}\leq w\leq U_{{\scriptscriptstyle \!W}}\}$ returns one if $L_{{\scriptscriptstyle \!W}}\leq w\leq U_{{\scriptscriptstyle \!W}}$ and returns zero otherwise. We refer to the shape \eqref{eq:f} and the corresponding objective \eqref{eq:objective} as the STRS. Throughout this paper, we normalize the portfolio value $W$ and the bounds $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$ by the initial portfolio value $W_{0}$. Indeed, the formula \eqref{eq:f} shows that $f(w;L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}})=W_{0}\times f(\frac{w}{W_{0}};\frac{L_{{\scriptscriptstyle \!W}}}{W_{0}},\frac{U_{{\scriptscriptstyle \!W}}}{W_{0}})$, so we can assume without loss of generality that $W_{0}=1$ and set the bounds $L_{{\scriptscriptstyle \!W}}$ and $U_{{\scriptscriptstyle \!W}}$ in the vicinity of $1$. Figure \ref{fig:STRS} shows an example of equation \eqref{eq:f} with $L_{{\scriptscriptstyle \!W}}=1.0$ and $U_{{\scriptscriptstyle \!W}}=1.2$. 
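In code, the shape \eqref{eq:f} with the targets used in Figure~\ref{fig:STRS} ($L_{{\scriptscriptstyle \!W}}=1.0$, $U_{{\scriptscriptstyle \!W}}=1.2$) is a one-line function; a minimal Python sketch (function name ours):

```python
def strs_payoff(w, lower=1.0, upper=1.2):
    # Skewed target range shape: f(w) = (w - L) * 1{L <= w <= U}.
    # Defaults match the example in Figure fig:STRS (L = 1.0, U = 1.2).
    return w - lower if lower <= w <= upper else 0.0

assert strs_payoff(0.90) == 0.0                 # below the lower target: zero
assert abs(strs_payoff(1.10) - 0.10) < 1e-12    # inside the range: w - L
assert strs_payoff(1.30) == 0.0                 # beyond the upper target: zero
```

The linear ramp on $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$ and the hard truncation outside it are exactly the two ingredients discussed next.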
From equation \eqref{eq:f}, one can see that the objective is to maximize the expected terminal portfolio value within the interval $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$, while the values outside this interval are penalized down to zero. This strategy implicitly combines two objectives: maximizing the expected terminal portfolio value and maximizing the probability that the terminal portfolio value lies within the chosen target range $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$. \begin{figure} \caption{Skewed target range function\label{fig:STRS}} \smallskip \centering \begin{minipage}[t]{0.45\columnwidth} \includegraphics[scale=0.55]{EnLU} \end{minipage} \end{figure} On the left side of the skewed shape in equation \eqref{eq:f}, the function is convex at the lower target $L_{{\scriptscriptstyle \!W}}$. This is consistent with the notion from \citet{Tversky1992}'s cumulative prospect theory that investors tend to be risk-seeking when losing money. By contrast, on the right side of the skewed shape, the function is discontinuous and jumps down to zero at the upper target $U_{{\scriptscriptstyle \!W}}$. This is the distinctive feature of the STRS compared to classical utility functions as well as cumulative prospect theory. In particular, forgoing the upside potential beyond the upper target $U_{{\scriptscriptstyle \!W}}$ seems to conflict with the non-satiation axiom that people prefer more to less. The following explains the importance of this upper threshold. Everything else being equal (ceteris paribus assumption), one would expect people to prefer more to less. This axiom in the context of dynamic stochastic portfolio optimization can be interpreted as follows: the downside risk being fixed (the left tail of the return distribution), investors would prefer higher upside potential (a longer right tail of the return distribution).
However, after extensive numerical experiments, we came to the conclusion that non-decreasing utility functions are unable to decouple upside potential from downside risk. Indeed, pursuing higher upside potential leads to riskier portfolio decisions, which may result in a return distribution with a large right tail (gains) as well as a large left tail (losses). As the ceteris paribus assumption does not apply in this stochastic context, one cannot rule out the existence of a satiation level. Such a level is determined by the investor's preference with respect to risk and return. As upside potential and downside risk are naturally intertwined, the proposed upper target is able to curtail downside risk by addressing its main cause, namely the pursuit of excessive upside potential. As a result, the realized returns can be well contained within the targeted range with a high degree of confidence, which in several contexts is more important than allowing for the possibility of rare windfall returns at the cost of higher downside risk. \section{Multi-Period Portfolio Optimization\label{sec:LSMC}} In this section, we consider a multi-period portfolio optimization problem, formulate it as a discrete-time dynamic programming problem, and develop a two-stage LSMC method to solve it. The LSMC algorithm, originally developed by \citet{Carriere1996}, \citet{Longstaff2001} and \citet{Tsitsiklis01} for pricing American options, has been extended to solve dynamic portfolio optimization problems by several researchers. \citet*{Brandt2005} consider a CRRA utility function and determine a semi-closed-form solution by solving the first order condition of the Taylor series expansion of the value function. \citet{Cong2016} and \citet{Cong2016b} consider a target-based mean-variance objective function and use a suboptimal strategy to perform the forward simulation of control variables, which are iteratively updated in the backward recursive programming.
Later, \citet{Cong2017} combine \citet{Jain2015}'s stochastic bundling technique with \citet{Brandt2005}'s method. \citet*{Zhang2018} consider a CRRA utility function and adopt \citet*{Kharroubi2014}'s control randomization technique for a portfolio optimization problem with switching costs including transaction costs, liquidity costs and market impact. The aforementioned works solve problems with a continuous payoff function, for which the classical LSMC method can be very effective. By contrast, highly nonlinear, abruptly changing or discontinuous payoffs can be more difficult for the LSMC algorithm to handle (\citet{Zhang2018}, \citet*{Balata18}, \citet*{Andreasson2018}). The STRS \eqref{eq:f}, with its abrupt drop at the upper bound $U_{{\scriptscriptstyle \!W}}$, is such a difficult function. In addition, as terminal wealth values outside the targeted range are truncated to zero in the value function, a direct regression on these zeros would discard the original information carried by the wealth variable. In this section, we propose a two-stage LSMC method to overcome these issues. \subsection{Dynamic programming} Denote by $R^{f}$ the cumulative return of the risk-free asset over a single period. Denote by $\mathbf{R}_{t}=\left\{ R_{t}^{i}\right\} _{1\leq i\leq d}$ the excess returns of the risky assets over the risk-free rate and denote by $\mathbf{Z}_{t}$ the vector of return predictors. The optimization problem in equation \eqref{eq:objective} can be formulated as a stochastic control problem with exogenous state variables $\mathbf{Z}_{t}$ and one endogenous state variable $W_{t}$. Let $\mathcal{A}\subseteq\mathbb{R}^{d}$ be the set of admissible portfolio weights. The value function in equation \eqref{eq:objective} can now be rewritten as \begin{eqnarray} v_{t}(z,w) & := & \sup_{\left\{ \boldsymbol{\alpha}_{\tau}\in\mathcal{A}\right\} _{t\leq\tau\leq T}}\mbox{\ensuremath{\mathbb{E}}}\left[f\left(W_{T}\right)\left|\mathbf{Z}_{t}=z,W_{t}=w\right.\right].
\end{eqnarray} Consider an equidistant discretization of the investment horizon $[0,T]$, denoted by $0=t_{0}<\cdots<t_{N}=T$. The wealth process evolves as \begin{eqnarray} W_{t_{n+1}} & = & W_{t_{n}}\left(R^{f}+\boldsymbol{\alpha}_{t_{n}}\cdot\mathbf{R}_{t_{n+1}}\right),\label{eq:wealth} \end{eqnarray} and the value function satisfies the following dynamic programming principle \begin{eqnarray} v_{t_{N}}\left(z,w\right) & = & f(w),\nonumber \\ v_{t_{n}}\left(z,w\right) & = & \sup_{\boldsymbol{\alpha}_{t_{n}}\in\mathcal{A}}\mbox{\ensuremath{\mathbb{E}}}\left[v_{t_{n+1}}\left(\mathbf{Z}_{t_{n+1}},W_{t_{n+1}}\right)\left|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w\right.\right],\label{eq:dp} \end{eqnarray} where $f(w)=(w-L_{{\scriptscriptstyle \!W}})\mathbbm{1}\{L_{\!{\scriptscriptstyle W}}\leq w\leq U_{{\scriptscriptstyle \!W}}\}$. \subsection{Classical least squares Monte Carlo } The first part of the LSMC algorithm is the forward simulation of all the stochastic state variables. Let $M$ denote the number of Monte Carlo simulations. The return predictors $\{\mathbf{Z}_{t_{n}}^{m}\}_{0\leq n\leq N}^{1\leq m\leq M}$ and the asset excess returns $\{\mathbf{R}_{t_{n}}^{m}\}_{0\leq n\leq N}^{1\leq m\leq M}$ are generated through some predetermined return dynamics. By contrast, the wealth process is an endogenous state variable depending on the realization of the portfolio weights. We follow the control randomization approach of \citet{Kharroubi2014}: we randomly generate uniform portfolio weights $\{\tilde{\boldsymbol{\alpha}}_{t_{n}}^{m}\}_{0\leq n\leq N}^{1\leq m\leq M}$, and then compute the corresponding portfolio values $\{\tilde{W}_{t_{n}}^{m}\}_{0\leq n\leq N}^{1\leq m\leq M}$ according to equation \eqref{eq:wealth}. The second part of the LSMC algorithm uses a discretization procedure. We discretize the control space as $\mathcal{A}^{\text{d}}=\{\mathbf{a}_{1},...,\mathbf{a}_{J}\}$. 
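The forward pass described above (exogenous return simulation, randomized controls, and the wealth recursion \eqref{eq:wealth}) might be sketched as follows; the i.i.d.\ Gaussian excess returns and all parameter values are purely illustrative assumptions, not the specification used in our numerical experiments:

```python
import numpy as np

# Forward pass of the LSMC algorithm: simulate exogenous excess returns, draw
# randomized controls (control randomization), and roll the wealth recursion
# W_{t_{n+1}} = W_{t_n} * (R^f + alpha_{t_n} . R_{t_{n+1}}).
# The i.i.d. normal returns and all numbers below are illustrative only.
rng = np.random.default_rng(0)
M, N, d = 10_000, 4, 2             # Monte Carlo paths, periods, risky assets
Rf = 1.01                          # one-period gross risk-free return (assumed)

R = rng.normal(0.02, 0.15, size=(N, M, d))        # excess returns R_{t_{n+1}}
alpha = rng.uniform(0.0, 1.0, size=(N, M, d))     # randomized portfolio weights

W = np.ones((N + 1, M))            # wealth paths, normalized so that W_0 = 1
for n in range(N):
    # per-path dot product alpha_{t_n} . R_{t_{n+1}}
    W[n + 1] = W[n] * (Rf + np.einsum("md,md->m", alpha[n], R[n]))

assert W.shape == (N + 1, M) and np.all(W[0] == 1.0)
```

The endogenous wealth is thus generated under uniformly random weights, which is exactly the role of the control randomization step.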
We define the continuation value function $\text{CV}{}_{t_{n}}^{j}$ as the expectation of the subsequent value function conditional on making the decision $\boldsymbol{\alpha}_{t_{n}}=\mathbf{a}_{j}\in\mathcal{A}^{\text{d}}$, i.e., \begin{eqnarray} \text{CV}{}_{t_{n}}^{j}\left(z,w\right) & := & \mathbb{E}\left[\left.v_{t_{n+1}}\left(\mathbf{Z}_{t_{n+1}},W_{t_{n+1}}\right)\right|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w,\boldsymbol{\alpha}_{t_{n}}=\mathbf{a}_{j}\right].\label{eq:cv} \end{eqnarray} Therefore, the value function can be approximated by \[ v_{t_{n}}(z,w)=\sup_{\boldsymbol{\alpha}_{t_{n}}\in\mathcal{A}}\mathbb{E}\left[\left.v_{t_{n+1}}\left(\mathbf{Z}_{t_{n+1}},W_{t_{n+1}}\right)\right|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w\right]\approx\max_{\mathbf{a}_{j}\in\mathcal{A}^{\text{d}}}\text{CV}{}_{t_{n}}^{j}\left(z,w\right). \] To compute this value function, we proceed by backward dynamic programming. At time $t_{N}$, the value function is equal to $\hat{v}_{t_{N}}(z,w)=(w-L_{{\scriptscriptstyle \!W}})\mathbbm{1}\{L_{{\scriptscriptstyle \!W}}\leq w\leq U_{{\scriptscriptstyle \!W}}\}$. At time $t_{n}$, assume that the continuation value functions $\{\hat{\text{CV}}{}_{t_{n'}}^{j}(z,w)\}_{n+1\leq n'\leq N-1}^{1\leq j\leq J}$ have been estimated. We evaluate the continuation value function at the current time $\text{CV}_{t_{n}}^{j}$ for each decision $\mathbf{a}_{j}\in\mathcal{A}^{\text{d}}$. 
We then reset the portfolio weights $\{\boldsymbol{\alpha}_{t_{n}}^{m}\}_{1\leq m\leq M}$ to $\mathbf{a}_{j}$, and recompute the endogenous wealth from $t_{n}$ to $t_{N}$: \begin{eqnarray} \hat{W}_{t_{n+1}}^{m,(n,j)} & = & \tilde{W}_{t_{n}}^{m}\left(R^{f}+\mathbf{a}_{j}\cdot\mathbf{R}_{t_{n+1}}^{m}\right)\nonumber \\ \hat{W}_{t_{n+2}}^{m,(n,j)} & = & \hat{W}_{t_{n+1}}^{m,(n,j)}\left(R^{f}+\arg\max_{\mathbf{a}_{l}\in\mathcal{A}^{\text{d}}}\left\{ \hat{\text{CV}}{}_{t_{n+1}}^{l}\left(\mathbf{Z}_{t_{n+1}}^{m},\hat{W}_{t_{n+1}}^{m,(n,j)}\right)\right\} \cdot\mathbf{R}_{t_{n+2}}^{m}\right)\nonumber \\ & \vdots\nonumber \\ \hat{W}_{t_{N}}^{m,(n,j)} & = & \hat{W}_{t_{N-1}}^{m,(n,j)}\left(R^{f}+\arg\max_{\mathbf{a}_{l}\in\mathcal{A}^{\text{d}}}\left\{ \hat{\text{CV}}{}_{t_{N-1}}^{l}\left(\mathbf{Z}_{t_{N-1}}^{m},\hat{W}_{t_{N-1}}^{m,(n,j)}\right)\right\} \cdot\mathbf{R}_{t_{N}}^{m}\right).\label{eq:wealth_hat-evolution} \end{eqnarray} where $\hat{W}_{t_{n'}}^{m,(n,j)}:=\left.\hat{W}_{t_{n'}}^{m}\right|_{W_{t_{n}}^{m}=\tilde{W}_{t_{n}}^{m},\boldsymbol{\alpha}_{t_{n}}=\mathbf{a}_{j}}$, $n'=n,\ldots,N$ is the recomputed wealth from $t_{n}$ to $t_{N}$, using the portfolio weights $\mathbf{a}_{j}$ at time $t_{n}$ and the estimated optimal portfolio weights at times $t_{n+1},\ldots,t_{N-1}$. To approximate the continuation value function $\text{CV}{}_{t_{n}}^{j}(z,w)$, the classical LSMC algorithm regresses the payoffs $\{f(\hat{W}_{t_{N}}^{m,(n,j)})\}_{1\leq m\leq M}$ on $\{\psi_{k}(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m})\}_{1\leq m\leq M}^{1\leq k\leq K}$, where $\{\psi_{k}(z,w)\}_{1\leq k\leq K}$ is the vector of basis functions of the state variables. However, the major difficulty here lies in the abrupt upper bound $U_{{\scriptscriptstyle \!W}}$, which can cause large numerical errors in the regression according to our numerical exploration. 
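To make the difficulty concrete, here is a minimal sketch (with illustrative names) of this direct regression step: the payoffs $f(\hat{W}_{t_{N}}^{m,(n,j)})$ are projected by ordinary least squares onto a second-order polynomial basis of the time-$t_{n}$ state variables, the basis choice used in our numerical experiments.

```python
import numpy as np

def poly_basis(Z, W):
    """Second-order polynomial basis {psi_k(z, w)}: constant, linear and
    quadratic terms of the stacked states (Z: (M, p), W: (M,))."""
    X = np.column_stack([Z, W])
    cols = [np.ones(len(X))]
    for i in range(X.shape[1]):
        cols.append(X[:, i])
        for j in range(i, X.shape[1]):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

def direct_lsmc_regression(Z_n, W_n, W_T, payoff):
    """Classical LSMC step: regress payoff(W_T) on basis functions of
    the time-t_n states.  With a discontinuous payoff such as the STRS,
    it is precisely this fit where large errors can arise."""
    Psi = poly_basis(Z_n, W_n)
    beta, *_ = np.linalg.lstsq(Psi, payoff(W_T), rcond=None)
    return beta, Psi @ beta  # coefficients and fitted continuation values
```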
As $f$ censors the values of $\hat{W}_{t_{N}}^{m,(n,j)}$ outside the targeted range $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$, our regression problem looks similar to a censored regression problem, for which a common estimation approach is maximum likelihood estimation (MLE). However, the main difference between our problem and a censored regression problem is that we have access to both the censored samples $\{f(\hat{W}_{t_{N}}^{m,(n,j)})\}_{1\leq m\leq M}$ and the uncensored samples $\{\hat{W}_{t_{N}}^{m,(n,j)}\}_{1\leq m\leq M}$. Thus, MLE alone would ignore the information contained in the uncensored values $\hat{W}_{t_{N}}^{m,(n,j)}$, which are also observable in this estimation problem. The availability of this extra piece of information motivates us to propose a two-stage regression that takes advantage of it. We now describe this technique in detail. \subsection{Two-stage least squares Monte Carlo \label{subsec:TLSMC}} This two-stage regression works as follows: \begin{enumerate} \item Instead of regressing the payoffs $\{f(\hat{W}_{t_{N}}^{m,(n,j)})\}_{1\leq m\leq M}$, we regress the wealth $\{\hat{W}_{t_{N}}^{m,(n,j)}\}_{1\leq m\leq M}$ on $\{\psi_{k}(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m})\}_{1\leq m\leq M}^{1\leq k\leq K}$ to obtain \begin{eqnarray} \left\{ \hat{\beta}_{k,t_{n}}^{j}\right\} {}_{1\leq k\leq K} & = & {\displaystyle \arg\min_{\beta\in\mathbb{R}^{K}}}\sum_{m=1}^{M}\left(\sum_{k=1}^{K}\beta_{k}\psi_{k}\left(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m}\right)-\hat{W}_{t_{N}}^{m,(n,j)}\right)^{2},\nonumber \\ \hat{\sigma}_{t_{n}}^{j} & = & \sqrt{\frac{1}{M-K}\sum_{m=1}^{M}\left(\hat{W}_{t_{N}}^{m,(n,j)}-\sum_{k=1}^{K}\hat{\beta}_{k,t_{n}}^{j}\psi_{k}\left(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m}\right)\right)^{2}}.\label{eq:ls} \end{eqnarray} As a result, the terminal wealth can be modeled as \begin{eqnarray} \hat{W}_{t_{N}}^{(n,j)}=\hat{\mu}_{t_{n}}^{j}\left(z,w\right)+\hat{\sigma}_{t_{n}}^{j}\varepsilon, & &
\hat{\mu}_{t_{n}}^{j}\left(z,w\right):=\sum_{k=1}^{K}\hat{\beta}_{k,t_{n}}^{j}\psi_{k}\left(z,w\right),\label{eq:linear-wealth} \end{eqnarray} where $\varepsilon$ is the regression residual, which for demonstrative purposes we assume Gaussian. (Remark that an assumption for the distribution of the residuals is also required by MLE.) Let $\phi(x)=\frac{1}{\sqrt{2\pi}}\exp(-\frac{x^{2}}{2})$ represent the standard normal probability density function, and $\Phi(x)=\int_{-\infty}^{x}\phi(t)\,dt$ represent the standard normal cumulative distribution function. \item Plug equation \eqref{eq:linear-wealth} into the continuation value formula \eqref{eq:cv} to obtain a closed-form estimate. By combining equations \eqref{eq:cv}, \eqref{eq:wealth_hat-evolution}, \eqref{eq:ls} and \eqref{eq:linear-wealth}, we obtain the following closed-form estimate of the continuation value function for each $\mathbf{a}_{j}\in\mathcal{A}^{\text{d}}$ at time $t_{n}$: \begin{eqnarray} \hat{\text{CV}}{}_{t_{n}}^{j}\left(z,w\right) & = & \mathbb{E}\left[\left.\left(W_{t_{N}}-L_{\!{\scriptscriptstyle W}}\right)\mathbbm{1}\left\{ L_{{\scriptscriptstyle \!W}}\leq W_{t_{N}}\leq U_{{\scriptscriptstyle \!W}}\right\} \right|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w,\boldsymbol{\alpha}_{t_{n}}=\mathbf{a}_{j}\right]\nonumber \\ & = & \mathbb{E}_{\varepsilon}\left[\left(\hat{\mu}_{t_{n}}^{j}\left(z,w\right)+\hat{\sigma}_{t_{n}}^{j}\varepsilon-L_{\!{\scriptscriptstyle W}}\right)\times\mathbbm{1}\left\{ L_{{\scriptscriptstyle \!W}}\leq\hat{\mu}_{t_{n}}^{j}\left(z,w\right)+\hat{\sigma}_{t_{n}}^{j}\varepsilon\leq U_{{\scriptscriptstyle \!W}}\right\} \right]\nonumber \\ & = & \left(\hat{\mu}_{t_{n}}^{j}\left(z,w\right)-L_{\!{\scriptscriptstyle W}}\right)\mathbb{E}_{\varepsilon}\left[\mathbbm{1}\left\{ \frac{L_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\leq\varepsilon\leq\frac{U_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right\}
\right]\nonumber \\ & & +\hat{\sigma}_{t_{n}}^{j}\mathbb{E}_{\varepsilon}\left[\varepsilon\mathbbm{1}\left\{ \frac{L_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\leq\varepsilon\leq\frac{U_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right\} \right]\nonumber \\ & = & \left(\hat{\mu}_{t_{n}}^{j}\left(z,w\right)-L_{\!{\scriptscriptstyle W}}\right)\left(\Phi\left(\frac{U_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right)-\Phi\left(\frac{L_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right)\right)\nonumber \\ & & -\hat{\sigma}_{t_{n}}^{j}\left(\phi\left(\frac{U_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right)-\phi\left(\frac{L_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right)\right),\label{eq:cv-STRS} \end{eqnarray} where the last equality is obtained by direct integration. \item The mappings $\hat{\boldsymbol{\alpha}}_{t_{n}}:(z,w)\mapsto\hat{\boldsymbol{\alpha}}_{t_{n}}(z,w)$ and $\hat{v}_{t_{n}}:(z,w)\mapsto\hat{v}_{t_{n}}(z,w)$ are estimated by \begin{eqnarray} \hat{\boldsymbol{\alpha}}_{t_{n}}\left(z,w\right)=\arg\max_{\mathbf{a}_{j}\in\mathcal{A}^{\text{d}}}\hat{\text{CV}}{}_{t_{n}}^{j}\left(z,w\right) & \text{ and } & \hat{v}_{t_{n}}(z,w)=\max_{\mathbf{a}_{j}\in\mathcal{A}^{\text{d}}}\hat{\text{CV}}{}_{t_{n}}^{j}\left(z,w\right).\label{eq:opt} \end{eqnarray} \end{enumerate} In summary, thanks to the censored linear shape of the skewed target range function in equation \eqref{eq:f}, the conditional expectations in the dynamic programming equations \eqref{eq:dp} can be estimated by the closed-form formula \eqref{eq:cv-STRS}. 
Due to the linearity of the regressand $\hat{W}_{t_{N}}^{m,(n,j)}$ in equation \eqref{eq:ls}, this two-stage regression is much more robust and stable than a direct regression of $f(\hat{W}_{t_{N}}^{m,(n,j)})$. Subsection \ref{subsec:CRRA-utility} describes a similar closed-form conditional value for the CRRA utility approach, and Subsection \ref{subsec:Model-validation} illustrates the numerical improvements provided by this two-stage LSMC method. More generally, the approach proposed here (linear approximation in \eqref{eq:linear-wealth} combined with the decensoring corrections in \eqref{eq:cv-STRS}) can be adapted to situations in which the residuals are non-Gaussian: this would simply modify the correction terms in \eqref{eq:cv-STRS}. There is no restriction on the choice of the residual distribution, nor on the estimation methods (empirical distribution, kernel estimation, mixture normal, etc.). Nevertheless, it is reasonable to assume normality of residuals for low-frequency trading, such as the monthly returns with monthly rebalancing considered in our numerical experiments in Section \ref{sec:Numerical}. In addition, the properties of the wealth distribution can be well captured by regressing $\{\hat{W}_{t_{N}}^{m,(n,j)}\}_{1\leq m\leq M}$ on basis functions of $\{\tilde{W}_{t_{n}}^{m}\}_{1\leq m\leq M}$, yielding regression residuals close to normal; our numerical experiments confirm that the residuals are indeed very close to normal. For these reasons and for demonstration purposes, we henceforth assume normality of residuals and focus on the analysis of the effects of the new investment objective \eqref{eq:f}. \subsection{State-dependent standard deviation\label{subsec:State-dependent-SD}} An important assumption made in the previous subsection is that $\hat{\sigma}_{t_{n}}^{j}$ only depends on the portfolio decision $\mathbf{a}_{j}$, but not on the state variables $(\mathbf{Z}_{t_{n}},W_{t_{n}})$.
This subsection describes how to improve the standard deviation estimate to incorporate state variables. Similar to the approximation of $\hat{\mu}_{t_{n}}^{j}(z,w)$, the state-dependent standard deviation $\hat{\sigma}_{t_{n}}^{j}(z,w)$ can be approximated by the exponential of a linear combination of basis functions of the state variables, $\hat{\sigma}_{t_{n}}^{j}(z,w)=\exp(\sum_{k=1}^{K'}\hat{\eta}_{k,t_{n}}^{j}\psi_{k}\left(z,w\right))$. The purpose of the exponential transform is to rule out negative standard deviation estimates. Then, the two-stage regression becomes \begin{eqnarray*} \hat{W}_{t_{N}}^{(n,j)} & = & \hat{\mu}_{t_{n}}^{j}\left(z,w\right)+\varepsilon,\\ \varepsilon & \sim & \mathcal{N}\left(0,\left(\hat{\sigma}_{t_{n}}^{j}\left(z,w\right)\right)^{2}\right),\\ \hat{\mu}_{t_{n}}^{j}\left(z,w\right) & = & \sum_{k=1}^{K}\hat{\beta}_{k,t_{n}}^{j}\psi_{k}\left(z,w\right),\\ \hat{\sigma}_{t_{n}}^{j}\left(z,w\right) & = & \exp\left(\sum_{k=1}^{K'}\hat{\eta}_{k,t_{n}}^{j}\psi_{k}\left(z,w\right)\right). \end{eqnarray*} Note that a standard least squares regression cannot be used to estimate an unobservable quantity such as the standard deviation. Instead, we use MLE.
We first perform a least squares regression to approximate the mean $\hat{\mu}_{t_{n}}^{j}(z,w)$, and then approximate the logarithmic standard deviation $\log\hat{\sigma}_{t_{n}}^{j}(z,w)$ by maximizing the following log-likelihood function: \[ \mathcal{L}\left(\eta\left|\mathbf{Z}_{t_{n}},\tilde{W}_{t_{n}},\hat{W}_{t_{N}}^{(n,j)}\right.\right)=\sum_{m=1}^{M}\left\{ -\sum_{k=1}^{K'}\eta_{k,t_{n}}^{j}\psi_{k}\left(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m}\right)-\frac{\left(\hat{\varepsilon}^{m}\right)^{2}}{2}\exp\left(-2\sum_{k=1}^{K'}\eta_{k,t_{n}}^{j}\psi_{k}\left(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m}\right)\right)\right\} , \] where \begin{eqnarray*} \hat{\varepsilon}^{m} & = & \hat{W}_{t_{N}}^{m,(n,j)}-\sum_{k=1}^{K}\hat{\beta}_{k,t_{n}}^{j}\psi_{k}\left(\mathbf{Z}_{t_{n}}^{m},\tilde{W}_{t_{n}}^{m}\right). \end{eqnarray*} We use the Broyden\textendash Fletcher\textendash Goldfarb\textendash Shanno algorithm to perform the maximization of this log-likelihood function. In Subsection \ref{subsec:Model-validation}, we compare the results obtained with and without state-dependency in the standard deviation estimate. \subsection{Upper target as stop-profit\label{subsec:stop}} As discussed in Section \ref{sec:objective}, the main purpose of the upper target $U_{{\scriptscriptstyle \!W}}$ in the performance measure is to reduce downside risk. However, in multi-period optimization, a paradox might occur when the realized wealth overshoots the upper target: by default, the portfolio optimizer might tell the fund manager to pick the assets most likely to fall. It is trivial to see that, when $W_{t}\geq U_{{\scriptscriptstyle \!W}}R_{f}^{-(T-t)}$, one can outperform the upper target for certain by henceforth investing $U_{{\scriptscriptstyle \!W}}R_{f}^{-(T-t)}$ amount of wealth into the risk-free asset and taking out the balance amount $W_{t}-U_{{\scriptscriptstyle \!W}}R_{f}^{-(T-t)}$ from the problem. 
To implement such a correction, two approaches are possible: \begin{enumerate} \item One can replace $T$ by $\min\{T,\tau\}$ in the value function in equation \eqref{eq:objective}, where $\tau$ is the first (stopping) time such that $W_{\tau}\geq U_{{\scriptscriptstyle \!W}}R_{f}^{-(T-\tau)}$. At time $\tau$ (if it occurs before $T$), the dynamic optimization stops: the amount $U_{{\scriptscriptstyle \!W}}R_{f}^{-(T-\tau)}$ is invested in the risk-free asset, and the balance amount $W_{\tau}-U_{{\scriptscriptstyle \!W}}R_{f}^{-(T-\tau)}$ is taken out. \item Alternatively, one can add an extra dynamic control to the problem: dynamic withdrawal/consumption, see \citet*{Dang2017} for example. \end{enumerate} For simplicity, we use the first approach in this paper. Based on our numerical experiments, we find that imposing this stop-profit rule does not significantly affect the terminal wealth distribution, as usually only a very small portion of wealth realizations overshoot the upper bound. For example, we show in the numerical section that about 1\% of the realizations overshoot the upper bound for $[L_{\!{\scriptscriptstyle W}}=1.0,U_{{\scriptscriptstyle \!W}}=1.1]$, and virtually 0\% for $[L_{\!{\scriptscriptstyle W}}=1.0,U_{{\scriptscriptstyle \!W}}=1.2]$. \section{Extensions\label{sec:Extensions}} This section adapts the two-stage LSMC method to alternative investment objectives. We first describe how to use the two-stage LSMC method to deal with the CRRA utility approach, then we adapt the formulation of the STRS to the Flat Target Range Strategy (FTRS) which purely maximizes the probability of achieving a prespecified target range without further attempts to rally for profits, and to target range strategies based on a stochastic benchmark, for which the absolute fixed target range is replaced by a relative target range. 
\subsection{CRRA utility\label{subsec:CRRA-utility}} In the classical LSMC approach, a conditional expected utility of the type $\mathbb{E}[\mathcal{U}(W_{T})|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w]$ would be approximated by $\beta\cdot\psi(z,w)$, which may lead to large numerical errors when the utility function $\mathcal{U}$ is highly non-linear, see \citet{vanBinsbergen2007}, \citet{Garlappi2009}, \citet{Denault2017}, \citet{Zhang2018} and \citet{Andreasson2018}. The proposed two-stage regression avoids this non-linearity problem and greatly improves the stability of the LSMC method. In this subsection, we derive the two-stage continuation value estimates for the CRRA utility approach. These estimates involve the following special functions: \begin{itemize} \item Gamma function: \[ \Gamma\left(z\right)=\int_{0}^{\infty}t^{z-1}\exp\left(-t\right)\text{d}t \] \item Rising factorial: \[ z^{(n)}=\frac{\Gamma\left(z+n\right)}{\Gamma\left(z\right)} \] \item Confluent hypergeometric function of the first kind: \begin{eqnarray*} _{1}F_{1}\left(a,b,z\right) & = & \sum_{n=0}^{\infty}\frac{a^{(n)}}{b^{(n)}}\frac{z^{n}}{n!} \end{eqnarray*} \item Confluent hypergeometric function of the second kind: \begin{eqnarray*} \Psi\left(a,b,z\right) & = & \frac{\Gamma\left(1-b\right)}{\Gamma\left(a-b+1\right)}{}_{1}F_{1}\left(a,b,z\right)+\frac{\Gamma\left(b-1\right)}{\Gamma\left(a\right)}z^{1-b}{}_{1}F_{1}\left(a-b+1,2-b,z\right) \end{eqnarray*} \end{itemize} Assume that the conditional mean of the terminal wealth $\hat{\mu}_{t_{n}}^{j}(z,w)$ and the standard deviation $\hat{\sigma}_{t_{n}}^{j}$ have been estimated according to equations \eqref{eq:ls} and \eqref{eq:linear-wealth}. 
Then, using the general formula for the real moments of a Gaussian distribution (\citet{Winkelbauer2014}), the continuation value function in the CRRA utility approach is given by \begin{eqnarray} \hat{\text{CV}}_{t_{n}}^{j}\left(z,w\right) & = & \mathbb{E}\left[\left.\frac{\left.\hat{W}_{t_{N}}\right.^{1-\gamma}}{1-\gamma}\right|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w,\boldsymbol{\alpha}_{t_{n}}=\mathbf{a}_{j}\right]\nonumber \\ & = & \frac{\left(\hat{\sigma}_{t_{n}}^{j}\right)^{1-\gamma}}{1-\gamma}\cdot\left(-i\sqrt{2}\right)^{1-\gamma}\cdot\Psi\left(-\frac{1-\gamma}{2},\frac{1}{2},-\frac{1}{2}\left(\frac{\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right)^{2}\right).\label{eq:cv-CRRA} \end{eqnarray} We use this closed-form formula for the numerical comparisons in Subsection \ref{subsec:Model-validation}. \subsection{Flat target range strategy \label{subsec:prob}} The return distribution produced by the STRS \eqref{eq:f} is skewed towards the upper return target. Yet, there exist other types of portfolio optimization problems (such as life-cycle and insurance-related investments) for which the ability to remain solvent prevails over the appetite for high expected return. For such problems, one can adjust the skewed target range shape \eqref{eq:f} to a flat target range shape given by \begin{equation} f(w)=\mathbbm{1}\left\{ L_{{\scriptscriptstyle \!W}}\leq w\leq U_{{\scriptscriptstyle \!W}}\right\} .\label{eq:f_uniform} \end{equation} Figure \ref{fig:EnC} illustrates equation \eqref{eq:f_uniform} with $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]=[1.0,1.2]$.
Then the portfolio optimization problem becomes \begin{eqnarray} v_{t}(z,w) & = & \sup_{\left\{ \boldsymbol{\alpha}_{\tau}\in\mathcal{A}\right\} _{t\leq\tau\leq T}}\mbox{\ensuremath{\mathbb{E}}}\left[\mathbbm{1}\left\{ L_{{\scriptscriptstyle \!W}}\leq W_{T}\leq U_{{\scriptscriptstyle \!W}}\right\} \left|\mathbf{Z}_{t}=z,W_{t}=w\right.\right]\nonumber \\ & = & \sup_{\left\{ \boldsymbol{\alpha}_{\tau}\in\mathcal{A}\right\} _{t\leq\tau\leq T}}\mathbb{P}\left[L_{{\scriptscriptstyle \!W}}\leq W_{T}\leq U_{{\scriptscriptstyle \!W}}\left|\mathbf{Z}_{t}=z,W_{t}=w\right.\right],\label{eq:prob-max} \end{eqnarray} which is a pure probability-maximizing strategy. The conservative FTRS can be deemed more flexible than the classical Value-at-Risk (VaR) minimization approach: when $U_{{\scriptscriptstyle \!W}}=+\infty$, the FTRS \eqref{eq:prob-max} and VaR minimization achieve comparable investment outcomes, the difference being a fixed, absolute cut-off level for the former and an implicit, relative cut-off level for the latter. In particular, the FTRS minimizes the probability of falling below a given loss level, while the VaR procedure minimizes a particular loss quantile. When $U_{{\scriptscriptstyle \!W}}$ is finite, the FTRS provides greater flexibility for investors to devise their risk preferences, as the lower return target $L_{{\scriptscriptstyle \!W}}$ in such circumstances is an explicit input from the investor, and the option to fix an upper target $U_{{\scriptscriptstyle \!W}}$ broadens the range of possible risk profiles.
\begin{figure}[H] \caption{Flat Target Range Function\label{fig:EnC}} \smallskip \centering{}\begin{minipage}[t]{0.45\columnwidth}\includegraphics[scale=0.55]{EnCU}\end{minipage} \end{figure} Assuming that the conditional mean of the terminal wealth $\hat{\mu}_{t_{n}}^{j}(z,w)$ and the standard deviation $\hat{\sigma}_{t_{n}}^{j}$ have been estimated according to equations \eqref{eq:ls} and \eqref{eq:linear-wealth}, the continuation value function is simply given by \begin{eqnarray} \hat{\text{CV}}{}_{t_{n}}^{j}\left(z,w\right) & = & \mathbb{P}\left[\left.L_{{\scriptscriptstyle \!W}}\leq W_{t_{N}}\leq U_{{\scriptscriptstyle \!W}}\right|\mathbf{Z}_{t_{n}}=z,W_{t_{n}}=w,\boldsymbol{\alpha}_{t_{n}}=\mathbf{a}_{j}\right]\nonumber \\ & = & \mathbb{P}_{\varepsilon}\left[L_{{\scriptscriptstyle \!W}}\leq\hat{\mu}_{t_{n}}^{j}\left(z,w\right)+\hat{\sigma}_{t_{n}}^{j}\varepsilon\leq U_{{\scriptscriptstyle \!W}}\right]\nonumber \\ & = & \Phi\left(\frac{U_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right)-\Phi\left(\frac{L_{{\scriptscriptstyle \!W}}-\hat{\mu}_{t_{n}}^{j}\left(z,w\right)}{\hat{\sigma}_{t_{n}}^{j}}\right). \end{eqnarray} \subsection{Target range over a stochastic benchmark\label{subsec:relative}} It is also possible to define the return thresholds $L_{{\scriptscriptstyle \!W}}$ and $U_{{\scriptscriptstyle \!W}}$ relative to a stochastic benchmark, be it a stock market index, an inflation rate, an exchange rate or an interest rate. We refer to \citet{Franks1992}, \citet{Browne1999a}, \citet{Brogan2005} and \citet{Gaivoronski2005} for classical investment strategies that aim to outperform a stochastic benchmark. Denote by $B$ the stochastic benchmark of interest, and define the relative excess wealth as $W-B$.
We can then modify the target range function as: \begin{equation} f_{B}(w,b):=(w-b)\mathbbm{1}\{L_{{\scriptscriptstyle \!W}}\leq w-b\leq U_{{\scriptscriptstyle \!W}}\}\,,\label{eq:f_benchmark} \end{equation} for STRS, and \begin{equation} f_{B}(w,b):=\mathbbm{1}\{L_{{\scriptscriptstyle \!W}}\leq w-b\leq U_{{\scriptscriptstyle \!W}}\}\,,\label{eq:f_flat_benchmark} \end{equation} for FTRS. The stochastic benchmark $B$ can be simply modeled as one additional exogenous state variable, so that this new problem can be solved using the same approach developed in Section \ref{sec:LSMC}. \section{Numerical Experiments\label{sec:Numerical}} In this section, we test the skewed target range strategy (STRS), and illustrate how it can achieve the investor's range objective. Table \ref{tab:asset-class} summarizes the asset classes and the exogenous state variables used for our numerical experiments. We consider a portfolio invested in five assets: risk-free cash, U.S. bonds (AGG), U.S. shares (SPY), international shares (IFA) and emerging market shares (EEM), the other assets listed in Table \ref{tab:asset-class} being used as return predictors. \begin{table}[H] \caption{\label{tab:asset-class}Risky assets and return predictors} \smallskip \begin{singlespace} \centering{ \begin{tabular}{lllll} {\footnotesize{}Assets } & & {\footnotesize{}Underlying} & & {\footnotesize{}Data source}\tabularnewline \hline {\footnotesize{}U.S. Bonds } & & {\footnotesize{}AGG (ETF) } & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}U.S. 
Shares } & & {\footnotesize{}SPY (ETF) } & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}International Shares } & & {\footnotesize{}IFA (ETF) } & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}Emerging Market Shares } & & {\footnotesize{}EEM (ETF) } & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}Japanese shares} & & {\footnotesize{}NIKKEI225} & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}U.K. shares} & & {\footnotesize{}FTSE100} & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}Australian shares} & & {\footnotesize{}ASX200} & & {\footnotesize{}Yahoo Finance}\tabularnewline {\footnotesize{}Gold } & & {\footnotesize{}Spot Price} & & {\footnotesize{}World Gold Council}\tabularnewline {\footnotesize{}Crude Oil } & & {\footnotesize{}Spot Price} & & {\footnotesize{}U.S. Energy Info. Admin.}\tabularnewline {\footnotesize{}U.S. Dollar } & & {\footnotesize{}USD Index } & & {\footnotesize{}Federal Reserve}\tabularnewline {\footnotesize{}Japanese Yen} & & {\footnotesize{}JPYUSD} & & {\footnotesize{}Federal Reserve}\tabularnewline {\footnotesize{}Euro} & & {\footnotesize{}USDEUR} & & {\footnotesize{}Federal Reserve}\tabularnewline {\footnotesize{}Australian Dollar} & & {\footnotesize{}USDAUD} & & {\footnotesize{}Federal Reserve}\tabularnewline \hline \end{tabular} \end{singlespace} \end{table} The annual interest rate on the cash component is set to be $2\%$. We assume $0.1\%$ proportional transaction costs and we refer to \citet{Zhang2018} on how to deal with switching costs in the LSMC algorithm with endogenous variables. A first-order vector autoregression model is calibrated to the monthly log-returns of the assets listed in Table \ref{tab:asset-class} from September 2003 to March 2016. By bootstrapping the residuals, 10,000 simulation paths are generated for one year with monthly time steps. 
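The return-generation step — an OLS fit of a first-order vector autoregression followed by a residual bootstrap — can be sketched as follows. All names are illustrative and the input is a generic log-return matrix; the actual study uses the thirteen monthly series of Table \ref{tab:asset-class}.

```python
import numpy as np

def fit_var1(X):
    """OLS fit of a first-order vector autoregression
    X_t = c + A X_{t-1} + e_t  on a (T, d) matrix of log-returns."""
    Y, Xlag = X[1:], X[:-1]
    Z = np.column_stack([np.ones(len(Xlag)), Xlag])
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    c, A = B[0], B[1:].T
    resid = Y - Z @ B  # historical residuals, resampled below
    return c, A, resid

def bootstrap_paths(c, A, resid, x0, n_steps, n_paths, rng=None):
    """Simulate VAR(1) paths by resampling historical residuals with
    replacement (a residual bootstrap)."""
    rng = np.random.default_rng(rng)
    d = len(c)
    paths = np.empty((n_paths, n_steps + 1, d))
    paths[:, 0] = x0
    for t in range(n_steps):
        eps = resid[rng.integers(0, len(resid), size=n_paths)]
        paths[:, t + 1] = c + paths[:, t] @ A.T + eps
    return paths
```

Resampling the joint residual vectors (rather than each component independently) preserves the cross-sectional dependence of the fitted model's innovations.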
The two-stage regression method approximates the linear wealth variable $W_{T}$ rather than a concave utility $\mathcal{U}(W_{T})$; as a result, a sample of 10,000 paths can be deemed sufficient to reach numerical stability, as reported in \citet{vanBinsbergen2007} and \citet{Zhang2018}. For the same reason, we use a simple second-order multivariate polynomial as the basis functions for the linear least squares regressions in the algorithm. For simplicity, all the reported distributions are simulated in-sample, which might in theory make the estimation upward-biased. In the numerical experiments, we use a discrete control grid with a mesh size of 0.2, and we do not allow short-selling or borrowing. Apart from Subsection \ref{subsec:Model-validation} where a state-dependent standard deviation is tested, the state-independent standard deviation is used for all the other numerical experiments. The program is coded in Python 3.4.3, and it takes approximately two hours on a 2.2 GHz Intel Core i7 CPU to complete the computation for $M=10,000$ paths, 12 time steps, 13 state variables, a second-order polynomial basis, and a control mesh of 0.2 for a five-dimensional portfolio. \subsection{Wealth distribution} Figure \ref{fig:target} provides examples of the estimated distribution of the terminal portfolio value when using the STRS. We recall that the portfolio value $W$ and the bounds $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$ are scaled by the initial wealth, so that without loss of generality we assume $W_{0}=1.00$. The lower target $L_{{\scriptscriptstyle \!W}}$ is set to the initial wealth level $1.00$, a natural choice representing the preference of investors for capital protection. Four different upper targets $U_{{\scriptscriptstyle \!W}}$ are tested: $1.05$, $1.10$, $1.20$ and $1.30$.
\begin{figure}[H] \caption{Terminal wealth distribution using STRS\label{fig:target}} \medskip \begin{centering} \begin{minipage}[t]{0.4\columnwidth}\includegraphics[scale=0.55]{lb=1_ub=1\lyxdot 05}\end{minipage}\qquad{}\begin{minipage}[t]{0.4\columnwidth}\includegraphics[scale=0.55]{lb=1_ub=1\lyxdot 1}\end{minipage} \par\end{centering} \centering{}\begin{minipage}[t]{0.4\columnwidth}\includegraphics[scale=0.55]{lb=1_ub=1\lyxdot 2}\end{minipage}\qquad{}\begin{minipage}[t]{0.4\columnwidth}\includegraphics[scale=0.55]{lb=1_ub=1\lyxdot 3}\end{minipage} \end{figure} Several comments can be made about the shape of the terminal wealth distribution produced by the STRS in Figure \ref{fig:target}. The most striking observation is that the STRS does confine most of the wealth realizations within the predefined target range, and for the low upper target levels $U_{{\scriptscriptstyle \!W}}=1.05$ and $U_{{\scriptscriptstyle \!W}}=1.10$, the wealth distributions mimic to some extent the shape of the skewed target range function \eqref{eq:f}, making downside risk negligible. This suggests that the two-stage LSMC algorithm is indeed capable of properly handling an abrupt, discontinuous payoff function. There are some wealth realizations lying above the upper bound, which, in spite of the first correction described in Subsection \ref{subsec:stop}, may occur due to the discrete-time nature of monthly rebalancing (a large upward jump can occur during a single month, after which the risky investment is immediately stopped as described in Subsection \ref{subsec:stop}).
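For completeness, the stop-profit rule of Subsection \ref{subsec:stop} reduces to a one-line test applied at every rebalancing date; a sketch with illustrative names, where `R_f` is the per-period gross risk-free return:

```python
def stop_profit_triggered(W_t, U_W, R_f, periods_left):
    """True once the current wealth can fund the upper target risk-free,
    i.e. W_t >= U_W * R_f**(-periods_left).  From then on, the amount
    U_W * R_f**(-periods_left) is invested in the risk-free asset and
    the surplus is taken out of the problem."""
    return W_t >= U_W * R_f ** (-periods_left)
```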
\begin{figure}[H] \begin{centering} \caption{Time evolution of wealth distribution using STRS\label{fig:evolution}} \par\end{centering} \smallskip \begin{centering} \hspace{-2em}\begin{minipage}[t]{0.4\textwidth}\includegraphics[scale=0.14]{evolution_wealth_L=1_U=1\lyxdot 1}\end{minipage}\hspace{3em}\begin{minipage}[t]{0.4\textwidth}\includegraphics[scale=0.14]{evolution_wealth_L=1_U=1\lyxdot 2}\end{minipage} \par\end{centering} \centering{}\hspace{-2em}\begin{minipage}[t]{0.4\columnwidth}\includegraphics[scale=0.14]{evolution_wealth_L=1_U=5}\end{minipage}\hspace{3em}\begin{minipage}[t]{0.4\columnwidth}\includegraphics[scale=0.14]{evolution_wealth_L=0_U=5}\end{minipage} \end{figure} As expected, setting the upper target $U_{{\scriptscriptstyle \!W}}$ to a higher level produces a higher expected terminal wealth with higher standard deviation and greater downside risk (as measured by the probability of losing capital). At the same time, the higher the upper target $U_{{\scriptscriptstyle \!W}}$, the harder it is for the terminal wealth distribution to be skewed towards the upper target. Regarding the tails beyond the targeted range, the two low upper target levels $U_{{\scriptscriptstyle \!W}}=1.05$ and $U_{{\scriptscriptstyle \!W}}=1.10$ produce larger right tails, while the two higher levels $U_{{\scriptscriptstyle \!W}}=1.20$ and $U_{{\scriptscriptstyle \!W}}=1.30$ produce larger left tails, which is consistent with the fact that the greater $U_{{\scriptscriptstyle \!W}}$, the higher the risk that the investor is willing to take to achieve a higher return. This illustrates the capability of the STRS to cater to different risk appetites.
An interesting quantity to monitor is the ratio $\mathcal{R}:=(\mathbb{E}[W_{T}]-L_{{\scriptscriptstyle \!W}})/(U_{{\scriptscriptstyle \!W}}-L_{{\scriptscriptstyle \!W}})$, which measures the location of the expected performance $\mathbb{E}[W_{T}]$ relative to the targeted range: $\mathcal{R}=0\%$ means $\mathbb{E}[W_{T}]=L_{{\scriptscriptstyle \!W}}$, while at the opposite end $\mathcal{R}=100\%$ means $\mathbb{E}[W_{T}]=U_{{\scriptscriptstyle \!W}}$. In our experiments from Figure \ref{fig:target}, $\mathcal{R}$ is a decreasing function of $U_{{\scriptscriptstyle \!W}}$, from $\mathcal{R}=72\%$ for $U_{{\scriptscriptstyle \!W}}=1.05$ down to $\mathcal{R}=38\%$ for $U_{{\scriptscriptstyle \!W}}=1.30$. This illustrates the natural fact that the higher the desired upper target, the harder it is to achieve. One visible drawback of the proposed strategy is the relatively long left tail when both the upper and lower targets are set to relatively high levels, for example, $L_{{\scriptscriptstyle \!W}}\geq1.00$ and $U_{{\scriptscriptstyle \!W}}\geq1.20$. Figure \ref{fig:evolution} shows the time evolution of the wealth distribution (0.05 percentile to 99.95 percentile) over the whole investment horizon, for the STRS with $[L_{{\scriptscriptstyle \!W}}=1.0,U_{\negmedspace{\scriptscriptstyle W}}=1.1]$ (top-left panel), $[L_{{\scriptscriptstyle \!W}}=1.0,U_{\negmedspace{\scriptscriptstyle W}}=1.2]$ (top-right panel), $[L_{{\scriptscriptstyle \!W}}=1.0,U_{\negmedspace{\scriptscriptstyle W}}=\infty]$ (bottom-left panel) and $[L_{{\scriptscriptstyle \!W}}=0,U_{\negmedspace{\scriptscriptstyle W}}=\infty]$ (bottom-right panel), where the last strategy is equivalent to maximizing the expected terminal wealth without taking risk into account.
The results show that the wealth distributions in the top panel are well tightened within the prespecified target ranges over the whole investment process, as opposed to the case $U_{\negmedspace{\scriptscriptstyle W}}=\infty$ in the bottom panel. Once again, as upside potential and downside risk are naturally intertwined, one cannot protect against downside risk very well when the upper target is set to a very high level, as shown by the $[L_{{\scriptscriptstyle \!W}}=1.0,U_{\negmedspace{\scriptscriptstyle W}}=\infty]$ example (bottom-left panel). \subsection{Sensitivity analysis and choice of $L_{{\scriptscriptstyle \!W}}$\label{subsec:Sensitivity}} The next experiment is a sensitivity analysis of the expected terminal wealth, standard deviation and downside risk with respect to the bounds of the STRS. Figure \ref{fig:sensitivity} shows how the expected terminal wealth ($\mathbb{E}[W_{T}]$, first row), the standard deviation of the terminal wealth ($\text{SD}[W_{T}]$, second row) and the downside risk ($\mathbb{P}[W_{T}<1]$, third row) are affected by changes in the upper bound $U_{\!{\scriptscriptstyle W}}$ (left column) and by changes in the lower bound $L_{\!{\scriptscriptstyle W}}$ (right column). The left column of Figure \ref{fig:sensitivity} shows how the expectation $\mathbb{E}[W_{T}]$, standard deviation $\text{SD}[W_{T}]$ and downside risk $\mathbb{P}[W_{T}<1]$ increase with $U_{{\scriptscriptstyle \!W}}$, though a plateau is reached around $U_{\!{\scriptscriptstyle W}}=1.5$ for $\mathbb{P}[W_{T}<1]$ and around $U_{\!{\scriptscriptstyle W}}=1.8$ for $\mathbb{E}[W_{T}]$. On the right column, one can see that the standard deviation $\text{SD}[W_{T}]$ and downside risk $\mathbb{P}[W_{T}<1]$ both increase when $L_{{\scriptscriptstyle \!W}}$ moves away from the initial wealth $W_{0}=1.0$. 
When $L_{{\scriptscriptstyle \!W}}>1.0$, both risk measures increase with $|L_{{\scriptscriptstyle \!W}}-W_{0}|$ due to the additional risk required at the beginning of the trading period to force the portfolio value to grow from $W_{0}=1.0$ to the lower target $L_{{\scriptscriptstyle \!W}}>W_{0}=1.0$. When $L_{{\scriptscriptstyle \!W}}<1.0$, both risk measures also increase with $|W_{0}-L_{{\scriptscriptstyle \!W}}|$ due to the lack of immediate loss penalization. Nevertheless, the net effect of $L_{{\scriptscriptstyle \!W}}$ on $\mathbb{E}[W_{T}]$ is mostly negligible. As a result, these observations suggest that $L_{{\scriptscriptstyle \!W}}=W_{0}=1.0$ is an appropriate choice for the lower bound of the targeted interval, from which the upper bound $U_{\!{\scriptscriptstyle W}}$ can be set according to the risk preference and the return requirement of the investor. \begin{figure}[H] \caption{Sensitivity analysis w.r.t. target bounds\label{fig:sensitivity}} \vspace{1em} \begin{centering} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{sen_LE} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{sen_UE} \end{minipage} \par\end{centering} \vspace{1em} \begin{centering} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{sen_LSD} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{sen_USD} \end{minipage} \par\end{centering} \vspace{1em} \centering{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{sen_LP} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{sen_UP} \end{minipage} \end{figure} \subsection{Model validation\label{subsec:Model-validation}} The following experiment aims at validating the two-stage LSMC method via a comparison to the classical LSMC method. We first study a CRRA utility optimization example.
It has been noted that a simulation-and-regression approach can generate large numerical errors when the utility function is highly nonlinear (high risk aversion), see for example \citet{vanBinsbergen2007}, \citet{Garlappi2009} and \citet{Denault2017}. We apply the two-stage LSMC method and the classical LSMC method to CRRA utility optimization, and then compare the resulting initial value function estimates $\hat{v}_{0}=\frac{1}{M}\sum_{m=1}^{M}(\hat{W}_{t_{N}})^{1-\gamma}/(1-\gamma)$ for a one-year time horizon with monthly rebalancing. Following \citet{Zhang2018}, we choose $M=10,000$ sample paths to ensure numerical stability of the solution. For the classical LSMC method, we include the utility function itself as part of the regression basis, so that the regression basis can be adjusted to some extent to the risk-aversion parameter. Figure \ref{fig:LSMC-2LSMC-CRRA} shows that the classical LSMC method becomes unstable when the value of $\gamma$ is high, while the two-stage LSMC method converges quite well. In our experiment, the two-stage LSMC method can approximate the CRRA utility optimization approach well up to $\gamma=100$. \begin{figure}[H] \caption{Two-stage LSMC vs. classical LSMC for CRRA utility\label{fig:LSMC-2LSMC-CRRA}} \smallskip \centering{} \begin{minipage}[t]{1\columnwidth} \begin{center} \includegraphics[scale=0.13]{v0_CRRA} \par\end{center} \end{minipage} \end{figure} We then compare our two-stage LSMC to the classical LSMC for solving the STRS. To check the possibility of heteroskedastic residuals, we calibrate a state-dependent standard deviation $\sigma\left(z,w\right)$ as described in Subsection \ref{subsec:State-dependent-SD} and compare it with the original two-stage LSMC method in which the standard deviation only depends on the portfolio decision. In particular, we use a simple linear basis to approximate the logarithmic standard deviation.
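As a concrete illustration of the estimator $\hat{v}_{0}=\frac{1}{M}\sum_{m=1}^{M}(\hat{W}_{t_{N}})^{1-\gamma}/(1-\gamma)$ quoted earlier in this subsection, the following sketch evaluates it on simulated terminal wealths. The lognormal draws and all numerical values here are placeholder assumptions for illustration, not the paper's optimized portfolio model; note how, for large $\gamma$, the average is dominated by the smallest wealth realizations, which is one source of the numerical sensitivity discussed.

```python
import numpy as np

def crra_v0(W_T, gamma):
    """Monte Carlo estimate of the initial CRRA value function,
    v0_hat = (1/M) * sum_m W_T[m]**(1 - gamma) / (1 - gamma), for gamma != 1."""
    W_T = np.asarray(W_T, dtype=float)
    return float(np.mean(W_T ** (1.0 - gamma) / (1.0 - gamma)))

# Toy demo: lognormal terminal wealths (assumed distribution),
# with M = 10,000 sample paths as in the text.
rng = np.random.default_rng(3)
W_T = np.exp(rng.normal(0.05, 0.15, 10_000))
v0_hat = crra_v0(W_T, gamma=10.0)   # negative, as W**(1-gamma)/(1-gamma) < 0
```

For $\gamma=10$ the summand scales as $W^{-9}$, so a single path with small terminal wealth can dominate the estimate, which is why a stable regression scheme matters in the backward induction.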
Table \ref{tab:LSMC-2LSMC-STRS} shows that the two-stage LSMC method substantially improves the estimates $\hat{v}_{0}$ and the return distributions, compared to the classical LSMC approach, while using a state-dependent standard deviation does not significantly improve the results, suggesting that the assumption of homoskedastic residuals is reasonable. \begin{table}[H] \caption{Two-stage LSMC vs. classical LSMC for STRS\label{tab:LSMC-2LSMC-STRS}} \setlength\tabcolsep{2.5pt} \begin{minipage}[t]{1\columnwidth} \begin{center} {\footnotesize{} \begin{tabular}{ccccccccccccccccc} & & & \multicolumn{4}{l}{{\footnotesize{}Classical LSMC}} & & \multicolumn{4}{l}{{\footnotesize{}Two-Stage LSMC}} & & \multicolumn{4}{l}{{\footnotesize{}Two-Stage LSMC + $\sigma\!\left(z,\!w\right)$}}\tabularnewline \cline{4-17} {\footnotesize{}$L_{{\scriptscriptstyle \!W}}$} & {\footnotesize{}$U_{\!{\scriptscriptstyle W}}$} & & {\footnotesize{}$\hat{v}_{0}$} & {\footnotesize{}$\mathbb{E}\!\left[W_{T}\right]$ } & {\footnotesize{}$\text{SD}\!\left[W_{T}\right]$} & {\footnotesize{}$\mathbb{P}\!\left[W_{T}\!\!<\!\!1\right]$} & & {\footnotesize{}$\hat{v}_{0}$} & {\footnotesize{}$\mathbb{E}\!\left[W_{T}\right]$ } & {\footnotesize{}$\text{SD}\!\left[W_{T}\right]$} & {\footnotesize{}$\mathbb{P}\!\left[W_{T}\!\!<\!\!1\right]$} & & {\footnotesize{}$\hat{v}_{0}$} & {\footnotesize{}$\mathbb{E}\!\left[W_{T}\right]$ } & {\footnotesize{}$\text{SD}\!\left[W_{T}\right]$} & {\footnotesize{}$\mathbb{P}\!\left[W_{T}\!\!<\!\!1\right]$}\tabularnewline \hline {\footnotesize{}1} & {\footnotesize{}1.1} & & {\footnotesize{}0.0058} & {\footnotesize{}1.1571} & {\footnotesize{}0.1847} & {\footnotesize{}0.1244} & & {\footnotesize{}0.0574} & {\footnotesize{}1.0596} & {\footnotesize{}0.0272} & {\footnotesize{}0.0028} & & {\footnotesize{}0.0475} & {\footnotesize{}1.0499} & {\footnotesize{}0.0318} & {\footnotesize{}0.0095}\tabularnewline {\footnotesize{}1} & {\footnotesize{}1.2} & & {\footnotesize{}0.0292} &
{\footnotesize{}1.1609} & {\footnotesize{}0.1709} & {\footnotesize{}0.1077} & & {\footnotesize{}0.0922} & {\footnotesize{}1.0883} & {\footnotesize{}0.0405} & {\footnotesize{}0.0128} & & {\footnotesize{}0.0904} & {\footnotesize{}1.0867} & {\footnotesize{}0.0405} & {\footnotesize{}0.0122}\tabularnewline {\footnotesize{}1} & {\footnotesize{}1.3} & & {\footnotesize{}0.0608} & {\footnotesize{}1.1631} & {\footnotesize{}0.1542} & {\footnotesize{}0.0832} & & {\footnotesize{}0.1190} & {\footnotesize{}1.1126} & {\footnotesize{}0.0588} & {\footnotesize{}0.0178} & & {\footnotesize{}0.1239} & {\footnotesize{}1.1164} & {\footnotesize{}0.0609} & {\footnotesize{}0.0192}\tabularnewline {\footnotesize{}1} & {\footnotesize{}1.4} & & {\footnotesize{}0.0918} & {\footnotesize{}1.1663} & {\footnotesize{}0.1597} & {\footnotesize{}0.0656} & & {\footnotesize{}0.1393} & {\footnotesize{}1.1296} & {\footnotesize{}0.0832} & {\footnotesize{}0.0244} & & {\footnotesize{}0.1446} & {\footnotesize{}1.1351} & {\footnotesize{}0.0893} & {\footnotesize{}0.0286}\tabularnewline {\footnotesize{}1} & {\footnotesize{}1.5} & & {\footnotesize{}0.1199} & {\footnotesize{}1.1692} & {\footnotesize{}0.1625} & {\footnotesize{}0.0503} & & {\footnotesize{}0.1578} & {\footnotesize{}1.1449} & {\footnotesize{}0.1078} & {\footnotesize{}0.0299} & & {\footnotesize{}0.1596} & {\footnotesize{}1.1491} & {\footnotesize{}0.1165} & {\footnotesize{}0.0321}\tabularnewline {\footnotesize{}1} & {\footnotesize{}1.6} & & {\footnotesize{}0.1455} & {\footnotesize{}1.1721} & {\footnotesize{}0.1641} & {\footnotesize{}0.0454} & & {\footnotesize{}0.1718} & {\footnotesize{}1.1563} & {\footnotesize{}0.1264} & {\footnotesize{}0.0352} & & {\footnotesize{}0.1728} & {\footnotesize{}1.1596} & {\footnotesize{}0.1359} & {\footnotesize{}0.0413}\tabularnewline {\footnotesize{}1} & {\footnotesize{}$\infty$} & & {\footnotesize{}0.1903} & {\footnotesize{}1.1743} & {\footnotesize{}0.1652} & {\footnotesize{}0.0483} & & {\footnotesize{}0.1934} & 
{\footnotesize{}1.1684} & {\footnotesize{}0.1635} & {\footnotesize{}0.0423} & & {\footnotesize{}0.1938} & {\footnotesize{}1.1688} & {\footnotesize{}0.1625} & {\footnotesize{}0.0446}\tabularnewline \hline \end{tabular}} \par\end{center} \end{minipage} \end{table} \subsection{STRS and CRRA} We now compare the STRS to the CRRA utility optimization approach. Our main finding regarding this comparison is that for each risk aversion level $\gamma$ of the CRRA utility approach, one can find a target range $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$ such that the STRS delivers a similar expectation, but with a lower standard deviation and a lower downside risk. As an illustration, Figure \ref{fig:benchmark} shows how the STRS with $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]=[0.93,1.53]$ outperforms the CRRA utility approach with $\gamma=10$. Despite the better statistical moments of the STRS, the shorter right tail of the STRS compared to the CRRA utility approach can be deemed a shortcoming of our approach, though giving up some upside potential is the reason for the improved downside risk protection compared to the CRRA utility approach. \begin{figure}[H] \begin{centering} \caption{Terminal wealth distribution: comparison between STRS and CRRA\label{fig:benchmark}} \par\end{centering} \smallskip \centering{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{lb=0\lyxdot 93_ub=1\lyxdot 53} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{CRRA10} \end{minipage} \end{figure} To provide a more comprehensive comparison, we now report two risk-return trade-offs: the mean-variance efficient frontier and the trade-off between return and downside risk.
Figure \ref{fig:frontier} displays the efficient frontiers of the STRS (for different combinations of $L_{{\scriptscriptstyle \!W}}$ and $U_{{\scriptscriptstyle \!W}}$) and the CRRA utility approach (for different $\gamma$ levels) for a three-month investment horizon. The results show that the STRS and the CRRA utility approach trace out a similar mean-variance efficient frontier, while the STRS delivers a better downside risk-return trade-off. Note that the STRS and the CRRA utility approach produce similar results when the risk-aversion parameter is either very small (risk-neutral) or very high, while the STRS is preferable for intermediate risk-aversion levels. \begin{figure}[H] \begin{centering} \caption{Comparison with CRRA: risk-return trade-off\label{fig:frontier}} \par\end{centering} \smallskip \centering{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{frontier_ESD} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.13]{frontier_EP} \end{minipage} \end{figure} A theoretical proof of the higher efficiency of the STRS over the classical utility strategies would be desirable to corroborate our numerical findings. However, given for example the difficulty in deriving an explicit optimal allocation for a single trading period with a simpler downside risk minimization objective (\citealt{Klebaner2017}), such a proof might be out of reach. We thus leave this question for further research. \subsection{Extensions} This subsection discusses the wealth distributions produced by the modified target range strategies described in Section \ref{sec:Extensions}. Figure \ref{fig:prob} provides examples for the flat target range strategy (FTRS) with $L_{\!{\scriptscriptstyle W}}=1.0$ and $U_{\!{\scriptscriptstyle W}}=1.05$, $1.10$, $1.20$ and $+\infty$.
The main observation is that, as expected, the probability of the terminal wealth lying outside the predefined range $[L_{{\scriptscriptstyle \!W}},U_{{\scriptscriptstyle \!W}}]$ is smaller than for the STRS (refer to Figure \ref{fig:target} for comparison). This is the main strength of the FTRS: downside risk is kept to a minimum, while the price to pay for this safety is the inability to generate high returns. Finally, the wealth distribution is less sensitive to the choice of $U_{\!{\scriptscriptstyle W}}$: the distribution is tight even when $U_{\!{\scriptscriptstyle W}}=\infty$, given the absence of incentive to chase high returns. In theory, if one wants to maximize the probability that the terminal wealth lies within the targeted range with the lower bound $L_{\!{\scriptscriptstyle W}}=1.0$ and a large enough upper bound $U_{\!{\scriptscriptstyle W}}$, the optimal decision should be to allocate all the capital to the risk-free asset. Numerically though, it is difficult to guarantee a full allocation in the risk-free asset at all times and for all paths. Intuitively, the reason for this is the following: for the portfolios allocated mostly to the risk-free asset, most, if not all, of the terminal wealth realizations will lie within the targeted range, which makes the value function flat and almost invariant among these conservative portfolio allocations.
\begin{figure}[H] \caption{Terminal wealth distributions using FTRS\label{fig:prob}} \smallskip \begin{centering} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{prob_lb=1_ub=1\lyxdot 05} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{prob_lb=1_ub=1\lyxdot 1} \end{minipage} \par\end{centering} \centering{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{prob_lb=1_ub=1\lyxdot 2} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{prob_lb=1_ub=5} \end{minipage} \end{figure} Figure \ref{fig:ex_return} provides some examples for the relative target range strategy (RTRS) with a passive equal-weight portfolio as benchmark. The probability that the portfolio value underperforms the benchmark portfolio remains small (around $6\%-8\%$ for the excess return distributions), though higher than those provided by absolute targets. The reason for this is that the passive equal-weight benchmark already delivers a high expected return, therefore outperforming it requires taking more risk than what was necessary in the previous absolute return target examples.
\begin{figure}[H] \begin{centering} \caption{Excess terminal wealth distributions with relative target range strategies\label{fig:ex_return}} \par\end{centering} \smallskip \begin{centering} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{ex_lb=0_ub=0\lyxdot 1} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{ex_lb=0_ub=0\lyxdot 2} \end{minipage} \par\end{centering} \begin{centering} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{ex_prob_lb=0_ub=0\lyxdot 1} \end{minipage}\qquad{} \begin{minipage}[t]{0.4\columnwidth} \includegraphics[scale=0.55]{ex_prob_lb=0_ub=0\lyxdot 2} \end{minipage} \par\end{centering} \centering{}{\footnotesize{}Excess wealth distributions of STRS (top row) and FTRS (bottom row)}{\footnotesize\par} \end{figure} \section{Conclusions\label{sec:Conclusion}} This paper introduces the skewed target range strategy (STRS) for portfolio optimization problems. The STRS maximizes the expected portfolio value while simultaneously restraining the bulk of the return distribution within a predefined range. This joint goal is attained with an unconstrained optimization formulation, which achieves, in a simpler manner, similar results to those that can be expected from more complex constrained optimization methods. To illustrate the effectiveness of the STRS, we study a multi-period portfolio optimization problem and propose a two-stage least squares Monte Carlo (LSMC) method to handle the new objective function. The two-stage regression method can also be adopted for general investment objectives such as the smooth constant relative risk aversion (CRRA) utility. We show that this regression method substantially improves the numerical stability of the LSMC algorithm compared to direct regression. We show that the STRS achieves a similar mean-variance efficient frontier while delivering a better downside risk-return trade-off, compared to the CRRA utility approach.
We find that the recommended level for the lower bound of the target range is the initial portfolio value, at which the standard deviation and the downside risk of the terminal portfolio value are minimized. From there, the upper bound of the target range can be set based on risk preferences. Going further, the unconstrained optimization formulation used by the STRS, built upon an indicator function, has the potential to incorporate additional range constraints on other dynamic risk measures such as realized volatility or maximum drawdown. This is an area we wish to investigate in future research. \paragraph*{Acknowledgments} The authors are grateful to Dr. Wen Chen and the two anonymous referees for their valuable comments and remarks.\bibliographystyle{chicago}
\section{Introduction} Due to their fortuitous geometry, transiting exoplanets allow the determination of physical properties that are inaccessible or hard to reach for non-transiting systems. One of the most exciting possibilities enabled by the transiting geometry is to measure atmospheric properties of exoplanets without the need to resolve them from their parent star, through the technique of transmission spectroscopy. In this technique, the atmospheric opacity at the planet terminator is probed by measuring the planetary size via transit light curve observations at different wavelengths. The measurable quantity is the planet-to-star radius ratio as a function of wavelength, $(R_p/R_*) (\lambda) \equiv k(\lambda)$, and is termed the transmission spectrum. The measurement of a transmission spectrum is a challenging one, with one atmospheric scale height $H$ translating to a signal of order $2 H k \approx 10^{-4}$ for hot Jupiters \citep[e.g.,][]{Brown:2001:transspec}. The requirements on precision favour exoplanets with large atmospheric scale heights, large values of $k$ (e.g., systems transiting M dwarfs), and bright host stars, due to the necessity of acquiring a large number of photons to reach the needed precision. The first successful measurement by transmission spectroscopy was the detection with the {\em Hubble Space Telescope} ({\em HST}) of absorption by Na I in the hot Jupiter HD~209458b \citep{Charbonneau:2002:hd209}. The signature of Na was 2--3 times weaker than expected from clear atmosphere models, providing the first indications that condensates can play an important role in determining the opacity of their atmospheres as seen in transmission \citep[e.g.,][and references therein]{Fortney:2005:condensates}.
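To make the order-of-magnitude estimate $2Hk\approx10^{-4}$ quoted above concrete, the following sketch evaluates it for representative hot-Jupiter parameters, with $H=k_{B}T/(\mu m_{H}g)$ expressed in units of the stellar radius. All the numerical values below (temperature, gravity, radii) are assumed for illustration and are not specific to any system discussed here.

```python
import math

# Representative hot-Jupiter numbers (assumed for illustration only)
k_B = 1.380649e-23    # Boltzmann constant [J/K]
m_H = 1.6735575e-27   # hydrogen atom mass [kg]
T = 1500.0            # atmospheric temperature [K]
mu = 2.3              # mean molecular weight (H2/He-dominated atmosphere)
g = 10.0              # planetary surface gravity [m/s^2] (inflated planet)
R_star = 7.0e8        # stellar radius [m], roughly 1 R_sun
k = 0.1               # planet-to-star radius ratio R_p/R_*

H = k_B * T / (mu * m_H * g)       # pressure scale height [m], ~500 km here
signal = 2.0 * (H / R_star) * k    # change in transit depth over one scale height
```

The change in transit depth follows from $(R_p+H)^2/R_*^2 - R_p^2/R_*^2 \approx 2k\,(H/R_*)$, which for these assumed values lands at the $10^{-4}$ level quoted in the text.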
Subsequent space-based studies have concentrated largely on the planets orbiting the stars HD~209458 and HD~189733, because these stars are very bright and therefore allow the collection of a large number of photons even with the modest aperture of space-based telescopes. A recent study of all the transmission spectra available for HD~189733, spanning the range from 0.32 to 24~$\mu$m, points to a spectrum dominated by Rayleigh scattering over the visible and near-infrared range, with the only detected feature being a narrow resonance line of Na \citep{Pont:2013:hd189}. For HD~209458, \citet{Deming:2013:hd209} present new WFC3 data combined with previous STIS data \citep{Sing:2008a:hd209}, resulting in a transmission spectrum spanning the wavelength range 0.3 to 1.6 $\mu$m. They conclude that the broad features of the spectrum are dominated by haze and/or dust opacity. In both cases the spectra are different from those predicted by clear atmosphere models that do not incorporate condensates. In order to further our understanding of gas giant atmospheres it is necessary to build a larger sample of systems with measured transmission spectra. Hundreds of transiting exoplanets, mostly hot gas giants, have been discovered by ground-based surveys such as HATNet \citep{Bakos:2004:hatnet}, WASP \citep{Pollacco:2006:wasp}, KELT \citep{Pepper:2007:kelt}, XO \citep{McCullough:2005:XO}, TRES \citep{Alonso:2004:tres1} and HATSouth \citep{Bakos:2013a}, with magnitudes within reach of the larger collecting areas afforded by ground-based telescopes but often too faint for {\em HST}\footnote{The {\em Kepler} mission \citep{Borucki:2010:kepler} has discovered thousands of transiting exoplanet candidates, but the hosts are usually significantly fainter than the systems discovered by ground-based surveys, making detection of their atmospheres more challenging.}.
The ground-based observations have to contend with the atmosphere and with instruments lacking the space-based stability of {\em HST}, but despite these extra hurdles the pace of ground-based transmission spectra studies is steadily increasing. Following the ground-based detection of Na I in HD~189733b \citep{Redfield:2008:hd189} and confirmation of Na I in HD~209458b \citep{Snellen:2008:hd209}, Na I has additionally been reported from the ground in WASP-17b \citep{Wood:2011:w17,Zhou:2012:w17} and XO-2b \citep{Sing:2012:xo2:na}. K I has been detected in XO-2b \citep{Sing:2011:xo2:k} and the highly eccentric exoplanet HD~80606b \citep{Colon:2012:hd806:k}. All of these studies have used high resolution spectroscopy or narrow band photometry to specifically target resonant lines of alkali elements. Recently, a detection of H$\alpha$ has been reported from the ground for HD~189733b \citep{jensen:2012:ha}, complementing previous space-based detections of Ly$\alpha$ and atomic lines in the UV with {\em HST} for HD~189733b and HD~209458b \citep{Vidal-Madjar:2003:hd209, vidal-madjar:2004:hd209,lecavelier:2010:hd189}. Differential spectrophotometry using multi-object spectrographs offers an attractive means to obtain transmission spectra, given the possibility of using comparison stars to account for the various systematic effects that affect the spectral time series obtained. Using such spectrographs, transmission spectra in the optical have been obtained for GJ~1214b \citep[][610-1000 nm with VLT/FORS]{Bean:2011:gj1214} and recently for WASP-29b \citep[][515-720 nm with Gemini/GMOS]{Gibson:2013:w29}, with both studies finding featureless spectra. In the near infrared, \citet{Bean:2013:w19} present a transmission spectrum in the range 1.25 to 2.35 $\mu$m for WASP-19b, using MMIRS on Magellan.
In this work we present an optical transmission spectrum of another planet, WASP-6b, an inflated sub-Jupiter-mass ($0.504 M_J$) planet orbiting a $V=11.9$ G dwarf \citep{Gillon:2009a}, in the range 471--863 nm. \section{Observations} \label{sec:obs} The transmission spectrum of WASP-6b was obtained by performing multi-object differential spectrophotometry with the Inamori-Magellan Areal Camera \& Spectrograph \citep[IMACS,][]{Dressler:2011:imacs} mounted on the 6.5m Baade telescope at Las Campanas Observatory. A series of 91 spectra of WASP-6 and a set of comparison stars were obtained during a transit of the hot Jupiter WASP-6b on October 03, 2010 with the f/2 camera of IMACS, which provides an unvignetted circular field of view of radius $r\approx 12$ arcmin. The large field of view makes IMACS a very attractive instrument for multi-object differential spectrophotometry, as it allows one to search for suitable comparison stars with magnitudes and colors as similar as possible to those of the target star. The median cadence of our observations was 224 sec, and the exposure time was set to 140 sec, except for the first eight exposures, when we were tuning the exposure level and whose exposure times were $\{30, 120, 150, 150, 150, 130, 130, 130\}$ sec. The count level of the brightest pixel in the spectrum of WASP-6 was $\approx$ 43000 ADU, i.e. $\approx$ 65\% of the saturation level. In addition to WASP-6 we observed 10 comparison stars of comparable magnitude, seven of which had the whole wavelength range of interest ($\approx 4700-8600\ \text{\normalfont\AA}$) recorded on the CCD with enough signal-to-noise. The seven comparison stars we used are listed in Table~\ref{tab:comp}. The integrated counts over the wavelength range of interest for the spectrum of WASP-6 were typically $\approx 3.6\times10^8$ electrons, giving a Poisson noise limit for the white-light light curve of $\approx 0.06$ mmag.
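The quoted Poisson limit can be checked directly from the integrated electron count, using the standard small-amplitude conversion $\Delta m \approx (2.5/\ln 10)\,\Delta F/F$:

```python
import math

N_e = 3.6e8                               # integrated electrons (from the text)
frac = 1.0 / math.sqrt(N_e)               # fractional photon (Poisson) noise
mmag = 2.5 / math.log(10.0) * frac * 1e3  # same noise in millimagnitudes
# mmag evaluates to about 0.057, consistent with the ~0.06 mmag quoted above
```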
Each star was observed through a $10\times 10$ arcsec$^2$ slit in order to avoid the adverse effects of variable slit losses. We used the 300-l+17 grating as dispersing element, which gave us a seeing-dependent resolution $\Delta \lambda$ which was $\approx$ $5\ \text{\normalfont\AA}$ under $0.7$ arcsec seeing and a dispersion of $1.34 \text{\normalfont\AA}$ per pixel. In addition to the science mask, we obtained HeNeAr arc lamps through a mask that had slits at the same position as the science mask but with slit widths in the spectral direction of $0.7$ arcsec. Observing such masks is necessary in order to produce well defined lines that are then used to define the wavelength solution. \begin{deluxetable}{c} \tabletypesize{\scriptsize} \tablecaption{List of comparison stars \label{tab:comp}} \tablewidth{0pt} \tablehead{ \colhead{2MASS identifier}} \startdata 2MASS-23124095-2243232\\ 2MASS-23124836-2252099\\ 2MASS-23124448-2253190\\ 2MASS-23124428-2256403\\ 2MASS-23114068-2248130\\ 2MASS-23113937-2250334\\ 2MASS-23114820-2256592\\ \enddata \end{deluxetable} The extracted spectra of WASP-6 and the seven comparison stars we used are shown for a typical exposure in Figure~\ref{fig:W6Spec}. The conditions throughout the night were variable. The raw light curves constructed with the integrated counts over the whole spectral range for WASP-6 and the comparison stars are shown as a function of time in Figure~\ref{fig:counts}. Besides the variation due to varying airmass (and the transit for WASP-6), there were periods with strongly varying levels of transparency concentrated in the period of time 0--2 hrs after mid-transit. The seeing was in the range $\approx$ 0.6--0.8\arcsec. In order to maintain good sampling of the PSF in the spatial direction we defocused the telescope slightly in the periods of best seeing. Changes in seeing and transparency left no noticeable traces in the final light curves. 
\begin{figure} \plotone{f1.eps} \caption{Extracted spectra for WASP-6 and the seven comparison stars used in this work for a typical exposure. \label{fig:W6Spec}} \end{figure} \begin{figure} \plotone{f2.eps} \caption{Raw light curves for WASP-6 and seven comparison stars used in this work as a function of time. \label{fig:counts}} \end{figure} \section{Data reduction} \subsection{Background and sky subtraction} After subtracting the median value of the overscan region from every image, an initial trace of each spectrum was obtained by calculating the centroid of each row, where rows are perpendicular to the dispersion direction. Each row was then divided into three regions: a central region, which contains the bulk of the light of the star, a middle, on-slit region which is dominated by sky continuum and line emission, and an outer, out-of-slit region which contains a smooth background outside the slit arising from, e.g., scattered light. The middle and outer regions have components on each side of the spectrum. The outermost region was used to determine a smooth background that varies slowly along the dispersion direction. The median level was obtained in the outer regions on either side of the slit, and then a third-order polynomial was used to estimate the average background level as a function of pixel in the dispersion direction. This smooth background component was then subtracted from the central and middle regions. Then a Moffat function plus a constant level $c_i$ was fit robustly to each background-subtracted row in those regions. The estimated $c_i$ (one per row) was then subtracted from the central and middle regions in order to obtain a spectrum where only the stellar contribution remains. It is necessary to estimate the sky emission on a row-by-row basis because sky emission lines have a wide, box-shaped form with sharp boundaries, due to the fact that they fully illuminate the wide slit.
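The per-row fit described above can be sketched as follows. This is a plain least-squares fit on synthetic data with assumed numbers (profile amplitude, width, and sky level are invented for the demo), whereas the paper uses a robust fit, which we do not reproduce here.

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat_plus_const(x, A, x0, alpha, beta, c):
    # Moffat stellar profile plus a constant per-row sky level c
    return A * (1.0 + ((x - x0) / alpha) ** 2) ** (-beta) + c

# Synthetic background-subtracted row: star at pixel 40 on a flat sky pedestal
rng = np.random.default_rng(0)
x = np.arange(80.0)
row = moffat_plus_const(x, 5000.0, 40.0, 3.0, 2.5, 120.0) \
      + rng.normal(0.0, 5.0, x.size)

# Initial guesses from the data itself, then fit all five parameters
p0 = [row.max() - np.median(row), float(np.argmax(row)), 3.0, 2.5, np.median(row)]
popt, _ = curve_fit(moffat_plus_const, x, row, p0=p0)
c_i = popt[-1]              # per-row sky estimate
star_only = row - c_i       # row with only the stellar contribution left
```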
\subsection{Fine tracing and spectrum extraction} The background- and sky-subtracted spectrum was traced by an algorithm that cross-correlates each slice perpendicular to the wavelength direction with a Gaussian in order to find the spectral trace. The centers of the trace were then fitted robustly with a fourth-order polynomial. This new tracing procedure served as a double check for the centers obtained via the centroid method in the background and sky subtraction part of the data reduction process; both methods gave traces consistent with each other. With the trace in hand, the spectrum was extracted by using a simple extraction procedure, i.e., summing the flux on each row $\pm 15$ pixels from the trace position at that row. We also tried optimal extraction \citep{Marsh:1989a}, but it led to additional systematic effects when analyzing the light curves,\footnote{Optimal extraction assumes that the profile along the wavelength direction is smooth enough to be approximated by a low order polynomial. However, this assumption is not always valid. In particular, we found that fringing in the reddest part of the spectra induces fluctuations in the extracted flux with wavelength due to the inadequacy of the smoothness assumption.} and in any case optimal extraction is not expected to give significant gains over simple extraction at the high signal-to-noise levels we are working with here. We also took spectroscopic flats at the beginning of the night with a quartz lamp and reduced the data using both flat-fielded and non flat-fielded spectra. The results were consistent when using both alternative reductions, but the flat-fielded spectra showed higher dispersion in the final transmission spectrum. We therefore used the non flat-fielded spectra in the present work. \subsection{Wavelength calibration} The extracted spectra were calibrated using HeNeAr lamps taken at the start of the night.
The wavelength solution was obtained by the following iterative procedure: pixel centers of lines with known wavelengths were obtained by fitting Gaussians to them, and then all the pixel centers, along with the known wavelengths of the lines, were fitted by a sixth-order Chebyshev polynomial. We checked the absolute deviation of each line from the fit, and removed the most deviant one from our sample, repeating the fit without it. This process was iterated, removing one line at a time, until an rms of less than 2000 m/s was obtained. The rms of the final wavelength solution was $\approx 1200$ m/s, using $27$ lines. The procedure explained in the preceding paragraph served to wavelength calibrate the first spectrum of the night, closest in time to the HeNeAr lamps. In order to measure and correct for wavelength shifts throughout the night, the first spectrum was cross-correlated with the subsequent ones in pixel-space in order to find the shifts in wavelength-space. If $\lambda_{t_0,s}(p)$ is the wavelength solution at time $t_0$ (the beginning of the night) for star $s$ as a function of the pixel $p$, then the wavelength solution at time $t$ is just $\lambda_{t_0,s}(p+\delta p_{t,s})$, where $\delta p_{t,s}$ is the shift in pixel-space found by cross-correlating the spectrum of star $s$ taken at time $t_0$ with the one taken at time $t$. Finally, each spectrum was fitted with a b-spline in order to interpolate each of the spectra onto a common wavelength grid with pixel size $0.75 \text{\normalfont\AA}$. \section{Modelling Framework} The observed signal of WASP-6 is perturbed with respect to its intrinsic shape, which we assume ideally to be a constant flux, $F$. This constant flux is multiplied by the transit signal, $f(t; \theta)$, which we describe parametrically using the formalism of \citet{Mandel:2002a}. In what follows $\theta$ represents the vector of transit parameters.
The largest departure from this idealized model in our observations will be given by systematic effects arising from atmospheric and instrumental effects, which are assumed to act multiplicatively on our signals. We will model the logarithm of the observed flux, $L(t)$, as \begin{equation} \label{eq1} L(t; \theta) = S(t) + \log_{{10}}f(t; \theta)+\log_{{10}}F + \epsilon(t), \end{equation} \noindent where $S(t)$ represents the (multiplicative) perturbation to the star's flux, which we will refer to in what follows as the perturbation signal, and $\epsilon(t)$ is a stochastic signal which represents the noise in our measurements (under the term noise we will also include potential variations of the star that are not accounted for in the estimate of the deterministic $S(t)$ and that can be modelled by a stochastic signal). \subsection{Modelling the perturbation signal} \subsubsection{Estimation of Systematic Effects via Principal Component Analysis of the Comparison Stars} Each star in the field is affected by a different perturbation signal. However, these perturbation signals have in common that they arise from the same physical and instrumental sources. This common origin is information we want to take advantage of. We model this by assuming that a given perturbation signal is in fact a linear combination of a set of signals $s_i(t)$, which represent the different instrumental and atmospheric effects affecting all of our lightcurves, i.e., \begin{equation} \label{eq:pert} S_k(t) = \sum_{i=1}^K \alpha_{k,i} s_i(t). \end{equation} Note that this model for the perturbation signal so far includes the popular linear and polynomial trends (e.g., $s_i(t) = t^i$).
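As a concrete illustration of Eq.~(\ref{eq:pert}), the following minimal numpy sketch (with hypothetical polynomial trends as the $s_i(t)$ and illustrative coefficient values) recovers the $\alpha_{k,i}$ of a synthetic perturbation signal by least squares:

```python
import numpy as np

# Hypothetical illustration of Eq. (2): a perturbation signal built as a
# linear combination of polynomial trends s_i(t) = t**i.
rng = np.random.default_rng(0)
t = np.linspace(-0.1, 0.1, 200)              # times (arbitrary units)
alpha_true = np.array([0.002, -0.015, 0.3])  # illustrative alpha_{k,i}

S = np.column_stack([t**i for i in range(3)])  # design matrix [1, t, t^2]
log_flux = S @ alpha_true + rng.normal(0, 1e-4, t.size)  # no transit here

# Least-squares estimate of the alpha_{k,i}
alpha_hat, *_ = np.linalg.lstsq(S, log_flux, rcond=None)
print(np.round(alpha_hat, 3))
```

This is the special case where the $s_i(t)$ are known analytic trends; the PCA approach described next replaces them with data-driven estimates.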
According to this model, the logarithm of the flux of each of $N$ stars without a transiting planet in our field can be modelled as \begin{equation} L_{k}(t; \alpha) = S_{k}(t; \alpha) + \log_{{10}}F_{k} + \epsilon_{k}(t), \label{eq:model_nt} \end{equation} \noindent where $\alpha$ denotes the set of parameters $\{\alpha_{k,i}\}_{i=1}^K$. In the case in which we have a set of comparison stars, we can see each of them as an independent (noisy) measurement of a linear combination of the signals $s_i(t)$ in Eq.~(\ref{eq:pert}). A way of obtaining those signals is by assuming that the $s_i(t)$ are uncorrelated random variables, in which case these signals are easily estimated by performing a principal component analysis (PCA) of the mean-subtracted lightcurves of the comparison stars. Given $N$ comparison stars one can estimate at most $N$ components, and thus we must have $K \leq N$. As written in Eq.~\ref{eq:model_nt} we cannot separate $s_i(t)$ from $\epsilon_k(t)$, and in general the principal components will have contributions from both terms. If $s_i(t) \gg \epsilon_k(t)$ the $K$ principal components that contribute most to the signal variance will be dominated by the perturbation signals, but some projection of the $\epsilon_k$ into the estimates of $s_i$ is to be expected. \subsubsection{Selecting the number of principal components} In our case, the number of components $K$ is unknown a priori. We therefore need to determine an optimal number of principal components to describe the perturbation signal, taking into consideration that there is noise present in the lightcurves of the comparison stars and, thus, some of the principal components obtained are mostly noise. There are several possibilities for doing this depending on what we define as optimal. We will determine the optimal number of components as the minimum number of components that are able to achieve the best predictive power allowed by the maximum set of $N$ components available.
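The PCA step above can be sketched as follows on synthetic data (the number of stars, the two "systematics" and the noise level are all illustrative assumptions, not our actual data):

```python
import numpy as np

# Sketch: estimate shared systematics s_i(t) by PCA (via SVD) of the
# mean-subtracted comparison-star light curves, on synthetic data.
rng = np.random.default_rng(1)
n_stars, n_times = 7, 300
t = np.linspace(0, 1, n_times)

# Two hypothetical common systematics (e.g., airmass-like and seeing-like)
s_true = np.vstack([t - 0.5, np.sin(2 * np.pi * t)])
alpha = rng.normal(0, 1, (n_stars, 2))              # per-star loadings
L = alpha @ s_true + rng.normal(0, 0.05, (n_stars, n_times))

# PCA: SVD of the mean-subtracted matrix; rows of Vt are the components
L0 = L - L.mean(axis=1, keepdims=True)
U, svals, Vt = np.linalg.svd(L0, full_matrices=False)

# Fraction of variance captured by the first K components
var_frac = np.cumsum(svals**2) / np.sum(svals**2)
print(np.round(var_frac[:3], 3))
```

With noise much weaker than the common signals, the first two components dominate the variance, mirroring the $s_i(t) \gg \epsilon_k(t)$ regime discussed above.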
As a measure of predictive power we use the $k$-fold cross-validation procedure \citep{ESL07}. $k$-fold cross-validation is a procedure which estimates prediction error, i.e., how well a model predicts out-of-sample data. The idea is to split the datapoints into $k$ disjoint groups (called folds). A ``validation'' fold is left out and a fit is done with the remaining ``training'' folds, allowing us to predict the data in the validation fold that was not used in this fitting procedure. This procedure is repeated for all folds. Denoting the datapoints by $y_i$ and the values predicted on the $k$-th fold by the cross-validation procedure by $f_i^{-k}$, an estimate for the prediction error is \begin{equation} \label{cv_error} \hat{CV} = \frac{1}{N} \sum_{i=1}^{N}\mathcal{L}(y_i^k-f^{-k}_i), \nonumber \end{equation} \noindent where $\mathcal{L}(\cdot)$ is the loss function. Examples of loss functions are the $\mathcal{L}_1$ norm ($\mathcal{L}_1(x) = |x|$) or the $\mathcal{L}_2$ norm ($\mathcal{L}_2(x) = x^2$). In our case, the light curves of the $N$ comparison stars are used to estimate $l<N$ principal components. These $l$ principal components, which are a set of light curves $\{s_i\}_{i=1}^l$, are our estimates of the systematic effects, and we use the out-of-transit part of the light curve of WASP-6 as the validation data by fitting it with the $\{s_i\}_{i=1}^l$. In more precise terms, if $y(t_k)$ denotes the time series of the out-of-transit portion of the light curve of WASP-6, we apply $k$-fold cross-validation by considering a model of the form $y(t_k) = \sum_{i=1}^l \alpha_i s_i(t_k)$. \subsection{Joint Parameter Estimation for Transit and Stochastic Components} \begin{figure} \plotone{f3.eps} \caption{Example of the structure that is expected in the power spectral density (PSD) of residual signals of the different types considered in this paper. The PSDs shown here are the mean of $10000$ realizations with the noise-structures indicated in the figure.
Note that the white-noise PSD is flat, while the flicker-noise and the ARMA($0,1$) models cover low and high-frequency ranges, respectively. \label{noise_graph}} \end{figure} In the previous subsections we set up an estimation process for the signal given in eq. (\ref{eq:pert}) using principal component analysis. It remains to specify a model for the stochastic signal which we have termed noise, i.e., the $\epsilon(t)$ term in eq. (\ref{eq1}). As noted above, the principal components will absorb part of the $\epsilon(t)$, and so our estimate of the noise may not necessarily accurately reflect the $\epsilon(t)$ term in eq. (\ref{eq1}) assuming the model holds. Nonetheless, this is of no consequence as we just aim to model the residuals after the time series has been modeled with the $\{s_i\}_{i=1}^l$. While we still call this term $\epsilon(t)$ in what follows, one should bear in mind this subtlety. An important feature of the correlated stochastic models we consider is that they can model trends. The $\{s_i\}_{i=1}^l$ are obtained from the comparison stars, and while the hope is that they capture all of the systematic effects, it is possible that some systematic effects individual to the target star are not captured. The stochastic ``noise'' models considered below that have time correlations can in principle capture remnant individual trends particular to WASP-6. We make use of Markov Chain Monte Carlo \citep[MCMC; see, e.g.,][]{Ford:2005a} algorithms to obtain estimates of the posterior probability distributions of our parameters, $\theta, \alpha, \eta$, given a dataset $\mathbf{y}$, where we have introduced a new set of parameters $\eta$ characterizing a stochastic component (see below). The posterior distribution $p(\mathbf{\theta, \alpha, \eta}|\mathbf{y})$ is obtained using a prior distribution for our parameters $p(\mathbf{\theta, \alpha, \eta})$ and a likelihood function, $p(\mathbf{y}|\mathbf{\theta, \alpha, \eta})$.
Following previous works \citep[e.g.][]{Carter:2009a,Gibson:2012:GP} we assume that the likelihood function is a multivariate Gaussian distribution given by \begin{eqnarray} \label{eq:lik} p(\mathbf{y}|\mathbf{\theta, \alpha,\eta}) &=& \frac{1}{(2\pi)^{n/2}|\mathbf{\Sigma_\eta}|^{1/2}} \\ & & \times \exp\left[{-\frac{1}{2} (\mathbf{y}-\mathbf{g}(\theta, \alpha))^T\mathbf{\Sigma_\eta}^{-1}(\mathbf{y}-\mathbf{g}(\theta, \alpha))}\right], \nonumber \end{eqnarray} \noindent where $\mathbf{g}(\theta, \alpha)$ is the function that predicts the observed datapoints and $\mathbf{\Sigma}_\eta$ is the covariance matrix which depends on the set of parameters $\eta$. It is the structure of this matrix which defines the type of noise of the residuals. Previous works have proposed to account for time correlated structure in the residuals using flicker noise models, where it is assumed that the noise follows a Power Spectral Density (PSD) of the form $1/f$ \citep{Carter:2009a}, and Gaussian processes, where the covariance matrix is parametrized with a particular kernel that can incorporate correlations depending on a set of input parameters, including time \citep{Gibson:2012:GP,Gibson:2013:w29} \footnote{In \citet{Gibson:2012:GP} a set of optical state parameters is used within a Gaussian process framework to model what we have termed here the perturbation signal, while in \citet{Gibson:2013:w29} the Gaussian process is used to account for the time correlation structure of the residuals in a procedure more comparable to ours.}. In the present work we consider three different models: a white-noise model, where the covariance matrix is assumed to be diagonal, a flicker-noise model, and ARMA$(p,q)$ models, where the structure of the covariance is determined via the parameters $p$ and $q$ (see \S\ref{ssec:ARMA} below for the definition of ARMA$(p,q)$ models). 
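The Gaussian likelihood of Eq.~(\ref{eq:lik}) can be evaluated stably for an arbitrary covariance matrix via a Cholesky factorization; a minimal sketch (illustrative values, checked against the diagonal white-noise special case):

```python
import numpy as np

# Sketch: log of the multivariate Gaussian likelihood of Eq. (5),
# evaluated via a Cholesky factor of the covariance matrix Sigma_eta.
def gaussian_loglike(y, g, Sigma):
    """log p(y | theta, alpha, eta) for residuals r = y - g(theta, alpha)."""
    r = y - g
    Lc = np.linalg.cholesky(Sigma)              # Sigma = Lc Lc^T
    z = np.linalg.solve(Lc, r)                  # whitened residuals
    logdet = 2.0 * np.sum(np.log(np.diag(Lc)))  # log |Sigma|
    n = r.size
    return -0.5 * (n * np.log(2 * np.pi) + logdet + z @ z)

# White-noise special case: Sigma = sigma_w^2 I reduces to the usual form
n, sigma_w = 50, 0.3
rng = np.random.default_rng(2)
r = rng.normal(0, sigma_w, n)
ll = gaussian_loglike(r, np.zeros(n), sigma_w**2 * np.eye(n))
ll_direct = -0.5 * n * np.log(2 * np.pi * sigma_w**2) - 0.5 * np.sum(r**2) / sigma_w**2
print(np.isclose(ll, ll_direct))
```

The three noise models considered below differ only in how $\mathbf{\Sigma}_\eta$ is parametrized; the likelihood evaluation itself is unchanged.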
The reason for choosing these three models is that they sample a wide range of spectral structure of the noise: white-noise models define models where the PSD is flat, while flicker and ARMA-like models define noise structures with PSDs with power in low and high frequencies, respectively. Figure~\ref{noise_graph} illustrates the various structures the PSD can have for the different noise structures considered here. \subsubsection{Flicker noise model} Flicker-noise is known to arise in many astrophysical time series \citep{Press:1978a}. It is a type of noise that fits long-range correlations in a stochastic process very well because of its assumed PSD shape of $1/f$. An efficient set of algorithms for its implementation in MCMC algorithms was proposed recently by \citet{Carter:2009a}. The basic idea of this implementation is to assume that the noise is made up of two components: an uncorrelated Gaussian process of constant variance and a correlated Gaussian process which follows this flicker-noise model. These two components are parametrized by $\sigma_w$ and $\sigma_r$, characterizing the white and correlated noise components, respectively. A wavelet transform of the residuals takes the problem into the wavelet basis where flicker noise is nearly uncorrelated, making the problem analytically and computationally more tractable. \subsubsection{ARMA noise model} \label{ssec:ARMA} Autoregressive-moving-average (ARMA) models have been in use in the statistical literature for a long time with a very broad range of different applications \citep{TSA91}. Although long known in the astronomical community \citep[e.g.,][]{Scargle:1981a,Koen:1993a}, these noise models have not been used so far for transit lightcurves to the best of our knowledge.\footnote{ARMA$(p,q)$ models have been considered recently in the modeling of radial velocity data \citep{Tuomi:2013:tceti}.
The very irregular sampling of those data requires careful consideration; in the case of transit light curve analysis their use is more direct given the nearly uniform sampling that is obtained for these observations.} The time series $X_{t_k}$ of an ARMA($p,q$) process, where the $t_k$ are the times of each observation, satisfies \begin{equation} X_{t_k}= \sum_{i=1}^p \phi_i X_{t_{k-i}}+\sum_{i=1}^q \theta_i \varepsilon(t_{k-i})+\varepsilon(t_k), \end{equation} \noindent where the $\{\phi_i\}^p_{i=1}$ and $\{\theta_i\}^q_{i=1}$ are the parameters of the model and $\varepsilon(t_k)$ is white noise with variance $\sigma_w^2$. The orders $(p,q)$ of the ARMA($p,q$) model define how far in the past a given process looks when defining future values. Long-range correlations need a high order ARMA model, while short-range correlations need lower order models. An ARMA model allows us to explore a higher range of noise structures in a complementary way to flicker noise models. In order to fit ARMA models to the residuals via an MCMC algorithm, we need the likelihood function of the model given that the residuals follow an ARMA($p,q$) model. For this we implemented the recursive algorithm described in \citet[Chapter 8]{TSA91}, which assumes that $\varepsilon (t_k)$ follows a normal distribution with constant variance and that the ARMA process is causal and invertible. \subsubsection{Stochastic model selection} \label{ssec:model_sel} Given the three proposed noise models for the stochastic signal $\epsilon(t)$, it remains to define which of the three affords a better description of the data, taking into account the trade-off between the complexity of the proposed model and its goodness-of-fit. There are several criteria for model selection; a comprehensive comparison between different criteria has been carried out recently by \citet{Vehtari:2012a}.
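The ARMA($p,q$) recursion defined above can be simulated directly; a minimal sketch for an ARMA(2,2) process with illustrative (causal, invertible) parameters, showing the short-range correlation structure this family produces:

```python
import numpy as np

# Sketch: simulate the ARMA(p,q) recursion of the equation above,
# here for ARMA(2,2) with illustrative parameters.
def simulate_arma(phi, theta, sigma_w, n, seed=3):
    rng = np.random.default_rng(seed)
    eps = rng.normal(0, sigma_w, n)
    p, q = len(phi), len(theta)
    x = np.zeros(n)
    for k in range(n):
        ar = sum(phi[i] * x[k - 1 - i] for i in range(p) if k - 1 - i >= 0)
        ma = sum(theta[i] * eps[k - 1 - i] for i in range(q) if k - 1 - i >= 0)
        x[k] = ar + ma + eps[k]
    return x

x = simulate_arma(phi=[0.5, -0.2], theta=[0.3, 0.1], sigma_w=1.0, n=5000)

# Sample autocorrelation: sizable at short lags, negligible at long lags
def acf(x, lag):
    x0 = x - x.mean()
    return np.dot(x0[:-lag], x0[lag:]) / np.dot(x0, x0)

print(round(acf(x, 1), 2), round(acf(x, 50), 2))
```

The rapid decay of the autocorrelation at long lags is the high-frequency behavior, complementary to flicker noise, referred to in the text.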
The main conclusion is that, despite the fact that many model selection criteria have good asymptotic behavior under the assumptions that are explicit when deriving them, there is no ``perfect model selection'' criterion, and there is a need to compare the different methods in the finite-sample case. Following this philosophy, we compare in this work the results of the AIC \citep[``An Information Criterion'';][]{Akaike:1974a}, the BIC \citep[``Bayesian Information Criterion'';][]{Schwarz:1978a}, the DIC \citep[``Deviance Information Criterion'';][]{Spiegelhalter:2002a} and the DIC$^{\rm{A}}$, a modified version of DIC with a proposal for bias-correction \citep{Ando:2012a}. \section{Lightcurve analysis} From the initial ten comparison stars, only seven were used to correct for systematic effects. One star was eliminated on the grounds of having significantly less flux than the rest and the other two due to not having the whole spectral range of interest recorded in the CCD. Given the seven comparison stars, we applied PCA to the mean-subtracted time series in order to obtain an estimate of the perturbation signals. We now describe the construction and analysis of the white light transit light curve and the light curves for 20 wavelength bins. \subsection{White light transit light curve} \label{ssec:wl} In order to obtain the white light transit light curve of WASP-6, we summed the signal over the wavelength range $4718$ to $8879$ \AA\ for the target and the comparison stars. Then, we performed $5$-fold cross-validation on the out-of-transit part of the light curve of WASP-6 in order to obtain the optimal number of components to be used in our MCMC algorithm. The result of this cross-validation procedure is shown in Figure \ref{cv_graph_wl}. \begin{figure} \plotone{f4.eps} \caption{Cross-validation error for the prediction of out-of-transit data using different numbers of principal components with the $5$-fold cross-validation procedure we adopted.
Note that the minimum is at $k=7$ (dashed lines indicate the value at the minimum and a value higher by $1\sigma$), but the value at $k=5$ achieves similar error with fewer degrees of freedom. \label{cv_graph_wl}} \end{figure} From the results of our $5$-fold cross-validation procedure one may choose either $k=7$ (the value of the minimum error) or $k=5$, which is less than one standard error away from the value at the minimum. We choose this latter value because it achieves a prediction error similar to the minimum with two fewer parameters. \begin{figure} \plotone{f5.eps} \caption{Power spectral density of the residuals of the fit using white Gaussian noise (see Figure~\ref{white_light_tl} to see the residuals). Note the preference for high power at small frequencies. \label{res_PSD}} \end{figure} Using the first five principal components, we first fitted the model proposed in Equation~\ref{eq1} using a white Gaussian noise model via MCMC with the PyMC Python module \citep{Patil:2010a}. We used wide truncated Gaussian priors\footnote{We denote our truncated Gaussian priors as TruncNorm$(\mu,\sigma^2)$. They are Normal distributions restricted to take values in the range ($0,\infty$), i.e. they are restricted to be positive.} in order to incorporate previous measurements of the transit parameters obtained by \citet{Gillon:2009a} and \citet{Dragomir:2011a}, and the orbital parameters of \citet{Gillon:2009a}. We adopt a quadratic limb darkening law of the form $I(\mu) = I(1)[1-u_1(1-\mu)-u_2(1-\mu)^2]$, where $\mu=\cos(\theta)$ and $\{u_1,u_2\}$ are the limb darkening coefficients.
It is well known that $u_1$ and $u_2$ are strongly correlated \citep{Pal:2008a}, and it has been shown that if we define new coefficients $(w_1,w_2)$ that are related to $(u_1,u_2)$ by $(w_1,w_2) =R(\pi/4)(u_1,u_2)$ where \[ R(\theta) = \left( \begin{array}{cc} \cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\\ \end{array} \right) \] \noindent is a rotation matrix by $\theta$ radians, then $w_1, w_2$ are nearly uncorrelated and transits are mostly sensitive to $w_1$, with $w_2$ essentially constant \citep{Howarth:2011a}. In our MCMC analysis we fix $w_2$ to the (wavelength dependent) value calculated for the stellar parameters of WASP-6 as described in \citet{Sing:2010a}. Five MCMC chains of $10^6$ links each, plus $10^5$ used for burn-in, were used. We checked that every chain converged to similar values and then thinned the MCMC samples by $10^4$ in order to get rid of the auto-correlation between the links. We used the thinned sample as our posterior distribution, using the posterior median as an estimate of each parameter (using the point in the chain with the largest likelihood leads to statistically indistinguishable results). The fit using a white Gaussian model for the noise allows us to investigate the structure of the residuals, which show clear long-range correlations, as is evident in the power spectral density of the residuals plotted in Figure~\ref{res_PSD}. Note that the power is significantly higher at lower frequencies, which suggests that the residuals have long-range correlations. We performed one MCMC fit using a $1/f$-like model and another MCMC fit using an ARMA-like model for the residuals. Note that in order to fit an ARMA($p$,$q$) model with our algorithms, we need to define the orders $p$ and $q$ of the ARMA process. In order to do this, we fitted several ARMA($p$,$q$) models to the residuals of the white Gaussian MCMC fit for different orders $p$ and $q$ using maximum likelihood, and calculated the AIC and BIC of each fit.
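The $(u_1,u_2)\to(w_1,w_2)$ rotation described above is a simple orthogonal change of basis; a minimal sketch (the coefficient values are illustrative, not our fitted values):

```python
import numpy as np

# Sketch: rotate the correlated quadratic limb-darkening coefficients
# (u1, u2) into the nearly uncorrelated pair (w1, w2) via R(pi/4).
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

u = np.array([0.44, 0.20])        # hypothetical (u1, u2)
w = rot(np.pi / 4) @ u            # (w1, w2): transits mostly constrain w1
u_back = rot(-np.pi / 4) @ w      # inverse rotation recovers (u1, u2)

print(np.round(w, 4), np.allclose(u_back, u))
```

Because the rotation is orthogonal, fixing $w_2$ and sampling only $w_1$ loses no information that the transit meaningfully constrains, while decorrelating the sampled parameters.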
In the sense of minimizing these information criteria, the ``best'' ARMA model was an ARMA($2$,$2$) model, so we performed our MCMC algorithms assuming this as the best model for the ARMA case. The results of the MCMC fits assuming a white Gaussian noise model, an ARMA(2,2) noise model and a $1/f$ noise model for the residuals are shown in Figure~\ref{white_light_tl}, and a summary of the values of the information criteria for each of our MCMC fits are shown in Table~\ref{tbl-1}. \begin{deluxetable}{clll} \tabletypesize{\scriptsize} \tablecaption{Values for the different information criteria (IC) for each noise-model considered in our MCMC fits. \label{tbl-1}} \tablewidth{0pt} \tablehead{ \colhead{IC} & \colhead{WG model$^\textrm{a}$} & \colhead{ARMA$(2,2)$ model$^\textrm{a}$} & \colhead{$1/f$ model$^\textrm{a}$} } \startdata AIC & -1260.59 & -1273.38 & -1833.72 \\ BIC & -1230.46 & -1234.20 & -1801.08 \\ DIC & -1165 & -1202.03 & -1793.60 \\ DIC$^\textrm{A}$ & -1105.20 & -1149.86 & -1760.53\\ \enddata \tablenotetext{a}{Note that each of the noise models has a different number of parameters: the white Gaussian noise model (WG model) has 12 parameters, the ARMA($2$,$2$)-like noise model 16 parameters and the $1/f$-like noise model has 13 parameters.} \end{deluxetable} \begin{figure*} \plotone{f6.eps} \caption{({\em Top}) The circles show the baseline-subtracted lightcurves (i.e., lightcurves with the fitted perturbation signal subtracted) using the different noise models indicated. We also show the corresponding best-fit transit models (dashed line) and the best-fit transit models plus an estimate of the correlated noise component (solid line, only for the two right-most lightcurves). The shaded regions indicate points that were used as out-of-transit data by the $5$-fold cross-validation procedure that selected the number of principal components to use in the fits.
({\em Bottom}) Residuals between the best-fit transit model and the baseline subtracted lightcurves (circles). The solid lines in the two right-most sets of points indicate estimates of the correlated components obtained by projecting the residuals into the best-fit correlated component model (see \S5). The difference between the points and the solid lines (dashed line for the white Gaussian noise case) is the white Gaussian noise component, whose dispersion $\sigma_w$ is indicated for each of the noise models considered and also illustrated with $\pm 1\sigma_w$ bands. \label{white_light_tl}} \end{figure*} \begin{deluxetable*}{clll} \tabletypesize{\scriptsize} \tablecaption{MCMC priors used for the white light transit analysis. \label{tbl-3}} \tablewidth{0pt} \tablehead{ \colhead{Transit parameter} & \colhead{Description} & \colhead{Prior$^\textrm{a}$} & \colhead{Units} } \startdata $R_p/R_s$ & Planet-to-star radius ratio & TruncNorm$(0.14,0.01^2)^b$ & -\\ $t_0$ & Time of mid-transit & TruncNorm$(55473.15,0.01^2)^c$ & MHJD\\ $P$ & Period & TruncNorm$(3.36,0.01^2)^b$ & days\\ $i$ & Inclination & TruncNorm$(1.546,0.017^2)^b$ & Radians\\ $R_s/a$ & Stellar radius to semi-major axis ratio & TruncNorm$(0.09,0.01^2)^b$ & - \\ $w_1$ & Linear limb-darkening coefficient & U$(0,1)$ & - \\ $\sigma_w$ & Standard deviation of the white noise part of the noise model & U$(0,1)$ & mag\\ $\sigma_r$ & Noise parameter for the $1/f$ part of the noise model$^\textrm{d}$ & U$(0,1)$ & mag\\ \enddata \tablenotetext{a}{The TruncNorm$(\mu,\sigma^2)$ distributions are Normal distributions truncated to take values in the range ($0,\infty$), i.e., they are required to be positive. The U$(a,b)$ distributions are uniform distributions between $a$ and $b$.} \tablenotetext{b}{Obtained from the arithmetic mean between the values cited in \cite{Gillon:2009a} and \cite{Dragomir:2011a}.
The variance of the prior covers more than $3\sigma$ around their values.} \tablenotetext{c}{Obtained from the values cited in \cite{Gillon:2009a}. The variance of the prior covers more than $3\sigma$ of their values.} \tablenotetext{d}{Not to be interpreted as the standard-deviation of the $1/f$ part of the noise.} \end{deluxetable*} \begin{deluxetable*}{clll} \tabletypesize{\scriptsize} \tablecaption{WASP-6b transit parameters estimated from the white light transit lightcurve using a $1/f$-like noise model. \label{tbl-2}} \tablewidth{0pt} \tablehead{ \colhead{Transit parameter} & \colhead{Description} & \colhead{Posterior value} & \colhead{Units} } \startdata $R_p/R_s$ & Planet-to-star radius ratio & $0.1404^{+0.0010}_{-0.0010}$ & -\\ $t_0$ & Time of mid-transit & $55473.15365^{+0.00016}_{-0.00016}$ & MHJD\\ $P$ & Period & $3.3605^{+0.0098}_{-0.0101}$ & days\\ $i$ & Inclination & $1.5465^{+0.0074}_{-0.0055}$ & Radians\\ $R_s/a$ & Stellar radius to semi-major axis ratio & $0.0932^{+0.0015}_{-0.0015}$ & - \\ $w_1$ & limb-darkening coefficient (see \S~\ref{ssec:wl}) & $0.44^{+0.12}_{-0.12}$ & - \\ $\sigma_w$ & Standard deviation of the white noise part of the noise model & $0.1492^{+0.1078}_{-0.1021}$ & mmag\\ $\sigma_r$ & Noise parameter for the $1/f$ part of the noise model$^\textrm{a}$ & $3.26^{+0.03}_{-0.50}$ & mmag\\ \enddata \tablenotetext{a}{Not to be interpreted as the standard-deviation of the $1/f$ part of the noise.} \label{tab:wl_transit} \end{deluxetable*} It is important to note that the residuals shown in Figure~\ref{white_light_tl} are the signal left over after subtracting the deterministic part of the model only (denoted by $\mathbf{g}(\theta, \alpha)$ in equation~\ref{eq:lik}). Therefore, they still contain, in the case of the ARMA(2,2) and $1/f$ noise models, a correlated stochastic component summed with a white noise component.
As opposed to deterministic components, the stochastic components cannot just be predicted given the times $t_i$ of the observations, as we only know the distribution of expected values once we know the parameters ($\{\theta_1,\theta_2,\phi_1,\phi_2,\sigma_w\}$ for ARMA(2,2), $\{\sigma_r,\sigma_w\}$ for flicker noise and $\sigma_w$ for white Gaussian noise). But even though we cannot plot a unique expected trend given the best-fit parameters for the correlated noise models, we can apply filters to the residuals that project them into the best-fit model, or viewed differently, we can filter out the expected white Gaussian noise component leaving just the correlated part. Such filters then allow us to build {\em estimates} of the particular realization of a given process that is present in our residuals. For the ARMA(2,2) and $1/f$ case we plot in the bottom panel of Figure~\ref{white_light_tl} estimates of the correlated components as solid lines through the residuals\footnote{For the $1/f$ model we use the whitening filter presented in \citet[][see \S3.4]{Carter:2009a}, while for the ARMA(2,2) process we use prediction equations in the time domain \citep[][see \S5.1]{TSA91}.}. It is the difference between these lines and the residual points that constitutes the remaining white Gaussian noise component with dispersion $\sigma_w$ indicated in the residuals panel. It is informative to discuss the different values of the $\sigma_w$ parameter inferred for each of the models we consider. For the white Gaussian noise model, the value of this parameter is $\sigma_w=0.55^{+0.05}_{-0.04}$ mmag, which is an order of magnitude higher than the underlying Poisson noise ($\approx$ 0.06 mmag, see \S~\ref{sec:obs}). The same goes for the ARMA-like noise model fit, which has a value of this parameter of $\sigma_w=0.49^{+0.04}_{-0.04}$ mmag.
Finally, for the $1/f$-like noise model the value of this parameter is $\sigma_w=0.15^{+0.11}_{-0.10}$ mmag, which is just $\approx 2.5$ times the Poisson noise limit. Motivated by this result and by the values of the information-theoretic model selection measures quoted in Table~\ref{tbl-1}, we conclude that the preferred model is the one that models the underlying stochastic signal as $1/f$-like noise. We note that as \citet{Carter:2009a} stress in their work, using this model for the residuals increases the uncertainty in the transit parameters, but provides more realistic estimates for them. We select the model parameters fitted using the $1/f$ noise model, which are quoted in Table~\ref{tbl-2}, as the best estimates from now on. These parameters are generally an improvement on previous measurements by \citet{Gillon:2009a} and \citet{Dragomir:2011a}. We close this section by noting that the principal component regression we adopted was able to recover from the periods of high absorption present 0--2 hr after mid-transit (see Figure~\ref{fig:counts}) without leaving a noticeable trace in the final light curve. \subsection{WASP-6b transmission spectrum} The procedures explained in the previous sub-section were replicated for the time series of each of 20 wavelength bins, but now leaving only the planet-to-star radius ratio, the linear limb darkening coefficient $w_1$ and the noise parameters as free parameters (all the other transit parameters were fixed to the values shown in Table \ref{tbl-2} while the values for $w_2$ are calculated as indicated in \S~\ref{ssec:wl} and are indicated in Table~\ref{tab:transits}). Priors were the same as in the white light analysis for the parameters $w_1$, $\sigma_r$ and $\sigma_w$, and the MCMC chains were set up similarly except that a thinning value of $10^3$ was used. The prior for $(R_{pl}/R_*)$ was set to TruncNorm$(0.1404,0.01^2)$, i.e.\ we set the mean to the posterior value of our white light analysis.
The wavelength bins were chosen to be $\approx$ 200 \AA\ wide, with boundaries that lie in the pseudo-continuum of the WASP-6 spectrum, as boundaries in steep parts of the spectrum such as spectral lines would in principle maximize redistribution of flux between adjacent bins under the changing seeing conditions which set the spectral resolution in our setup. For a given spectral bin, the number of principal components was selected separately because different systematics may be dependent on wavelength, and therefore the number of principal components needed may change. In practice, no more than one principal component was added or subtracted in each wavelength bin when compared to the five components used for the white light curve. In all of them, however, the same noise model was used, namely the $1/f$-like noise model. Figure~\ref{fig:transits} shows the baseline-subtracted data along with the best-fit transit model at different wavelengths, and Table~\ref{tab:transits} tabulates the transit parameters from the MCMC analysis for each wavelength bin. The values of $R_{\rm pl}/R_*$ as a function of wavelength constitute our measured transmission spectrum which is shown in Figure~\ref{fig:transpec_dataonly}; the typical uncertainty in $R_{\rm pl}/R_*$ is $\approx 0.8$\%, and the inferred $\sigma_w$ values are typically $\approx$ 1--3 times the Poisson limit in each wavelength bin, for which a typical value is 0.25 mmag. \begin{figure*} \plottwo{f7a.eps}{f7b.eps} \caption{ ({\em Left}) Transits as observed in different wavelength channels along with the best-fit transit signal plus the stochastic $1/f$ noise signal. The obvious outlier close to $t-t_0 =0.05$ at the first, bluest, wavelength bin was not included in the fit. ({\em Right}) Residuals between the best-fit transit model and the baseline subtracted lightcurves for each of the wavelength channels (circles). The solid lines indicate estimates of the correlated $1/f$ component obtained as described in \S5.
The best fit parameters of the $1/f$ component are indicated over each set of residuals. \label{fig:transits}} \end{figure*} \begin{figure*} \plotone{f8.eps} \caption{Transmission spectrum of WASP-6b measured with IMACS. \label{fig:transpec_dataonly}} \end{figure*} \subsection{Limits on the contribution of unocculted stellar spots} As pointed out in several works \citep[e.g.,][]{Pont:2008:hd189,Sing:2011:hd189}, stellar spots -- both occulted and unocculted during transit -- can affect the transmission spectrum. In our transit light curve we see no significant deviations that could be attributed to an occulted starspot, so in what follows we estimate the potential signal induced in the transmission spectrum by unocculted stellar spots. Stellar spots can be modelled as regions on the surface of the star that have a lower effective temperature than the photosphere. Given that WASP-6 is a G star, we can use the Sun as a proxy to infer spot properties. Sunspots can be characterized as having a temperature difference with the photosphere of $\Delta T \approx -500$ K \citep[][see \S2.2]{Lagrange:2010:spots}. This is an effective value that represents a good average for the different values of $\Delta T$ in the umbral and penumbral regions. Given a fraction of the stellar surface $f_s$ covered by spots characterized by temperature $T+\Delta T$, the total brightness of the star will be changed by a factor $1 + f(\lambda) = 1 + f_s (I_\lambda(T+\Delta T, \theta) / I_\lambda(T,\theta) - 1)$, where $I_\lambda(T, \theta)$ is the surface brightness of a star with effective temperature $T$ and other stellar parameters given by $\theta = (\log g, Z,\ldots)$.
If the fractional change in flux $\epsilon$ caused by spots at a reference wavelength $\lambda_0$ can be measured then $f_s$ can be inferred to be $f_s = \epsilon / ( I_{\lambda_0}(T+\Delta T,\theta)/I_{\lambda_0}(T,\theta) - 1)$ and then we can write $f(\lambda) = \epsilon (I_\lambda(T+\Delta T, \theta) / I_\lambda(T,\theta) - 1) (I_{\lambda_0}(T+\Delta T, \theta) / I_{\lambda_0}(T,\theta) - 1)^{-1}$ \citep[cf.][eq. 4]{Sing:2011:hd189}. A change in the stellar luminosity due to starspots will have an effect on the measured value of $k = R_p/R_*$, and as the effect is chromatic, it will induce an effect in the transmission spectrum. The decrease of flux during transit with respect to the out of transit flux $F_0$ is given by $ (\Delta F/ F_0) = k^2$ (neglecting any emission from the planet). If $F_0$ is changed by starspots by a fractional amount $f(\lambda)$ we have $\delta (\Delta F / F_0) \approx -(\Delta F / F_0) \delta F_0 / F_0 \equiv -(\Delta F / F_0) f(\lambda) = k^2 f(\lambda) = 2k \delta k$, where we have used $f(\lambda) \equiv \delta F_0 / F_0$. From here we finally get\footnote{This is a special case of the derivation of \citet{Desert:2011:hd189}, namely their case $\alpha=-1$ which corresponds to neglecting changes in brightness of the fraction of the stellar disk that is not affected by spots.} \[ \frac{\delta k}{k} = \frac{-f(\lambda)}{2}. \] We used the method described in \citet{Maxted:2011:w41} to look for periodic variations due to spots in the lightcurves of WASP-6 from the WASP archive \citep{Pollacco:2006:wasp}. Data from 3 observing seasons were analysed independently. The lightcurves typically contain $\sim$4500 observations with a baseline of 200 nights. From the projected equatorial rotation velocity of WASP-6 and its radius \citep{Doyle:2013:wasp_sppars} we estimate that the rotation period is $16 \pm 3$\, days. There are no significant periodic variations in this range in any of the WASP lightcurves.
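The spot-dilution estimate above can be sketched numerically. The sketch below uses Planck functions as a crude stand-in for the Phoenix model intensities actually used in this work (which yield a weaker wavelength dependence); the reference wavelength and band edges are illustrative assumptions:

```python
import numpy as np

# Sketch of delta k / k = -f(lambda)/2 for unocculted spots, using
# blackbody intensities in place of the Phoenix models (T = 5400 K,
# dT = -500 K; eps and lambda_0 are illustrative).
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam_m, T):
    """Blackbody spectral radiance at wavelength lam_m (m), temperature T (K)."""
    return (2 * h * c**2 / lam_m**5) / np.expm1(h * c / (lam_m * kB * T))

T, dT = 5400.0, -500.0
eps, lam0 = -0.004, 550e-9               # |eps| < 4 mmag, reference ~V band

lam = np.linspace(470e-9, 890e-9, 100)   # approximate observed range
ratio = planck(lam, T + dT) / planck(lam, T) - 1.0
ratio0 = planck(lam0, T + dT) / planck(lam0, T) - 1.0
f_lam = eps * ratio / ratio0             # f(lambda)
dk_over_k = -f_lam / 2.0                 # spot-induced delta k / k

print(np.ptp(dk_over_k))                 # wavelength dependence across the band
```

Only the wavelength dependence of $\delta k/k$ matters for the shape of the transmission spectrum; a constant offset is absorbed into the overall radius ratio.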
To estimate the false alarm probability of any peaks in the periodogram we use a bootstrap Monte Carlo method. The results of this analysis can also be used to estimate an upper limit of 2 mmag to the amplitude of any periodic variation in these lightcurves. Therefore, $\epsilon$ is constrained to be less than the implied peak-to-peak amplitude, $|\epsilon| < 4$ mmag. While this constraint is valid only at the time the discovery light curve was taken, lacking any other constraints we will take this value as our upper limit. In order to estimate $f(\lambda)$ we make use of the high resolution Phoenix synthetic stellar spectra computed by \citet{Husser:2013}. We assume $T=5400$ K, $\Delta T = -500$ K, and other stellar parameters to be the closest available in the models grid to those presented in \citet{Gillon:2009a}. The resulting expected maximum value for $\delta k / k$ given the constraints on the rotational modulation afforded by the WASP-6 discovery light curve is presented in Figure~\ref{fig:spots}. As can be seen, the change in $\delta k / k$ induced by starspots over the wavelength range of our spectrum is expected to be $< 5\times10^{-4}$. This is more than one order of magnitude less than the change in $\delta k / k$ we infer from our observations (see Figure~\ref{fig:transpec_dataonly}), and thus we conclude that the observed transmission spectrum is not produced by unocculted spots. \begin{figure} \plotone{f9.eps} \caption{Predicted fractional change in $k=(R_p/R_*)$ due to stellar spots that produce a rotation amplitude of $\epsilon=-4$ mmag in the $V$ band. The spots are assumed to have a temperature lower than that of the photosphere by $\Delta T=-500$ K. \label{fig:spots}} \end{figure} \section{The Transmission Spectrum: Analysis} \label{sec:discussion} The main feature of the transmission spectrum shown in Figure~\ref{fig:transpec_dataonly} is a general sloping trend with $R_p/R_*$ becoming smaller for longer wavelengths.
The general trend is broken by the two redmost datapoints, which could indicate the presence of a source of opacity in that region, but the error bars of the extreme points are large, as the measurements there are naturally more uncertain because the spectrograph efficiency drops rapidly at the red end of the spectrum and this region of the spectrum can be badly affected by variations in night sky emission and telluric absorption. There are no indications of the broad features expected at the resonance doublets of Na I at 589.4 nm or K I at 767 nm. To make the statements above quantitative, we compare our measured transmission spectrum with the clear atmosphere models computed by \citet{Fortney:2010a}. We scale the models that have a surface gravity of $g=10$ m/s$^2$ to match the measured surface gravity of WASP-6b \citep[$g=8.71$ m/s$^2$,][]{Gillon:2009a} by scaling the spectral features from the base level by $10/g$. We do not have an absolute reference to be able to place the 10 bar level, such as could be provided by observations at infra-red wavelengths, which in the \citet{Fortney:2010a} models is set to 1.25 $R_J$, and so we fit for an overall offset in the $y$-axis. In other words, our measured transmission spectrum will be able to discriminate on the shape of the models but will provide no independent information on the absolute height in the atmosphere where the features are formed. Given that the equilibrium planet temperature assuming no albedo and full redistribution between the day and night sides is $T_{\rm eff} = 1194^{+58}_{-57}$ K \citep{Gillon:2009a} we will compare our measurements with the $T=1000$ and $T=1500$ K models of \citet{Fortney:2010a}. The $T=1000$ K model has Na and K as the main absorbers, while the $T=1500$ K model also displays the effects of partially condensed TiO and VO, resulting in a very different transmission spectrum.
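The gravity scaling and single-offset fit described above can be sketched in a few lines; the helper names and toy numbers below are ours, not from the original analysis.

```python
import numpy as np

def scale_to_gravity(model_k, base_level, g_planet, g_model=10.0):
    """Stretch the model's spectral features away from the base level by
    g_model/g_planet, since feature amplitudes scale with the scale
    height H = k_B T / (mu g), i.e. as 1/g."""
    return base_level + (model_k - base_level) * (g_model / g_planet)

def fit_offset(data_k, data_err, model_k):
    """Weighted least-squares estimate of the single free parameter of
    the comparison: a constant vertical offset in R_p/R_*."""
    w = 1.0 / np.asarray(data_err) ** 2
    return np.sum(w * (np.asarray(data_k) - np.asarray(model_k))) / np.sum(w)

# Toy example: a model scaled from g = 10 to g = 8.71 m/s^2, and a
# synthetic dataset sitting 0.005 above the scaled model.
model = np.array([0.140, 0.146, 0.143, 0.141])
scaled = scale_to_gravity(model, base_level=0.140, g_planet=8.71)
offset = fit_offset(scaled + 0.005, np.full(4, 1e-3), scaled)
print(offset)
```

With equal error bars the best-fit offset reduces to the mean data-minus-model difference, as expected.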
In addition to clear atmosphere models we fit our data to a pure scattering spectrum as given by \citet{lecavelier:2008:rayleigh} for a scattering cross section $\sigma = \sigma_0(\lambda/\lambda_0)^\alpha$, \begin{equation} \frac{d\,R_p}{d\,\ln \lambda} = \alpha H_c = \frac{\alpha k_B T}{\mu g}, \end{equation} \noindent where $H_c$ is the scale-height of the particles producing the scattering, which we assume to be equal to the gaseous scale height $H=k_B T / \mu g$, although condensates producing scattering can have smaller scale heights than the gas, $H_c \sim H/3$, unless they are very well mixed vertically \citep{Fortney:2005:condensates}. In the case of pure scattering we will fit for two parameters to match our observed spectrum: the combination $\xi = \alpha T$, and a zero-point offset. We can then interpret the value of $\xi$ assuming Rayleigh scattering ($\alpha=-4$) or the expected values of the equilibrium temperature for the atmosphere of WASP-6b. Along with the transmission spectrum, Figure~\ref{fig:transpec} shows the results of fitting the models to our observed transmission spectrum. It is clear to the eye that the best fit is given by the pure scattering model, with the clear atmosphere models giving considerably worse fits. The clear atmosphere models fail to give a better match to the spectrum due to the lack in the latter of evidence for the broad features expected around Na and K. The AIC for the scattering model, assuming Gaussian noise given by the known error bars, gives $-115.3$, while the values for the $T=1000$ and $T=1500$ clear atmosphere models are $-97.2$ and $-90.9$ respectively, providing a very significant preference for the scattering model.
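The strength of this preference can be quantified with the standard Akaike relative likelihoods, $\exp(-\Delta\mathrm{AIC}/2)$; a quick sketch using the AIC values quoted above:

```python
import math

# AIC values quoted in the text (lower is better).
aic = {
    "pure scattering": -115.3,
    "clear atmosphere, T=1000 K": -97.2,
    "clear atmosphere, T=1500 K": -90.9,
}

best = min(aic.values())
for name, value in sorted(aic.items(), key=lambda kv: kv[1]):
    delta = value - best
    # Relative likelihood exp(-dAIC/2): large dAIC means essentially
    # no support relative to the best model.
    print(f"{name}: dAIC = {delta:4.1f}, rel. likelihood = {math.exp(-0.5 * delta):.1e}")
```

The clear atmosphere models carry relative likelihoods of order $10^{-4}$ and $10^{-6}$, which is what "very significant preference" means in practice.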
A $\chi^2$ analysis gives a $p$-value of 0.04 for the pure scattering model ($\chi^2 = 30$ for 18 degrees of freedom), while the probabilities for the data being produced by either of the clear atmosphere models are exceedingly small ($\chi^2 =80$ and $106$ with 19 degrees of freedom for the $T=1000$ and 1500 models, respectively, giving $p$-values $<10^{-8}$ for both). Based on the numbers above, only the scattering model is viable, but these analyses ignore potential correlation of the errors in the wavelength dimension. Looking at the light curves in Figure~\ref{fig:transits} one can see that in some cases features in the light curves repeat between adjacent wavelength channels. In order to assess the potential impact of correlations in the wavelength direction, we compute the partial autocorrelation function (PACF) for the residuals in the wavelength dimension. Denoting the residuals of the transit fits shown in Figure~\ref{fig:transits} by $r_{il}$, where $i$ indexes time and $l$ the wavelength channel, we compute the PACF for the 20 vectors $(r_{1l},r_{2l},\ldots,r_{nl})$ with $l=1,\ldots,20$ and $n=91$. The PACF has one significant component at lag 1, which shows that the residuals indeed have correlations with the adjacent channels that are suggestive of an AR(1) correlation structure\footnote{AR(p) denotes an ARMA(p,0) process}. This does not imply that the $(R_p/R_*)_\lambda$ values will necessarily have such correlation, but we should check how the models fare when including such potential correlation structure in the fits. We then fitted all three models including an AR(1) component, to see if it gave a significantly better model as gauged by the information criteria listed in \S~\ref{ssec:model_sel}. The scattering model does not need an additional AR(1) component, while for the clear atmosphere models including correlation gives a significantly better fit.
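For reference, the $p$-values quoted at the start of this comparison follow directly from the $\chi^2$ survival function; a minimal sketch using SciPy:

```python
from scipy.stats import chi2

# (chi^2, degrees of freedom) quoted in the text for each model.
fits = {
    "pure scattering": (30.0, 18),
    "clear atmosphere, T=1000 K": (80.0, 19),
    "clear atmosphere, T=1500 K": (106.0, 19),
}

for name, (chisq, dof) in fits.items():
    # Survival function sf = P(chi^2_dof >= chisq), i.e. the p-value.
    print(f"{name}: chi2/dof = {chisq:.0f}/{dof}, p = {chi2.sf(chisq, dof):.1e}")
```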
This should come as no surprise, as the clear atmosphere models give a poor fit to start with, and including an extra AR(1) component can effectively model some of the residual structure. But even after accounting for potential correlation structure in them, the scattering model is significantly better than the clear atmosphere models. We conclude therefore that our measured transmission spectrum is most consistent with a featureless sloped spectrum and does not present significant evidence for the features predicted by clear atmosphere models even if trying to account for the differences between the model and the observations with correlated errors with short lags between the wavelength channels as suggested by the residuals in the light curves. The best-fit value for $\xi$ is $\xi = -10670 \pm 3015$. If we fix the temperature to the equilibrium value given by \citet{Gillon:2009a}, then we would infer $\alpha = -9 \pm 2.5$ and, inversely, when assuming Rayleigh scattering we would infer a temperature of $T=2667\pm750$ K. The inferred values for $\alpha$ and $T$ are consistent within 2$\sigma$ from the values for Rayleigh scattering and the equilibrium temperature $T=1194^{+58}_{-57}$ given by \citet{Gillon:2009a}, but the uncertainties are too large to allow any further conclusions, especially when considering the additional uncertainty in the scale height assigned to the material responsible for the scattering. \begin{figure*} \plotone{f10.eps} \caption{Transmission spectrum of WASP-6b along with various models. Black dots with error bars indicate our measurements, while blue squares indicate the binned model of \cite{Fortney:2010a} with $T_{\rm{eq}} = 1500$ K and red diamonds indicate the binned model using $T_{\rm{eq}} = 1000$ K. The green line and triangles indicate the best-fit line for a scattering model, which is the favored model in this case.
\label{fig:transpec}} \end{figure*} \section{Discussion and Conclusions} \label{sec:conclusions} We have measured the optical transmission spectrum for WASP-6b in the range $\approx$ 480 to 860 nm via differential spectrophotometry using 7 comparison stars with the IMACS spectrograph on Magellan. By modeling the systematic effects via a principal component analysis of the available comparison stars and a white-noise model for the noise, we are able to achieve light curves with residuals of order $\approx 0.8$ mmag in 20 nm channels per 140 sec exposure, and $\approx 0.5$ mmag in the summed (white-light) light curve. In order to take into account possible remaining trends particular to the target star and the correlated structure of the noise we probe the appropriateness of both short (ARMA(p,q)) and long ($1/f$ ``flicker'' noise) stochastic processes, making use of well established information criteria to select the model most appropriate for our particular observations, which turned out to be the $1/f$ model. We believe it is fundamental to carry out a residual analysis for each particular observation. Lacking a detailed physical model for a given correlation structure it should be the data that select which is the most appropriate for a given observation. With the $1/f$ model the inferred white noise components are $\approx$ 1--3 times the expected Poisson shot noise ($\sigma_w=0.16$ mmag per 140 sec exposure for the white light curve and $\sim 0.6$ mmag per 140 sec exposure for the 20 nm channels). The measured spectrum has a general trend of decreasing planetary size with wavelength, and does not display any evident additional features. We fit our transmission spectrum with three different models: two clear transmission spectra from \citet{Fortney:2010a} and a spectrum caused by pure scattering. Our main conclusion is that the transmission spectrum of WASP-6b is most consistent with that expected from a scattering process that is more efficient in the blue.
In addition, the spectrum does not show the broad features due to alkali metals expected in clear atmosphere models, which give a significantly less satisfactory description of our data, even when allowing for the errors to be correlated between different wavelength bins. We conclude that the spectrum is most consistent with a featureless spectrum that can be produced by scattering. The potentially prominent role of condensates or hazes in determining the transmission spectra of exoplanets has been apparent from the very first measurement \citep{Charbonneau:2002:hd209}, and our transmission spectrum of WASP-6b is in line with what seems to be a building trend for transmission spectra with muted features in the optical. Higher resolution observations around the alkali lines for WASP-6b will be valuable to see if they remain at detectable levels over the mechanism that is veiling the very broad lines that are expected for clear atmospheres. We note that the expected equilibrium temperature for WASP-6b is similar to that of HD~189733b, so it may be the case that a similar obscurer is acting in both systems. Our work adds a new instrument (IMACS) to the rapidly increasing set of ground-based facilities that have been successfully used to probe exoplanetary atmospheres. The constraints that can be obtained using ground-based facilities are a powerful complement to those possible from space-based facilities and allow us to access a much broader pool of systems more representative of the typical brightness of hosts discovered by ground-based transit surveys. An interesting goal enabled by this capability will be to probe the transmission spectra of gas giants with fairly similar surface gravities as a function of equilibrium temperatures. \acknowledgements AJ acknowledges support from FONDECYT project 1130857, BASAL CATA PFB-06, and the Millenium Science Initiative, Chilean Ministry of Economy (Nuclei: P10-022-F, P07-021-F).
NE is supported by CONICYT-PCHA/Doctorado Nacional and MR is supported by FONDECYT postdoctoral fellowship 3120097. D.K.S. acknowledges support from STFC consolidated grant ST/J0016/1. J.-M.D. acknowledges funding from NASA through the Sagan Exoplanet Fellowship program administered by the NASA Exoplanet Science Institute (NExScI). A.H.M.J. Triaud is a Swiss National Science Foundation fellow under grant number PBGEP2-145594. \bibliographystyle{apj}
\section{Introduction} \label{sec:Introduction} The possibility of understanding gauge symmetry enhancement from a Double Field Theory (DFT) perspective was addressed in various recent articles \cite{agimnr,Cagnacci:2017ulc,aamr}. The discussion was done in the context of the bosonic string since, even if ill defined, it is the simplest example in several aspects and allows one to identify the relevant ingredients. In the present note we follow similar steps as in \cite{aamr} in order to describe the gauge symmetry enhancement (and breaking) in the heterotic string from a DFT-like formulation. Gauge symmetry enhancement is a very stringy phenomenon associated to the fact that the string is an extended object and, therefore, it can wind around non-contractible cycles. String states are thus characterized by a stringy quantum number, the so-called winding number, counting the number of times that the cycle is wrapped by the string. The exchange of winding and momentum states (accompanied by a transformation of moduli fields) leads to T-duality invariance, a genuine stringy feature. At certain moduli points (fixed points of T-duality transformations) vector boson states in some combinations of windings and momenta become massless and give rise to enhanced gauge symmetries (see for instance \cite{nsw,Giveon:1994fu}). Of course, the effective low energy theory, where massive states are neglected, can be described by a usual gauge field theory Lagrangian, containing gravity, with no reference to any windings. An intriguing aspect is that this field theory somehow encodes information about stringy effects. Moreover, even if gauge symmetry breaking is achieved as usual, with some scalar fields acquiring vevs, this higgsing process must encode information about moduli away from the fixed point. Interestingly enough, this effective theory close to self-dual points, originally derived for the bosonic string, can be embedded \cite{aamr} into a DFT-like formulation.
In DFT (we will be more precise below) the internal configuration space includes, besides the usual space coordinates dual to KK momenta, new coordinates dual to winding states and therefore, coordinates are doubled. This DFT rewriting allows one to highlight the stringy aspects of these gauge theories. Actually, in a generalized Scherk-Schwarz \cite{ss,effective} compactification of this DFT the fluxes, computed from an internal vielbein depending on doubled coordinates, appear to depend on moduli and become the structure constants of the enhanced group at fixed points. We show below that this rewriting also works for the bosonic sector of a toroidally compactified heterotic string. Moreover, we show that by invoking supersymmetry, a corresponding fermionic sector can also be introduced. In Section \ref{sec:DFT rewriting} we present a brief discussion of symmetry enhancement and show the DFT rewriting of the heterotic string effective action close to or at the enhancing points. It is also shown how breaking of gauge symmetry is encoded into the moduli dependence of fluxes. A simple illustration for the case of circle compactification is provided. Ideas presented in \cite{aamr} are recurrently used throughout the article. The introduction of fermions is discussed in Section \ref{sec:fermions}. In particular we show that if the gaugings in shift matrices of gauged supergravities, associated with fermionic mass terms, are replaced by Scherk-Schwarz (moduli dependent) fluxes, the masses of fermions are in correspondence with their bosonic partners, as expected from supersymmetry. Several details are presented in the Appendices. In Appendix \ref{sec:Heterotic DFT} a quick introduction to DFT and generalized Scherk-Schwarz like compactification is provided, with emphasis on the heterotic case, where the ingredients needed in our construction are highlighted.
For a more complete introduction to DFT we provide some original references in \cite{orDFT} and refer the reader to some reviews \cite{reviewamn,gm,reviews} (where a more extensive list of references can be found). In Appendix \ref{sec:Heterotic string basics} a brief account of the heterotic string features needed for our discussion is presented. Concluding remarks and a brief outlook are presented in Section \ref{sec:Summary and outlook}. \section{Heterotic Gauge symmetry enhancement and DFT rewriting} \label{sec:DFT rewriting} Toroidal compactification of the $SO(32)$ (or $E_8\times E_8$) heterotic string to $d$ space-time dimensions leads to a generic gauge group \begin{equation} G_L\times U(1)^{10-d}_R \end{equation} where the left group $G_L$ is generically a product of non-abelian and abelian gauge groups. The rank of $G_L$ is $r_L=16+10-d=26-d$, originating from the $16$ Cartan generators of the ten dimensional gauge group plus the $r=10-d$ vector bosons coming from left combinations of the KK reductions of the metric and the antisymmetric tensor. Different gauge groups appear when moving along the moduli space. At generic points in moduli space $G_L= U(1)^{26-d}_L$ while a point of maximum enhancement leads to $G_L= SO(52-2d)$ for the $SO(32)$ string case. We present some basic details in Appendix \ref{sec:Heterotic string basics}. Let $n=n_c+r_L=\dim{G}_L$ be the dimension of $G_L$ at some moduli point, with $n_c$ denoting the number of charged generators. The effective low energy theory will thus be a $G_L\times U(1)^{10-d}_R$ gauge theory coupled to gravity and the Kalb-Ramond antisymmetric tensor field in $d$ dimensions. There are also $(n_c+26-d)(10-d)$ scalars. Thus, the counting of degrees of freedom leads to: $d^2$ corresponding to the graviton plus $B$ field, $n_c+36-2d$ vectors from $G_L\times U(1)^{10-d}_R$ and $(n_c+26-d)(10-d)$ scalars.
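As a cross-check of this bookkeeping, the counting can be verified to match the coset dimension invoked below for any $d$ and $n_c$ (a small numerical sketch; the helper names are ours):

```python
def effective_dof(d, n_c):
    """d^2 (graviton + B) + d*(n_c + 36 - 2d) vectors + the scalars."""
    return d**2 + d * (n_c + 36 - 2 * d) + (n_c + 26 - d) * (10 - d)

def coset_dim(d, n_c):
    """dim O(p, q)/(O(p) x O(q)) = p*q, with p = d + n and q = d + r."""
    n = n_c + 26 - d          # dim G_L
    r = 10 - d                # number of right-moving U(1)s
    return (d + n) * (d + r)

# The identity holds for any d and n_c:
for d in range(1, 10):
    for n_c in range(0, 60):
        assert effective_dof(d, n_c) == coset_dim(d, n_c)
print("degree-of-freedom count matches the coset dimension")
```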
Recall that the number of scalar fields corresponds to $(26-d)(10-d)$ moduli plus $n_c(10-d)$ extra scalars that should become massive at generic points where the broken gauge group is $U(1)^{26-d}_L \times U(1)_R^{10-d}$. It is interesting to notice that the total number of degrees of freedom coincides with \begin{equation} \dim \frac{O(d+n,d+r)}{O(d+n)\times O(d+r)}=d^2 + d(n_c+36-2d)+(n_c+26-d)(10-d). \label{dftembedding} \end{equation} Indeed, this coset-like writing provides a clue of how to express the effective theory in a DFT-like form as discussed in Appendix \ref{sec:Heterotic DFT}. Following similar steps as presented in \cite{agimnr,aamr} for the bosonic string case, we propose an expression for such an action and then discuss its specific features. Namely, \begin{eqnarray} \nonumber S_{eff}&=& \frac{1}{2\kappa_d^2}\int d^dx\sqrt{g}e^{-2\varphi} \left[ {\cal R}+4\partial^\mu\varphi\partial_\mu\varphi-\frac1{12}H_{\mu\nu\rho} H^{\mu\nu\rho} \right. \\\label{actionDFTeff} &&\left.-\frac 18{\cal H}_{AB} F^{A\mu\nu} F^B_{\mu\nu} +\frac 18 (D_\mu {\cal H})_{AB} (D^\mu {\cal H})^{AB}-V \right]. \end{eqnarray} Here \begin{eqnarray} \label{scalarpotential} V=-\frac 1{12}f_{AB}{}^Kf_{LC}{}^D\left( {\cal H}^{AL}{\cal H}^{BC}{\cal H}_{KD} - 3\, {\cal H}^{AL} \eta^{BC} \eta_{KD} + 2 \, \eta^{AL} \eta^{BC} \eta_{KD} \right) - \Lambda \end{eqnarray} is a scalar potential where the last two terms are just constants. The scalars parametrize the coset $\frac{O(n,r)}{O(n)\times O(r)}$ of dimension $(n_c+26-d)(10-d)$. The indices can be conveniently split in an L-R basis (named a $\cal C$ base) as $A=(a,\hat{I})$, where the index $a=1,\dots, r_L, r_L+1,\dots, r_L+n_c=n=\dim{G}_L$ runs over the left group $G_{L}$. In addition the $\hat{I}=1,\dots r$ index corresponds to the Right $U(1)^r$ group.
The index contractions are performed with $\eta^{AB}$, the $O(r_L+n_c, r)$ invariant metric \begin{equation} \label{etam} \eta^{AB}=\begin{pmatrix} 1_{r_L+n_c} & 0 \\ 0 & -1_r \end{pmatrix}. \end{equation} ${\cal H}_{AB}$ is the (so-called) internal generalized metric encoding information about scalar fields. ${\cal R}$ is the $d$-dimensional Ricci scalar and $F_{\mu\nu}^A$ and $H_{\mu\nu\rho}$ \begin{eqnarray} \label{FHDFT} F^B & = & d A^B - \frac{1}{2\sqrt{2}}f_{CD}{}^B A^C \wedge A^D \nonumber \\ H & = & d B + F^C \wedge A_C- \frac{1}{3!\sqrt{2}} f_{ABC}A^A\wedge A^B \wedge A^C, \end{eqnarray} are the gauge field and $B$ field strengths. The covariant derivative of the scalars is \begin{equation} (D_\mu {\cal H})_{AB}=(\partial_\mu {\cal H})_{AB}+ \frac{1}{\sqrt{2}}f^K{}_{LA} A^L_\mu {\cal H}_{KB} + \frac{1}{\sqrt{2}}f^K{}_{LB} A^L_\mu{\cal H}_{AK}\, . \label{dftcovder} \end{equation} Finally, the $f_{ABC}= \eta_{AK}f^K{}_{BC}$ are completely antisymmetric constants. Interestingly enough, this action can be interpreted as a generalized Scherk-Schwarz reduction of a DFT-like action, as we briefly sketch in Appendix \ref{sec:Heterotic DFT}, the constants $f^K{}_{BC}$ being the generalized fluxes of the compactification\footnote{Recall that other kinds of fluxes like $f_A$ could be present \cite{effective, reviewamn,Dibitetto:2012rk} as shown in Appendix \ref{sec:Heterotic DFT}. Here we set them to zero since they are not relevant for our discussion.}. There are $\frac{(r+n)(r-1+n)(r-2+n)}{3!}$ such fluxes, which must satisfy the quadratic constraints \begin{equation} f_{[AB}{}^{K} f_{C]K}{}^{R}=0\, . \label{quadratic} \end{equation} If indices are allowed to transform then the action is globally invariant under $O(n_c+26-d,10-d)$ and it can be identified with the bosonic (electric) sector of a half-maximal gauged supergravity action \cite{Schon:2006kz, Samtleben:2008pe,Trigiante:2016mnt,Bergshoeff:1985ms}.
In spite of the fact that this huge number of gaugings has been explored in several situations, its physical interpretation deserves further investigation. For instance, if we restricted the indices to $a=1,\dots, r=10-d$, with $r=6$ (i.e. $d=4$), the above counting of fluxes would correspond to the $220$ gaugings of the electric sector of $O(6,6)$ gauged supergravity. These gaugings have been identified (see for instance \cite{Samtleben:2008pe,fluxes, reviewamn}) as geometric and non-geometric fluxes in (orientifold) string compactifications. Here we will restrict to a very specific choice of a subset of all possible fluxes, relevant to our discussion. In order to make contact with the heterotic string effective action we first expand the generalized metric in terms of scalar fluctuations encoded in the scalar matrix $M_{a\hat I}$ with $\dim G_L\times r=(n_c+26-d)(10-d)$ independent degrees of freedom. Namely we write \begin{equation} {\cal H}_{\cal C}^{AB}= \delta^{AB}+ {\cal H}^{(1)AB}+ \frac12{\cal H}^{(2)AB}+\dots \label{scalarfluct} \end{equation} such that matrix elements vanish unless \begin{eqnarray} {\cal H}^{(1)}_{a\hat I}&=& M_{a\hat I},\qquad{\cal H}^{(1)}_{\hat I a}=M^T_{a\hat I}\\\nonumber {\cal H}^{(2)}_{ab}&=& (M M^T)_{ab},\qquad{\cal H}^{(2)}_{\hat I \hat J}= (M^T M)_{\hat I \hat J}\, . \label{scalarorders} \end{eqnarray} Moreover, we make a specific choice for flux values (therefore breaking the global symmetry), by identifying them with the gauge group structure constants. Namely, \begin{equation} f_{ABC}=\begin{cases} f_{abc}\qquad \text{$G_L$ structure constants} \\ 0\qquad \text{otherwise} \end{cases} \label{Gl} \end{equation} where $f_{abc}$ is the subset of all possible fluxes (with Left indices) reproducing the structure constants of the $G_L$ group algebra.
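The flux counting quoted above is quickly verified; `n_fluxes` below is an illustrative helper name:

```python
from math import comb

def n_fluxes(n, r):
    """Count of totally antisymmetric f_ABC over N = n + r index values:
    N(N-1)(N-2)/3!, i.e. the binomial coefficient C(N, 3)."""
    return comb(n + r, 3)

# O(6,6) example: n = r = 6 reproduces the 220 gaugings of the
# electric sector of half-maximal d = 4 gauged supergravity.
print(n_fluxes(6, 6))
```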
When couplings are adequately adjusted the above action reduces to the $G_L\times U(1)^{10-d}_R$ gauge theory action \begin{eqnarray} \label{effectiveSD}\nonumber S&=& \frac{1}{2\kappa_d^2}\int d^dx\sqrt{g}e^{-2\varphi} \left( {\cal R}+4\partial^\mu\varphi\partial_\mu\varphi-\frac1{12}H_{\mu\nu\rho}H^{\mu\nu\rho }\right) \\\nonumber &-&\frac 18 \left(\delta_{ab}{F}^{a\mu\nu}{F}^b_{\mu\nu} + \delta_{\hat I\hat J} {\bar F}^{\hat I\mu\nu} {\bar F}^{\hat J}_{\mu\nu} -\frac 12 g_d\sqrt{\alpha^{\prime}} M_{a\hat I} F^a_{\mu\nu} \bar F^{\hat I\mu\nu }\right) \nonumber\\ &-& D_\mu M_{a\hat I}D_\nu M^{a\hat I} g^{\mu\nu} + {\cal O}(M^4), \label{su2action} \end{eqnarray} reproducing the bosonic sector of the heterotic string low energy theory at a fixed point. Here $a$ labels the Left gauge group (generically non-Abelian) generators with vector bosons $A_{L\mu}^a$ and $\hat I=1,\dots r$ the Abelian group $U(1)_{\hat I}$ associated to vector bosons $A_{R\mu}^{\hat I}$. The scalar fields live in the $({\bf dim\, G_L})_{\hat q =0}$ adjoint representation of $G_L$ and carry zero vector charge ${\bf {\hat q}}=(\hat q_1,\dots,\hat q_r)=0$ with respect to the $U(1)_{R}^r$ right group. Thus, the covariant derivative in \eqref{dftcovder} becomes \begin{equation} D_\mu M_{a\hat I}=\partial_\mu M_{a\hat I}+ g_d f^k{}_{la} A_{L\mu}^l M_{k\hat I}\, , \end{equation} where $g_d=\kappa_{d}\sqrt{\frac{2}{\alpha^{\prime}}}$. Notice that no scalar potential is generated for this choice of structure constants. In the next section we show, in the context of DFT, how gauge symmetry breaking can be achieved by allowing structure constants to depend on moduli, as expected from string theory. \subsection{Gauge symmetry breaking from DFT rewriting} \label{sec:GSBDFT rewriting} In string theory the structure constants can be read out from 3-point vector boson vertex operators.
For the Cartan generators, which we label with the index $\check{I}_{L}=(i=1,\dots r;\, I=1,\dots 16)$, the associated Left vector boson vertex operators are of the form $V(\check{I}_{L})\propto \partial_z y^{\check{I}} \tilde\psi^{\mu}e^{iK.X}$, whereas for charged operators we have $V({l_L})\propto e^{i l_L.y(z)}e^{iK.X}$, where $l_L^{\check{I}}$ are the Left internal momenta defined in \eqref{leftrightmomenta}, and $X^{\mu}(z)$ and $K^{\mu}$ are the space-time coordinate and momentum, respectively. Recall that the internal momenta depend on specific values of KK momenta $p_m$ and winding numbers $\tilde p^m$ as well as on the $\Lambda_{16}$ weights\footnote{For the sake of clarity we concentrate on the $SO(32)$ string but the same conclusions are valid for the $E_8\times E_8$ heterotic case with lattice $\Lambda_{8}\times \Lambda_{8}$.} $P^I$. We encode these values into a ``generalized momentum vector'' \begin{equation} \check {\mathbb P}=(\mathbb P; P^I),\qquad {\rm with}\qquad \mathbb P=(p_m,{\tilde p}^m). \label{generalizedmomentum} \end{equation} Let us denote by $\Phi=(g,B,A)$ a generic moduli point. Since momenta depend on moduli fields we actually have $l_L=l_L^{\check {\mathbb P}}(\Phi)$ and, similarly, $l_R=l_R^{\check {\mathbb P}}(\Phi)$. At specific points $\Phi_0$ in moduli space and for certain values of $\check {\mathbb P}$ such that \begin{equation} l_R^{\check{\mathbb P}}(\Phi_0)=0,\qquad (l_L^{\check {\mathbb P}}(\Phi_0))^2=2 \label{masslessvector} \end{equation} gauge symmetry enhancement occurs \eqref{masslesscharged}. At these points \begin{equation} l_L^{(\check{\mathbb P})}(\Phi_0)\equiv \alpha^{(\check{\mathbb P})} \label{momentumroot} \end{equation} become the roots $\alpha^{(\check{\mathbb P})}$ of the $G_L$ gauge group. Notice that there is an associated root for each of the $n_c$ possible values of ${\check{\mathbb P}}$ satisfying the massless vector condition \eqref{masslessvector}.
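For the simplest illustration, circle compactification with the Wilson lines switched off, the massless vector condition can be checked explicitly. The sketch below assumes $\alpha'=1$ and the common normalization $l_{L,R}=(p/R\pm wR)/\sqrt{2}$, in which the heterotic and bosonic expressions agree at vanishing Wilson line:

```python
import math

def circle_momenta(p, w, radius):
    """Left/right internal momenta on a circle (alpha' = 1), in the
    normalization l_{L,R} = (p/R +/- w R)/sqrt(2)."""
    l_left = (p / radius + w * radius) / math.sqrt(2.0)
    l_right = (p / radius - w * radius) / math.sqrt(2.0)
    return l_left, l_right

# At the self-dual radius R = 1 the states (p, w) = (+-1, +-1) obey
# l_R = 0 and l_L^2 = 2: the charged roots of an enhanced SU(2)_L.
for p, w in [(1, 1), (-1, -1)]:
    l_left, l_right = circle_momenta(p, w, radius=1.0)
    print(p, w, l_right, l_left**2)

# Away from R = 1 the same states pick up l_R != 0, the mass term of
# the broken phase discussed below.
print(circle_momenta(1, 1, radius=1.05)[1])
```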
Three point amplitudes involving Left vector boson vertices can be expressed as \begin{eqnarray} \langle V_L({l_L^{(\check{\mathbb P}_1)}}) V_L({l_L^{(\check{\mathbb P}_2)}}) V_L({l_L^{(\check{\mathbb P}_3)}}) \rangle &\propto & f_{ \alpha^{(\check{\mathbb P}_1)}\alpha^{(\check{\mathbb P}_2)}\alpha^{(\check{\mathbb P}_3)}}(\Phi)\, E(\epsilon_1,K_1; \epsilon_2,K_2;\epsilon_3,K_3)\, , \end{eqnarray} where $E(\epsilon_i,K_i;i=1,2,3)$ is a Lorentz invariant antisymmetric function of polarization vectors $\epsilon_i^{\mu}(K_i)$ and space-time momenta $K_i^{\mu}$. The constants $f_{ \alpha^{(\check{\mathbb P}_1)}\alpha^{(\check{\mathbb P}_2)}\alpha^{(\check{\mathbb P}_3)}}(\Phi)$ are antisymmetric and vanish unless internal momentum is conserved, namely $\check{\mathbb P}_3=-\check{\mathbb P}_1-\check{\mathbb P}_2$. At a self-dual point $\Phi=\Phi_0$ this indicates that the structure constants $f_{ \alpha_1\alpha_2\alpha_3}$ vanish unless $\alpha_1+\alpha_2$ is a root. In this case, we can normalize by setting $f_{ \alpha^{(\check{\mathbb P}_1)}\alpha^{(\check{\mathbb P}_2)}\alpha^{(\check{\mathbb P}_3)}}(\Phi_0)=1$. Momentum conservation also implies that, at the self-dual point, amplitudes mixing Left and Right indices vanish. However, away from the fixed point, the vertices develop a dependence on $l_{R}$, $V_L(l=(l_L,l_{R}))\propto e^{i l_L^{\check {\mathbb P}}(\Phi).y(z)+i\, l_R^{\check {\mathbb P}}(\Phi).\bar y(\bar z)}e^{iK.X}$, and therefore mixing now occurs. In fact, it is found that the only non-vanishing amplitudes are \begin{eqnarray}\nonumber \langle V_L({l^{(\check{\mathbb P})}}) V_L({l^{(-\check{\mathbb P})}}) V_{L}(\check{I})\rangle &\propto & l_{L}^{(\check{\mathbb P})}(\Phi)_{\check{I}};\qquad \langle V_L({l^{(\check{\mathbb P})}}) V_L({l^{(-\check{\mathbb P})}}) V_{R}(\hat{I})\rangle \propto l_{R}^{(\check{\mathbb P})}(\Phi)_{\hat{I}}.
\label{structureconstvertices} \end{eqnarray} Following \cite{aamr}, we propose to identify the amplitude coefficients with some algebra structure constants, even (slightly) away from the fixed point $\Phi_0$. Namely we set \begin{eqnarray} f_{ \alpha^{(\check{\mathbb P})}\alpha^{(-\check{\mathbb P})}\check{I}}(\Phi)&=& l_{L}^{(\check{\mathbb P})}(\Phi)_{\check{I}} ,\qquad f_{ \alpha^{(\check{\mathbb P})}\alpha^{(-\check{\mathbb P})}\hat I}(\Phi)= l_{R}^{(\check{\mathbb P})}(\Phi)_{\hat I} \end{eqnarray} with the other constants being obtained as permutations, and we propose the algebra \begin{eqnarray}\nonumber \big[ E_{\alpha},E_{-\alpha } \big] &=& l_{L}^{(\alpha)\check I} H_{\check I}+l_{R}^{(\alpha)\hat I}\hat H_{\hat I} \qquad \big[ H_{\check I},E_{\alpha } \big] = l_{L}^{(\alpha)\check I} E_{\alpha} \\ \big[ E_{\alpha_1},E_{\alpha_2 } \big] &=&f_{ \alpha_1\alpha_2\alpha_3} E_{\alpha_3 }\qquad \big[ \hat H_{\hat I},E_{\alpha } \big] = l_{R}^{(\alpha)\hat I} E_{\alpha}\, . \label{generalalgebra} \end{eqnarray} We have used $\alpha=\alpha^{(\check{\mathbb P})}$ to alleviate the notation and, as we found above, $f_{ \alpha_1\alpha_2\alpha_3}=1$ if $\alpha_3=\alpha_1+\alpha_2$ is a root and vanishes otherwise. All other commutators vanish. It is easy to show that \eqref{generalalgebra} satisfies the Jacobi identities and, therefore, defines a Lie algebra. Recall that, at the self-dual point where $l_{R}^{(\alpha)}(\Phi_0)=0$ and $f_{ \alpha\, -\alpha\, \check I}= l_{L}^{(\alpha)\check I}=\alpha^{\check I}$, the algebra reduces to the gauge algebra of the $G_L$ group in the Cartan-Weyl basis. For instance $[E_{\alpha},E_{-\alpha}]=\alpha^{\check I}H_{\check I}$ for charged generators $E_{\alpha}$ and Cartan generators $H_{\check I}$, as expected. Interestingly enough, by performing a linear combination of generators it can be shown that there is still an underlying $G_L$ algebra.
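The Jacobi identities can also be checked numerically for the single-root ($SU(2)$-like) sector of the proposed algebra, for arbitrary moduli-dependent $l_L$ and $l_R$. The basis ordering and helper names below are ours:

```python
import itertools
import math
import numpy as np

def structure_constants(l_left, l_right):
    """f[a, b, c] for the basis (E_+, E_-, H, Hhat) of the proposed
    moduli-dependent algebra (single-root sector):
      [E_+, E_-]   = l_left * H + l_right * Hhat
      [H,    E_+-] = +- l_left  * E_+-
      [Hhat, E_+-] = +- l_right * E_+-
    """
    f = np.zeros((4, 4, 4))
    f[0, 1, 2], f[0, 1, 3] = l_left, l_right
    f[2, 0, 0], f[2, 1, 1] = l_left, -l_left
    f[3, 0, 0], f[3, 1, 1] = l_right, -l_right
    return f - f.transpose(1, 0, 2)  # enforce antisymmetry in (a, b)

def max_jacobi_violation(f):
    """Max over a, b, c, d of |f[a,b,e] f[e,c,d] + cyclic in (a,b,c)|."""
    dim = f.shape[0]
    worst = 0.0
    for a, b, c, d in itertools.product(range(dim), repeat=4):
        s = sum(f[a, b, e] * f[e, c, d]
                + f[b, c, e] * f[e, a, d]
                + f[c, a, e] * f[e, b, d] for e in range(dim))
        worst = max(worst, abs(s))
    return worst

# Jacobi holds both at the self-dual point (l_right = 0) and away
# from it (arbitrary l_left, l_right), so the moduli dependence does
# not spoil the Lie algebra structure.
for l_left, l_right in [(math.sqrt(2), 0.0), (1.37, -0.069)]:
    print(max_jacobi_violation(structure_constants(l_left, l_right)))
```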
To visualize the linear combination let us define a double Cartan operator $\mathbb{H}_{A}=\big(H_{\check I} , \hat H_{ \hat{I}} \big)$ and the double (moduli-dependent) momentum $\mathbb{L}^{(\alpha)}_{A}=\big( l_{L}^{(\alpha)\check{I}}, l_{R}^{(\alpha)\hat{I}} \big)$. The algebra \eqref{generalalgebra} can now be written as \begin{equation} \begin{aligned} \big[E_{\alpha},E_{-\alpha} \big]&= \mathbb{L}^{(\alpha)}_{A}\mathbb{H}_{A} \\ \big[\mathbb{H}_{A},E_{\alpha} \big]&= \mathbb{L}^{(\alpha)}_{A} E_{\alpha} \\ \big[E_{\alpha_{1}},E_{\alpha_{2}} \big]&= f_{\alpha_{1}\alpha_{2}\alpha_{3}} E_{\alpha_{3}} \end{aligned}\, . \label{generaloddalgebraleftright} \end{equation} It is worth observing that an $O(r_{L},r_{R})$ transformation can be performed on the double Cartan generator, namely the one given by the inverse of \begin{equation} \begin{pmatrix} \delta_{IJ} && 0 && A_{Jn}\delta_{nm}\\ -A^{J}_{m}\delta_{JI} && \delta_{nm} && G_{nm}-B_{nm} -\frac{R}{2}A^{I}_{n}A^{I}_{m}\\ -A^{J}_{m}\delta_{JI} && \delta_{nm} && -G_{nm}-B_{nm} -\frac{R}{2}A^{I}_{n}A^{I}_{m} \end{pmatrix}\, , \label{isomorphismalgebra} \end{equation} such that $\mathbb{L}^{(\alpha)}$ is mapped to $\check {\mathbb P}$ (see \eqref{generalizedmomentum}) and $\mathbb{H}$ to new double Cartan generators $\mathcal{H}$, leading to \begin{equation} \begin{aligned} \big[E_{\alpha},E_{-\alpha} \big]&= \check{\mathbb{P}}^{(\alpha)}_{A}\mathcal{H}_{A} \\ \big[\mathcal{H}_{A},E_{\alpha} \big]&= \check{\mathbb{P}}^{(\alpha)}_{A} E_{\alpha} \\ \big[E_{\alpha_{1}},E_{\alpha_{2}} \big]&= f_{\alpha_{1}\alpha_{2}\alpha_{3}} E_{\alpha_{3}}. \end{aligned} \label{generaloddalgebra} \end{equation} This final algebra has the same form independently of the moduli values. Furthermore, since the algebras \eqref{generalalgebra} and \eqref{generaloddalgebra} are isomorphic, due to \eqref{isomorphismalgebra}, we conclude that the algebra at the self-dual point is the same as at all other (neighboring) points.
In generalized Scherk-Schwarz-like compactifications of DFT, the generalized fluxes $f_{ABC}$ are defined from the generalized algebra satisfied by the internal frame \eqref{structureconstants}. Let us assume for the moment that a specific choice of frame exists such that these fluxes are the structure constants found in \eqref{generalalgebra}. Once these fluxes are identified, we must replace them in the action \eqref{actionDFTeff}. The result is the action of the theory with broken gauge symmetry, where vector bosons and scalars acquire masses proportional to the structure constants mixing Left and Right indices, namely $ f_{ \alpha^{(\check{\mathbb P})}\alpha^{(-\check{\mathbb P})}\hat I}(\Phi) $. \subsubsection{Goldstone bosons} We start by inspecting the couplings between vectors and scalars arising from the kinetic terms in \eqref{actionDFTeff}. By keeping the first term in the internal metric expansion \eqref{scalarfluct}, ${\cal H}_{\cal C}^{AB}= \delta^{AB}+ {\cal H}^{(1)AB}+\dots$, we find that \begin{eqnarray} D_\mu {\cal H}_{AB} D^\mu {\cal H}^{AB}&\approx& 4\partial_\mu M^{AB}f^K{}_{LA}\delta_{KB}A^{\mu L}=4 \partial_\mu M^{a\hat I}f_{aL \hat I}A^{\mu L}\\&=& 4 \partial_\mu M^{a \hat I}f_{a b \hat I}A^{\mu b}\, . \end{eqnarray} Here we have used the expansion into Left and Right indices $A=(a,\hat{I})$, the metric \eqref{etam}, the antisymmetry of $f_{ABC}$ and the fact that the only non-vanishing fluxes are of the form $f_{abc}, f_{ab \hat I}$. The conclusion is that, for a given vector boson $A_{\mu}^{ b}$, there is a combination of the $ \hat I=1,\dots,r=10-d$ would-be Goldstone boson scalar fields, $f_{a b \hat I} M^{a \hat I}\equiv f_{\alpha -\alpha \hat I} M^{\alpha \hat I}= l_{R}^{(\check{\mathbb P})}(\Phi)_{ \hat{I}} M^{\alpha \hat I}$ (whenever $f_{a b \hat I}\ne 0$).
We have recast the expression in a Cartan-Weyl basis by recalling that the only non-vanishing fluxes (away from the point of enhancement) containing a Cartan index are of the form $f_{\alpha -\alpha \hat I}$. Interestingly enough, this combination arises as a conformal anomaly contribution in the OPE of the energy momentum tensor with the scalars whenever these scalars become massive, away from the fixed point (see \cite{agimnr} for a bosonic string example). This indicates that the combination $ l_{R}^{(\check{\mathbb P})}(\Phi)_{ \hat{I}} M^{\alpha \hat I}(K)$ of internal R-momentum and scalar polarizations must be set to zero.\footnote{Alternatively, such a combination of scalar vertex operators must be included into a new massive, anomaly free, vector field.} Let us see, as an example, how vector boson and scalar masses arise. \subsubsection{Vector masses} In order to read the vector boson masses we just need to look at the terms quadratic in the vector fields arising from the scalar kinetic term. Thus, following similar steps as above but now keeping just the constant term in the internal metric expansion \eqref{scalarfluct}, ${\cal H}_{AB}= \delta_{AB}+\dots$, we find \begin{eqnarray}\nonumber \frac 18 (D_\mu {\cal H})_{AB} (D^\mu {\cal H})^{AB}&\approx& \frac 18 (f_{RLB}\delta_{KA}+f_{RLA}\delta_{KB})\eta^{RK} (f^{PSA}\delta^{P'B}+f^{PSB}\delta^{P'A})\eta_{PP'}A_{\mu}^LA^{\mu}_S\\\nonumber &=& 2 \frac 18 ( f_{RLB}f^{PSA}\delta^{P'B}+f_{RLB}f^{PSB}\delta^{P'A} )\delta_{KA}\eta_{PP'}\eta^{RK}A_{\mu}^LA^{\mu}_S\\\nonumber &=& -\frac 12 f_{ \hat I aL }f^{ \hat I a S }A_{\mu}^LA^{\mu}_S = -(f_{\alpha -\alpha \hat I}(\Phi))^2 |A^{\alpha}|^2 \, , \label{goldstonebosons} \end{eqnarray} where, again, a Cartan-Weyl rewriting was used in the last term. Namely, away from the fixed point, the vector bosons acquire a mass $m_{A^\alpha}$ given by \begin{equation} m_{A^\alpha}^2= \sum_{\hat{I}}( f_{\alpha -\alpha\hat{I}}(\Phi) )^{2}=l_{R}^2(\Phi) \, .
\end{equation} \subsubsection{Scalar masses} From a DFT point of view, the scalar masses arise from the terms quadratic in the scalar fluctuations in the scalar potential. Thus, by inserting the expansion \eqref{scalarfluct} into the scalar potential \eqref{scalarpotential} we find: \begin{eqnarray} -\frac{1}{12}f_{ABC}f_{DEF}\mathcal{H}^{(1)AD}\mathcal{H}^{(1)BE}\delta^{CF} -\frac{1}{12} f_ {ABC}f_{DEF} \mathcal{H}^{(2)AD}\big(3\delta^{BE}\delta^{CF} - 3\eta^{BE}\eta^{CF} \big)\, . \end{eqnarray} We notice that, due to the relative minus sign between left and right indices in $\eta^{AB}$ (see \eqref{etam}), the second term vanishes unless the indices organize as $\delta^{be}\delta^{\hat I \hat J}$, leading to \begin{equation} \frac{1}{2}\sum_{\alpha,\hat{I}}f_{\alpha -\alpha\hat{I}}f_{\alpha-\alpha \hat{I}}\mathcal{H}^{(2)\alpha \alpha}=\frac{1}{4}\sum_{\alpha,\hat{I},\hat{J}} ( f_{\alpha -\alpha\hat{I}} )^{2} | M^{\alpha \hat{J}}|^{2}= \frac{1}{4}\sum_{\alpha,\hat{J}} m_{\alpha}^2 |M^{\alpha \hat{J}}|^{2}\, , \end{equation} where \begin{equation} m_{\alpha}^2 =\sum_{\hat{I}}( f_{\alpha -\alpha\hat{I}} (\Phi))^{2}= m_{A^\alpha}^2 \end{equation} is the mass (squared) of the scalar field $M^{\alpha \hat{J}}$, coinciding with the vector boson mass. On the other hand, the first term in the expansion above leads to \begin{equation} \frac{1}{2}\sum_{\alpha} \Big(\sum_{\hat{I}} f_{\alpha -\alpha \hat{I}} M^{\alpha \hat{I}} \Big)^{2}. \end{equation} However, this contribution is irrelevant since $\Big(\sum_{\hat{I}} f_{\alpha -\alpha \hat{I}} M^{\alpha \hat{I}} \Big)$ is the Goldstone boson combination. Let us stress that the obtained masses coincide with the masses computed from the string mass formula \eqref{LRstringmasses}. \subsection{Examples} Here we illustrate the above construction in the simplest case of compactification on a circle of radius $R$.
In this case \eqref{leftrightmomenta} reads \begin{eqnarray} K_L^I&=& P^I+ R A^I \tilde p\\\nonumber k_{L}&=&\sqrt{\frac{\alpha^{\prime}}{2}}\big[\frac{p}{R}+\frac{\tilde p}{\tilde R}-P.A -\frac{R}{2} A. A\tilde p\big]\\\nonumber k_{R}&=&\sqrt{\frac{\alpha^{\prime}}{2}}\big[\frac{p}{R}-\frac{\tilde p}{\tilde R}-P.A-\frac{R}{2}A.A\tilde p \big]\, . \label{leftrightmomentaexam} \end{eqnarray} A massless state requires $k_{R}=0$ ($\bar{N}_{F}=1$) and then $k_{L}=\sqrt{2\alpha^{\prime}}\frac{\tilde p}{\tilde R}$. \subsubsection{$SU(2)\times SO(32)\times U(1)$} A possible set of massless vectors is provided by choosing $p=\tilde{p}=\pm 1$, by setting the radius to its self-dual value $R=\sqrt{\alpha'}=\tilde R$ and $A^I=0$. Together with the massless vector state associated to the KK mode with $p=\tilde p=0=P^ I$, these massless vectors lead to a Left $SU(2)$ group. In addition, an $SO(32)$ group associated to the weights $P=( \underline {\pm,\pm,\dots 0})$ appears (the underlining meaning permutation over the 16 entries), together with the corresponding 16 Cartan oscillators. Therefore, at this moduli point, the enhanced gauge group is $SO(32)_{L}\times SU(2)_{L}\times U(1)_{R}$. In the notation of \eqref{generaloddalgebraleftright} this full set of massless states corresponds to $\check {\mathbb P}_{SU(2)\times SO(32)\times U(1)}=(\pm 1,\pm 1; 0,\dots ;0),(0,0;\underline {\pm,\pm,\dots 0})$. We can break the enhanced gauge group $SO(32)_{L}\times SU(2)_{L}\times U(1)_{R}$ to $SO(32)_{L}\times U(1)_{L}\times U(1)_{R}$ or to $U(1)^{17}_{L}\times U(1)_{R}$ depending on the direction in moduli space along which we move. For instance, by sliding away from the self-dual radius, the charged $SU(2)$ vectors become massive with mass squared $m_{-}^2$, with $m_{-}=\sqrt{\frac{2}{\alpha^{\prime}}}a_-=\frac{1}{R}-\frac{1}{\tilde{R}} $, where $a_{\pm}$ are defined in \eqref{amp}.
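As a quick numerical cross-check of the circle formulas above (an illustrative sketch only: Wilson lines are switched off, $P^I=0$, we assume the standard identification $\tilde R=\alpha'/R$, and the function and variable names are ours), one can verify that the $p=\tilde p=1$ state has $k_R=0$ precisely at the self-dual radius, while away from it $\sqrt{2/\alpha'}\,k_R$ reproduces $m_-=1/R-1/\tilde R$:

```python
import math

def circle_momenta(p, p_tilde, R, alpha_prime=1.0):
    """Left/Right internal momenta k_L, k_R for the circle compactification,
    with Wilson lines switched off (A^I = 0, P^I = 0) and R~ = alpha'/R."""
    R_tilde = alpha_prime / R
    pref = math.sqrt(alpha_prime / 2.0)
    k_L = pref * (p / R + p_tilde / R_tilde)
    k_R = pref * (p / R - p_tilde / R_tilde)
    return k_L, k_R

# self-dual radius R = sqrt(alpha') (alpha' = 1): the p = p_tilde = 1 state is massless
k_L, k_R = circle_momenta(1, 1, R=1.0)
# away from it, sqrt(2/alpha') * k_R = 1/R - 1/R~ gives the charged vector mass m_-
k_L2, k_R2 = circle_momenta(1, 1, R=1.2)
```

At $R=\sqrt{\alpha'}$ one finds $k_R=0$ and $k_L=\sqrt{2}$ in $\alpha'=1$ units, matching $k_L=\sqrt{2\alpha'}\,\tilde p/\tilde R$.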
The algebra \eqref{generaloddalgebraleftright} becomes \begin{eqnarray} \big[E_{+},E_{-} \big]&=&2( a_{+} H_{3} + a_{-}H_{\bar{3}})\hspace{3em} \big[E_{P},E_{-P} \big]=P_{I}H_{I} \nonumber\\ \big[H_{3},E_{\pm}\big]&=&\pm a_{+} E_{\pm}\hspace{8em} \big[H_{I},E_{P} \big]=P_{I}E_{P} \nonumber\\ \big[H_{\bar{3}},E_{\pm}\big]&=&\pm a_{-} E_{\pm} \hspace{7em} \big[E_{P_{1}},E_{P_{2}} \big]= f_{P_{1}P_{2}P_{3}} E_{P_{3}}\, . \end{eqnarray} The subindices $\pm$ denote the two roots of $SU(2)$, the subindex $3$ denotes the corresponding Cartan, whereas $f_{P_{1}P_{2}P_{3}}$ are the structure constants of $SO(32)$, where $P_{I}$ are the roots and $H_{I}$ the Cartan generators. At the self-dual radius we have $a_-=0$, $a_+=1$ and the $SU(2)$ gauge algebra is recovered. By turning on Wilson lines $A^{I}$ the group is broken to $U(1)^{17}_{L}\times U(1)_{R}$. The algebra becomes {\footnotesize{ \begin{eqnarray} \big[E_{+},E_{-} \big]&=&\big(2-\frac{1}{2}A^{2}\big)H_{3} + \big(\frac{1}{2}A^{2}\big) H_{\bar{3}}+ A^{I}H_{I} \hspace{2em} \big[E_{P},E_{-P} \big]=P_{I}H_{I} -(P\cdot A)H_{3} -(P\cdot A)H_{\bar{3}} \nonumber\\ \big[H_{3},E_{\pm}\big]&=&\pm \big(2-\frac{1}{2}A^{2}\big) E_{\pm} \hspace{10.5em} \big[H_{I},E_{P} \big]=P_{I}E_{P} \nonumber\\ \big[H_{\bar{3}},E_{\pm}\big]&=&\pm \big(\frac{1}{2}A^{2}\big) E_{\pm} \hspace{11.5em} \big[E_{P_{1}},E_{P_{2}} \big]= f_{P_{1}P_{2}P_{3}} E_{P_{3}}\nonumber\\ \big[H_{I},E_{\pm}\big]&=&\pm A^{I} E_{\pm} \hspace{13.5em} \big[H_{\bar{3}},E_{P} \big] =-(P\cdot A )E_{P} \, . \end{eqnarray}}} As discussed, the vector boson masses are identified with the structure constants mixing Left and Right indices. Therefore we find that the $SU(2)$ charged vectors $A_{\mu}^{\pm}$ acquire a mass $m_{SU(2)}=|f_{\bar3\pm}{}^{\pm}|=\frac{1}{2}A^{2}$ whereas the $SO(32)$ charged vector masses are $m_{SO(32)}=|f_{\bar 3 P -P}|= |P\cdot A|$.
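The claim that these deformed commutators still close into a Lie algebra can also be checked numerically. The following sketch (our own toy check; the basis ordering $(H_3, H_{\bar 3}, E_+, E_-)$ and the normalizations are choices made for illustration) builds the $SU(2)$-sector structure constants for generic $a_\pm$ and verifies that the Jacobi identities hold identically:

```python
import numpy as np

def deformed_su2_f(a_plus, a_minus):
    """Structure constants f[a, b, c] in the basis (H3, Hbar3, E+, E-),
    encoding [X_a, X_b] = f[a, b, c] X_c for the deformed SU(2) sector."""
    f = np.zeros((4, 4, 4))
    f[2, 3, 0] = 2.0 * a_plus     # [E+, E-] = 2(a+ H3 + a- Hbar3)
    f[2, 3, 1] = 2.0 * a_minus
    f[0, 2, 2] = a_plus           # [H3, E+/-]    = +/- a+ E+/-
    f[0, 3, 3] = -a_plus
    f[1, 2, 2] = a_minus          # [Hbar3, E+/-] = +/- a- E+/-
    f[1, 3, 3] = -a_minus
    return f - f.transpose(1, 0, 2)   # antisymmetry in the first index pair

def jacobiator(f):
    """J[a,b,c,:] = [[X_a,X_b],X_c] + cyclic permutations; zero for a Lie algebra."""
    return (np.einsum('abd,dce->abce', f, f)
            + np.einsum('bcd,dae->abce', f, f)
            + np.einsum('cad,dbe->abce', f, f))

f = deformed_su2_f(a_plus=1.3, a_minus=0.4)   # generic point, a- != 0
```

For any values of $a_\pm$ the Jacobiator vanishes, and at the self-dual point $(a_+,a_-)=(1,0)$ the constants reduce to those of the usual $SU(2)$ Cartan-Weyl algebra, $[E_+,E_-]=2H_3$.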
As discussed in the general case, the above commutators satisfy the Jacobi identities and define an $SU(2)\times SO(32)$ algebra now involving massive states. Let us recall that from the DFT perspective the algebra is obtained through generalized Lie derivatives of the twists $E_A(\mathbb Y)$. The explicit twist for the $SU(2)$ sector is given in \eqref{su2frame}. \subsubsection{$SO(34)$} Other enhanced groups can be obtained at different points in moduli space. For instance, by choosing \cite{nsw} $\tilde R= \sqrt{2\alpha^{\prime}}$ and $RA=(-1,\dots 0)$ we notice that for $\tilde p=0$ massless states are obtained if $ P=(\underline {\pm,\pm,\dots 0})$, namely an $SO(32)$ root, provided the KK momentum ${p}=-P^1$ is selected. Moreover, the $SO(32)$ weights $P=(\pm,\underline {\pm,\dots 0}), (0,\dots 0), (2,\dots 0)$ combined with $\tilde p=\pm1$ lead to $l_L=(\pm; \underline {\pm,\dots 0})$ states that, combined with the $SO(32)$ roots, lead to massless states with charged operators associated to $l_L=(\underline {\pm;\pm \dots 0})$, corresponding to the well-known $SO(34)$ enhanced group \cite{nsw}. Recall that our description holds in the neighborhood of the $SU(2)\times SO(32)\times U(1)$ point (defined by a specific choice of generalized momentum $\check{\mathbb P}_{SU(2)\times SO(32)\times U(1)}$ and moduli fields) or of the $SO(34)$ point (with different generalized momenta $\check{\mathbb P}_{SO(34)}$ and moduli fields), but it is not possible to continuously interpolate between both points. \section{Including fermions} \label{sec:fermions} The action \eqref{actionDFTeff}, for $d=4$, is nothing but the $N=4$ bosonic (electric) sector of a generic gauged supergravity theory (see for instance \cite{Bergshoeff:2002nv,Schon:2006kz,Samtleben:2008pe,Trigiante:2016mnt}). We then see that Scherk-Schwarz reduction of DFT provides a way of deriving this gauged supergravity sector.
Inclusion of the magnetic sector requires considering EFT or an extension of the initial global group. The inclusion of fermions from a DFT point of view was considered in several works \cite{dftfermions} and, in particular, a Scherk-Schwarz-like reduction was proposed in \cite{Berman:2013cli} in the context of the superstring. The aim of the present section is to show that the mechanism of gauge symmetry enhancing-breaking through moduli-dependent fluxes, found for the bosonic sector, is reproduced in the fermionic sector. By invoking supersymmetry we conclude that the fermionic sector is just the fermionic sector of the gauged supergravities discussed in the literature. We first concentrate on the $N=4$ case in four dimensions and discuss its generalization later on. Therefore, we must deal with the global symmetry group $O(6+n,6)$. In particular we concentrate on the fermionic mass terms. For instance, quadratic terms containing the gravitini $\psi{}_{\mu i}$ and gaugini $\lambda^{a}_{j}$ read \cite{Bergshoeff:2002nv, Schon:2006kz} \begin{eqnarray} e^{-1} {\cal L}_{\text{f.mass}} \, &=& \, \frac13 \, g \,A_{1}^{ij} \, \bar \psi{}_{\mu i} \, \Gamma^{\mu \nu } \, \psi_{\nu j} + i g \, {A_{2\,a i}}^{j}\bar \psi{}_{\mu i} \,\Gamma^{\mu} \, \lambda^{a}_{j}+ A_{3ab}{}^{ij}\bar \lambda^{a}_{i}\lambda^{b}_{j} + \text{h.c.} \; , \end{eqnarray} where the matrices $A_{1}^{ij}, {A_{2\,a i}}^{j}, A_{3ab}{}^{ij}$ are known as shift matrices. Here the indices $i$ span the spinorial representation of $SO(6)$ or, equivalently, the 4-dimensional representation of $SU(4)$, the universal cover of $SO(6)$.
$SO(6)$ vectors $v_{\hat m}$ can be recast in terms of antisymmetric combinations of spinorial representations or, equivalently, in terms of the antisymmetric six-dimensional representation of $SU(4)$, $v^{ij}$, through $v_{\hat m} (\gamma^{\hat m}){}^{ij} = v^{ij}$ where \begin{equation}\label{pseudorealityconst} v^{ij}=v^{[ij]}~~~ \mbox{and}~~~ v_{ij}=(v^{ij})^{*}=\frac{1}{2}\epsilon_{ijkl}v^{kl}. \end{equation} The shift matrices are known to depend on the scalars through the coset representatives ${\cal U}_{A}{}^{\bar A}(x)$ defining the scalar matrix \eqref{scalarmatrix}. For internal indices this matrix reads \begin{equation} {\cal H}_{AB}(x) ={\delta }_{\bar A \bar B}\, {\cal U}_{A}{}^{\bar A} {\cal U}_{B}{}^{\bar B} \ , \label{internalscalarmatrix}\end{equation} with \begin{equation} {\cal U}_{A}{}^{\bar A}(x)\equiv ( {\cal U}_{A}{}^{a}; {\cal U}_{A}{}^{\hat I}) =( {\cal U}_{A} {}^{a}; {\cal U}_{A}{}^{ij})\, , \label{cosetrep} \end{equation} where the $SO(6)$ vector index $\hat I$ was expressed in terms of the spinor indices $ij$ in the last term. The shift matrices then read (see for instance \cite{Bergshoeff:2002nv, Schon:2006kz}) \begin{eqnarray}\nonumber \mbox{gravitini-gravitini:}~~ A_{1}^{ij}&\propto& ( {\cal U}_{A}{}^{kl})^{*} {\cal U}_{B}{}^{ik}{\cal U}_{C}{}^{jl}f{}^{ABC}\\\nonumber \mbox{gravitini}-\mbox{gaugini}:~~ A_{2ai}{}^{j}&\propto&{\cal U}_{A}{}^{a}({\cal U}_{B}{ }^{ik})^{*}{\cal U}_{C}{}^{jk}f{}^{ABC}\\ \mbox{gaugini}-\mbox{gaugini}:~~ A_{3ab}{}^{ij}&\propto &{\cal U}_{A}{}^{a}{\cal U}_{B}{}^{b} {{\cal U}}_{C}{}^{ij}f{}^{ABC}\, , \label{shiftmatrices} \end{eqnarray} where we have used $f{}^{ABC}$ to denote the electric sector gaugings $f_{+}{}^{ABC}$, the $+$ subindex indicating the electric sector \cite{Schon:2006kz}.
In order to read the fermion masses we need to keep the constant term in the expansion of $ {\cal U}_{A}{}^{\bar A}$ in scalar fluctuations (see \eqref{internalscalarmatrix}), reproducing the metric expansion ${\cal H}_{AB}= \delta_{AB}+{\cal O}(M)$. Therefore $( {\cal U}_{A}{}^{ b};{\cal U}_{A}{}^{\hat I})=(\delta_{A} {}^{ b}; \delta_{A}{}^{\hat I})$ with \begin{equation}\label{vielbeindeltassu4} {\cal U}_{A}{}^{ij}=\delta_{A \hat m}(\gamma^{\hat m})^{ij}. \end{equation} By replacing this expansion into the shift matrix expressions we find \begin{eqnarray}\nonumber A_{1}^{ij}&\propto& \delta_{A,\hat a} {(\gamma^{\hat a})}^{* kl} \delta_{B,\hat I} {(\gamma^{\hat I})}^{ik} \delta_{C,\hat c} {(\gamma^{\hat c})}^{jl}f{}^{ABC}= {(\gamma^{\hat a})}^{* kl} {(\gamma^{\hat I})}^{ik} {(\gamma^{\hat c})}^{jl}f{}^{\hat a\hat I\hat c} \\\nonumber A_{2ai}{}^{j}&\propto& {(\gamma^{\hat I})}^{* ik} (\gamma^{\hat c})^{jk}f{}^{a\hat I\hat c}\\ A_{3ab}{}^{ij}& \propto & {(\gamma^{\hat c})}^{ij}f{}^{ab \hat c}. \end{eqnarray} By identifying the gaugings $f{}^{ABC}$ with the fluxes defined above and by using the fact that fluxes involving more than one right index vanish ($f_{}{}^{\hat a\hat I\hat c}=f_{}{}^{a\hat I\hat c}=0$), we find that the gravitini remain massless, as expected (the same holds for the dilatini). On the other hand, the gaugini masses are proportional to $f{}^{ab \hat c}$, so they have the same masses as their vector boson superpartners, vanishing at the self-dual enhancement points. Together with the scalars and vector bosons they fill up the ${\cal N}=4$ vector supermultiplet. Let us argue that the discussion presented here for $d=4$ extends to other dimensions.
In fact, for half-maximal theories, the scalars form a coset $G/H = SO(d,d+n)/SO(d)\times SO(d+n)$ and are encoded in a coset representative $\mathcal{U}_{A}^{\bar A}=( {\cal U}_{A}{}^{a}; {\cal U}_{A}{}^{\hat I})$ as in \eqref{cosetrep}, where $A$ is now a $G=SO(d,d+n)$-vector index, $a$ is an $SO(d)$-vector index and $\hat{I}$ is an $SO(d+n)$-vector index. The index $\hat{I}$ is expressed in terms of spinor indices since the fermions transform under $H=Spin(d)\times SO(d+n)$. From the full set of possible gaugings it is still possible to choose a subset parametrized by an antisymmetric $G$-tensor, namely $f_{ABC}$. For instance, in $d=4$ the full set of gaugings is parametrized by $\xi_{\alpha A}$ and $f_{\alpha ABC}$ ($\alpha=\pm$ is the electric-magnetic index) and we have restricted to $\xi_{\alpha A}=0$ and $f_{+ABC}=f_{ABC}$, $f_{-ABC}=0$. The same applies in other dimensions\footnote{If we wanted it to also hold in $d=9$ and $d=8$ we must necessarily include vector multiplets, thus $n\geq 1$. Otherwise, $f_{ABC}=0$, see \cite{Bergshoeff:2002nv,Dibitetto:2012rk}.}. The fermion shift matrices couple to the scalars through the embedding tensor and therefore necessarily have the same form as in \eqref{shiftmatrices}, but with the $i,j$ indices spanning the spinorial representation of $Spin(d)$. The reader should be aware that the actual mass terms of the fermion bilinears are linear combinations of the above terms, including scalar factors. As before, when the gaugings are those coming from an enhancement point of the string moduli space, the structure constants (the fluxes) take the values of the previous section. At these points, the gravitini shift matrices are zero and supersymmetry is preserved. Away from the point of enhancement, scalars, vectors and fermions organize into a massive supermultiplet.
\section{Summary and outlook} \label{sec:Summary and outlook} In the present work we have shown how DFT can provide an interesting description of the gauge symmetry enhancing-breaking process that occurs in the heterotic string at specific points of its moduli space. The construction relies on previous ideas used to describe this process in the bosonic string case. The three key ingredients encoding the enhancement information are: a global $O(n_1,n_2)$ invariant gauged (super)gravity action, a scalar fluctuation expansion of a generalized scalar metric and the presence of generalized, moduli-dependent, 3-form fluxes. The heterotic effective action is obtained by choosing the global group $O(n,r)$ (where $n=\dim{G}_L$ is the dimension of the enhanced group at the fixed point and $r=10-d$ the number of compact dimensions) and by identifying the 3-form fluxes $f_{ABC}(\Phi)$ with the internal momenta of the string. Recall that the indices are conveniently written as $A=(a,\hat I)$ with $a=1,\dots n$, $\hat I=1,\dots r$. At a point of enhancement $\Phi_0$ the only non-vanishing fluxes are those with only Left indices, $f_{abc}(\Phi_0)$, reproducing the structure constants of the $G_L$ group. Away from this point, mixed indices give rise to non-vanishing fluxes $f_{ab\hat I}(\Phi)$. These mixed-index fluxes govern the vector boson masses, the scalar masses and the structure of the would-be Goldstone bosons. It is worth emphasizing that this structure exactly matches the string theory results with the correct full dependence on the moduli fields. By invoking supersymmetry, a fermionic sector can also be included. In particular we have shown that moving away from the enhancement point $\Phi_0$ produces the expected masses for the gaugini partners of massive vector bosons while keeping supersymmetry unbroken. Let us address some open questions.
In DFT the generalized fluxes appear as generalized Lie derivatives, involving internal coordinates $\mathbb Y^M$, of generalized internal frame vectors $E_A(\mathbb Y)$. In a generic construction, the internal coordinates transform in the vector representation of the global group $O(n_1,n_2)$, namely $M=1,\dots, n_1+n_2$, and the same is valid for the frame index $A$. However, it appears that in order to reproduce the above fluxes, only a dependence on the ``true'' internal Left and Right $16+r+r=36-2d$ coordinates, associated to the string coordinates, would be needed. In fact this was shown to be the case for some specific examples in \cite{agimnr,aamr} (see also \cite{Cagnacci:2017ulc}) for the bosonic string case. Along a similar line of reasoning, a dependence on $\mathbb Y= (Y^I, y_L^{ I}, y_R^{\hat I})$ with $I=1,\dots 16; \hat I=1,\dots r$ would be expected. Therefore, the tangent space here, spanned by $A$, would account for the gauge symmetry enhancement, associated to states with non-vanishing KK momenta and windings, but the ``physical space'' would be the string torus (including $\Gamma_{16}$). The explicit construction for the heterotic string remains here an open question. Recall that our description is valid close to a given moduli point. When moving from one point of enhancement to a new point the dimension of the gauge group can drastically change and, therefore, so can the dimension of the tangent space. Even if, as stressed in \cite{aamr}, these tangent directions are not physical dimensions, an explanation of how moving continuously from one point of enhancement to another could lead to a discrete change in the number of these extra tangent dimensions is still lacking. A DFT description would presumably require the introduction of extra states, mimicking the string theory situation.
Following the suggestions in \cite{aamr}, this could presumably be achieved by considering a sort of generalized KK expansion in the generalized momenta $\mathbb L$ of the different fields coming into play. Thus, very schematically, a vector boson corresponding to a charged generator would read\footnote{Similar expansions were considered in \cite{Aldazabal:2016yih} for the bosonic string case and for ${\mathbb L}^2=0$.} \begin{eqnarray} A_{L\nu}(x, {\mathbb Y}) =\sum_{\mathbb L} A_{L{\nu}}^{({\mathbb L})}(x) e^{i \mathbb{L}_{M} \mathbb{Y}^{ M}}\,\delta (\mathbb L^{ 2},1) =\sum_{\mathbb L} A_{L\nu}^{({\mathbb L})}(x) e^{i K.Y+ik_L.y_L+i k_R.y_R}\,\delta (\mathbb L^{ 2},1)\nonumber \, , \end{eqnarray} where $K^I, k_L^m, k_R^m$ are functions of the moduli. Therefore, when moving continuously along the moduli space, and for specific values of the generalized momenta ${\mathbb L}$ in the above sum, $k_R = 0 $ and the associated vector fields $A_{L\nu}^{({\mathbb L})}(x)$ become massless. The neighborhood of each such point is what our description captures. In order to address the description of the enhancement process we made a specific choice of generalized fluxes $f_{ABC}$, with $A=(a,\hat I)$, by keeping just the indices leading to the enhanced gauge group structure constants at the enhancement moduli point and setting the other components to zero. However, it appears interesting to explore the meaning of other possible components. In fact, we have already mentioned that, if we restrict the indices to the torus directions, namely $a=\check I,\ \hat I=1,\dots r$, the corresponding fluxes encode the geometric and non-geometric (closed string) fluxes discussed in the literature \cite{fluxes}. In the six dimensional case, these fluxes span the ${\bf 220}$ representation of $O(6,6)$. Interestingly enough, the quadratic constraints \eqref{quadratic} mixing these fluxes with the gauge group ones would impose restrictions on the possible gauge groups.
This is reminiscent of the Freed-Witten anomaly \cite{Freed:1999vc} cancellation requirements discussed in \cite{Aldazabal:2008zza}, in the context of Type II strings, where such conditions were obtained from quadratic constraints. Such mixings, in the heterotic string Abelian case, were also found in \cite{Kaloper:1999yr}. Notice that there are still other flux components to be considered that, in the context of Type II, would correspond to mixings of open string indices with closed string ones. Heterotic/Type I duality could shed light on their possible interpretation. \section*{Acknowledgments} We thank D. Marqu\'es, C. Nu\~nez and A. Rosabal for useful discussions. This work was supported by CONICET grant PIP-11220110100005 and PICT-2012-513. G. A. thanks the Instituto de F\'isica Te\'orica (IFT UAM-CSIC) in Madrid for its support via the Centro de Excelencia Severo Ochoa Program under Grant SEV-2012-0249. \bigskip
\section{Introduction and Draft APS statement} Informal science education activities have long played an important role in helping create and recruit the next generation of scientists, in communicating both exciting results and fundamental concepts to the world, in building public trust in science and scientists, in raising the profile of research and academic institutions, in justifying the use of taxpayer funds for research, and in securing new funds for important projects \cite{NAP12190}. Despite their value, the efforts of physicists to initiate and participate in informal education activities are often not considered in hiring decisions or in career advancement decisions by their institution, as evident from faculty handbooks ({\it e.g.}, \cite{HARV2021}). In order to recognize the benefits of informal educational efforts, and to motivate more physicists to participate in them in the future, the APS CIP has drafted a statement that advocates that engagement-based efforts be considered in recruitment and career advancement decisions by the facilitators' home institutions. \vskip 0.1cm \noindent The text of the draft statement is as follows: \vskip 0.1cm {\it Systematic, ongoing, respectful, lively, two-way conversations with the public -- that is, public engagement on science -- is critical to the field of physics, including the public image of institutions hosting physics research and education, the recruitment and diversity of new generations of physicists, the scientific interest and literacy of the general public and in turn their support of physics and science more generally, and the success of physics-based applied research and development undertaken in response to specific practical societal needs. Overall, however, such activities tend to receive inconsistent and in some cases negligible evaluation during hiring, performance assessments, and promotions of physicists.
APS therefore strongly supports participation in informal education activities, and supports that such participation should be considered in recruiting and promotion decisions, including tenure decisions at universities and other forms of career advancement at non-academic institutions as appropriate.} \vskip 0.1cm In the sections below, we will define terminology for informal science education efforts and describe the different types of engagement activities, and we identify the stakeholders in these efforts and the value to each. We also give examples of some successful programs, and we suggest aspects of these activities that may be considered by institutions in evaluating informal educational efforts for hiring and career advancement decisions. \section{Description of Informal Education Terms and Activities} To properly frame our discussion, we begin by defining and then characterizing informal science education. In contrast to classroom learning, informal education takes place outside of formal learning structures, often lacks a framework and defined objectives, and includes a broad range of activities \cite{NAP12190, ainsworth2010formal}. Informal learning happens continuously, as it is based on the individual's experience. In some cases like watching TV or listening to the radio, the learning may be happening without the individual explicitly choosing it. In other cases like visiting museums or zoos, reading books, attending summer camps, or going to a public lecture, it can be more deliberate. Informal science education is widely available and regularly consumed by the public, and in fact plays an important role in people's general understanding of science. Informal science is often characterized by ``free-choice'', learner-led activities, which are unbounded by standards or metrics, and which have an emphasis on the participant's curiosity and excitement \cite{falk2005using, avraamidou2016intersections, phipps2010research}. 
As expressed in the NRC report {\it Learning Science in Informal Environments}, ``An important value of informal environments for learning science is being accessible to all'' (National Research Council, 2009). Due to the carefree nature and participant-led aspect of informal science education, it has often been seen as a key element to spark interest, support student learning, discover the joy and relevance of science in everyday life, and identify science as a possible career path \cite{adams2017informal, anderson2007predators, aroca2011ensino, eratuuli1990experiences, nielsen2009metacognitive, tal2014learning}. Besides informal science education, a number of other terms are used to describe activities whereby scientists interact with the public, including non-formal education, public engagement, and outreach. While these are often used interchangeably, it is important to define and describe the differences between these designations -- with the recognition that variations routinely occur with different authors and regions. Non-formal education activities are those that have an established framework but take place outside formal learning structures ({\it e.g.}, classrooms). For example, training activities in the arts or in sports often follow an established syllabus but have few (if any) formal assessments, as they are usually pursued by individuals striving to improve a particular set of skills as a hobby rather than as a career. Courses for senior citizens are another popular example of non-formal educational opportunities. Informal education also happens outside of formal learning structures and learners also choose to participate; however, no particular objective or framework is defined. Informal learning happens constantly, as it is based on the individual's experience, and on some occasions the learning happens without the individual explicitly choosing it, as when reading an article or listening to a radio report.
In some cases it can be more deliberate, like visiting a museum, attending a summer camp, or downloading a science podcast. While a broad set of activities is classified as informal education, ``outreach'' is a narrower term that was previously used to describe the interactions of scientists and the public. Unfortunately, the connotation of this word implies a one-way interaction wherein a knowledgeable scientist provides information to a recipient audience who may otherwise not know, be aware of, or have access to this information. This one-way interaction between facilitator and audience, referred to as a deficit model, is, however, no longer favored: there has been a recent shift in the community to reframe this model and design activities wherein facilitators and audience are mutually engaged, exchanging information and knowledge. Referred to as the two-way interaction model, this reframing has led to more dynamic events, which are now referred to as ``public engagement'' rather than by the term ``outreach''. In our discussion of informal science education activities, we will follow this trend by emphasizing activities where there is a mutual interaction between the different stakeholders. {\bf For brevity, we will also use the acronym ISEAs for ``informal science education activities''}. \section{Value of Informal Education for Stakeholders} There are four stakeholders that benefit from ISEAs -- the audiences, the researchers, the institutions, and the field of physics. In support of our effort to have ISEAs considered for career advancement decisions, we describe in this section the value of ISEAs to each of these stakeholders. \subsection{The Audiences} The benefits of classroom science education to audiences have been well documented.
These include, for example, enhanced critical thinking \cite{ainsworth2010formal}, cultivating a passion for learning, cross-discipline advances, preparation for the future, and economic benefits ({\it e.g.}, career opportunities) \cite{Connections2021, eng201110}; as well as health benefits ({\it e.g.}, vaccines), understanding public policy \cite{marincola2006public}, and overall science literacy \cite{archer2015science}. Not only do ISEAs supplement all these benefits of classroom science, they do so without the constraints of curriculum, textbooks, tests, metrics, or oversight by school boards or other organizations \cite{falk2005using, avraamidou2016intersections, NAP12190}. This freedom has led, in some cases, to extremely creative and meaningful learning experiences \cite{doering2015fostering, oner2016stem}. When two-way interaction methods are employed, the resulting dialogues can bring science ``alive'', spark interest, and stimulate curiosity in a way that is hard to replicate in classroom curricula designed to cover material on standardized tests and end-of-year exams \cite{avraamidou2016intersections, NAP12190}. For these reasons, ISEAs can inspire students to vigorously pursue classroom science studies. Further, for discussions on critical public policy issues \cite{Fleming2018} such as climate change, nuclear energy, or vaccine efficacy and safety, direct person-to-person engagement between a researcher and the public in an open and accessible forum can change hearts and minds in ways not possible with a journal article or conference lecture that will not be seen by the general public. As scientific advances permeate our daily lives more each day, audiences benefit from having more access to science in a manner only possible through ISEAs. \subsection{The Researchers} Many physicists, and some of their supervisors, view the time spent on ISEAs as time ``lost'' from productive research \cite{andrews2005scientists, martinez2016has}.
However, there are numerous ways in which researchers can benefit from their involvement in ISEAs, including: improving their communication skills \cite{hinko2013impacting, illingworth2015developing}; advancing their research; expanding their abilities to mentor, teach, and train the next generation of scientists \cite{adams2017informal, avraamidou2016intersections, Bennett2017, hinko2016characterizing}; identifying diverse career paths \cite{laursen2012impact}; and satisfying stakeholder ``societal impact'' requirements \cite{NSF2021}. The origins of these benefits lie in the preparation for, as well as in the participation in, ISEAs. With regard to preparations for engagement activities, researchers must take time away from their regular research duties to first reflect on all aspects of their work -- including their overarching motivation, short-term goals, methodologies, results, impact, and next steps -- and then to formulate clear and concise descriptions of each. With regard to participation, researchers must utilize these formulations to engage in two-way interactions with audiences. We will examine in detail the benefits that this reflective thinking, message formulation, and audience engagement bring to researchers' communication, research, teaching, and societal skills. \subsubsection{Improving Communication} Since ISEAs are based on communication, researchers who are active in ISEAs can expect to hone their communication skills \cite{gagnon2017addressing, hinko2013impacting, illingworth2015developing}.
This includes message crafting (creating, revising, streamlining, and rehearsing), conceiving analogies and examples to better describe one's work, adapting messages to audiences with different backgrounds and interests, responding to unexpected questions, and addressing the critical question ``Why should anyone care about your research?'' These skills, especially the ability to place detailed research work into a much broader context and show its relevance to everyday life, clearly benefit researchers' communications with funding agencies, academic departments, review panels, technical audiences, colleagues, and other stakeholders, and can also be invaluable for writing journal articles. \subsubsection{Advancing Research} Beyond improving communication skills, research efforts can be advanced by participation in ISEAs -- specifically, in research direction, methodologies, and a researcher's overall knowledge of the field. For example, the reflective thinking needed to prepare for ISEAs can lead to changing research directions in a way that differs from traditional mechanisms ({\it e.g.}, discussions with collaborators and colleagues, examining the work of competitors, exploring related fields, lack of resources, etc.). Another benefit of ISEAs is that engagement with non-experts can lead to ideas that differ from the standard methodologies in one's field. Such discussions can lead to unexpected inspirations, through ``outside the box'' thinking that can alter both detailed approaches and overall research directions. Audience interaction can also lead to new perspectives on priorities, assumptions, and conclusions that may not otherwise occur. This may be particularly apparent in applied physics areas, such as certain aspects of materials science and biomedical physics, where research directions may be driven by pragmatic societal needs and goals rather than pure curiosity.
Finally, because physicists are often asked to engage with the public on topics outside of their primary research area, ISEA preparation and participation can be a gateway to expanding a researcher's physics understanding. It can also enable the discovery of previously unrecognized connections between their research and other areas, connections that could advance their research in unexpected and exciting ways. A number of studies have quantified the impact of ISEAs on research productivity. In a study of researchers at a sustainability research center, \cite{kassab2019does} found statistically significant and moderately positive correlations between four out of six types of public engagement activities and the number of research publications. This agrees with other studies finding a positive effect of ISEAs on research performance \cite{bentley2011academic, jensen2008scientists, van2012bench, van2011entrepreneurial}, and is consistent with other studies that have found no negative impact on research \cite{gulbrandsen2005industry, mostert2010societal}. \subsubsection{Improving Mentoring, Teaching, and Training} Because communication skills are absolutely essential to effectively mentor, teach, and train the next generation of scientists, the improved communication skills resulting from two-way engagement-based ISEAs will surely benefit these training efforts. The reflective thinking needed to craft presentations for ISEAs will aid in composing succinct messages that make training much more potent, and will greatly assist researchers in imparting the broader context and overall motivation for their work to younger scientists. This last point is particularly important, as young scientists, who are often overwhelmed with learning daily research tasks, may lose their motivation without knowing the impetus for those tasks.
More broadly, ISEAs train scientists to explain their work effectively and with enthusiasm to those unfamiliar with the scientist's field of research -- an obvious asset for teaching students in more formal education settings, particularly at the undergraduate and junior graduate levels. Furthermore, ISEAs play an important role in launching the careers of researchers. A recent study \cite{rethman2021impact} used surveys and interviews of over 100 undergraduate students and found that students who facilitated informal physics programs reported positive benefits to their communication, teamwork and networking, and design skills. This helped develop their physics identity, their feeling of belonging to the physics community, and their career skills. Another study \cite{Prefontaine2020} identified interactions with the audience, working in teams, thorough training, a clear mission, and a supportive schedule as key structures in ISEAs that help build the physics identity of undergraduate participants. \subsubsection{Satisfying Societal Impact Requirements} Funding opportunity announcements from certain agencies ({\it e.g.}, the U.S. National Science Foundation \cite{NSF2021}) require that responding proposals include a portion directed at societal impact. While the training of graduate students was, in past years, often sufficient for such broader impact components, more elaborate efforts to demonstrate how the proposed research serves the needs of society now prevail. In many cases, establishing ISEAs tied to the proposed research topic can satisfy, or serve as a critical component of, the societal impact requirements of a proposal. Furthermore, researchers who regularly participate in ISEAs will likely have a propensity for creating new activities that may significantly improve the societal impact component of their next proposal.
\subsection{The Institutions} Institutional support for ISEAs is certainly critical for their success. This support includes, but is not limited to, facilitating, publicizing, hosting, sponsoring, and funding these activities, as well as encouraging their employees to participate. Research examining the landscape of informal physics programs in higher education institutions (HEIs) across the US has, however, found that the vast majority of these activities are not run by tenure-system faculty, but rather by staff, students, and instructors. As these facilitators typically have substantially less clout in the hierarchy of academic institutions, ISEAs often exist on the fringe of awareness and resources within the physics culture. To help reverse this trend, we note that there are many benefits of ISEAs to the host institution. For example, successful ISEAs can be invaluable recruiting tools -- increasing the number of junior researchers and (at academic institutions) the number of students studying science. As an example of the latter, a recent study of North Carolina State University students found that participants in the North Carolina Science Olympiad, an activity for North Carolina high school students, were significantly influenced in their choice of colleges, their choice of majors, and their decision to go into a STEM field \cite{NCSU2018}. Institutions that run successful ISEAs can also significantly enhance their role as ``good neighbors'' in their community. One example of this is the School for Science and Math at Vanderbilt University, which provides a research-based elective STEM curriculum for high school students in Nashville, Tennessee. The program has elevated achievement scores of participants, has produced 27 Intel and Siemens semifinalists and regional finalists, and has resulted in conference presentations and scientific publications for some participants \cite{eeds2014school}.
ISEAs can not only give institutions increased visibility in their local communities but can also raise their reputation in the research world. This can come, for example, from enhanced research funding: the careful crafting of messages by facilitators of ISEAs can form the foundation of successful research proposals. Also, as discussed above in Section 3.B.1, numerous studies have shown positive correlations between ISEA participation and research productivity. Finally, ISEAs often satisfy the ``societal benefits'' component that is required in many proposals ({\it e.g.}, NSF \cite{NSF2021}). \subsection{The Field of Physics} There are numerous benefits of ISEAs to the field of physics, and of science in general, including growing the workforce, growing general public support for research results and methodology, and changing the direction of research and development. We provide details of each of these aspects below. \subsubsection{Recruiting and Equity/Diversity/Inclusion} Recruiting the next generation of physicists, critical for the health of our field, requires substantial effort; it is not a passive endeavor. For example, researchers who simply wait for potential summer interns to contact them are at a significant disadvantage to those who take action to bring in new members to their team. ISEAs have been shown, through a large body of research \cite{barton2013science, maltese2010eyeballs, rahm2016case, zimmerman2012participating, carlone2007understanding, godwin2016identity, varelas2015explorations, archer2010doing, Hazari2019}, to be an excellent mechanism for recruiting. Specifically, these works show that participation in ISEAs is correlated with increased interest in science, development of a science identity, and career intentions. Especially because they can be creatively designed and have no tests or metrics, ISEAs are ideally suited for ``sparking the interest of future generations of scientists'' in a non-threatening and enjoyable manner.
Furthermore, there is research within the Physics Education Research (PER) field that has looked at the positive impact that facilitating ISEAs has on the students who organize them, including fostering physics identity, development of communication and pedagogical skills, sense of belonging, and confidence among peers \cite{mullen2019should, Prefontaine2020, fracchiolla2020participation, fracchiolla2020community, rethman2021impact, hinko2016characterizing, bennett2020refining, adams2017informal, Hazari2019, king2019black}. These works have prompted a steady increase in the number of informal physics education programs, resources, and public campaigns, mostly targeted at youth from underrepresented minorities. This focus is warranted because disparity of representation in physics has been an issue for decades. Increasing the number and diversity of students who choose physics as a career has become a key objective nationwide -- to guarantee that the membership of the physics community reflects the rich diversity of the communities it serves, and can benefit from a heterogeneity of thought processes and experiences. Because this recent focus on increasing equity, diversity, and inclusion (EDI) holds for all fields, and the pool of applicants is finite, it is more important than ever to design ISEAs that attract underrepresented groups to physics. While the number of students from underrepresented groups who enroll in physics has increased, in part due to engagement efforts, the trends for students who graduate and pursue a career in physics have unfortunately not changed significantly \cite{Funk2018, Mulvey2020, riegle2019does}.
Prior research \cite{godwin2013development, hazari2010connecting, hyater2018critical, hyater2019deconstructing, hazari2020context} demonstrates that developing a physics identity is a key factor in attracting and retaining students in physics, and participation in informal physics spaces can support and foster the development of a physics identity for both audience and facilitators \cite{NAP12190, fracchiolla2020participation, Prefontaine2020, adams2017informal}. It should be noted, however, that many who design and facilitate these programs may possess academic experience but lack resources or research-based knowledge on best practices for designing, implementing, and assessing the programs, particularly from a cultural competency perspective, putting at risk their efficacy in achieving diversity within physics. This suggests that improving EDI in the field of physics would be well served by researchers working collaboratively with physics education experts in designing ISEAs targeting underrepresented groups. \subsubsection{General Public Support for Science} Science and the technology it enables are major driving forces behind every successful nation, engender healthier, longer, richer lives in the societies that embrace them, and reveal and celebrate the mysteries of the world around us. These successes are reflected in public opinion: 73\% of Americans believe science has a mostly positive impact on society \cite{Funk2020}. Because much scientific research is publicly funded, maintaining and improving that positive public support will be critical to sustain and grow both future research efforts and scientific manpower. To this end, a body of peer-reviewed scientific literature on the value of public engagement around science as a means for boosting public acceptance of science -- and importantly, how to do it successfully -- is emerging. A number of different models for ISEAs have been identified \cite{fracchiolla2019characterizing}.
\cite{archer2015science} introduce the notion of science capital and suggest framing science engagement in terms of maximizing that capital. \cite{dawson2014not} stresses that integrating low-income ethnic minority groups into public engagement on science requires looking beyond eliminating certain specific barriers, such as cost, to additionally incorporating cultural and linguistic inclusivity. Especially crucial for successfully engaging adult audiences, particularly around contentious science issues, is overcoming the deficit model of science outreach, which is based on a belief that the public is ignorant and seeks to remedy that flaw by providing more and better information; such an approach is inherently paternalistic and reflects a narrow conceptualization of knowledge ({\it e.g.}, \cite{phillips2013really}) that is likely to alienate or offend audiences. Beyond research on informal education, some quantitative studies of social interactions also provide important clues on how to develop effective strategies for public engagement on difficult science topics. For instance, \cite{williams2015network} applied complex network theoretic analysis to metadata from social-media discussions of climate change science. They found that the most extreme views were reinforced by homogeneous echo chambers largely devoid of argument, whereas more moderate views evolved in social-media interactions that breached cultural silos, leading to increased debate but ultimately also to decreased polarization. Studies like those described above collectively help provide a framework for formalizing and scaling up ISEAs in a way that can increase public approval of science and scientists. An expansion of such efforts is especially needed in light of new challenges that threaten public support for science from several directions.
High-profile examples of such challenges include: fears of genetic research and related objections to GMO foods and mRNA vaccines; policy solutions for climate change that are often at the expense of workers and the poor, and similarly, social injustice in some science-based movements like environmentalism (see, {\it e.g.}, \cite{Gross2018, shammin2009impact, LaChance2018, Mock2019, Fears2020}); the recent worldwide rise of populist movements, which are by definition skeptical of experts -- including scientists; and skepticism of the public-health value of masks during the COVID-19 pandemic, which arose from a variety of sources ranging from partisan political drivers to inept science communication that conflated rigorous medical advice with public health management goals \cite{Tufekci2020}. Low-income communities and people of color can be disproportionately affected by these challenges, and historical abuses -- such as the deeply immoral and racist Tuskegee syphilis study, in which effective treatment was withheld from African-American men in the mid-20th century and over 100 participants died -- have eroded the trust these communities have in scientists, resulting in their having substantially lower approval ratings for science \cite{Funk2020}. This worsening distrust of scientists is an obvious threat to the scientific enterprise and the societal benefits it can provide. A crucial way to address this unfortunate trend is to enlist more scientists to participate in ISEAs, so that they may effectively serve as ambassadors for science. However, it will be essential to train these new ambassadors so that their efforts improve, rather than erode, public trust in science.
For example, training to emphasize engagement over one-way interactions, to focus on the science over public policy or partisan political statements, and to be sensitive to the metaphysical belief systems of the public nationally and globally, perhaps especially indigenous peoples \cite{Fleming2018}, will significantly elevate the quality and positive impact of enhanced engagement efforts. \subsubsection{Direction of R\&D} As discussed in Section 3.B.2 above, preparation for and participation in ISEAs may result in changes in the direction of research by individual physicists based on their reflective thinking or on suggestions by (or inspiration from) non-experts. Such a change in direction for a single research effort could, in principle, have a profound and long-lasting impact on a physics subfield. Again, this might be particularly relevant for certain applied physics areas, where research directions may be primarily driven by pragmatic societal needs and goals. Direct engagement with the public and with professional experts in fields other than physics can, additionally, be critically important to identifying and prioritizing knowledge gaps and requirements. \section{Examples of Successful Engagement Programs} To illustrate some of the concepts discussed above, we describe a number of ISEAs in different physics subfields that exemplify the two-way engagement model of interacting with the public. These include CERN (particle physics), Michigan State University (nuclear physics), Texas A\&M University (modern physics), University of Michigan (general physics), the University of Illinois at Urbana-Champaign (general science), Rutgers University (general physics), the Princeton Plasma Physics Laboratory (general science), and Science Cafes (general science).
CERN has an extensive Education, Communication, and Outreach effort \cite{CERN2021} that includes websites, social media, media visits, press releases, programs and activities for teachers and students, guided tours, exhibitions, the Arts at CERN programs, and more that include photos, videos, and animations. Additionally, a new flagship education and outreach center, the CERN Science Gateway, is under construction and will open in 2023. It will feature immersive hands-on exhibitions, education laboratories, and events for international audiences of all ages. Michigan State University has established a set of ISEAs in nuclear physics that reach a broad age range and cover a wide range of activities \cite{Spyrou2021}. These include GUPPY (grades 4-6), MST@MSU (grades 7-10), PAN@MSU (high school), PAN for high school teachers, a High School Honors Science Program, NS3 -- Nuclear Science Summer School (college), Research Assistantships (college), Summer Research Experiences/REU (college), Conference Experiences for Undergraduates (college), and a variety of workshops, schools, conferences, and career opportunities for graduate students. They also offer laboratory tours (4000 visitors/year), open houses ($\sim$3000 visitors), public talks, school visits, and science festivals. In addition, they have established an infrastructure overseeing these activities, including coordinators, a faculty interface, committees, web designers, and more. Texas A\&M University \cite{TAMU2020} has a 14-year-running Saturday Morning Physics program, where Texas high school students learn about modern physics research topics including cosmology, relativity, dark matter and neutrinos, nuclear physics in stars and in medicine, and cold atoms and nanodevices. The students receive certificates for sustained attendance, and high school teachers are especially invited to participate. Since 1995, the University of Michigan has run a Saturday Morning Physics program \cite{UMICH2021} designed for general audiences.
It features faculty members describing their research in non-technical terms, employing multimedia presentations with hands-on demonstrations, slides, videos, and computer simulations. The Department of Physics at the University of Illinois at Urbana-Champaign has run a traveling science show called the ``Physics Van'' since 1994 \cite{ILL2021}. The van visits schools in many states and performs programs designed to excite schoolchildren of all ages about science. These include assembly-style shows, classroom workshops, mobile exhibits, and more. They are invited by colleges/universities, research institutions, museums, and private organizations to put on their programs. The Rutgers Faraday Holiday Children's Lecture series has been running for over 20 years \cite{RUTG2020}. These holiday physics shows designed for the public have entertained approximately 25,000 children and adults. They are energetic, sometimes explosive hands-on demonstrations that cover matter and motion, Newton's laws, and other topics found in a typical physics semester course. The Princeton Plasma Physics Laboratory \cite{PPPL} has been running the Science on Saturday public lecture series since 1991. It features high-school level talks given by experts in diverse fields of science. The audience varies in age and academic background, and some participants have been attending since their early years. Another event hosted by PPPL is the Young Women's Conference (YWC). The YWC brings together middle and high school girls for a one-day event featuring booths and speakers from different STEM fields and institutions. The YWC quickly outgrew the PPPL site and is hosted yearly on the Princeton University campus, where it gathers $\sim$1000 participants a year. Science Cafes \cite{SCICAFE2021} are a series of informal gatherings for science discussions held in bars, restaurants, and other casual settings.
The events feature a moderator and a scientist and the showing of a short video designed to initiate a lively conversation with the audience. With a goal of igniting a passion for science among the attendees, it is common for the audience to take over and be fully engaged, and for the topic to meander towards their mutual interests. The wide variety of Science Cafe venues, topics, audiences, and online resources has resulted in tremendous growth of this grassroots movement -- with hundreds of these series now established across the USA and internationally. \section{Evaluating Informal Education Efforts for Career Advancement} There are many types of ISEAs, ranging from public lectures and events to laboratory tours and school visits, from workshops and summer schools to podcasts and videos, from books and media interviews to comic strips and physics kits and more. Some of these activities reach thousands of participants, and others just a few, but all have the potential to bring about profound changes in lives, institutions, and our field. The level of effort required to carry out such a diverse collection of activities clearly varies widely in initiative, commitment, and time. For this reason, it is not practical to establish metrics that could be used to evaluate these efforts for hiring or career advancement. This same lack of metrics also holds for the service work ({\it e.g.}, chairing a department, serving on or leading a committee, conducting peer reviews, organizing conferences, authoring studies, arranging colloquia, and more) that is required for career advancement at most academic institutions. Instead of metrics, evaluations of service work are often made by letters of recommendation that detail the initiative, commitment, time, and impact of these activities. We recommend a similar modality for evaluating ISEAs for career advancement.
A ``Statement of Informal Education Efforts'' written by candidates can be part of their career advancement portfolio, and letter writers can be instructed to comment, when appropriate, on such efforts. In particular, in lieu of metrics, we provide a representative list of aspects of ISEAs that could be used for career advancement evaluations. These aspects include Initiative, Creativity, Audience, Duration, Interactivity, Impact, Funding, Publicity, and Longevity. We briefly discuss each of these from the perspective of an evaluator making the case for hiring or career advancement. {\it Initiative} -- Conceiving and executing an ISEA often requires significant time and effort, but can result in an event that best utilizes the strengths and background of the presenter to be most impactful. {\it Creativity} -- Novel concepts in ISEAs have the potential to inspire new ways of learning and possibly launch new informal education paradigms. They can also demonstrate what does not work, and hence can be valuable learning opportunities for facilitators. {\it Audience} -- In some cases, the impact of an event scales with the number of participants. Larger events often require attention to content and logistical details to maximize the potential to reach all audience members. Alternatively, the intimacy of few-to-one or one-to-one interactions can provide tremendous opportunities for learning and mutual engagement. Furthermore, the composition of the audience members should be considered. For example, engaging with a small number of high school teachers may have the same impact as directly presenting to a much larger number of students. {\it Duration} -- Some activities may require extensive preparation to condense complex science concepts into a short time frame, while others may require covering multiple topics to keep an audience engaged for longer periods.
{\it Longevity} -- Establishing a web presence, distributing presentation files, creating take-away items, and collecting audience contact information can all enable an ISEA to have an impact that lasts substantially longer than the original event. {\it Interactivity} -- The recent shift towards two-way interaction can not only produce events that energize the audience but can also provide learning opportunities for the facilitators. {\it Impact} -- Some methods of determining the impact of ISEAs include audience questionnaires, oral or written feedback from participants, the number of questions asked during and after the event, and the number of new students or young researchers resulting from recruiting events. {\it Funding} -- Some ISEAs require facilitators to obtain funding, adding another facet to the commitment and effort to realize the event. {\it Publicity} -- Articles about successful ISEAs can bring positive attention to the host institution, can raise the reputation of departments or divisions, and can inspire others to initiate events. Some articles are effectively an additional ISEA that further broadens the audience and impact of the original event. Consideration of these aspects of ISEAs may facilitate hiring and career advancement decisions by the host institutions of the facilitators. We note that some of these same criteria are used by APS committees in their decisions to fund ISEAs ({\it e.g.}, through the Mini-Grant proposal review process run by the APS Committee to Inform the Public \cite{APS2021}). \section{Summary} We have enumerated the many benefits of informal science education activities to the audiences, to researchers, to their institutions, and to the field of physics.
These include: enhancing the critical thinking, understanding of public policy, career opportunities, and science literacy of the public; improved communication skills, methodology, and mentoring by researchers; increased science enrollment, visibility, reputation, and research funding for institutions; and expanded and more diverse recruiting and more public support for the field of physics. Given these benefits, we advocate for the expansion of informal science education activities, especially those that involve two-way engagement with the audience. We also advocate that institutions consider involvement in these activities as part of their decision-making process on hiring and career advancement, and we discuss numerous aspects of these activities that could be helpful in evaluations required for career advancement. \Urlmuskip=0mu plus 2mu\relax \bibliographystyle{unsrtnat}
\section{Introduction} Graph similarity measurement, which is to compute the distance or similarity between two graphs, is a fundamental problem in graph-related tasks. It arises in a variety of real-world applications, such as graph search in graph databases \cite{yan2002gspan}, malware detection \cite{wang2019heterogeneous}, brain data analysis \cite{ma2019deep}, etc. Graph Edit Distance (GED) \cite{bunke1983distance} and Maximum Common Subgraph (MCS) \cite{bunke1998graph} are two domain-agnostic graph similarity metrics, yet exact computation of both is known to be NP-hard \cite{zeng2009comparing}. For instance, no algorithm to date can compute the exact GED between graphs of more than 16 nodes within a reasonable time \cite{blumenthal2020exact}. This problem has motivated interest in approximation algorithms, with a recent surge in graph similarity learning methods \cite{li2019graph,bai2019simgnn,bai2020learning,ling2021multilevel,zhang2021h2mn,xu2021graph,lan2021sub}. Recent methods improve performance by capturing node-level or subgraph-level interactions. The initial approach was to encode each graph as a graph-level fixed-length vector via Graph Neural Networks (GNNs) and then combine the two vectors of the input graphs to predict similarity. However, the actual difference between two graphs often arises from very small local substructures, so it is difficult for a graph-level fixed-length vector to retain such local information \cite{bai2019simgnn}. To alleviate this problem, GMN \cite{li2019graph}, MGMN \cite{ling2021multilevel} and H2MN \cite{zhang2021h2mn} derive node-level and graph-level embeddings containing interaction information at different scales through cross-graph attention (propagation), and then convert these embeddings into one hidden vector (e.g., the concatenation of the two graph-level embeddings).
SimGNN \cite{bai2019simgnn} and GraphSim \cite{bai2020learning} derive the corresponding hidden vector by applying a convolution operation to the pairwise node similarity matrix or extracting its histogram features, respectively. Ultimately, all these models map the hidden vector to the ground-truth similarity. Previous methods lack interpretability despite their exploitation of interaction information. It is unclear what the final hidden vector represents and how it maps to the ground truth. In natural language processing \cite{vashishth2019attention}, it has been demonstrated that models with more interpretability tend to boost performance. Many tasks use attention, such as machine translation \cite{luong2015effective}, language modelling \cite{liu2018learning}, and abstractive summarization \cite{ma2021global}. Attention not only provides interpretability \cite{wang2016attention,lin2017structured,ghaeini2018interpreting}, but also benefits model performance. In general, a graph similarity learning model with a more reasonable and interpretable paradigm can capture more critical information and filter out interference information, thus outperforming a less interpretable one. \begin{figure} \centering \includegraphics[width=1\linewidth]{Architecture.pdf} \caption{The entire flow of the proposed INFMCS framework. ({\bf Previous:}) Previous methods extract hidden vectors from interactions at different scales and then map them to the ground-truth similarity. This is less interpretable, and the mapping from hidden vectors to the ground truth is also agnostic. ({\bf Proposed:}) Our proposed model follows the observation that the more similar a pair of graphs is, the greater the ratio of the MCS size to the pair's average size is.
Although only the ground-truth similarity score is used during training, we can infer the MCS from inside the model.} \label{f1} \end{figure} To cope with this limitation, this study proposes a more interpretable end-to-end paradigm for graph similarity learning, named Similarity Computation via Maximum Common Subgraph Inference (INFMCS). Commonly, the greater the proportion of the MCS size to the average size of a graph pair \cite{bai2020learning}, i.e. the normalized MCS size ($\mathrm{nMCS}$)\footnote{$\mathrm{nMCS}\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right)= \frac{\left|\mathrm{MCS}\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right)\right|}{\left(\left|\mathcal{G}_{1}\right|+\left|\mathcal{G}_{2}\right|\right) / 2}$. In this paper, we always view the graph with the smaller size as $\mathcal{G}_{1}$, because the MCS size between two graphs is less than or equal to the size of the smaller graph.}, the more similar the two graphs are. Based on this fact, we infer the Maximum Common Subgraph (MCS) implicitly and then obtain the normalized MCS size in an end-to-end fashion. First, we perform message passing from $\mathcal{G}_{2}$ to $\mathcal{G}_{1}$ with a modified cross-graph attention mechanism, thereby obtaining $|\mathcal{G}_{1}|$ pairs of node-level embeddings. In each pair, one embedding represents a node in $\mathcal{G}_{1}$ and the other embedding represents the node in $\mathcal{G}_{2}$ most likely matching the former. After concatenating the two embeddings in each pair, we use an MLP to transform the concatenation into a matching score between zero and one, where one and zero indicate that the two nodes are matched and unmatched, respectively. Finally, we add up the $|\mathcal{G}_{1}|$ matching scores as the predicted MCS size and then normalize it to derive the predicted similarity score.
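To make the training target concrete, the normalized MCS size can be computed in a few lines of plain Python (a sketch: during training, the exact MCS size of a pair is assumed to come from an exact solver such as MCSPLIT):

```python
def normalized_mcs(mcs_size, n1, n2):
    """nMCS(G1, G2) = |MCS(G1, G2)| / ((|G1| + |G2|) / 2).

    The MCS can never be larger than the smaller graph, so the
    score always lies in [0, 1].
    """
    assert mcs_size <= min(n1, n2)
    return mcs_size / ((n1 + n2) / 2)

# A 5-node common subgraph shared by graphs of sizes 6 and 10:
print(normalized_mcs(5, 6, 10))  # 0.625
```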
The entire process is optimized end-to-end with either the GED\footnote{$\mathrm{nGED}\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right)=\exp(\frac{-\mathrm{GED}\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right)}{\left(\left|\mathcal{G}_{1}\right|+\left|\mathcal{G}_{2}\right|\right) / 2}).$}/MCS normalized similarity or the graph-graph classification label. We also stack a few vanilla transformer encoder layers \cite{vaswani2017attention} on top of graph convolution layers to capture more global information, which we call Graph Convolution with Transformer (GCwT). Unlike sentences in natural language, which have an inherent order \cite{vaswani2017attention}, graphs are permutation-invariant, so their nodes have no order. Thus we propose a novel Positional Encoding based on a permutation-invariant node ordering. Comprehensive experiments demonstrate that INFMCS consistently outperforms state-of-the-art baselines on graph-graph classification and regression tasks. Ablation experiments verify the effectiveness of individual components, including the proposed graph similarity learning paradigm and GCwT with the novel Positional Encoding. In brief, we highlight our main contributions as follows: \begin{itemize} \item We propose a more interpretable end-to-end paradigm for graph similarity learning. The interpretability is derived from inferring the MCS implicitly. \item To the best of our knowledge, the proposed Positional Encoding based on permutation-invariant node ordering is the first to be proposed and used in graph-transformer models for graph similarity tasks. Like the order of tokens in sentences, a permutation-invariant node ordering is an essential graph feature. \item We perform comprehensive experiments on six graph-graph regression datasets and two graph-graph classification datasets to verify the effectiveness of INFMCS. Ablation experiments verify the effectiveness of individual components. Also, a case study and visualization demonstrate its interpretability.
\end{itemize} \section{Related Work} Initial methods, such as SMPNN \cite{riba2018learning}, GCNMEAN and GCNMAX \cite{ktena2017distance} directly encode each graph as a graph-level fixed-length vector via GNNs and then only use graph-level interaction to predict similarity. After that, more models were proposed to exploit node-level or subgraph-level interactions by degrees. GMN \cite{li2019graph} uses cross-graph attention to derive node-level embeddings that contain another graph's information. SimGNN \cite{bai2019simgnn} and GraphSim \cite{bai2020learning} derive the corresponding hidden vector differently by applying the convolution operation to the pairwise node similarity matrix or extracting its histogram features. \cite{xu2021graph} first partitions graphs and then conducts node-wise comparison among subgraphs. MGMN \cite{ling2021multilevel} designs node-graph matching layers by comparing each node's representation of one graph with the other whole graph representation. After converting graph to hypergraph via random walk or K-hop neighbourhood, H2MN \cite{zhang2021h2mn} utilizes hypergraph convolution and subgraph matching blocks to predict similarity. Compared with previous methods, our proposed INFMCS is simpler. INFMCS does not need to compute node-wise interactions per layer \cite{li2019graph,bai2019simgnn} but only uses the last layer of node-level embeddings to capture cross-graph interactions. Second, ours does not need to consider multi-scale matching scores of $|\mathcal{G}_{1}| \times |\mathcal{G}_{2}|$ pairs \cite{ling2021multilevel}, only calculates $|\mathcal{G}_{1}|$ matching scores. Third, ours does not need to preprocess graph data, such as graph partition \cite{xu2021graph} and hypergraph construction \cite{zhang2021h2mn}. Although our method avoids the above computational burden, it achieves good performance. 
\section{Model Design} Our approach follows the hypothesis that the more similar a pair of graphs is, the greater the ratio of the MCS size to the pair's average size is. To achieve this, we first derive $|\mathcal{G}_{1}|$ pairs of node embeddings via cross-graph attention and then transform them into matching scores. Ideally, the sum of the $|\mathcal{G}_{1}|$ matching scores is precisely equal to the size of the MCS. Finally, we normalize the sum of these scores to predict the similarity. We also propose a Graph Convolution with Transformer based on a permutation-invariant Positional Encoding to fill the gap between shallow GCNs and a sizeable receptive field. The overall process is end-to-end and is outlined in Figure \ref{f1}. Before describing the modules, we introduce relevant preliminaries that set the background for the remainder of the paper. \textbf{Graph Similarity Learning} Given a pair of input graphs $(\mathcal{G}_{1}, \mathcal{G}_{2})$, the aim of graph similarity learning is to produce a similarity score $y=s\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right) \in \mathcal{Y}$. The graph $\mathcal{G}_{1}=\left(\mathcal{V}_{1}, \mathcal{E}_{1}\right)$ is represented as a set of $N$ nodes $v_{i} \in \mathcal{V}_{1}$ with a feature matrix $X_{1} \in \mathcal{R}^{N \times d}$, and edges $\left(v_{i}, v_{i^{\prime}}\right) \in \mathcal{E}_{1}$ formulating an adjacency matrix $A_{1} \in \mathcal{R}^{N \times N}$. Similarly, the second graph $\mathcal{G}_{2}=\left(\mathcal{V}_{2}, \mathcal{E}_{2}\right)$ can be represented in the same way. For the graph-graph classification task, the scalar $y$ represents the class label, i.e., $y \in \mathcal{Y}=\{0,1\}$; for the graph-graph regression task, the scalar $y$ measures the graph similarity, i.e., $y \in \mathcal{Y}=\left[0,1\right]$.
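The preliminaries can be illustrated with a minimal sketch of one way to hold a graph pair in memory; the helper below is purely illustrative and is not the paper's data pipeline:

```python
import random

def make_graph(num_nodes, edges, feat_dim=4, seed=0):
    """Build (X, A): an N x d node feature matrix and an N x N
    symmetric adjacency matrix, both as nested lists."""
    rng = random.Random(seed)
    X = [[rng.gauss(0.0, 1.0) for _ in range(feat_dim)]
         for _ in range(num_nodes)]
    A = [[0.0] * num_nodes for _ in range(num_nodes)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1.0  # undirected edge (v_i, v_j)
    return X, A

# G1 is always taken to be the smaller graph (|G1| <= |G2|).
X1, A1 = make_graph(3, [(0, 1), (1, 2)])
X2, A2 = make_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(len(X1), len(X1[0]))  # 3 4
```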
\subsection{Similarity Computation} Notably, in this paper we always view the graph with the smaller size as $\mathcal{G}_{1}$, since the MCS size between two graphs is less than or equal to the size of the smaller graph. Given the node representations of the last layer of the graph representation learning $\mathbf{H}^{1}=[\mathbf{h}_{1}^{1};\mathbf{h}_{2}^{1}; \cdots;\mathbf{h}_{|\mathcal{V}_{1}|}^{1}] \in \mathcal{R}^{|\mathcal{V}_{1}| \times d}$ for $\mathcal{G}_{1}$ and $\mathbf{H}^{2}=[\mathbf{h}_{1}^{2};\mathbf{h}_{2}^{2}; \cdots;\mathbf{h}_{|\mathcal{V}_{2}|}^{2}] \in \mathcal{R}^{|\mathcal{V}_{2}| \times d}$ for $\mathcal{G}_{2}$, we pass the message from $\mathcal{G}_{2}$ to $\mathcal{G}_{1}$ by the modified cross-graph attention $a_{ij}$, and then obtain the representation $\mathbf{h}_{i^{\prime}}^{1}\in \mathcal{R}^{1 \times d}$ of the node $v_{j} \in \mathcal{V}_{2}$ that most likely matches node $v_{i} \in \mathcal{V}_{1}$: \begin{equation} \label{e4} \begin{aligned} a_{ij} =\frac{\exp \left(s_{h}\left(\mathbf{h}_{i}^{1}, \mathbf{h}_{j}^{2}\right) \times \tau_{*}^{-1}\right)}{\sum_{j^{\prime}} \exp \left(s_{h}\left(\mathbf{h}_{i}^{1}, \mathbf{h}_{j^{\prime}}^{2}\right)\times \tau_{*}^{-1}\right)}, \quad \mathbf{h}_{i^{\prime}}^{1} &=\sum_{j}a_{ij}\mathbf{h}_{j}^{2}, \end{aligned} \end{equation} where $s_{h}$ is a vector space similarity metric, such as Euclidean or cosine similarity. In order to discretize $a_{ij}$, we add a learnable parameter $\tau_{*} \in (0,1]$. In other words, it makes the weight $a_{ij}$ of one node $v_{j} \in \mathcal{V}_{2}$ ($j=\argmax_{j^{\prime}}a_{ij^{\prime}}$) tend to one and the others tend to zero, since $\sum_{j^{\prime}}a_{ij^{\prime}}=1$. Thus, $\mathbf{h}_{i^{\prime}}^{1}$ is the representation of the node in $\mathcal{G}_{2}$ corresponding to node $v_{i}$ with the highest probability. After concatenating $\mathbf{h}_{i}^{1}$ with $\mathbf{h}_{i^{\prime}}^{1}$, we transform the concatenation into the matching score $s_{i}$ by an MLP.
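A minimal sketch of the sharpened cross-graph attention in Eq.~\ref{e4}, assuming cosine similarity for $s_{h}$ and a small fixed value for $\tau_{*}$ (in the model, $\tau_{*}$ is learnable):

```python
import math

def cross_graph_attention(H1, H2, tau):
    """Attention from each node of G1 over all nodes of G2.

    s_h is cosine similarity here; a small tau in (0, 1] sharpens the
    softmax so each row of `A` approaches a one-hot matching indicator.
    Returns A (|G1| x |G2|) and the attended embeddings h'_i = sum_j a_ij h_j^2.
    """
    def cos(u, v):
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return sum(a * b for a, b in zip(u, v)) / (nu * nv)

    A, H1_prime = [], []
    for h_i in H1:
        logits = [cos(h_i, h_j) / tau for h_j in H2]
        m = max(logits)                      # shift for numerical stability
        exps = [math.exp(l - m) for l in logits]
        Z = sum(exps)
        a_i = [e / Z for e in exps]
        A.append(a_i)
        H1_prime.append([sum(a * h_j[k] for a, h_j in zip(a_i, H2))
                         for k in range(len(H2[0]))])
    return A, H1_prime

H1 = [[1.0, 0.0]]
H2 = [[0.9, 0.1], [0.0, 1.0]]
A, _ = cross_graph_attention(H1, H2, tau=0.05)
print(round(A[0][0], 3))  # ~1.0: node 0 of G1 matches node 0 of G2
```

With a small $\tau_{*}$, each attention row approaches a one-hot matching indicator, which is precisely the discretization effect described above.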
Ideally, the sum of the $|\mathcal{G}_{1}|$ matching scores is precisely equal to the size of the MCS. Finally, we normalize the sum of these predicted scores to compute the similarity $\hat{y}$: \begin{equation} \label{e5} \hat{y}=\frac{\sum_{i}s_{i}}{(|\mathcal{G}_{1}|+|\mathcal{G}_{2}|) / 2}, \quad s_{i} = \mathrm{sigmoid}\left(\mathrm{MLP}\left(\mathbf{h}_{i}^{1} \| \mathbf{h}_{i^{\prime}}^{1} \right)\right). \end{equation} The loss functions are defined as follows: \begin{equation} \label{e6} \begin{aligned} \mathcal{L}_{c}=-\frac{1}{|\mathrm{D}|} \sum_{i=1}^{|\mathrm{D}|}\left[ y_{i} \log \left(\hat{y}_{i}\right)+\left(1-y_{i}\right) \log \left(1-\hat{y}_{i}\right)\right] \quad \text{or} \quad \mathcal{L}_{r}=\frac{1}{|\mathrm{D}|} \sum_{i=1}^{|\mathrm{D}|}\left(y_{i}-\hat{y}_{i}\right)^{2}, \end{aligned} \end{equation} where $\mathcal{L}_{c}$ represents the binary cross-entropy loss for the graph-graph classification task and $\mathcal{L}_{r}$ is the mean square error loss for the graph-graph regression task. $y_{i}$ denotes the ground-truth supervision information, and $|\mathrm{D}|$ is the size of the dataset. \subsection{Graph Convolution with Transformer} Before exploiting node-wise interactions, we need to obtain node-level embeddings as in previous methods. GCN \cite{kipf2016semi} is the most popular spatial graph convolution. In this study, we use it to compute node-level embeddings. For simplicity, we denote the encoding process by $\mathrm{GCN}(\cdot)$ and describe architectural details in the appendix.
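As a rough illustration of $\mathrm{GCN}(\cdot)$, a single simplified propagation layer can be sketched as follows; mean aggregation over the closed neighbourhood stands in for the symmetric normalization of \cite{kipf2016semi}, and this generic sketch is not the paper's exact architecture:

```python
def gcn_layer(A, X, W):
    """One simplified GCN propagation step: H = ReLU(mean-aggregate(X) @ W).

    Each node averages the features of itself and its neighbours (a stand-in
    for the usual symmetric normalization), applies the weight matrix W,
    then a ReLU nonlinearity.
    """
    n, d_in, d_out = len(A), len(X[0]), len(W[0])
    H = []
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j] or j == i]  # closed neighbourhood
        agg = [sum(X[j][k] for j in nbrs) / len(nbrs) for k in range(d_in)]
        H.append([max(0.0, sum(agg[k] * W[k][o] for k in range(d_in)))
                  for o in range(d_out)])
    return H

# A 3-node path graph, 2-d features, identity weights:
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
H = gcn_layer(A, X, W)
print(H[0])  # [0.5, 0.5]
```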
The $\mathrm{GCN}$ computes node representations $\mathbb{H} \in \mathcal{R}^{|\mathcal{V}| \times d}$ via \begin{equation} \label{e1} \begin{aligned} \mathbb{H}=\left[\mathrm{h}_{1};\mathrm{h}_{2}; \cdots;\mathrm{h}_{|\mathcal{V}|}\right], \mathrm{h}_{i}=\mathrm{GCN}\left(\mathcal{G},\mathbf{x}_{i},\left\{\mathbf{x}_{i j}\right\}_{j \in \mathcal{N}(i)}\right), \end{aligned} \end{equation} where $\mathrm{h}_{i} \in \mathcal{R}^{1 \times d}$ and $\mathcal{N}(i)$ denote the representation and the neighbors of node $v_{i}$, respectively. Graph similarity learning requires not only node embeddings that perceive local information but also node representations that contain global information, since many subtle differences are spread across the whole graph \cite{bai2020learning,zhang2021h2mn}. However, over-smoothing \cite{rong2019dropedge,kipf2016semi} prevents graph convolutions from stacking many layers, resulting in a gap between shallow GCNs and a sizeable receptive field. To fill this gap, we stack several vanilla transformer encoder layers \cite{vaswani2017attention} on top of the graph convolution layers to capture more global information naturally. The global perception ability of the Transformer comes from the self-attention mechanism, which internally calculates the correlation between embeddings. The attention weight matrix is equivalent to constructing a fully connected graph, over which message passing is then performed. For sentence representation \cite{vaswani2017attention}, extensive experiments show the importance of Positional Encoding. Sentences with the same tokens but in different orders have different semantics (Figure \ref{f2}), which shows that order is an inherent feature of sentences. However, graphs are permutation-invariant, so their nodes have no order.
Thus, we propose a permutation-invariant node ordering $\mathbf{C} \in \mathcal{R}^{|\mathcal{V}|}$ based on closeness centrality \cite{wasserman1994social}: \begin{equation} \label{e2} \mathbf{C}=\mathrm{argtop}|\mathcal{V}|\left(\left[\mathrm{c}_{1},\mathrm{c}_{2}, \cdots, \mathrm{c}_{|\mathcal{V}|}\right]\right), \mathrm{c}_{i}=\frac{n-1}{|\mathcal{V}|-1} \frac{n-1}{\sum_{j=1}^{n-1} d(j, i)}, \end{equation} where $\mathrm{c}_{i}$ is the closeness centrality of node $v_{i}$. It is the reciprocal of the average shortest path distance to $v_{i}$ over all other nodes, and higher closeness values indicate higher centrality. Besides, $n$ is the number of nodes in the connected part of the graph containing the node $v_{i}$ and $\frac{n-1}{|\mathcal{V}|-1}$ is the proportion of this connected component in the entire graph. Hence, Eq. \ref{e2} can be generalized to graphs with more than one connected component, where the size of the connected component scales each node. $\mathrm{argtop}|\mathcal{V}|(\cdot)$ calculates the rank of each element in the vector in descending order. For example, $\mathrm{argtop}|\mathcal{V}|(\left[0.4,0.6,0.1,0.9\right])=[2,1,3,0]$. Our model also generalizes to edge representations. We convert the original graph to a line graph and then compute the permutation-invariant node order since the edges in the original graph construct the nodes in the line graph. \begin{figure} \centering \includegraphics[width=1\linewidth]{PE.pdf} \caption{{\bf a:} Sentences with the same tokens but in different order have different semantics, which shows that order is also an inherent feature of sentences. For example, sentence $s_{1}$ and sentence $s_{2}$ have different semantics, while the same token gets the identical representation by directly using self-attention without PE. {\bf b:} Our proposed Positional Encoding is permutation-invariant. 
Even if the nodes in the graph are re-permuted, the positional index of these nodes does not change.} \label{f2} \end{figure} We denote the vanilla transformer encoder by $\mathrm{TransformerEncoder}(\cdot)$ for simplicity and describe architectural details in the appendix. Given a learnable Positional Encoding dictionary $\mathbf{PE} \in \mathcal{R}^{m \times d}(m \gg |\mathcal{V}|)$, the final node representations $\mathbf{H} \in \mathcal{R}^{|\mathcal{V}| \times d}$ are derived by Eq. \ref{e3}: \begin{equation} \label{e3} \begin{aligned} \mathbf{H}=\mathrm{TransformerEncoder}(\mathscr{H}), \mathscr{H} = \mathbb{H} + \mathbf{PE}\left[\mathbf{C}\right], \end{aligned} \end{equation} where $\mathbf{PE}\left[\mathbf{C}\right] \in \mathcal{R}^{|\mathcal{V}| \times d}$ is the Positional Encoding according to the node ordering $\mathbf{C}$. \section{Evaluation} In this section, we systematically evaluate the performance of our INFMCS in comparison with recently proposed state-of-the-art approaches for both the graph-graph classification and graph-graph regression tasks, with the goal of addressing the following questions: {\bf Q1: }How effective, efficient and robust is INFMCS compared to the state-of-the-art approaches under the MCS/GED metrics? {\bf Q2: }How do the proposed similarity computation paradigm and GCwT with permutation-invariant Positional Encoding improve performance? {\bf Q3: }Does INFMCS have stronger interpretability? {\bf Data }For the graph-graph classification task, we use {\bf FFmpeg}\footnote{https://ffmpeg.org/} and {\bf OpenSSL}\footnote{https://www.openssl.org/} \cite{ling2021multilevel} as datasets, where each graph denotes a binary function's control flow graph (CFG). We take two CFGs compiled from the same source code as positive samples, i.e., $s\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right)=1$, and CFGs compiled from different source code as negative samples, i.e., $s\left(\mathcal{G}_{1}, \mathcal{G}_{2}\right)=0$.
Moreover, we split each dataset into three sub-datasets according to graph size in order to investigate the impact of graph size. For the graph-graph regression task, we employ three real datasets and three synthetic datasets: {\bf AIDS(2-15)}, {\bf LINUX(2-15)}, {\bf PTC\_MM(all)}, {\bf BA100}, {\bf BA200} and {\bf BA300}. We extract graphs from the original datasets ({\bf AIDS \cite{riesen2008iam}}, {\bf LINUX \cite{wang2012efficient}}, {\bf PTC\_MM \cite{helma2001predictive}})\footnote{https://chrsmrrs.github.io/datasets/docs/datasets/} to construct the three real datasets, where the values in parentheses indicate the size range of the extracted graphs. To verify performance on large graphs, we also use the Barabási–Albert model \cite{jeong2003measuring} to generate synthetic graphs. We generated three synthetic datasets with graph sizes around 100, 200, and 300 respectively, called {\bf BA100}, {\bf BA200} and {\bf BA300}. Detailed descriptions and statistics of both the real and synthetic datasets can be found in the appendix. {\bf Evaluation }For the graph-graph classification task, we use \textit{Area Under the Curve (AUC)} \cite{bradley1997use} to evaluate the model. For the graph-graph regression task, we use \textit{averaged Mean Squared Error (mse)}, \textit{Spearman’s Rank Correlation Coefficient ($\rho$)} \cite{spearman1961proof} and \textit{Precision at k (p@k)} to test the accuracy and ranking performance. To avoid information leakage, we let the model compute the similarity between the query graph and each graph in the original test set for ranking. {\bf Baselines }We use nine SOTA learning-based methods as baselines, including GMN \cite{li2019graph}, SimGNN \cite{bai2019simgnn}, GraphSim \cite{bai2020learning}, MGMN \cite{ling2021multilevel}, H2MN \cite{zhang2021h2mn}, GOTSim \cite{doan2021interpretable}, PSimGNN \cite{xu2021graph}, EMBAVG \cite{bai2020learning} and SMPNN \cite{riba2018learning}.
Except for PSimGNN \cite{xu2021graph} and EMBAVG \cite{bai2020learning}, we use the source code released by the authors for the regression task; we reproduced these two baselines according to the original papers. We also tune their hyperparameters on the validation set for the regression task. For the classification task, we use the AUC scores reported in \cite{zhang2021h2mn} as the results for each baseline. We also use six classical algorithms as baselines to compare running time, including A* \cite{riesen2013novel}, MCSPLIT \cite{mccreesh2017partitioning}, BEAM \cite{neuhaus2006fast}, HUNGARIAN \cite{riesen2009approximate}, VJ \cite{fankhauser2011speeding} and HED \cite{fischer2015approximation}. {\bf Implementation Settings }Our proposed INFMCS is implemented with the Deep Graph Library (DGL) \cite{wang2019dgl} and PyTorch \cite{paszke2019pytorch}. Regarding the model's hyperparameters, we fix the number of GCN layers to 3. We search the number of Transformer layers $L \in \{2, 4, 6, 8 \}$ and the dimension of hidden layers $d \in \{128, 256, 512 \}$, where the dimension of all hidden layers is set to the same value. More details about the hyper-parameter search can be found in the appendix. We conduct all the experiments on a machine with an Intel Xeon 4114 CPU and two Nvidia Titan GPUs. As for training, we use the Adam algorithm for optimization \cite{kingma2014adam} and fix the initial learning rate to 0.001. The proposed model is trained on the real datasets for 100 epochs with a batch size of 128 and on the synthetic datasets for 30 epochs with a batch size of 32. Checkpoints are saved after each epoch to select the best checkpoint on the evaluation set. The source code can be found in the supplementary materials.
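For reference, the two ranking metrics can be sketched in plain Python; this simplified version assumes no tied scores and is not the exact implementation used in the experiments:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation, assuming no ties among scores."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def precision_at_k(true_scores, pred_scores, k):
    """p@k: overlap between the top-k items under true and predicted scores."""
    top = lambda s: set(sorted(range(len(s)), key=lambda i: -s[i])[:k])
    return len(top(true_scores) & top(pred_scores)) / k

true = [0.9, 0.5, 0.7, 0.1]
pred = [0.8, 0.6, 0.65, 0.2]
print(spearman_rho(true, pred))       # 1.0 (identical ranking)
print(precision_at_k(true, pred, 2))  # 1.0
```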
\begin{table}[] \caption{Graph-Graph classification results (AUC score) with standard deviation (in percentage).} \label{t_cls} \centering \begin{tabular}{c|lll|lll} \hline \multirow{2}{*}{Datasets} & \multicolumn{3}{c|}{FFmpeg} & \multicolumn{3}{c}{OpenSSL} \\ \cline{2-7} & \multicolumn{1}{c}{{[}3, 200{]}} & \multicolumn{1}{c}{{[}20, 200{]}} & \multicolumn{1}{c|}{{[}50, 200{]}} & \multicolumn{1}{c}{{[}3, 200{]}} & \multicolumn{1}{c}{{[}20, 200{]}} & \multicolumn{1}{c}{{[}50, 200{]}} \\ \hline \multicolumn{1}{l|}{SimGNN} & 95.38±0.76 & 94.32±1.01 & 93.45±0.54 & 95.96±0.31 & 93.38±0.82 & 94.25±0.85 \\ \multicolumn{1}{l|}{GMN} & 94.15±0.62 & 95.92±1.38 & 94.76±0.45 & 96.43±0.61 & 93.03±3.81 & 93.91±1.65 \\ \multicolumn{1}{l|}{GraphSim} & 97.46±0.30 & 96.49±0.28 & 94.48±0.73 & 96.84±0.54 & 94.97±0.98 & 93.66±1.84 \\ \multicolumn{1}{l|}{MGMN} & 98.07±0.06 & 98.29±0.10 & 97.83±0.11 & 96.90±0.10 & 97.31±1.07 & 95.87±0.88 \\ \multicolumn{1}{l|}{PSimGNN} & 96.67±0.54 & 96.86±0.95 & 95.23±0.15 & 96.10±0.46 & 94.67±1.30 & 93.46±1.59 \\ \multicolumn{1}{l|}{GOTSim} & 96.93±0.34 & 97.01±0.52 & 95.65±0.31 & 97.87±0.49 & 96.42±1.89 & 95.97±1.06 \\ \multicolumn{1}{l|}{H2MN} & 98.28±0.20 & 98.54±0.14 & 98.30±0.29 & 98.27±0.16 & 98.47±0.38 & 97.78±0.75 \\ \hline \multicolumn{1}{l|}{INFMCS} & \textbf{98.49±0.09} & \textbf{99.36±0.13} & \textbf{99.48±0.20} & \textbf{98.34±0.20} & \textbf{99.14±0.31} & \textbf{99.26±0.45} \\ \hline \end{tabular} \end{table} \subsection{Overall Performance} {\bf Graph-Graph Classification Task }We operate the training process five times and report the mean and standard deviation in \textit{AUC}. Our method is straightforward and achieves state-of-the-art performance on both datasets under all settings. The graph-graph classification performance is illustrated in Table \ref{t_cls}. We have two observations. 
{\bf First}, compared with GMN \cite{li2019graph}, SimGNN \cite{bai2019simgnn} and GraphSim \cite{bai2020learning}, our method obtains relative gains of around 5\%. This indicates that our method makes better use of node-wise interactions. {\bf Second}, compared with PSimGNN \cite{xu2021graph} and H2MN \cite{zhang2021h2mn}, our method obtains relative gains of around 2\%. This implies that improved graph representation ability can benefit experimental results. A more detailed analysis can be found in the ablation study. {\bf Notably}, we only exploit node-wise interactions at the last layer, while previous methods exploit interaction information at each layer. This shows that our computational paradigm is simpler and has lower complexity. Moreover, the construction process of the classification label is inconsistent with the internal logic of our method, which demonstrates that our method is robust even if the labels are MCS-independent. {\bf Graph-Graph Regression Task }For the graph-graph regression task, we also conduct the experiments five times and report the mean performance. The detailed performances on the real and synthetic datasets are shown in Tables \ref{t_reg_mcs_real} and \ref{t_reg_ba}. The results for the GED metric can be found in the appendix. Our model achieves state-of-the-art performance on both the MCS and GED metrics. In general, we can draw conclusions similar to those for the classification task. As for the synthetic datasets, we observe that our method still outperforms the other methods, which shows that our method scales to larger graphs. Notably, the results of our method on the MCS metric are about eight times better than those on the GED metric. We infer that this results from the model's internal logic being consistent with the MCS label. The MSE of the model's predictions is close to zero, which means we can infer a fairly accurate MCS size from the average size of the input graph pair. This increases interpretability.
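This inference simply inverts the nMCS normalization; a sketch with illustrative sizes:

```python
def inferred_mcs_size(y_hat, n1, n2):
    """Invert nMCS: |MCS| ~= y_hat * (|G1| + |G2|) / 2, rounded to an integer."""
    return round(y_hat * (n1 + n2) / 2)

# With mse near zero, a predicted similarity of 0.62 for graphs of
# sizes (6, 10) recovers an MCS of 5 nodes:
print(inferred_mcs_size(0.62, 6, 10))  # 5
```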
\begin{table}[] \caption{Graph-Graph regression results about mse($\times 10^{-2}$), $\rho$ and p@10 on the MCS metric.} \label{t_reg_mcs_real} \centering \begin{tabular}{c|ccc|ccc|ccc} \hline Datasets & \multicolumn{3}{c|}{AIDS(2-15)} & \multicolumn{3}{c|}{LINUX(2-15)} & \multicolumn{3}{c}{PTC\_MM(all)} \\ \hline Metrics & mse$\downarrow$ & $\rho$$\uparrow$ & p@10$\uparrow$ & mse$\downarrow$ & $\rho$$\uparrow$ & p@10$\uparrow$ & mse$\downarrow$ & $\rho$$\uparrow$ & p@10$\uparrow$ \\ \hline \multicolumn{1}{l|}{EMBAVG} & 33.20 & 0.0045 & 0.0540 & 0.83 & 0.5922 & 0.1340 & 35.03 & 0.0497 & 0.3471 \\ \multicolumn{1}{l|}{GMN} & 32.20 & 0.0039 & 0.0578 & 3.99 & 0.0561 & 0.1340 & 35.03 & 0.0370 & 0.3500 \\ \multicolumn{1}{l|}{GraphSim} & 2.73 & 0.1688 & 0.0578 & 0.81 & 0.2260 & 0.1340 & 3.21 & 0.5001 & 0.3500 \\ \multicolumn{1}{l|}{SimGNN} & 2.65 & 0.1784 & 0.0596 & 0.83 & 0.4281 & 0.2370 & 3.27 & 0.5280 & 0.3500 \\ \multicolumn{1}{l|}{SMPNN} & 2.89 & 0.2046 & 0.1056 & 12.59 & 0.5502 & 0.4280 & 4.67 & 0.4558 & 0.4353 \\ \multicolumn{1}{l|}{MGMN} & 1.69 & 0.5300 & 0.1683 & 0.87 & 0.5351 & 0.3664 & 1.43 & 0.7329 & 0.5200 \\ \multicolumn{1}{l|}{PSimGNN} & 2.54 & 0.1031 & 0.0452 & 1.83 & 0.4311 & 0.2668 & 3.43 & 0.4359 & 0.4280 \\ \multicolumn{1}{l|}{GOTSim} & 1.77 & 0.5550 & 0.1763 & 0.61 & 0.3752 & 0.2569 & 2.75 & 0.3495 & 0.3431 \\ \multicolumn{1}{l|}{H2MN} & 1.29 & 0.6745 & 0.2097 & 0.44 & 0.6364 & 0.4795 & 1.07 & 0.8823 & 0.7182 \\ \hline \multicolumn{1}{l|}{INFMCS} & \textbf{0.30} & \textbf{0.9352} & \textbf{0.7976} & \textbf{0.02} & \textbf{0.9814} & \textbf{0.8870} & \textbf{0.71} & \textbf{0.9205} & \textbf{0.7794} \\ \hline \end{tabular} \end{table} \begin{table}[] \caption{Graph-Graph regression results about mse($\times 10^{-2}$) on the synthetic datasets.} \label{t_reg_ba} \centering \begin{tabular}{c|cc|cc|cc} \hline Datasets & \multicolumn{2}{c|}{BA100} & \multicolumn{2}{c|}{BA200} & \multicolumn{2}{c}{BA300} \\ \hline Metric & mse(MCS) & mse(GED) & mse(MCS) & 
mse(GED) & mse(MCS) & mse(GED) \\ \hline \multicolumn{1}{l|}{EMBAVG} & 16.21 & 10.581 & 20.24 & 9.171 & 21.79 & 12.732 \\ \multicolumn{1}{l|}{GMN} & 16.21 & 8.831 & 20.24 & 9.002 & 20.14 & 8.756 \\ \multicolumn{1}{l|}{ GraphSim} & 0.20 & 0.065 & 0.44 & 0.140 & 0.57 & 0.062 \\ \multicolumn{1}{l|}{SimGNN} & 0.20 & \textbf{0.060} & 0.05 & 0.180 & 0.02 & 0.110 \\ \multicolumn{1}{l|}{SMPNN} & 1.10 & 22.530 & 0.32 & 23.920 & 0.24 & 24.290 \\ \multicolumn{1}{l|}{MGMN} & 0.35 & 1.033 & 0.27 & 0.901 & 0.44 & 0.071 \\ \multicolumn{1}{l|}{PSimGNN} & 0.48 & 1.932 & 0.51 & 1.366 & 0.67 & 0.103 \\ \multicolumn{1}{l|}{H2MN} & 0.02 & 0.187 & 0.01 & 0.532 & 0.02 & 0.034 \\ \hline \multicolumn{1}{l|}{INFMCS} & \textbf{5.49e-7} & \textbf{0.061} & \textbf{1.80e-5} & \textbf{0.011} & \textbf{0.003} & \textbf{0.005} \\ \hline \end{tabular} \end{table} {\bf Efficiency }We show the running time between different methods in Figure \ref{f_efficiency} in order to evaluate the efficiency of INFMCS. As we can see, the learning-based approaches are consistently faster than the traditional methods in all datasets, especially the exact algorithm MCSPLIT. We also observe that INFMCS is faster than other learning-based approaches. We attribute the efficiency gains to two points. First, unlike previous methods that exploit the interaction information of each layer, our method only needs to compute the interaction information of the last layer. Second, our method does not require building hypergraphs or graph partitions. {\bf Hyperparameter sensitivity analysis }We fixed the number of Transformer layers $L$ to 8 and the dimension of hidden layers $d$ to 256 to explore the impact of several vital hyper-parameters on OpenSSL subsets (Figure \ref{f_sensitivity}). The performance under different hyperparameters is consistent, revealing the robustness of our method. We observe that the performance of INFMCS improves as the number of Transformer layers increases. 
We hypothesise that more Transformer layers allow node-level embeddings to contain more global information, thus improving experimental results. A comparison of the number of parameters of our method with other baselines can be found in the appendix. \subsection{Ablation Study} {\bf BASE}\footnote{MCS-RSC is MCS-Related Similarity Computation. BASE = GCN + MCS-RSC; BASE+T = GCwT-w/o-PE + MCS-RSC; BASE+T+PE = GCwT-w/-PE + MCS-RSC; BASE+H = HyperGCN + MCS-RSC; H2MN-H = GCN + H2MN-RSC.} denotes the proposed similarity computation paradigm whose graph representation model (GRM) is GCN. {\bf BASE+T}'s GRM is GCwT without the permutation-invariant Positional Encoding. {\bf BASE+T+PE}'s GRM is GCwT with the permutation-invariant Positional Encoding. {\bf BASE+}$\mathbf{H}_{rw}$'s GRM is the hypergraph convolution used in \cite{zhang2021h2mn}. We use part of their code\footnote{https://github.com/cszhangzhen/H2MN} to obtain node embeddings and then pass these to our paradigm end-to-end\footnote{We use a random walk to construct the hypergraph and set the hyperparameter k to 5.}. {\bf H2MN-H} denotes the H2MN model with hyperparameter k set to 1, in which case hypergraph convolution degenerates to GCN. The results of the ablation study are shown in Tables \ref{t_ab_auc} and \ref{t_ab_mcs}, from which we make four observations: {\bf 1)} {\bf BASE} outperforms previous methods except for H2MN, so we compare {\bf BASE} with {\bf H2MN-H} to demonstrate the effectiveness of the computation paradigm. We find that {\bf BASE} outperforms {\bf H2MN-H}, which means a more interpretable forward computation can improve performance. {\bf 2)} {\bf BASE+T} performs worse than {\bf BASE}, indicating that GCwT without the permutation-invariant Positional Encoding degrades performance. We attribute this to the loss of graph structural information, since stacking Transformer layers alone is equivalent to treating the graph as a fully connected graph.
{\bf 3)} {\bf BASE+T+PE} outperforms {\bf BASE}, which confirms that GCwT with the permutation-invariant Positional Encoding not only increases the receptive field of the node but also learns the structural information through PE. More on Positional Encoding analysis can be found in the next subsection. {\bf 4)} {\bf BASE+T+PE} outperforms {\bf BASE+}$\mathbf{H}_{rw}$, implying that GCwT can be better adapted to the proposed computation paradigm and have decent global representation. \begin{minipage}{\textwidth} \begin{minipage}[t]{0.5\textwidth} \captionof{table}{Ablation study on the FFmpeg.} \label{t_ab_auc} \centering \begin{tabular}{l|ccc} \hline \multicolumn{1}{c|}{(AUC score)} & 3-200 & 20-200 & 50-200 \\ \hline H2MN-H & 97.50 & 98.12 & 98.05 \\ BASE & 98.16 & 98.83 & 98.87 \\ BASE+T & 97.13 & 98.20 & 98.48 \\ BASE+H & 98.01 &98.42 & 98.56 \\ BASE+T+PE & 98.49 & 99.36 & 99.49 \\ \hline \end{tabular} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \captionof{table}{Ablation study on the MCS metric.} \label{t_ab_mcs} \centering \begin{tabular}{l|ccc} \hline \multicolumn{1}{c|}{(mse$\times 10^{-2}$)} & AIDS & LINUX & PTC\_MM \\ \hline H2MN-H & 1.63 & 0.56 & 1.18 \\ BASE & 1.41 & 0.36 & 0.98 \\ BASE+T & 3.21 & 0.93 & 1.24 \\ BASE+H & 1.70 & 0.21 & 1.02 \\ BASE+T+PE & 0.30 & 0.02 & 0.71 \\ \hline \end{tabular} \end{minipage} \end{minipage} \begin{figure} \centering \begin{minipage}[t]{0.4\textwidth} \centering \subfigure{ \begin{minipage}[b]{1\textwidth} \centering \centerline{\includegraphics[width=1\textwidth]{efficiency.pdf}} \caption{Running time comparisons.} \label{f_efficiency} \centering \centerline{\includegraphics[width=1\textwidth]{sensitivity.pdf}} \caption{Sensitive analysis of test set.} \label{f_sensitivity} \end{minipage} } \end{minipage} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \begin{minipage}[t]{0.5\textwidth} \centering \subfigure{ \begin{minipage}[b]{0.95\textwidth} \centering \centerline{\includegraphics[width=1\textwidth]{scatter.pdf}} 
\caption{Positional Encoding analysis.} \label{f_PE} \end{minipage} } \end{minipage} \end{figure} \subsection{Interpretability Analysis, Case Study and Discussion} {\bf Positional Encoding }We reduce the Positional Encoding embeddings trained on AIDS(2-15) to two dimensions via PCA and present them in Fig. \ref{f_PE}, where each red point denotes a Positional Encoding. Since the size of most samples is no more than 12, embeddings after position 12 are insufficiently trained. We present the first 12 Positional Encoding embeddings and make an interesting observation: the Euclidean distance between Positional Encoding 0 and the subsequent Positional Encodings gradually increases. This Euclidean distance corresponds to each node's centrality in the graph, which means the Positional Encoding embeddings preserve the graph's structural information well. \begin{figure}[tp] \centering \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=0.9\linewidth]{matching.pdf} \captionof{figure}{Visualizations of inferring MCS.} \label{f_vis} \end{minipage}% \,\,\,\ \hfil \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=0.9\linewidth]{mcs_bar.pdf} \caption{Interpretability analysis.} \label{f_inter} \end{minipage} \end{figure} \begin{figure} \centering \includegraphics[width=0.94\linewidth]{ranks.pdf} \caption{Visualization of ranking results. From left to right: AIDS, LINUX, PTC\_MM.} \label{f_ranks} \end{figure} {\bf Infer MCS }During inference, we infer the MCS size $m$ by multiplying the average size of the graph pair by the similarity score. Next, we extract a subgraph from $\mathcal{G}_{1}$ that consists of nodes corresponding to the top $m$ matching scores $s$ in Eq. \ref{e5}. Also, we use MCSPLIT to extract the true MCS. To evaluate the quality of our predicted MCS, we compute the similarity between the predicted and the actual MCS. The similarity results and some visualizations are presented in Fig.
\ref{f_vis} and \ref{f_inter}, where the red subgraphs are the common subgraphs between the predicted and the actual MCS. The mean absolute error between the predicted and actual sizes is larger on the GED metric. We attribute this to the inconsistency between the model logic and the label construction process. However, we note that the similarity between the predicted MCS and the ground-truth MCS is higher than 0.8 on both metrics except for the AIDS (MCS) results, revealing the interpretability of our approach. We can predict the similarity score and infer the MCS to understand why this similarity score is predicted. {\bf Case Study }We demonstrate three example queries, one from each dataset, in Fig. \ref{f_ranks}. The first row shows the graphs returned by our model in each demo, with the predicted similarity for each graph shown at the top. The bottom row describes the graphs returned by MCSPLIT. Notably, in the case of LINUX and PTC\_MM, the top 5 results are precisely the graphs isomorphic to the query. {\bf Why is INFMCS effective? }For the computation paradigm, our method ultimately only needs to consider $n$ pairs of interaction information, which means it captures more critical information that affects the similarity compared to previous methods. Since the Positional Encoding preserves the positional information of nodes well, the interaction within the model compares the similarity of local structures between nodes and considers the relative position between nodes in the two graphs. Intuitively, the more similar the two graphs are, the more similar their corresponding local regions should be. \section{Conclusion} This paper proposes a more interpretable end-to-end paradigm for graph similarity learning, whose interpretable computation process improves performance. The model can implicitly infer the Maximum Common Subgraph during inference.
We stack vanilla Transformer encoder layers with graph convolution layers and propose a novel permutation-invariant node Positional Encoding to capture more global information. Comprehensive experiments and ablation studies demonstrate that INFMCS outperforms previous methods and is more interpretable. \bibliographystyle{plain}
\section{Introduction} Bayesian statistics provides a natural way to manage model complexity and control overfitting, with modern problems involving complicated models with a large number of parameters. One of the most powerful advantages of the Bayesian approach is hierarchical modeling, which allows partial pooling across a group of datasets, allowing groups with little data to borrow information from similar groups with larger amounts of data. However, such models pose problems for Markov chain Monte Carlo (MCMC) methods, because the joint posterior distribution is often pathological due to strong correlations between the model parameters and the hyperparameters \cite{mark13}. For example, one of the most powerful MCMC methods is Hamiltonian Monte Carlo (HMC). However, for hierarchical models even the mixing speed of HMC can be unsatisfactory in practice, as has been noted several times in the literature \cite{mark13, choo2000learning, radford2010}. Riemannian manifold Hamiltonian Monte Carlo (RMHMC) \cite{mark11} is a recent extension of HMC that aims to efficiently sample from challenging posterior distributions by exploiting local geometric properties of the distribution of interest. However, it is computationally too expensive to be applicable to large scale problems. In this work, we propose a simplified RMHMC method, called Semi-Separable Hamiltonian Monte Carlo (SSHMC), in which the joint Hamiltonian over parameters and hyperparameters has special structure, which we call \emph{semi-separability}, that allows it to be decomposed into two simpler, separable Hamiltonians. This condition allows for a new efficient algorithm which we call the \emph{alternating blockwise leapfrog algorithm}. Compared to Gibbs sampling, SSHMC can make significantly larger moves in hyperparameter space due to shared terms between the two simple Hamiltonians. 
Compared to previous RMHMC methods, SSHMC yields simpler and more computationally efficient samplers for many practical Bayesian models. \section{Hierarchical Bayesian Models} Let $\mathcal{D} = \{\mathcal{D}_i\}_{i=1}^N$ be a collection of data groups, where the $i$th data group is a collection of iid observations $\mathbf{y}_i = \{y_{ij}\}_{j=1}^{N_i}$ and their inputs $\mathbf{x}_i = \{\mathbf{x}_{ij}\}_{j=1}^{N_i}$. We assume the data follows a parametric distribution $p(\mathbf{y}_i \vert \mathbf{x}_i, \boldsymbol\theta_i)$, where $\boldsymbol\theta_i$ is the model parameter for group $i$. The parameters are assumed to be drawn from a prior $p(\boldsymbol\theta_i \vert \boldsymbol\phi)$, where $\boldsymbol\phi$ is the hyperparameter with prior distribution $p(\boldsymbol\phi)$. The joint posterior over model parameters $\boldsymbol\theta = (\boldsymbol\theta_1, \ldots, \boldsymbol\theta_N)$ and hyperparameters $\boldsymbol\phi$ is then \begin{eqnarray} p(\boldsymbol\theta, \boldsymbol\phi \vert \mathcal{D}) \propto \prod_{i=1}^N p(\mathbf{y}_i \vert \mathbf{x}_i, \boldsymbol\theta_i)p(\boldsymbol\theta_i \vert \boldsymbol\phi) p(\boldsymbol\phi).\label{eq:hbayes} \end{eqnarray} This \emph{hierarchical Bayesian} model is popular because the parameters $\boldsymbol\theta_i$ for each group are coupled, allowing the groups to share statistical strength. However, this property causes difficulties when approximating the posterior distribution. In the posterior, the model parameters and hyperparameters are strongly correlated. In particular, $\boldsymbol\phi$ usually controls the variance of $p(\boldsymbol\theta \vert \boldsymbol\phi)$ to promote partial pooling, so the variance of $\boldsymbol\theta \vert \boldsymbol\phi, \mathcal{D}$ depends strongly on $\boldsymbol\phi.$ This causes difficulties for many MCMC methods, such as the Gibbs sampler and HMC.
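This coupling can be made concrete in a toy one-way normal model, where the conditional posterior of a group parameter has a closed form. The following sketch is our own illustration (the model, the numbers, and the function name `theta_posterior` are assumptions for exposition, not taken from the paper); it shows how both the location and the spread of $\boldsymbol\theta_i \vert \boldsymbol\phi, \mathcal{D}$ change with the hyperparameter:

```python
# Toy one-way normal hierarchy: y_ij ~ N(theta_i, sigma^2), theta_i ~ N(0, tau^2).
# The conditional posterior of theta_i given tau is Gaussian with
#   precision = n_i / sigma^2 + 1 / tau^2,
#   mean     = (n_i * ybar_i / sigma^2) / precision,
# so both the location and the spread of theta_i depend strongly on tau.
sigma = 1.0           # known observation noise
n_i, ybar_i = 3, 2.0  # a small group: 3 observations with sample mean 2.0

def theta_posterior(tau):
    prec = n_i / sigma**2 + 1.0 / tau**2
    mean = (n_i * ybar_i / sigma**2) / prec
    return mean, 1.0 / prec

for tau in (0.1, 1.0, 10.0):
    mean, var = theta_posterior(tau)
    print(f"tau={tau:5.1f}  E[theta_i | tau]={mean:.3f}  Var[theta_i | tau]={var:.4f}")
```

For small $\tau$ the group estimate is pooled toward the prior mean with tiny variance; for large $\tau$ it approaches the unpooled group mean. A sampler therefore faces a posterior whose scale in $\boldsymbol\theta$ changes drastically as the hyperparameter moves, which is exactly the pathology described above.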
An illustrative example of pathological structure in hierarchical models is the Gaussian funnel distribution \cite{radford2010}. Its density function is defined as $p(\mathbf{x}, v) = \left[\prod_{i=1}^n\mathcal{N}(x_i \vert 0, e^{-v})\right]\mathcal{N}(v \vert 0, 3^2)$, where $\mathbf{x}$ is the vector of low-level parameters and $v$ is the variance hyperparameter. The pathological correlation between $\mathbf{x}$ and $v$ is illustrated by Figure \ref{funnel_ET}. \section{Hamiltonian Monte Carlo on Posterior Manifold} Hamiltonian Monte Carlo (HMC) is a gradient-based MCMC method with auxiliary variables. To generate samples from a target density $\pi(\mathbf{z})$, HMC constructs an ergodic Markov chain with the invariant distribution $\pi(\mathbf{z}, \mathbf{r}) = \pi(\mathbf{z}) \pi(\mathbf{r})$, where $\mathbf{r}$ is an auxiliary variable. The most common choice of $\pi(\mathbf{r})$ is a Gaussian distribution $ \mathcal{N}(\mathbf{0}, G^{-1})$ with precision matrix $G$. Given the current sample $\mathbf{z}$, the transition kernel of the HMC chain includes three steps: first, sample $\mathbf{r} \sim \pi(\mathbf{r})$; second, propose a new sample $(\mathbf{z}', \mathbf{r}')$ by simulating the Hamiltonian dynamics; and finally, accept the proposed sample with probability $\alpha = \min\left\{1, \pi(\mathbf{z}',\mathbf{r}') / \pi(\mathbf{z}, \mathbf{r})\right\}$, otherwise leave $\mathbf{z}$ unchanged. The last step is a Metropolis-Hastings (MH) correction. Define $H(\mathbf{z},\mathbf{r}) := -\log \pi(\mathbf{z}, \mathbf{r})$. The Hamiltonian dynamics is defined by the differential equations $\dot{\mathbf{z}} = \partial_{\mathbf{r}}{H}\quad \dot{\mathbf{r}} = -\partial_{\mathbf{z}}{H}$, where $\mathbf{z}$ is called the \emph{position} and $\mathbf{r}$ is called the \emph{momentum}.
It is easy to see that $\dot{H}(\mathbf{z}, \mathbf{r}) = \partial_{\mathbf{z}}{H}\dot{\mathbf{z}} + \partial_{\mathbf{r}}{H}\dot{\mathbf{r}} = 0$, which is called the energy preservation property \cite{radford2010, leimkuhler2004simulating}. In physics, $H(\mathbf{z},\mathbf{r})$ is known as the \emph{Hamiltonian energy}, and is decomposed into the sum of the \emph{potential energy} $U(\mathbf{z}) := -\log \pi(\mathbf{z})$ and the \emph{kinetic energy} $K(\mathbf{r}) := -\log \pi(\mathbf{r})$. The most widely used discretized simulation in HMC is the \emph{leapfrog} algorithm, which is given by the recursion \begin{subequations} \begin{align} \mathbf{r}(\tau + \epsilon/2)&= \mathbf{r}(\tau) - \frac{\epsilon}{2} \nabla_{\mathbf{z}}U(\tau) \label{LF1} \\ \mathbf{z}(\tau + \epsilon)&= \mathbf{z}(\tau) + \epsilon \nabla_{\mathbf{r}}K(\tau + \epsilon/2) \label{LF2} \\ \mathbf{r}(\tau + \epsilon)&= \mathbf{r}(\tau + \epsilon/2) - \frac{\epsilon}{2}\nabla_{\mathbf{z}}U(\tau+\epsilon), \label{LF3} \vspace{-10pt} \end{align} \end{subequations} where $\epsilon$ is the step size of the discretized simulation time. After $L$ steps from the current sample $(\mathbf{z}(0), \mathbf{r}(0)) = (\mathbf{z}, \mathbf{r})$, the new sample is proposed as the last point $(\mathbf{z}', \mathbf{r}')=(\mathbf{z}(L\epsilon), \mathbf{r}(L\epsilon))$. In Hamiltonian dynamics, the matrix $G$ is called the \emph{mass matrix}. If $G$ is constant w.r.t. $\mathbf{z}$, then $\mathbf{z}$ and $\mathbf{r}$ are independent in $\pi(\mathbf{z},\mathbf{r})$. In this case we say that $H(\mathbf{z},\mathbf{r})$ is a \emph{separable} Hamiltonian. In particular, we use the term \emph{standard HMC} to refer to HMC using the identity matrix as $G$. Although HMC methods often outperform other popular MCMC methods, they may mix slowly if there are strong correlations between variables in the target distribution. Neal \cite{radford2010} showed that HMC can mix faster if $G$ is not the identity matrix.
Intuitively, such a $G$ acts like a preconditioner. However, if the curvature of $\pi(\mathbf{z})$ varies greatly, a global preconditioner can be inadequate. For this reason, recent work, notably that on Riemannian manifold HMC (RMHMC) \cite{mark11}, has considered \emph{non-separable} Hamiltonian methods, in which $G(\mathbf{z})$ varies with position $\mathbf{z}$, so that $\mathbf{z}$ and $\mathbf{r}$ are no longer independent in $\pi(\mathbf{z},\mathbf{r})$. The resulting Hamiltonian $H(\mathbf{z},\mathbf{r}) = -\log \pi(\mathbf{z},\mathbf{r})$ is called a \emph{non-separable} Hamiltonian. For example, for Bayesian inference problems, Girolami et al. \cite{mark11} proposed using the Fisher Information Matrix (FIM) of $\pi(\boldsymbol\theta)$, which is the metric tensor of the posterior manifold. However, for a non-separable Hamiltonian, the simple leapfrog dynamics \eqref{LF1}-\eqref{LF3} do not yield a valid MCMC method, as they are no longer reversible. Simulation of general non-separable systems requires the generalized leapfrog integrator (GLI) \cite{mark11}, which involves computing higher-order derivatives to solve a system of non-linear differential equations. The computational cost of GLI in general is $\mathcal{O}(d^3)$ where $d$ is the number of parameters, which is prohibitive for large $d$. In hierarchical models, there are two ways to sample the posterior using HMC. One way is to sample the joint posterior $\pi(\boldsymbol\theta, \boldsymbol\phi)$ directly. The other way is to sample the conditionals $\pi(\boldsymbol\theta \vert \boldsymbol\phi)$ and $\pi(\boldsymbol\phi \vert \boldsymbol\theta)$, simulating from each conditional distribution using HMC. This strategy is called HMC within Gibbs \cite{radford2010}. In either case, HMC chains tend to mix slowly in hyperparameter space, because the huge variation of potential energy across different hyperparameter values can easily overwhelm the kinetic energy in separable HMC \cite{radford2010}.
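The leapfrog recursion \eqref{LF1}--\eqref{LF3} with an identity mass matrix can be sketched in a few lines (a minimal illustration under our own naming, not the authors' implementation); on a standard Gaussian target it nearly conserves $H$ and is exactly time-reversible up to floating-point error:

```python
import numpy as np

# Minimal leapfrog integrator for a separable Hamiltonian H(z, r) = U(z) + 0.5 r^T r
# (identity mass matrix, i.e. "standard HMC"). grad_U is the gradient of the
# potential energy U(z) = -log pi(z).
def leapfrog(z, r, grad_U, eps, L):
    z, r = z.copy(), r.copy()
    r -= 0.5 * eps * grad_U(z)          # half step for the momentum
    for _ in range(L - 1):
        z += eps * r                    # full step for the position
        r -= eps * grad_U(z)            # full step for the momentum
    z += eps * r
    r -= 0.5 * eps * grad_U(z)          # final half step for the momentum
    return z, r

# Example target: standard Gaussian, U(z) = 0.5 ||z||^2.
U = lambda z: 0.5 * np.dot(z, z)
grad_U = lambda z: z
rng = np.random.default_rng(1)
z0 = rng.normal(size=5)
r0 = rng.normal(size=5)
H0 = U(z0) + 0.5 * np.dot(r0, r0)
z1, r1 = leapfrog(z0, r0, grad_U, eps=0.1, L=20)
H1 = U(z1) + 0.5 * np.dot(r1, r1)
print("energy drift:", abs(H1 - H0))   # small: leapfrog nearly conserves H
```

Running the dynamics again from $(\mathbf{z}', -\mathbf{r}')$ recovers the starting point, which is the reversibility property the MH correction relies on.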
Hierarchical models also pose a challenge to RMHMC, if we want to sample the model parameters and hyperparameters jointly. In particular, the closed-form FIM of the joint posterior $\pi(\boldsymbol\theta, \boldsymbol\phi)$ is usually unavailable. Due to this problem, even sampling some toy models like the Gaussian funnel using RMHMC becomes challenging. Betancourt \cite{MJ12} proposed a new metric that uses a transformed Hessian matrix of $\pi(\boldsymbol\theta)$, and Betancourt and Girolami \cite{mark13} demonstrated the power of this method for efficiently sampling hyperparameters of hierarchical models on some simple benchmarks like the Gaussian funnel. However, the transformation requires computing an eigendecomposition of the Hessian matrix, which is infeasible in high dimensions. Because of these technical difficulties, RMHMC for hierarchical models is usually used within a block Gibbs sampling scheme, alternating between $\boldsymbol\theta$ and $\boldsymbol\phi$. This \emph{RMHMC within Gibbs} strategy is useful because the simulation of the non-separable dynamics for the conditional distributions may have much lower computational cost than that for the joint one. However, as we have discussed, in hierarchical models these variables tend to be very strongly correlated, and it is well-known that Gibbs samplers mix slowly in such cases \cite{robert2004monte}. So, the Gibbs scheme limits the true power of RMHMC. \section{Semi-Separable Hamiltonian Monte Carlo} In this section we propose a \emph{non-separable} HMC method that does not have the limitations of Gibbs sampling and that scales to relatively high dimensions, based on a novel property that we will call semi-separability. We introduce new HMC methods that rely on semi-separable Hamiltonians, which we call \emph{semi-separable Hamiltonian Monte Carlo (SSHMC)}. \subsection{Semi-Separable Hamiltonian} In this section, we define the semi-separable Hamiltonian system.
Our target distribution will be the posterior $\pi(\boldsymbol\theta, \boldsymbol\phi) = p(\boldsymbol\theta,\boldsymbol\phi \vert \mathcal{D})$ of a hierarchical model \eqref{eq:hbayes}, where $\boldsymbol\theta \in \mathbb{R}^n$ and $\boldsymbol\phi \in \mathbb{R}^m$. Let $(\mathbf{r}_{\boldsymbol\theta},\mathbf{r}_{\boldsymbol\phi}) \in \mathbb{R}^{m+n}$ be the vector of momentum variables corresponding to $\boldsymbol\theta$ and $\boldsymbol\phi$ respectively. The non-separable Hamiltonian is defined as \begin{eqnarray} \label{ham0} H(\boldsymbol\theta, \boldsymbol\phi, \mathbf{r}_{\boldsymbol\theta}, \mathbf{r}_{\boldsymbol\phi}) = U(\boldsymbol\theta, \boldsymbol\phi) + K(\mathbf{r}_{\boldsymbol\theta}, \mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta, \boldsymbol\phi), \end{eqnarray} where the potential energy is $U(\boldsymbol\theta, \boldsymbol\phi)=-\log \pi(\boldsymbol\theta, \boldsymbol\phi)$ and the kinetic energy is $K(\mathbf{r}_{\boldsymbol\theta}, \mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta, \boldsymbol\phi) = -\log \mathcal{N}(\mathbf{r}_{\boldsymbol\theta},\mathbf{r}_{\boldsymbol\phi}; \mathbf{0}, G(\boldsymbol\theta, \boldsymbol\phi)^{-1}),$ which includes the normalization term $\log \vert G(\boldsymbol\theta, \boldsymbol\phi) \vert$. The mass matrix $G(\boldsymbol\theta, \boldsymbol\phi)$ can be an arbitrary p.d. matrix. For example, previous work on RMHMC \cite{mark11} has chosen $G(\boldsymbol\theta, \boldsymbol\phi)$ to be the FIM of the joint posterior $\pi(\boldsymbol\theta, \boldsymbol\phi)$, resulting in an HMC method that requires \emph{$\mathcal{O}\left( \left( m + n\right)^3 \right)$} time. This limits applications of RMHMC to large scale problems. To attack these computational challenges, we introduce restrictions on the mass matrix $G(\boldsymbol\theta, \boldsymbol\phi)$ to enable efficient simulation.
In particular, we restrict $G(\boldsymbol\theta, \boldsymbol\phi)$ to have the form \[G(\boldsymbol\theta, \boldsymbol\phi) = \left( \begin{array}{cc} G_{\boldsymbol\theta}(\boldsymbol\phi, \mathbf{x}) & \mathbf{0} \\ \mathbf{0} & G_{\boldsymbol\phi}(\boldsymbol\theta) \end{array} \right),\] where $G_{\boldsymbol\theta}$ and $G_{\boldsymbol\phi}$ are the precision matrices of $\mathbf{r}_{\boldsymbol\theta}$ and $\mathbf{r}_{\boldsymbol\phi}$, respectively. Importantly, we restrict $G_{\boldsymbol\theta}(\boldsymbol\phi, \mathbf{x})$ to be independent of $\boldsymbol\theta$ and $G_{\boldsymbol\phi}(\boldsymbol\theta)$ to be independent of $\boldsymbol\phi$. If $G$ has these properties, we call the resulting Hamiltonian a \emph{semi-separable} Hamiltonian. A semi-separable Hamiltonian is still in general non-separable, as the two random vectors $(\boldsymbol\theta, \boldsymbol\phi)$ and $(\mathbf{r}_{\boldsymbol\theta},\mathbf{r}_{\boldsymbol\phi})$ are not independent. The semi-separability property has important computational advantages. First, because $G$ is block diagonal, the cost of matrix operations reduces from $\mathcal{O}((n+m)^k)$ to $\mathcal{O}(n^k)$.
Second, and more important, substituting the restricted mass matrix into \eqref{ham0} results in the potential and kinetic energy: \begin{align} \label{pot} U(\boldsymbol\theta, \boldsymbol\phi) &= -\sum_{i} [\log p(\mathbf{y}_i \vert \boldsymbol\theta_i, \mathbf{x}_i) + \log p(\boldsymbol\theta_i \vert \boldsymbol\phi)] -\log p(\boldsymbol\phi),\\ K(\mathbf{r}_{\boldsymbol\theta}, \mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\phi, \boldsymbol\theta) &= \frac{1}{2}[ \mathbf{r}_{\boldsymbol\theta}^T G_{\boldsymbol\theta}(\mathbf{x}, \boldsymbol\phi)\mathbf{r}_{\boldsymbol\theta} + \mathbf{r}_{\boldsymbol\phi}^T G_{\boldsymbol\phi}(\boldsymbol\theta) \mathbf{r}_{\boldsymbol\phi} + \log \left\vert G_{\boldsymbol\theta}(\mathbf{x}, \boldsymbol\phi) \right\vert + \log \left\vert G_{\boldsymbol\phi}(\boldsymbol\theta) \right\vert]. \label{kin} \end{align} If we fix $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$ or $(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi})$, the non-separable Hamiltonian \eqref{ham0} can be seen as a separable Hamiltonian plus some constant terms. 
In particular, define the notation $$A(\mathbf{r}_{\boldsymbol\theta} \vert \boldsymbol\phi) = \frac{1}{2} \mathbf{r}_{\boldsymbol\theta}^T G_{\boldsymbol\theta}(\mathbf{x}, \boldsymbol\phi)\mathbf{r}_{\boldsymbol\theta}, \qquad A(\mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta) = \frac{1}{2} \mathbf{r}_{\boldsymbol\phi}^T G_{\boldsymbol\phi}(\boldsymbol\theta)\mathbf{r}_{\boldsymbol\phi}.$$ Then, considering $(\boldsymbol\phi,\mathbf{r}_{\boldsymbol\phi})$ as fixed, the non-separable Hamiltonian $H$ in \eqref{ham0} is different from the following separable Hamiltonian \begin{eqnarray} \label{ham1} H_1(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}) &=& U_1(\boldsymbol\theta \vert \boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}) + K_1(\mathbf{r}_{\boldsymbol\theta} \vert \boldsymbol\phi),\\ U_1(\boldsymbol\theta \vert \boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}) &=& -\sum_{i} [\log p(\mathbf{y}_i \vert \boldsymbol\theta_i, \mathbf{x}_i) + \log p(\boldsymbol\theta_i \vert \boldsymbol\phi)] + A(\mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta) +\frac{1}{2} \log \left\vert G_{\boldsymbol\phi}(\boldsymbol\theta) \right\vert,\label{h1b}\\ K_1(\mathbf{r}_{\boldsymbol\theta} \vert \boldsymbol\phi)&=& A(\mathbf{r}_{\boldsymbol\theta} \vert \boldsymbol\phi) \end{eqnarray} only by some constant terms that do not depend on $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$. What this means is that any update to $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$ that leaves $H_1$ invariant leaves the joint Hamiltonian $H$ invariant as well. An example is the leapfrog dynamics on $H_1$, where $U_1$ is considered the potential energy, and $K_1$ the kinetic energy. 
Similarly, if $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$ are fixed, then $H$ differs from the following separable Hamiltonian \begin{eqnarray} \label{ham2} H_2(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}) &=& U_2(\boldsymbol\phi \vert \boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}) + K_2(\mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta),\\ U_2(\boldsymbol\phi \vert \boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}) &=& -\sum_i \log p(\boldsymbol\theta_i \vert\boldsymbol\phi) - \log p(\boldsymbol\phi) + A(\mathbf{r}_{\boldsymbol\theta} \vert \boldsymbol\phi) + \frac{1}{2} \log \left\vert G_{\boldsymbol\theta}(\mathbf{x}, \boldsymbol\phi)\right\vert,\label{h2b}\\ K_2(\mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta) &=& A(\mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta) \end{eqnarray} only by terms that are constant with respect to $(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}).$ Notice that $H_1$ and $H_2$ are coupled by the terms $A(\mathbf{r}_{\boldsymbol\theta} \vert \boldsymbol\phi)$ and $A(\mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta)$. Each of these terms appears in the kinetic energy of one of the separable Hamiltonians, but in the potential energy of the other one. We call these terms \emph{auxiliary potentials} because they are potential energy terms introduced by the auxiliary variables. These auxiliary potentials are key to our method (see Section~\ref{sec:gibbs}). \begin{wrapfigure}{R}{0.5\textwidth} \centering \includegraphics[width=0.5\textwidth]{SSHMC_Alg} \vspace{-20pt} \end{wrapfigure} \subsection{Alternating block-wise leapfrog algorithm} Now we introduce an efficient SSHMC method that exploits the semi-separability property. As described in the previous section, any update to $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$ that leaves $H_1$ invariant also leaves the joint Hamiltonian $H$ invariant, as does any update to $(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi})$ that leaves $H_2$ invariant. 
So a natural idea is simply to alternate between simulating the Hamiltonian dynamics for $H_1$ and that for $H_2$. Crucially, even though the total Hamiltonian $H$ is not separable in general, both $H_1$ and $H_2$ \emph{are} separable. Therefore when simulating $H_1$ and $H_2$, the simple leapfrog method can be used, and the more complex GLI method is not required. We call this method the \emph{alternating block-wise leapfrog algorithm} (ABLA), shown in Algorithm 1. In this figure the function ``leapfrog'' returns the result of the leapfrog dynamics \eqref{LF1}-\eqref{LF3} for the given starting point, Hamiltonian, and step size. We call each iteration of the loop from $1 \ldots L$ an \emph{ABLA step}. For simplicity, we have shown one leapfrog step for $H_1$ and $H_2$ for each ABLA step, but in practice it is useful to use multiple leapfrog steps per ABLA step. ABLA has discretization error due to the leapfrog discretization, so the MH correction is required. If it is possible to simulate $H_1$ and $H_2$ exactly, then $H$ is preserved exactly and there is no need for MH correction. To show that the SSHMC method based on ABLA preserves the distribution $\pi(\boldsymbol\theta, \boldsymbol\phi)$, we also need to show that ABLA is a time-reversible and volume-preserving transformation in the joint space of $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}, \boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi})$. Let $\mathcal{X} = \mathcal{X}_{\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}} \times \mathcal{X}_{\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}}$ where $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}) \in \mathcal{X}_{\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}}$ and $(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}) \in \mathcal{X}_{\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}}$. Obviously, any reversible and volume-preserving transformation in a subspace of $\mathcal{X}$ is also reversible and volume-preserving in $\mathcal{X}$.
It is easy to see that each leapfrog step in the ABLA algorithm is reversible and volume-preserving in either $\mathcal{X}_{\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}}$ or $\mathcal{X}_{\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi}}$. One more property of the integrator that is of interest is \emph{symplecticity}. Because each leapfrog integrator is symplectic in a subspace of $\mathcal{X}$ \cite{leimkuhler2004simulating}, it is also symplectic in $\mathcal{X}$. Then because ABLA is a composition of symplectic leapfrog integrators, and the composition of symplectic transformations is symplectic, we know ABLA is symplectic. We emphasize that ABLA is actually \emph{not} a discretized simulation of the semi-separable Hamiltonian system $H$: if, starting at a point $(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}, \boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi})$ in the joint space, we run the exact Hamiltonian dynamics for $H$ for a length of time $L$, the resulting point will not be the same as that returned by ABLA at time $L$, even if the discretized time step is infinitely small. For example, ABLA may simulate $H_1$ with step size $\epsilon_1$ and $H_2$ with step size $\epsilon_2$ where $\epsilon_1 = 2\epsilon_2$; even in the limit $\epsilon_2 \rightarrow 0$, the resulting trajectory preserves $H$ but does not follow the exact flow of $H$. \subsection{Connection to other methods}\label{sec:gibbs} Although the SSHMC method may seem similar to RMHMC within Gibbs (RMHMCWG), SSHMC is actually very different. The difference is in the last two terms of \eqref{h1b} and \eqref{h2b}; if these terms are omitted, $H_1$ and $H_2$ become the Hamiltonians for the conditionals $\pi(\boldsymbol\theta \vert \boldsymbol\phi)$ and $\pi(\boldsymbol\phi \vert \boldsymbol\theta)$, and we obtain HMC within Gibbs. Particularly important among these two terms is the auxiliary potential, because it allows each of the separable Hamiltonian systems to \emph{borrow energy} from the other one.
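To make the alternation and the shared auxiliary potential concrete, the following sketch runs ABLA on a one-dimensional funnel-like model with $G_{\boldsymbol\theta}(\phi) = e^{\phi}$ and $G_{\boldsymbol\phi} = 1$. This is our own toy construction following the energy terms defined above, not the authors' code; the test is that alternating blockwise leapfrog steps approximately conserve the joint Hamiltonian:

```python
import math

# Toy 1-D semi-separable system: theta | phi ~ N(0, exp(-phi)), phi ~ N(0, 1),
# with momenta r_t ~ N(0, G_theta(phi)^{-1}) for G_theta(phi) = exp(phi),
# and r_p ~ N(0, 1) (G_phi = 1). Collecting the potential, kinetic, and
# normalization terms as in the text, the joint Hamiltonian simplifies to
#   H = 0.5*exp(phi)*(theta^2 + r_t^2) + 0.5*phi^2 + 0.5*r_p^2.
def joint_H(theta, phi, r_t, r_p):
    return 0.5 * math.exp(phi) * (theta**2 + r_t**2) + 0.5 * phi**2 + 0.5 * r_p**2

def leapfrog_H1(theta, r_t, phi, eps, n):
    # Separable H1: U1 = 0.5*exp(phi)*theta^2, K1 = 0.5*exp(phi)*r_t^2.
    g = math.exp(phi)
    for _ in range(n):
        r_t -= 0.5 * eps * g * theta
        theta += eps * g * r_t
        r_t -= 0.5 * eps * g * theta
    return theta, r_t

def leapfrog_H2(phi, r_p, theta, r_t, eps, n):
    # Separable H2: U2 = 0.5*exp(phi)*(theta^2 + r_t^2) + 0.5*phi^2,
    # where 0.5*exp(phi)*r_t^2 is the shared auxiliary potential; K2 = 0.5*r_p^2.
    dU = lambda p: 0.5 * math.exp(p) * (theta**2 + r_t**2) + p
    for _ in range(n):
        r_p -= 0.5 * eps * dU(phi)
        phi += eps * r_p
        r_p -= 0.5 * eps * dU(phi)
    return phi, r_p

def abla(theta, phi, r_t, r_p, eps=0.01, steps=50, inner=5):
    # Alternate blockwise leapfrog updates; each block leaves the joint H
    # invariant up to leapfrog discretization error.
    for _ in range(steps):
        theta, r_t = leapfrog_H1(theta, r_t, phi, eps, inner)
        phi, r_p = leapfrog_H2(phi, r_p, theta, r_t, eps, inner)
    return theta, phi, r_t, r_p

theta, phi, r_t, r_p = 0.5, -0.3, 0.2, 0.4
H0 = joint_H(theta, phi, r_t, r_p)
theta, phi, r_t, r_p = abla(theta, phi, r_t, r_p)
print("joint-H drift:", abs(joint_H(theta, phi, r_t, r_p) - H0))
```

Note how the auxiliary potential term appears in the $\phi$-block's potential energy while acting as kinetic energy in the $\theta$-block, which is the energy-sharing mechanism discussed here.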
For example, if the previous leapfrog step increases the kinetic energy $K_1(\mathbf{r}_{\boldsymbol\theta} | \boldsymbol\phi)$ in $H_1(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$, then, in the next leapfrog step for $H_2(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi})$, we see that $\boldsymbol\phi$ will have greater \emph{potential} energy $U_2(\boldsymbol\phi | \boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta})$, because the auxiliary potential $A(\mathbf{r}_{\boldsymbol\theta} | \boldsymbol\phi)$ is shared. That allows the leapfrog step to accommodate a larger change of $\log p(\boldsymbol\phi \vert \boldsymbol\theta)$ using $A(\mathbf{r}_{\boldsymbol\theta} | \boldsymbol\phi)$. So, the chain will mix faster in $\mathcal{X}_{\boldsymbol\phi}$. By the symmetry of $\boldsymbol\theta$ and $\boldsymbol\phi$, the auxiliary potential will also accelerate the mixing in $\mathcal{X}_{\boldsymbol\theta}$. Another way to see this is that the dynamics in RMHMCWG for $(\mathbf{r}_{\boldsymbol\phi}, \boldsymbol\phi)$ preserves the distribution $\pi(\boldsymbol\theta, \mathbf{r}_{\boldsymbol\phi}, \boldsymbol\phi) = \pi(\boldsymbol\theta, \boldsymbol\phi)\mathcal{N}(\mathbf{r}_{\boldsymbol\phi}; \mathbf{0} , G_{\boldsymbol\phi}(\boldsymbol\phi)^{-1})$ but not the joint $\pi(\boldsymbol\theta, \boldsymbol\phi, \mathbf{r}_{\boldsymbol\theta}, \mathbf{r}_{\boldsymbol\phi})$. That is because the Gibbs sampler does not take into account the effect of $\boldsymbol\phi$ on $\mathbf{r}_{\boldsymbol\theta}$. In other words, the Gibbs step has the stationary distribution $\pi(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta)$ rather than $\pi(\boldsymbol\phi, \mathbf{r}_{\boldsymbol\phi} \vert \boldsymbol\theta, \mathbf{r}_{\boldsymbol\theta}).$ The difference between the two is the auxiliary potential. 
In contrast, the SSHMC method preserves the Hamiltonian of $\pi(\boldsymbol\theta, \boldsymbol\phi, \mathbf{r}_{\boldsymbol\theta}, \mathbf{r}_{\boldsymbol\phi})$. \subsection{Choice of mass matrix} The choice of $G_{\boldsymbol\theta}$ and $G_{\boldsymbol\phi}$ in SSHMC is usually similar to RMHMCWG. If the Hessian matrix of $-\log p(\boldsymbol\theta \vert \mathbf{y}, \mathbf{x}, \boldsymbol\phi)$ is independent of $\boldsymbol\theta$ and always positive definite, it is natural to define $G_{\boldsymbol\theta}$ as the inverse of this Hessian. However, for some popular models, e.g., logistic regression, the Hessian of the likelihood function depends on the parameters $\boldsymbol\theta$. In this case, one can use any approximate Hessian $B$, such as the Hessian at the mode, and define $G_{\boldsymbol\theta} := (B + B(\boldsymbol\phi))^{-1}$, where $B(\boldsymbol\phi)$ is the Hessian of the prior distribution. Such a rough approximation is usually good enough to improve the mixing speed, because the main difficulty is the correlation between model parameters and hyperparameters. In general, because the computational bottleneck in both HMC and SSHMC is computing the gradient of the target distribution, the two methods have the same computational complexity $\mathcal{O}(lg)$, where $g$ is the cost of computing the gradient and $l$ is the total number of leapfrog steps per iteration. However, in practice we find it very beneficial to use multiple steps in each blockwise leapfrog update in ABLA; this can cause SSHMC to require more time than HMC. Also, depending on the mass matrix $G_{\boldsymbol\theta}$, the cost of a leapfrog step in ABLA may differ from that in standard HMC. For some choices of $G_{\boldsymbol\theta}$, a leapfrog step in ABLA can even be faster than one leapfrog step of HMC. For example, in many models the computational bottleneck is the gradient $\nabla_{\boldsymbol\phi} \log Z(\boldsymbol\phi)$, where $Z(\boldsymbol\phi)$ is the normalization constant of the prior.
Recall that $G_{\boldsymbol\theta}$ is a function of $\boldsymbol\phi$. If $\vert G_{\boldsymbol\theta} \vert = Z(\boldsymbol\phi)^{-1}$, then $Z(\boldsymbol\phi)$ cancels out, avoiding the computation of $\nabla_{\boldsymbol\phi} \log Z(\boldsymbol\phi)$. One example is using $G_{\mathbf{x}} = e^v \mathbf{I}$ in the Gaussian funnel distribution introduced in Section 2. A potential problem with such a $G_{\boldsymbol\theta}$ is that the curvature of the likelihood function $p(\mathcal{D} \vert \boldsymbol\theta)$ is ignored. But when the data in each group are sparse and the parameters $\boldsymbol\theta$ are strongly correlated, this $G_{\boldsymbol\theta}$ can give nearly optimal mixing speed and make SSHMC much faster. In general, any choice of $G_{\boldsymbol\theta}$ and $G_{\boldsymbol\phi}$ that would be valid for separable HMC within Gibbs is also valid for SSHMC. \section{Experimental Results} In this section, we compare the performance of SSHMC\ with standard HMC and RMHMC within Gibbs \cite{mark11} on four benchmark models.\footnote{Our use of a Gibbs scheme for RMHMC follows standard practice \cite{mark11}.} The step sizes of all methods are manually tuned so that the acceptance rate is around $70$-$85\%$. The number of leapfrog steps is tuned for each method using preliminary runs. The implementation of RMHMC we used is from \cite{mark11}. The running time is wall-clock time measured after burn-in. Performance is evaluated by the minimum Effective Sample Size (ESS) over all dimensions (see \cite{geyer1992practical}). To account for the different computational complexity of the methods, our main efficiency metric is time-normalized ESS. \begin{figure} [t!]
\begin{center} \begin{tabular}{llll} \resizebox{0.22\textwidth}{!}{\includegraphics{H_path_HMC}} & \resizebox{0.22\textwidth}{!}{\includegraphics{traject}} & \resizebox{0.22\textwidth}{!}{\includegraphics{H_path_SSHMC}} & \resizebox{0.22\textwidth}{!}{\includegraphics{traject_SSHMC}}\\ \multicolumn{2}{c}{HMC with diagonal constant mass} & \multicolumn{2}{c}{SSHMC (semi-separable mass)} \vspace{-10pt} \end{tabular} \caption{The trace of energy over the simulation time and the trajectory of the first dimension of 100 dimensional Gaussian $\mathbf{x}_1$ (vertical axis) and hyperparameter $v$ (horizontal axis). The two simulations start with the same initial point sampled from the Gaussian Funnel. }\label{funnel_ET} \end{center} \vspace{-5pt} \end{figure} \begin{table}[tb!] \centering \begin{tabular}{lllll} & time(s) & min ESS($\mathbf{x}$, $v$) & min ESS/s ($\mathbf{x}$, $v$) &MSE($\mathbb{E}[v]$, $\mathbb{E}[v^2]$)\\ \hline HMC & 5.16&(302.97, 26.30)&(58.64, 5.09)&(2.28, 1.34)\\ RMHMC(Gibbs) &2.7& (2490.98, 8.93) &(\textbf{895.15}, 3.21)&(1.95, 1.33)\\ SSHMC\ &37.35& (\textbf{3868.79}, \textbf{1541.67})&(103.57, \textbf{41.27})&(\textbf{0.04}, \textbf{0.02})\\ \hline \end{tabular} \caption{The result of ESS of 5000 samples on 100 + 1 dimensional Gaussian Funnel distribution. $\mathbf{x}$ are model parameters and $v$ is the hyperparameter. The last column is the mean squared error of the sample estimated mean and variance of hyperparameter.} \label{tab:ESS_GF} \end{table} \begin{table}[tb!] \centering \begin{tabular}{lllll} & running time(s) &ESS $\boldsymbol\theta$ (min, med, max)& ESS $v$ & min ESS/s\\ \hline HMC & 378&(2.05, 3.68, 4.79) $\times 10^3$ &815&2.15\\ RMHMC(Gibbs) &411&(0.8, \textbf{4.08}, \textbf{4.99})$\times 10^3$ &271&0.6\\ SSHMC\ &385.82&(\textbf{2.5}, 3.42, 4.27)$\times 10^3$ &\textbf{2266}&\textbf{5.83}\\ \hline \end{tabular} \caption{The results of ESS of 5000 samples after 1000 burn-in on Hierarchical Bayesian Logistic Regression. 
$\boldsymbol\theta$ are the 200-dimensional model parameters and $v$ is the hyperparameter.} \label{tab:ESS_HBLR} \end{table} \begin{table}[tb!] \centering \begin{tabular}{lllll} & time (s)& ESS $\mathbf{x}$ (min, med, max) & ESS$(\beta, \sigma, \phi)$ & min ESS/s\\ \hline HMC &162&(1.6, 2.2, 5.2)$\times 10^2$&(50, 50, 128)& 0.31\\ RMHMC(Gibbs) &183&(12.1, 18.4, 33.5)$\times 10^2$&(385, 163, 411)&0.89\\ SSHMC\ &883&(\textbf{78.4}, \textbf{98.9}, \textbf{120.7})$\times 10^2$ &(\textbf{4434}, \textbf{1706}, \textbf{1390})&\textbf{1.57}\\ \hline \end{tabular} \caption{The ESS of 20000 posterior samples of Stochastic Volatility after 10000 burn-in. $\mathbf{x}$ are the latent volatilities over 2000 time lags and $(\beta, \sigma, \phi)$ are the hyperparameters. Min ESS/s is the lowest ESS over all parameters, normalized by running time.} \label{tab:ESS_StochV} \vspace{-15pt} \end{table} \begin{figure} [tb] \begin{center} \centering \begin{tabular}{lll} \resizebox{0.3\textwidth}{!}{\includegraphics{hist_phi}} & \resizebox{0.3\textwidth}{!}{\includegraphics{hist_sigma}} & \resizebox{0.3\textwidth}{!}{\includegraphics{hist_beta}} \end{tabular} \caption{ The normalized histograms of 20000 posterior samples of the hyperparameters (from left to right $\phi$, $\sigma$, $\beta$) after 10000 burn-in samples. The data are generated with hyperparameters $(\phi=0.98, \sigma = 0.15, \beta = 0.65)$. The empirical distributions produced by the three methods are consistent, but SSHMC and RMHMC converge faster than HMC. \vspace{-15pt}} \label{hist_hyper_SV} \end{center} \end{figure} \subsection{Demonstration on Gaussian Funnel} We demonstrate SSHMC by sampling the Gaussian Funnel (GF) defined in Section 2. We consider $n=100$ dimensional low-level parameters $\mathbf{x}$ and 1 hyperparameter $v$.
RMHMC within Gibbs on the GF has a block-diagonal mass matrix defined by $G_{\mathbf{x}} = (-\partial_{\mathbf{x}}^2\log p(\mathbf{x}, v))^{-1} = e^v \mathbf{I}$ and $G_v = (-\mathbb{E}_{\mathbf{x}}[\partial_v^2\log p(\mathbf{x}, v)])^{-1} = (n + \frac{1}{9})^{-1}$. We use the same mass matrix in SSHMC, because it is semi-separable. We use 2 leapfrog steps for the low-level parameters and 1 leapfrog step for the hyperparameter in ABLA, and the same leapfrog step size for the two separable Hamiltonians. We generate 5000 samples from each method after 1000 burn-in iterations. The ESS per second (ESS/s) and the mean squared error (MSE) of the sample estimates of the mean and variance of the hyperparameter are given in Table \ref{tab:ESS_GF}. Notice that RMHMC is much more efficient for the low-level variables because its mass matrix adapts to the hyperparameter. Figure~\ref{funnel_ET} illustrates a dramatic difference between HMC and SSHMC. It is clear that HMC suffers from oscillation of the hyperparameter in a narrow region. That is because the kinetic energy limits the change of hyperparameters \cite{radford2010, mark13}. In contrast, SSHMC\ has much wider energy variation and its trajectory spans a larger range of the hyperparameter $v$. The energy variation of SSHMC is similar to that of RMHMC with the Soft-Abs metric (RMHMC-Soft-Abs) reported in \cite{MJ12}, an instance of general RMHMC without Gibbs. But compared with \cite{MJ12}, each ABLA step is about 100 times faster than each generalized leapfrog step, and SSHMC can generate around \emph{2.5 times} more effective samples per second than RMHMC-Soft-Abs. Although RMHMC within Gibbs has better ESS/s on the low-level variables, its estimates of the mean and variance are biased, indicating that the chain has not yet mixed. More importantly, Table \ref{tab:ESS_GF} shows that the samples generated by SSHMC give nearly unbiased estimates of the mean and variance of the hyperparameter, which neither of the other methods is able to do.
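The quantities used in this demonstration can be written down directly. The sketch below assumes Neal's parameterization of the funnel, $v \sim \mathcal{N}(0, 9)$ and $x_i \mid v \sim \mathcal{N}(0, e^v)$ (an assumption on our part, chosen to be consistent with the mass matrices quoted above), and implements the log density and the hyperparameter gradient used by the leapfrog updates; the $v$-dependent normalizer $-\tfrac{n}{2}v$ (i.e.\ $-\log Z(v)$) is kept explicit, since this is the term that the choice $G_{\mathbf{x}} = e^v \mathbf{I}$ cancels, as discussed in the previous section.

```python
import math

N = 100  # number of low-level parameters x

def log_funnel(x, v):
    """Log density of the Gaussian funnel: v ~ N(0, 9), x_i | v ~ N(0, e^v)."""
    lp = -v * v / 18.0 - 0.5 * math.log(2.0 * math.pi * 9.0)  # prior on v
    lp += -0.5 * N * (v + math.log(2.0 * math.pi))            # -log Z(v)
    lp += -0.5 * math.exp(-v) * sum(xi * xi for xi in x)      # quadratic term
    return lp

def grad_v(x, v):
    """d/dv log p(x, v), used by the leapfrog update of the hyperparameter."""
    return -v / 9.0 - 0.5 * N + 0.5 * math.exp(-v) * sum(xi * xi for xi in x)

# Semi-separable masses matching the ones used in this experiment:
# G_x = e^v I depends only on the hyperparameter, G_v is constant.
def mass_x(v):
    return math.exp(v)

G_v = 1.0 / (N + 1.0 / 9.0)
```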
\subsection{Hierarchical Bayesian Logistic Regression} In this experiment, we consider hierarchical Bayesian logistic regression with an exponential prior on the variance hyperparameter $v$, that is \begin{eqnarray*} p(\mathbf{w}, v \vert \mathcal{D}) \propto \prod_{i}\prod_{j}\sigma(y_{ij}\mathbf{w}_{i}^T\mathbf{x}_{ij})\mathcal{N}(\mathbf{w}_i \vert \mathbf{0}, v\mathbf{I})\text{Exp}(v \vert \lambda), \end{eqnarray*} where $\sigma$ is the logistic function $\sigma(z) = 1 / (1 + \exp(-z))$ and $(y_{ij}, \mathbf{x}_{ij})$ is the $j$th data point in the $i$th group. We use the Statlog (German credit) dataset from \cite{Bache+Lichman:2013}. This dataset includes 1000 data points, each with $16$ categorical features and $4$ numeric features. Bayesian logistic regression on this dataset has been considered as a benchmark for HMC \cite{mark11, Hoffman11}, but previous work uses only one group in its experiments. To make the problem more interesting, we partition the dataset into $10$ groups according to the feature \emph{Purpose}. The group sizes vary from 9 to 285. There are $200$ model parameters ($20$ parameters for each group) and $1$ hyperparameter. We consider the reparameterization of the hyperparameter $\gamma = \log v$. For RMHMC within Gibbs, the mass matrix for group $i$ is $G_i := \mathcal{I}(\mathbf{w}_i)^{-1},$ where $\mathcal{I}(\mathbf{w}_i)$ is the Fisher information matrix for the model parameters $\mathbf{w}_i$, and the mass $G_v$ for the hyperparameter is constant. In each iteration of the Gibbs sampler, each $\mathbf{w}_i$ is sampled by RMHMC using 6 generalized leapfrog steps, and $v$ is sampled using 6 leapfrog steps. For SSHMC, $G_i := \text{Cov}(\mathbf{x}) + \exp(\gamma)\mathbf{I}$, with the same constant mass $G_v$. The results are shown in Table \ref{tab:ESS_HBLR}. SSHMC again has much higher ESS/s than the other methods.
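For reference, the unnormalized log posterior above translates directly into code. The sketch below is a plain transcription; the rate $\lambda$ and the data layout (a list of $(y, \mathbf{x})$ pairs per group, with $y \in \{-1, +1\}$) are illustrative assumptions.

```python
import math

def log_sigmoid(z):
    """Numerically stable log of the logistic function sigma(z)."""
    return -math.log1p(math.exp(-z)) if z > 0 else z - math.log1p(math.exp(z))

def log_posterior(groups, w, v, lam=1.0):
    """Unnormalized log p(w, v | D): groups[i] is a list of (y, x) pairs with
    y in {-1, +1}, w[i] is the weight vector of group i, and v > 0 is the
    shared prior variance with v ~ Exp(lam)."""
    lp = math.log(lam) - lam * v                        # exponential hyperprior
    for wi, data in zip(w, groups):
        lp += -0.5 * len(wi) * math.log(2.0 * math.pi * v)  # N(w_i | 0, v I)
        lp += -0.5 * sum(c * c for c in wi) / v
        for y, x in data:
            z = y * sum(a * b for a, b in zip(wi, x))
            lp += log_sigmoid(z)                        # log sigma(y w^T x)
    return lp
```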
\subsection{Stochastic Volatility} The stochastic volatility model we consider was studied in \cite{kim1998stochastic}; the latent volatilities are modeled by an auto-regressive AR(1) process, such that the observations are $y_t = \epsilon_t \beta \exp(x_t/2)$ with latent variables $x_{t+1} = \phi x_t + \eta_{t+1}$. We consider the distributions $x_1 \sim \mathcal{N}(0,\sigma^2 / (1-\phi^2))$, $\epsilon_t \sim \mathcal{N}(0, 1)$ and $\eta_t\sim\mathcal{N}(0, \sigma^2)$. The joint probability is defined as \begin{eqnarray*} p(\mathbf{y}, \mathbf{x}, \beta, \phi, \sigma) &=& \prod_{t=1}^Tp(y_t \vert x_t, \beta)p(x_1)\prod_{t=2}^Tp(x_t \vert x_{t-1}, \phi, \sigma) \pi(\beta) \pi(\phi)\pi(\sigma), \end{eqnarray*} where the priors are $\pi(\beta)\propto 1/\beta$, $\sigma^2 \sim \text{Inv-} \chi^2(10, 0.05)$ and $(\phi + 1)/2 \sim \text{beta}(20, 1.5)$. The FIM of $p(\mathbf{x} \vert \sigma, \beta, \phi, \mathbf{y})$ depends on the hyperparameters but not on $\mathbf{x}$, whereas the FIM of $p(\sigma, \beta, \phi \vert \mathbf{x}, \mathbf{y})$ depends on $(\sigma, \beta, \phi)$. For RMHMC within Gibbs we use the FIM as the metric tensor, following \cite{mark11}. For SSHMC, we define $G_{\boldsymbol\theta}$ as the inverse Hessian of $-\log p(\mathbf{x} \vert \sigma, \beta, \phi, \mathbf{y})$, and $G_{\boldsymbol\phi}$ as the identity matrix. In each ABLA step, we use 5 leapfrog steps for the updates of $\mathbf{x}$ and 2 leapfrog steps for the updates of the hyperparameters, so that the running time of SSHMC is about 7 times that of standard HMC. We generate 20000 samples using each method after 10000 burn-in samples. The histograms of the hyperparameters are shown in Figure \ref{hist_hyper_SV}. It is clear that all methods approximately converge to the same distribution. But from Table \ref{tab:ESS_StochV}, we see that SSHMC generates almost \emph{two times} as many ESS/s as RMHMC within Gibbs.
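The generative process just defined is easy to simulate; the sketch below defaults to the hyperparameter values $(\phi=0.98, \sigma=0.15, \beta=0.65)$ used to generate the data shown in Figure \ref{hist_hyper_SV}.

```python
import math
import random

def simulate_sv(T, beta=0.65, sigma=0.15, phi=0.98, seed=0):
    """Simulate the stochastic volatility model:
    x_1 ~ N(0, sigma^2 / (1 - phi^2)), x_{t+1} = phi * x_t + eta_{t+1}
    with eta_t ~ N(0, sigma^2), and y_t = eps_t * beta * exp(x_t / 2),
    eps_t ~ N(0, 1)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sigma / math.sqrt(1.0 - phi * phi))]  # stationary start
    for _ in range(T - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    y = [rng.gauss(0.0, 1.0) * beta * math.exp(xt / 2.0) for xt in x]
    return x, y
```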
\subsection{Log-Gaussian Cox Point Process} The log-Gaussian Cox point process (LGCPP) is another popular benchmark \cite{christensen2005, mark11, Wang:2013}. We follow the experimental setting of Girolami and Calderhead \cite{mark11}. The observations $\mathbf{Y} = \{y_{ij}\}$ are counts at the locations $(i,j)$, $i,j = 1,\dots, d$, on a regular spatial grid, which are conditionally independent given a latent intensity process $\boldsymbol\Lambda = \{\lambda(i,j)\}$ with means $m\lambda(i,j) = m\exp(x_{i,j})$, where $m = 1/ d^2$, $\mathbf{X} = \{x_{i,j}\}$, $\mathbf{x} = \text{Vec}(\mathbf{X})$ and $\mathbf{y} = \text{Vec}(\mathbf{Y})$. $\mathbf{X}$ is assigned a Gaussian process prior, with mean function $m(x_{i,j}) = \mu\mathbf{1}$ and covariance function $\Sigma(x_{i,j}, x_{i',j'}) = \sigma^2\exp(-\delta(i,i',j,j')/\beta d)$, where $\delta(\cdot)$ is the Euclidean distance between $(i,j)$ and $(i',j')$. The log joint probability is given by $\log p(\mathbf{y}, \mathbf{x} \vert \mu, \sigma, \beta)= \sum_{i,j}y_{i,j}x_{i,j} - m\exp(x_{i,j}) - \frac{1}{2}(\mathbf{x}-\mu\mathbf{1})^T \Sigma^{-1}(\mathbf{x}-\mu\mathbf{1}). $ We consider a $32 \times 32$ grid, which gives 1024 latent variables. Each latent variable $x_{i,j}$ corresponds to a single observation $y_{i,j}$. We consider RMHMC within Gibbs with the FIM of the conditional posteriors; see \cite{mark11} for the definition of the FIM. Generalized leapfrog steps are required for updating $(\sigma, \beta)$, while ordinary leapfrog steps suffice for updating $\mathbf{x}$. Each Gibbs iteration takes 20 leapfrog steps for $\mathbf{x}$ and 1 generalized leapfrog step for $(\sigma, \beta)$. In SSHMC, we use $G_{\mathbf{x}} = \Sigma^{-1}$ and $G_{(\sigma, \beta)} = \mathbf{I}$. In each ABLA step, the update of $\mathbf{x}$ takes 2 leapfrog steps and the update of $(\sigma, \beta)$ takes 1 leapfrog step. Each SSHMC transition takes 10 ABLA steps.
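For reference, the prior covariance and the log joint above translate directly into code. The sketch below uses plain nested lists rather than a matrix library, and the grid size $d$ is a parameter so that small cases can be checked by hand.

```python
import math

def lgcpp_cov(sigma, beta, d):
    """GP prior covariance: sigma^2 * exp(-dist((i,j),(i',j')) / (beta * d))."""
    pts = [(i, j) for i in range(d) for j in range(d)]
    return [[sigma ** 2 * math.exp(-math.hypot(p[0] - q[0], p[1] - q[1]) / (beta * d))
             for q in pts] for p in pts]

def lgcpp_log_joint(Y, X, mu, Sigma_inv, d):
    """Unnormalized log p(y, x | mu, sigma, beta) on a d x d grid, with
    Y, X as d x d nested lists and Sigma_inv the d^2 x d^2 prior precision."""
    m = 1.0 / (d * d)
    lp = sum(Y[i][j] * X[i][j] - m * math.exp(X[i][j])
             for i in range(d) for j in range(d))
    xc = [X[i][j] - mu for i in range(d) for j in range(d)]  # x - mu * 1
    lp -= 0.5 * sum(xc[a] * Sigma_inv[a][b] * xc[b]
                    for a in range(d * d) for b in range(d * d))
    return lp
```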
We do not consider HMC on LGCPP, because it mixes extremely slowly for hyperparameters. The results of ESS are given in Table \ref{tab:ESS_LGCox}. The mean of sampled latent variable and the histogram of sampled hyperparameters are given in Figure \ref{LGCX}. It is clear that the samples of RMHMC and SSHMC are consistent, so both methods are mixing well. However, SSHMC generates about \emph{six times} as many effective samples per hour as RMHMC within Gibbs. \begin{figure} [tb] \begin{center} \centering \begin{tabular}{llll} \resizebox{0.22\textwidth}{!}{\includegraphics{LV_mean_RMHMC}} & \resizebox{0.22\textwidth}{!}{\includegraphics{LV_mean_SSHMC}} & \resizebox{0.22\textwidth}{!}{\includegraphics{hist_sigma_LGCPP}} & \resizebox{0.22\textwidth}{!}{\includegraphics{hist_beta_LGCPP}} \end{tabular} \caption{Sample mean of latent fields using RMHMC (left 1) and SSHMC (left 2). The normalized histogram of sampled hyperparameter $\sigma$ (right 1) and $\beta$ (right 2). We draw 5000 samples from both methods after 1000 burn-in. The true hyperparameter values are $(\sigma = 1.9, \beta = 0.03)$. }\label{LGCX} \vspace{-5pt} \end{center} \end{figure} \begin{table}[tb] \centering \begin{tabular}{lllll} & time(h) & ESS $\mathbf{x}$(min, med, max) & ESS($\sigma, \beta$) & min ESS/h\\ \hline SSHMC\ & 2.6&(\textbf{7.8}, \textbf{30}, \textbf{39})$\times10^2$ &(\textbf{2101}, \textbf{270})&\textbf{103.8}\\ RMHMC(Gibbs) & 2.64&(1, 29, 38.3)$\times10^2$ & (200, 46)& 16\\ \hline \end{tabular} \caption{The ESS of 5000 posterior samples from 32x32 LGCPP after 1000 burn-in samples. $\mathbf{x}$ is the 1024 dimensional vector of latent variables and ($\sigma, \beta$) are the hyperparameters of the Gaussian Process prior. 
``min ESS/h'' means minimum ESS per hour.} \label{tab:ESS_LGCox} \vspace{-15pt} \end{table} \section{Conclusion} We have presented Semi-Separable Hamiltonian Monte Carlo (SSHMC), a new version of Riemannian manifold Hamiltonian Monte Carlo (RMHMC) that aims to retain the flexibility of RMHMC for difficult Bayesian sampling problems, while achieving greater simplicity and lower computational complexity. We tested SSHMC on several different hierarchical models, and on all the models we considered, SSHMC outperforms both HMC and RMHMC within Gibbs in terms of number of effective samples produced in a fixed amount of computation time. Future work could consider other choices of mass matrix within the semi-separable framework, or the use of SSHMC within discrete models, following previous work in discrete HMC \cite{zhang12,pakman13}. \newpage {\small \bibliographystyle{abbrvnat}
\section{\uppercase{Introduction}} \noindent The World Wide Web today contains an enormous amount of information, mostly unstructured, in the form of Web pages, but also of documents of various nature. In recent years, considerable effort has been devoted to developing information extraction techniques for the Web. The approaches adopted span several fields of Mathematics and Computer Science, including, for example, logic programming and machine learning. Several projects, initially developed in academic settings, have evolved into commercial products, and it is possible to identify different methodologies for facing the problem of Web data extraction. A widely adopted approach is to define \emph{Web wrappers}, procedures that rely on analyzing the structure of HTML Web pages (i.e., the DOM tree) to extract the required information. Wrappers can be defined in several ways; e.g., the most advanced tools let users design them visually, for example by selecting elements of interest in a Web page and defining rules for their extraction and validation semi-automatically. Regardless of their generation process, wrappers intrinsically refer to the HTML structure of the Web page at the time of their creation. This introduces non-negligible robustness problems: wrappers may fail in their data extraction tasks if the underlying structure of the Web page changes, even slightly. Moreover, it can happen that the extraction process does not fail but the extracted data are corrupted. These aspects lead to the following requirements: during their definition, wrappers should be as elastic as possible, in order to intrinsically handle minor modifications of the structure of Web pages (such small local changes are much more frequent than heavy modifications); and, although elastic wrappers can efficiently react to minor changes, maintenance is required throughout the whole wrapper life-cycle.
Wrapper maintenance is expensive because it requires highly qualified personnel, specialized in defining wrappers, to spend their time rebuilding or fixing wrappers whenever they stop working properly. To improve this aspect, several commercial tools include notification features, reporting warnings or errors during wrapper execution. Moreover, to increase their reliability, data extracted by wrappers can be subjected to validation processes, and data cleaning is also a fundamental step; some tools provide caching services to store the last working copy of the Web pages involved in data extraction processes. Sometimes it is even more convenient to rewrite a wrapper from scratch, instead of trying to find the causes of malfunctioning and fixing them, because debugging wrapper executions can be non-trivial. The unpredictability of what changes will occur in a specific Web page and, consequently, the impossibility of establishing when a wrapper will stop working properly, require a smart approach to wrapper maintenance. Our purpose in this paper is to describe the realization, and to investigate the performance, of an automatic process of wrapper adaptation to structural modifications of Web pages. We designed and implemented a system relying on the possibility of storing, during the wrapper definition step, a \emph{snapshot} of the DOM tree of the original Web page, namely a \emph{tree-gram}. If problems occur during wrapper execution, this sample is compared to the new DOM structure, finding similarities between trees and sub-trees, to automatically try to adapt the wrapper with a custom degree of accuracy. Briefly, the paper is structured as follows: Section 2 focuses on related work, covering the literature about wrapper generation and adaptation. In Section 3 we explain some concepts related to the tree similarity algorithm implemented, to prove the correctness of our approach. Section 4 shows details of our implementation of the automatic wrapper adaptation.
The most important results obtained in our experimentation are reported in Section 5. Finally, Section 6 concludes, providing some remarks for future work. \section{\uppercase{Related Work}} \noindent The concept of analyzing similarities between trees, widely adopted in this work, was introduced by Tai \cite{Tai1979}; he defined the notion of \emph{distance} between two trees as the measure of the dissimilarity between them. The problem of transforming trees into other similar trees, namely the \emph{tree edit distance} problem, can be solved by applying elementary transformations to nodes, step by step. The minimum cost of this operation represents the tree edit distance between the two trees. This technique has high computational requirements and complex implementations \cite{Bille2005}, and does not represent the optimal solution to our problem of finding similarities between two trees. The \emph{simple tree matching} technique \cite{StanleyM.Selkow1977} represents a turning point: it is a light-weight recursive top-down algorithm which evaluates the positions of nodes to measure the degree of isomorphism between two trees, analyzing and comparing their sub-trees. Several improvements to this technique have been suggested: Ferrara and Baumgartner \cite{Ferrara2010}, extending the concept of weights introduced by Yang \cite{Yang1991}, developed a variant of this algorithm with the capability of discovering clusters of similar sub-trees. An interesting evaluation of the simple tree matching and its weighted version, carried out by Kim et al. \cite{Kim2007}, exploited these two algorithms for extracting information from HTML Web pages; we found their results very useful for developing automatically adaptable wrappers. Web data extraction and adaptation rely especially on algorithms working with DOM trees. Related work, in particular regarding Web wrappers and their maintenance, is extensive: Laender et al.
\cite{Laender2002} presented a taxonomy of wrapper generation methodologies, while Ferrara et al. \cite{Baumgartner2010} provided a comprehensive survey of techniques and fields of application of Web data extraction and adaptation. Some novel wrapper adaptation techniques have been introduced in recent years: a valid hybrid approach, mixing logic-based and grammar rules, was presented by Chidlovskii \cite{Chidlovskii2001}. Machine-learning techniques have also been investigated; e.g., Lerman et al. \cite{Lerman2003} exploited their know-how in this field to develop a system for wrapper verification and re-induction. Meng et al. \cite{20} developed SG-WRAM (Schema-Guided WRApper Maintenance) for wrapper maintenance, starting from the observation that changes in Web pages, even substantial ones, always preserve syntactic features (i.e., syntactic characteristics of data items like data patterns, string lengths, etc.), hyperlinks and annotations (e.g., descriptive information representing the semantic meaning of a piece of information in its context). This system has been implemented in their Web data extraction platform: wrappers are defined by providing both the HTML Web pages and their XML schemas, describing a mapping between them. When the system executes a wrapper, data are extracted in the XML format reflecting the previously specified XML Schema; the wrapper maintainer verifies any issue and, if necessary, provides protocols for the automatic adaptation of the problematic wrapper. The XML Schema is a DTD (Document Type Definition), while the HTML Web page is represented by its DOM tree. The framework described by Wong and Lam \cite{Wong} performs the adaptation of previously learned wrappers, applying them to Web pages never seen before; the authors assert that this platform is also able to discover new attributes in the Web page, using a probabilistic approach that exploits the extraction knowledge acquired through previous wrapping tasks.
Raposo et al. \cite{Raposo2005} also suggested the possibility of exploiting previously acquired information, e.g., the results of queries, to ensure a reliable degree of accuracy during the wrapper adaptation process. Finally, Kowalkiewicz et al. \cite{Kowalkiewicz2006} investigated the possibility of increasing the robustness of wrappers that identify HTML elements inside Web pages through their XPath, by adopting relative, instead of absolute, XPath expressions. \section{\uppercase{Matching Up HTML Trees}} \noindent Our idea of automatic wrapper adaptation can be explained as follows: first of all, outlining how to extract information from Web pages (i.e., in our case, how a Web wrapper works); then, describing how it is possible to recover previously extracted information from a modified version of the Web page (i.e., how to compare structural information between the two versions of the Web page, finding similarities); finally, defining how to automate this process (i.e., how to build reliable, robust, automatically adaptable wrappers). Our solution has been implemented in a commercial product\footnote{Lixto Suite, www.lixto.com}; Baumgartner et al. \cite{baumgartner2009scalable} described details of its design. This platform provides tools to design Web wrappers in a visual way, selecting the elements to be extracted from Web pages. During wrapper execution, the selected elements, identified through their XPath(s) in the DOM tree of the Web page, are automatically extracted. Although the wrapper design process lets users define several restricting or generalizing conditions to build wrappers as elastic as possible, wrappers are strictly tied to the structure of the Web page on top of which they are built. Even slight modifications to this structure can alter the wrapper execution or corrupt the extracted data.
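A toy example makes this fragility concrete. The snippet below, using Python's standard \texttt{xml.etree.ElementTree} on an invented page fragment, shows how an absolute, position-based path breaks after a minor template change (a banner is inserted), while a relative, attribute-based path of the kind advocated in \cite{Kowalkiewicz2006} keeps working.

```python
import xml.etree.ElementTree as ET

# A page at wrapper-definition time (invented example content).
before = """<html><body>
  <div><h1>Hotel</h1></div>
  <div><span class="price">99 EUR</span></div>
</body></html>"""

# The same page after a slight template change: a banner div is inserted.
after = """<html><body>
  <div class="banner">Ad</div>
  <div><h1>Hotel</h1></div>
  <div><span class="price">99 EUR</span></div>
</body></html>"""

absolute = "body/div[2]/span"          # absolute, position-based path
relative = ".//span[@class='price']"   # relative path keyed on an attribute

def extract(doc, path):
    """Return the text of the first element matched by the path, or None."""
    hit = ET.fromstring(doc).find(path)
    return None if hit is None else hit.text
```

Elastic wrapper definitions in the sense discussed above correspond to preferring the second kind of expression wherever the page offers a stable anchor.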
In this section we discuss some theoretical foundations on which our solution relies; in detail, we show an efficient algorithm to find similar elements within different Web pages. \subsection{Methodology} \noindent A simple measure of similarity between two trees, once their comparable elements have been defined, can be established by applying the \emph{simple tree matching} algorithm \cite{StanleyM.Selkow1977}, introduced in Section 2. As \emph{comparable elements} among HTML Web pages we take the nodes of the DOM trees of these pages, representing HTML elements (or, otherwise, free text) identified by tags. Similarly, by \emph{comparable attributes} we mean all the attributes, both generic (e.g., \emph{class}, \emph{id}, etc.) and type-specific (e.g., \emph{href} for anchor texts, \emph{src} for images, etc.), exhibited by HTML elements; it is possible to exploit these properties to introduce additional comparisons and refine the similarity measure. Several implementations of the \emph{simple tree matching} have been proposed; our solution exploits an improved version, namely \emph{clustered tree matching} \cite{Baumgartner2010}, designed to match up HTML trees, identifying clusters of sub-trees with similar structures that satisfy a custom degree of accuracy. \subsection{Tree Matching Algorithms} \noindent Previous studies proved the effectiveness of the \emph{simple tree matching} algorithm applied to Web data extraction tasks \cite{Kim2007,19}; it measures the similarity degree between two HTML trees, producing the maximum matching through dynamic programming, and ensures an acceptable compromise between precision and recall. As an improvement to this algorithm, a possible implementation of \emph{clustered tree matching} follows: let $d(n)$ be the degree of a node $n$ (i.e., the number of first-level children); let $T(i)$ be the $i$-th sub-tree of the tree rooted at node $T$; let $t(n)$ be the number of total siblings of a node $n$, including itself.
\begin{algorithm} \caption{ClusteredTreeMatching($T^{'}$, $T^{''}$)} \label{alg1} \begin{algorithmic}[1] \IF{$T^{'}$ has the same label of $T^{''}$} \STATE $m \leftarrow$ $d(T^{'})$ \STATE $n \leftarrow$ $d(T^{''})$ \FOR{$i = 0$ to $m$} \STATE $M[i][0] \leftarrow 0$; \ENDFOR \FOR{$j = 0$ to $n$} \STATE $M[0][j] \leftarrow 0$; \ENDFOR \FORALL{$i$ such that $1\leq i\leq m$} \FORALL{$j$ such that $1\leq j \leq n$} \STATE $M[i][j] \leftarrow$ Max($M[i][j-1]$, $M[i-1][j]$, $M[i-1][j-1] + W[i][j]$) where $W[i][j]$ = ClusteredTreeMatching($T^{'}(i-1)$, $T^{''}(j-1)$) \ENDFOR \ENDFOR \IF{$m > 0$ AND $n > 0$} \STATE return M[m][n] * 1 / Max($t(T^{'})$, $t(T^{''})$) \ELSE \STATE return M[m][n] + 1 / Max($t(T^{'})$, $t(T^{''})$) \ENDIF \ELSE \STATE return 0 \ENDIF \end{algorithmic} \end{algorithm} \begin{figure*} \small (A) \Tree [.{a\\\small N1} [.{b\\\small N2\\$\frac{1}{4}\cdot$\\\small (N6+N7)} [.{d\\\small N6\\ $\frac{1}{2}$} ] [.{e\\\small N7\\ $\frac{1}{2}$} ] ] [.{c\\\small N3\\$\frac{1}{4}\cdot$\small N8} [.{f\\\small N8\\ $\frac{1}{2}$} ] ] [.{b\\\small N4\\$\frac{1}{4}\cdot$\\\small (N9+N10)} [.{e\\\small N9\\ $\frac{1}{2}$} ] [.{d\\\small N10\\ $\frac{1}{2}$} ] ] [.{c\\\small N5\\$\frac{1}{4}\cdot$\small N11} [.{g\\\small N11\\$\frac{1}{2}\cdot$\\\small (N12+N13+N14)} [.{h\\\small N12\\ $\frac{1}{3}$} ] [.{i\\\small N13\\ $\frac{1}{3}$} ] [.{j\\\small N14\\ $\frac{1}{3}$} ] ] ] ] \small (B) \Tree [.{a\\\small N15} [.{b\\\small N16\\$\frac{1}{4}\cdot$\\\small (N18+N19)} [.{d\\\small N18\\ $\frac{1}{2}$} ] [.{e\\\small N19\\ $\frac{1}{2}$} ] ] [.{c\\\small N17\\$\frac{1}{4}\cdot$\\\small (N20+N21)} [.{g\\\small N20\\$\frac{1}{2}\cdot$\small N22} [.{h\\\small N22\\ $\frac{1}{3}$} ] ] [.{f\\\small N21\\ $\frac{1}{2}$} ] ] ] \caption{Two labeled trees, \emph{A} and \emph{B}, which show similarities in their structures.} \label{fig1} \end{figure*} \noindent The main difference between the \emph{simple} and the \emph{clustered} tree matching is the way of assigning 
values to matching elements. The former adopts a fixed matching value of 1; the latter, instead, computes additional information retrieved from the sub-trees of the matched nodes. Omitting details, provided in \cite{Baumgartner2010}, the \emph{clustered tree matching} algorithm assigns a weighted value equal to 1 divided by the greater of the numbers of siblings of the two compared nodes (counting the nodes themselves). Figure \ref{fig1} shows two similar rooted, labeled trees and the weights that would be assigned by applying the \emph{clustered tree matching} between them. \begin{figure*}[!ht]% \begin{center} \includegraphics[width=360px]{graph.png}% \caption{State diagram of Web wrappers design and adaptation in the Lixto Visual Developer.}% \label{diagram}% \end{center} \end{figure*} \subsubsection{Motivations} \noindent Several observations motivated these improvements. Common characteristics of Web pages provide useful hints: rich sub-levels (i.e., sub-levels with several nodes) usually represent list items, table rows, menus, etc., which are more frequently affected by modifications than other elements of Web pages; moreover, analyzing which kinds of modifications usually affect Web pages suggests assigning less importance to slight changes occurring in deep sub-levels of the DOM tree, because these are commonly related to missing or added details of elements. On the one hand, \emph{simple tree matching} ignores these important aspects; on the other hand, \emph{clustered tree matching} exploits information such as the position and number of mismatches to produce a more accurate result. \subsubsection{Advantages and limitations} \noindent The main advantage of our \emph{clustered tree matching} is its capability to calculate an absolute measure of similarity, whereas \emph{simple tree matching} produces only the mapping value between two trees.
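Algorithm \ref{alg1} can be transcribed quite compactly. In the sketch below, a node is a (label, children) pair, and the sibling counts $t(\cdot)$ are passed down the recursion from the parent's context (a representation choice of ours; it is not part of the algorithm itself).

```python
def ctm(t1, t2, s1=1, s2=1):
    """Clustered tree matching (Algorithm 1). A node is a (label, children)
    pair; s1 and s2 are the sibling counts t(n) of the two compared nodes
    (1 for a root). Returns a similarity score in [0, 1]."""
    label1, kids1 = t1
    label2, kids2 = t2
    if label1 != label2:
        return 0.0
    m, n = len(kids1), len(kids2)
    M = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # children of t1 have m siblings each, children of t2 have n
            w = ctm(kids1[i - 1], kids2[j - 1], m, n)
            M[i][j] = max(M[i][j - 1], M[i - 1][j], M[i - 1][j - 1] + w)
    if m > 0 and n > 0:
        return M[m][n] / max(s1, s2)
    return M[m][n] + 1.0 / max(s1, s2)
```

Applied to the two trees of Figure \ref{fig1}, this returns 1.0 for each tree matched against itself and 0.375 for A against B, reproducing the per-node weights shown in the figure.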
Moreover, the more complex and similar the structures of the considered trees are, the more accurate the measure of similarity established by this algorithm will be. It fits matching of HTML Web pages particularly well, because they often have rich and varied structures. One important limitation of algorithms based on tree matching is that they cannot match permutations of nodes. Intuitively, this is due to the dynamic programming technique used to keep the problem computationally tractable: both algorithms execute recursive calls that scan sub-trees sequentially, so as to reduce the number of required iterations (e.g. in Figure \ref{fig1}, the permutation of nodes [c,b] in \emph{A} with [b,c] in \emph{B} is not computed). It is possible to modify the algorithm to analyze permutations of sub-trees, but this would heavily affect performance. Despite this intrinsic limit, the technique fits our purpose of measuring HTML tree similarity very well. It is important to remark that applying \emph{simple tree matching} to compare simple and quite different trees produces a more accurate result. Nevertheless, because most modifications to Web pages are slight changes, \emph{clustered tree matching} is by far the better algorithm for building automatically adaptable wrappers. Moreover, this algorithm makes it possible to set a custom level of accuracy for the matching process, by defining a similarity threshold required to match two trees. \section{\uppercase{Adaptable Web Wrappers}} \noindent Based on the adaptation algorithms described above, a proof-of-concept extension to the Lixto Visual Developer (VD) has been implemented. Wrappers are automatically adapted based on given configuration settings, integrity constraints, and triggers. Usually, wrapper generation in VD is a hierarchical top-down process, e.g.
first, a ``hotel record'' is characterized, and inside the hotel record, entities such as ``rating'' and ``room types''. Such entities are referred to as \emph{patterns}. To define a pattern, the wrapper designer visually selects an example and, together with system suggestions, generalizes the rule configuration until the desired instances are matched. In this extension, to support the automatic adaptation process during runtime, the wrapper designer further specifies what it means for extraction to fail. In general, this means wrong or missing data, and with integrity constraints one can indicate what correct results should look like. Typical \emph{integrity constraints} are: \begin{itemize} \item \emph{Occurrence restrictions}: e.g. minimum and/or maximum number of allowed occurrences of a pattern instance, minimum and/or maximum number of child pattern instances; \item \emph{Data types}: e.g. all results of a ``price'' pattern need to be of data type integer. \end{itemize} \noindent Integrity constraints can be specified for each pattern individually or be based on a data model (in our case, a given XML Schema). In case integrity constraints are violated during runtime, the adaptation process for this particular pattern is started. During wrapper creation, the application designer provides a number of configuration settings to this process. These include: \begin{itemize} \item Threshold values; \item Priorities/order of the adaptation algorithms used; \item Flags of the chosen algorithm (e.g. using the HTML element name as node label, using id/class attributes as node labels, etc.); \item Triggers for bottom-up, top-down and process flow adaptation bubbling; \item Whether stored tree-grams and XPath statements are updated based on adaptation results, to be additionally used as inputs in future adaptation procedures (reflecting and addressing regular slight changes of a Web page over time).
\end{itemize} \noindent The algorithms used for adaptation rely on two inputs (stored example tree-gram(s) and the DOM tree of the current page) and provide as output sub-trees that are sufficiently similar to the original (example) ones, and in consequence a generated XPath statement that matches the nodes (Fig. \ref{diagram} summarizes the process from the design time and execution time perspectives). Algorithms under consideration include the clustered tree matching discussed above, as well as tree-based variants of the Bigram \cite{Collins1996bigram} and Jaro-Winkler similarity \cite{winkler1999state} (which are advantageous when one assumes that permutations of the tree nodes are likely over time). Moreover, for extraction of leaf nodes, which exhibit no inherent tree structure, we rely on string similarity metrics. Finally, triggers in the adaptation settings can be used to force adaptation of further fragments of the wrapper: \begin{itemize} \item Top-down: forcing adaptation of all/some descendant patterns (e.g. adapt the ``price'' pattern as well to identify prices within a record if the ``record'' pattern was adapted). \item Bottom-up: forcing adaptation of a parent pattern in case adaptation of a particular pattern was not successful. Experimental evaluation pointed out that in such cases the parent pattern often already provides wrong or missing results (even if matched by the integrity constraints) and has to be adapted first. \item Process flow: it might happen that particular patterns are no longer detected because the wrapper evaluates on the wrong page. Hence, there is the need to use variations in the deep Web navigation processes. A simple approach explored at this time is to use a switch window or back step action to check if the previous window or another tab/pop-up provides the required information.
\end{itemize} \section{\uppercase{Experimental Results}} \noindent The best way of measuring the reliability of automatically adaptable wrappers is to test their behavior in real world use-cases. Several common areas of application of Web wrappers have been identified: social networks and bookmarking, retail market and comparison shopping, Web search and information distribution, and, finally, Web communities. For each of these fields, we designed a test using a representative Website, studying a total of 7 use-cases and defining wrappers applied to 70 Web pages. Websites like Facebook, Google News, Ebay, etc. are usually subjected to countless, although often invisible, structural modifications, thus altering the correct behavior of Web wrappers. Table \ref{tab-res} summarizes the results: each wrapper automatically tries to adapt itself using both algorithms described in Section 3. The column labeled \emph{thresh.} gives the threshold value of similarity required to match two elements. Columns \emph{tp}, \emph{fp} and \emph{fn} represent true positives, false positives and false negatives, the measures usually adopted to evaluate precision and recall for this kind of task. \begin{table}[!ht] \small \begin{tabular}{|@{}c@{} c@{}|@{}c c c@{}|@{}c c c@{}|} \cline{3-8} \multicolumn{2}{r|}{} & \multicolumn{3}{c|}{Simple T. M.} & \multicolumn{3}{c|}{Clustered T. M.} \\ \cline{3-8} \multicolumn{2}{r|}{} & \multicolumn{3}{c|}{Precision/Recall} & \multicolumn{3}{c|}{Precision/Recall}\\ \cline{3-8} \noalign{\smallskip} \hline Scenario & thresh.
& tp & fp & fn & tp & fp & fn \\ \hline Delicious & 40\% & 100 & 4 & - & 100 & - & - \\ Ebay & 85\% & 200 & 12 & - & 196 & - & 4 \\ Facebook & 65\% & 240 & 72 & - & 240 & 12 & - \\ Google news & 90\% & 604 & - & 52 & 644 & - & 12\\ Google.com & 80\% & 100 & - & 60 & 136 & - & 24 \\ Kelkoo & 40\% & 60 & 4 & - & 58 & - & 2 \\ Techcrunch & 85\% & 52 & - & 28 & 80 & - & - \\ \hline Total & - & 1356 & 92 & 140 & 1454 & 12 & 42\\ \hline \hline Recall & - & \multicolumn{3}{c|}{90.64\%} & \multicolumn{3}{c|}{97.19\%}\\ Precision & - & \multicolumn{3}{c|}{93.65\%} & \multicolumn{3}{c|}{99.18\%}\\ F-Measure & - & \multicolumn{3}{c|}{92.13\%} & \multicolumn{3}{c|}{98.18\%}\\ \hline \end{tabular} \caption{Evaluation of the reliability of automatically adaptable wrappers applied to real world scenarios.} \label{tab-res} \end{table} \noindent The performance obtained using the \emph{simple} and the \emph{clustered} tree matching is, respectively, good and excellent; \emph{clustered tree matching} is definitely a viable solution to automatically adapt wrappers with a high degree of reliability (F-Measure greater than 98\%). The system also allows improving these results by including additional checks on \emph{comparable attributes} (e.g. \emph{id}, \emph{name} or \emph{class}). The role of the required accuracy degree is fundamental; the experimental results highlight the following considerations: very high threshold values can result in false negatives (e.g. the Google News and Google.com scenarios), while low values can result in false positives (e.g. the Facebook scenario). Our solution, exploiting the \emph{clustered tree matching} algorithm we designed, helps to reduce wrapper maintenance tasks, keeping in mind that, in case of deep structural changes, manual intervention may still be required to fix a specific wrapper, since it is impossible to automatically handle all possible malfunctions.
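The aggregate scores at the bottom of Table \ref{tab-res} follow directly from the tp/fp/fn totals; a quick Python check (our own helper, not part of the system):

```python
def prf(tp, fp, fn):
    """Precision, recall and F-measure from true positive,
    false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Clustered tree matching totals from the table: tp=1454, fp=12, fn=42
p, r, f = prf(1454, 12, 42)  # ~0.9918, ~0.9719, ~0.9818
```

Evaluating this on the clustered totals reproduces the 99.18\%/97.19\%/98.18\% figures reported in the table.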
\section{\uppercase{Conclusion}} \noindent In this paper we described several novel ideas, investigating the possibility of designing smart Web wrappers which automatically react to structural modifications of the underlying Web pages and adapt themselves to avoid malfunctions or corrupted extracted data. After explaining the core algorithms on which this system relies, we showed how to implement this feature in Web wrappers. Finally, we analyzed the performance of this system through rigorous testing of the behavior of automatically adaptable wrappers in real world use-cases. This work opens new scenarios for wrapper adaptation techniques and lends itself to several improvements: first of all, overcoming some limitations of the matching algorithms with computationally efficient solutions, for example the inability to handle permutations of nodes explained previously, would be important to improve the robustness of wrappers. One limitation of the adopted tree matching algorithms is also that they do not work very well if new levels of nodes are added or node levels are removed. We have already investigated the possibility of adopting different tree similarity algorithms that work better in such cases. We could try to ``generalize'' other similarity metrics on strings, such as the n-gram distance and the Jaro-Winkler distance. Implementing these two metrics does not require dynamic programming and might be computationally efficient; in particular, variants of the Bigram distance on trees might work well with permutations of groups of nodes, and the Jaro-Winkler distance could better reflect missing or added node levels. Another idea is investigating the possibility of improving the matching criteria by including additional information to be compared during the tree match-up process (e.g. full path information, all attributes, etc.), and then exploiting logic-based rules (e.g. regular expressions, string edit distance, and so on) to analyze textual properties.
Finally, the tree-grammar, already exploited to store a light-weight signature of the structure of elements identified by the wrapper, could be extended to classify the topologies of templates frequently shown by Web pages, in order to define \emph{standard protocols} of automatic adaptation in these particular contexts. Adaptation in deep Web navigation is a different topic than adaptation on a particular page, but it is also extremely important for wrapper adaptation. Future work will comprise investigating focused spidering techniques: instead of explicitly modeling a work flow on a Web page (form fill-out, button clicks, etc.), we will develop a tree-grammar based approach that decides, for a given Web page, which template it matches best and executes the data extraction rules defined for this template. Navigation steps are carried out implicitly by following all links and DOM events that have been defined as interesting, crawling a site in a focused way to find the relevant information. Concluding, the system for designing automatically adaptable wrappers described in this paper has proven to be robust and reliable. The \emph{clustered tree matching} algorithm is very extensible and could be adopted for different tasks, including tasks not strictly related to Web wrappers (e.g. any operation that requires matching up trees could exploit this algorithm). \renewcommand{\baselinestretch}{0.98} \bibliographystyle{apalike} {\small
\section{Introduction} Person Re-identification (Person Re-ID) has become a very popular and challenging topic in recent years. Given a pedestrian image of interest called the ``probe image'', the Re-ID task aims to search a large gallery image database for images of the same identity as the probe, which can also be treated as an image retrieval task. It is of great importance in both research fields and video surveillance applications. Despite many years of research on the Re-ID task, it is still an issue full of challenges. Firstly, since the probe and gallery images are taken under non-overlapping cameras, the large variations in visual perspective, illumination and pose can be very confusing when judging whether two images contain the same identity. Secondly, since the human body regions are detected by existing object detection methods such as DPM~\cite{felzenszwalb2010object} or Faster-RCNN~\cite{ren2015faster}, the detected bounding boxes may be inaccurate, which, together with the pose variations, causes the problem of spatial misalignment between two images. Apart from these, the occlusion problem frequently occurring in realistic video surveillance scenes may cause the absence of important clues for identifying someone. Considering the challenges above, learning a view-invariant and robust feature expression is essential for an effective person Re-ID system. \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{attribute-graph} \end{center} \caption{Attributes are consistent with the human recognition mechanism for identifying people.
The pedestrian can be described by a combination of various kinds of attributes, and some of these attributes can be discriminative for distinguishing between similar but different persons.} \label{first graph} \end{figure} With the increasing popularity of deep neural networks, more and more researchers tend to employ deep learning based feature representations rather than hand-crafted features for their excellent generalization on unseen data. After pretraining on ImageNet, we can easily acquire a relatively powerful global feature representation for the target person Re-ID task by further fine-tuning on its own training sets. While simple and effective, these global appearance representations cannot settle the misalignment problem described above because they lack semantic perception of human body parts. Recently, popular research on human pose estimation and attention models~\cite{vaswani2017attention,wei2016convolutional} has inspired many ideas for locating human body parts to alleviate the problems of pose misalignment and background interference. However, these methods either rely heavily on existing pose detection models, which may introduce errors when transferred to the Re-ID task, or use an attention model as a part detector, which may be hard to train due to the shortage of prior semantic knowledge as supervision. Attribute learning for the person Re-ID task has been studied in recent years and proven to be of great help when treated as one kind of mid-level semantic feature, for its invariance to many influences like pose, camera views and lighting conditions. Suppose that we need to identify the man in the upper left corner of Figure~\ref{first graph}; we usually form a description like ``\textit{a young man wearing a yellow T-shirt and blue short pants, carrying a backpack}''. The description is fully made up of attribute information, which indicates the consistency of attribute detection with human cognitive mechanisms.
In most existing approaches, however, attribute information is simply incorporated into global feature learning by designing corresponding attribute classifiers~\cite{lin2017improving,matsukawa2016person}; they overlook two important clues. On one hand, most attributes are associated with local regions and differ from the holistic image-level feature representation, so joint learning of these two kinds of features may cause the heteroscedasticity (a mixture of different knowledge granularity and characteristics) learning problem~\cite{duin2004linear} analysed in~\cite{wang2018transferable}. On the other hand, the description of a stranger is often a combination of several kinds of attributes while some of them are insignificant for re-identification; thus the relationship mining and selection of different attributes are vital for robust learning. To deal with these issues, in this paper we propose a deep learning based work called the Attributes-aided Part Detection and Refinement model (APDR), which incorporates the attribute learning process in a different way. We expect the attribute learning process to have a perception of local human body regions, so that it can be used as a pose-invariant part detector owing to its invariance to many influences like human poses and camera views. Exploited as prior knowledge when localizing body parts, attribute information makes the part detectors easier to learn than attention models. Compared with approaches using existing pose estimation models, attribute detection is directly optimized for the Re-ID task to avoid the model deviation phenomenon and can provide extra semantic information in the meantime. Furthermore, attribute learning can detect regions and objects like \textit{handbag} or \textit{hat}, which may be distinctive for identification, while pose models cannot.
Therefore, the attribute localizer can be considered a combination of an attention model, a human part detection model and a salient object detection model. After that, in order to make our model work more like human experts, who consider the relationship among attributes when identifying people, a simple attribute fusion module is adopted to combine different kinds of attribute information. Taking it as a guidance, we refine the local part features extracted by the localizers to filter out the redundant and irrelevant interference introduced by the learned masks, which results in a powerful refined local descriptor for re-identification. The learned local features, along with the holistic image-level feature, can further improve the accuracy on the person re-identification task. We evaluate our APDR model on two public datasets with attribute annotations to verify the effectiveness of our ideas and demonstrate that our model can achieve state-of-the-art performance compared with other person Re-ID models. The main contributions of our work are summarized as follows: (1) We propose a novel deep model called the Attributes-aided Part Detection and Refinement network (APDR) that, for the first time, utilizes the attribute learning process as a part localizer, which handles the part misalignment problem. To the best of our knowledge, it is the first time that the perceptual ability of attribute learning is explicitly integrated into the person Re-ID task. (2) We design a simple but effective attribute fusion network to simulate the human behavior of identifying people through attributes. (3) The fused attribute information is exploited as a guidance to filter out useless information and refine part features for a better representation. \section{Related Works} \textbf{Person Re-ID.} Most popular person Re-ID algorithms can be categorized into two classes: feature representation learning and metric learning.
For the first category, the human identity labels are usually exploited as supervision for training a classifier over different identities, which can be considered a classification problem. In recent years, CNN-based feature representation learning has been dominating various research fields because of its excellent performance, and the person Re-ID community is no exception. Xiao \etal~\cite{xiao2016learning} propose a joint learning strategy to train a single classifier for multiple domains at the same time, and then fine-tune to adapt to each single domain with a domain guided dropout policy. For metric learning methods, the similarity between different samples is compared for person matching. The input of a deep neural network is often in the form of image pairs or triplets~\cite{ahmed2015improved,li2014deepreid,varior2016gated}. During learning, the model pulls together the features of the same identity and pushes apart the features of two different identities. Varior \etal~\cite{varior2016gated} design a gating function inserted in each CNN layer to compare multi-scale similarities between the input image pair. Zheng \etal~\cite{zheng2017discriminatively} design two types of classifiers which combine feature learning and metric learning together. Though simple and effective, these models with holistic image-level features do not take the human part misalignment problem into consideration. \begin{figure*} \begin{center} \includegraphics[width=0.95\linewidth]{main-graph} \end{center} \caption{The overall architecture of our proposed APDR model. The whole model consists of a two-stream network and is correlated by the part refinement module. For an input triplet, each image first goes through the upper stream to perform identity and attribute learning; several attribute-part detectors for corresponding attributes are learned to make full use of the perceptual ability of attributes.
For the stream below, the model utilizes the learned detectors to extract part features, which, guided by the fused attribute representation, can be further refined. Together with the global feature, we acquire the final feature representation of our APDR model. (Best viewed in color.)} \end{figure*} \textbf{Body part-aligned representations.} To deal with the part misalignment problem in person Re-ID, more and more algorithms focus on extracting local human part features for re-identification; they can be classified into two categories. One is to employ existing human pose estimation algorithms to locate the body parts. Some researchers~\cite{wei2016convolutional,zhao2017spindle,zheng2017pose} adopt CPM~\cite{wei2016convolutional} to predict human body joints and generate body regions to extract part features; Zhao \etal~\cite{zhao2017spindle} then design a tree-structured feature fusion strategy to merge different part features to form the final feature. Zheng \etal~\cite{zheng2017pose} and Su \etal~\cite{su2017pose} rearrange human part patches to generate a new pose-aligned human image. These methods rely heavily on the detection accuracy of existing models trained for other tasks. The other category is based on attention models that locate human parts or salient regions in an unsupervised manner. Liu \etal~\cite{liu2017end} exploit LSTMs as attention modules to locate different human attention parts, and Zhao \etal~\cite{zhao2017deeply} learn several human part maps supervised only by a triplet loss for re-identification. These models are simple to construct but hard to train, because the supervision for re-identification is too weak for the part detectors to learn effective salient regions. \textbf{Attribute learning.} Treated as one kind of middle and high level semantic feature, attributes have provided much valuable auxiliary information for person Re-ID.
Su \etal~\cite{su2016deep} train an attribute learning model that treats the deep attribute feature as the final representation for person matching, which ignores the fact that different people may share similar attributes. Matsukawa and Suzuki~\cite{matsukawa2016person} propose new labels by combining different attribute labels to train extra classifiers in addition to single-attribute labels. Lin \textit{et al.}~\cite{lin2017improving} have provided attribute annotations for two large-scale person Re-ID datasets, Market-1501~\cite{zheng2015scalable} and DukeMTMC-reID~\cite{zheng2017unlabeled}. The algorithms above all treat attribute learning as a simple feature extraction process, ignoring the perceptual ability of attributes. \section{Perceptual attribute detection}\label{perceptual attribute detection} Person attribute learning has been studied extensively in recent years and has been proven beneficial for the person Re-ID task. Human attributes can be grouped into two categories: one corresponds to local parts of the human body or certain regions of an image, such as \textit{T-shirt} or \textit{backpack}; the other consists of high-level semantic attributes that cannot be assigned to a specific region of the human body, or can be considered as associated with the whole human body, like \textit{age} and \textit{gender}. Though containing global information, the latter differ from the holistic image-level feature for their independence of background. Briefly speaking, apart from the auxiliary semantic information brought by attributes, the procedure of attribute detection concentrates on discriminative human body parts and salient objects contained in an input image. Since attribute detection and part localization can be done at the same time, the motivation of our work is to train several attribute-part detectors to fully utilize the perceptual ability of human attributes.
Compared with previous approaches based on pose or attention models, our attribute-part detectors, learned through the perception of human attributes, are easier to train and are directly optimized for the person Re-ID task. Motivated by these observations, we propose a simple attribute detection network based on the ResNet-50 model~\cite{he2016deep}. As analysed in \cite{wang2018transferable}, co-learning the attribute and identity tasks can be beneficial for both; different from the architecture in \cite{wang2018transferable} or \cite{lin2017improving}, however, we separate attribute and image-level feature learning into two branches after the ``pool4'' block to avoid the heteroscedasticity problem~\cite{wang2018transferable}. For the image-level feature learning branch, we simply replace the last $7\times7$ pooling operation by global average pooling (GAP) to accommodate different input resolutions. For the attribute learning branch, we remove the last spatial down-sampling layer of the backbone network to increase the resolution of the final feature map and preserve more details, which is beneficial for the subsequent attribute learning and localization. Let $x$ denote the input image, and let $\mathbf{G}$ and $\mathbf{A}$ represent the global image-level feature extraction branch and the attribute learning branch; the corresponding feature maps are obtained by: \begin{align} \mathbf{GF} = {\mathbf{G}(x; {\theta}_{g})}, \qquad \mathbf{AF} = {\mathbf{A}(x; {\theta}_{a})} \end{align} \begin{align} \mathbf{g} = {\mathcal{GAP}(\mathbf{GF})} \end{align} where ${\theta}_{g}$ and ${\theta}_{a}$ represent the parameters of the backbone network, and $\mathbf{g}$ is the global feature output by the image-level feature learning branch.
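The GAP operation above simply averages the feature map over its spatial dimensions; a minimal NumPy sketch (the shapes used here are illustrative, not the paper's):

```python
import numpy as np

def global_average_pooling(feature_map):
    """GAP: average a (C, H, W) feature map over all spatial
    locations, producing a C-dimensional feature vector."""
    return feature_map.mean(axis=(1, 2))

# Toy (2, 3, 4) feature map: each channel collapses to one value
gf = np.arange(24, dtype=float).reshape(2, 3, 4)
g = global_average_pooling(gf)  # -> array([ 5.5, 17.5])
```

Because the average is taken over whatever spatial extent the map has, the output dimension depends only on the channel count, which is what allows different input resolutions.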
After acquiring the attribute feature map $\mathbf{AF}$, we use it to learn several attribute-part detectors that generate an attention mask for each attribute: \begin{align} \mathbf{M}_i = N_{attri\_detector_i}(\mathbf{AF}) \label{attention weights} \end{align} where $N_{attri\_detector_i}(*)$ is the attribute detector, composed of a convolution and a sigmoid operation to normalize the attention scores at each location. With the generated masks, we further extract each attribute feature by performing a weighted average pooling operation over all locations of the attribute feature map, whose weights are given by the attention masks in Eq.~\ref{attention weights}: \begin{align} \mathbf{m}_i &= \frac{\sum_{(x, y)}\mathbf{AF}(x, y) * \mathbf{M}_i(x, y)}{H*W} \end{align} \begin{align} \mathbf{a}_i &= \mathcal{BN}(\mathcal{FC}(\mathbf{m}_i)) \end{align} where $\mathbf{m}_i$ denotes the pooled feature of the $i$th attribute, $\mathbf{AF}(x, y)$ is the $c$-dim feature vector at location $(x, y)$ of the attribute feature map, and $H$ and $W$ denote the size of the feature map. The averaged feature is further sent into a linear dimension-reduction layer along with a Batch Normalization layer to obtain the final attribute feature $\mathbf{a}_i$. Finally, the attribute features are sent to their corresponding classifiers, supervised by the annotated attribute labels with the cross-entropy loss: \begin{gather} \begin{split} \mathbf{L}_{attri} = -\sum\limits_{i=1}^N {p}_{i} * {\log} {q}_{i} \end{split} \end{gather} where $N$ is the number of attributes, $q_i$ is the predicted probability for the target class $t$ of the $i$th attribute, and $p_i=1$ for its corresponding ground-truth class. Considering that different attributes may target the same or similar human body regions, we manually merge these attributes to share the same attribute attention mask while generating separate feature representations for them.
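A minimal NumPy sketch of one attribute-part detector and the weighted average pooling of Eq.~\ref{attention weights} and the pooling that follows. This is our illustration only: the $1\times1$ convolution is reduced to a per-channel weight vector, and all shapes are toy values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attribute_mask(af, w):
    """One detector: a per-channel weighting (standing in for a 1x1
    convolution) followed by a sigmoid gives a mask M_i in (0, 1).
    af: (C, H, W), w: (C,) -> mask of shape (H, W)."""
    return sigmoid(np.tensordot(w, af, axes=([0], [0])))

def weighted_average_pool(af, mask):
    """Sum of AF(x, y) * M_i(x, y) over all locations, divided by H*W,
    giving the pooled C-dimensional attribute feature m_i."""
    c, h, w_ = af.shape
    return (af * mask[None, :, :]).sum(axis=(1, 2)) / (h * w_)
```

With zero detector weights the mask is uniformly sigmoid(0) = 0.5, i.e. before training each location contributes equally; learning sharpens the mask toward the attribute's region.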
Another motivation to merge the location-shared attributes is to increase the training samples for the attribute-part detector, in that some attributes like \textit{wearing hat or not} may be hard to learn and detect because of the huge imbalance between positive and negative samples in common scenarios; when learned together with the \textit{long/short hair} attribute, however, it is easier for our model to obtain a head-region part detector. Hence, we design $K = 8$ attribute detectors for our model, corresponding to \textit{age}, \textit{backpack}, \textit{bag}, \textit{handbag}, \textit{lower body}, \textit{upper body}, \textit{head} and \textit{gender} for Market-1501. More details are described in Section~\ref{sec:ablation}. For a better optimization of the re-identification task, we adopt both human identity supervision and a triplet metric constraint on the image-level feature: \begin{align} \mathbf{L}_{id} = -\frac{1}{L}\sum\limits_{i=1}^L p_{\mathbf{g}_i}*\log(q_{\mathbf{g}_i}) \end{align} \begin{align} \mathbf{L}_{tri} = \frac{1}{M}\sum\limits_{i=1}^M [{d_i^p} - {d_i^n} + m]_{+} \end{align} where $L$ and $M$ denote the number of identities and triplets within a batch, $[*]_{+} = \max(*, 0)$ is the hinge loss, $d_i^p = \Vert{\mathbf{g}_i^a} - {\mathbf{g}_i^p}\Vert_2^2, d_i^n = \Vert{\mathbf{g}_i^a} - {\mathbf{g}_i^n}\Vert_2^2$ are the distances of the positive and negative pairs, and $m$ is the margin separating them. The final loss for the perceptual attribute detection is composed of three terms: \begin{align} \mathbf{L} = {\mathbf{L}_{id}} + {\mathbf{L}_{tri}} + {\lambda}{\mathbf{L}_{attri}} \end{align} where the parameter ${\lambda}$ is determined by cross-validation to balance attribute and identity learning and avoid over-fitting.
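For a single triplet, the hinge term $[d_i^p - d_i^n + m]_{+}$ above reads as follows in NumPy (the margin value 0.3 is our illustrative choice, not taken from the paper):

```python
import numpy as np

def triplet_loss(g_a, g_p, g_n, margin=0.3):
    """Single-triplet hinge loss: squared Euclidean distances of the
    anchor to the positive and the negative, hinged at the margin."""
    d_pos = np.sum((g_a - g_p) ** 2)
    d_neg = np.sum((g_a - g_n) ** 2)
    return max(d_pos - d_neg + margin, 0.0)
```

An "easy" negative that is already farther than the margin contributes zero loss, so only triplets that violate the margin drive the gradient.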
\section{Part Feature Refinement} \label{sec:part feature refinement} When people re-identify someone, they usually pick out the most discriminative attributes, such as \textit{red coat} or \textit{white backpack}, and neglect common attributes which are not helpful for identification. The relationship among attributes is also important, since we always combine different kinds of attributes to perform recognition. Hence, we design a simple fusion network to build the relationship among various attributes. The module is composed of a dense layer that aggregates all attribute information into a single vector; the attributes can be merged and selected through the trainable weights ${\theta_f}$. \begin{align} \mathbf{f}_{attri} = \mathcal{FC}(\mathbf{a}_1, \mathbf{a}_2, ..., \mathbf{a}_N; {\theta}_{f}) \end{align} Besides this, as described in the previous section, people with different identities may share similar attributes that can confuse the judgment of our model, so re-identifying a person by directly comparing attribute information may not be appropriate. Note that, despite sharing similar attributes, the corresponding local features of those parts should still differ from each other due to the different identities. Based on this observation, we make use of the attribute-part detectors obtained in the previous section to extract discriminative human part features. Because the attention regions are learned through attribute labels, whose supervision for localization is relatively weak, these regions may overlap with each other and contain some irrelevant background information; we therefore take the fused attribute feature as a guidance to filter out insignificant components and refine the part features into a more robust representation.
The part feature refinement process is as follows: \begin{align} \mathbf{l}_i = \frac{\sum_{(x, y)}\mathbf{PF}(x, y) * \mathbf{M}_i(x, y)}{H * W} \end{align} \begin{align} \mathbf{p}_i = \mathbf{l}_i * \sigma(W_{pi}\tanh(W_{li}\mathbf{l}_i + W_{hi}\mathbf{f}_{attri} + b_i)) \end{align} where $\mathbf{PF} = {\mathbf{P}(x; {\theta}_{p})}$ denotes the part feature map, $\mathbf{l}_i$ and $\mathbf{p}_i$ denote the part feature before and after refinement, and $W_{*}$ are linear transformation matrices. The refinement weights are calculated by correlating the part features with the fused attribute information. In summary, the motivation for designing the attribute fusion module lies in two aspects. On one hand, by aggregating different kinds of attribute information to form a mid-level human semantic feature, we want to simulate the recognition process of human beings and form a compact attribute descriptor. On the other hand, the fused attribute feature can serve as a guidance for the part feature refinement process, promoting the performance of the part branch. All the refined part features $\mathbf{p}_i$ are concatenated and then dimension-reduced to form the final local feature. Together with the holistic feature, the final feature representation is shown in Eq.~\ref{final feature}; it contains three types of information: refined human part features, fused attribute information and the holistic image-level feature. We do not concatenate the fused attribute feature into our final feature representation for two reasons: first, it is designed for aggregating attribute information but is not directly optimized for distinguishing between different people, especially those with similar attributes; second, during part refinement the fused attribute information has already been integrated into the local representations, so its effectiveness is reflected in the final feature expression.
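The masked pooling and gating equations above can be sketched as follows; the weight matrices here are random placeholders standing in for learned parameters, and the function name is our own:

```python
import numpy as np

def refine_part(PF, M_i, f_attri, W_l, W_h, W_p, b):
    """Attribute-guided part feature refinement (sketch).

    PF      : part feature map, shape (H, W, C)
    M_i     : attention mask of attribute i, shape (H, W)
    f_attri : fused attribute feature, shape (D_a,)
    W_l (D, C), W_h (D, D_a), W_p (C, D), b (D,): linear maps
    (learned in the model, random placeholders here).
    """
    H, W_dim, C = PF.shape
    # l_i: mask-weighted average pooling of the part feature map
    l_i = np.tensordot(M_i, PF, axes=([0, 1], [0, 1])) / (H * W_dim)
    # channel-wise gate in (0, 1), modulated by the fused attribute feature
    gate = 1.0 / (1.0 + np.exp(-(W_p @ np.tanh(W_l @ l_i + W_h @ f_attri + b))))
    return l_i * gate
```

Because the sigmoid gate lies in $(0,1)$, the refinement can only down-weight channels of $\mathbf{l}_i$, acting as an attribute-conditioned soft filter.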
The part feature refinement module is optimized during the second training phase, where the local feature is supervised by the identity loss and the final feature by the triplet loss. Since person Re-ID is more a distance metric task than a classification task, only the triplet loss is applied to make the global and local features cooperate well in the final representation. \begin{align} \mathbf{f}_p = \mathcal{FC}(\mathbf{p}_1, \mathbf{p}_2, ... , \mathbf{p}_K; \theta_p) \end{align} \begin{align} \mathbf{f} = [\mathbf{f}_p, \mathbf{g}] \label{final feature} \end{align} \section{Implementation Details} We implement our proposed algorithm in the PyTorch framework on a GTX Titan Xp GPU with 12GB memory. We adopt ResNet-50 pretrained on ImageNet as our backbone network. The dimensions of the holistic and part features are both set to 256, forming the 512-d final feature. The number of learned attribute masks is set to 8 for both datasets. We exploit a two-stage training scheme. In the first stage, we perform perceptual attribute detection to simultaneously obtain the global image-level feature, individual attribute features and perceptual attribute masks. With the learned attribute masks and individual attribute features, in the second training stage we mainly optimize the attribute fusion module and part refinement module to obtain the refined local features. The whole network is optimized using stochastic gradient descent (SGD) with momentum on mini-batches. The initial learning rate for the first training stage is set to 0.01 and decayed by a factor of 0.2 every 50 epochs. In the second training phase, the learning rate setting of the attribute fusion and part feature refinement modules is the same as above, while for the perceptual attribute learning module, which has already been optimized in the first stage, we set a small learning rate of 0.0001 to keep the learned features and masks basically unchanged for stable learning.
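The step-decay schedule used in the first training stage can be written compactly; here we assume "decreased by 0.2" means multiplying the learning rate by 0.2, which is an interpretation on our part:

```python
def lr_at_epoch(epoch, base_lr=0.01, decay=0.2, step=50):
    """Step decay: multiply the base learning rate by `decay`
    once every `step` epochs (multiplicative interpretation)."""
    return base_lr * decay ** (epoch // step)
```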
The hyper-parameters $m$ and $\lambda$ in the loss function are set to 0.2 and 0.1, while the weight decay and momentum are set to 0.0005 and 0.9, respectively. \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{mask_heatmap_v2} \caption{Examples of the attention masks learned in different settings. We adopt $K = 8$ in our APDR model. (Best viewed in color.)} \label{attribute maps} \end{figure*} \section{Experiments} In this section, we report our experimental results on standard datasets and give a detailed ablation study over the different modules of our APDR model. Extensive experiments are conducted on two large and challenging benchmarks: Market-1501~\cite{zheng2015scalable} and DukeMTMC-reID~\cite{ristani2016performance,zheng2017unlabeled}, which demonstrate that our approach is comparable to other state-of-the-art algorithms. \subsection{Datasets and evaluation metric} \textbf{Market-1501} is one of the largest and most challenging person ReID datasets to date. The original images are collected from 6 cameras in front of a supermarket at Tsinghua University, and the pedestrian bounding boxes are cropped by the Deformable Part Model (DPM)~\cite{felzenszwalb2010object}. The dataset contains 32668 annotated bounding boxes of 1501 identities, and 27 kinds of attribute labels for each identity. Among the 1501 identities, 12936 images of 751 identities are partitioned for training and the remaining 750 identities are left for testing. \textbf{DukeMTMC-reID} is a subset of the DukeMTMC dataset~\cite{ristani2016performance} designed for person re-identification. It consists of 36411 human bounding boxes belonging to 1812 identities, among which 1404 identities appear in more than two cameras and 408 identities that appear in only one camera are treated as distractors. The dataset provides 23 kinds of attributes for each identity. The training set contains 16522 images of 702 identities and the remaining 702 identities are assigned to the testing set.
Following widely used evaluation protocols, we adopt both the cumulative matching characteristics (CMC) and the mean average precision (mAP) to evaluate our model under the single-query setting. The CMC score measures the accuracy of identifying the correct match at each rank. However, when there are multiple ground-truth matches in the gallery, it cannot tell how well all the ground-truth matching images in the gallery are ranked. To remedy this, we also report mAP scores of our model. \subsection{Ablation study} \label{sec:ablation} \begin{table*} \centering \caption{The validation performance with different numbers of attribute-part detectors and the comparisons with different baselines over two datasets.} \label{ablation study} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}*{Models} & \multicolumn{4}{|c|}{Market-1501} & \multicolumn{4}{|c|}{DukeMTMC-reID} \\ \cline{2-9} & rank-1 & rank-5 & rank-10 & mAP & rank-1 & rank-5 & rank-10 & mAP \\ \hline Baseline & 87.6 & 94.9 & 96.9 & 69.0 & 76.0 & 87.5 & 90.8 & 57.6 \\ Baseline + Triplet Loss & 88.6 & 95.8 & 97.3 & 71.9 & 79.7 & 89.4 & 92.1 & 62.1 \\ 1 mask for attributes & 90.3 & 96.2 & 98.0 & 76.4 & 81.1 & 90.5 & 93.4 & 65.8 \\ 12/10 masks for attributes & 90.5 & 96.5 & 97.7 & 76.7 & 80.8 & 90.1 & 93.2 & 65.9 \\ Perceptual attribute learning & 91.3 & 96.5 & 97.9 & 77.0 & 82.0 & 91.2 & 93.8 & 66.4 \\ \hline Part branch & 89.5 & 96.4 & 97.8 & 73.7 & 80.5 & 90.5 & 93.2 & 64.6 \\ Refined part branch & 90.7 & 96.6 & 97.9 & 76.0 & 81.1 & 90.3 & 93.3 & 65.2 \\ \hline APDR (Ours) & 93.1 & 97.2 & 98.2 & 80.1 & 84.3 & 92.4 & 94.7 & 69.7 \\ \hline \end{tabular} \end{table*} \vspace{1ex} \noindent\textbf{The number of attribute-part detectors.} We empirically study how the number of attribute-part detectors affects our model's performance.
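The CMC and mAP metrics used in our evaluation can be sketched as follows; the input convention (per-query binary match lists, already sorted by descending similarity) is our own simplification:

```python
import numpy as np

def cmc_and_map(ranked_matches):
    """Compute the CMC curve and mAP from per-query binary match lists.

    ranked_matches: list of equal-length sequences; entry j of each is 1
    if the j-th ranked gallery image shares the query identity, else 0.
    """
    max_rank = max(len(m) for m in ranked_matches)
    cmc = np.zeros(max_rank)
    aps = []
    for matches in ranked_matches:
        matches = np.asarray(matches, dtype=float)
        first_hit = np.flatnonzero(matches)
        if first_hit.size:
            cmc[first_hit[0]:] += 1  # correct match found at this rank or better
        # average precision: mean of precision@k over the true-match ranks
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append(np.sum(precision * matches) / max(matches.sum(), 1))
    return cmc / len(ranked_matches), float(np.mean(aps))
```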
The number of annotated attribute labels in Market-1501 is 27, among which 8 and 9 attributes respectively correspond to the colors of upper-body and lower-body clothing, so we first merge these into two multi-class attributes to form 12 kinds of attributes. Furthermore, as described in the previous section, we merge attributes targeted at the same or similar regions to share one mask. We do not share the same mask between \textit{gender} and \textit{age} because, although both can be considered as related to the whole body, they carry different high-level semantic meanings and may emphasize different regions. Finally, we form 8 different masks for the 12 kinds of attributes. Based on the above analysis, we conduct the experiment on the Market-1501 dataset with different numbers of detectors $K = 1, 8, 12$. When $K = 1$, we simply generate one mask for all attributes, and the mask can be seen as a detector for the whole body. The results are shown in Table~\ref{ablation study} and the learned masks are shown in Figure~\ref{attribute maps}. We find that the learned masks are able to concentrate on the whole human body region and eliminate the influence of irrelevant background, but still miss some vital regions like heads and handbags. When $K = 12$, we observe that some of the detected regions are similar to each other and thus redundant, and some learned masks cannot target their corresponding regions precisely due to imbalanced training samples, such as the one relevant to \textit{hat}. When $K = 8$, which is denoted as our ``perceptual attribute learning'', the learned 8 masks concentrate on different salient regions and are highly consistent with their target attributes. Clearly, merging the \textit{hair} and \textit{hat} attributes into one \textit{head} mask yields more satisfying localization results than the $K = 12$ setting.
We also find that the \textit{age} and \textit{gender} masks, which correspond to high-level semantic attributes, have their own focuses even though both relate to the whole body. The \textit{gender} mask prefers the regions of the head and lower body parts; we believe the reason is that we usually judge someone's gender first by observing their facial features or hair length, and lower-body clothing can also be more discriminative than upper-body clothing, such as `dress' for female and `pants' for male, compared to `T-shirt' for both genders. The \textit{age} mask focuses mainly on the upper body region, probably because humans often estimate someone's age from the style of their upper clothing. In summary, with an appropriate choice for the number of attribute-part detectors, the performance is the best among the three settings and the learned masks are satisfactory. For DukeMTMC-reID's 23 kinds of attributes, we likewise merge the 8 \textit{colors of upper-body clothing} and 7 \textit{colors of lower-body clothing} into two multi-class attributes, forming 10 different kinds of attributes, and design 8 detectors for them. The 8 detectors correspond to \textit{backpack}, \textit{bag}, \textit{handbag}, \textit{feet}, \textit{gender}, \textit{head}, \textit{upper body} and \textit{lower body}. Comparison results are shown in Table~\ref{ablation study}. \vspace{1ex} \noindent\textbf{Baseline comparison.} The comparisons of different baselines are listed in the first block of Table~\ref{ablation study}. Compared with the simple baseline which only adopts identity labels as supervision, our attribute learning baseline outperforms it by a large margin, which verifies the effectiveness of learning with attribute information. We add the triplet loss to our baseline because person re-identification is essentially a metric learning problem.
Taking the perceptual attribute learning as our strong baseline, we obtain the best performance; in addition, the learned attribute features and attention masks can be utilized for the subsequent part feature extraction and refinement. We also conduct a baseline experiment that shares the same feature representation for both attribute and identity learning, but the model is hard to optimize and results in unstable accuracy, so we do not list it in Table~\ref{ablation study}. This phenomenon verifies the existence of the heteroscedastic learning problem and the effectiveness of our baseline structure design, which separates the two learning tasks. However, considering the correlation between attribute and identity information, the two branches share network weights in the shallow layers and, moreover, the learned robust attribute information is later incorporated into the final feature representation. In this way, we make full use of the semantic attribute information while avoiding the heteroscedastic learning problem. \vspace{1ex} \noindent\textbf{Effect of attribute fusion and part feature refinement.} We empirically analyze the effectiveness of the attribute fusion and part refinement modules in our model. The motivation for designing the attribute fusion module has been described in Section~\ref{sec:part feature refinement}; its effectiveness is reflected in the refined part features. To further validate the effectiveness of refinement, we first concatenate all initial part features to form the local descriptor and evaluate on the test sets; the results are denoted as ``part branch''. From the results we find that using part features alone can outperform the ``baseline with triplet loss'', which demonstrates the effectiveness of our learned attribute attention masks for discriminative local feature extraction. After refinement, the accuracy of the part branch is further improved.
Finally, by concatenating the refined local feature and the global feature, we get the results of our full APDR model. It improves accuracy by a large margin compared with the attribute learning baseline: a $1.8\%$ gain in rank-1 and $3.1\%$ in mAP on Market-1501, and $2.3\%$ in rank-1 and $3.3\%$ in mAP on DukeMTMC-reID. \subsection{Comparison with the state-of-the-arts} In this section, we present comparisons with several state-of-the-art algorithms. Since our proposed approach involves both attribute learning and part detection, we compare our model with both types of algorithms. \vspace{1ex} \noindent\textbf{Results on Market-1501.} On Market-1501, we compare our proposed algorithm with many state-of-the-art approaches, including hand-crafted feature algorithms: LOMO+XQDA~\cite{liao2015person} and BoW~\cite{zheng2015scalable}; metric learning based algorithms: KISSME~\cite{koestinger2012large}, WARCA~\cite{jose2016scalable} and SCSP~\cite{cheng2016person}; attribute learning algorithms: the Attribute-Person Recognition network (APR)~\cite{lin2017improving} and the Attribute-Complementary Re-id Net (ACRN)~\cite{schumann2017person}; and algorithms based on part detection: Part Aligned Deep Features (PADF)~\cite{zhao2017deeply}, Harmonious Attention Network (HA-CNN)~\cite{li2018harmonious}, Refined Part Pooling (RPP)~\cite{sun2017beyond}, Attention-Aware Compositional Network (AACN)~\cite{xu2018attention} and Part-Aligned Bilinear Representations (PABR)~\cite{suh2018part}.
\begin{table} \caption{Comparisons with state-of-the-arts on Market-1501.} \label{SOTA-market} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}*{Methods} & \multicolumn{4}{|c|}{Market-1501} \\ \cline{2-5} & rank-1 & rank-5 & rank-10 & mAP \\ \hline LOMO+XQDA & 43.8 & - & - & - \\ BoW & 35.8 & 52.4 & 60.3 & 14.8 \\ KISSME & 44.4 & 63.9 & 72.2 & 20.8 \\ WARCA & 45.2 & 68.2 & 76.0 & - \\ SCSP & 51.9 & 72.0 & 79.0 & 26.4 \\ \hline ACRN & 83.6 & 92.6 & 95.3 & 62.6 \\ APR & 84.3 & 93.2 & 95.2 & 64.7 \\ \hline PADF & 81.0 & 92.0 & 94.7 & 63.4 \\ AACN & 85.9 & - & - & 66.9 \\ HA-CNN & 91.2 & - & - & 75.7 \\ PABR & 91.7 & 96.9 & 98.1 & 79.6 \\ RPP & 93.8 & \textbf{97.5} & \textbf{98.5} & 81.6 \\ \hline APDR (Ours) & 93.1 & 97.2 & 98.2 & 80.1 \\ +re-ranking & \textbf{94.4} & 97.0 & 97.9 & \textbf{90.2} \\ \hline \end{tabular} \end{table} The detailed comparison results are listed in Table \ref{SOTA-market}. We can observe from the table that our approach performs better than most state-of-the-art algorithms, falling only slightly behind ``RPP'', and achieves the best accuracy in terms of both rank-1 and mAP after applying the re-ranking process~\cite{zhong2017re}, which is often adopted as a post-processing step for re-identification to further boost accuracy. Though simple to construct, the ``RPP'' model cannot handle well the large pose variations which occur more frequently in DukeMTMC-reID, resulting in inferior performance there, and its partitioned parts carry less semantic meaning for specific human body parts than those of our model. Our model outperforms ``PABR'', which also adopts a two-stream network architecture, by $1.4\%$ in rank-1 and $0.5\%$ in mAP before re-ranking. It is worth noting that our algorithm performs much better than either attribute learning methods or most human part based algorithms.
For the former, we fully exploit the perception ability of human attributes instead of simply classifying them correctly, and for the latter, part detection based on attribute information is more reliable for the person Re-ID task than detection based on existing pose models or attention models. \vspace{1ex} \noindent\textbf{Results on DukeMTMC-reID.} We also evaluate our algorithm on another large benchmark with attribute annotations. Compared with Market-1501, person images from DukeMTMC-reID have more variations in resolution and background due to the more complex scene layout, resulting in a more challenging task. We compare our approach with APR~\cite{lin2017improving}, ACRN~\cite{schumann2017person}, HA-CNN~\cite{li2018harmonious}, AACN~\cite{xu2018attention}, RPP~\cite{sun2017beyond} and PABR~\cite{suh2018part}; Table~\ref{SOTA-duke} reports the results. From the results we find that our model performs equally well on the DukeMTMC-reID benchmark and achieves state-of-the-art performance. Although the `dilation' structure is adopted to further improve the accuracy of the original ``PABR'' model, our algorithm performs almost the same as the final ``PABR'' model on rank-1, and on the whole achieves better performance. As on Market-1501, the accuracy can be further improved by a large margin after the re-ranking process.
\noindent \begin{table} \caption{Comparisons with state-of-the-arts on DukeMTMC-reID.} \label{SOTA-duke} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}*{Methods} & \multicolumn{4}{|c|}{DukeMTMC-reID} \\ \cline{2-5} & rank-1 & rank-5 & rank-10 & mAP \\ \hline APR & 70.7 & - & - & 51.9 \\ ACRN & 72.6 & 84.8 & 88.9 & 52.0 \\ \hline AACN & 76.9 & - & - & 59.3 \\ HA-CNN & 80.5 & - & - & 63.8 \\ RPP & 83.3 & - & - & 69.2 \\ PABR & 84.4 & 92.2 & 93.8 & 69.3 \\ \hline APDR (Ours) & 84.3 & 92.4 & 94.7 & 69.7 \\ +re-ranking & \textbf{87.3} & \textbf{93.0} & \textbf{95.2} & \textbf{83.2} \\ \hline \end{tabular} \end{table} \section{Conclusion} In this paper, we proposed a novel Attributes-aided Part Detection and Refinement model aiming to utilize the perceptual ability of attribute learning and to solve the part misalignment problem in person Re-ID at the same time. We demonstrated that the attribute information is associated with discriminative body parts and salient regions, and thus can be exploited to generate attribute-part detectors. Besides, in order to simulate the human cognitive mechanism of dealing with multiple attributes, we merged attributes through a simple module to obtain a more compact attribute representation. Taking the learned attribute-part detectors as part localizers, we extracted and further refined the local part features guided by the fused attribute information to eliminate the noise introduced by detection deviations. Experiments on two large popular benchmarks verified the effectiveness of our model. In future work, we will dig deeper into understanding the human recognition mechanism for re-identifying people, including but not limited to attribute information. {\small \bibliographystyle{ieee}
\section{Introduction} Natural language processing (NLP) approaches generally belong to either one of two main strands, which also often appear to be mutually exclusive. On the one hand we essentially have symbolic methods and models which are open, in the sense that their internal mechanisms as well as their conclusions are easy to inspect and understand, but which deal with linguistic patterns in a relatively strict manner. On the other hand, we have adaptive models based on machine learning (ML) which are usually opaque to inspection and too complex for their reasoning to be intelligible, but which achieve increasingly impressive feats that suggest deeper understanding. Presently, there is a strong research focus on the latter, and for good reason. Among adaptive models, deep neural networks, for one, managed to jointly learn and improve performance in classic NLP tasks such as part-of-speech tagging, chunking, named-entity recognition, and semantic role-labeling~\cite[as early as][]{collobert2008unified}. In other cases, modern ML enabled methods that did not exist before, e.g., estimation of semantic similarity using word embeddings~\cite{mikolov2013efficient}. More recently, Bidirectional Encoder Representations from Transformers (BERT) have shown that pre-trained general models can be fine-tuned to achieve state-of-the-art performance in specific language understanding tasks such as question answering and language inference~\cite{devlin-etal-2019-bert}. Nonetheless, symbolic methods possess several proper and important features, namely that they can offer human-readable representations of knowledge, as well as language understanding through formal and inspectable rule-based logical inference. Why do we observe this apparent trade-off between openness and adaptivity?
Initial approaches to NLP were of a symbolic nature, based on rules written by hand, or in algorithms akin to the ones that are used for programming language interpreters and compilers, such as recursive descent parsers. It became apparent that the diversity of grammatical constructs that can be found in natural language is too large to be tackled in such a way. The problem is compounded by ungrammatical constructs that are nevertheless frequent in real-world language usage (e.g., simple mistakes, neologisms, slang). In other words, content in natural language is generated by actors that are much more complex, and also more error-prone and error-tolerant than conventional algorithms. ML is a natural fit for this type of problem and, as we mentioned, vastly surpasses the capabilities of human-created symbolic systems in a variety of tasks. We suggest however that there is a ``hidden relationship'' between explicit symbolic manipulation rules and modern ML: the latter can be seen as a form of ``automatic programming'' through large-scale statistical learning processes, that amount to the generation of highly complex programs through adaptive pressure instead of human programmers' efforts. It does not matter if it is gradient descent on a multi-layered network topology, or something more prosaic like entropy reduction in a decision tree, it is still program generation through adaptation. The capability of these methods to generate such complex programs is what allows them to tackle the complexities of natural language (NL), but it is also this very complexity that makes them opaque. We can thus imagine a double dichotomy open/opaque -- strict/adaptive. We argue that existing approaches generally fall into either the \emph{open-strict} or \emph{opaque-adaptive} categories. A few approaches have ventured into the \emph{open-adaptive} domain~\cite{banarescu2013abstract,mausam2012open} and our contribution aims at significantly expanding this direction.
Before discussing our approach, let us consider why \emph{open-adaptive} is a desirable goal. The work we present here was performed in the context of a computational social science (CSS) research team, where NLP is a scientific instrument capable of assisting in the analysis of text corpora that are too vast for humans to study in detail. We argue that further progress in the study of socio-technical systems and their dynamics could be enabled by \emph{open-adaptive} scientific instruments for language understanding. In current CSS research, the more common approaches aim to transform natural language documents into structured data that can be more easily analyzed by scholars and are referred to by a variety of umbrella terms such as ``{text mining}''~\cite{srivastava2009text}, ``{automated text analysis}''~\cite{grimmer-stewart-2013} or ``{text-as-data methods}''~\cite{wilkerson2017large}. They exhibit a wide range of sophistication, from simple numerical statistics to more elaborate ML algorithms. Some methods indeed rely essentially on scalar numbers, for instance by measuring text similarity (e.g., with cosine distance~\cite{singhal2001modern}) or attributing a valence to text, as in the case of ideological estimation~\cite{sim-2013-mea} or sentiment analysis~\cite{pang2008opinion}, which in practice may be used to appraise the emotional content of a text (anger, happiness, disagreement, etc.) or public sentiment towards political candidates in social media~\cite{wang2012system}. Similarly, political positions in documents may be inferred from so-called ``Wordscores''~\cite{lowe2008understanding} -- a popular method in political science that also relies on the summation of pre-computed scores for individual words, and has more refined elaborations, e.g. with Bayesian regularization~\cite{monroe2008fightin}. 
Other methods preserve the level of words: such is the case with term and pattern extraction (i.e., discovering salient words through the use of helper measures like term frequency--inverse document frequency (TF-IDF)~\cite{salton1988term}), so-called ``Named Entity Recognition''~\cite{nadeau2007survey} (used to identify people, locations, organizations, and other entities mentioned in corpora, for example in news corpora~\cite{diesner-2005-revealing} or Twitter streams~\cite{ritter2011named}) and ad-hoc uses of conventional computer science approaches such as regular expressions to identify chunks of text matching against a certain pattern (for example, extracting all p-values from a collection of scientific articles~\cite{chavalarias2016evolution}). Another strand of approaches operates at the level of word sets, including those geared at topic detection (such as co-word analysis~\cite{leydesdorff-2017-co-word}, Latent Dirichlet Allocation (LDA)~\cite{blei2003latent} and TextRank~\cite{mihalcea2004textrank}, used to extract the topics addressed in a text) or used for relationship extraction (meant at deriving semantic relations between entities mentioned in a text, e.g., is(Berlin, City)) \cite{angeli2015leveraging}. Recent advances in embedding techniques have also made it possible to describe topics extensionally as clusters of documents in some properly defined space \cite{angelov2020top2vec,le2014distributed}. Overall, these techniques provide useful approaches to analyze text corpora at a high level, for example, with regard to their main entities, relationships, sentiment, and topics. However, there is limited support to detect, for instance, more sophisticated claim patterns across a large volume of texts, what recurring statements are made about actors or actions, and what are the qualitative relationships among actors and concepts.
This type of goal, for example, extends semantic analysis to a socio-semantic framework \citep{roth-sociacs} which also takes into account actors who make claims or who are the target of claims \cite{diesner-2005-revealing}. It is also particularly interesting to consider the model of knowledge representation that is implicitly or explicitly associated with the various NLP/text mining/information extraction approaches. To illustrate, on one extreme we can consider traditional knowledge bases and semantic graphs, which are open in our sense, but also limited in their expressiveness and depth. On the other, we have the extensive knowledge opaquely encoded in neural network models such as BERT or GPT-2/3~\cite[e.g.][]{brown2020language}. Beyond the desirability of open knowledge bases for their own sake, we propose that a language representation that is convenient for both humans and machines can constitute a \emph{lingua franca}, through which systems of cognitive agents of different natures can cooperate in a way that is understandable and inspectable. Such systems could be used to combine the strengths of symbolic and statistical inference. The central idea of our approach is the \emph{Semantic Hypergraph (SH)}, a novel knowledge representation model that is intrinsically recursive and accommodates the natural hierarchical richness of NL. The SH model is hybrid in two senses. First, it attempts to combine the strengths of ML and symbolic approaches. Second, it is a formal language representation that reduces but tolerates ambiguity, and that also reduces structural variability. We will see that SH enables simple methods of pattern detection to be more powerful and less brittle, that it is a good compromise for intelligibility both for humans and machines, and that it provides a semantically deeper starting point (in terms of explicit meaning) for further algorithms to operate and collaborate on. 
In the next section we discuss the state of the art, comparing SH to a number of approaches from various fields and eras. We then describe the structure and syntax of SH, followed by an explanation of how modern and standard NLP ML-based building blocks provided by an open source software library~\cite{honnibal2015improved} can be used in combination with a random forest classifier and a simple search tree to parse NL to SH. Here we also provide precision benchmarks of our current parser, which is then employed in the experiments that follow. We performed a set of experiments of a rather diverse nature, to gather evidence of SH's usefulness in a variety of roles, of its potential to tackle the challenge stated in this introduction, and to obtain empirical insights. One important language understanding task is information extraction from text. One formulation of such a task that attracts significant attention is that of Open Information Extraction (OIE) --- the domain-free extraction from text of tuples (typically triplets) representing semantic relationships~\cite{etzioni2008open}. We will show that a small and simple set of SH patterns can produce competitive results in an OIE benchmark, when pitted against more complex and specialized systems in that domain. We will demonstrate concept taxonomy inference and co-reference resolution, followed by claim and conflict identification in a database of news headers. We will show how SH can be used to generate semantically rich visual summaries of text. \section{Related Work} \paragraph{Knowledge bases.} As a knowledge representation formalism, it is interesting to compare SH with traditional approaches. Let us start with triplet-based ones.
For example, the \emph{Semantic Web}~\cite{berners2001publishing, shadbolt2006semantic} community tends to use standards such as \emph{RDFa}~\cite{adida2008rdfa}, which represent knowledge as \emph{subject-predicate-object} expressions, and are conceptually equivalent to semantic graphs~\cite{allen1982s, sowa2014principles} (similarly, a particular type of hypergraph has been used in~\cite{cattuto2007network} to represent resources tagged by users, yet this also reduces to a fixed triplet conceptualization). Despite their usefulness for simple cases, such approaches cannot hope to match the semantic sophistication of what can be conveyed with open text. Binary relationships and lack of recursion limit the expressive power of semantic graphs, and we will see how SHs can represent semantic information that is lost in the graphic representation, for example the ability to express $n$-ary relationships, propositions about propositions and constructive definitions of concepts. A further type of approach relying on knowledge bases is epitomized by the famous Cyc~\cite{lenat1990cyc} project, a multi-decade enterprise to build a general-purpose and comprehensive system of concepts and rules. It is an impressive effort, nevertheless hindered by the limitations that we alluded to in the previous section concerning the ambiguity and diversity of semantic structures contained in NL, given that it relies purely on symbolic reasoning. Cyc belongs to a category of systems that are mostly concerned with question answering, a different aim than that of the work we propose here, which is more concerned with aiding in the analysis and summarization of large corpora of text for research purposes, especially in the social sciences, while not requiring full disambiguation of meaning nor perfect reasoning or understanding.
Several other notable knowledge bases of a similar semantic graph nature have been developed, some relying on collaborative human efforts to gather ground assertions, for example MIT's ConceptNet~\cite{speer2017conceptnet} or ATOMIC~\cite{sap2019atomic} from the Allen Institute, some on very rigorous scholarly annotation efforts, as is the case with WordNet~\cite{miller1995wordnet} and its multiple variants, some on wiki-like platforms such as WikiData~\cite{vrandevcic2012wikidata}, and some on mining relationships from Wikipedia proper, as is the case with DBPedia~\cite{auer2007dbpedia}; more recently, a transformer language model has been proposed to automatically extend common-sense knowledge bases~\cite{bosselut2019comet}. We envision that such general-knowledge bases could be fruitfully integrated with SHs for various purposes, but such endeavours are beyond the scope of this work. We are instead interested in demonstrating what can be achieved by going beyond such non-hypergraphic approaches. \paragraph{Hypergraphic approaches to knowledge representation.} Hypergraphs were proposed as early as the 1970s as a general solution for knowledge representation~\cite{boley1977directed}. More recently, Ben Goertzel produced similar insights~\cite{goertzel2006patterns}, and in fact included a hypergraphic database called AtomSpace as the core knowledge representation of his OpenCog framework~\cite{hart2008opencog}, an attempt to make Artificial General Intelligence emerge from the interaction of a collection of heterogeneous systems. As is the case with Cyc, the goals of OpenCog are however quite distinct from the aim of our work. A model that shares similarities with ours but purely aims at solving a meaning matching problem is that of \emph{Abstract Meaning Representation (AMR)}~\cite{banarescu2013abstract}. AMR is based on PropBank verbal propositions and their arguments~\cite{palmer2005proposition}, ensuring that all such meaning structures can be represented.
SH completeness is instead based on Universal Dependencies~\cite{nivre2016universal}, ensuring that all cataloged grammatical constructs can be represented. AMR's goal is to purely abstract meaning, while SH accommodates the ambiguity of the original NL utterances, bringing several important benefits: it makes their computational processing tractable in further ways, tolerates mistakes better and preserves communication nuance that would otherwise be lost. Furthermore, it remains open to structures that may not be currently envisioned. Parsing NL into AMR is a particularly hard task and, to our knowledge, there is currently no parser that approaches the capabilities of what we will demonstrate in this work. In part, this is a practical problem: we will see how we can take advantage of intermediary NLP tasks that are well studied and developed to build an NL to SH parser. Doing the same for AMR requires constructing training data through extensive human annotation efforts. It could be argued that this is still a preferable goal, no matter how distant, given that AMR removes all ambiguity from statements. Here we point out that this aspect of AMR is also a downside, firstly because it makes all failures of understanding catastrophic (we will see how this is not the case for SH), and secondly because NL is inherently ambiguous. It is often the case that even human beings cannot fully resolve ambiguities, or that an ambiguous statement gains importance later on, with more information. We aim to define SH as a \emph{lingua franca} for the collaboration of human and algorithmic actors of several natures, a less rigid goal than the one embodied by AMR. \paragraph{Free text parsing.} A classical NLP task is that of making explicit the grammatical structure of a sentence in the form of a parse tree. A particularly common type of such a tree in current use is the Dependency Parse Tree (DPT), based on dependency grammars.
We will see that our own parser takes advantage of DPTs (among other high-level grammatical / linguistic features) as intermediary steps, but it is also interesting to notice that DPTs themselves can be considered a type of hypergraphic representation of language~\cite{reddy2016transforming}. In fact, as we will discuss below, they are already employed in various targeted language understanding tasks in a CSS context. From the perspective of hypergraphic representation of language, the fundamental difference between DPTs and SHs is that the former aim at expressing the grammatical structure of language, while the latter express its semantic structure, in the simplest possible way that enables meaning extraction in a principled and predictable manner. In contrast to the \emph{ad-hoc} nature of information extraction from DPTs, we will see that SHs structure NL in a way akin to functional computer languages, and allow, for example, for a generic methodology of extracting patterns. The expressive power of such patterns will be demonstrated in several ways, namely by achieving competitive results in a standard Open Information Extraction task. We will see that the type system of SHs (relying on $8$ types) is much simpler than the diversity of grammatical roles contained in a typical set of dependency labels (such as Universal Dependencies), and we will also provide empirical evidence that SHs are not isomorphic to DPTs. In the realm of OIE, one approach in particular with which our work shares some similarities is that of learning \emph{open pattern templates}~\cite{mausam2012open}. These pattern templates combine at the same symbolic level dependency parse labels and structure, part-of-speech tags, explicit lexical constraints and higher-order inferences (e.g.
that some term refers to a \emph{person}), to achieve sophisticated language understanding in the extraction of OIE tuples, being able to extract relations that are not only of a verbal nature, and demonstrating sensitivity to context. The work we present here does not attempt to directly combine diverse linguistic features at the service of a specific language understanding task. Instead, we propose to use such features to aid in the translation of NL into a structured representation that relies, by comparison, on a very simple and uniform type system, from which complex NL understanding tasks become easier, and that is of general applicability to a diversity of such tasks, while remaining fully readable and understandable by humans. Furthermore, it defines a system of knowledge representation in itself, one that is directly focused on meaning instead of grammar. \paragraph{Text mining.} We have already covered in the previous section the most commonly used text mining approaches, while emphasizing their relative lack of sophistication in understanding text meaning. The need for such sophistication is all the more pressing in the social sciences. On the one hand, qualitative social science methods of text analysis do not scale to the enormous datasets that are now available. On the other hand, quantitative approaches allow for types of analysis that are enriching and complementary to qualitative research, yet may simplify the processing so extensively that it hinders their adoption by scholars accustomed to the refinement of qualitative approaches. And the more sophisticated NLP techniques become, the further they tend to be from being usable for large-scale text analysis purposes. Nevertheless, modern NLP systems are fast and accurate enough to form a starting point for more advanced computer-supported analysis in a CSS context, and they enable approaches that are substantially more sophisticated than the text mining state of the art discussed above.
Yet, the results of such systems may seem relatively simplistic compared to human-level understanding of natural language. The literature already features some works that attempt to go beyond language models based on word distributions (such as bags of words, co-occurrence clusters, or so-called ``topics'') or triplets. For instance, Statement Map~\cite{murakami2010statement} is aimed at mining the various viewpoints expressed around a topic of interest on the web. Here a notion of claim is employed. A statement provided by the user is compared against statements from a corpus of text extracted from various web sources. Text alignment techniques are used to match statements that are likely to refer to the same issue. A machine learning model trained over NLP-annotated chunks of text classifies pairs of claims as ``agreement'', ``conflict'', ``confinement'' or ``evidence''. More broadly, the subfield of argumentation mining~\cite{lippi2016argumentation} also makes extensive use of machine learning and statistical methods to extract portions of text corresponding to claims, arguments and premises. These approaches generally rely on surface linguistic features; there is, however, an increasing trend of dealing with structured and relational data. Already in 2008, \cite{van2008parsing} proposed a system to extract binary semantic relationships from Dutch newspaper articles. A recent work~\cite{ruiz2016more} presents a system aimed at analysing claims in the context of climate negotiations. It leverages dependency parse trees and general ontologies~\cite{staab2010handbook} to extract tuples of the form $\langle\text{actor}, \text{predicate}, \text{negotiation\_point}\rangle$, where the actors are stakeholders (e.g., countries), the predicates express agreement, opposition or neutrality, and the negotiation point is identified by a chunk of text.
Similarly, in another recent work~\cite{van2017clause}, parse trees are used to automatically extract source-subject-predicate clauses in the context of news reporting on the 2008--2009 Gaza war, and used to show differences in citation and framing patterns between U.S. and Chinese sources. These works help demonstrate the feasibility of using parse trees and other modern NLP techniques to identify viewpoints and extract more structured claims from text. While a step forward from pure bag-of-words analysis, they still leave out a considerable amount of information contained in natural language texts, namely by relying on topic detection, on pre-defined categories, or on working purely with source-subject-predicate clauses. We propose to introduce a more sophisticated language model, where all entities participating in a statement are identified, where entities can be described as combinations of other entities, and where statements can be entities themselves, allowing for claims about claims, or even claims about claims about claims. The formal backbone of this model consists of an extended type of hypergraph that is both recursive and directed, thus generalizing semantic graphs and inducing powerful representation capabilities. \section{Semantic hypergraphs -- structure and syntax} \label{sec:hypergraphs} \subsection{Structure} The SH model is essentially a recursive, ordered hypergraph that makes the structure contained in natural language (NL) explicit. On the one hand, NL is recursive, allowing for concepts constructed from other concepts as well as statements about statements; on the other hand, it can express $n$-ary relationships. We will see how a hypergraphic formalism provides a satisfactory structure for NL constructs.
While a graph $G = (V,E)$ is based on a vertex set $V$ and an edge set $E\subset V\times V$ describing dyadic connections, a \emph{hypergraph}~\cite{battiston2020networks,berge1984hypergraphs} generalizes such a structure by allowing $n$-ary connections. In other words, it can be defined as $H = (V, E)$, where $V$ is again a vertex set yet $E$ is a set of hyperedges $(e_i)_{i\in\{1,\ldots,M\}}$ connecting an arbitrary number of vertices. Formally, $e_i = \{v_1, \ldots, v_n\} \in E \subseteq \mathcal{P}(V)$. We further generalize hypergraphs in two ways: hyperedges may be ordered~\cite{eslahchi2007some} and recursive~\cite{iord-hype}. Ordering entails that the position in which a vertex participates in the hyperedge is relevant (as is the case with directed graphs). Recursivity means that hyperedges can participate as vertices in other hyperedges. The corresponding hypergraph may be defined as $H=(V,E)$ with $E\subset\mathcal{E}_V$, where $\mathcal{E}_V$ is the recursive set of all possible hyperedges generated by~$V$: $\mathcal{E}_V=\left\{(e_i)_{i\in\{1,\ldots,n\}}\,|\,n\in\mathbb{N},\forall i\in\{1,\ldots,n\}, e_i \in V \cup \mathcal{E}_V\right\}$. In this sense, $V$ configures a set of irreducible hyperedges of size one, i.e., atomic hyperedges, which we also denote as \emph{atoms}, similarly to semantic graphs. From here on, we simply call these recursive ordered hyperedges ``hyperedges'', or just ``edges'', and we denote the corresponding hypergraph as a ``semantic hypergraph''. \newcommand{\T}[1]{{\small {\sf #1}}} \newcommand{\Q}[1]{\begin{center}\T{#1}\end{center}} \newcommand{\QC}[1]{\vspace{-.12em}\Q{#1}\vspace{-.12em}} \renewcommand{\QC}[1]{\Q{#1}} Let us consider a simple example, based on a set $V$ made of four atoms: the noun ``\T{(berlin)}'', the verb ``\T{(is)}'', the adverb ``\T{(very)}'' and the adjective ``\T{(nice)}''. They may act as building blocks for both hyperedges ``\T{(is berlin nice)}'' and ``\T{(very nice)}''.
These structures can be nested further: the hyperedge ``\T{(is berlin (very nice))}'' represents the sentence ``Berlin is very nice''. It illustrates a basic form of recursivity. \newcommand{\h}[1]{\T{(#1)}} \subsection{Syntax} In a general sense, the hyperedge is the fundamental unifying construct that carries information within the SH formalism. We further introduce the notion of hyperedge \emph{types}, which simply describe the type of construct that some hyperedge represents: for instance, concepts, predicates or relationships, as in the above examples --- respectively \T{(berlin)}, \T{(is)} and \T{(is berlin nice)}. We extensively detail hyperedge types and their role in the next subsections. For now, it is enough to know that predicates, in particular, belong to a larger family of types that are crucial for the construction of hyperedges and that we call \emph{connectors}. In this regard, semantic hypergraphs rely on a syntactic rule that is both simple and universal: the first element in a non-atomic hyperedge must be a connector. In effect, a hyperedge represents information by combining other (inner) hyperedges that represent information. The purpose of the connector is to specify \emph{in which sense} inner hyperedges are connected. Naturally, it can be followed by one or more hyperedges which play the role of arguments with respect to the connector. These arguments, if they are not atoms, must themselves start with a connector, in a recursive fashion. We illustrate this on the hyperedge \h{is berlin (very nice)}: here, \h{is} is a predicate playing the role of connector while \h{berlin} and \h{very nice} are arguments of the initial hyperedge. \h{berlin} is an atomic hyperedge, while \h{very nice} is a hyperedge made of two elements: the connector, \h{very}, an atomic hyperedge, and an argument, \h{nice}, also an atomic hyperedge. Both cannot be decomposed further.
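To make the recursive structure concrete, a minimal sketch (ours, not part of any SH reference implementation) encodes atoms as Python strings and non-atomic hyperedges as nested tuples whose first element is the connector:

```python
# A minimal sketch of SH hyperedges as nested Python tuples (an assumed
# encoding, for illustration only): atoms are strings, and every
# non-atomic hyperedge is a tuple whose first element is the connector.

def is_atom(edge):
    """Atomic hyperedges are irreducible; here they are simply strings."""
    return isinstance(edge, str)

def is_wellformed(edge):
    """Check the single universal syntactic rule: a non-atomic hyperedge
    must contain a connector followed by at least one argument, and all
    inner hyperedges must themselves be well-formed (recursively)."""
    if is_atom(edge):
        return True
    return len(edge) >= 2 and all(is_wellformed(e) for e in edge)

def to_str(edge):
    """Render a hyperedge in the paper's parenthesised notation."""
    if is_atom(edge):
        return edge
    return "(" + " ".join(to_str(e) for e in edge) + ")"

# "Berlin is very nice" as a recursive, ordered hyperedge:
berlin_is_nice = ("is", "berlin", ("very", "nice"))
print(to_str(berlin_is_nice))  # (is berlin (very nice))
```

Ordering matters (tuples, not sets) and recursion is direct (tuples inside tuples), matching the two generalizations of plain hypergraphs introduced above.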
\smallskip Readers who are familiar with Lisp will likely have noticed that hyperedges are isomorphic to \emph{S-expressions}~\cite{mccarthy1960recursive}. This is not purely accidental. Lisp is very close to $\lambda$-calculus, a formal and minimalist model of computation based on function abstraction and application. The first item of an S-expression specifies a function, the following ones its arguments. One can think of a function as an association between objects. Although hyperedges do not specify computations, connectors are similar to functions at a very abstract level, in that they define associations. The concepts of ``race to space'' and ``race in space'' are both associated with the concepts ``race'' and ``space'', but the combination of these two concepts yields different meanings by application of either the connector ``in'' or ``to''. For this reason, $\lambda$-calculus has also been applied to dependency parse trees in the realm of question-answering systems~\cite{reddy2016transforming}. \subsection{Types}\label{sec:types} We now describe a type system that further clarifies the role each entity plays in a hyperedge. In all, we distinguish 8 types, the smallest set we could find that appears to cover virtually all possible information representation roles cataloged in the Universal Dependencies. We first present the types that atoms may have and discuss their use in constructing higher-order entities. We then show how hyperedge types are recursively inferable from the types of the connector and subsequent arguments.
\setlength{\tabcolsep}{4.5pt} \renewcommand{\checkmark}{$\times$} \begin{table*} \centering \begin{tabularx}{\linewidth}{ >{\sf}c l X >{\sf\small}l c c } \toprule \bf{Code} & \textbf{Type} & \textbf{Purpose} & \bf{Example} & \textbf{Atom} & \textbf{Non-atom}\\ \midrule C & concept & Define atomic concepts & \x{apple/C} & \checkmark & \checkmark \medskip\\ \rowcolor[gray]{.95} P & predicate & Build relations & (\x{is/P} berlin/C nice/C) & \checkmark & \checkmark \\ \rowcolor[gray]{.95} M & modifier & Modify a concept, predicate, modifier, trigger & (\x{red/M} shoes/C) & \checkmark & \checkmark \\ \rowcolor[gray]{.95} B & builder & Build concepts from concepts & (\x{of/B} capital/C germany/C)\quad\quad& \checkmark & \\ \rowcolor[gray]{.95} T & trigger & Build specifications & (\x{in/T} 1994/C) & \checkmark & \\ \rowcolor[gray]{.95} J & conjunction & Define sequences of concepts or relations & (\x{and/J} meat/C potatoes/C) & \checkmark & \medskip\\ R & relation & Express facts, statements, questions, orders,\ldots & \x{(is/P berlin/C nice/C)} & & \checkmark \medskip\\ S & specifier & Relation specification (e.g. condition, time,\ldots) & \x{(in/T 1976/C)} & & \checkmark \\ \bottomrule \end{tabularx} \caption{Hyperedge types with use purposes and examples. Connector types are emphasized with a gray background. The rightmost columns specify whether each type may be encountered in atomic or non-atomic hyperedges.} \label{tab:types} \end{table*} \paragraph{Atomic concepts.} The first, simplest and most fundamental role that atoms can play is that of a \emph{concept}. This corresponds to concepts that can be expressed as a single word in the target language, for example ``apple''; they are labeled by this human-readable string, as could be guessed from the previous subsection. This defines an eponymous type, ``concept''.
The nomenclature we propose further indicates the type of an atom by appending a more machine-oriented code after this label and a slash (\T{/}). For concepts, this code is ``\T{C}'': \Q{(apple/C)} As we shall see, these machine-oriented codes remove ambiguity and facilitate automatic inference and computation. The full list of types as well as their codes and purposes can be seen in table~\ref{tab:types}. \paragraph{Connectors.} The second and last role that atoms can play is that of connector. There are five types of connectors, each one with a specific function that relates to the construction of specific types of hyperedges. The most straightforward connector is the {\em predicate}, whose code is ``\T{P}''. It is used to define relations, which are frequently statements. Let us revisit a previous example with types: \Q{(\x{is/P} berlin/C nice/C)} The predicate \h{is/P} both establishes that this hyperedge is a relation between the entities following it, and gives meaning to the relation. This is isomorphic to typical knowledge graphs~\cite{allen1982s,sowa2014principles}, where \h{berlin} and \h{nice} would be connected by an edge labeled with \h{is}. \medskip The \emph{modifier} type (``\T{M}'') applies to one (and only one) existing hyperedge and defines a new hyperedge of the same type. In practice, as the name indicates, it \emph{modifies things} and can be applied to concepts, predicates or other modifiers, and also to triggers, a type that we will subsequently address. For concepts, a typical case is adjectivation, e.g.: \Q{(\x{nice/M} shoes/C)} Note here that ``nice'' is considered a modifier, whereas it was a concept in the previous case: this is due to the fact that \h{nice/M} and \h{nice/C} refer to two distinct atoms which share the same human-readable label, ``nice''. To illustrate modification of predicates, let us revisit a previous example, but suppose that we declare that Berlin is not nice.
Then we can apply a modifier to the predicate, such as \h{not/M}, so that: \QC{((\x{not/M} is/P) berlin/C nice/C)} Finally, modifiers may modify other modifiers: \QC{((\x{very/M} nice/M) shoes/C)} The \emph{builder} type (``\T{B}'') combines several concepts to create a new one. For example, the atomic concepts \h{capital/C} and \h{germany/C} can be combined with the builder atom \h{of/B} to produce the concept of ``capital of Germany'': \QC{(\x{of/B} capital/C germany/C)} A very common structure in English and many other languages is that of the compound noun, e.g., ``guitar player'' or ``Barack Obama''. To represent these cases, we introduce a special builder atom that we call \T{(+/B)}. Unlike what we have seen so far, this is an atom that does not correspond to any word; it indicates that a concept is formed by the compound of its arguments, and is necessary to render such compound structures. The previous examples can be represented respectively as \T{(+/B guitar/C player/C)} and \T{(+/B barack/C obama/C)}. \medskip {\em Conjunctions} (``\T{J}''), like the English grammatical construct of the same name, join or coordinate concepts or relations: \Q{(\x{and/J} meat/C potatoes/C)\\ (\x{but/J} (likes/P mary/C meat/C) (hates/P potatoes/C))} \noindent We also introduce a special conjunction symbol, \T{(:/J)}, to denote implicit sequences of related concepts.
For example, the phrase ``Freud, the famous psychiatrist'' would be represented as: \Q{(:/J freud/C (the/M (famous/M psychiatrist/C)))} \medskip The remaining case, {\em triggers} (``\T{T}''), concerns additional specifications of a relationship, for example conditional (``We go \x{if} it rains.''), temporal (``John and Mary traveled to the North Pole \x{in} 2015''), local (``Pablo opened a bar \x{in} Spain''), etc.: \Q{(opened/P pablo/C (a/M bar/C) (\x{in/T} spain/C))} \paragraph{Hyperedge type inference.} Atomic types are entirely covered by these six types, of which three exclusively concern atoms (builders, triggers and conjunctions). We already hinted at the fact that non-atomic hyperedges also have types. These are implicit and inferable from the types of the connector and its arguments. Given, for example, that \h{germany/C} is an atom of type concept (\T{C}), the hyperedge \h{of/B capital/C germany/C} is also a concept, and this can be inferred from the fact that its connector is of type builder (\T{B}). Builders need to be followed by at least two concepts. Modifiers (\T{M}) only accept one argument, and the hyperedge in which they participate has the type of the single argument of the modifier. For example, the hyperedge \h{northern/M germany/C} is a concept (\T{C}), and \h{not/M is/P} is a predicate (\T{P}). Table~\ref{tab:type-inference} lists all type inference rules and their respective requirements. They also induce syntactic constraints which close the SH type system. We may now introduce the last two types of our type system, relation (\T{R}) and specifier (\T{S}), which only concern non-atomic hyperedges: they are always defined as the result of a composition of hyperedges. \emph{Relations} are typically used to state some fact (even though they can also be used to represent questions, orders and other constructs). \h{is/P berlin/C nice/C} is an obvious example of a relation. In our context, relations thus turn out to be a crucial hyperedge type.
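The recursive inference just described can be sketched in a few lines of Python (our illustration, not the authors' implementation), with atoms encoded as ``label/TYPE'' strings and non-atomic hyperedges as tuples starting with the connector:

```python
# A sketch of recursive hyperedge type inference (ours, for illustration).
# Rules: (M x) -> x, (B C C+) -> C, (T [CR]) -> S, (P [CRS]+) -> R,
# (J x y+) -> x. Argument-role suffixes such as ".am" are ignored by
# taking only the first character of the type code.

def edge_type(edge):
    if isinstance(edge, str):                   # atom, e.g. "capital/C"
        return edge.split("/")[1][0]
    conn = edge_type(edge[0])                   # connector may be non-atomic
    args = [edge_type(a) for a in edge[1:]]
    if conn == "M" and len(args) == 1:          # modifier keeps the type
        return args[0]
    if conn == "B" and len(args) >= 2 and set(args) == {"C"}:
        return "C"                              # builder makes a concept
    if conn == "T" and len(args) == 1 and args[0] in ("C", "R"):
        return "S"                              # trigger makes a specifier
    if conn == "P" and args and all(t in ("C", "R", "S") for t in args):
        return "R"                              # predicate makes a relation
    if conn == "J" and len(args) >= 2:          # conjunction keeps the type
        return args[0]
    raise ValueError(f"no inference rule applies to {edge!r}")

print(edge_type(("of/B", "capital/C", "germany/C")))          # C
print(edge_type(("is/P", "berlin/C", ("very/M", "nice/C"))))  # R
print(edge_type((("not/M", "is/P"), "berlin/C", "nice/C")))   # R
```

Because the connector itself is typed recursively, a modified predicate such as \T{(not/M is/P)} is correctly treated as a predicate when it heads a relation.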
{\em Specifiers} are types that play a more peripheral role, in that they are supplemental to relations. Specifiers are produced by triggers. For example, the trigger \T{(in/T)} can be used to construct the specification \h{in/T 1976/C}. Specifications, as the name implies, add precision to relations, \hbox{e.g.,} when, where, why or in which case something happened. \begin{table} \centering\small \begin{tabular}{ >{\sf}l lc >{\sf}c } \toprule \normalsize\bf{Element types} &\multicolumn{1}{c}{\normalsize$\rightarrow$} & \normalsize\textbf{Resulting type} \\ \midrule (M \; x) && x \\ (B \; C \; C+) && C \\ (T \; [CR]) && S \\ (P \; [CRS]+) && R \\ (J \; x\; y'+) && x \\ \bottomrule \end{tabular} \caption{Type inference rules. We adopt the notation of regular expressions: the symbol $+$ is used to denote one or more entities with the type that precedes it, while square brackets indicate several possibilities (for instance, \T{[CR]+} means ``at least one of either \T{C} or \T{R}''). \T{x} means any type: \T{(M x)} is of type \T{x}.} \label{tab:type-inference} \end{table} \subsection{Argument roles}\label{sec:argroles} We introduce a last notion that we employ to make meaning more explicit: \emph{argument roles} for builders and predicates. They are represented as sequences of characters that indicate the roles of the respective arguments following such connectors. \paragraph{Concept builders.} Given a concept hyperedge, a key issue is that of inferring its \emph{main concept}, \hbox{i.e.} the concept that can be assumed to be its hypernym. Beyond the simple case of atoms, concept hyperedges may only be formed by connectors that are either modifiers or builders. When the connector is a modifier, finding the hypernym is trivial. When the connector is a builder, it is often possible to infer the main concept among the arguments. There are only two possible roles: ``main'' (denoted by \T{m}) and ``auxiliary'' (denoted by \T{a}).
For example: \QC{(+/B.am tennis/C ball/C)} The argument role annotation ``\T{.am}'' indicates that \T{ball/C} is the main concept in the construct, meaning that \T{(+/B.am tennis/C ball/C)} is a sort of \T{ball/C} --- the main concept is a hypernym of the whole construct. With compound nouns (\h{+/B} builder), we simply make use of part-of-speech and dependency labels to infer the main concept. Another common situation where finding roles is quite trivial is the case of builders derived from a preposition, such as \h{of/B}, which express a relationship between the arguments. For example, in \h{of/B.ma capital/C germany/C}, the main concept is \h{capital/C}. ``Capital of Germany'' is thus a type of capital. In English and many other languages, it is always the case that the main concept is the first argument after a builder derived from a preposition. \paragraph{Predicates.} Predicates can induce specific roles that the following arguments play in a relation. The need for argument roles in relations arises from cases where the role cannot be inferred from the type of the argument. For example, the same concept could participate in a relation as a subject or as an object. Consider for instance the sentence ``John gave Mary a flower'', represented as: \Q{(gave/P.sio john/C mary/C (a/M flower/C))} In this relation, the argument role string ``\T{sio}'' indicates that the three arguments following the predicate respectively play the roles of subject, indirect object and direct object. This relation involves three concepts united by the predicate that represents the act of giving, but without the argument roles, who the giver is, who the receiver is, and what object is being given would remain undefined. Relying on ordering alone would not be enough, both due to the flexibility of NL in this regard and to the fact that the presence of a certain role after a predicate is often optional. There are admittedly more possible roles than for builders.
They are shown in table~\ref{tab:pred-argument-roles}. Once again, this set is the result of an effort to cover all grammatical cases listed in the Universal Dependencies in the most succinct way possible. Most of them (in fact, the first $8$ in the table) directly correspond to generic grammatical roles of the same name. Of these, the first $6$ are by far the most frequent. \emph{Specifications} were already discussed in the previous subsection (\ref{sec:types}), and their purpose as hyperedges coincides with their role when participating in relations: adding a specification to the relation (temporal, conditional, etc.). Finally, a \emph{relative relation} is a nested relation that acts as a building block of the outer relation containing it. We will make extensive use of this later, to identify what is being claimed by a given actor. \begin{table} \centering \begin{tabular}{ l >{\sf}c } \toprule \textbf{Role} & \textbf{Code} \\ \midrule active subject & s \\ passive subject & p \\ agent (passive) & a \\ subject complement & c \\ direct object & o \\ indirect object & i \\ parataxis & t \\ interjection & j \\ specification & x \\ relative relation & r \\ \bottomrule \end{tabular} \caption{Predicate argument roles.} \label{tab:pred-argument-roles} \end{table} \section{Translating NL into SH} \label{sec:text2hyper} We now discuss the crucial task of translating NL into the SH representation. This can, of course, be framed as a conventional supervised ML task. A difficulty arises from the lack of training data: SH is a novel representation, and the effort necessary to annotate a sufficiently large amount of text to train an NL to SH translator from scratch is far from trivial. We were thus motivated to look for an alternative, and we hypothesized that it would be much easier to infer the SH representation from grammatically-enriched representations than from raw text. We will show that this indeed appears to be the case. We propose a two-staged approach.
The first stage ($\alpha$-stage) is a classifier that assigns a type to each token in a given sentence. The second ($\beta$-stage) is a search tree-based algorithm that recursively applies the rules in table~\ref{tab:type-inference} to impose the hypergraphic structure on the sequence of atoms produced by the $\alpha$-stage. This restricts the ML part of the process to the $\alpha$-stage, reducing it to a simple classification problem. \subsection{$\alpha$-stage} The classification categories correspond to the set of the six atomic types shown in table~\ref{tab:types}, with one additional category for tokens that should be discarded (typically punctuation). The open question is the feature set. Operating on the previous assumption regarding grammatical annotation, we use spaCy\footnote{An open-source library for NLP in Python which includes convolutional neural network models for tagging, parsing and named entity recognition in multiple languages. A relatively recent comparison of ten popular syntactic parsers found spaCy to be the fastest, with an accuracy within 1\% of the best one~\cite{choi2015depends}.} -- a popular NLP tool -- to generate appropriate features. Using this library we perform segmentation of text into sentences, followed by tokenization and annotation of tokens with parts-of-speech, dependency labels and named entity categories. In short, we deploy the full arsenal of off-the-shelf NLP tasks that comes with spaCy. In this work we restrict ourselves to the English language and use the ``en\_core\_web\_lg-2.0.0'' language model. We collected randomly selected texts in English from five categories: fiction books (5 books, 87738 sentences), non-fiction books (5 books, 51597 sentences), news (10 articles, 532 sentences), scientific articles (10 articles, 3467 sentences) and Wikipedia articles (10 articles, 2888 sentences).
From these we selected 60 random sentences in each category, for a total of 300 sentences comprising 6936 tokens. An interactive computer script was used to aid in the process of manually annotating each word of these sentences with one of the $\alpha$ categories, i.e., atomic types. These annotations were used to train a random forest classifier. For this purpose we employed the one included with \emph{scikit-learn} (version 0.23.2), a widely used ML package. We did not perform any hyperparameter tuning, and used the default parameters set by this version of the package. There is possibly room for improvement here, but for the aims of this work we found it preferable to avoid introducing potentially confounding factors that could arise from hyperparameter optimization efforts. \paragraph{Feature definition.} We consider an initial set that encompasses all the potentially useful features that we could derive from a standard NLP pipeline such as spaCy. As mentioned, it provides dependency parse labels (referred to, from now on, as \emph{DEP}) and named entity recognition categories (\emph{NER}). Parts-of-speech are provided in two flavors: the more extensive OntoNotes tag set (version 5) from the Penn Treebank, which we refer to as \emph{TAG}, and the simpler Universal Dependencies (UD) part-of-speech tag set (version 2), which we refer to as \emph{POS}. Accuracy values for each of these elements are reported in \cite{spaccuracy} to be $0.97$ for the fine-grained part-of-speech tagger (i.e., guessing the OntoNotes tag), $0.92$ for unlabeled dependencies (i.e., guessing the head of each token) and $0.90$ for labeled dependencies (i.e., both the head and the label). We can also consider the most common words in the corpus: we take as features the sets of the 15, 25, 50 and 100 most common words (\emph{WORD15}, \emph{WORD25} and so on).
Further features indicate if a token corresponds to some punctuation symbol, if it is at the root of the dependency parse tree, if it has left or right children in this same tree, and finally its shape in terms of capitalization (e.g. the shape of the word ``Alice'' is Xxxxx). Then, we establish three types of relative tokens: the ones that appear directly after or before the current one in the sentence, if they exist, and the one that is the parent of the current one in the dependency parse tree, if it exists. For each of these tokens, all the previous features are also applied (for example, the UD part-of-speech of the dependency head is \emph{HPOS}, and the part-of-speech of the subsequent word in the sentence is \emph{POS\_AFTER}). We thus have $33$ candidate features in total. All of these features are categorical, and we employ one-hot encoding to feed them to the decision trees. \paragraph{Feature selection.} We tested two approaches for feature selection: a very simple genetic algorithm (GA) and iterative ablation. For the GA, we encoded features as bits (acting as switches to specify which features belong to the set). We used mutation only (bit-flip with a probability of $.05$), a population of $100$, and parent selection through a tournament of $3$. Search stopped at $100$ generations without improvement. The fitness function was the mean of $5$ evaluations of the accuracy of the feature set, each with a distinct and randomly selected split of the training / testing data. This eventually resulted in a set of $15$ features: \{\emph{WORD25}, \emph{TAG}, \emph{DEP}, \emph{HWORD25}, \emph{HWORD50}, \emph{HWORD100}, \emph{HPOS}, \emph{HDEP}, \emph{IS\_ROOT}, \emph{NER}, \emph{WORD\_BEFORE15}, \emph{WORD\_BEFORE100}, \emph{WORD\_AFTER15}, \emph{PUNCT\_BEFORE}, \emph{POS\_AFTER}\}.
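The mutation-only GA described above can be sketched as follows. This is an illustrative reconstruction, not our experimental code: the fitness function is a mock (the real one is the mean classifier accuracy over $5$ random splits), and the population size and stall limit are reduced from the $100$/$100$ used in the text to keep the example fast.

```python
# Sketch of the mutation-only genetic algorithm for feature selection.
# Feature sets are bit strings. The fitness function is a stand-in for
# the real one (mean classifier accuracy over 5 random data splits),
# and POP_SIZE / MAX_STALL are smaller than the 100/100 used in the text.
import random

random.seed(0)
N_FEATURES = 33
USEFUL = {1, 4, 7, 12, 20}    # hypothetical "truly useful" features
POP_SIZE, MAX_STALL = 30, 20

def fitness(bits):
    # Mock: reward useful features, slightly penalize set size.
    return sum(bits[i] for i in USEFUL) - 0.01 * sum(bits)

def mutate(bits, p=0.05):
    return [b ^ 1 if random.random() < p else b for b in bits]

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
       for _ in range(POP_SIZE)]
best, stall = max(pop, key=fitness), 0
while stall < MAX_STALL:  # stop after MAX_STALL generations w/o improvement
    pop = [mutate(tournament(pop)) for _ in range(POP_SIZE)]
    gen_best = max(pop, key=fitness)
    if fitness(gen_best) > fitness(best):
        best, stall = gen_best, 0
    else:
        stall += 1

selected = sorted(i for i, b in enumerate(best) if b)
print(selected)
```

With the real fitness function, `selected` would be the bit-encoded feature set reported in the text.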
The iterative ablation procedure starts with the set of all candidate features, and $100$ runs of the learning algorithm are performed, again with each run randomly split into two-thirds for training and one-third for testing. This provides us with a set of $100$ accuracy measurements. The process is then repeated, excluding one feature at a time. The feature whose presence most degrades mean accuracy is excluded. If no feature has a negative impact on accuracy, then the one with the highest p-value (according to the non-parametric Kolmogorov–Smirnov test) above a threshold is excluded. The procedure is repeated, ablating one feature at a time, until no remaining feature fulfills either of the previous two criteria. We performed this procedure with threshold p-values of $.05$ and $.005$. The first left us with a set of five features: F5 = \{\emph{TAG}, \emph{DEP}, \emph{HDEP}, \emph{HPOS}, \emph{POS\_AFTER}\}; the second with three features: F3 = \{\emph{TAG}, \emph{DEP}, \emph{HDEP}\}. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{figs/alpha-features-categories.png} \vspace{-2em} \caption{\emph{Left:} accuracy of the $\alpha$-classifier, comparing several feature sets; \emph{all} includes all features, \emph{GA} is a feature set obtained with a genetic algorithm, {F3} is the outcome of iterative ablation with $p < .005$ and {F5} with $p < .05$. \emph{Right:} accuracy by source text category using {F5}.} \label{fig:alpha-features-categories} \end{figure*} The results of these experiments are shown on the left side of figure~\ref{fig:alpha-features-categories}. As can be seen, all three selected feature sets outperform the set of all features. Interestingly, {F5} is significantly better than {F3}, even at $p < .005$. The accuracy of the GA set falls between that of {F3} and {F5}.
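The ablation loop can be sketched as follows; this is a simplification, with a mock accuracy function standing in for the $100$ classifier runs and without the Kolmogorov–Smirnov tie-breaking step. The feature names and their contributions are hypothetical.

```python
# Sketch of iterative feature ablation. The real procedure compares 100
# accuracy runs per candidate set and uses a Kolmogorov-Smirnov test for
# ties; this simplified version ablates on mean accuracy only, with a
# mock evaluation function standing in for the classifier.

FEATURES = ["TAG", "DEP", "HDEP", "HPOS", "POS_AFTER", "NER", "SHAPE"]

# Hypothetical contribution of each feature; negative = harmful noise.
CONTRIB = {"TAG": .30, "DEP": .25, "HDEP": .15, "HPOS": .10,
           "POS_AFTER": .05, "NER": -.02, "SHAPE": -.04}

def mean_accuracy(features):
    return 0.5 + sum(CONTRIB[f] for f in features) / 2

def ablate(features):
    features = list(features)
    while len(features) > 1:
        base = mean_accuracy(features)
        # Find the feature whose removal most improves mean accuracy.
        scores = {f: mean_accuracy([g for g in features if g != f])
                  for f in features}
        worst = max(scores, key=scores.get)
        if scores[worst] <= base:
            break  # every remaining feature helps; stop ablating
        features.remove(worst)
    return features

print(ablate(FEATURES))
```

Under this mock, the two noise features are ablated and the survivors happen to coincide with {F5}; with real accuracy measurements the stopping criterion is statistical rather than exact.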
We performed these experiments not only as an endeavor to achieve acceptable accuracy for the experiments that follow, but also to obtain empirical evidence regarding the relationship between SH types and traditional linguistic features. We can conclude that SH does not correspond to some trivial mapping of any single linguistic feature. For subsequent experiments we will use {F5}, given that it has the best accuracy and still uses a relatively small number of features -- something that can make a difference regarding the computational effort needed to parse large quantities of text. It is interesting to notice that {F3} still leads to a higher accuracy than the set of all features and, having only three features, such a classifier could feasibly be implemented in a purely programmatic way. A completely human-understandable classification tree could be produced, and also implemented in a very efficient way, sacrificing relatively little in terms of accuracy. On the right side of figure~\ref{fig:alpha-features-categories} we present the accuracy of the classifier by text category, using {F5}. Here, it is interesting to note that the best performing category (fiction) and also one of the second-best (wikipedia, which is not significantly different from news) are out-of-corpus for the training set of the ML model of the underlying linguistic features. It is remarkable that the accuracies that we achieve are comparable to, and may even surpass, the values reported by spaCy (see above). In other words, this suggests that, far from accumulating errors across the various processing steps, our $\alpha$-stage appears to even correct upstream errors. It is conceivable that more features become relevant if a larger number of exotic cases becomes available through larger training corpora. It is also conceivable that larger windows (beyond just the previous and next token) become relevant with larger datasets and more sophisticated ML approaches.
Such considerations are beyond the scope of this work. \subsection{$\beta$-stage} The $\beta$-stage transforms the sequence of atoms of the original sentence, each typed by the $\alpha$-stage, into a semantic hyperedge that reflects the meaning of the sentence and respects the SH syntactic rules. In practice, this operation amounts to a bottom-up process that aggregates the deeper structures of the sentence into increasingly complex hyperedges, by recursively combining them until only a final, well-formed semantic hyperedge is left. \SetKwProg{Fn}{Function}{}{end}\SetKwFunction{FBeta}{BetaTransformation} \SetKwProg{Fn}{Function}{}{end}\SetKwFunction{FApply}{ApplyPattern}% \SetAlgoLongEnd \newcommand{\forcond}{$i=0$ \KwTo $n$} \newcommand\doubleplus{+\kern-1.3ex+\kern0.8ex} \begin{algorithm}[!th]\small \DontPrintSemicolon \Fn(){\FApply{seq, pos, pat}}{ \KwData{A sequence of edges $seq$, a position in the sequence $pos$ and a pattern $pat$} \KwResult{A sequence of edges with the initial edges replaced by a single one, if they match the pattern.} \uIf{$pat$ matches $seq$ at $pos$}{ $edge \longleftarrow$ reorder matching elements of $seq$ to align with $pat$ \; $seq' \longleftarrow$ matching part of $seq$ replaced with $edge$ \; \KwRet $seq'$ } \Else{ \KwRet $\varnothing$ } } \; \Fn(){\FBeta{seq}}{ \KwData{A sequence of edges $seq$} \KwResult{An edge $e$} \If{$|seq| = 1$}{ \KwRet $seq[0]$ \; } $heu_{best} \longleftarrow -\infty$ \; $seq_{best} \longleftarrow \varnothing$ \; \For{$pos=1$ \KwTo $|seq|$ }{ \For{$pat \in Patterns$}{ $seq' \longleftarrow$ \FApply{seq, pos, pat} \; $heu \longleftarrow h(seq, pos, pat)$ \; \If{$seq' \ne \varnothing \land heu > heu_{best}$}{ $heu_{best} \longleftarrow heu$ \; $seq_{best} \longleftarrow seq'$ \; } } } \uIf{$seq_{best} \ne \varnothing$}{ \KwRet \FBeta{$seq_{best}$} \; } \Else{ \KwRet \FBeta{(\h{:/J} $\doubleplus$ $seq[:2]$ ) $\doubleplus$ $seq[2:]$} \; } } \label{algo:beta_stage} \caption{The $\beta$ transformation recursively
applies the patterns from the type inference rules until only the final hyperedge is left.} \end{algorithm} The process for this transformation is formalized in algorithm~\ref{algo:beta_stage}. Let us nonetheless explain in plain words how $\beta$ iteratively constructs a hyperedge, which need not be a proper semantic hyperedge except at the final step. The process indeed starts with an initial hyperedge that is the simple sequence of typed atoms of the original sentence. At each step, the elements of the currently-formed hyperedge are scanned from left to right to look for a sub-sequence of types that matches the list on the left side of the type inference rules of table~\ref{tab:type-inference}, taken as unordered patterns i.e., up to any reordering. For instance, ``capital of Germany'' may have been parsed by $\alpha$ as a typed sub-sequence ``\T{capital/C, of/B, germany/C}'', which then matches the second pattern \T{(B C C)}. It may then be rearranged by putting the connector in first position and preserving the order of the remainder of the hyperedge i.e., ``\T{(of/B capital/C germany/C)}'', which conforms to the second inference rule of table~\ref{tab:type-inference}. Note that, in practice, we also restrict the second and fifth patterns, i.e. the builder and conjunction patterns, to the minimum number of two arguments: respectively \T{(B C C)} and \T{(J $x$ $x'$)}. We find that this restriction fits NL more naturally and thus leads to more correct parses. Further tasks of knowledge inference might later introduce builder- and conjunction-based structures with more arguments. We complement the patterns with one rule that corresponds to the special connector \T{(+/B)}. This extra rule is admittedly needed to transform implicit builders \T{(C C)} into \T{(+/B C C)}. If only one sub-sequence matches, it is transformed into a sub-hyperedge by application of the rule.
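The core operation of matching a type-inference pattern up to reordering, and moving the connector to the front, can be sketched as follows (a toy helper, not Graphbrain's actual implementation; atoms are represented as (label, type) pairs):

```python
# Sketch of matching a type-inference pattern (e.g. "(B C C)") against a
# sub-sequence of typed atoms, up to reordering: the connector is moved
# to the front and the remaining elements keep their original order.
from collections import Counter

def apply_pattern(subseq, pattern):
    """subseq: list of (atom, type) pairs; pattern: list of type codes,
    connector first. Returns the reordered edge, or None on no match."""
    if Counter(t for _, t in subseq) != Counter(pattern):
        return None  # the multiset of types does not match
    connector = next(e for e in subseq if e[1] == pattern[0])
    rest = [e for e in subseq if e is not connector]
    return [connector] + rest

seq = [("capital", "C"), ("of", "B"), ("germany", "C")]
print(apply_pattern(seq, ["B", "C", "C"]))
```

On the ``capital of Germany'' example this yields the reordered edge with \T{of/B} as connector, as in the text.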
If two or more sub-sequences match, the $\beta$-stage needs to decide which one to choose, and then proceeds as if only one sub-sequence had matched. For this decision, we use a heuristic function (function $h$ in algorithm~\ref{algo:beta_stage}). This heuristic function relies on the grammatical structure of the sentence given by the dependency tree. Our hypothesis is that grammatically connected edges are more likely to belong to the same higher-order edge, so the first criterion of $h$ is to always assign a higher score to sub-sequences where all items are directly connected in the dependency tree. By ``directly connected in the dependency tree'', we mean that every hyperedge contains one atom/token that is the head or the child of at least one atom/token in another hyperedge, and that any hyperedge can be reached from any other by following such grammatical links. In case of a tie, the heuristic function then prefers the sub-sequence that contains the deepest atom/token in the dependency tree -- again assuming a correlation with SH depth, and thus respecting the bottom-up process of the $\beta$-transformation. Finally, if there is still a tie, rules are applied in the order of priority expressed in table \ref{tab:type-inference}, which is empirically organized by decreasing order of the depth at which each respective structure tends to appear in hyperedges. The special rule for \T{(+/B)} is assigned the highest priority. If no sub-sequence matches, the first two items in the sequence are connected by prepending the special conjunction \T{(:/J)}, which conveys the most generic and abstract meaning: ``these two things are related''.
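The three criteria of $h$ can be sketched as a lexicographically compared score tuple. This is an illustrative reconstruction, not the library's exact code: the connectivity test is simplified to a pairwise check (the full definition also requires mutual reachability), and rule priority is encoded so that a larger number wins.

```python
# Sketch of the tie-breaking heuristic h for competing pattern matches.
# Scores are tuples compared lexicographically: (1) are all matched
# tokens directly connected in the dependency tree, (2) depth of the
# deepest matched token, (3) rule priority (larger value wins here).

def depth(token, head_of):
    d = 0
    while head_of[token] != token:  # the root is its own head
        token = head_of[token]
        d += 1
    return d

def connected(tokens, head_of):
    # Pairwise approximation: every token must be the head or the
    # child of at least one other token in the set.
    s = set(tokens)
    def linked(t):
        return (head_of[t] in s and head_of[t] != t) or \
               any(head_of[u] == t for u in s if u != t)
    return all(linked(t) for t in s)

def h(tokens, head_of, rule_priority):
    return (connected(tokens, head_of),
            max(depth(t, head_of) for t in tokens),
            rule_priority)

# Toy dependency tree for "the capital of germany" ("capital" is root).
head_of = {"the": "capital", "capital": "capital",
           "of": "capital", "germany": "of"}
m1 = h({"capital", "of", "germany"}, head_of, rule_priority=2)
m2 = h({"the", "capital"}, head_of, rule_priority=5)
print(m1 > m2)
```

Both candidate matches are grammatically connected, so the deeper one wins despite its lower rule priority, mirroring the choice made in the worked example below.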
This captures cases often found in natural language, such as: ``A new era: quantum computation is here.'', which translates to: \Q{(:/J (a/M (new/M era/C)) (is/P (quantum/M computation/C) here/C))} If the resulting hyperedge entirely conforms to one of the type inference rules, the process stops successfully, as it has managed to form a recursively correct semantic hyperedge. Otherwise, the process is reiterated on the newly-formed hyperedge. The process is thus guaranteed to converge on a syntactically valid hyperedge, but is of course not guaranteed to produce the most desirable or correct representation. However, we experimentally verify below that, given a correct classification from the $\alpha$-stage and a correct dependency parse tree, this process consistently leads to the construction of an SH that correctly conveys the meaning of the original sentence. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{figs/NL2SH.png} \caption{\textbf{(a)} Dependency parse tree with dependency labels (green) and fine-grained part-of-speech tags (red). \textbf{(b)} $\alpha$-stage classification of atom types. \textbf{(c)} $\beta$-stage structuring of the sentence by iterative application of the patterns from table~\ref{tab:type-inference}. A non-selected pattern is greyed out.} \label{fig:NL2SH} \end{figure*} Let us first illustrate the $\beta$-stage in figure~\ref{fig:NL2SH}, which provides one example of an entire parsing process (using the \emph{F3} feature set for simplicity). In figure~\ref{fig:NL2SH}(c), the recursive application of $\beta$-transformations to an initial sequence of atoms can be followed. In the first step, we can see that the sequence \T{(the/M, capital/C)} matches the pattern \T{(M C)}, and the sequence \T{(capital/C, of/B, germany/C)} matches the pattern \T{(B C C+)}. We thus rely on the above-mentioned heuristic function, which causes \T{(of/B capital/C germany/C)} to be preferred to \T{(the/M capital/C)}.
The reader can verify that selecting the latter at this stage would lead to a dead-end. The rest of the SH construction is straightforward. \begin{table*}[!t] \centering \begin{tabular}{ l | c c c c | c } \toprule \textbf{Category} & \textbf{Correct} & \textbf{Defect} & \textbf{Wrong} & \textbf{Total} & \textbf{Mean relative defect size} \\ \midrule Non-fiction & 87 (.87) & 8 (.08) & 5 (.05) & 100 & .188 \\ Wikipedia & 81 (.81) & 12 (.12) & 7 (.07) & 100 & .190 \\ News & 77 (.77) & 16 (.16) & 7 (.07) & 100 & .147 \\ Fiction & 79 (.79) & 5 (.05) & 16 (.16) & 100 & .140 \\ Science & 71 (.71) & 19 (.19) & 10 (.10) & 100 & .290 \\ \textbf{All} & \textbf{395 (.79)} & \textbf{60 (.12)} & \textbf{45 (.09)} & \textbf{500} & \textbf{.206} \\ \bottomrule \end{tabular} \caption{Global NL to SH parser evaluation.} \label{tab:parser-evaluation} \end{table*} \paragraph{Argument roles.} Now that the core of the translation of NL into SH has been specified, assigning the argument roles introduced in Section~\ref{sec:argroles} amounts to a trivial translation from the dependency labels. Sometimes, however, the parser may fail to determine an argument's role, and thus classify it as \emph{unknown} (which we code as ``\T{?}''). \subsection{Validation of $\alpha$ and $\beta$} To test the accuracy of the complete translation from NL to SH, we randomly selected $100$ new sentences for each text category, which were used neither for training nor for testing of the $\alpha$-classifier. We establish three categories: completely correct hyperedges, hyperedges with some defect and completely wrong hyperedges. A hyperedge is considered to have a defect if the overall meaning of the sentence is preserved, but some subedge is malformed. Let us consider a real example from our dataset.
The sentence: ``The scientists – who are part of a multi-year International Shelf Study Expedition – stressed their findings are preliminary.'' was parsed as: \Q{(stressed/P (:/J (the/M scientists/C) (are/P who/C (of/B part/C (a/M (+/B (+/B (+/B multi/C -/C) year/C) (+/B international/C (+/B shelf/C (+/B study/C expedition/C)))))))) (are/P (their/M findings/C) preliminary/C))} The hyperedge preserves most of the meaning of the sentence, but the concept \T{(+/B (+/B multi/C -/C) year/C)} is not correctly formed. Either \T{(-/B multi/C year/C)} or \T{(multi/M year/C)} would be much preferable. However, this partially defective parse is still likely to be useful in the methods that we will discuss in the following sections. We also see how different type assignments by the $\alpha$-classifier can still result, in practice, in correct hyperedges at the end. We can also use this example to illustrate another metric that we employ in this evaluation: the relative defect size. This is simply the ratio of the size of the defective part to the size of the entire hyperedge, where size is measured in total number of atoms (at any depth). A wrong hyperedge is one where the meaning of the sentence is completely lost. For example, consider what would happen if, in the above case, ``stressed'' was classified as a concept instead of a predicate. This also serves to illustrate that there is a complex relationship between $\alpha$-classifier accuracy and overall parser accuracy. Some mis-classifications at the $\alpha$-stage can still allow for a completely correct parse, while others can lead to catastrophic failures or just minor defects. Nonetheless, we observe on this sample of 500 sentences that a correct $\alpha$ classification and dependency parse tree always lead to the construction of an SH that preserves the meaning of the sentence.
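Counting atoms at any depth is a simple recursion. The following sketch computes a relative defect size on a toy nested-tuple encoding of hyperedges; the edges and the resulting ratio are illustrative, not taken from our dataset.

```python
# Relative defect size: atoms in the defective sub-edge divided by atoms
# in the whole hyperedge, counting atoms at any depth. Hyperedges are
# represented as nested tuples of atom strings (a toy encoding).

def n_atoms(edge):
    if isinstance(edge, str):
        return 1  # an atom counts as one, at any depth
    return sum(n_atoms(e) for e in edge)

whole = ("is/P", ("the/M", "sky/C"), "blue/C")
defect = ("the/M", "sky/C")
print(n_atoms(defect) / n_atoms(whole))  # -> 0.5
```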
By contrast, a badly-structured dependency tree appears to have a significant negative impact on the functioning of $\beta$, through the heuristic function. If this result generalizes, it suggests that, for a given accuracy of the dependency parsing module, increasing the quality of the NL to SH translation principally relies on improving $\alpha$ and the heuristic function. We show the results of this evaluation in table~\ref{tab:parser-evaluation}. It is interesting to notice that ``non-fiction'' is one of the worst performing categories for the $\alpha$-classifier, but ends up being the best one overall. Likewise, ``fiction'' is the best category at the $\alpha$-stage but ends up being the second worst here. Unsurprisingly, ``fiction'' sentences tend to be richer in figures of speech and other complexities and ambiguities that lead to a higher rate of catastrophic failure. Conversely, ``non-fiction'' is the category with the most straightforward sentences. In the ``science'' category, the difficulties are more related to a variety of unusual technical terms and notations, which lead more often to defects than to catastrophic failures. Overall, we see that a high percentage of the texts are correctly translated to SH, even in the worst-performing categories. We only work with English in this article, but supporting a new language essentially requires generating a relatively small number of $\alpha$-classifier training examples. The rest of the process is currently language-agnostic: even though more research would be needed to explore this issue thoroughly, the fact that we cover all of the Universal Dependency cases gives us good reasons to believe that NL to SH translation should be applicable to any language. The software package that we released to implement all the ideas discussed in this article includes the interactive script that we used to perform this annotation task ourselves.
\section{Knowledge Inference and Extraction} \label{sec:knowledge-inference} We are finally in a position to explore the use of SH extracted from open text to perform language understanding tasks. First we will discuss how we naturally generalize SH to represent patterns and inference rules, and then we will manually define three such rules to perform conjunction resolution: a generic task that will be used in every subsequent practical application discussed in this work, and that is very likely useful for myriad knowledge inference and extraction tasks. Then we will discuss how we systematized the process of discovering useful patterns, and how we use this process to discover $5$ patterns for the purpose of Open Information Extraction (OIE), a task with an abundant computer science literature on inferring relations from free text. We will demonstrate the expressive power of SH by showing that these simple patterns produce competitive results when compared with a number of contemporary systems targeted at OIE, using an external benchmark. \subsection{A pattern-matching language} From a text corpus, the NL to SH translation stage attempts to convert each sentence into a hyperedge. In practice, all resulting hyperedges are stored in a proper SH database. From there, language understanding tasks may be performed in the form of inferences, which we define using SH notation with the help of patterns. Broadly, inference rules and patterns may themselves be written as hyperedges. \paragraph{Variables and patterns.} We introduce the concept of \emph{variable}. A variable simply indicates a placeholder that can match a hyperedge, and can then be used to refer to that hyperedge. Unlike the other atoms we have seen so far, variables are represented in capital letters. With variables we can define \emph{patterns}, which can then be matched against other hyperedges.
For example, consider the pattern: \QC{(is/P.sc SUBJ PROP/C)} \noindent which matches, for example: \QC{(is/P.sc (the/M sky/C) blue/C)} \newcommand{\entail}{\:\mathbf{\vdash}\:} Notice that the variable PROP includes a type code, while the predicate ``\T{is/P.sc}'' features argument roles. If type codes or argument roles are added to variables, this simply means that a hyperedge only matches this variable if the types and argument roles also match. We introduce a few more notation details for argument roles in patterns. In practice, we often allow the various pattern elements to appear in any order, denoting the order-indifferent roles between curly brackets ``\{ \: \}''. The arguments can appear in any order, as long as all of the pattern roles are present. For instance, \QC{(is/P.\{sc\} SUBJ PROP/C)} would cover both \T{(is/P.sc SUBJ PROP/C)} and \T{(is/P.cs PROP/C SUBJ)}. Furthermore, it is possible to specify the optional presence of certain arguments by listing them as ``...'', which simply indicates that any number (including zero) of hyperedges may be present at that point. In a pattern, if the connector indicates argument roles, then any further arguments may be present, unless indicated otherwise. In case connectors do not indicate argument roles, ``...'' can thus be used to indicate that more hyperedges at a certain point are permissible. For instance, \QC{(is/P.\{sc\} SUBJ PROP/C ...)} matches ``The sky is blue today'' and ``Today the sky is blue''. It is also possible to denote an undefined sequence of hyperedges with a variable name by using ``X...'', which then refers to the same specific sequence everywhere it is used. Alternative argument roles, of which any one can be matched once, are represented inside square brackets. For example, ``\T{[sp]}'' matches either a subject or a passive subject, once. Finally, it is possible to forbid the presence of arguments with a certain role by listing them after ``-''.
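A minimal matcher for patterns with typed variables can be sketched as follows. It is a toy reconstruction on a nested-tuple encoding, with a deliberately simplified typing function for non-atomic edges, and it does not cover role sets, ``...'' or ``X...''.

```python
# Toy matcher for SH patterns with typed variables, on a nested-tuple
# encoding with atoms like "is/P.sc". Variables are upper-case names,
# optionally typed ("PROP/C"). Not Graphbrain's real matcher.

def edge_type(edge):
    if isinstance(edge, str):
        return edge.split("/")[1][0] if "/" in edge else None
    # Simplified: builder/modifier connectors yield concepts, else relations.
    return "C" if edge_type(edge[0]) in ("B", "M") else "R"

def is_var(x):
    return isinstance(x, str) and x.split("/")[0].isupper()

def match(pattern, edge, bindings=None):
    bindings = dict(bindings or {})
    if is_var(pattern):
        name, *code = pattern.split("/")
        if code and edge_type(edge) != code[0][0]:
            return None  # the variable is typed and the type differs
        bindings[name] = edge
        return bindings
    if isinstance(pattern, str) or isinstance(edge, str):
        return bindings if pattern == edge else None
    if len(pattern) != len(edge):
        return None
    for p, e in zip(pattern, edge):
        bindings = match(p, e, bindings)
        if bindings is None:
            return None
    return bindings

pattern = ("is/P.sc", "SUBJ", "PROP/C")
edge = ("is/P.sc", ("the/M", "sky/C"), "blue/C")
print(match(pattern, edge))
```

Applied to the example above, the matcher binds SUBJ to the concept \T{(the/M sky/C)} and PROP to \T{blue/C}.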
For example, in the pattern ``\T{(PRED/P.-sp X...)}'', arguments with roles ``\T{s}'' or ``\T{p}'' are not allowed; it would however match \T{(play/P.o football/C)}. \begin{table*}[!h] \begin{center} \begin{tabular}{ c >{\sf}l c } \toprule \textbf{\#} & \bf{Rule} & \textbf{Inferences} \\ \midrule 1 & $\big($*/J \, ... \, CONCEPT/C \, ...$\big)$ $\:\mathbf{\vdash}\:$ (CONCEPT/C) & 147 \\ \rowcolor[gray]{.95}{}2 & $\Big($*/J \, ... \, $\big($PRED/P.\{[sp]\} \, X \, Y...$\big)$ \, ...$\Big)$ $\:\mathbf{\vdash}\:$ $\big($PRED/P.\{[sp]\} \, X \,Y...$\big)$& 63 \\ 3 & $\Big($*/J \, $\big($*/P.\{[sp]\}\, SUBJ/*\, ...$\big)$ \, ... \, $\big($PRED/P.-sp \, X...$\big)$ \, ...$\Big)$ $\:\mathbf{\vdash}\:$ $\big($PRED/P.\{s\} \, SUBJ/* \, X...$\big)$ & 10\\ \bottomrule \end{tabular} \caption{Conjunction resolution rules and the respective number of inferred hyperedges from the OIE benchmark.} \label{tab:conjunction-rules} \end{center} \end{table*} \paragraph{Rules.} We may now define \emph{rules}, which we denote with a pair of patterns separated by the symbol ``$\vdash$'', as in ``\T{PATTERN1~$\vdash$~PATTERN2}''. This notation indicates that any hyperedge that contains a hyperedge matching the left-hand-side \T{PATTERN1} triggers the creation of a new hyperedge, consisting of the matching portion rewritten according to the right-hand-side \T{PATTERN2}. In a sense, these are replacement rules, except that the original hyperedge is preserved. For example, consider the rule: \Q{(is/P.sc SUBJ PROP/C) $\:\mathbf{\vdash}\:$ (property/P PROP)} \noindent which, applied to the above example, produces the inference: \Q{(property/P blue/C)} In essence, a rule makes it possible to populate an SH database with new knowledge that is inferred from NL yet need not, in turn, correspond to an actual sentence.
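Given the bindings produced by matching the left-hand pattern, applying a rule amounts to instantiating the right-hand pattern with them, as in this toy sketch (the bindings are assumed to come from a matcher; the names and nested-tuple encoding are illustrative):

```python
# Sketch of applying an inference rule: the left-hand pattern yields
# variable bindings, and the right-hand pattern is instantiated with
# them to produce a new hyperedge (the original edge is preserved).

def instantiate(pattern, bindings):
    if isinstance(pattern, str):
        name = pattern.split("/")[0]
        # Upper-case names are variables; everything else is kept as-is.
        return bindings.get(name, pattern) if name.isupper() else pattern
    return tuple(instantiate(p, bindings) for p in pattern)

# (is/P.sc SUBJ PROP/C)  |-  (property/P PROP)
bindings = {"SUBJ": ("the/M", "sky/C"), "PROP": "blue/C"}
rhs = ("property/P", "PROP")
print(instantiate(rhs, bindings))  # -> ('property/P', 'blue/C')
```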
\subsection{Conjunction Decomposition} \label{sec:conjres} Decomposing relations that include conjunctions into simpler relations not only facilitates OIE tasks, but is also of general usefulness in knowledge inference tasks. We show the three rules that we developed manually to perform conjunction decomposition in table~\ref{tab:conjunction-rules}. The first rule concerns conjunctions of concepts, such as ``Mary likes books and flowers.'': \QC{(likes/P.so mary/C (and/J books/C flowers/C))} from which we generate one relation for each element: \Q{(likes/P.so mary/C books/C)\\ (likes/P.so mary/C flowers/C)} The second rule concerns conjunctions of relations with explicit subjects. For example, ``Mary likes astronomy and Alice plays football.'', which is parsed as: \QC{(and/J (likes/P.so mary/C astronomy/C) (plays/P.so alice/C football/C))} \noindent is decomposed into ``Mary likes astronomy.'' and ``Alice plays football.'' i.e.,: \Q{(likes/P.so mary/C astronomy/C)\\ (plays/P.so alice/C football/C)} The third rule makes the subject explicit in situations such as ``Mary likes astronomy and plays football.'' i.e., \QC{(and/J (likes/P.so mary/C astronomy/C) (plays/P.o football/C))} \noindent inferring that ``Mary'' is the subject from the first relation in the conjunction and applying it to the second one: ``Mary plays football.'', resulting in: \Q{(likes/P.so mary/C astronomy/C)\\ (plays/P.so mary/C football/C)} \noindent In practice, this is done by remembering the last argument with the subject (\emph{s}) role and applying it to the following relations that miss a subject. \smallskip Naturally, these rules leave considerable room for improvement, as they make no distinctions for conjunctions with special meanings (e.g. ``but'', ``instead'', etc.). Nevertheless, we will see that they are already quite successful in the tasks that we subsequently present.
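The first rule can be sketched as follows on a toy nested-tuple encoding (only flat conjunctions whose connector is an atom are handled here; the real rules are expressed as SH patterns, as in the table):

```python
# Toy sketch of conjunction-resolution rule 1: a relation with a
# (and/J x y ...) argument is decomposed into one relation per conjunct.

def decompose_conjunctions(edge):
    for i, arg in enumerate(edge):
        if (isinstance(arg, tuple) and isinstance(arg[0], str)
                and arg[0].split("/")[1].startswith("J")):
            # Replace the conjunction with each of its conjuncts in turn.
            return [edge[:i] + (c,) + edge[i + 1:] for c in arg[1:]]
    return [edge]  # no conjunction: the relation is kept as-is

edge = ("likes/P.so", "mary/C", ("and/J", "books/C", "flowers/C"))
for e in decompose_conjunctions(edge):
    print(e)
```

On the ``Mary likes books and flowers'' example this produces the two simple relations shown above.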
\subsection{Pattern Learning} \label{subsec:pattern-learning} \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{figs/pattern-learning.pdf} \caption{Pattern learning template and example with two passes. At the end of the second pass, the pattern \h{says/P.\{sr\} ACTOR CLAIM} is confirmed to work.} \label{fig:pattern-learning} \end{figure*} With the help of hypergraphs extracted from corpora of open text, it becomes possible to define a systematic process for the discovery of patterns that enable knowledge extraction, with a human-in-the-loop. On the left side of figure~\ref{fig:pattern-learning} we present the general template for such a process. We will use this template to illustrate how we discovered the patterns both for the OpenIE task and for the claim and conflict analysis that will be presented in section~\ref{subsec:openie}, and also how more sophisticated and automated pattern learning systems can be created. The rest of figure~\ref{fig:pattern-learning} is a step-by-step illustration of a simple example aimed at generating patterns to detect claims. In step (1), a hyperedge is selected from the hypergraph generated from a given training corpus. It can be drawn at random, or by any other criterion adapted to the pattern-learning task at hand. For instance, if we want to learn patterns typical of claims, we can first focus on hyperedges starting with a predicate (\T{*/P}) and, more precisely, the most frequent ones among them, as a strategy to attain good coverage. We observe that ``\T{says/P}'' is such a predicate and, based on this, we draw \T{(says/P.sr alice/C (are/P.sc dogs/C nice/C))} i.e., ``Alice says dogs are nice''. Other pattern-learning tasks may naturally require different selection criteria, and in the next subsection~(\ref{subsec:openie}) we will provide another and more general example. A human is then presented with this hyperedge in step (2), and asked to manually perform an inference.
The inference consists of selecting sub-edges and assigning them to variables according to some schema. In the example shown, the aim of the inference is to detect actors making claims, and thus to identify which ``\T{ACTOR}'' makes which ``\T{CLAIM}''. Step (3) consists of generalizing the original hyperedge into a pattern with the help of the variable assignments. The idea is to create the most generic pattern that fits the human inference. The matching parts are replaced by the corresponding variables, and the remaining sub-edges are replaced by wildcards, while maintaining type annotations. Then the process goes back to the whole training hypergraph. Step (4) consists of finding hyperedges that match the pattern, so that they can be presented to the human for validation. Then, in step (5), the human can simply indicate if these further matches are valid or not. When a match is not valid, the process can return to step (3) and use this information to refine the pattern: what is now the most general version of the pattern that does not match the previously detected incorrect case? In our work, we used a conventional Jupyter notebook directly accessing the programmatic interface of Graphbrain\footnote{https://github.com/graphbrain/graphbrain}, the open-source library that we developed to implement all of the ideas discussed in this work. Refinements at step (3) were performed manually, testing hypotheses about the most general version of a pattern by simply asking Graphbrain to check how many actual hyperedges match each attempt, and choosing the one with the highest value. This process could obviously be automated with a search tree that attempts a number of substitutions -- more or less generic wildcards, lemma matching, atom root matching, structural matching, etc. -- at each step, and then uses the training hypergraph to empirically test them and discover the most generic version that matches the known positive cases while rejecting the negative ones.
Then, even more sophisticated possibilities arise, such as the integration with general knowledge databases (e.g. specifying that a variable must be a concept of type ``country'', or that a predicate must be a synonym of a certain action), or the use of auxiliary methods such as semantic proximity with word2vec-like embeddings, or hybridization with ML algorithms. Another possible improvement is in the domain of software and user-interface development, allowing less technical users to provide inferences and feedback -- a user can be invited to directly select parts of a sentence and assign them to meanings (e.g. ``actor'', ``claim'', ``aggressor'', etc.), without having to see or interact with hypergraphic notation. Such refinements are beyond the scope of this work, but it is our hope to lay the foundations for these and other possibilities. \subsection{Open Information Extraction} \label{subsec:openie} We will now show how $5$ simple hyperedge patterns are sufficient to rank first in a recent Open Information Extraction (OIE) benchmark~\cite{DBLP:conf/acllaw/LechelleGL19}. In fact, one pattern alone is sufficient to surpass a majority of the systems in that benchmark. We recognize the limitations of such benchmarks and do not claim to have the best performing OIE system, nor are we singly focused on this application. Instead, we are interested in providing empirical evidence for the expressive power of SH patterns for the general purpose of knowledge extraction. To discover OIE patterns, we took advantage of the Wikipedia part of the open text corpus that we developed to train and validate the parser, discussed in section~\ref{sec:text2hyper} -- Wikipedia content is a naturally rich source of factual assertions in NL. The resulting hypergraph contains 62528 top-level hyperedges. We then employed a simple process of generalization to transform hyperedges into abstract patterns.
It consists of replacing each element of a hyperedge with its corresponding type-annotated wildcard, for example:

\Q{(is/P.sc aragorn/C (of/B.ma king/C gondor/C))}

\noindent becomes:

\Q{(*/P.sc */C */C)}

The process can continue recursively, further expanding subedges, for instance:

\Q{(*/P.sc */C (*/B.ma */C */C))}

These expansions have to conform to Table~\ref{tab:type-inference} (taken in reverse order, i.e., from a resulting type to its antecedent), which generally leaves a small number of possibilities. We further introduced some restrictions in the generation of such patterns, to focus on simple patterns with likely relevance to our task. We limited recursive expansion to depth $2$, and only considered relations of sizes 3 or 4 -- smaller ones cannot contain triplets, while useful larger ones are likely to contain the triplet (with optional extension) within a core that generalizes to patterns with no more than 4 elements. We excluded conjunctions (these are previously decomposed, as explained in subsection~\ref{sec:conjres}), and modifiers. These latter connectors could certainly be used to improve the OIE task, but entail more semantic complexity, and we are more interested in simplicity at this stage. We will focus on modifiers in a subsequent section. Finally, we allow for the special builder \h{+/B} to be explicitly included in the generalized patterns, given that compound concepts trivially correspond to ontological relationships of OIE interest, e.g.: ``Film director David Lynch'' implies that David Lynch is a film director. We considered the 50 most common such patterns, which we present in the appendix, in table~\ref{tab:wikipedia-patterns}. Following the process described in section~\ref{subsec:pattern-learning} and the annotation guidelines document provided with the benchmark~\cite{benchmarkguidelines}, we found that 36 of these patterns can be transformed into valid OIE relationships, given correct parses.
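This wildcard generalization is mechanical enough to sketch in a few lines. Below, hyperedges are represented as nested tuples of atom strings; the helper names and the reduced result-type table are our own illustrative assumptions, not Graphbrain's actual API.

```python
# Result type of a non-atomic edge, keyed by the main type of its connector
# (an assumed simplification of the full type-inference table).
RESULT_TYPE = {'B': 'C', 'P': 'R', 'T': 'S', 'J': 'C', 'M': 'C'}

def full_type(atom):
    """Full type annotation of an atom, e.g. 'is/P.sc' -> 'P.sc'."""
    return atom.split('/')[1]

def main_type(edge):
    """Main type of an atom, or of the result of a non-atomic edge."""
    if isinstance(edge, str):
        return full_type(edge)[0]
    return RESULT_TYPE[main_type(edge[0])]

def generalize(edge, depth):
    """Replace every element by its type-annotated wildcard, expanding
    sub-edges recursively while `depth` allows it."""
    if isinstance(edge, str):
        return '*/' + full_type(edge)
    if depth == 0:
        return '*/' + main_type(edge)   # whole sub-edge becomes one wildcard
    return tuple(generalize(e, depth - 1) for e in edge)
```

With the Aragorn example above, expanding to depth $1$ yields \T{(*/P.sc */C */C)}, and depth $2$ yields \T{(*/P.sc */C (*/B.ma */C */C))}.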
We then compressed these 36 patterns into the most general ones that: (a) imply one or more of the original patterns, and (b) do not imply patterns found to be incorrect in some way. For example, the two patterns:

\Q{(+/B.\{ma\} (ARG1/C...) (ARG2/C...))\\{(+/B.\{mm\} (ARG1/C...) (ARG2/C...))}}

\noindent are compressed to:

\Q{(+/B.\{m[ma]\} */C */C)}

Such a compression/generalization process could feasibly be algorithmically automated.

\begin{table*}[!th]
\centering\small
\begin{tabular}{ c >{\sf}l c c c }
\toprule
\rm\textbf{\#} & \bf{Pattern} & \textbf{Extractions} & \textbf{F1 {\footnotesize(cumulative)}} & \rm\textbf{Rank} \\
\midrule
1 & (REL/P.\{[sp][cora]x\} ARG1/C ARG2 ARG3...) & 107 & .265 & 4 \\
\rowcolor[gray]{.95}{}2 & (+/B.\{m[ma]\} (ARG1/C...) (ARG2/C...)) & 38 & .311 & 3 \\
3 & (REL1/P.\{sx\}-oc ARG1/C (REL2/T ARG2)) & 20 & .334 & 3 \\
\rowcolor[gray]{.95}{}4 & (REL1/P.\{px\} ARG1/C (REL2/T ARG2)) & 12 & .351 & 2 \\
5 & (REL1/P.\{sc\} ARG1/C (REL3/B REL2/C ARG2/C)) & 16 & .365 & 1 \\
\bottomrule
\end{tabular}
\caption{Open Information Extraction patterns, ordered by decreasing contribution to F1 (presented cumulatively). Ranks correspond to the rank achieved in the benchmark of Table \ref{tab:openie-performance} by using patterns up to the given line.}
\label{tab:openie-patterns}
\end{table*}

We thus arrived at the $5$ patterns which are shown in table~\ref{tab:openie-patterns}. The extracted variables imply the usual OIE tuples: \T{$\langle$REL, ARG1, ARG2, ARG3...$\rangle$}, with argument(s) \T{ARG3...} being optional. Naturally, we convert hyperedges to the actual text they correspond to before feeding them to the benchmark. In the absence of \T{REL}, the relationship ``is'' is assumed. In some cases (patterns 3, 4 and 5), notice also that \T{REL} is split into two or three variables: \T{REL1}, \T{REL2} and \T{REL3}. We just concatenate their textual representation in the order indicated by the variable names, with interleaving space characters.
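The assembly of the final OIE tuple from matched variables can be sketched as follows, assuming the variables have already been converted to their surface text (the function name is our own, for illustration):

```python
def oie_tuple(variables):
    """Build <REL, ARG1, ARG2, ...> from matched, text-converted variables.

    REL parts (REL1, REL2, ...) are concatenated in the order indicated by
    their names, with interleaving spaces; 'is' is assumed when REL is absent.
    """
    rels = [v for k, v in sorted(variables.items()) if k.startswith('REL')]
    args = [v for k, v in sorted(variables.items()) if k.startswith('ARG')]
    return tuple([' '.join(rels) or 'is'] + args)
```

The lexicographic sort on variable names gives the intended ordering for both the \T{ARG} and \T{REL} families.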
\medskip \begin{table*} \centering\small \begin{tabular}{ l | c c c | c c | c c c } \toprule \rm\textbf{System} & \rm\textbf{Extractions} & \rm\textbf{Matches} & \rm\textbf{Exact} & \rm\textbf{Prec. of} & \rm\textbf{Recall of} & \rm\textbf{Prec.} & \rm\textbf{Recall} & \rm\textbf{F1} \\ & & & \rm\textbf{matches} & \rm\textbf{matches} & \rm\textbf{matches} & & & \\ \midrule Semantic hypergraphs with 5 rules & 201 & 120 & 19 & .70 & \textbf{.93} & .416 & \textbf{.326} & \textbf{.365} \\ MinIE~\cite{gashteovski2017minie} & 252 & \textbf{134} & 10 & .75 & .83 & .400 & .323 & .358 \\ ClausIE~\cite{del2013clausie} & 223 & 121 & \textbf{24} & .74 & .84 & .401 & .298 & .342 \\ OpenIE 4~\cite{mausam2016open} & 101 & 74 & 5 & .68 & .84 & .501 & .182 & .267 \\ Semantic hypergraphs with 1 rule & 107 & 74 & 7 & .69 & .85 & .475 & .184 & .265 \\ Ollie~\cite{mausam2012open} & 145 & 74 & 8 & .73 & .81 & .347 & .175 & .239 \\ ReVerb~\cite{fader2011identifying} & 79 & 54 & 13 & \textbf{.83} & .77 & \textbf{.569} & .121 & .200 \\ Stanford~\cite{angeli2015leveraging} & 371 & 99 & 2 & .79 & .65 & .210 & .188 & .198 \\ PropS~\cite{stanovsky2016getting} & 184 & 69 & 0 & .59 & .80 & .222 & .162 & .187 \\ \bottomrule \end{tabular} \caption{Performance of OpenIE systems, ordered by descending F1. Bold figures indicate the best performing system for each category.} \label{tab:openie-performance} \end{table*} Notice that the first pattern is almost a tautology of the SH representation itself, producing triples where the first argument is the active or passive subject, the relation is the predicate, and the second argument is the direct or indirect object, or complement, or agent, with the optional argument being one of the specifications, if they exist. To illustrate with a real and straightforward example from the benchmark, consider the sentence: ``The population of the special wards is over 9 million people, with the total population of the prefecture exceeding 13 million''. 
It is parsed to:

\Q{(is/P.scx (of/B.ma (the/M population/C) (the/M (special/M wards/C))) ((over/M (9/M million/M)) people/C) (with/T (exceeding/P.so (of/B.ma (the/M (total/M population/C)) (the/M prefecture/C)) (13/M million/C))))}

\noindent It matches pattern $1$ with variables \T{REL\,=\,is/P.scx}; \T{ARG1\,=\,(of/B.ma (the/M population/C) (the/M (special/M wards/C)))}; \T{ARG2\,=\,((over/M (9/M million/M)) people/C)} and \T{ARG3\,=\,(with/T (exceeding/P.so (of/B.ma (the/M (total/M population/C)) (the/M prefecture/C)) (13/M million/C)))}, resulting in the extraction: \emph{$\langle$the population of the special wards, is, over 9 million people, with the total population of the prefecture exceeding 13 million$\rangle$}.

Pattern $2$ can also be seen as a direct consequence of SH representation, in this case inferring ontological relationships from the \h{+/B} builder structure. Cases where both arguments have the role ``\T{m}'' can be interpreted as two expressions of the same concept. Again, using a real example, the emphasized part of the sentence ``He is the younger brother of \emph{the prolific film composer Christophe Beck}'' was parsed as:

\Q{(+/B.mm (the/M (prolific/M (+/B.am film/C composer/C))) (+/B.am christophe/C beck/C))}

\noindent leading to the symmetrical extractions: \emph{$\langle$the prolific film composer, is, Christophe Beck$\rangle$} and \emph{$\langle$Christophe Beck, is, the prolific film composer$\rangle$}. We discuss below in section~\ref{sec:coref} how easy it is to further extract, from this point and thanks to the recursive hypergraphic structure, the relation \emph{$\langle$Christophe Beck, is, film composer$\rangle$}; for now however this is not needed for our comparison with the OIE benchmark.
For a non-symmetrical example with \h{+/B.ma}, let us consider the sentence: ``Finnish police reprimanded a man for traveling in a car boot to hide his meeting with \emph{Prime Minister Juha Sipila} during a government crisis last summer, saying this was breach of the traffic code'', with the emphasized concept parsed as: \Q{(+/B.am (+/B.am prime/C minister/C) (+/B.am juha/C sipila/C))} \noindent Using the same pattern, this leads to the single extraction: \emph{$\langle$Juha Sipila, is, Prime Minister$\rangle$}, while avoiding the potentially excessive generalization: \emph{$\langle$Prime Minister, is, Juha Sipila$\rangle$}. We do know that Juha Sipila is one Prime Minister, but not necessarily the only one in the context. The restriction requiring non-atomic edges in the arguments of this pattern is a simple mechanism to avoid too trivial, and potentially silly inferences, such as ``Barack is Obama''. As a final example, let us illustrate pattern $3$ with the sentence: ``Gonzales graduated from Crescent School in Toronto, Ontario, Canada'', parsed as: \Q{(graduated/P.sx gonzales/C (from/T (in/B.ma (+/B.am crescent/C school/C) (,/J toronto/C (,/J ontario/C canada/C)))))} \noindent and resulting in extractions where the first part of REL is extracted from the predicate and the second from the trigger: \emph{$\langle$Gonzales, graduated from, Crescent School in Toronto$\rangle$}. The previously discussed conjunction decomposition process leads to the further extractions: \emph{$\langle$Gonzales, graduated from, Crescent School in Ontario$\rangle$} and \emph{$\langle$Gonzales, graduated from, Crescent School in Canada$\rangle$}. We stop illustrating how these patterns apply here for the sake of succinctness, but hope to have sufficiently shown how they are generic and straightforward manipulations of the structures enabled by SH. \medskip Table~\ref{tab:openie-performance} shows the full benchmark, comparing the performance of our approach with seven other methods. 
SH outperformed all baseline systems in 24.6\% of the cases, which tend to consist of complicated combinations of conjunctions and prepositional phrases. For example, the sentence where the next best system is defeated by the highest margin is: ``A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers following his collaboration with Per Martin-L{\"o}f and Anders Martin-L{\"o}f.'' Furthermore, the mean number of words per sentence where SH is not the best is $21.2$, \hbox{vs.} $23.8$ where it is the best ($31.0$ when outperforming by a factor $\ge 1.5$), plausibly indicating an advantage with more complicated sentences.

\section{Computations on the hypergraph: concepts, ontologies and coreference resolution}
\label{sec:corefs}

We have shown how SH representation makes it possible to infer knowledge using simple symbolic rules. We will now address how it enables knowledge inference using probabilistic and heuristic rules. More specifically, we will show how to derive ontologies and perform coreference resolution among concepts. For example, we will show how automated methods can reach the conclusion that ``President Obama'' is a type of ``President'', or that ``Obama'', ``Barack Obama'' and ``President Obama'' refer to the same external entity, while ``Michelle Obama'' refers to another one. Before proceeding, let us consider implicit taxonomies.

\subsection{More about concepts and implicit taxonomies}
\label{subsec:taxonomies}

Hyponyms of a concept can be found by looking for hyperedges where the concept appears either as the main argument of a builder-defined concept or as the argument of a modifier-defined concept. It follows from these structures that the SH representation implicitly builds a taxonomy. More generally, we can talk of an implicit ontology. Beyond the taxonomical relationships that we described, the concepts that form a concept hyperedge are related to it in a non-specified fashion.
For example, we know that \T{(germany/C)} is related in an unspecified way to \T{(of/B.ma capital/C germany/C)}. Of course, this is not to say that a more specific relation cannot be inferred by further processing with other methods. Here we are simply highlighting the ontological relations that come ``for free'' with the hypergraphic representation. When parsing sentences to hyperedges, and taking advantage of another classical NLP task offered by the upstream package, we also store auxiliary hyperedges connecting every atom that corresponds to a word to the lemma of that word, with the help of a special connector ``\T{lemma/J}''. For instance:

\Q{(lemma/J saying/P say/P)}

In the next section we will make use of this, but it is easy to see how lemmas facilitate the inference of various types of correspondences, for example between singular and plural forms such as \h{apple/C} and \h{apples/C} with the help of \h{lemma/J apples/C apple/C}, and thus more sophisticated structural variations such as \h{+/B.am apple/C season/C} and \h{of/B.ma season/C apples/C}.

\subsection{Coreference resolution: co-occurrence graph}

\begin{figure*}[!th]
\centering
\includegraphics[width=\linewidth]{figs/coreference.png}
\caption{Example of coreference resolution. On the left panel we can see the co-occurrence graph and its components, identified by different colors and leading to corresponding coreference sets on the right panel. The probabilities for each coreference set are shown to their left, including the ratios of total degree of the set to total degree, used to compute them. Individual degrees are shown next to each edge. * indicates the assignment of the seed to one of the coreference sets. ** indicates the recursive nature of the process, with \h{+/B michelle/C obama/C} taking the role of seed in another instance of this coreference resolution process.}
\label{fig:coref}
\end{figure*}

A common but challenging task in NLP is that of coreference resolution, a classic disambiguation problem which consists in identifying different sequences of n-grams that refer to the same entity (such as ``Barack Obama'' and ``President Obama''). This is an old research topic~\cite{soon2001machine} that has been revived lately with modern machine learning methods~\cite{peters2018deep}. ML approaches such as deep learning require large training sets and tend to provide black box models, where precision/recall can be measured and improved upon, but the exact mechanisms by which the models operate remain opaque. Here we do not mean to provide a complete solution to this problem, but instead show that, for situations that are nevertheless common and useful, especially in the context of social science research, several cases of coreference resolution can be performed in a simpler and more understandable manner through the use of semantic hypergraphs. We will discuss in the following section several experimental results that we obtained on a dataset of several years of news headlines. This corpus is largely focused on political issues, and it is dominated by reports of actors of various types making claims or interacting with each other. These actors can be people, institutions, countries and so on. In our hypergraphic representation, such actors will very frequently be referred to by hyperedges forming compound nouns, with the use of the \T{(+/B)} connector, as discussed previously. In figure~\ref{fig:coref} we can see such a case: a number of compound concept edges with the main atomic concept \T{(obama/C)} refer to actors. How can we group them in sets, such that all the cases in a given set refer to the same entity?
Here, we start taking advantage of the hypergraph as a type of network, and of the analysis graphs that we can easily distill from the hypergraph. Semantic graph-based disambiguation has been extensively explored since the mid-2000s, especially emphasizing the importance of centrality and proximity in deciding which sense corresponds to a given word in a certain context, and semantic hypergraphs are no exception \citep{mihalcea2005unsupervised,navigli2007graph,agirre2009personalizing}. We can trivially traverse all the concepts in the hypergraph, finding the subset of concepts that play the role of main concept in the above mentioned compound concept constructs. For each of these \emph{seed concepts}, we can then attempt to find coreference relationships between the concepts they help build. In the figure, we see an example using the seed concept \T{(obama/C)}. On the right side of the figure, we see all the compound concepts containing the seed as the main concept (except for the ones marked with * and **). It is then possible to build a graph of all the auxiliary concepts that appear together with the seed. A connection between two concepts in this graph means that there is at least one compound concept in which they take part together. In the example, we can see that this graph has three maximal cliques, which we identified with different colors. We then apply this simple rule: two compound concepts are assumed to refer to the same entity if all of their auxiliary concepts belong to the same maximal clique. The intuition is that, if auxiliary concepts belong to a maximal clique, then they tend to be used interchangeably along with the seed, which indicates that they are very likely to refer to the same entity. We will show that this intuition is empirically confirmed in our corpus, from which the example in the figure was extracted.
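The clique rule can be made concrete with a short sketch. We represent each compound concept simply as a flat tuple of its atoms (ignoring nesting), and use a plain Bron--Kerbosch enumeration of maximal cliques; all names here are illustrative, not Graphbrain's API.

```python
from itertools import combinations

def cooccurrence_graph(compounds, seed):
    """Adjacency over auxiliary atoms that co-occur with `seed` in a compound."""
    adj = {}
    for comp in compounds:
        aux = [a for a in comp if a != seed]
        for a in aux:
            adj.setdefault(a, set())
        for a, b in combinations(aux, 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting)."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(r)
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk(set(), set(adj), set())
    return cliques

def coreference_sets(compounds, seed):
    """Group compounds whose auxiliary atoms all fall in one maximal clique."""
    cliques = maximal_cliques(cooccurrence_graph(compounds, seed))
    sets = [[] for _ in cliques]
    for comp in compounds:
        aux = {a for a in comp if a != seed}
        for i, clique in enumerate(cliques):
            if aux <= clique:   # first containing clique wins (a simplification)
                sets[i].append(comp)
                break
    return [s for s in sets if s]
```

On a toy version of the figure's example, compounds built from \T{barack/C}, \T{president/C} and \T{michelle/C} around the seed \T{obama/C} separate into a Barack set and a Michelle set, as in the figure.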
The co-occurrence graph method produces the coreference sets seen on the right of the figure, except for the items marked with * and **. As can be seen, it correctly groups several variations of hyperedges that refer to Barack Obama (president of the United States during most of the time period covered by our news corpus), and it correctly identifies a separate set referring to Michelle Obama, his wife. It can also be seen that it fails to identify that ``Mr. Obama'' is also likely to refer to Barack Obama. We will say more about this specific case when we discuss claim analysis, in the next section. But what about the seed concept itself, in this case \T{(obama/C)}? The co-occurrence method is not able to assign it to one of the sets. Here we employ another simple method, this time of a more probabilistic nature. Before tackling this method, we have to make a small detour to discuss the semantic hypergraph from a network analysis perspective.

\subsection{Simple hypergraph metrics}

In a conventional graph, it is common to talk of the degree of a vertex. This refers simply to the number of edges that include this vertex or, in other words, the number of other vertices that it is directly connected to (we assume here an undirected graph without self-loops). With a semantic hypergraph, such a measure is not so straightforward, given that an edge can have more than two participants, and that recursivity is permitted. Let us first define the set $D_e$, containing all the edges in which a given edge $e$ participates:

\begin{equation}
D_e = \{e_i|e_i \in E \land e \in e_i\}
\end{equation}

We define the degree of a hyperedge $e$ as:

\begin{equation}
d(e) = \sum_{e_i \in D_e} \big(|e_i| - 1\big)
\end{equation}

\noindent where $|e_i|$ denotes the number of arguments of edge $e_i$, its connector excluded. This is to say, the hypergraphic degree is the number of other edges to which a given edge is connected by outer hyperedges. It is intuitively equivalent to the conventional graph degree.
Another useful metric that we can define is the \emph{deep degree}, which considers edges connected by hyperedges not necessarily at the same level, but appearing recursively at any level of the connecting hyperedge. Let us consider the set $\Delta_e$, containing the edges in which $e$ participates at any level of recursion. This set is recursively defined, so we describe how to generate it in Algorithm~\ref{algo:generate_delta}.

\SetKwProg{Fn}{Function}{}{end}\SetKwFunction{FRecurs}{Generate$\Delta$}%
\SetAlgoLongEnd
\begin{algorithm}
\DontPrintSemicolon
\Fn(){\FRecurs{e}}{
\KwData{An edge $e$}
\KwResult{$\Delta_e$ neighborhood of edge $e$}
$\Delta_e \longleftarrow D_e$\;
\For{$e' \in D_e$}{
$\Delta' \longleftarrow$ \FRecurs{e'} \;
$\Delta_e \longleftarrow \Delta_e \cup \Delta'$\;
}
\KwRet $\Delta_e$ \;
}
\label{algo:generate_delta}
\caption{Generating the neighborhood $\Delta_e$ of an edge $e$.}
\end{algorithm}

\noindent We can now define the deep degree $\delta$ as:

\begin{equation}
\delta(e) = \sum_{e_i \in \Delta_e} \big(|e_i| - 1\big)
\end{equation}

To provide a more intuitive understanding of these metrics, let us consider the edge ``\T{(is/P berlin/C (of/B capital/C germany/C))}''. Let us also assume that no other edges exist in the hypergraph. In this case, the edges \T{(is/P)}, \T{(of/B capital/C germany/C)} and \T{(germany/C)} all have degree $d=1$, because they all participate in exactly one edge. The first two (\h{is/P} and \h{of/B capital/C germany/C}) also have deep degree $\delta=1$, but the latter \h{germany/C} has deep degree $\delta=2$, because not only does it participate directly in the edge \h{of/B capital/C germany/C}, but it also participates at a deeper level in the outer edge \h{is/P berlin/C (of/B capital/C germany/C)}. In other words, the higher deep degree of \h{germany/C} indicates that it plays an increased role as a \emph{building block} in other edges.
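These definitions can be transcribed directly, again using nested tuples for hyperedges, treating every sub-edge of an edge as a member of the hypergraph, and taking $|e_i|$ as the number of arguments of $e_i$ (its connector excluded), which makes the formulas agree with the worked example. The helper names are illustrative.

```python
def subedges(e):
    """Yield e and, recursively, all of its sub-edges (atoms included)."""
    yield e
    if not isinstance(e, str):
        for x in e:
            yield from subedges(x)

def D(e, hg):
    """Edges of the hypergraph in which e participates directly."""
    return [ei for ei in hg if not isinstance(ei, str) and e in ei]

def Delta(e, hg):
    """Edges in which e participates at any level (recursive closure of D)."""
    out = []
    for ei in D(e, hg):
        for ej in [ei] + Delta(ei, hg):
            if ej not in out:
                out.append(ej)
    return out

def arity(e):
    """|e|: number of arguments of an edge, its connector excluded."""
    return len(e) - 1

def degree(e, hg):
    return sum(arity(ei) - 1 for ei in D(e, hg))

def deep_degree(e, hg):
    return sum(arity(ei) - 1 for ei in Delta(e, hg))
```

On the Berlin example, this reproduces $d=1$ for \T{germany/C} and $\delta=2$, since \T{germany/C} appears in both the inner and the outer edge.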
\subsection{Coreference resolution: probabilistic seed assignment}\label{sec:coref}

Returning to figure~\ref{fig:coref}, each coreference set is labeled with a probability $p$, representing the chance that a given seed appears in one of its edges, if we were to uniformly enumerate all edges that rely on this seed. This constitutes a simple estimation of the probability of the seed by itself being used with a certain meaning, represented by the given coreference set. These probabilities are thus the ratio between the sum of the degrees of the edges in each coreference set and the total degree of all edges that include the seed, \hbox{i.e.} of all coreference sets. Two simple heuristics drive this step. One is that people will tend to use an ambiguous abbreviation of a concept when the popularity of one of the interpretations is sufficiently high in relation to all the others. For example, both \h{+/B barack/C obama/C} and \h{+/B michelle/C obama/C} share the seed \h{obama/C}, but when referring only to \h{obama/C} during the period he was a US president, people tend to assume that it refers to the most frequently mentioned entity -- \emph{Barack Obama}. The other is that a given seed should only be considered as an abbreviation if it is used a sufficient number of times as a primary concept in relations, \hbox{i.e.} if there is evidence that it is in fact used on its own to refer directly to some external concept, and not only as a common component of primary concepts. Put differently, seeds referring to common concepts which often act as building blocks of other concepts (\hbox{i.e.}, with a high deep degree relative to degree) are less likely to be valid abbreviations. Such is the case for ``house'' (which may indifferently refer to the White House or Dr. House) and ``qaida'' (which is typically used as a building block for Al Qaida and never by itself).
We thus establish a criterion that consists of the fulfillment of both of the following conditions, corresponding respectively to the heuristics above. A given seed $s$ is assigned to the coreference set $C$ with the highest $p$ if:

\begin{itemize}
\item $p$ is above a certain threshold $\theta$
\item ${d_s}/{\delta_s}$ is above a certain threshold $\theta'$
\end{itemize}

We set the thresholds to $\theta=.7$ and $\theta'=.05$, values that we verified empirically to produce good results. Naturally, these thresholds can be fine-tuned using methods more akin to hyper-parameter optimization in ML, but such optimizations are outside of the scope of this work. When the criterion is not met, the seed is left as a reference to a distinct entity. In our corpus, this happens for example with ``Korea'', which remains an ambiguous reference to either ``North Korea'' or ``South Korea''.

\subsection{Further disambiguation cases}

We do not present here a general solution for coreference resolution and synonym detection, let alone disambiguation as a whole. Some further cases beyond coreference resolution will nonetheless be covered in the next section, notably anaphora resolution, given that it requires discussing predicates and relations in more detail, along with empirical results. Other cases are left out of this work, but we would like to provide a quick insight into how they may be treated. One obvious example is that of synonyms, which are not implied by a pure structural analysis of hyperedges -- \hbox{e.g.} \emph{red} and \emph{crimson}, as well as \emph{U.S.} and \emph{United States}, for they share no common seed (as opposed to the cases emphasized in the previous subsection). This type of synonym detection may be achieved with the help of a general-knowledge ontology such as Wordnet or DBPedia, and/or with the help of word embeddings such as word2vec.
This is a foreseeable and desirable improvement to hypergraph-based text analysis that we leave for future work. Another case is the inverse problem of synonym detection: disambiguating atoms that correspond to the same word but to different entities, for example distinguishing ``Cambridge (UK)'' from ``Cambridge (USA/Massachusetts)''. We do not perform this type of distinction in this work, but we present another syntactic detail that enables it from a knowledge representation perspective: the atom namespace. Quite simply, beyond the human-readable part and the type and other machine-oriented codes, a third optional slash-separated part can be added to atoms, making it possible to distinguish them in cases such as the above, e.g.: \T{cambridge/C/1} and \T{cambridge/C/2}. Finally, coreference resolution can also apply to cases where neither seed concepts are shared nor anaphoras are present. Let us say that one sentence refers to ``Kubrick'' and the next one to ``the film director''. Both this type of case and the above mentioned disambiguation cases are likely to be more easily solved with the help of structured knowledge surrounding the concepts in the semantic hypergraph, possibly including general knowledge as mentioned. For example, it could be detected that a certain reference to ``Cambridge'' is closer to references related to the United States, or that ``Kubrick'' is structurally close to the concept of ``film director''. Alternatively, a hybrid approach taking advantage of deep learning models can be employed. In fact, we successfully integrated such a system\footnote{https://github.com/huggingface/neuralcoref} with the Graphbrain library, but avoid using it in this work for the sake of simplicity while defining SH methodological foundations.
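To close this section, the probabilistic seed-assignment criterion of section~\ref{sec:coref} can be condensed into a short sketch. The function and argument names are our own; in practice the degrees would be computed from the hypergraph as defined earlier.

```python
def assign_seed(set_degrees, d_seed, delta_seed, theta=0.7, theta_prime=0.05):
    """Assign a seed to the coreference set with the highest probability p,
    or return None and keep the seed as a reference to a distinct entity.

    set_degrees: summed degree of the edges in each coreference set.
    d_seed, delta_seed: degree and deep degree of the seed atom itself.
    """
    total = sum(set_degrees)
    if total == 0 or delta_seed == 0:
        return None
    best = max(range(len(set_degrees)), key=set_degrees.__getitem__)
    p = set_degrees[best] / total
    # both criteria must hold: dominant interpretation, and the seed is
    # actually used on its own (not mostly as a building block)
    if p > theta and d_seed / delta_seed > theta_prime:
        return best
    return None
```

With two coreference sets of degrees $90$ and $10$, and a seed that is used on its own often enough, the seed is assigned to the first set; lowering $p$ below $\theta$, or making the seed a near-pure building block, leaves it unassigned.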
\section{Integrated Case Study: Claim and Conflict Analysis} \label{sec:claim-and-conflict} We arrive at the point where we can propose an integrated application of the formalisms and methods discussed so far to the analysis of a large corpus of real text, combining symbolic and probabilistic rules. More specifically, we worked with a corpus of news titles that were shared on the social news aggregator \emph{Reddit}. We extracted all titles shared between January 1st, 2013 and August 1st, 2017 on \emph{r/worldnews}, a community that is described as: \emph{``A place for major news from around the world, excluding US-internal news.''} This resulted in a corpus of 404,043 news titles. We applied the methods described in sections~\ref{sec:text2hyper} and \ref{sec:corefs} to generate a hypergraph from the titles. We decided to focus on two specific categories of utterances that are very frequent in news sources, and of special interest for the social sciences \cite{tilly-1997-parliamentarization}, especially the study of public spaces \cite{ruiz2016more,van2017clause}: a \emph{claim} made by an actor about some topic and an expression of \emph{conflict} of one actor against another, over some topic. Helpfully, the detection of such categories also allows us to illustrate simple symbolic inference over the hypergraph. \subsection{Knowledge Inference} \begin{figure*} \centering\includegraphics[width=\linewidth]{figs/patterns.pdf} \caption{\label{fig:patterns}Two examples of relations starting with either a claim or a conflict predicate.} \end{figure*} The English language allows for vast numbers of verb constructions that indicate claims or expressions of conflict. Instead of attempting to identify all of them, we considered the $100$ most common predicate lemmas in the hypergraph, and from there we identified a set of ``claim predicates'' and a set of ``conflict predicates'', that we detail below. 
Overall, we found $3730$ different predicate lemmas, and their rank-frequency distribution is expectedly heavy-tailed. The $100$ most common lemmas, a small fraction of this set, account for $60.6\%$ of the hyperedges. Naturally, coverage could be improved by considering more predicates, but with diminishing returns. We then employed the same process described in subsection~\ref{subsec:pattern-learning} to discover rules that are capable of detecting claims and expressions of conflict, whereby a hyperedge contains an attributable:

\begin{itemize}
\item {\bf claim}, if the following conjunction of patterns is satisfied:
\begin{multline*}
\text{\h{PRED/P.\{sr\} \, ACTOR/C \, CLAIM/[RS]}} \\
\land \; \text{\h{lemma/J \, >PRED/P \, [say,claim]/P}}
\end{multline*}
\item {\bf expression of conflict}, if the following conjunction of patterns is satisfied:
{\begin{multline*}
\text{\T{(\,PRED/P.\{so,x\} \, SOURCE/C \, TARGET/C}} \\
\text{\T{[against,for,of,over]/T \,TOPIC/[RS]\,)}} \\
\land \; \text{\T{(\,lemma/J \, >PRED/P}} \\
\text{\T{[accuse,arrest,clash,condemn,kill,slam,warn]/P\,)}}
\end{multline*}}
\end{itemize}

\noindent So, a claim is essentially a relation, based on a predicate of lemma ``say'' or ``claim'', between an actor and a claim, which may also be a relation or a specifier. These patterns additionally rely on a new notation, ``>''. As we have seen in section~\ref{sec:hypergraphs}, a predicate can be a non-atomic hyperedge. As with concepts, the meaning of predicate atoms can be extended with a modifier. For example, the English verb conjugation \emph{``was saying''} is represented as \T{(was/M saying/P)}. In any case, there is always a predicate atom that corresponds to the main verb in the predicate: the ``>'' notation refers to this innermost atom, i.e. it removes an arbitrary amount of nesting based on modifiers. For example, ``\T{>PRED}'' matches ``\T{been/P}'', ``\T{(has/M been/P)}'', ``\T{(not/M (has/M been/P))}'', and so on.
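The ``>'' resolution and the lemma test can be sketched together. The representation (nested tuples, a set of \T{lemma/J} edges) and the helper names are our own illustrative assumptions; we also assume that modifier-built predicates nest the modified element last, as in the examples above.

```python
def innermost(pred):
    """Resolve the '>' notation: strip modifier nesting to the predicate atom.

    ('not/M', ('has/M', 'been/P')) -> 'been/P'
    """
    while not isinstance(pred, str):
        pred = pred[-1]          # the modified element comes last (assumed)
    return pred

def lemma_of(atom, hg):
    """Look up the lemma of an atom via (lemma/J word lemma) auxiliary edges."""
    for e in hg:
        if not isinstance(e, str) and e[0] == 'lemma/J' and e[1] == atom:
            return e[2]
    return atom                  # atoms already in lemma form map to themselves

def is_claim_predicate(pred, hg, claim_lemmas=('say/P', 'claim/P')):
    """Second conjunct of the claim rule: >PRED must have lemma 'say' or 'claim'."""
    return lemma_of(innermost(pred), hg) in claim_lemmas
```

For instance, \T{(was/M saying/P)} resolves to \T{saying/P}, whose stored lemma \T{say/P} satisfies the claim rule.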
\bigskip In figure~\ref{fig:patterns} we present two real sentences from our corpus and their respective hyperedges. Example (a) was classified as a claim and example (b) as an expression of conflict. These examples were purposely chosen to be simple, but the above rules can match more complicated cases. For example, the following sentence was correctly identified and parsed as a claim: \begin{quote} U.S. Secretary of State John Kerry was the intended target of rocket strikes in Afghanistan's capital Saturday, the Taliban said in a statement claiming responsibility for the attacks. \end{quote} \paragraph{Validation.} In table~\ref{tab:sentence-parse-validation} we present an evaluation of accuracy based on the manual inspection of $100$ claims and $100$ conflicts that were randomly selected from the hypergraph. Defects are deemed to be minor if they do not interfere with the overall meaning of the hyperedge (e.g., by leading to one of the other, more serious errors listed in the table). 
To illustrate with a minor defect from our dataset: \Q{(claims/P.sr (+/B.am google/C boss/C) ((does/M (not/M know/P.so)) he/C (in/B.ma (his/M salary/C) (+/B.am commons/C grilling/C))))} \noindent In this case, the concept ``commons grilling'' should be a separate specification: \Q{(claims/P.sr (+/B.am google/C boss/C) ((does/M (not/M know/P.sox)) he/C (his/M salary/C) (in/T (+/B.am commons/C grilling/C))))} \begin{table}[!t] \footnotesize\centering \begin{tabular}{llcc} \emph{task}&\emph{error type}&\emph{error}&\emph{error}\\ &&\em count&\em rate\\\toprule claim inference & not a claim & 0/100 & 0\%\\ & wrong actor & 0/100 & 0\%\\ & wrong topic & 2/100 & 2\%\\ & bad anaphora resolution & 1/13 & 8\%\\ & minor defects in topic & 8/100 & 8\%\\\midrule conflict inference & not a conflict & 0/100 & 0\%\\ & wrong origin actor & 0/100 & 0\%\\ & wrong target actor & 0/100 & 0\%\\ & wrong topic & 0/100 & 0\%\\ & minor defects in topic & 4/100 & 4\%\\ \bottomrule \end{tabular} \caption{\label{tab:sentence-parse-validation}Evaluation of several types of error in claim and conflict inference. Error counts and rates are presented, based on the manual inspection of 100 randomly selected claims and 100 randomly selected conflicts.} \end{table} \paragraph{Subjects and actors.} Both claim and conflict structures imply that the hyperedge playing the role of subject in the relation is an actor. Using the methods described in section~\ref{sec:corefs}, we can identify the coreference set of each actor and replace all occurrences of this actor with the same hyperedge. For each coreference set we choose the hyperedge with the highest degree as the main identifier, following the heuristic that the most commonly used designation of an entity should be both recognizable and sufficiently compact. 
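Choosing the main identifier of a coreference set is then a one-line selection; the degrees below are made-up values for illustration only.

```python
def main_identifier(coref_set, degree):
    """Label a coreference set by its highest-degree hyperedge, following the
    heuristic that the most common designation is recognizable and compact."""
    return max(coref_set, key=lambda e: degree[e])
```

Applied to a hypothetical Putin coreference set, the bare atom would win if it carries the highest degree.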
\begin{table*}[!t] \centering \footnotesize \begin{tabularx}{\linewidth}{>{\em}crl>{\sf}Xr} \toprule \bf{Type} & \bf{Rank} & \bf{Actor} & \bf{Hyperedges (coreference set)} & \bf{Degree} \\ \midrule \multirow{3}{*}{non-human} & 1 & China & china/C, (+/B.am south/C china/C) & 6199 \\ & 2 & Russia & russia/C & 5861 \\ & 3 & U.S. & us/C, (the/M us/C) & 3824 \\ \midrule \multirow{10}{*}{male} & 8 & Vladimir Putin & (+/B.am president/C putin/C), putin/C, (+/B.am vladimir/C putin/C), (+/B.am president/C (+/B.am vladimir/C putin/C)), (+/B.am (russian/M president/C) (+/B.am vladimir/C putin/C)), (+/B.am (russian/M president/C) putin/C) & 2338 \\ & 10 & Barack Obama & (+/B.am (+/B.am us/C president/C) (+/B.am barack/C obama/C)), (+/B.am president/C obama/C), (+/B.am barack/C obama/C), (+/B.am president/C (+/B.am barack/C obama/C)), obama/C, (+/B.am (+/B.am u.s./C president/C) (+/B.am barack/C obama/C)) & 2069 \\ & 23 & Donald Trump & (+/B.am president/C (+/B.am donald/C trump/C)), (+/B.am (+/B.am us/C president/C) (+/B.am donald/C trump/C)), (+/B.am donald/C trump/C), trump/C, (+/B.am president/C trump/C) & 1082 \\ \midrule \multirow{6}{*}{female} & 32 & Angela Merkel & merkel/C, (+/B.am angela/C merkel/C), (+/B.am (german/M chancellor/C) (+/B.am angela/C merkel/C)), (+/B.am chancellor/C (+/B.am angela/C merkel/C)), (+/B.am (german/M chancellor/C) merkel/C) & 750 \\ & 78 & Theresa May & may/C, (+/B.am theresa/C may/C), (+/B.am (+/B.am prime/C minister/C) (+/B.am theresa/C may/C)) & 270 \\ & 201 & Nicola Sturgeon & sturgeon/C, (+/B.am nicola/C sturgeon/C) & 81 \\ \midrule \multirow{3}{*}{group} & 46 & The Palestinians & palestinians/C, (the/M palestinians/C) & 487 \\ & 70 & The Kurds & kurds/C, (the/M kurds/C) & 302 \\ & 113 & The Russians & russians/C, (the/M russians/C) & 184 \\ \bottomrule \end{tabularx} \caption{Three actors with highest hypergraphic degree in each category: non-human, male, female, group (in decreasing order of highest degree).} \label{tab:top_actors} 
\end{table*} As seen in figure~\ref{fig:patterns}(a), the inner subject (i.e., the subject of the relative relation that represents what is being claimed) can be a pronoun. These cases are very common, and almost always correspond to a case where the actor is referencing itself in the content of the claim. On one hand, we perform simple anaphora resolution: if the inner subject is a pronoun in the set \{\T{he/C}, \T{it/C}, \T{she/C}, \T{they/C}\}, then we replace it with the outer subject. On the other hand, we take advantage of the pronoun to infer more things about the actor. The four pronouns mentioned indicate, respectively, that the actor is a male human, a non-human entity, a female human, or a group. We take the majority case, when available, to assign one of these categories to actors. The pronoun \emph{they} is being increasingly used as a gender-neutral third person singular case, but we have not found such cases in our corpus. Table~\ref{tab:top_actors} shows the top three actors per category, ranked by their degree in the hypergraph, along with their coreference set. Obviously, more sophisticated rules can be devised, both for anaphora resolution and category classification. Our goal here is to illustrate that, thanks to the SH abstraction, it becomes possible to perform powerful inferences (i.e., both useful and at a high level of semantic abstraction) with very simple rules. \subsection{Topic Structure} \label{subsec:topic-structure} The very definition of \emph{topic}, for the purpose of automatic text analysis, is somewhat contingent on the method being employed. One of the most popular topic detection methods in use nowadays is \emph{Latent Dirichlet Allocation (LDA)}~\cite{blei2003latent}, which is a probabilistic topic model~\cite{blei2012probabilistic} that views topics as latent entities that explain similarities between sets of documents. In LDA, topics are abstract constructs. 
Documents are seen as a random mixture of topics, and topics are characterized by their probability of generating each of the words found in the document set. LDA uses a generative process to statistically infer these probabilities. Ultimately, a topic is described by the set of words with the highest probabilities of being generated by it. Human observers can then infer some higher-level concept from these words and probabilities. For example, if the five highest probability words for a topic $X$ are \emph{\{EU, Ursula von der Leyen, Boris Johnson, Barnier, Trade\}}, a human observer may guess that a good label for this topic is \emph{Brexit Negotiations}. LDA is applicable to sets of documents, for a predefined number of topics, where each document is considered to be a \emph{bag-of-words}. Distributional topic detection methods have recently generated a variety of research endeavors, including the application of stochastic blockmodels to discover joint groups of documents which use keywords in a similar fashion \citep{gerlach2018network}. A different approach to topic detection is \emph{TextRank}~\cite{mihalcea2004textrank}, which is capable of detecting topics within a single document. With TextRank, the document is first transformed into a word co-occurrence graph. Common NLP approaches are used to filter out certain classes of words from the graph (e.g., do not consider articles such as ``the''). Topics are considered to be the words with the highest network centrality in this graph, according to some predefined threshold condition. Simple statistical methods over the co-occurrence graph can be used to derive \emph{ngram} topics from the previous step. Given that the order in which words appear in the document is important, TextRank cannot be said to be a \emph{bag-of-words} approach such as LDA.
It relies a bit more on the meaning of the text, and it is more local -- in the sense that it works inside a single document instead of requiring statistical analysis over a corpus of documents. \begin{table*}[t] \scriptsize \begin{tabularx}{\linewidth}{>{\bf\scriptsize}lp{2.3cm}ccX} &\scriptsize\emph{actor}&&&\scriptsize\emph{topic}\\\toprule &&&$\rightarrow$&scuppering syria peace talks\\\multirow{-2}{*}{$\Longrightarrow$}&\multirow{-2}{*}{assad}&&$\rightarrow$&war crimes in aleppo\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{damascus}&&$\rightarrow$&continuing to use chemical weapons\\\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{the united states}&&$\rightarrow$&the weakening of europe\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{us}&&$\rightarrow$&espionage\\&&$\leftarrow$&&mistral delay\\&&&$\rightarrow$&meddling in election\\\multirow{-3}{*}{$\Longleftrightarrow$}&\multirow{-3}{*}{russia}&&$\rightarrow$&rapid eu sanctions\\\rowcolor{gray!10}&&$\leftarrow$&&being europe's biggest problem child\\\rowcolor{gray!10}\multirow{-2}{*}{$\Longleftrightarrow$}&\multirow{-2}{*}{germany}&&$\rightarrow$&wage dumping in the meat sector\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{al qaeda}&$\leftarrow$&&new attacks\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{council of europe}&$\leftarrow$&&allowing to hit parents and spank their children\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{iraqis}&$\leftarrow$&&imminent isis attacks\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{kagame}&$\leftarrow$&&rwanda genocide\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{london}&$\leftarrow$&&plot\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{syria's assad}&$\leftarrow$&&supporting terrorism\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{un}&$\leftarrow$&&racist attacks on black 
minister\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{united nations top human rights official}&$\leftarrow$&&delays\\ \bottomrule \end{tabularx} \caption{\label{tab-ego-centered-france}List of actors criticizing or being criticized by ego (here, France), and the topics over which the critique applies. Single arrows show the critique direction (left to right: ego criticizes that actor) for each underlying hyperedge, double arrows indicate the overall critique direction (which can thus go both ways).} \end{table*} In this work, we move significantly more in the direction of text understanding and locality. Our topics are firstly inferred from the meaning of sentences. As we have shown, pattern analysis of hyperedges can be used to infer relationships such as \emph{claim} and \emph{conflict}, which imply both actors and topics. Given coreference detection, such topics are characterized by sets of hyperedges, but these sets are not probabilistic in the sense that LDA's are. Instead, they are a best guess of symbolic representations that map to some unique concept. Our approach relies even more on meaning than TextRank, and it allows for topic detection at an even more local scale: single sentences. In the examples given in figure~\ref{fig:patterns}, the claim shown in (a) implies the rather specific topic ``afraid of military us strike'', and (b) the topic ``military engagement in syria''. Another important aspect of our approach is that topics can be composed of other topics or concepts, forming a hierarchical structure. This is a natural consequence of how we model language, as explained in section~\ref{subsec:taxonomies}. This allows us to explore topics at different levels of detail. The topic implied by a claim or conflict can be very specific and possibly unique in the dataset, but the more general subtopics or concepts that it contains can be used to find commonalities across the hypergraph.
Considering the hyperedge from one of the topic examples above, \h{military/M (in/B engagement/C syria/C)}, it is possible to extract inner edges that correspond to more general concepts, for example \h{syria/C}, \h{engagement/C} and \h{in/B engagement/C syria/C}. With the help of the implicit taxonomy, which indicates that \T{(in/B engagement/C syria/C)} is a type of \T{engagement/C}, a simple rule could also infer that \T{(military/M (in/B engagement/C syria/C))} is a type of \T{(in/B engagement/C syria/C)}. In the various tables of results that we will subsequently present, actors and topics are represented by labels in natural language. During the transformation of text to hyperedge, every hyperedge that is generated is associated with the chunks of text from which it comes. These chunks are then used as textual labels for the hyperedges. \subsection{Inter-actor criticism} Focusing on France and Germany as target actors $a$, we gather the results for the detection of conflict patterns in the tables \ref{tab-ego-centered-france} and \ref{tab-ego-centered-germany}. Each of these actors is involved in active or passive criticism of other actors, \hbox{i.e.} either critical (\ra) of or criticized by ($\leftarrow$) other actors. The critique is related to a topic, and may go in both directions, \hbox{e.g.} Germany criticizes Greece for debt commitments (second row of table~\ref{tab-ego-centered-germany}). The topics presented here correspond to the detailed topics discussed in the previous section. This structured enumeration provides a way to scan the direction, target and frequency of claims by actors on other actors in a given text corpus.
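Returning to the topic decomposition described in section~\ref{subsec:topic-structure}, the recursive extraction of more general concepts from a topic hyperedge can be sketched as follows. This is a simplified illustration in which hyperedges are represented as nested tuples of atom strings; it is not the actual Graphbrain data structure.

```python
def subconcepts(edge):
    """Recursively collect the inner hyperedges of a topic that denote
    concepts: atoms of type C, and sub-edges headed by a builder atom
    (type B), which themselves build concepts."""
    found = []
    if isinstance(edge, str):                     # atom
        if edge.endswith("/C"):
            found.append(edge)
        return found
    head = edge[0]
    if isinstance(head, str) and head.split("/")[-1].startswith("B"):
        found.append(edge)                        # builder edge is itself a concept
    for inner in edge:
        found.extend(subconcepts(inner))
    return found

# The topic "military engagement in syria":
topic = ("military/M", ("in/B", "engagement/C", "syria/C"))
```

Applied to this example, the sketch recovers the inner concepts listed above: the builder edge \T{(in/B engagement/C syria/C)}, plus \T{engagement/C} and \T{syria/C}.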
\begin{table*}[!t] \footnotesize \begin{tabularx}{\linewidth}{>{\bf\scriptsize}lp{2.3cm}ccX} &\scriptsize\emph{actor}&&&\scriptsize\emph{topic}\\\toprule \multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{fiat}&&$\rightarrow$&using illegal emissions device\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{greece}&&$\rightarrow$&debt commitments\\\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{israel}&&$\rightarrow$&latest settlement expansion in east jerusalem\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{kurds}&&$\rightarrow$&one sided referendum plans\\\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{maduro}&&$\rightarrow$&holding venezuelans' hostage\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{mexico city}&&$\rightarrow$&brexit\\&&&$\rightarrow$&cold war reflexes\\\multirow{-2}{*}{$\Longrightarrow$}&\multirow{-2}{*}{putin}&&$\rightarrow$&moscow up beefs nuclear arsenal\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{syrian}&&$\rightarrow$&alleged car bomb plot\\\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{uk}&&$\rightarrow$&leaving eu\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{ukraine}&&$\rightarrow$&graft\\\multirow{-1}{*}{$\Longrightarrow$}&\multirow{-1}{*}{us}&&$\rightarrow$&stasi methods ahead of obama\\\rowcolor{gray!10}&&$\leftarrow$&&halting arms deal\\\rowcolor{gray!10}&&&$\rightarrow$&cyber attack on ukraine peace monitors\\\rowcolor{gray!10}&&&$\rightarrow$&kremlin dismisses us intelligence claims as a witch hunt\\\rowcolor{gray!10}\multirow{-4}{*}{$\Longleftrightarrow$}&\multirow{-4}{*}{russia}&&$\rightarrow$&military engagement in syria\\&&$\leftarrow$&&wage dumping in the meat sector\\\multirow{-2}{*}{$\Longleftrightarrow$}&\multirow{-2}{*}{france}&&$\rightarrow$&being europe's biggest problem child\\\rowcolor{gray!10}&&$\leftarrow$&&causing 
instability\\\rowcolor{gray!10}\multirow{-2}{*}{$\Longleftrightarrow$}&\multirow{-2}{*}{u.s.}&&$\rightarrow$&ceding lead role to china\\&&$\leftarrow$&&backing failed coup\\&&$\leftarrow$&&cultural racism over eu accession\\&&$\leftarrow$&&engaging in diplomatic rudeness and double standards\\&&$\leftarrow$&&genocide speech\\&&$\leftarrow$&&harbouring terrorists\\&&$\leftarrow$&&succor providing to its enemies\\&&$\leftarrow$&&succour providing to its enemies\\&&$\leftarrow$&&working against erdogan\\&&&$\rightarrow$&blackmailing eu\\&&&$\rightarrow$&itself further distancing from europe by the death penalty reinstating after a disputed referendum\\&&&$\rightarrow$&monday\\&&&$\rightarrow$&nazi\\\multirow{-13}{*}{$\Longleftrightarrow$}&\multirow{-13}{*}{turkey}&&$\rightarrow$&supporting terrorism\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{erdogan}&$\leftarrow$&&nazi practices over blocked political rallies\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{eu commission}&$\leftarrow$&&air pollution breaches\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{eu leaders}&$\leftarrow$&&pressure on migrant quotas\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{french far right leader marine le pen}&$\leftarrow$&&doors opening to refugees\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{italy}&$\leftarrow$&&undermining its economic efforts\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{moscow}&$\leftarrow$&&up hushing russian girl's rape\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{orban}&$\leftarrow$&&rude tone over refugees\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{snowden}&$\leftarrow$&&nsa aiding in spying efforts\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{turkey's president tayyip erdogan}&$\leftarrow$&&behaving like nazis\\\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{un}&$\leftarrow$&&institutional racism and racist stereotyping against people
of african descent\\\rowcolor{gray!10}\multirow{-1}{*}{$\Longleftarrow$}&\multirow{-1}{*}{un committee}&$\leftarrow$&&an anti racism - convention violating by not prosecuting a politician's comments about turks and arabs\\ \bottomrule \end{tabularx} \caption{\label{tab-ego-centered-germany}List of actors criticizing or being criticized by ego (here, Germany), and the topics over which the critique applies. Single arrows show the critique direction (left to right: ego criticizes that actor) for each underlying hyperedge, double arrows indicate the overall critique direction (which can thus go both ways).} \end{table*} \subsection{Dyadic claims} Here we focus on claims that actors make about other actors (or themselves). In other words, we refer to claims where the subject of the claim is itself an actor. Furthermore, we consider only claims for which the claim relation contains an argument playing the role of complement, meaning that the subject of the claim is being linked with some concept, for example expressing membership in a class (e.g.: ``Pablo is a cat.'') or the possession of some property (e.g.: ``North Korea is afraid''). We also recursively extract context edges that are connected to the outer claim edge through nestings of \h{:/J}. To give an example from our corpus: \Q{(:/J (says/P.sr russia/C ('s/P.sc it/C ready/C)) ((to/M deal/P.x) (with/T (new/M (+/B.am ukraine/C president/C)))))} The edge ``\T{((to/M deal/P.x) (with/T (new/M (+/B.am ukraine/C president/C))))}'' is extracted as a context edge. Finally, specification edges of the claim and context edges are extracted out and grouped together. The predicate of the relative relation that expressed the claim is inspected to further determine the tense of the attribution (present, past, future), and to identify negations.
Once again, this is achieved by simple rules over the hypergraphic representation: \begin{itemize} \item The presence of a negation modifier ({\T{not/M}, \T{n't/M}}) implies a negation, as is in fact the case with the first example of figure~\ref{fig:patterns}. \item The presence of the predicate {\T{was/P}} implies the past. \item The presence of the modifier {\T{will/M}} implies the future tense. \end{itemize} \begin{table*}[ht] \scriptsize\centering \begin{tabularx}{\linewidth}{c>{\columncolor[gray]{0.97}\raggedright}cc>{\columncolor[gray]{0.97}\raggedright}clX} \textbf{source}&\multicolumn{1}{c}{}&\textbf{target}&\multicolumn{1}{c}{}&\multicolumn{2}{l}{\textbf{property, \emph{context} and \emph{<specification>}}}\\\toprule & & & & &just the place for you\\ & & & & &the victim of intensive cyberattacks\\ & & & & &ready; \emph{to strike u.s. aircraft carrier}\\ & & & & &able; \emph{to nuke u.s. mainland}\\ & & & & &a great place for human rights\\ & & & & &ready for war with us\\ & & & & &close; \emph{developing a new satellite}; \emph{speculation fuelling it might attempt a long range rocket to fire to mark a key political anniversary}; \emph{<next month>}\\ & & & & &open; \emph{holding talks with south korea}; \emph{<including the suspension of the south's joint military drills with the united states>}; \emph{<if are met certain conditions>}\\ & & & & &open; \emph{to talk with south korea}\\ & & & & &the biggest victim in u.s.
student's death\\\cline{6-6} & & & & &responsible for righteous sony hacking\\ \multirow{-14}{*}{\textbf{north korea}}&\multirow{-14}{*}{says}&\multirow{-14}{*}{\textbf{north korea}}&\multirow{-14}{*}{is}&\multirow{-2}{*}{\emph{not}}&afraid of us military strike\\ \midrule & & &\cellcolor{white}was & &ready; \emph{to put russia's nuclear weapons}; \emph{<during tensions over the crisis in ukraine and crimea>}; \emph{<on standby>}\\\cline{5-6} & & & & &possible convinced solution to ukraine crisis\\ & & & & &willing; \emph{to play a mediating role between the two koreas}; \emph{relieve the state of crisis on the korean peninsula}; \emph{<according to president moon jae in's special envoy to moscow>}; \emph{<to help>}; \emph{<by dispatching an emissary>}; \emph{<to pyongyang>}\\ & & & & &ready; \emph{to sell s-400 anti aircraft system}; \emph{<to turkey>}\\\cline{6-6} & & &\multirow{-6}{*}{is}&\emph{not} &russia's president for life\\\cline{5-6} & & &\cellcolor{white}& &russia's president; \emph{2024}\\\cline{6-6} \multirow{-10}{*}{\textbf{putin}}& & &\multirow{-2}{*}{\cellcolor{white}will be}&\emph{not} &president for life\\\cline{1-1}\cline{5-6} & & & & &ready; \emph{to improve ties with the us}\\ & & & & &moral compass of the world\\ & & & & &interested; \emph{other brics brazil, russia, india, china, south africa members}; \emph{using national currencies}; \emph{<after agreeing on such an arrangement with china>}\\ & & & & &willing; \emph{over to hand to us house of representatives and senate}\\\cline{6-6} \multirow{-6}{*}{\textbf{russia}}& & &\multirow{-6}{*}{is}&\emph{not} &a threat to anyone\\\cline{1-1}\cline{5-6} & & &\cellcolor{white}was& &the mastermind of the ukrainian coup\\\cline{5-6} \multirow{-2}{*}{\textbf{us}}&\multirow{-16}{*}{says} &\multirow{-16}{*}{\textbf{putin}}&is & &world's only superpower; \emph{back walks trump compliments}\\ \midrule & & & & &open; \emph{coordinating with u.s. 
in syria}\\ & & & & &ready; \emph{to deal with new ukraine president}; \emph{to retaliate for u.s. election sanctions}\\ & & & & &ready for dialogue with petro poroshenko, ukraine's next president\\ & & & & &ready; \emph{to provide the free syrian army}; \emph{<with air support in fight against islamic state>}\\ \multirow{-5}{*}{\textbf{russia}}&\multirow{-5}{*}{says}&\multirow{-5}{*}{\textbf{russia}} &\multirow{-5}{*}{is}& &building naval bases in asia, latin america \\ \midrule & & & & &disappointed over china's failure; \emph{over to hand fugitive intelligence analyst edward snowden}\\ & & & & &open; \emph{to work with new iranian president hussain rowhani}\\\cline{6-6} \multirow{-3}{*}{\textbf{us}} & \multirow{-3}{*}{says} &\multirow{-3}{*}{\textbf{us}} &\multirow{-3}{*}{is}&\emph{not}&surprised; \emph{<if north korea launches missiles>}\\ \bottomrule \end{tabularx} \caption{\label{tab:dyadic-claims}List of claims, or attributions by subject actors (sources) about other actors (targets). Automatically identified negative claims are emphasized.} \end{table*} In table~\ref{tab:dyadic-claims}, we present such attributions between the actors: North Korea, Russia, Putin and U.S. \subsection{Topic-based conflict network} So far we have presented actor-centric results. Here we will consider all conflicts that contain ``Syria'' as topic or subtopic (according to the definitions of section~\ref{subsec:topic-structure}). From this set of hyperedges we extracted a directed network connecting actors engaging in expressions of conflict over Syria. A visualization of this network is presented in figure~\ref{fig:conflict-net}. \begin{figure*} \begin{center}\includegraphics[width=.8\linewidth]{figs/syria.png}\end{center} \caption{\label{fig:conflict-net}Network of conflicts between actors over the topic ``Syria''. Arrows point from the originator of the conflict to its target. Size of nodes is proportional to their degree. Two factions were identified by a simple algorithm. 
One faction is represented as red, the other blue. Gray nodes do not belong to any faction.} \end{figure*} We devised a very simple algorithm to identify two factions in this conflict graph. Firstly, we attribute a score $s_{ij}$ to every hyperedge ($e_{ij}$): $$s_{ij} = \min(d_i, d_j)$$ where $d_i$ is the degree (in- and out-) of node $i$. Then we iterate through hyperedges in descending order of $s$. This heuristic assumes that hyperedges connecting more active nodes are more likely to represent the fundamental dividing lines of the overall conflict. The first hyperedge assigns one node to faction A and another to faction B. From then on, a node is assigned to a faction if it does not have a conflict with any current member of this faction, and has a conflict with a current member of the opposite faction. In the case that the node cannot be assigned to any faction, it remains unassigned. This resulted in faction A containing the actors: \T{\{russia, iran, moscow, putin, china, erdogan, palestinians\}}, faction B the actors: \T{\{us, west, israel, the united states, france, netanyahu, uk, germany, obama\}} and the following actors remaining unassigned: \T{\{turkey, assad, the european union\}}. Faction A is shown in blue in figure~\ref{fig:conflict-net}, and faction B in red. This categorization and network visualization suggest that the main axis of the conflict around Syria is a Russia / U.S conflict. Factions A and B contain state actors and political leaders that are typically aligned with, respectively, Russia and the U.S. Naturally, more sophisticated faction and alliance detection methods can be employed. Here we are mostly interested in showing the effectiveness of our approach in summarizing complex situations from large natural language corpora, and to provide some empirical validation that these results are sensible. 
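The faction-assignment heuristic above can be stated compactly in Python. This is a sketch over plain edge lists (node names in the example are illustrative); tie-breaking in the edge ordering is left unspecified in the description, so marginal assignments may differ.

```python
from collections import Counter

def find_factions(edges):
    """Two-faction assignment over a conflict graph: score each edge
    by the minimum degree of its endpoints, process edges in
    descending score order, seed factions A and B with the first
    edge, then assign a node to a faction iff it conflicts with a
    member of the opposite faction and with no member of its own."""
    degree = Counter(node for edge in edges for node in edge)
    conflicts = {node: set() for node in degree}
    for u, v in edges:
        conflicts[u].add(v)
        conflicts[v].add(u)
    ordered = sorted(edges, key=lambda e: min(degree[e[0]], degree[e[1]]),
                     reverse=True)
    faction = {ordered[0][0]: "A", ordered[0][1]: "B"}
    for u, v in ordered[1:]:
        for node in (u, v):
            if node in faction:
                continue
            foes_in_a = any(faction.get(m) == "A" for m in conflicts[node])
            foes_in_b = any(faction.get(m) == "B" for m in conflicts[node])
            if foes_in_a and not foes_in_b:
                faction[node] = "B"
            elif foes_in_b and not foes_in_a:
                faction[node] = "A"
    return faction  # nodes that could not be assigned are simply absent
```

Nodes in conflict with both factions, such as Turkey or Assad in the Syria network, remain unassigned, as described above.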
This conflict graph was built from a total of $53$ hyperedges, and we manually verified that they all correspond to expressions of conflict, and that both the intervening actors and main topic were correctly identified in all cases. \section{Conclusions} We have presented the novel SH formalism, aimed at a new approach to language understanding based on the idea of translating NL into a structured and formal representation, while allowing room for the inherent and often irreducible ambiguities found in human communication. Developing the formalism entailed an effort of modeling NL into a set of types and syntactic rules that (1) preserves the richness of NL, (2) facilitates computational language understanding tasks and (3) can act as a \emph{lingua franca} for hybrid systems that include both human and computational intelligence of various natures (e.g. symbolic, graph-based and statistical). Surrounding this central idea, we presented a viable parser of NL to SH using standard contemporary ML techniques as well as higher-level linguistic features provided by standard NLP libraries. Furthermore, we have shown that inference rules and knowledge extraction patterns can be represented in SH notation itself, and we have developed a procedural template for systems capable of learning inference rules with the collaboration of humans and reference hypergraphs extracted from open text.
We believe we have empirically validated our approach from several angles and in several ways: in terms of the completeness of the representation, by its ability to represent all grammatical constructs found in the Universal Dependencies; in terms of the precision of the parser across a variety of text categories; in terms of the expressive power of SH in being able to produce competitive results in a task for which a number of dedicated competing systems and an external benchmark exist; and finally in its ability to tackle a set of specific and related tasks of language understanding of particular interest to the social sciences (actor and gender detection, co-reference resolution, claim and conflict analysis), producing results that reasonably match common-sense intuitions about the ground truth, and also display good precision when manually verified. The central goal has been to lay the foundations of this approach and demonstrate its potential. We do not claim to have the best performing system in any of the tasks that we tackled, nor was this our aim, but we do hope to have demonstrated the versatility, completeness and potential of SH. Our further ambition is to apply this method to a variety of language understanding tasks where the understandability of results is desirable, for example in news / social media analysis (detection of actors, topics, conflicts, agreements, causality, beliefs), or to extract viewpoints from scientific articles, or to help in the study of cultural objects such as literary works (known as \emph{Distant Reading}~\cite{moretti2013distant}). Our hope is that we were able to convince the reader of the potential of SH for such tasks, enabling large-scale text corpus analysis while preserving the rich understanding expected in social science endeavors which normally require a significant amount of tedious manual coding \cite{tilly-1997-parliamentarization}.
We created the Graphbrain open-source software library that implements all the ideas we described\footnote{https://github.com/graphbrain/graphbrain}, aiming not only to facilitate the replicability of all the experiments that we performed in this work, but also to facilitate the adoption and extension of SH and related methodology by the research community at large. \section*{Acknowledgements} This research has been supported by the ``Socsemics'' Consolidator grant funded by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation program, grant agreement No. 772743. \bigskip {\small
\section{Outline} \label{sec:outline} We study the problem of recovering an \emph{unknown function} $f$ from a noisy and indirect \emph{observation} $Y^{n}$. In particular, we consider a class of inverse problems in Hilbert space, given as \begin{equation} \label{eq:Ys} Y^{n} = A f + \frac 1 {\sqrt n} \xi. \end{equation} Here~$A\colon X \to Y$ is a linear mapping between two separable Hilbert spaces~$X$ and~$Y$, termed \emph{the forward operator}. For our analysis, we shall assume that the mapping~$A$ is compact and injective. It will be clear from the assumptions made later, that the injectivity can easily be relaxed. These assumptions will also entail the compactness of~$A$. The \emph{observational noise} is assumed to be additive, modeled as a Gaussian white noise~$\xi$ in the space $Y$, scaled by $\frac{1}{\sqrt{n}}$. The problem of recovering the unknown $f$ from the observation $Y^{n}$ is assumed to be ill-posed, in the sense that $A$ is not continuously invertible on its range~$\mathcal R(A)\subset Y$. In particular, this means that~$\mathcal R(A)$ is not contained in a finite-dimensional subspace. Notice that although the white noise $\xi$ can be defined by its actions in the space $Y$, it almost surely does not belong to~$Y$. Rigorous meaning to model~\eqref{eq:Ys} can be given using the theory of stochastic processes, see Section~6.1.1 in~\cite{MR3588285}. In the Bayesian approach to such inverse problems, we postulate a prior distribution $\Pi$ on $f$ and combine with the (Gaussian) data likelihood $P^{n}_f$ to obtain the posterior distribution $\Pi(\cdot|Y^{n})$ on $f|Y^{n}$, see~\cite{MR3839555} for a comprehensive overview of the area. We are interested in studying the frequentist performance of the posterior distribution in the small noise asymptotic regime~$\frac 1 {\sqrt n}\to 0$, and hence~$n\to\infty$. 
More specifically, we consider observations generated from a fixed underlying element $f_{0}\in X$, $Y^{n}\sim P^{n}_{f_0}$, and study rates of contraction of the resulting posterior distribution around $f_{0}$, as $n\to\infty$. The study of rates of posterior contraction for inverse problems has received great attention in the last decade, initiated by~\cite{MR2906881}. The authors of that study considered Gaussian priors which were conjugate to the Gaussian likelihood. This results in Gaussian posteriors, having explicitly known posterior mean and covariance operator. Moreover, by assuming that the prior covariance operator and the linear map $A$ are mutually diagonalizable, the infinite dimensional inverse problem was reduced to an infinite product of one-dimensional problems. In this way, posterior contraction rates could be determined using explicit calculations both for moderately, and in the subsequent studies~\cite{MR3031282} and \cite{ASZ14}, for severely ill-posed linear forward operators. This approach was surveyed and extended to general ill-posedness of the linear operator by the present authors in~\cite{MR3815105}, using techniques from regularization theory. Several works extended the diagonal linear Gaussian-conjugate setting to various other directions, for example~\cite{ALS13} and~\cite{MR3985479} studied linear forward operators which are not simultaneously diagonalizable with the covariance operator of the Gaussian prior, and \cite{MR3535664} studied linear hypoelliptic pseudo-differential forward operators with Gaussian priors. More recently, there has been a wealth of contributions in more complex inverse problems, including non-linear ones arising in PDE models, see for example~\cite{MR4118619, MR4151406, kweku19}. Another line of progress has been the consideration of more general priors, so far for linear inverse problems, see~\cite{KR13} and~\cite{MR4116718}. 
The idea underlying all of these works is to first establish rates of contraction for the related direct problem \begin{equation} \label{eq:direct} Y^{n} = g + \frac 1 {\sqrt n} \xi, \end{equation} with unknown $g=A f$, in which the data~$Y^{n}$ are generated from~$g_0 = A f_0$. Once such rates are established, the strategy is to control distances on the level of $f$ by distances on the level of $g$, when restricting to a sieve set $S_n$ on which the inversion of $A$ is well-behaved. This makes it possible to translate rates for the direct problem to rates for the inverse problem when the posterior is restricted to the sieve set $S_n$. If the posterior mass concentrates on $S_n$, then these rates are also valid for the unrestricted posterior. In order to establish direct rates, the authors of the above-mentioned studies use the testing approach, see~\cite{MR2332274}. Here we shall explore the methodology proposed by~\cite{MR3757524}, which explicitly uses the \emph{modulus of continuity} (function) in order to translate rates for the direct to rates for the inverse problem. This approach is in principle general; however, so far it has been applied to certain linear inverse problems, with moderately and severely ill-posed forward operators, under Sobolev-type smoothness assumptions on the truth $f_{0}$. Our work is also related to~\cite{MR4116718}, in that both works use approximation-theoretic techniques to control the inversion of $A$. We consider (centered) Gaussian priors on $f$, arising by truncating the series representation of an underlying infinite-dimensional prior on the separable Hilbert space $X$, see e.g.~\cite[Sect.~2.4]{MR3839555}. We develop a comprehensive theory for establishing rates of contraction for general linear inverse problems, under general smoothness conditions, with a particular focus on the optimal choice of the truncation level.
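Concretely, denoting by~$\lr{\lambda_{j}, x_{j}}_{j\geq 1}$ the eigenpairs of the covariance operator of the underlying infinite-dimensional Gaussian prior, the prior truncated at level~$k$ can informally be written in series form as
\[
f \sim \sum_{j=1}^{k} \sqrt{\lambda_{j}}\,\gamma_{j}\, x_{j},\qquad \gamma_{1},\dots,\gamma_{k}\ \text{i.i.d.}\ \mathcal N(0,1);
\]
the formal construction, together with the distinction between native and inherited truncated priors, is given in Section~\ref{sec:direct}.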
Truncated priors are practically relevant, since any implementation requires truncation, but they can also lead to optimal rates of contraction for a smoothness-dependent choice of the truncation level as a function of $n$, see e.g.~\cite{MR2418663}. Furthermore, in \cite{KR13}, it was shown that putting a hyper-prior on the truncation level can lead to adaptation to unknown smoothness. This was done in the context of inverse problems with specific types of smoothness (Sobolev) and degree of ill-posedness of the operator (power or exponential type). See also~\cite{MR3091697}, where direct models are studied. The extension of adaptation to the general framework which we consider here is interesting, but beyond the scope of this work. Contraction rates for the problems~\eqref{eq:Ys} and~\eqref{eq:direct} are related through the modulus of continuity of the mapping~$A^{-1}$. Thus, knowing a contraction rate, say~$\delta_n$, for the direct problem~\eqref{eq:direct}, and knowing the behavior of the modulus of continuity~$\omega_{f_0}(A^{-1},S_n,\delta),\ \delta>0$, where~$S_n$ is the (finite-dimensional) support of the prior, we obtain a contraction rate for~$f_0$ as~$\omega_{f_0}(A^{-1},S_n,\delta_n),\ n\to\infty$. In this program, the role of the truncation level~$k=k(n)$ is most important. There is $k^{(1)} =k^{(1)}(n)$ that should be used for the inverse problem, $k^{(2)}=k^{(2)}(n)$ which works for the direct problem, and finally~$k^{(3)}=k^{(3)}(n)$ used in the modulus of continuity. For the plan, as outlined above, to work, we need to establish that actually a universal choice~$k=k(n)$ is suited for all three problems. In Section~\ref{sec:setting} we shall introduce the overall setting of the study, and we shall formulate Theorem~\ref{thm:direct-inverse}, which comprises the main achievements of this study. The rest of the study is composed of four parts.
In Section~\ref{sec:direct} we will develop the tools needed to analyze the direct problem~\eqref{eq:direct} and obtain~$k^{(2)}(n)$ depending on the underlying prior covariance and the smoothness of $g_{0}$. Due to linearity, Gaussian priors on~$f$ induce Gaussian priors on~$g=A f$, which streamlines the analysis of problem~\eqref{eq:direct}. However, the smoothness of the induced true element in the direct problem, $g_0=A f_0$, depends on the smoothing properties of the operator $A$, and in particular $g_0$ might not belong to any of the standard smoothness classes. For this reason, we shall study rates of contraction in the (direct) white noise model, given a Gaussian prior on $g$ and under general smoothness assumptions on~$g_{0}$. Emphasis will be placed on the construction of the prior. We shall analyze truncated Gaussian priors posed directly on~$g$, obtained by truncating the series representation of an underlying infinite-dimensional Gaussian prior (called 'native', below), but also priors that are obtained as linear transformations of truncated Gaussian priors chosen for some~$f$ (called 'inherited', below). The former is relevant in the context of~\eqref{eq:direct} when $A$ commutes with the covariance operator of the underlying Gaussian prior on $f$. In the latter non-commuting case the analysis is more involved and restrictive. This section is self-contained and may be of independent interest. The main result is Theorem~\ref{thm:spc-bound} and it includes a way of assessing the optimality of the obtained bounds. In Section~\ref{sec:inverseP} we introduce the modulus of continuity, and we shall discuss its behavior, as~$\delta\to 0$, from an approximation-theoretic perspective. The main result here will be Theorem~\ref{thm:phi-theta-bound}, indicating the choice~$k^{(3)}(n)$. In Section~\ref{sec:relating} we shall show that the choice~$k^{(2)}(n)$ yields optimal behavior also of the modulus of continuity, such that we may let~$k^{(2)}=k^{(3)}$.
Therefore, letting~$k^{(1)}(n) = k^{(2)}(n) = k^{(3)}(n)$ yields a contraction rate for the inverse problem, allowing us to establish the main result, Theorem~\ref{thm:direct-inverse}. In Section~\ref{sec:xmpls} we exemplify the obtained (general) bounds at 'standard instances', with forward operators which have a moderate decay of singular numbers, a (severe) exponential decay, but also a (mild) logarithmic decay. Many examples for such instances are known. The Radon transform is prototypical for a power type decay of singular numbers, see the monograph~\cite{MR1847845}. The heat equation is known to exhibit an exponential decay of the singular numbers, see~\cite{MR1408680}, which is also a good resource for more examples. In particular, we explicitly derive (minimax) contraction rates under Sobolev-type smoothness, both for the direct and inverse problems, in the above-mentioned instances. In order to streamline the presentation, the proofs of the results are given separately in Section~\ref{sec:proofs}. \section{Setting and main result} \label{sec:setting} We next define certain concepts that will be needed for the development of the paper. After establishing some notation, we introduce rates of posterior contraction for the direct and inverse problem, links between the main operators pertaining to our analysis, as well as the concept of smoothness that will be used throughout the paper. We formulate the main result in Section~\ref{sec:main-result}. \subsection{Notation} We shall agree upon the following notation. We denote by $\norm{\cdot}{X}, \norm{\cdot}{Y}$ the norms in $X,Y$, respectively. When there is no confusion we will use plainly $\norm{\cdot}{}$ and the same notation will be used for the operator norm in $X$ or $Y$. For a (compact self-adjoint) linear operator, say~$G\colon X\to X$, we denote by~$s_{j}(G),\ j=1,2,\dots$ the non-increasing sequence of its singular numbers.
We reserve the notation~$s_{j} = s_{j}(H),\ j=1,2,\dots$ for the operator~$H:=A^\ast A$, the self-adjoint companion to the mapping~$A$. Furthermore, according to whether we study the inverse problem~(\ref{eq:Ys}) or the related direct problem~(\ref{eq:direct}), we shall denote elements by~$f$ or~$g$ ($f_{0}, g_{0}$ for the corresponding true elements). For two sequences $(a_n)$ and $(b_n)$ of real numbers, $a_n\asymp b_n$ means that $|a_n/b_n|$ is bounded away from zero and infinity, while $a_n\,\lesssim\,b_n$ means that $a_n/b_n$ is bounded from above. \subsection{Prior distribution and posterior contraction} \label{sec:contraction} We shall use priors $\Pi$ which are truncations of a Gaussian prior~$\mathcal N(0,\Lambda)$, for a self-adjoint, trace-class and positive definite covariance operator $\Lambda$. Such priors are characterized by the underlying covariance~$\Lambda$ and the truncation level~$k$. Below we shall use the notation $\Lambda=\Lambda^{f}$ for Gaussian priors on $f$ in the context of \eqref{eq:Ys} and $\Lambda=\Lambda^{g}$ for Gaussian priors on $g$ in the context of \eqref{eq:direct}. \begin{xmplno}[$\alpha$-regular prior] Given~$\alpha>0$, we call the prior~$\mathcal N(0, \Lambda)$ $\alpha$\emph{-regular}, if the singular numbers~$s_{j}(\Lambda)$ decay like~$s_{j}(\Lambda)\asymp j^{- (1 + 2\alpha)},\ j=1,2,\dots$ \end{xmplno} Let us fix a prior distribution~$\Pi$ on the unknown~$f$, and consider data~$Y^{n}$ generated from the model~(\ref{eq:Ys}) for a fixed true element $f_0\in X$, $Y^n\sim P^n_{f_0}$. We are interested in deriving rates of contraction of the posterior $\Pi(\cdot|Y^{n})$ around~$f_{0}$, in the small noise limit $n\to\infty$. In particular, we find sequences $\varepsilon_n\to0$ such that, for an arbitrary sequence $M_n\to\infty$, it holds that \begin{equation} \mathbb E_{0}\Pi\lr{f, \ \norm{f - f_{0}}{X} > M_{n}\varepsilon_{n}| Y^{n}}\to0\label{it:inverse}.
\end{equation} Here $\mathbb E_{0}$ denotes expectation with respect to $P^n_{f_0}$. One can also derive rates of posterior contraction around $A f_{0}$, that is, sequences $\delta_n\to0$, such that for arbitrary $M_n\to\infty$ \begin{equation} \mathbb E_{0}\Pi\lr{f, \ \norm{A(f - f_{0})}{Y} > M_{n}\delta_{n}| Y^{n}}\to0\label{it:direct}. \end{equation} Such rates $\delta_n$ and $\varepsilon_n$ will be called rates of contraction for the \emph{direct} and \emph{inverse} problem, respectively. We are going to derive rates of contraction for the inverse problem, by deriving rates of contraction for the direct problem and using the modulus of continuity as was proposed in~\cite{MR3757524}. These rates of contraction will be obtained by means of the \emph{squared posterior contraction}, a concept which will be introduced in detail in~\S~\ref{sec:direct}. \subsection{Relating operators in Hilbert space} \label{sec:relating-ops} As highlighted in the introduction we deal with several operators. In order to facilitate our analysis we need to relate these operators, and to this end we introduce the following concept. \begin{de} [index function] \label{de:index-noncomm} A function~$\rho\colon (0,\infty)\to (0,\infty)$ is called an \emph{index function} if it is continuous, non-decreasing, with~$\rho(0+)=0$. For an index function~$\rho$, we assign the companion~$\Theta_{\rho}(t) := \sqrt t \rho(t),\ t>0$. \end{de} The primary operator we deal with is the forward operator~$A:X\to Y$ which governs equation~(\ref{eq:Ys}). Its self-adjoint companion~$H= A^\ast A:X\to X$ will have the central role in our analysis. We mention the following identity: \begin{equation}\label{eq:a-asta12} \norm{A f}{Y} =\norm{\lr{A^\ast A}^{1/2}f}{X} = \norm{H^{1/2}f}{X}, \quad f\in X.
\end{equation} Furthermore, in order to obtain rates of contraction for inverse problems from rates of contraction for direct problems, we will need to link the underlying (untruncated) covariance operator~$\Lambda^{f}$ of the Gaussian prior for~$f$ to the operator~$H$. We will study two cases. Initially we shall assume that~$\Lambda^{f}$ and~$H$ commute. Precisely, we impose the following assumption: \begin{ass}[prior in scale] \label{ass:link-noncomm} There is an index function~$\chi$ such that $$ \Lambda^{f}= \chi^{2}(H). $$ \end{ass} This commutative case allows for a general analysis, but has limited applicability, as it may be hard to design a truncated Gaussian prior, because the singular basis of~$H$ (and hence $\Lambda^{f}$) may not be known. Instead, we may relax the commutativity assumption, and impose a corresponding link condition. \begin{ass} [prior linked to scale]\label{ass:prior-linked2scale-noncomm} There is some exponent~$a\geq 1/2$ such that $$ \norm{\lr{\Lambda^{f}}^{1/2}f}{X} \asymp \norm{H^{a}f}{X},\quad f\in X. $$ \end{ass} The requirement~$a\geq 1/2$ has a natural reason. We need to link the operators~$A$ and~$\Lambda^{f}$ in several places, and by virtue of~\eqref{eq:a-asta12} this can be done via $H^{1/2}$. Therefore, the case $a=1/2$ needs to be covered in the assumption. We mention that within the non-commuting case we confine ourselves to power-type links. Also, notice that~$\chi(t) = t^a$ in Assumption~\ref{ass:link-noncomm} yields a special instance of Assumption~\ref{ass:prior-linked2scale-noncomm}. One may extend to more general index functions, but for the sake of simplicity we do not pursue this direction here. Both Assumptions~\ref{ass:link-noncomm} and~\ref{ass:prior-linked2scale-noncomm} have an impact on the mapping properties of~$A$. First, the mappings~$H$ and~$\Lambda^{f}$ share the same null spaces (kernels).
Also, since the covariance operator~$\Lambda^{f}$, being trace-class, is compact, this compactness transfers to~$H$, and a fortiori to~$A$. Thus, under these links the compactness of~$A$ cannot be avoided, while its injectivity can be relaxed by factoring out the common null spaces. \begin{rem} In our analysis the self-adjoint companion~$H$ to the operator~$A$ plays the role of the central operator. When studying contraction rates for the inverse problem \eqref{eq:Ys}, smoothness will be given with respect to it. Instead, one might give this role to the operator~$\Lambda^{f}$, and consider smoothness with respect to this operator. The analysis would be similar, and some results in this direction are given in~\cite[Sec. 5]{MR4116718}. \end{rem} \subsection{Smoothness concept} \label{sec:smoothness} For the subsequent analysis it will be convenient to introduce the smoothness of an element~$h\in Z$ in a Hilbert space~$Z$, with respect to some injective positive self-adjoint operator, say~$G \colon Z \to Z$, in terms of \emph{general source conditions}. \begin{de} [Source set]\label{def:gen-source} Given a positive-definite, self-adjoint operator~$G$ and an index function~$\rho$, the set \begin{equation} \label{eq:gen-source} G_{\rho}:= \set{h, \quad h= \rho(G)v,\quad \norm{v}{Z}\leq 1} \end{equation} is called a source set. \end{de} \begin{rem} The sets~$G_{\rho}$ from above are ellipsoids in the Hilbert space~$Z$. The element~$v$ is often called \emph{source element}, and the representation~$h= \rho(G)v$ is called a~\emph{source-wise representation}. We emphasize that elements in~$G_\rho$ are in the range of~$\rho(G)$, such that for the subsequent analysis Douglas' Range Inclusion Theorem, see its formulation in~\cite{MR3985479}, will be used several times. It is seen from~\cite{MR2384768} that, given the injective operator~$G$, each element~$h\in Z$ has a source-wise representation for some index function~$\rho$.
\end{rem} Below, we shall use this concept for specific operators, and specific functions. For instance, the set \begin{equation} \label{eq:lpsi} \Lambda^g_\psi:= \lr{\Lambda^{g}}_\psi\subset Y \end{equation} will correspond to a source set for the operator~$G:= \Lambda^{g}\colon Y \to Y$, and the index function~$\psi$. In some cases we will assume that the index function~$\psi$ is \emph{operator concave}. The formal definition is given in Section \ref{sec:proofs}, but we refer to~\cite[Chapt.~X]{MR1477662} for a comprehensive discussion. Here we mention that a power-type index function~$\psi(t) = t^{a}$ with~$a>0$ is concave exactly if it is operator concave, hence~$0 < a \leq 1$. \begin{xmplno}[Sobolev-type smoothness] \label{xmpl:es-beta} Let~$u_{1},u_{2},\dots$ be the eigenbasis of the compact self-adjoint operator~$G$, arranged such that the corresponding eigenvalues are non-increasing; this example can be considered either in $X$ or $Y$. Given some~$\beta>0$ we consider the \emph{Sobolev-type} ellipsoid \begin{equation} \label{eq:sobolev-ell} \eS \beta(R) := \set{h,\quad \sum_{j=1}^{\infty} j^{2\beta}\abs{\scalar{h}{u_{j}}}^{2}\leq R^{2}}. \end{equation} Now, suppose that the singular values of~$G$ decay as~$s_j(G) \asymp j^{-\gamma}$ for some~$\gamma>0$. Then it is a routine matter to check that~$h\in \eS \beta$ yields that~$h\in G_{\rho}$ for an index function~$\rho(t) \propto t^{\beta/\gamma},\ t\geq 0$, see~\cite[Prop.~2]{MR3985479}. Similarly, the converse holds true, and there is thus a one-to-one correspondence between Sobolev-type ellipsoids and power-type source-wise representations for such operators~$G$. \end{xmplno} \subsection{Main result} \label{sec:main-result} We aim at deriving posterior contraction rates for the inverse problem \eqref{eq:Ys}, from contraction rates for the corresponding direct problem \eqref{eq:direct} by using the modulus of continuity, for truncated Gaussian priors.
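For orientation, we recall here informally the modulus of continuity in the spirit of~\cite{MR3757524}, which for a set~$S\subset X$ and~$\delta>0$ measures how inverting~$A$ inflates distances around~$f_{0}$:
\[
\omega_{f_0}\lr{H^{-1/2},S,\delta} = \sup\set{\norm{f - f_{0}}{X},\quad f\in S,\quad \norm{A(f - f_{0})}{Y}\leq \delta},
\]
where, by~\eqref{eq:a-asta12}, the constraint may equivalently be written as~$\norm{H^{1/2}(f - f_{0})}{X}\leq\delta$; the formal treatment is given in Section~\ref{sec:inverseP}.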
The Gaussian prior on $f$, truncated at level~$k_{n}$, has all its mass on a finite-dimensional subspace~$X_{\kn}$, and so does the posterior through the linear model~\eqref{eq:Ys}. The following result links the rates of posterior contraction corresponding to the inverse problem~\eqref{eq:Ys} and the direct problem~\eqref{eq:direct}. It is an immediate corollary of~\cite[Theorem 2.1]{MR3757524}. \begin{prop}\label{prop:ks} Assume we put a Gaussian prior on $f$, truncated at level $k_n$. Let $\delta_n\to0$ be a rate of contraction for the direct problem~\eqref{eq:direct} around $g_0=A f_0\in Y$, for some $f_0\in X$. Then $\varepsilon_n:=\omega_{f_0}(H^{-1/2},X_{\kn},\delta_n)$, where $H=A^\ast A$, is a rate of contraction for the inverse problem~\eqref{eq:Ys}, at~$f_0$. \end{prop} We can thus obtain contraction rates $\varepsilon_n$ for the inverse problem by obtaining rates $\delta_n$ for the direct problem~\eqref{eq:direct}, and bounds for the inherent modulus of continuity for the inverse problem. The main result of the study implements this program in a general setting with a specific choice of the truncation level~$k_{n}$. \begin{thm}\label{thm:direct-inverse} Consider the inverse problem \eqref{eq:Ys}, recall $H=A^\ast A$, and suppose that~$f_{0}$ has smoothness~$H_\varphi$. Assume we put a truncated Gaussian prior $\mathcal{N}(0, P_{k_{n}}\Lambda^{f} P_{k_{n}})$ on $f$, with $\Lambda^{f}$ a self-adjoint, positive-definite, trace-class, linear operator in $X$, and $P_{k_{n}}$ the singular projection of $\Lambda^{f}$. We specify the related (covariance) operator~$\Lambda^{g}=A\Lambda^{f} A^\ast$.
Under \begin{enumerate} \item[-] Assumption \ref{ass:link-noncomm}, or \item[-] Assumption~\ref{ass:prior-linked2scale-noncomm} with $\mu\leq a$, \end{enumerate} where for the latter assumption we specify~$\chi(t) = t^a$, and~$\varphi(t) =t^{\mu}$, consider the index function \begin{equation} \label{eq:psi-def} \psi(t) =\Theta_{\varphi}(\lr{\Theta_{\chi}^{2}}^{-1}(t)), \quad t>0. \end{equation} For the choice~$k_{n}$ according to \begin{equation}\label{eq:kn-def} k_{n} := \max\set{j,\quad \psi^{2}\lr{s_{j}(\Lambda^{g})} > \max\set{\psi^2\lr{\frac 1 n },\frac j n}}, \end{equation} let $\delta_n$ be given as \begin{equation}\label{eq:spc-main-thm} \delta_n:= C \max\set{\psi^{2}\lr{\frac 1 n},\frac {k_{n}} n} \end{equation} for some constant~$C$. Then the posterior contracts around $f_{0}$ at a rate \[ \varepsilon_n\asymp \varphi(\Theta_\varphi^{-1}(\delta_n)),\quad n\to\infty. \] \end{thm} The strategy for proving Theorem~\ref{thm:direct-inverse} is loosely outlined at the end of Section~\ref{sec:outline}. A main component of both the result and its proof is the fact that the truncation levels~$k_{n}$, as given in~\eqref{eq:kn-def}, optimize both the rates $\delta_n$ for the direct problem as well as the bounds on the modulus of continuity. These considerations can be found in \S~\ref{sec:relating}, where we establish the steps for proving Theorem~\ref{thm:direct-inverse}. \section{Direct signal estimation under truncated Gaussian priors} \label{sec:direct} Here we consider the Bayesian approach to signal estimation under white noise in the space $Y$, that is, the model \begin{equation} \label{eq:noise-model} Y^{n} = g + \frac 1 {\sqrt n} \xi, \end{equation} where~$\xi$ is Gaussian white noise in $Y$. For linear Gaussian models with Gaussian priors, it is convenient to describe posterior contraction in terms of the \emph{squared posterior contraction (SPC)}, which, by Chebyshev's inequality, is the square of \emph{a} rate of contraction.
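Indeed, by Markov's inequality applied under the posterior, and taking the outer expectation, a bound of order~$\delta_{n}^{2}$ on the quantity defined in~\eqref{eq:spc-def} below yields, for arbitrary~$M_{n}\to\infty$,
\[
\mathbb E^{g_{0}} \Pi\lr{g,\ \norm{g - g_{0}}{Y} > M_{n}\delta_{n}| Y^{n}} \leq \frac{\mathbb E^{g_{0}} \mathbb E^{Y^{n}}_{k}\norm{g_{0} - g}{Y}^{2}}{M_{n}^{2}\delta_{n}^{2}} \,\lesssim\, \frac 1 {M_{n}^{2}}\to 0,
\]
so that~$\delta_{n}$ is indeed a rate of contraction in the sense of~\eqref{it:direct}.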
For an element~$g_{0}\in Y$, given data~$Y^{n}$, and a truncation level~$k$ for the (Gaussian) prior, we assign \begin{equation}\label{eq:spc-def} \operatorname{SPC}(k,g_{0}) := \mathbb E^{g_{0}} \mathbb E^{Y^{n}}_k \norm{g_{0} - g}{Y}^2, \end{equation} where the inner expectation is with respect to the (Gaussian) posterior distribution, whereas the outer expectation concerns the sampling distribution, given the element~$g_{0}$, that is, the data generating distribution. The $\operatorname{SPC}$ for (regularized) untruncated Gaussian priors in the context of (linear) inverse problems was analyzed in the previous study~\cite{MR3815105}. Here we develop a similar approach for direct problems with truncated Gaussian priors, and we will exhibit some features which are specific to the latter. Having fixed a class~$\mathcal F\subset Y$, and given the truncation level~$k$, we assign \begin{equation} \operatorname{SPC}^{\mathcal F}(k) := \sup_{g_{0}\in \mathcal F} \operatorname{SPC}(k,g_{0}),\label{ali:spc-uniform} \end{equation} which is a squared rate of contraction, holding uniformly over the class~$\mathcal F$. \subsection{Native and inherited Gaussian priors} In its simplest form, a centered truncated Gaussian prior for~$g$ can be defined using some orthonormal system, say~$y_{1},y_{2},\dots$ in~$Y$, independent and identically distributed standard Gaussian random variables~$\gamma_{1},\gamma_{2},\dots$, and a square summable positive sequence~$\sigma_{1},\sigma_{2},\dots$, as \begin{equation} \label{eq:prior-native} \Pi_{k}^{Y} :=\mathcal{L}\Big( \sum_{j=1}^{k}\sigma_{j} \gamma_{j}y_{j}\Big). \end{equation} The square summability of the sequence~$\sigma_{j},\ j=1,2,\dots$ ensures that the prior~$\Pi_{k}^{Y}$ is the (singular) projection of an infinite dimensional prior supported in $Y$, having finite-trace covariance operator~$\Lambda^{g} = \sum_{j=1}^{\infty} \sigma_{j}^{2} y_{j}\otimes y_{j}$.
Hence, the prior~$\Pi_{k}^{Y}$ has covariance~$C_{k}=Q_{k}\Lambda^{g} Q_{k}$, where $Q_{k}$ is the orthogonal projection onto~$\operatorname{span}\set{y_{1},\dots,y_{k}}$. We shall call this a \emph{native (truncated) prior} for~$g$. On the other hand, a centered finite dimensional Gaussian prior for~$g$ may be defined using a linear transformation of some native truncated prior~$\Pi_{k}^{X}$ for~$f\in X$, defined along some orthonormal system, say~$x_{1},x_{2},\dots$, and with corresponding projections~$P_{k}$ onto $X_{k}:=\operatorname{span}\set{x_{1},\dots,x_{k}}$, thus having covariance~$P_{k}\Lambda^{f} P_{k}$. The prior~$\Pi_{k}^{Y}$ for~$g\in Y$ is then obtained as the push-forward~$T_{\sharp}\lr{\Pi_{k}^{X}}$ under some linear mapping~$T \colon X \to Y$, and is supported on~$T X_{k}$. The Gaussian prior $\Pi_{k}^{Y}$ will thus have covariance~$C_{k} = T P_{k}\Lambda^{f} P_{k} T^{\ast}$, and we shall call this an \emph{inherited (truncated) prior}. Inherited priors are relevant for example when studying the direct problem \eqref{eq:direct} associated to the inverse problem~\eqref{eq:Ys}. When using such an inherited prior, we will quantify the relation between the mapping~$T$ and the covariance operator~$\Lambda^{f}$ driving~$\Pi_{k}^{X}$, in order to control the effect of~$C_{k}$. In this context we shall measure the smoothness of the truth~$g_0$ relative to the covariance operator~$\Lambda^{g}$ of the underlying infinite dimensional Gaussian prior on $g$, and we shall assume the smoothness condition~$g_{0}\in\Lambda^g_\psi$ for some index function~$\psi$, see~(\ref{eq:lpsi}). For inherited priors, the operator~$\Lambda^{g}$ will be given as the covariance of the push-forward of the underlying infinite dimensional prior on~$f$,~$\Lambda^{g}= T \Lambda^{f} T^{\ast}$.
We stress that for inherited priors in general we cannot ensure that the covariance $C_k$ corresponds to the singular projection of $\Lambda^{g}$, that is, that~$ T P_{k} \Lambda^{f} P_{k} T^{\ast}=Q_{k}\Lambda^{g} Q_{k}$, or equivalently that $T X_{k}$ coincides with the singular spaces of $\Lambda^{g}$; see the next subsection for details. Nevertheless, we still say that $C_k$ is truncated at level $k$, since it has rank $k$. \subsection{Basic SPC bound} We shall start by proving a basic bound on the squared posterior contraction as given in~\eqref{ali:spc-uniform} in the white noise model~\eqref{eq:noise-model}, for both native and inherited truncated Gaussian priors~$\mathcal N(0,C_{k})$, under general smoothness on the truth. When treating inherited priors, it will be important that the projections~$P_{k}$ in the corresponding $C_{k}$ are along the singular spaces of~$\Lambda^{f}$, such that~$P_{k} \Lambda^{f} P_{k} = \lr{\Lambda^{f}}^{1/2} P_{k} \lr{\Lambda^{f}}^{1/2}$, and similarly,~$C_{k} = T \lr{\Lambda^{f}}^{1/2} P_{k} \lr{\Lambda^{f}}^{1/2} T^{\ast}$. If $\Lambda^{f}$ and $T^\ast T$ commute, then we will show that $C_{k}$ coincides with the singular projection of $\Lambda^{g}$, and we can bound the SPC as in the native case. In the non-commuting case we cannot ensure that $C_{k}$ is the singular projection of $\Lambda^{g}$. We assign the intrinsic mapping~$H:= T^{\ast}T$, and work under Assumption \ref{ass:prior-linked2scale-noncomm}, linking the operators $\Lambda^{f}$ and $T$ via~$H$. Notice that our treatment in Section \ref{sec:direct} is standalone and does not necessarily correspond to an inverse problem; however, with a slight abuse of notation we can let~$H= T^{\ast}T$ and assume the link condition described in Assumption \ref{ass:prior-linked2scale-noncomm}.
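For orientation, the commuting case can be made explicit. If~$\Lambda^{f}$ and~$H=T^{\ast}T$ share the eigenbasis~$x_{1},x_{2},\dots$, say~$\Lambda^{f}x_{j} = \lambda_{j}x_{j}$ and~$H x_{j} = s_{j}x_{j}$ with~$s_{j}>0$, then the elements~$\tilde y_{j} := T x_{j}/\sqrt{s_{j}}$ are orthonormal in~$Y$, since~$\scalar{Tx_{i}}{Tx_{j}} = \scalar{x_{i}}{Hx_{j}} = s_{j}\delta_{ij}$, and a direct calculation gives
\[
C_{k} = T P_{k}\Lambda^{f}P_{k}T^{\ast} = \sum_{j=1}^{k} \lambda_{j}s_{j}\, \tilde y_{j}\otimes \tilde y_{j}, \qquad \Lambda^{g}\tilde y_{j} = T\Lambda^{f}T^{\ast}\tilde y_{j} = \lambda_{j}s_{j}\,\tilde y_{j}.
\]
Thus, provided the products~$\lambda_{j}s_{j}$ are arranged non-increasingly, $C_{k}$ is indeed the singular projection of~$\Lambda^{g}$ at level~$k$.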
Within this (finite dimensional) Gaussian-Gaussian conjugate setting, given the centered Gaussian prior with covariance~$C_{k}$, the posterior is also Gaussian with mean and covariance \begin{align} \hat g_{k} &= \lr{C_{k} + \frac 1 n}^{-1} C_{k} Y^{n},\label{ali:mean-ck}\\ C_{post,k} &= \frac 1 n \lr{C_{k} + \frac 1 n}^{-1}C_{k}.\label{ali:cov-ck} \end{align} In alignment with~\cite[Eq.~(3)]{MR3815105}, for any given~$g_0$ and truncation level~$k$, the~$\operatorname{SPC}$ is decomposed as \begin{equation} \label{eq:spc-sum} \operatorname{SPC}(k,g_{0}) = \norm{ g_{0} - \mathbb E\hat g_{k}}{Y}^{2} + \frac 1 n \mathbb E\norm{\lr{C_{k} + \frac 1 n}^{-1} C_{k}\xi}{}^{2} + \operatorname{tr}\left[C_{post,k}\right], \end{equation} where the first summand is the (squared) bias for estimating~$g_{0}$ by using the posterior mean~$\hat g_{k}$, the second summand is the related estimation variance, whereas the last summand constitutes the posterior spread. The proof of the next result is based on this decomposition. \begin{prop} \label{prop:error-direct-unified} Consider the white noise model~(\ref{eq:noise-model}) with a Gaussian prior~$\mathcal N(0, C_{k})$, and with underlying truth~$g_{0}\in\Lambda^g_\psi$ for some index function~$\psi$ (see~(\ref{eq:lpsi})), where either \begin{enumerate} \item\label{it:case1-prop} $C_{k}=Q_k\Lambda^{g} Q_k$ (native), or \item\label{it:case2-prop} $C_{k}=T P_k\Lambda^{f} P_kT^\ast$ (inherited with non-commuting $\Lambda^{f}, T^\ast T$, linked via Assumption \ref{ass:prior-linked2scale-noncomm} for $H=T^\ast T$), \end{enumerate} and where in the latter case we let~$\Lambda^{g}=T \Lambda^{f} T^\ast$, and we assume that the function~$\psi$ is operator concave. There is a constant~$\ca \geq 2$ such that for any truncation level~$k$ the squared posterior contraction is bounded as \begin{equation} \label{eq:spc-bound} \operatorname{SPC}^{\Lambda^g_\psi}(k) \leq \ca\lr{\psi^{2}{\lr{\frac 1 { n}}}+ \psi^{2}\lr{s_{k+1}(\Lambda^{g})} + \frac{k}{n}}.
\end{equation} \end{prop} \subsection{Optimized SPC bound} \label{sec:optimized-spc} We aim at optimizing the general bound~\eqref{eq:spc-bound}. This bound consists of two $k$-dependent terms and a summand $$ \max\set{\psi^{2}\lr{\frac 1 {n}}, \frac 1 {n^{2}}}, $$ which is independent of the truncation level~$k$. As can be seen in the proof of Proposition~\ref{prop:error-direct-unified}, namely~\eqref{eq:spc-bound-2terms}, this summand is the result of bounding the \emph{regularization bias}, inherent in Bayesian problems with (untruncated) Gaussian priors. Hence the best (provable, by bounding the $\operatorname{SPC}$ as above) contraction rate will be bounded below (in order) by this regularization bias. To better understand the nature of the $k$-dependent terms in the bound~\eqref{eq:spc-bound}, we recall the following result from statistical inference. The minimax risk over the class~$\Lambda^g_\psi$ is given as $$ R(n):= \inf_{\hat g}\sup_{g\in \Lambda^g_\psi} \mathbb E\norm{g - \hat g}{}^2, $$ where the infimum runs over all estimators using data~$Y^n$. Similarly, let $$ R_T(n):= \inf_{\hat g_T}\sup_{g\in \Lambda^g_\psi} \mathbb E\norm{g - \hat g_T}{}^2, $$ where the infimum is taken over all (linear) truncated series estimators. Since the class $\Lambda^g_\psi$ constitutes an ellipsoid, the following result holds. \begin{prop}[{\cite[Prop.~8]{MR1062717}}]\label{prop:donohoetal} We have that $$ R_T(n) \leq 2.22 R(n). $$ In particular, $$ R_T(n) = \inf_{k} \lr{\psi^{2}\lr{s_{k+1}(\Lambda^{g})} +k/n} \leq 2.22 R(n). $$ \end{prop} We are ready to optimize the bound established in Proposition~\ref{prop:error-direct-unified}, while Proposition~\ref{prop:donohoetal} will enable the comparison of our optimized bound to the minimax rate.
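The two~$k$-dependent terms in~\eqref{eq:spc-bound} mirror the classical bias--variance decomposition behind~$R_T(n)$: for the truncated series estimator~$\hat g_{T} := Q_{k}Y^{n}$ and any~$g\in\Lambda^g_\psi$ we have
\[
\mathbb E\norm{g - Q_{k}Y^{n}}{}^{2} = \norm{(I - Q_{k})g}{}^{2} + \frac k n \leq \psi^{2}\lr{s_{k+1}(\Lambda^{g})} + \frac k n,
\]
since on the orthogonal complement of the first~$k$ singular directions of~$\Lambda^{g}$ the operator~$\psi(\Lambda^{g})$ has norm~$\psi(s_{k+1}(\Lambda^{g}))$.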
\begin{thm}\label{thm:spc-bound} Consider the white noise model~(\ref{eq:noise-model}) with a Gaussian prior~$\mathcal N(0,C_{k})$, and with underlying truth~$g_0\in\Lambda^g_\psi$ as in~(\ref{eq:lpsi}), where either \begin{enumerate} \item\label{it:case1} $C_{k}=Q_k\Lambda^{g} Q_k$ (native), or \item\label{it:case2} $C_{k}=T P_k\Lambda^{f} P_kT^\ast$ (inherited with non-commuting $\Lambda^{f}, T^\ast T$, linked via Assumption \ref{ass:prior-linked2scale-noncomm} for $H=T^\ast T$), \end{enumerate} and where in the latter case we let~$\Lambda^{g}=T \Lambda^{f} T^\ast$, and we assume that the function~$\psi$ is operator concave. We assign $k=k_{n}$ as in~(\ref{eq:kn-def}). Then, for the constant~$\ca$ from Proposition~\ref{prop:error-direct-unified}, we have \begin{equation}\label{eq:spc-bound-thm} \operatorname{SPC}^{\Lambda^g_\psi}(k_{n}) \leq 4\ca \max\set{\psi^{2}\lr{\frac 1 n},\frac {k_{n}} n}. \end{equation} If the regularization bias in~\eqref{eq:spc-bound-thm} is of lower order, then the obtained contraction rate over the class $\Lambda^{g}_\psi$ is order optimal. \end{thm} \begin{rem}\label{rem:kn-infinite} We emphasize that necessarily~$k_{n}\to\infty$ as~$n\to\infty$, because otherwise, if~$k_{n} < K<\infty$, then, from~\eqref{eq:kn-def} we find that $$ \psi^2(s_{K}(\Lambda^{g}))\leq \psi^2(s_{k_{n} +1}(\Lambda^{g})) \leq \max\set{\psi^2(1/n), \frac{k_{n} +1}{n}}\leq \max\set{\psi^2(1/n), \frac{K}{n}}\to 0, $$ hence by the properties of index functions we have~$s_{K}(\Lambda^{g})=0$, which is a contradiction. \end{rem} \begin{rem}\label{rem:truncation-dominating} The case that~$k_{n}/n < \psi^2(1/n)$ corresponds to the situation when the regularization bias dominates the overall~$\operatorname{SPC}$. In this case the truncation level is obtained from the relation~$k_{n} = \max\set{j, \quad s_{j}(\Lambda^{g}) > 1/n }$, and this may be significantly smaller than the truncation level obtained in the case that the regularization bias is dominated.
\end{rem} It is thus interesting to characterize those cases when the regularization bias in~\eqref{eq:spc-bound-thm} is of lower order. We shall provide a characterization, but for this we need an additional assumption. \begin{ass}[control of decay of singular numbers]\label{ass:decay-rate} There is a constant~$\cc~>~1$ with \begin{equation} \label{eq:decay-rate} \sup_{j\in\mathbb N}\frac{s_{j}(\Lambda^{g})}{s_{j+1}(\Lambda^{g})} \leq \cc. \end{equation} \end{ass} This assumption does not hold for operators $\Lambda^{g}$ with singular values decaying faster than exponentially. \begin{prop}\label{prop:alphalessbeta} Under Assumption~\ref{ass:decay-rate}, the term~$k_{n}/n$ in~\eqref{eq:spc-bound-thm} dominates the overall bound for the $\operatorname{SPC}$ if and only if there is a constant~$1 \leq\cd<\infty$ such that \begin{equation} \label{eq:alphabeta} \psi^{2}\lr{s_{j}(\Lambda^{g})} \leq \cd j s_{j}(\Lambda^{g}),\quad j\in\mathbb N. \end{equation} \end{prop} In particular, Proposition~\ref{prop:alphalessbeta} applies when the covariance operator $\Lambda^{g}$ of the underlying infinite dimensional prior on $g$ has a power type decay of its singular numbers. Thus, in such cases, under Assumption~\ref{ass:decay-rate} the truncation level~$k_{n}$ yields order optimal contraction exactly if~\eqref{eq:alphabeta} holds. \begin{xmplno}[{$\alpha$-regular prior and Sobolev smoothness}] For a native~$\alpha$-regular prior defined in $Y$ (recall the example in~\S~\ref{sec:contraction}), with (untruncated) covariance operator~$\Lambda^{g}$, the Sobolev type smoothness of the underlying truth~$g_{0}\in \eS {\beta}$ is expressed through the index function~$\psi(t)= t^{\beta/(1 + 2\alpha)},\ t>0$ (recall the example in~\S~\ref{sec:smoothness}).
For this function we see that for~$\alpha \leq \beta$ it holds that $$ \psi^{2}(s_{j}(\Lambda^{g})) \asymp j^{-2\beta} \,\lesssim\, j \cdot j^{-(1 + 2 \alpha)}, $$ so that both Assumption~\ref{ass:decay-rate} and condition~(\ref{eq:alphabeta}) hold, and Proposition~\ref{prop:alphalessbeta} applies. The truncation level~$k_{n}$ is then given from balancing~$\frac k n \asymp k^{-2\beta}$, which results in~$k_{n} \asymp n^{1/(1 + 2\beta)}$, yielding a bound for the~$\operatorname{SPC}$ of the form $$ \operatorname{SPC}^{\Lambda^g_\psi}(k_{n}) \,\lesssim\, \frac{k_{n}}{n} \asymp n^{- \frac{2\beta}{1 + 2\beta}}, $$ which is known to be minimax for direct estimation. The same bounds are valid also for inherited $\alpha$-regular priors with commuting operators $\Lambda^{f}, T^\ast T$. In the non-commuting case, provided $H=T^\ast T$ and $\Lambda^{f}$ satisfy Assumption \ref{ass:prior-linked2scale-noncomm}, the same bounds hold for $\alpha\leq\beta\leq1+2\alpha$, where the additional restriction on $\beta$ is needed in order to ensure that $\psi$ is operator concave. \end{xmplno} \subsection{Interlude} \label{sec:interlude} Frequentist convergence rates of the posterior distribution under Gaussian priors in the Gaussian white noise model have been considered for example in~\cite{MR1790008} (rates for the posterior mean under Sobolev-type smoothness),~\cite{MR1983541} (contraction rates under Sobolev-type smoothness), and~\cite{MR2418663} (general contraction theory). We have given a detailed discussion here on the one hand because, as explained in Section~\ref{sec:outline}, we are interested in general smoothness assumptions, and on the other hand because we want to emphasize the specifics when using truncated (Gaussian) priors. Theorem~\ref{thm:spc-bound} highlights the general nature of our bounds for the squared posterior contraction ($\operatorname{SPC}$), in terms of both the considered prior covariances, and the smoothness of the truth, expressed using source sets.
In our analysis we distinguish two cases: case~\ref{it:case1}, which uses native priors, and which is entirely based on the singular value decomposition of the underlying covariance operator, and case~\ref{it:case2}, which refers to priors inherited from external native priors using some linear mapping, and which is such that the inherited finite dimensional prior is no longer supported in a singular subspace of the covariance operator of the underlying infinite dimensional inherited prior. The latter case, which is relevant when studying the direct problem \eqref{eq:direct} associated to the inverse problem~\eqref{eq:Ys}, can be treated provided that the linear mapping is appropriately linked to the external native prior's covariance. In particular, this link, captured in Assumption~\ref{ass:prior-linked2scale-noncomm}, imposes a minimum smoothness on the external native prior. Special emphasis is put on the description of the optimal truncation level~$k_{n}$, made explicit in~(\ref{eq:kn-def}). It is seen that in general this level will depend on the underlying smoothness as well as on the noise level~$\frac 1 {\sqrt n}$, and that, in the case that the regularization bias is dominated, it is the same as the truncation level in (minimax) statistical estimation under white noise when using truncated series estimators, as expressed in Proposition~\ref{prop:donohoetal}. Furthermore, the obtained upper bound on the contraction rate involves a truncation-independent term, the regularization bias, and thus in Proposition~\ref{prop:alphalessbeta} we give a characterization to determine whether this term will be of lower order compared to the $k$-dependent terms, or whether it will dominate. In the former case the obtained rates of contraction are minimax, while in the latter case they are suboptimal\footnote{We stress that here suboptimality refers to the \emph{obtained} rates, which are upper bounds for the rate of contraction.
Lower bounds can in principle be obtained using the theory developed in \cite{IC08}, which is based on the concentration function of the Gaussian prior rather than bounding the SPC. Although this direction is very interesting, we presently do not pursue it.}. As already mentioned in Remark~\ref{rem:truncation-dominating}, the truncation level according to~\eqref{eq:kn-def} will be smaller for dominating regularization bias than for the case when the regularization bias is dominated. We close this discussion with the following observation. In studies dealing with scaled infinite-dimensional Gaussian priors, a typical 'saturation effect' is observed: in order to achieve minimax-optimal rates of contraction the prior smoothness must not be much lower than the regularity of the underlying truth, see~\cite{MR2906881} and~\cite{MR3044507}. The contrary is true for truncated priors: when applying Proposition~\ref{prop:alphalessbeta} in specific examples later on, it will be transparent that the prior regularity must be lower than the regularity of the underlying truth; see the preceding example as well. This has also been observed in~\cite{MR2418663}, and it can be explained by the fact that truncation of a Gaussian prior increases its regularity, which can correct for an under-smoothing but not for an over-smoothing prior. In the case that the truncation of the Gaussian prior is not along some singular subspace, a limitation for the considered smoothness occurs due to the nature of the linking Assumption~\ref{ass:prior-linked2scale-noncomm}. This can be seen from the final example in~\S~\ref{sec:optimized-spc}.\ \section{Modulus of continuity for inverse problems} \label{sec:inverseP} We next consider the linear mapping~$A\colon X\to Y$ from~\eqref{eq:Ys} and shall introduce the modulus of continuity for controlling its inversion on a subset $S$ (often called \emph{conditional stability}).
We shall do this for $S:= \mathcal{X}_{k}$, where~$\mathcal{X}_{k}\subset X$ is a $k$-dimensional subspace. We derive bounds on the modulus of continuity which are known to be sharp in many cases. \subsection{Modulus of continuity} \label{sec:modulus} Similarly to the recent study~\cite{MR3757524}, but restricting to normed spaces, we proceed as follows. Given the operator $A$, for a class~$S \subset X$ and a fixed element $f_0\in X$, we let \begin{equation} \label{eq:mod-k-1} \omega_{f_0}(A^{-1},S,\delta):= \sup\set{\norm{f - f_{0}}{X},\ f\in S,\ \norm{A(f -f_{0})}{Y}\leq \delta},\ \delta>0, \end{equation} be the modulus of continuity function. We stress that this modulus function controls the deviation around the element~$f_{0}$, and hence it is local. Recall the self-adjoint companion of $A$ introduced in §~\ref{sec:relating-ops}, $H=A^\ast A$. It is evident that \begin{align} \omega_{f_0}(A^{-1},S,\delta) &= \omega_{f_0}(H^{-1/2}, S,\delta ),\quad \delta>0,\label{eq:omega-A-H} \end{align} hence we shall confine the subsequent analysis to the operator~$H$. \subsection{Bounding the modulus of continuity} \label{sec:main-bound} When bounding the modulus of continuity for the inversion of an operator around an element $f_0\in X$, it is convenient to express the smoothness of $f_0$ \emph{relative to that particular operator}. Precisely, in the context of the inverse problem~(\ref{eq:Ys}), we shall measure the smoothness \emph{relative to the operator~$H$}, the companion of~$A$, and we shall assume that~$f_{0}\in H_\varphi$ for some index function~$\varphi$, see~§~\ref{sec:smoothness} where source conditions were introduced. The control of the modulus of continuity is based on several assumptions, relating a finite dimensional subspace~$\mathcal{X}_{k}\subset X$ to the operator~$H$ as well as to the target function~$f_{0}$. We denote by $P_k$ the orthogonal projection of $X$ onto the subspace $\mathcal{X}_{k}$.
Furthermore, recall that we denote by~$s_{k}:= s_{k}(H)$, the $k$-th singular number of the (compact) operator~$H$. \begin{de} [degree of approximation]\label{de:degree-approximation} Let~$K\colon X\to Y$ be a (compact) operator. Given a finite dimensional subspace~$\mathcal{X}_{k}\subset X$ we assign $$ \varrho(K,\mathcal{X}_{k}):= \norm{K(I - P_{k})}{X\to Y} $$ the degree of approximation of the subspace~$\mathcal{X}_{k}$ for the operator~$K$. \end{de} \begin{de} [modulus of injectivity]\label{de:modulus-injectivity} Let~$K\colon X\to Y$ be a (compact) operator. Given a finite dimensional subspace~$\mathcal{X}_{k}\subset X$ we assign $$ j(K,\mathcal{X}_{k}) := \inf_{h\in \mathcal{X}_{k}\setminus\{0\}}\frac{\norm{K h }{Y}}{\norm{h}{X}}, $$ the modulus of injectivity, which quantifies the invertibility of the operator $K$ restricted to the subspace $\mathcal{X}_{k}$. \end{de} We mention here that the last two concepts are interesting for sequences of increasing subspaces $\mathcal{X}_{k}$. Taking $K=H^{1/2}:X\to X$, the quantities $\varrho(H^{1/2},\mathcal{X}_{k})$ and~$j(H^{1/2},\mathcal{X}_{k})$ shall allow us to quantify the impact of the choice $S=\mathcal{X}_{k}$, when bounding the modulus of continuity $\omega_{f_0}(H^{-1/2}, S,\delta )$. \begin{rem} The above~$\varrho(H^{1/2},\mathcal{X}_{k})$ relates to the \emph{$k$-th Kolmogorov number}, while~$j(H^{1/2},\mathcal{X}_{k})$ relates to the \emph{$k$-th Bernstein number}, both of which are well studied quantities in approximation theory, see~\cite{MR774404}. \end{rem} When $\mathcal{X}_{k}$ is the $k$-th singular subspace of $H$, then it can be seen that $\varrho(H^{1/2},\mathcal{X}_{k})=s_{k+1}^{1/2}$, $j(H^{1/2},\mathcal{X}_{k})=s_{k}^{1/2}$ and $\norm{(I-P_{k})\varphi(H)}{}= \varphi(s_{k+1}),$ for any index function $\varphi$.
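These identities can be verified directly from the spectral decomposition; the following sketch (with $\lr{u_j}_{j\in\mathbb N}$ denoting an eigenbasis of $H$) records the computation.

```latex
% Write H = \sum_{j} s_{j}\langle\cdot, u_{j}\rangle u_{j} and let
% \mathcal{X}_{k} = \operatorname{span}\{u_{1},\dots,u_{k}\}, so that P_{k} is diagonal as well.
% Degree of approximation: H^{1/2}(I - P_{k}) acts with factors s_{j}^{1/2} for j>k, hence
\varrho(H^{1/2},\mathcal{X}_{k}) = \sup_{j>k} s_{j}^{1/2} = s_{k+1}^{1/2}.
% Modulus of injectivity: for h = \sum_{j\leq k} c_{j}u_{j} we have
\norm{H^{1/2}h}{X}^{2} = \sum_{j\leq k} s_{j}c_{j}^{2} \geq s_{k}\norm{h}{X}^{2},
% with equality at h = u_{k}, whence j(H^{1/2},\mathcal{X}_{k}) = s_{k}^{1/2}.
```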
When using a subspace~$\mathcal{X}_{k}$ other than the $k$-th singular space, then its quality with respect to the $k$-th singular subspace is measured in terms of Jackson and Bernstein inequalities, which look as follows. \begin{ass} [{relating~$\mathcal{X}_{k}$ to the $k$-th singular subspace of $H^{1/2}$}]\label{ass:relations} Consider a sequence $(\mathcal{X}_{k})_{k\in\mathbb N}$ of subspaces of $X$. There are constants~$M, C_{P},C_{B}\ge 1$ such that for $k\in\mathbb N$ we have \begin{align} \varrho(H^{1/2},\mathcal{X}_{k}) & \leq C_{P} s_{k+1}^{1/2},&\qquad \text{(Jackson Inequality)}\\ j(H^{1/2},\mathcal{X}_{k}) & \geq \frac 1 {C_{B}}s_{k}^{1/2},&\qquad \text{(Bernstein Inequality)} \intertext{and for~$f_{0}\in H_\varphi$ we have that } \norm{(I - P_{k})f_{0}}{X} &\leq M \varphi(s_{k+1}).& \qquad \text{(approximation power)} \end{align} \end{ass} \begin{rem} Within the context of projection schemes in classical ill-posed problems, such assumptions were made in the study~\cite{MR2394505}. For finite element approximations, i.e.,\ when the spaces~$\mathcal{X}_{k}$ consist of finite elements, a detailed example is given in~\cite[Ex.~2.4]{MR2367863}. In the context of Bayesian methods the recent study~\cite{MR4116718} also makes similar assumptions, see ibid. Ass.~2.3. \end{rem} Under Assumption~\ref{ass:relations} the following bound holds. \begin{prop}\label{prop:main-bound} Let $f_0\in H_\varphi$ and let the sequence $(\mathcal{X}_{k})_{k\in\mathbb N}$ of subspaces of $X$ satisfy Assumption~\ref{ass:relations}. Then for all $k\in\mathbb N$, we have that $$ \omega_{f_{0}}(H^{-1/2},\mathcal{X}_{k},\delta) \leq M\lr{1 + C_{P}C_{B}}\varphi(s_{k+1}) + C_{B}\frac{\delta}{\sqrt{s_{k}}}. $$ \end{prop} In the bound from Proposition~\ref{prop:main-bound} we have the flexibility of choosing the truncation level~$k\in\mathbb N$, and we next study this choice.
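The structure of this bound can be traced by the following sketch: for $f\in\mathcal{X}_{k}$ with $\norm{A(f-f_{0})}{Y}=\norm{H^{1/2}(f-f_{0})}{X}\leq\delta$, split $f - f_{0} = \lr{f - P_{k}f_{0}} + \lr{P_{k}f_{0} - f_{0}}$ and apply the three inequalities of Assumption~\ref{ass:relations} in turn.

```latex
% Since f - P_{k}f_{0} \in \mathcal{X}_{k}, the Bernstein inequality gives
\norm{f - P_{k}f_{0}}{X}
  \leq C_{B}\, s_{k}^{-1/2}\norm{H^{1/2}(f - P_{k}f_{0})}{X}
  \leq C_{B}\, s_{k}^{-1/2}\lr{\delta + \norm{H^{1/2}(I - P_{k})f_{0}}{X}},
% while the Jackson inequality together with the approximation power yields
\norm{H^{1/2}(I - P_{k})f_{0}}{X}
  \leq C_{P}\, s_{k+1}^{1/2}\norm{(I - P_{k})f_{0}}{X}
  \leq C_{P}\, s_{k+1}^{1/2}\, M\varphi(s_{k+1}).
% Adding \norm{(I - P_{k})f_{0}}{X} \leq M\varphi(s_{k+1}) and using s_{k+1}\leq s_{k}
% gives the asserted bound M(1 + C_{P}C_{B})\varphi(s_{k+1}) + C_{B}\delta/\sqrt{s_{k}}.
```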
First, we recall the companion to the index function~$\varphi$, given as \begin{equation} \label{eq:Theta} \Theta_{\varphi}(t) := \sqrt t \varphi(t),\quad t>0. \end{equation} Notice that $\Theta_\varphi$ is also an index function; more specifically, it is always strictly increasing and hence invertible. Optimizing the bound from Proposition~\ref{prop:main-bound} with respect to the choice of the truncation level, we arrive at the main result of this section. \begin{thm} \label{thm:phi-theta-bound} Suppose that~$f_{0}\in H_\varphi$, and that $(\mathcal{X}_{k})_{k\in\mathbb N}$ satisfies Assumption~\ref{ass:relations}. Given~$\delta>0$ we assign \begin{equation} \label{eq:nast} {k_{\delta}} := \max\set{j,\quad \Theta_{\varphi}(s_{j}) > \delta}. \end{equation} Then there is a constant~$\cg$ such that \begin{equation}\label{eq:modbound} \omega_{f_{0}}(H^{-1/2},\mathcal{X}_{k_{\delta}},\delta) \leq \cg \varphi\lr{\Theta_{\varphi}^{-1}(\delta)}, \end{equation} for~$\delta>0$ small enough. \end{thm} Some extensions to the above bounds on the modulus of continuity can be found in Appendix \ref{app:A}. \section{Relating the contraction rates for the direct and inverse problems} \label{sec:relating} In this section we discuss the steps for proving Theorem~\ref{thm:direct-inverse}, which is an application of Proposition~\ref{prop:ks}. We shall first use Theorem~\ref{thm:spc-bound} to establish contraction rates for the direct problem~\eqref{eq:direct}, finding rate sequences~$\delta_n$, $n\in\mathbb N$, for truncation levels $k_{n}$. In order to apply Theorem~\ref{thm:spc-bound} we need to determine the inherited prior for the direct problem (formulated in $Y$), obtained by pushing forward the (truncated Gaussian) prior on~$f$ through the mapping~$A$ (formulated in $X$). Furthermore, given an element~$f_0\in X$ we need to express the smoothness of~$g_0= A f_{0}$ with respect to the corresponding (inherited, untruncated) covariance operator.
We address both of these tasks in §~\ref{sec:relating-problems} and derive rates $\delta_n$ for the direct problem, by relying on either Assumption~\ref{ass:link-noncomm} or~\ref{ass:prior-linked2scale-noncomm}, depending on whether the (untruncated) prior covariance on $f$ commutes with $H=A^\ast A$ or not. Given such a rate $\delta_n$, we can then use the results of Section \ref{sec:inverseP}, specifically Proposition \ref{prop:main-bound}, to compute the corresponding $\omega_{f_0}(H^{-\frac12},X_{\kn},\delta_n)$, which, according to Proposition~\ref{prop:ks}, is a rate of contraction for the inverse problem at $f_0$. Here, $\lr{X_k}_{k\in\mathbb N}$ are the singular spaces of the prior covariance operator~$\Lambda^{f}$. A main component of the proof will be the realization that $k_{n}$ as given in \eqref{eq:kn-def} optimizes both the contraction rate $\delta_n$ as well as our bounds on the modulus of continuity; we shall see this in §~\ref{sec:optimality}. In the course of the proof, we shall establish that $\lr{X_k}_{k\in\mathbb N}$ obey Assumption~\ref{ass:relations}. \subsection{Rates for the direct problem} \label{sec:relating-problems} Let us consider the model~\eqref{eq:Ys}, and put a Gaussian prior $\Pi_{k}^{X}=\mathcal{N}(0, P_k \Lambda^{f} P_k)$ on $f\in X$, for a given self-adjoint, positive-definite and trace-class operator $\Lambda^{f}:X\to X$. Here $P_{k}$ denotes the orthogonal projection onto the $k$-dimensional subspace $X_{k}\subset X$ corresponding to the singular value decomposition of the operator $\Lambda^{f}$. We are interested in finding contraction rates $\delta_n$ of~$A f$ around~$A f_{0}$, for a given $f_{0}\in X$. Due to linearity, the Gaussian prior on $f\in X$ induces a Gaussian prior $\Pi_{k}^{Y}$ on $A f\in Y$, which has zero mean and covariance operator \begin{equation} \label{eq:cov-push-farward} C_{k}= A P_{k} \Lambda^{f} P_{k}A^{\ast}. \end{equation} Recall the terminology of native and inherited priors from Section \ref{sec:direct}.
It is interesting to ask when this push-forward prior is native for~$g\in Y$, and this is the case when the operators~$H$ and~$\Lambda^{f}$ commute. However, in general this will not be the case, that is, the push-forward of~$\Pi_{k}^{X}$ will not be native in~$Y$. Nevertheless, the $\operatorname{SPC}$ was bounded in~(\ref{ali:mean-ck}) for both native and non-native priors. See also Theorem \ref{thm:spc-bound}, which optimizes the bounds in both cases. \subsubsection{Commuting case: general smoothness} \label{sec:commute} The main observation is the following. \begin{lem} \label{lem:commuting-H-lmf} If the operators~$H=A^\ast A$ and~$\Lambda^{f}$ commute, then the push-forward prior~$A_{\sharp}\lr{\Pi_{k}^{X}}$ is native for~$g$. Furthermore, under Assumption~\ref{ass:link-noncomm} (with index function~$\chi$) we see that \begin{enumerate} \item the covariance operator~$\Lambda^{g} = A \Lambda^{f} A^{\ast}$ has a representation \begin{equation} \label{eq:lmg} \Lambda^{g}= \Theta_{\chi}^{2}(U H U^\ast), \end{equation} where~$U$ is an isometry arising from the polar decomposition~$A = U H^{1/2}$, and \item a source condition~$f_{0}\in H_\varphi$ yields that~$g_{0}\in\Lambda^g_\psi$ with index function~$\psi(t):= \Theta_{\varphi}(\lr{\Theta_{\chi}^{2}}^{-1}(t)),\ t>0$. \end{enumerate} \end{lem} Based on the above technical result we state the following consequence. \begin{prop}\label{prop:relate-direct} Let $\Lambda^{f}:X\to X$ be a positive definite, self-adjoint, trace class linear operator, and consider the companion $H=A^\ast A$ to the forward operator $A:~X~\to~Y.$ Under Assumption \ref{ass:link-noncomm}, assign to any index function~$\varphi$ the related index function~$\psi(t):=\Theta_{\varphi}(\lr{\Theta_{\chi}^{2}}^{-1}(t)),\ t>0$.
The following statements regarding a rate sequence~$\lr{\delta_n}_{n\in\mathbb N}$ are equivalent: \begin{enumerate} \item[a)]$\lr{\delta_n}_{n\in\mathbb N}$ is a contraction rate for the direct problem~\eqref{eq:direct} around $g_0=A f_0$, under the push-forwards $A_{\sharp}\lr{\Pi_{\ktn}^{X}}$ of the sequence of Gaussian priors $\Pi_{\ktn}^{X}~=~\mathcal{N}(0,P_{k(n)}\Lambda^{f} P_{k(n)})$ on $f$, where $P_k$ is the orthogonal projection onto the $k$-th singular space $X_{k}$ of $\Lambda^{f}$, and for $f_0\in {H_\varphi}$. \item[b)]$\lr{\delta_n}_{n\in\mathbb N}$ is a rate of contraction for model \eqref{eq:noise-model}, obtained for a sequence of (native) Gaussian priors~$\mathcal N(0, Q_{k(n)}\Theta_{\chi}^{2}(U H U^\ast)Q_{k(n)})$ on $g$, where $Q_k$ is the orthogonal projection onto the $k$-th singular space $U X_{k}$ of $\Lambda^{g}:= \Theta_{\chi}^2(U H U^\ast)$, and for $g_0\in \Lambda^{g}_\psi$. \end{enumerate} \end{prop} \subsubsection{Non-commuting case: power type smoothness} \label{sec:non-commute} If the operators $H=A^\ast A$ and $\Lambda^{f}$ do not commute, the push-forward of the prior on~$f$ will no longer be native for~$g= A f$. However, even in the non-commuting case we can translate the smoothness assumption~$f_{0}\in H_\varphi$ with power-type $\varphi$ to a corresponding smoothness of~$g_{0}:= A f_{0}$ with respect to the operator~$\Lambda^{g}=A\Lambda^{f}A^\ast$, under Assumption~\ref{ass:prior-linked2scale-noncomm}. \begin{lem}\label{lem:smoothness-h2lmg} Suppose that Assumption~\ref{ass:prior-linked2scale-noncomm} holds. If~$f_{0}\in H_\varphi$ for an index function~$\varphi(t)\propto t^{\mu}$ with~$\mu\leq a$, then~$g_{0}= A f_{0} \in\Lambda^{g}_{\psi}$ for an index function~$\psi(t) \propto t^{\frac{\mu + 1/2}{2a+1}}$, which has an operator concave square.
\end{lem} The proof of Lemma~\ref{lem:smoothness-h2lmg}, which holds for $\mu\leq a$, is based on Heinz' Inequality, and this allows us to treat power-type smoothness of $g_{0}$ with respect to $\Lambda^{g}$, with exponent $0\leq\theta\leq 1/2$. In particular, it does not allow us to fully exploit the results of Section~\ref{sec:direct} for inherited priors, which hold for $0\leq\theta\leq1$ (since they only require operator concavity of $\psi$). Therefore, we shall highlight the following condition, which allows us to extend the range of applicability in the non-commuting cases. It is a strengthening of Assumption~\ref{ass:prior-linked2scale-noncomm}: There exists $a\geq1/2$ such that \begin{equation} \label{eq:3over2} \norm{\lr{\Lambda^{f}}^{3/2}f}{X} \asymp \norm{H^{3a}f}{X},\quad f\in X. \end{equation} \begin{rem} In view of Heinz' Inequality (with~$\theta:= 1/3$), \eqref{eq:3over2} is consistent with Assumption~\ref{ass:prior-linked2scale-noncomm}. Conversely, in this non-commuting case, \eqref{eq:3over2} cannot be derived from Assumption~\ref{ass:prior-linked2scale-noncomm}, but instead is a strengthening of it. In brief, the validity of a link condition yields that the eigenfunctions for the operators on both sides must share the same smoothness (which can be seen from the modulus of injectivity, reflecting the 'inverse property'). Therefore, in general a link cannot be 'lifted' to higher powers, in contrast to the commuting case, where both sides share the same eigenfunctions, and so do arbitrary powers. \end{rem} The proof of the following strengthening of Lemma~\ref{lem:smoothness-h2lmg} is based on interpolation in scales of Hilbert spaces, a concept which extends Heinz' Inequality from 'element-wise' to 'operator-wise' interpolation. \begin{lem}\label{lem:smoothness-h2lmg-3over2} Suppose that~(\ref{eq:3over2}) holds.
If~$f_{0}\in H_\varphi$ for the index function $\varphi(t)= t^{\mu}$ with~$\mu\leq 2a + 1/2$, then~$g_{0}= A f_{0} \in\Lambda^{g}_{\psi}$ for the index function~$\psi(t) = t^{\frac{\mu + 1/2}{2a+1}}$, which is operator concave. \end{lem} We summarize the developments of this section. \begin{prop}\label{prop:relate-direct-noncomm} Let $\Lambda^{f}:X\to X$ be a positive definite, self-adjoint, trace class linear operator, and consider the companion $H=A^\ast A$ to the forward operator $A:X\to Y.$ Consider a native Gaussian prior $\Pi_{k}^{X}=\mathcal{N}(0,P_{k}\Lambda^{f} P_{k})$ for~$f\in X$, where $P_k$ is the orthogonal projection onto the $k$-th singular space $X_{k}$ of $\Lambda^{f}$. Given $a\geq 1/2$, assign to the index function~$\varphi(t) = t^{\mu}, \mu>0$, the related index function~$\psi(t):= t^{(\mu + 1/2)/(2a+1)},\ t>0$. Suppose either~$\mu \leq a$ and Assumption~\ref{ass:prior-linked2scale-noncomm} holds, or $\mu\leq 2a+1/2$ and \eqref{eq:3over2} holds. Consider the direct problem~\eqref{eq:direct} around $g_0=A f_0$, under the sequence of priors $\Pi_{\ktn}^{X}$ on $f$ and for $f_0\in {H_\varphi}$. Then we can obtain a rate of contraction for this problem, by computing a rate of contraction $\lr{\delta_n}_{n\in\mathbb N}$ for model \eqref{eq:noise-model}, for the sequence $A_{\sharp}\lr{\Pi_{\ktn}^{X}}$ of inherited Gaussian priors on $g$, and for $g_0\in \Lambda^{g}_\psi$ with $\Lambda^{g}=A\Lambda^{f}A^\ast$. \end{prop} We conclude this discussion of relating the smoothness of~$g_{0}=A f_{0}$ to the smoothness of $f_0$, as expressed in Propositions~\ref{prop:relate-direct} and~\ref{prop:relate-direct-noncomm} for the commuting and non-commuting cases respectively, with a consistency check. Specifying $\chi(t):=t^a$ and $\varphi(t):=t^\mu$ in the commuting case, we restrict to the power-type smoothness and relationship between $\Lambda^{f}$ and $H$ considered in the non-commuting case.
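Under this specification the index function of Proposition~\ref{prop:relate-direct} evaluates explicitly; since $\Theta_{\chi}^{2}(t)=t\,\chi^{2}(t)=t^{2a+1}$ and $\Theta_{\varphi}(t)=t^{\mu+1/2}$, we compute

```latex
\psi(t) \;=\; \Theta_{\varphi}\lr{\lr{\Theta_{\chi}^{2}}^{-1}(t)}
        \;=\; \Theta_{\varphi}\lr{t^{\frac{1}{2a+1}}}
        \;=\; t^{\frac{\mu + 1/2}{2a+1}},\quad t>0.
```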
In that setting, the obtained functions, representing the smoothness, should thus agree. Indeed, it is readily seen that the function~$\psi$ as obtained in Proposition~\ref{prop:relate-direct} is exactly the same as in Proposition~\ref{prop:relate-direct-noncomm} with this specification. Therefore, the assumptions for the non-commuting case allow us to maintain the results as obtained in the commuting one; however, the restrictions~$a\geq 1/2$ and~$0< \mu \leq 2a+1/2$ occur, which are not seen in the commuting case. \subsection{Rates for the inverse problem - optimality of the truncation point} \label{sec:optimality} Consider a forward operator $A$ and let $H:= A^\ast A$ be its companion self-adjoint operator. Let $\delta_n$ be a rate of contraction for the direct problem \eqref{eq:direct} around~$g_{0}= A f_0\in Y$, under a Gaussian prior truncated at level $k_n$, as defined in the previous subsection. If $\Lambda^{f}$ and $H$ commute, then by Proposition \ref{prop:relate-direct}, under Assumption~\ref{ass:link-noncomm}, such a rate can be computed using Theorem~\ref{thm:spc-bound}. Such a rate can also be computed in the non-commuting case under Assumption~\ref{ass:prior-linked2scale-noncomm}, and the corresponding result was formulated in Proposition~\ref{prop:relate-direct-noncomm}. Then according to Proposition \ref{prop:ks}, in order to compute a rate of contraction for the original inverse problem \eqref{eq:Ys}, it suffices to compute $\varepsilon_n=\omega_{f_0}(H^{-1/2},X_{\kn},\delta_n)$. We have studied bounding the modulus of continuity $\omega_{f_0}(H^{-1/2},\mathcal{X}_{k},\delta)$ in Section \ref{sec:inverseP}. Our bounds hold under Assumption \ref{ass:relations} on the relationship of the subspaces $(\mathcal{X}_{k})_{k\in\mathbb N}$ to the singular subspaces of $H$.
Since in the present Bayesian inverse problem context, $(X_{k})_{k\in\mathbb N}$ are aligned to the untruncated prior covariance operator $\Lambda^{f}$, in order to apply the results of Section \ref{sec:inverseP} for bounding $\varepsilon_n=\omega_{f_0}(H^{-1/2},X_{\kn},\delta_n)$, we first need to verify that $(X_{k})_{k\in\mathbb N}$ satisfy Assumption \ref{ass:relations}. We do this in the next proposition. \begin{prop}\label{prop:validity-ass-relations} Let~$(X_{k})_{k\in\mathbb N}$ be the singular spaces for the operator~$\Lambda^{f}$. Either Assumption~\ref{ass:link-noncomm}, or Assumption~\ref{ass:prior-linked2scale-noncomm} together with smoothness~$f_{0}\in H_\varphi$ for $\varphi(t) = t^\mu$ with $0<\mu\leq a$, yields the validity of Assumption~\ref{ass:relations}. Under the stronger assumption~\eqref{eq:3over2} the range in the latter setting extends to~$\mu \leq 2a + 1/2$. \end{prop} \begin{rem} The above result is in correspondence with~\cite[Prop.~5.3]{MR4116718}, which concerns the commuting case. Here this is extended to the non-commuting cases under the link conditions (Assumption~\ref{ass:prior-linked2scale-noncomm} and~\eqref{eq:3over2}). \end{rem} We next investigate whether the truncation level~$k_{n}$ from~\eqref{eq:kn-def} also yields an optimized bound when used as a discretization level for the modulus of continuity, such that both bounds are optimized simultaneously. Indeed, we will see that this is the case, and the following two technical results are the key. We first establish the optimality of $k_{n}$ in the commuting case, and then extend to the non-commuting case. Given an index function~$\psi$ we consider a rate sequence~$\delta_n$ which obeys \begin{equation}\label{eq:mild-ass} 2 \max\set{\psi^2(1/n), \frac{k_{n}}{n}} \leq \delta_n^{2} \leq \ci^2 \max\set{\psi^2(1/n), \frac{k_{n}}{n}}, \end{equation} for a constant~$2\leq\ci< \infty$.
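For orientation, in the power-type case the quantities appearing in the two propositions below can be made explicit; with $\varphi(t)=t^{\mu}$ one has $\Theta_{\varphi}(t)=t^{\mu+1/2}$, and therefore

```latex
\Theta_{\varphi}^{-1}(\delta) \;=\; \delta^{\frac{1}{\mu + 1/2}}
\qquad\text{and}\qquad
\varphi\lr{\Theta_{\varphi}^{-1}(\delta)} \;=\; \delta^{\frac{\mu}{\mu + 1/2}},
```

which explains the exponent~$\mu/(\mu+1/2)$ appearing in the non-commuting bound.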
\begin{prop}\label{prop:relation} Under Assumption~\ref{ass:link-noncomm} the following holds true: suppose that $f_{0}\in H_\varphi$, and let~$\psi(t)=\Theta_{\varphi}(\lr{\Theta_{\chi}^{2}}^{-1}(t))$. Let~$k_{n}$ be as in~(\ref{eq:kn-def}), and assume that~\eqref{eq:mild-ass} holds true for a rate sequence $\lr{\delta_n}_{n\in\mathbb N}$. We then have that $$ \omega_{f_0}\lr{H^{-1/2}, X_{\kn},\delta_n} \leq 2 (1 + \ci)\varphi\lr{\Theta_{\varphi}^{-1}(\delta_n)}. $$ \end{prop} This result is extended to the non-commuting case as follows. \begin{prop}\label{prop:relation-noncomm} Under Assumption~\ref{ass:prior-linked2scale-noncomm} with $\mu\leq a$, or condition~\eqref{eq:3over2} with $\mu\leq 2a+1/2$, the following holds true: suppose that~$f_{0}\in H_\varphi$ for the power type function~$\varphi(t) = t^{\mu}$, and let~$\psi(t)= t^{(\mu + 1/2)/(2a +1)}$. Let~$k_{n}$ be as in~(\ref{eq:kn-def}), and assume that~\eqref{eq:mild-ass} holds true for a rate sequence $\lr{\delta_n}_{n\in\mathbb N}$. We then have that $$ \omega_{f_0}\lr{H^{-1/2}, X_{\kn},\delta_n} \lesssim \delta_{n}^{\frac{\mu}{\mu +1/2}}. $$ \end{prop} Evidently, for $k_{n}$ as in \eqref{eq:kn-def} a bound as in~\eqref{eq:mild-ass} holds for $\delta_n^{2}$ equal to (a multiple of) the optimized bound for the direct problem as given in the right hand side of \eqref{eq:spc-bound-thm}, hence our bound on the modulus of continuity is indeed also optimized in both the commuting and non-commuting cases according to the last two propositions. Combined, Propositions~\ref{prop:relation} and \ref{prop:relation-noncomm} imply the validity of Theorem~\ref{thm:direct-inverse}. We emphasize that Proposition~\ref{prop:relation-noncomm} holds true for the extended range~$\mu\leq 2a+1/2$, provided that condition~\eqref{eq:3over2} holds. This yields the following corollary.
\begin{cor}[Corollary to the proof of Theorem~\ref{thm:direct-inverse}] Consider the inverse problem \eqref{eq:Ys}, and suppose that~$f_{0}$ has smoothness~$H_\varphi$ for the function~$\varphi(t) = t^\mu$. Assume we put a truncated Gaussian prior $\mathcal{N}(0, P_{k_{n}}\Lambda^{f} P_{k_{n}})$ on $f$, with $\Lambda^{f}$ a self-adjoint, positive-definite, trace-class, linear operator in $X$, and specify the related covariance operator~$\Lambda^{g}=A\Lambda^{f} A^\ast$. Under condition~\eqref{eq:3over2} with $\mu\leq 2a+1/2$, consider the index function \begin{equation*} \psi(t) =t^{\frac{\mu + 1/2}{2a+1}}, \quad t>0. \end{equation*} For the choice~$k_{n}$ according to~\eqref{eq:kn-def} let $\delta_n$ be given as \begin{equation*} \delta_n:= C \lr{\max\set{\psi^{2}\lr{\frac 1 n},\frac {k_{n}} n}}^{1/2} \end{equation*} for some constant~$C$. Then the posterior contracts around $f_{0}$ at a rate \[ \varepsilon_n\asymp \varphi(\Theta_\varphi^{-1}(\delta_n)),\quad n\to\infty. \] \end{cor} \section{Examples} \label{sec:xmpls} Here we exhibit how to use Theorem \ref{thm:direct-inverse} in order to obtain rates of contraction for the inverse problem~\eqref{eq:Ys}. The subsequent examples distinguish between moderate (power type), severe (exponential) and mild (logarithmic) decay of the singular numbers of the forward map~$A$. Throughout we fix once and for all some element~$f_{0} \in\eS \beta$, see~\eqref{eq:sobolev-ell} in Section~\ref{xmpl:es-beta}. It will be transparent that, depending on the underlying operator~$H=A^\ast A$, this will result in different source-wise representations~$f_0\in H_\varphi$. However, regardless of the kind of ill-posedness of the operator~$H$ we will have that~$\varphi^2(s_j)\asymp j^{-2\beta}$. For a truncated Gaussian prior on $f$ with underlying covariance operator $\Lambda^{f}$, we thus need to determine SPC-rates~$\delta_n$ for the direct problems \eqref{eq:direct} which correspond to these examples.
We will do this in Section~\ref{sec:direct-rates}, and we will apply Theorem~\ref{thm:spc-bound}, which results in the bound \eqref{eq:spc-bound-thm} for the optimal truncation level $k_{n}$ given in \eqref{eq:kn-def}. For all considered types of behaviour of the singular numbers of~$A$, we will study truncated $\alpha$-regular Gaussian priors as introduced in Section~\ref{sec:contraction}. In addition, in the case that~$A$ exhibits exponential decay of the singular numbers, we shall also discuss a prior covariance operator with exponential decay (analytic prior); this is in alignment with the case analyzed in~\cite{MR3757524}. In all cases we will assume that $\Lambda^{f}$ and $H$ commute. Having determined rates~$\delta_n$ for the direct problem, in Section~\ref{sec:examples} we shall establish bounds for the modulus of continuity corresponding to the forward operators $A$ at hand. To this end we will apply Theorem \ref{thm:phi-theta-bound}, which for any $\delta$ results in the bound \eqref{eq:modbound} for the optimal truncation level ${k_{\delta}}$ given in \eqref{eq:nast}. We shall then highlight that, by Theorem \ref{thm:direct-inverse}, plugging $\delta=\delta_n$ in these bounds results in contraction rates for the corresponding inverse problems, for a Gaussian prior truncated at level $k_{n}$. The rates given below for (most of) the direct and (all of the) inverse problems correspond to the minimax rates for estimation in Gaussian white noise, under Sobolev-type smoothness. While for Examples~\ref{xmpl:power-type} and~\ref{xmpl:severe-type} these minimax rates are known, it is possible to find the minimax rates for the mildly ill-posed case in Example~\ref{xmpl:mild-type}, by using Theorem~\ref{thm:spc-bound} for the direct problem and the result from~\cite{MR3859257} for the inverse problem. These rates are given here for the first time.
Finally, we will conclude with a discussion on non-commuting $\Lambda^{f}$ and $H$ cases in Section~\ref{sec:examples-discussion}. \subsection{Direct rates} \label{sec:direct-rates} We confine ourselves to the case that $\Lambda^{f}$ and $H$ commute, so that Assumption \ref{ass:link-noncomm} holds with an appropriate~$\chi$. Recall that in this context, $\Lambda^{g}=A\Lambda^{f}A^\ast$, and that the smoothness of the truth is expressed relative to $\Lambda^{g}$, via $\psi(t)$ given in \eqref{eq:psi-def}. Then, in order to obtain the truncation level~$k_{n}$ from~\eqref{eq:kn-def} and the corresponding bound on the $\operatorname{SPC}$ from~\eqref{eq:spc-bound-thm}, we shall proceed as follows. In this commuting case we see that~$s_j(\Lambda^{g}) = s_j(H) s_j(\Lambda^{f})$, and we first check if Assumption~\ref{ass:decay-rate} holds, in which case we can use Proposition~\ref{prop:alphalessbeta} to determine whether the regularization term dominates in the bound \eqref{eq:spc-bound-thm} or not. Furthermore, we make use of the identity \begin{math} \psi(\Lambda^{g}) = \Theta_{\varphi}(U H U^\ast), \end{math} which holds for $\psi(t)$ from~\eqref{eq:psi-def}, and this extends to the singular numbers. Using this identity, condition~(\ref{eq:alphabeta}) translates to \begin{equation} \label{eq:alpha-beta-new} (j^{-2\beta} \asymp)\quad {\varphi}^{2}(s_{j}) \leq \cd j s_{j}(\Lambda^{f}),\quad j=1,2,\dots \end{equation} Under Assumption~\ref{ass:decay-rate} and \eqref{eq:alpha-beta-new}, we find~$k_{n}$ by balancing~$k/n \asymp \psi^2\lr{s_k(\Lambda^{g})}$ and the~$\operatorname{SPC}$ is bounded by (a multiple of)~$k_{n}/n$. This bound is known to be order optimal. If Assumption~\ref{ass:decay-rate} does not hold, then we proceed as follows, cf. Remark~\ref{rem:truncation-dominating}. We find~$l_n$ by balancing~$l/n \asymp \psi^2\lr{s_l(\Lambda^{g})}$. Then we check whether~$\psi^2(1/n)$ is larger than~$l_n/n$, in which case the regularization bias dominates.
If this is the case, then $k_{n}$ is found by balancing $s_{j}(\Lambda^{g})\asymp 1/n$ and the~$\operatorname{SPC}$ is bounded by (a multiple of)~$\psi^2(1/n)$. Otherwise, $k_{n}=l_n$ and the $\operatorname{SPC}$ is bounded by (a multiple of)~$k_{n}/n$. In the latter case this is known to be order optimal again. We emphasize that we only need to explicitly compute $\psi$ (hence also $\chi, (\Theta^2_\chi)^{-1}$), in the case that the regularization bias dominates. Another consequence is worth mentioning. In the case that the regularization bias is dominated, and hence the obtained contraction rate corresponds to the minimax rate of statistical estimation, the truncation level~$k_{n}$ is obtained from balancing~$k/n \asymp \psi^2(s_k(\Lambda^{g})) = \Theta_\varphi^2(s_k(H))$. In particular, the level~$k_{n}$ does not depend on the chosen regularity of~$\Lambda^{g}$; it is entirely determined by the smoothness as expressed with respect to~$H$. A similar statement applies to the contraction rate for the inverse problem. As the minimax rate cannot depend on the prior regularity, the same holds for the chosen truncation level. This is seen in the examples below. Notice that in Example \ref{xmpl:severe-type} (both with $\alpha$-regular and analytic priors as considered below), the direct problem corresponds to a prior covariance and smoothness of the truth, which are not standard in the literature for the white noise model. Here they appear naturally, because the structure of the direct problem is inherited from the considered inverse problem. For this reason, it was necessary to have the general setup for the direct problem in Section \ref{sec:direct}. \begin{xmpl}[moderately ill-posed operator]\label{xmpl:power-type} Here we assume that the operator~$H$ has power type decay of the singular numbers, that is,\ $s_{j}(H)\asymp j^{-2p},\ p>0,\ j=1,2,\dots$. We need to find a corresponding index function such that~$f_{0}\in H_\varphi$.
This is achieved by letting~$\varphi(t) := t^{\beta/(2p)}$, see the example in §~\ref{sec:smoothness}, which gives $\Theta_{\varphi}(t)=t^\frac{\beta+p}{2p}$. We consider truncated $\alpha$-regular priors, so that $s_j(\Lambda^{f})\asymp j^{-1-2\alpha}.$ Note that $g_{0}$ has smoothness $\Lambda^{g}_\psi=(UHU^\ast)_{\Theta_\varphi}$, which in this example translates to Sobolev-type smoothness of order $\beta+p$. We have $s_j(\Lambda^{g})=s_js_j(\Lambda^{f})\asymp j^{-1 - 2(\alpha+p)}$, and hence the regularity of the prior also increases, from~$\alpha$ to~$\alpha + p$. Assumption~\ref{ass:decay-rate} holds in this case. For $\alpha$-regular priors, condition~\eqref{eq:alpha-beta-new} holds if and only if~$\alpha \leq \beta$, and in this case we know from Proposition~\ref{prop:alphalessbeta} that the regularization bias in Theorem \ref{thm:spc-bound} is of lower order. The optimized truncation level $k_{n}$, as given in \eqref{eq:kn-def}, can thus be computed by balancing $$ \frac k n \asymp \psi^{2}\lr{s_{k}(\Lambda^{g})} = \Theta_{\varphi}^{2}\lr{s_{k}}\asymp k^{-2(\beta + p)}, $$ yielding~$k_{n} \asymp n^{\frac1{1+2\beta + 2p}}$. Plugging this into \eqref{eq:spc-bound-thm}, we obtain the bound \begin{equation}\label{eq:deltan-ex1} \delta_n^2:=\operatorname{SPC}^{\Lambda^g_\psi}(k_{n})\lesssim n^{-\frac{2\beta+2p}{1+2\beta+2p}}, \end{equation} which is the square of the minimax rate for the white noise model under Sobolev-type smoothness of order $\beta+p$ (this is asserted by Theorem \ref{thm:spc-bound}, and is also well known in this case). \end{xmpl} \begin{xmpl}[severely ill-posed operator]\label{xmpl:severe-type} Here we assume that the operator~$H$ has exponential decay of the singular numbers, that is,\ $s_{j}(H)\asymp e^{- 2 \gamma j^{p}}, \ p>0,\ j=1,2,\dots$.
The resulting index function~$\varphi$ which realizes the source condition for~$f_{0}$ is then~$\varphi(t) = \log^{-\beta/p}(1/t)$, and the related function~$\Theta_{\varphi}$ is given as~$\Theta_{\varphi}(t)= \sqrt t \log^{-\beta/p}(1/t)$. Lemma~\ref{lem:geometry} shows that its inverse behaves like $\Theta_{\varphi}^{-1}(s) \sim s^{2}\log^{2\beta/p}(1/s)$. We again consider truncated $\alpha$-regular priors, so that $s_j(\Lambda^{f})\asymp j^{-1-2\alpha}.$ Note that again~$g_{0}$ has smoothness $\Lambda^{g}_\psi=(UHU^\ast)_{\Theta_\varphi}$, which in this example means that $g_{0}$ has coefficients decaying at least as fast as $e^{-\gamma j^p}/j^{\beta}$. We have that $s_j\lr{\Lambda^{g}}=s_js_j(\Lambda^{f})\asymp j^{-1-2\alpha}e^{-2\gamma j^p}$. In this case, Assumption~\ref{ass:decay-rate} only holds if $p\leq1$. Since we are interested in all $p>0$, we cannot apply Proposition \ref{prop:alphalessbeta} and we need to check which of the two terms dominates in the bound \eqref{eq:spc-bound-thm} in Theorem~\ref{thm:spc-bound}; thus we compute $\psi(t)=\Theta_\varphi((\Theta_\chi^2)^{-1}(t))$ explicitly. By Assumption~\ref{ass:link-noncomm} we have that $\chi^2(s_j)=s_j(\Lambda^{f})$. Thus~$\chi^2(t)\asymp \log^{-\frac{1+2\alpha}p}(1/t)$, and hence $\Theta^2_{\chi}(t)\asymp t\log^{-\frac{1+2\alpha}p}(1/t)$. Using Lemma \ref{lem:geometry}, we can invert $\Theta^2_{\chi}$ to get $$(\Theta^2_\chi)^{-1}(s)\sim s\log^{\frac{1+2\alpha}p}(1/s)\quad \text{as}\ s\to0,$$ thus \[\psi(t)\asymp t^\frac12\log^{\frac{1+2\alpha-2\beta}{2p}}(1/t),\quad \text{as } t\to0.\] On the one hand the regularization bias behaves asymptotically as \[\psi^2(1/n)\asymp\frac1n\log^{\frac{1+2\alpha-2\beta}p}(n).\] On the other hand, we find $l_n$ by balancing $$ \frac l n \asymp \psi^{2}\lr{s_{l}(\Lambda^{g})} = \Theta_{\varphi}^{2}\lr{s_{l}}\asymp l^{-2\beta}e^{-2\gamma l^p}, $$ resulting in~$l_n\asymp\log^\frac1p(n)$, again using Lemma \ref{lem:geometry}.
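The balancing step just carried out can also be checked numerically. The following Python sketch is purely illustrative and not part of the analysis; the concrete values $\beta=\gamma=p=1$ and the bisection solver are our own choices. It solves the balance equation $l/n = l^{-2\beta}e^{-2\gamma l^{p}}$ and confirms that the solution grows like $\log^{1/p}(n)$.

```python
import math

def balance_l(n, beta, p, gamma):
    # Solve l/n = l**(-2*beta) * exp(-2*gamma*l**p) for l, i.e. find the root
    # of the increasing function g(l) = (1+2*beta)*log(l) + 2*gamma*l**p - log(n).
    g = lambda l: (1 + 2 * beta) * math.log(l) + 2 * gamma * l ** p - math.log(n)
    lo, hi = 1.0, 2.0
    while g(hi) < 0:          # bracket the root from above
        hi *= 2
    for _ in range(200):      # bisection; g(lo) < 0 <= g(hi) throughout
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

beta = p = gamma = 1.0
for n in (1e8, 1e16, 1e32):
    l_n = balance_l(n, beta, p, gamma)
    # l_n grows like log^{1/p}(n): it is bracketed by multiples of log(n)/(2*gamma)
    assert 0.3 * math.log(n) / (2 * gamma) <= l_n <= math.log(n) / (2 * gamma)
```

The upper bracket is exact, since $2\gamma l \leq (1+2\beta)\log l + 2\gamma l^{p} = \log n$ whenever $l\geq 1$ and $p=1$.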
We thus see that the regularization bias is of lower order, i.e., $\psi^2(1/n)\lesssim l_n/n$, if and only if $\alpha\leq \beta$, in which case $k_{n}$ in \eqref{eq:kn-def} is equal to $l_n$. For $\alpha>\beta$, the level $k_{n}$ can be found by balancing $s_k(\Lambda^{g})\asymp 1/n$, yielding $k_{n}\asymp \log^\frac1p(n)$ again. The right hand side of the bound \eqref{eq:spc-bound-thm} is dominated by $k_{n}/n\asymp \frac1n\log^{\frac1p}(n)$ in the former case, while in the latter by $\psi^2(1/n)$ as given above. Combining, Theorem \ref{thm:spc-bound} gives the bound \begin{equation}\label{eq:deltan-ex2} \delta_n^2:=\operatorname{SPC}^{\Lambda^g_\psi}(k_n)\lesssim n^{-1}\log^\frac{1+0\vee(2\alpha-2\beta)}p(n), \end{equation} which, whenever~$\alpha \leq \beta$, is the square of the minimax rate for the white noise model under the smoothness class $\Lambda^{g}_\psi=(U H U^\ast)_{\Theta_\varphi}$ (this is asserted by Theorem \ref{thm:spc-bound}, and again well known in this case). \end{xmpl} \begin{xmpl} [mildly ill-posed operator]\label{xmpl:mild-type} Here we assume that the operator~$H$ has logarithmic decay of the singular numbers, that is,\ $s_{j}(H)\asymp \log^{-2p}j,\ p>0,\ j=1,2,\dots$, such that the operator is ``almost continuously invertible''. The index function for~$f_{0}$ is then given as~$\varphi(t) = e^{-\beta/t^{1/(2p)}},\ t>0$. The inverse of the resulting function~$\Theta_{\varphi}$ is seen to behave like~$\Theta_{\varphi}^{-1}(s) \sim \beta^{2p} \log^{-2p}\lr{\frac{1/s}{\log^p(1/s)}}$ using Lemma \ref{lem:geometry}. We consider again $\alpha$-regular priors, so that we find that~$s_j(\Lambda^{g}) = s_j s_j(\Lambda^{f}) \asymp j^{-1- 2\alpha} \log^{-2p} j$. In particular Assumption~\ref{ass:decay-rate} holds, and Condition~(\ref{eq:alpha-beta-new}) is valid if and only if~$\alpha \leq \beta$.
Thus, in the latter case the regularization bias is dominated, and the truncation level~$k_{n}$ is obtained from balancing $$ \frac k n \asymp \psi^{2}\lr{s_{k}(\Lambda^{g})} = \Theta_{\varphi}^{2}\lr{s_{k}}\asymp k^{-2\beta}\log^{-2p}k, $$ resulting in~$k_n\asymp n^\frac{1}{1+2\beta}\log^{-\frac{2p}{1+2\beta}}(n)$, again using Lemma \ref{lem:geometry}. Notice that we do not need to explicitly determine the function~$\psi$ in this case, since the identity~$\psi^{2}\lr{s_{k}(\Lambda^{g})} = \Theta_{\varphi}^{2}\lr{s_{k}}$ holds throughout, as mentioned above. We obtain that \begin{equation}\label{eq:deltan-ex3} \delta_n^2:=\operatorname{SPC}^{\Lambda^g_\psi}(k_n)\lesssim \frac{k_{n}}{n} \asymp n^{-\frac{2\beta}{1+2\beta}}\log^{-\frac{2p}{1+2\beta}}(n), \end{equation} and that this is the (square of the) minimax rate of statistical estimation in the white noise model under smoothness expressed in terms of the index function~$\Theta_\varphi$ from above. \end{xmpl} Finally, we revisit Example~\ref{xmpl:severe-type}, but this time with the covariance operator of the Gaussian prior as considered in~\cite[Section 3.3]{MR3757524}. \begin{xmplno}[Example~\ref{xmpl:severe-type} with analytic prior] The covariance operator of the Gaussian prior is assumed to have eigenvalues~$s_{j}(\Lambda^{f})\asymp j^{-\alpha} e^{-\xi j^{p}}$, $\xi>0, \alpha>0, p>0,\ j=1,2,\dots$. Although the element~$g_{0}=Af_{0}$ is the same as before, i.e., $g_{0}$ has coefficients decaying at least as fast as $e^{-\gamma j^p}/j^{\beta}$, its smoothness relative to the resulting~$\Lambda^{g}$ is with respect to a different function~$\psi$, such that again~$g_0\in \Lambda^g_\psi$. Indeed, we find that $s_j\lr{\Lambda^{g}}\asymp j^{-\alpha}e^{-(\xi+2\gamma) j^p}$, so that again Assumption \ref{ass:decay-rate} only holds if $p\leq1$.
We thus cannot apply Proposition \ref{prop:alphalessbeta} and we again need to explicitly check which of the two terms dominates the bound \eqref{eq:spc-bound-thm} in Theorem~\ref{thm:spc-bound}. In particular, we again need to explicitly compute $\psi(t)=\Theta_\varphi((\Theta_\chi^2)^{-1}(t)).$ By Assumption \ref{ass:link-noncomm}, we have that $\chi^2(t)\asymp t^\frac{\xi}{2\gamma}\log^{-\frac{\alpha}p}(1/t)$, so that $\Theta^2_{\chi}(t)\asymp t^{1+\frac{\xi}{2\gamma}}\log^{-\frac{\alpha}p}(1/t)$. Using Lemma \ref{lem:geometry}, we can invert $\Theta_{\chi}^2$ to get $$(\Theta^2_\chi)^{-1}(s)\sim s^{\frac{2\gamma}{2\gamma+\xi}}\log^{\frac{2\alpha\gamma}{p(2\gamma+\xi)}}(s^{-\frac{2\gamma}{2\gamma+\xi}}), \;\text{as}\ s\to0,$$ thus \[\psi(t)\asymp t^\frac{\gamma}{2\gamma+\xi}\log^{-\frac{\beta}{p}+\frac{\alpha\gamma}{2\gamma p+\xi p}}(1/t), \;\text{as}\ t\to0.\] On the one hand the regularization bias behaves asymptotically as \[\psi^2(1/n)\asymp n^{-\frac{2\gamma}{2\gamma+\xi}}\log^{-\frac{2\beta}{p}+\frac{2\alpha\gamma}{2\gamma p+\xi p}}(n).\] On the other hand, we find~$l_n$ from balancing $$ \frac l n \asymp \psi^{2}\lr{s_{l}(\Lambda^{g})} = \Theta_{\varphi}^{2}\lr{s_{l}}\asymp l^{-2\beta}e^{-2\gamma l^p}, $$ resulting in~$l_n\sim \lr{\frac{1}{2\gamma}\log(n)}^\frac 1 p$, again using Lemma \ref{lem:geometry}. As a result the second term in the bound \eqref{eq:spc-bound-thm} is $\frac{l_n}{n}\sim n^{-1}\lr{\frac{1}{2\gamma}\log(n)}^\frac 1 p$, which is always dominated by the regularization bias since $\xi>0$. Thus, the truncation~$k_{n}$ is obtained from~$s_k(\Lambda^{g})\asymp \frac 1 n$, resulting similarly in~$k_{n} \sim \lr{\frac{1}{\xi + 2\gamma}\log(n)}^\frac 1 p $ (which is smaller than $l_n$). Combining, Theorem \ref{thm:spc-bound} gives \begin{equation}\label{eq:deltan-ex3-analytic} \delta_n^2:= \operatorname{SPC}^{\Lambda^g_\psi}(k_n)\lesssim n^{-\frac{2\gamma}{2\gamma+\xi}}\log^{-\frac{2\beta}{p}+\frac{2\alpha\gamma}{2\gamma p+\xi p}}(n). 
\end{equation} In particular, this rate is worse than the (minimax) rate obtained by the $\alpha$-regular prior. \end{xmplno} \subsection{Modulus of continuity and inverse rates} \label{sec:examples} Below, we use Theorem~\ref{thm:phi-theta-bound} to bound the modulus at $f_0\in\eS \beta$, for $S=\mathcal{X}_{k}$ where $\mathcal{X}_{k}$ satisfies Assumption~\ref{ass:relations}, and for the three different choices of the linear operator~$H$. We then plug the rates~$\delta=\delta_n$ for the direct problem, obtained in the previous section, into these bounds. According to Theorem \ref{thm:direct-inverse}, the resulting rates are rates of contraction for the corresponding inverse problem \eqref{eq:Ys} under the respective prior. \begin{xmplno}[Example~\ref{xmpl:power-type} continued] Here the setup is exactly the same as in Example~\ref{xmpl:power-type}, with $s_j:=s_j(H)\asymp j^{-2p}$, such that $\Theta_{\varphi}(t)=t^{(\beta + p)/(2p)}$. For the (optimal) choice $k_\delta\asymp \delta^{-\frac{1}{\beta+p}}$, we thus get the bound on the modulus of continuity \begin{equation}\label{eq:modex1} \varphi\lr{\Theta_{\varphi}^{-1}(\delta)} \asymp \delta^{\frac{\beta/(2p)}{\beta/(2p) + 1/2}} = \delta^{\beta/(\beta + p)},\quad \text{as}\ \delta\to 0. \end{equation} Then, in order to get a rate of contraction for the original inverse problem with an $\alpha$-regular Gaussian prior truncated at $k_{n}$, it suffices to insert~$\delta_n$ from~\eqref{eq:deltan-ex1} into bound \eqref{eq:modex1} on the modulus of continuity. Indeed, for $\alpha\leq \beta$ we get the rate $$ \delta_n^{\frac{\beta}{\beta + p}}\lesssim n^{-\frac{\beta}{1 + 2\beta + 2p}}, $$ which is known to be the minimax rate in the inverse problem setting with the assumed moderately ill-posed operator $H$, under Sobolev-type smoothness $\beta$. 
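The exponent algebra of this example can be double-checked in exact rational arithmetic. The following Python snippet is purely illustrative and not part of the paper; it takes $\beta$ and $p$ to be integers, so that exact fractions apply, and verifies that the direct exponent from \eqref{eq:deltan-ex1} composed with the modulus exponent from \eqref{eq:modex1} recovers the minimax exponent $\beta/(1+2\beta+2p)$.

```python
from fractions import Fraction as F

def contraction_exponent(beta, p):
    # delta_n ~ n^{-(beta+p)/(1+2*beta+2*p)}  (square root of the direct rate)
    direct = F(beta + p, 1 + 2 * beta + 2 * p)
    # modulus of continuity bound:  delta^{beta/(beta+p)}
    modulus = F(beta, beta + p)
    return direct * modulus

# the two exponents compose to the minimax exponent beta/(1+2*beta+2*p)
for beta, p in [(1, 1), (2, 1), (3, 2), (5, 3)]:
    assert contraction_exponent(beta, p) == F(beta, 1 + 2 * beta + 2 * p)
```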
\end{xmplno} \begin{xmplno}[Example~\ref{xmpl:severe-type} continued] With the representation of~$\varphi$ and $\Theta_\varphi$ as in Example~\ref{xmpl:severe-type} we get the bound on the modulus of continuity \begin{equation}\label{eq:modex2} \varphi\lr{\Theta_{\varphi}^{-1}(\delta)} \asymp \log^{-\beta/p}(1/\delta),\quad \text{as}\ \delta\to 0, \end{equation} which, by again using Lemma~\ref{lem:geometry}, is achieved for $k_\delta\asymp \log^{1/p}(1/\delta).$ In order to get a rate of contraction for the original inverse problem with an $\alpha$-regular Gaussian prior truncated at~$k_n$, it suffices to insert $\delta_n$ from~\eqref{eq:deltan-ex2} into the bound~\eqref{eq:modex2} on the modulus of continuity. Regardless of whether~$\alpha\leq \beta$ or not, we get the rate $$ \log^{-\beta/p}(1/\delta_n)\lesssim\log^{-\beta/p}(n), $$ which is known to be the minimax rate in the inverse problem setting with the assumed severely ill-posed operator $H$, and under Sobolev-type smoothness $\beta$. That is, $\alpha$-regular Gaussian priors truncated at $k_{n}\asymp \log^\frac 1 p(n)$ are rate adaptive over Sobolev-type smoothness in this severely ill-posed operator setting. When using an analytic prior we need to insert the (sub-optimal) rate from~\eqref{eq:deltan-ex3-analytic} into bound \eqref{eq:modex2} on the modulus of continuity. This yields that $$ \log^{-\beta/p}(1/\delta_n)\lesssim\log^{-\beta/p}(n), $$ is a rate of contraction for the inverse problem. In particular, the truncated analytic Gaussian prior with truncation point~$k_n\asymp \log^\frac 1 p(n)$ is also rate adaptive over Sobolev balls $\eS\beta$, for all $\beta>0$. This is in agreement with the findings in~\cite[Section 3.3]{MR3757524}.
\end{xmplno} \begin{xmplno}[Example~\ref{xmpl:mild-type} continued] With the representation of~$\varphi$ and $\Theta_\varphi$ as in Example~\ref{xmpl:mild-type}, and using Lemma \ref{lem:geometry} again, we observe an asymptotic behavior for the modulus of continuity \begin{equation}\label{eq:modex3} \varphi\lr{\Theta_{\varphi}^{-1}(\delta)} \asymp \delta \log^p\lr{\frac 1 \delta},\quad \text{as}\ \delta\to 0, \end{equation} and this bound is achieved for $k_\delta\asymp \delta^{-\frac1\beta}\log^{-\frac{p}\beta}(1/\delta).$ This is (up to a logarithm) linear in~$\delta$, and the inverse problem is not much harder than the direct one. In analogy to~\cite{MR3815105}, the problem is \emph{mildly ill-posed}. Inserting the rate for~$\delta_n$ from~\eqref{eq:deltan-ex3} into bound \eqref{eq:modex3} on the modulus of continuity yields that $$ \delta_n \log^{p}(1/\delta_n)\lesssim n^{- \frac{\beta}{1 + 2 \beta}} \log^{{\frac{2\beta p}{1 + 2\beta}}}(n), $$ is a rate of contraction for the inverse problem. \end{xmplno} \subsection{Discussion on the non-commuting case} \label{sec:examples-discussion} We conclude with a discussion about the non-commuting case, and we revisit the setup of Example~\ref{xmpl:power-type}, i.e.,\ with Sobolev-type smoothness~$\beta$ and power type decay of the singular numbers of~$H$ as~$s_j(H) \asymp j^{-2p}$. In this case the applicability of Theorem~\ref{thm:direct-inverse} was limited to~$\mu \leq 2a + 1/2$, due to the assumed concavity of the function~$\psi$. Translating the assumed setup, we find that the exponent giving the smoothness of $f_{0}$ specifies to~$\mu:= \beta/(2p)$, while the exponent~$a$ in Assumption~\ref{ass:prior-linked2scale-noncomm} specifies to~$a:= (1 + 2 \alpha)/(4p)$. First, the assumption~$a\geq1/2$ imposes a minimum regularity on the prior, namely~$1+2\alpha\geq 2p$, whenever $2p>1$.
In terms of Sobolev smoothness~$\beta$, and for~$\alpha$-regular priors, the above limitation translates to~$\beta + p \leq 1 + 2(\alpha + p)$, and the function~$\psi$ would be given by~$\psi(t) = t^{(\beta + p)/(1 + 2 (\alpha + p))}$, being concave under this limitation. This is in accordance with the discussion at the end of~\S~\ref{sec:optimized-spc}, because when turning from~$f_0$ to~$g_0 = A f_0$ the Sobolev-type smoothness increases from~$\beta$ to~$\beta + p$. Also, the regularity of the prior, when turning from~$\Lambda^{f}$ to~$\Lambda^{g}$ increases from~$\alpha$ to~$\alpha + p$, see \eqref{eq:weyl-H-lmg}. Using this information to compute~$k_{n}$ from \eqref{eq:kn-def}, we get that the $\alpha$-regular prior truncated at $k_{n}\asymp n^\frac1{1+2\beta+2p}$ gives the minimax rate in this non-commuting setting, for~$\alpha\leq\beta\leq 1+2\alpha+p$. \section{Proofs} \label{sec:proofs} In order to understand the arguments that are used in some of the subsequent proofs, we recall a few facts from the theory of (bounded non-negative) self-adjoint operators in Hilbert space; we refer to~\cite{MR1477662} for a comprehensive treatment. First, we introduce the partial ordering for (non-negative) self-adjoint operators, say~$G^{1},G^{2}\colon Z \to Z$, acting in a Hilbert space~$Z$. We write~$G^{1}\prec G^{2}$ if~$\scalar{G^{1}z}{z} \leq \scalar{G^{2}z}{z},\ z\in Z$, and~$G^{1}\asymp G^{2}$ if there are constants~$0 < a_{1},a_{2} < \infty$ such that both~$G^{1} \prec a_{2}G^{2}$ and~$G^{2} \prec a_{1}G^{1}$. 
Weyl's Monotonicity Theorem, see~\cite[III.2.3]{MR1477662}, asserts that~$G^1\prec G^2$ implies that the singular numbers also obey~$s_j(G^1) \leq s_j(G^2),\ j=1,2,\dots$ Furthermore, we recall \emph{Heinz' Inequality}, see~\cite[Prop.~8.21]{MR1408680}, which states that for~$0 \leq \theta \leq 1$ the inequality~$\norm{G^{1}z}{Z}\leq \norm{G^{2}z}{Z}$ implies that~$\norm{\lr{G^{1}}^{\theta}z}{Z}\leq \norm{\lr{G^{2}}^{\theta}z}{Z}$, where the fractional power is again defined by spectral calculus. We shall also use the fact that for a positive-definite, self-adjoint operator~$H\colon X \to~X$, an isometry~$U\colon X \to Y$, and an index function~$\zeta$, we have from spectral calculus that \begin{equation}\label{eq:speccalc} U \zeta(H) U^\ast = \zeta(U H U^\ast). \end{equation} Finally, the above ordering in the space of self-adjoint operators in Hilbert space gives rise to notions such as \emph{operator monotonicity} and \emph{operator concavity}, extending the usual comparisons from real-valued functions to self-adjoint operators by spectral calculus, and we refer to the monograph~\cite{MR1477662}. Specifically, for some range, say~$[0,a]$, an operator valued function~$\psi$ is operator concave if for any pair of non-negative self-adjoint operators~$G^1, G^2$ with spectra in~$[0,a]$ it holds true that $$\alpha \psi(G^1) + (1-\alpha) \psi(G^2)\prec \psi\lr{\alpha G^1 + (1-\alpha)G^2},\quad 0 \leq \alpha \leq 1. $$ In our subsequent analysis we will confine ourselves to power type index functions. Such functions are operator concave if and only if they are concave. However, we occasionally use and highlight the relevance of the operator concavity to indicate that the results have extensions to the more general context, without dwelling on this.
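In finite dimensions, both Weyl's monotonicity and operator concavity can be illustrated numerically. The following Python sketch is purely illustrative (the $2\times 2$ matrices and the choice $\psi(t)=\sqrt t$ are arbitrary); it uses the closed-form principal square root of a symmetric positive semi-definite $2\times 2$ matrix.

```python
import math

def mat_sqrt(m):
    # principal square root of a symmetric psd 2x2 matrix, via the identity
    # sqrt(M) = (M + sqrt(det M) I) / sqrt(tr M + 2 sqrt(det M))
    (a, b), (_, d) = m
    s = math.sqrt(a * d - b * b)
    t = math.sqrt(a + d + 2 * s)
    return [[(a + s) / t, b / t], [b / t, (d + s) / t]]

def eigs(m):
    # eigenvalues of a symmetric 2x2 matrix, in increasing order
    (a, b), (_, d) = m
    r = math.sqrt((a - d) ** 2 + 4 * b * b)
    return (0.5 * (a + d - r), 0.5 * (a + d + r))

def comb(al, m1, m2):
    # convex combination al*m1 + (1-al)*m2
    return [[al * m1[i][j] + (1 - al) * m2[i][j] for j in range(2)] for i in range(2)]

G1 = [[2.0, 1.0], [1.0, 3.0]]
G2 = [[5.0, -1.0], [-1.0, 4.0]]

# Weyl's monotonicity: G1 <= G1 + E in the Loewner order, for a psd
# perturbation E, implies that the ordered eigenvalues are dominated as well
E = [[1.0, 1.0], [1.0, 1.0]]          # rank one, positive semi-definite
G1E = [[G1[i][j] + E[i][j] for j in range(2)] for i in range(2)]
assert all(x <= y + 1e-12 for x, y in zip(eigs(G1), eigs(G1E)))

# operator concavity of t -> sqrt(t):
# al*sqrt(G1) + (1-al)*sqrt(G2) <= sqrt(al*G1 + (1-al)*G2) in the Loewner order
al = 0.3
lhs = comb(al, mat_sqrt(G1), mat_sqrt(G2))
rhs = mat_sqrt(comb(al, G1, G2))
D = [[rhs[i][j] - lhs[i][j] for j in range(2)] for i in range(2)]
assert eigs(D)[0] >= -1e-12
```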
\subsection{Proofs of Section~\ref{sec:direct}} \begin{proof}[Proof of Proposition~\ref{prop:error-direct-unified}] The bound for the $\operatorname{SPC}(g_{0},k)$ will be based on the decomposition in~(\ref{eq:spc-sum}), and we shall bound each summand separately. We start by bounding the posterior spread, and notice that for a (non-negative finite rank) operator~$G\colon Y \to Y$ we always have that~$\operatorname{tr}{G} \leq (\operatorname{rank}{G}) \norm{G}{Y\to Y}$. Since the prior covariance~$C_{k}$ has rank at most~$k$, and since~$\lr{C_{k} + \frac 1 n}^{-1}C_{k}$ is norm-bounded by one, we can bound the posterior spread as $$ \operatorname{tr}\lr{C_{post,k}} \leq \frac k n \norm{\lr{C_{k} + \frac 1 n}^{-1}C_{k}}{Y \to Y} \leq \frac k n. $$ Similarly we bound the estimation variance as \begin{align*} &\mathbb E\norm{\frac 1 {\sqrt n}\lr{C_{k} + \frac 1 n}^{-1} C_{k}\xi}{}^{2} = \frac 1 n \operatorname{tr}\left(\lr{C_{k} + \frac 1 n}^{-2} C_{k}^{2} \right)\\ &\leq \norm{\lr{C_{k} + \frac 1 n}^{-1} C_{k}}{}\times\operatorname{tr}\lr{C_{post,k}}\leq \frac{k}{n}. \end{align*} It remains to bound the estimation bias~$\norm{g_{0} - \mathbb E \hat g_{k}}{Y}$ under smoothness~$g_{0}\in~\Lambda^{g}_{\psi}$. To this end we notice that~$\mathbb E(\hat g_k) =\lr{C_{k} + \frac 1 n}^{-1} C_{k} g_0$. Then the bias simplifies to \begin{equation} g_{0} - \mathbb E\hat g_{k} =\lr{I -\lr{C_{k} + \frac 1 n}^{-1} C_{k} }g_{0} = \frac 1 n\lr{C_{k}+ \frac 1 n}^{-1}g_{0}.\label{ali:bias} \end{equation} We introduce the residual function of Tikhonov regularization~$r_{\alpha}(t) := \alpha/(t + \alpha),$ $\alpha>0, t>0$, and it is readily checked that for a sub-linear index function~$\psi$ we have that~$r_{\alpha}(t) \psi(t) \leq \psi(\alpha)$. This is then used by spectral calculus as an operator function~$r_{\alpha}(C_{k})$, which implies that~$\norm{r_{\alpha}(C_{k})\psi(C_{k})}{Y\to Y}\leq \psi(\alpha)$.
Since $\norm{r_{\alpha}(C_{k})}{Y \to Y}\leq~1$, this yields with~$\alpha:= 1/n$ and for~$g_0 \in\Lambda^g_\psi$, that \begin{align} \norm{g_{0} - \mathbb E\hat g_{k}}{Y} & = \norm{r_{\alpha}(C_{k})g_0}{Y}\notag\\ & \leq \norm{r_{\alpha}(C_{k})\psi(C_{k})}{Y\to Y} +\norm{r_{\alpha}(C_{k})\lr{\psi(\Lambda^{g}) - \psi(C_{k})}}{Y\to Y} \notag\\ & \leq \psi\lr{\frac 1 n} + \norm{\psi(\Lambda^{g}) - \psi(C_{k})}{Y\to Y},\label{ali:psi-diff} \end{align} where the last inequality holds if the index function~$\psi$ is sub-linear. Otherwise, the maximal decrease of the first summand (as~$n\to \infty$) is of the order~$\frac 1 { n}$, which is known as the saturation of Tikhonov regularization. The second summand in~\eqref{ali:psi-diff} will be bounded, both for the commuting (native prior or inherited prior with commuting $\Lambda^{f}, T^\ast T$) and non-commuting (inherited prior with non-commuting $\Lambda^{f}, T^\ast T$) cases. This will then result in an overall bound for the~$\operatorname{SPC}$ after taking into account the bounds for the posterior spread and the estimation variance as already established. First, in the native case, the projections~$Q_{k}$ are orthogonal on the singular subspaces of~$\Lambda^{g}$, and we have that~$\psi(C_{k}) = Q_{k}\psi(\Lambda^{g})$. Thus~$\psi(\Lambda^{g}) - \psi(C_{k}) = (I - Q_k) \psi(\Lambda^{g}) $, and hence we have that $$ \norm{\psi(\Lambda^{g}) - \psi(C_{k})}{Y\to Y} = \psi\lr{s_{k+1}(\Lambda^{g})}. $$ Thus, overall, from~\eqref{ali:psi-diff}, and the corresponding bounds for the posterior spread and the estimation variance, the $\operatorname{SPC}^{\Lambda^g_\psi}(k,g_0)$ is bounded by \begin{equation} \label{eq:spc-bound-2terms} \operatorname{SPC}^{\Lambda^g_\psi}(k, g_{0}) \leq \max\set{\psi^{2}\lr{\frac 1 { n}}, \frac \cb {n^2}} + \psi^{2}\lr{s_{k+1}(\Lambda^{g})} + 2\frac k n, \end{equation} for some constant $\cb>0$, and this holds uniformly for~$g_0\in \Lambda^g_\psi$.
The proof is complete, since~$1/n^2\leq k/n$. We turn to the case of inherited priors, and we shall use the operator concavity of the index function~$\psi$. This implies, cf.~\cite[Thm.~X.1.1]{MR1477662}, that the second summand in~\eqref{ali:psi-diff} is bounded as \begin{equation}\label{eq:psi-norm-diff} \norm{\psi(\Lambda^{g}) - \psi(C_{k})}{Y \to Y} \leq \psi\lr{\norm{\Lambda^{g} - C_{k}}{Y \to Y}}. \end{equation} We have that $$ \Lambda^{g} - C_{k} = T \Lambda^{f} T^{\ast} - T \lr{\Lambda^{f}}^{1/2} P_{k} \lr{\Lambda^{f}}^{1/2} T^{\ast} = T \lr{\Lambda^{f}}^{1/2} \lr{I - P_k} \lr{\Lambda^{f}}^{1/2} T^{\ast}, $$ which gives $$ \norm{\Lambda^{g} - C_{k}}{Y \to Y} = \norm{T \lr{\Lambda^{f}}^{1/2} \lr{I - P_k}}{X \to Y}^2. $$ We thus bound the approximation error \begin{equation}\label{eq:rhok} \rho_k := \norm{T \lr{\Lambda^{f}}^{1/2}(I - P_{k})}{X \to Y}, \end{equation} which expresses the capability to approximate the compound operator~$T \lr{\Lambda^{f}}^{1/2}$ by finite rank approximations, yielding by virtue of \eqref{eq:psi-norm-diff} that \begin{equation}\label{eq:psi-norm-rhok} \norm{\psi(\Lambda^{g}) - \psi(C_{k})}{Y \to Y} \leq \psi\lr{\rho_k^{2}}. \end{equation} To this end, we will rely upon the link between~$T$ and~$\Lambda^{f}$, as captured by Assumption~\ref{ass:prior-linked2scale-noncomm}. First, using Weyl's Monotonicity Theorem with Assumption \ref{ass:prior-linked2scale-noncomm} we find that \begin{equation} \label{eq:weyl-H-lmf} s_{j}^{2a}(H) \asymp s_{j}(\Lambda^{f}),\quad j=1,2,\dots \end{equation} Next, by applying Heinz' Inequality with~$\theta= 1/(2a)\leq 1$, we see that \begin{equation} \label{eq:h12-lmf} \norm{H^{1/2}f}{X} \asymp \norm{\lr{\Lambda^{f}}^{1/(4a)}f}{X},\quad f\in X.
\end{equation} Using spectral calculus and Assumption \ref{ass:prior-linked2scale-noncomm} with~$f:= T^{\ast}g$ we find for arbitrary~$g\in Y$ that \begin{equation} \label{eq:aast-lmg} \norm{\lr{T T^{\ast}}^{a+1/2} g}{Y} = \norm{ H^{a} T^{\ast}g}{{X}} \asymp \norm{\lr{\Lambda^{f}}^{1/2} T^{\ast}g}{{X}} = \norm{\lr{\Lambda^{g}}^{1/2}g}{Y}. \end{equation} Thus, Weyl's Monotonicity Theorem yields \begin{equation} \label{eq:weyl-H-lmg} s_{j}^{a+1/2}(H) = s_{j}^{a+1/2}(T T^{\ast}) \asymp s_{j}^{1/2}(\Lambda^{g}),\quad j=1,2,\dots \end{equation} We shall use these estimates, and the fact that~$P_{k}$ are the singular projections of~$\Lambda^{f}$, to bound~$\rho_{k}$ as \begin{align*} \rho_{k} & = \norm{T \lr{\Lambda^{f}}^{1/2}(I - P_{k})}{X \to Y} = \norm{H^{1/2}\lr{\Lambda^{f}}^{1/2}(I - P_{k})}{X \to X}\\ & \asymp \norm{\lr{\Lambda^{f}}^{1/(4a)}\lr{\Lambda^{f}}^{1/2}(I - P_{k})}{X \to X} = s_{k+1}^{\frac{2a+1}{4a}}(\Lambda^{f})\\ & = s_{k+1}^{a+1/2}(H) \asymp s_{k+1}^{1/2}(\Lambda^{g}),\quad k=1,2,\dots \end{align*} Thus~$\rho_k^{2}\asymp s_{k+1}(\Lambda^{g})$, as~$k\to\infty$. Inserting this into the bound from~(\ref{eq:psi-norm-rhok}) we complete the estimate for the bias from~\eqref{ali:psi-diff}, and obtain the same bound as in the native case, when restricting to operator concave~$\psi$. This completes the proof. \end{proof} \begin{rem} Within the context of projection schemes for ill-posed equations in Hilbert space, a more elaborate analysis allows for bounding the bias for general spectral regularization schemes, and for certain index functions which can express higher order smoothness. Specifically, such index functions are products of operator concave and Lipschitz ones; we refer to~\cite[Thm.~2]{MR2036530} for details. \end{rem} \begin{proof}[Proof of Theorem~\ref{thm:spc-bound}] Let~$k_{n}$ be as in~\eqref{eq:kn-def} and consider the right hand side of~\eqref{eq:spc-bound}.
We then observe that for the index~$k_{n} +1$ we have that~$\psi^2\lr{s_{k_{n}+1}(\Lambda^{g})} \leq \max\set{\psi^{2}\lr{\frac 1 n},\frac {k_{n}+1} n}$. For the proof we shall distinguish two cases. First, we shall assume that~$\psi^2(1/n) \leq k_{n}/n$. In this case we bound \begin{align*} \psi^2\lr{\frac 1 n} + \psi^2\lr{s_{k_{n}+1}(\Lambda^{g})} + \frac{k_{n}}{n} & \leq 2 \frac{k_{n}}{n} + \psi^{2}\lr{s_{k_{n}+1}(\Lambda^{g})}\\ & \leq 2 \frac{k_{n}}{n} + \frac{k_{n} +1}{n}\leq 4 \frac{k_{n}}{n}. \end{align*} In the other case, when~$1/n \leq k_{n}/n < \psi^2(1/n)$, we bound \begin{align*} \psi^2\lr{\frac 1 n} + \psi^2\lr{s_{k_{n}+1}(\Lambda^{g})} + \frac{k_{n}}{n} & \leq 2 \psi^2\lr{\frac 1 n} + \max\set{\psi^2\lr{\frac1n}, \frac{k_{n}}n+\frac1n}\\ & \leq 4 \psi^2\lr{\frac 1 n}. \end{align*} Thus, in either case we find that $$ \psi^2\lr{\frac 1 n} + \psi^2\lr{s_{k_{n}+1}(\Lambda^{g})} + \frac{k_{n}}{n} \leq 4 \max\set{\psi^{2}\lr{\frac 1 n},\frac {k_{n}} n}, $$ and by Proposition~\ref{prop:error-direct-unified} we get the bound~\eqref{eq:spc-bound-thm} in both settings considered in the statement. In order to assert that the contraction rate is order optimal in the case when ~$\psi^2(1/n) \leq k_{n}/n$, we use the fact that $$ \inf_{k} \lr{\psi^{2}\lr{s_{k+1}(\Lambda^{g})} +k/n} \geq \frac {k_{n}}{n}, $$ together with Proposition~\ref{prop:donohoetal}. The last bound is seen as follows. First, if~$k\geq k_{n}$ the above bound is trivial. If, on the other hand~$k < k_{n}$, yielding that~$k+1 \leq k_{n}$, then $$ \psi^{2}\lr{s_{k+1}(\Lambda^{g})} +k/n \geq \psi^{2}\lr{s_{k_{n}}(\Lambda^{g})} \geq \frac{k_{n}}{n}, $$ by the definition of~$k_{n}$ in~(\ref{eq:kn-def}). \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:alphalessbeta}] Notice that at~$k_{n}+1$ we have that~\begin{equation}\label{eq:kn+1}\psi^{2}\lr{s_{k_{n}+1}(\Lambda^{g})} \leq \max\set{\psi^2\lr{\frac 1 n },\frac {k_{n} +1} n}. 
\end{equation} We first assume~\eqref{eq:alphabeta} to hold and show that $k_{n}/n$ dominates in \eqref{eq:spc-bound-thm}. If~$\psi^2(1/n) \leq (k_{n}+1)/n$, then we find \begin{equation}\label{eq:lower-order} \psi^2(1/n) \leq \frac{k_{n}+1}{n}\leq \cc\cd \frac{k_{n}}{n} \end{equation} for~$n \geq n_0:=1/(\cc\cd -1)$, which means that $k_{n}/n$ dominates in \eqref{eq:spc-bound-thm}. Otherwise, if~$(k_{n}+1)/n \leq \psi^2(1/n)$ then \eqref{eq:kn+1} gives that~$\psi^{2}\lr{s_{k_{n}+1}(\Lambda^{g})} \leq \psi^2(1/n)$, thus~$s_{k_{n}+1}(\Lambda^{g}) \leq 1/n$. Using~\eqref{eq:kn-def},~\eqref{eq:alphabeta} and~\eqref{eq:decay-rate}, we bound $$ \psi^2\lr{\frac 1 n} < \psi^2\lr{s_{k_{n}}(\Lambda^{g})} \leq \cd k_{n} s_{k_{n}}(\Lambda^{g}) \leq \cc\cd k_{n} s_{k_{n}+1}(\Lambda^{g}) \leq \cc\cd \frac{k_{n}}{n}, $$ which again proves that $k_{n}/n$ dominates in \eqref{eq:spc-bound-thm}. For the converse implication, suppose that~\eqref{eq:alphabeta} is violated. Again, if $\frac{k_{n}+1}{n}\leq \psi^2(1/n)$ then $$ \frac{k_{n}}{n} \leq \frac{k_{n}+1}{n}\leq \psi^2(1/n), $$ showing that the regularization bias $\psi^2(1/n)$ dominates in \eqref{eq:spc-bound-thm}. Otherwise, if~$\psi^2(1/n) \leq (k_{n}+1)/n$, then by \eqref{eq:kn+1} we find that $\psi^2\lr{s_{k_{n} +1}(\Lambda^{g})} \leq \frac{k_{n}+1}{n}$, or equivalently that $$ n \leq \frac {k_{n}+1} {\psi^{2}\lr{s_{k_{n} + 1}(\Lambda^{g})}}. $$ The violation of~\eqref{eq:alphabeta} yields that \begin{equation}\label{eq:quottozero} \frac{j s_{j}(\Lambda^{g})}{\psi^{2}\lr{s_{j}(\Lambda^{g})}} \to 0\quad \text{as}\ j\to\infty. \end{equation} Hence, first using Assumption~\ref{ass:decay-rate}, we can bound \begin{align*} n s_{k_{n}}(\Lambda^{g})& \leq \cc n s_{k_{n} + 1}(\Lambda^{g})\leq \cc \frac{(k_{n} +1) s_{k_{n} + 1}(\Lambda^{g})}{\psi^{2}\lr{s_{k_{n} +1}(\Lambda^{g})}} \longrightarrow 0, \end{align*} by virtue of~\eqref{eq:quottozero}, because~$k_{n} \to \infty$, see Remark~\ref{rem:kn-infinite}.
Thus, we must have that $s_{k_{n}}(\Lambda^{g}) \leq \frac 1 n$, for $n$ sufficiently large. Overall, we conclude that $$ \frac {k_{n}}{n} < \psi^2\lr{s_{k_{n}}(\Lambda^{g})} \leq \psi^{2}\lr{\frac 1 n}, $$ showing that the regularization bias dominates~$k_{n}/n$, as $n\to\infty$ and thus~$k_{n}\to\infty$. \end{proof} \subsection{Proofs of Section~\ref{sec:inverseP}} In order to establish the bound for the modulus of continuity, we rely on the following auxiliary result, bounding the modulus of continuity in terms of the degree of approximation and the modulus of injectivity, as introduced in Section~\ref{sec:main-bound}. \begin{prop}\label{prop:main-error-bound} Let~$f_{0}\in X$. The following bound holds true for every~$k\in\mathbb N$ and~$h\in \mathcal{X}_{k}$. \begin{equation} \label{eq:main-error-Xn} \norm{h - f_{0}}{X} \leq \lr{1 + \frac{\varrho(H^{1/2},\mathcal{X}_{k})}{j(H^{1/2},\mathcal{X}_{k})}}\norm{(I - P_{k})f_{0}}{X} + \frac{\norm{H^{1/2}(h - f_{0})}{X}}{j(H^{1/2},\mathcal{X}_{k})}. \end{equation} \end{prop} \begin{proof} Given~$f_{0}$ we assign~$f_{k}:= P_{k}f_{0}$. Clearly, both~$h, f_{k}\in \mathcal{X}_{k}$. Then we can bound $$ \norm{h - f_{0}}{} \leq \norm{h - f_{k}}{} + \norm{f_{k} - f_{0}}{} = \norm{h - f_{k}}{} + \norm{(I - P_{k})f_{0}}{}. $$ For the first summand we continue and bound, noticing that~$P_{k}(h - f_{k}) = h - f_{k}$, as \begin{align*} \norm{h - f_{k}}{} & = \norm{P_{k}(h- f_{k})}{} \leq \frac{\norm{H^{1/2}(h - f_k)}{}}{j(H^{1/2},\mathcal{X}_{k})}\\ & \leq \frac{\norm{H^{1/2}(h - f_{0})}{} + \norm{H^{1/2}(f_{0} -f_{k})}{} }{j(H^{1/2},\mathcal{X}_{k})}\\ & \leq \frac{\norm{H^{1/2}(h - f_{0})}{}}{j(H^{1/2},\mathcal{X}_{k})} + \frac{\norm{H^{1/2}(f_{0} - f_{k})}{}}{j(H^{1/2},\mathcal{X}_{k})}\\ & \leq \frac{\norm{H^{1/2}(h - f_{0})}{}}{j(H^{1/2},\mathcal{X}_{k})}+ \frac{\varrho(H^{1/2},\mathcal{X}_{k})\norm{(I - P_{k})f_{0}}{}}{j(H^{1/2},\mathcal{X}_{k})}. \end{align*} This completes the proof.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:phi-theta-bound}] Given the bound from Proposition~\ref{prop:main-bound} there is a constant~$\cb$ such that $$ \omega_{f_{0}}(H^{-1/2},\mathcal{X}_{\nast},\delta) \leq \cb \lr{{\varphi}(s_{{k_{\delta}}+1}) + \frac{\delta}{\sqrt{s_{{k_{\delta}}}}}}. $$ (We can take~$\cb:= \max\set{M\lr{1 + C_{P}C_{B}},C_{B}}$.) Let~$t_{\delta}$ be the solution to the equation~$\Theta_{\varphi}(t) = \delta$, and ${k_{\delta}}$ be given as in~(\ref{eq:nast}). Notice that~$t_{\delta} \to 0$ as~$\delta\to 0$, and hence that~${k_{\delta}}\to\infty$ as~$\delta\to 0$. First we see that~$\Theta_{\varphi}\lr{s_{{k_{\delta}} +1} }\leq \delta$, and hence $\varphi(s_{{k_{\delta}}+1})\leq \varphi\lr{\Theta_{\varphi}^{-1}(\delta)}$. Also, $s_{{k_{\delta}}} >t_{\delta}$, and therefore we find that $$ \frac{\delta}{\sqrt{s_{{k_{\delta}}}}} < \frac{\delta}{\sqrt{t_{\delta}}} = \frac{\Theta_{\varphi}\lr{\Theta_{\varphi}^{-1}(\delta)}}{\sqrt{\Theta_{\varphi}^{-1}(\delta)}} = \varphi\lr{\Theta_{\varphi}^{-1}(\delta)}. $$ Overall this results in the desired bound with~$\cg:= 2 \cb$. \end{proof} \subsection{Proofs of Section~\ref{sec:relating}} \begin{proof} [Proof of Lemma~\ref{lem:commuting-H-lmf}] This is a consequence of the simultaneous diagonalization Theorem, see e.g.~\cite{MR2978290}. If~$H=A^\ast A$ and~$\Lambda^{f}$ commute then this also holds true for the projections~$P_{k}$, because these are singular with respect to~$\Lambda^{f}$. The polar decomposition of~$A$ yields an isometry~$U:X\to Y$ such that~$A = U H^{1/2}$, so that $$ \Lambda^{g}:=A\Lambda^{f}A^\ast=U H^{1/2}\Lambda^{f} H^{1/2}U^\ast, \;\Lambda^{g}:Y\to Y.
$$ We thus see that $$ C_{k}= A P_{k} \Lambda^{f} P_{k}A^{\ast} = U P_{k} U^{\ast} U H^{1/2} \Lambda^{f} H^{1/2}U^{\ast} U P_{k} U^{\ast}=U P_{k}U^\ast\Lambda^{g} U P_{k}U^\ast, $$ where~$Q_{k}:= U P_{k} U^{\ast}$ are orthogonal projections onto the spaces~$U X_{k}\subset Y$, which coincide with the singular spaces of $\Lambda^{g}$. Hence $$ C_{k}= Q_{k}\Lambda^{g} Q_{k}, $$ such that the push-forward prior~$A_{\sharp}\lr{\Pi_{k}^{X}}$ is native for~$g$. We next express the covariance $\Lambda^{g}$ via $H$. To this end, we use Assumption \ref{ass:link-noncomm}, quantifying the commutativity between~$H$ and~$\Lambda^{f}$ by assuming that these are linked via a certain index function~$\chi$. Then, recalling from Definition \ref{de:index-noncomm} that $\Theta_{\chi}$ denotes the companion of $\chi$, we can write, using~\eqref{eq:speccalc}, that \begin{equation} \label{eq:lmg-proof} \Lambda^{g}= U H^{1/2} \Lambda^{f} H^{1/2} U^{\ast} = U \Theta_{\chi}^{2}(H) U^{\ast}= \Theta_{\chi}^{2}(U HU^\ast). \end{equation} For the last equality we used that~$U$ is an isometry. In particular, identity~\eqref{eq:lmg-proof} yields $s_j(\Lambda^{g}) = \Theta_{\chi}^{2}(s_j),\ j=1,2,\dots$, where we recall that $s_j:=s_j(H).$ Assumption~\ref{ass:link-noncomm} also allows us to translate a given source condition (smoothness class) for $f_0\in X$ relative to the operator~$H$, to a source condition for $g_0=A f_0 = U H^{1/2}f_0\in Y$, relative to the operator~$\Lambda^{g}$. More precisely, we shall identify an index function~$\psi$ such that $g_0\in\Lambda^{g}_\psi.$ Indeed, for any element~$f_{0}\in {H_\varphi}$ we find, with~$w:= U v\in Y$, and using~\eqref{eq:lmg}, that \begin{equation} \label{eq:g0inlambda} g_{0}:= U H^{1/2}f_{0} = U \Theta_{\varphi}(H)U^\ast (U v) = \Theta_{\varphi}\lr{\lr{\Theta_{\chi}^{2}}^{-1}(\Lambda^{g})}w,\quad \norm{w}{Y}\leq 1, \end{equation} where $\Theta_{\varphi}$ is the companion of $\varphi$, see Definition \ref{de:index-noncomm}.
This corresponds to~$g_{0}\in \Lambda^{g}_{\psi}$ with index function~$\psi(t):= \Theta_{\varphi}(\lr{\Theta_{\chi}^{2}}^{-1}(t)),\ t>0$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:smoothness-h2lmg}] Here we start from~(\ref{eq:aast-lmg}) (with~$T := A$), and apply Heinz' Inequality with~$\theta:= \frac{\mu+1/2}{a + 1/2}\leq 1$. This gives $$ \norm{ H^{\mu} A^{\ast}g}{X} =\norm{\lr{A A^{\ast}}^{\mu+1/2} g}{Y} \asymp \norm{\lr{\Lambda^{g}}^{\frac{\mu+1/2}{2a+1}}g}{Y},\quad g\in Y. $$ By virtue of Douglas' Range Inclusion Theorem (we refer to~\cite{MR3985479}), this implies $$ \mathcal R\lr{A H^{\mu}} = \mathcal R\lr{\lr{\Lambda^{g}}^{\frac{\mu+1/2}{2a+1}}}, $$ such that a source-wise representation $f_0=H^\mu v, \;v\in X$, yields a corresponding representation $g_{0}=A f_{0} =(\Lambda^{g})^{\frac{\mu+1/2}{2a+1}} w, \;w\in Y$. Finally, under~$\mu\leq a$ we see that~$\frac{\mu + 1/2}{2a+1}\leq 1/2$, which yields the operator concavity of~$\psi^{2}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:smoothness-h2lmg-3over2}] First, as argued before, the relation in~\eqref{eq:3over2} implies the validity of Assumption~\ref{ass:prior-linked2scale-noncomm}, such that~(\ref{eq:aast-lmg}) holds true, and we have that \begin{equation} \label{eq:lmg12} \norm{H^{a}A^{\ast}g}{X} \asymp \norm{\lr{\Lambda^{g}}^{1/2}g}{Y},\quad g\in Y. \end{equation} Applying~\eqref{eq:h12-lmf} (which holds under Assumption~\ref{ass:prior-linked2scale-noncomm}) to~$f:= \Lambda^{f} A^{\ast}g,\ g\in Y$ we infer that \begin{equation} \label{eq:lmg-lmf} \norm{\Lambda^{g} g}{Y} \asymp \norm{\lr{\Lambda^{f}}^{(4a +1)/(4a)}A^{\ast}g}{X},\quad g\in Y. \end{equation} Now we actually use~(\ref{eq:3over2}). Indeed, for~$a\geq 1/2$ we find that~$(4a +1)/(4a) \leq 3/2$.
Thus, we can apply Heinz' Inequality with~$\theta:= 2(4a +1)/(12a)\in[0,1]$ to~(\ref{eq:3over2}) which gives \begin{equation} \label{eq:lmg1} \norm{\Lambda^{g} g}{Y} \asymp \norm{\lr{\Lambda^{f}}^{(4a +1)/(4a)}A^{\ast}g}{X} \asymp \norm{ H^{2a+1/2} A^{\ast}g}{X},\quad g\in Y. \end{equation} Thus we have two inequalities (derived from~(\ref{eq:lmg12}) and~(\ref{eq:lmg1}), respectively, with a generic constant~$C$), namely \begin{align*} \norm{H^{a} A^{\ast}g}{X} & \leq C \norm{\lr{\Lambda^{g}}^{1/2}g}{Y},\quad g\in Y, \intertext{and also} \norm{H^{2a+1/2}A^{\ast}g}{X} & \leq C \norm{\Lambda^{g} g}{Y}, \quad g\in Y. \end{align*} We are thus in the setting of~\cite[Thm.~3]{MR2277542} of interpolation in Hilbert scales, and we conclude that for~$a \leq \mu \leq 2a +1/2$ we have that \begin{equation} \label{eq:interpol-bound32} \norm{H^{\mu} A^{\ast}g}{X} \leq C \norm{\lr{\Lambda^{g}}^{(\mu + 1/2)/(2a +1)} g}{Y}, \quad g\in Y. \end{equation} Again, Douglas' Range Inclusion Theorem asserts that then $$ \mathcal R\lr{A H^{\mu}} \subseteq \mathcal R\lr{\lr{\Lambda^{g}}^{(\mu + 1/2)/(2a +1)}}. $$ In other words, every element~$g_{0}=A H^{\mu} v$ belongs to (a multiple of)~$\Lambda^g_\psi$ for the function~$\psi(t)= t^{(\mu + 1/2)/(2a +1)}$, and the proof is complete. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:validity-ass-relations}] If Assumption~\ref{ass:link-noncomm} holds, then the operators~$\Lambda^{f}$ and~$H$ commute, and hence the spaces~$X_{k}$ are also singular spaces for~$H$. In this case we have that $$ j(H^{1/2},X_{k}) = s_k,\quad \varrho(H^{1/2},X_{k}) = s_{k+1},\quad \text{and}\ \norm{(I - P_k) \varphi(H)}{X} = \varphi(s_{k+1}), $$ such that Assumption~\ref{ass:relations} holds with~$C_P = C_B = M = 1$.
Under Assumption~\ref{ass:prior-linked2scale-noncomm} we use~(\ref{eq:h12-lmf}) with~$f:= (I - P_{k})v,\ \norm{v}{X}\leq 1$, to obtain $$ \norm{H^{1/2}(I - P_{k})v}{X} \asymp \norm{\lr{\Lambda^{f}}^{1/(4a)}(I - P_{k})v}{X}\leq s_{k+1}^{1/(4a)}(\Lambda^{f}) \asymp s_{k+1}^{1/2}, $$ where we used~(\ref{eq:weyl-H-lmf}) for the last asymptotics. This shows the Jackson Inequality. Similarly, using~(\ref{eq:h12-lmf}) with~$f\in X_{k},\ \norm{f}{X}=1$ we bound $$ \norm{H^{1/2}f}{X} \asymp \norm{\lr{\Lambda^{f}}^{1/(4a)}f}{X} \geq j\lr{\lr{\Lambda^{f}}^{1/(4a)},X_{k}} = s_{k}^{1/(4a)}(\Lambda^{f}) \asymp s_{k}^{1/2}, $$ which shows that a Bernstein Inequality also holds true. Finally, for a power type function~$\varphi(t) = t^{\mu}$ and for~$0 < \mu \leq a$, Heinz' Inequality with~$\theta:= \mu/a$ applied to the asymptotics in Assumption~\ref{ass:prior-linked2scale-noncomm} yields $$ \norm{H^{\mu}(I - P_{k})v}{X} \asymp \norm{\lr{\Lambda^{f}}^{\mu/(2a)}(I - P_{k})v}{X}\leq R s_{k+1}^{\mu/(2a)}(\Lambda^{f}) \asymp s_{k+1}^{\mu}, $$ whenever~$\norm{v}{X}\leq R$. Similarly, under the stronger assumption~\eqref{eq:3over2} and for the extended range $0<\mu\leq 2a+1/2$, we find by using Heinz' Inequality with~$\theta=\mu/(3a)$ that $$ \norm{H^{\mu}(I - P_{k})v}{X} \asymp \norm{\lr{\Lambda^{f}}^{\mu/(2a)}(I - P_{k})v}{X}\leq R s_{k+1}^{\mu/(2a)}(\Lambda^{f}) \asymp s_{k+1}^{\mu}. $$ Consequently, whenever~$f_0\in H_\varphi$ for~$\varphi(t)= t^{\mu}$ with~$0 < \mu \leq 2a+1/2$ (under the stronger assumption \eqref{eq:3over2} when $\mu >a$), it holds $$ \norm{(I - P_{k})f_{0}}{X} \leq R \norm{(I - P_{k})H^{\mu}}{X\to X} = R \norm{H^{\mu}(I - P_{k})}{X\to X} \lesssim s_{k+1}^{\mu}, $$ which completes the proof.
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop:relation}] By Proposition \ref{prop:validity-ass-relations}, the spaces~$(X_{k})_{k\in\mathbb N}$ satisfy Assumption~\ref{ass:relations} (with~$C_{B}=C_{P}=M=1$), and hence Proposition~\ref{prop:main-bound} applies and yields \begin{align*} \omega_{f_0}\lr{H^{-1/2}, X_{\kn},\delta_n} & \leq 2\lr{\varphi(s_{k_{n}+1}) + \frac{\delta_n}{\sqrt {s_{k_{n}}}}}. \end{align*} We bound the two summands. By the definition of~$k_{n}$, and recalling from §~\ref{sec:commute} that $s_{j}(\Lambda^{g})=\Theta^2_{\chi}(s_j),$ we find that $$ \psi^2(\Theta_{\chi}^2(s_{k_{n}+1})) \leq \max\set{\psi^2(1/n), \frac{k_{n}+1}{n}}\leq 2 \max\set{\psi^2(1/n), \frac{k_{n}}{n}}\leq \delta_n^2. $$ This yields~$\Theta_{\varphi}(s_{k_{n} +1}) \leq \delta_n$, and hence that $$ \varphi(s_{k_{n}+1}) \leq \varphi\lr{\Theta_{\varphi}^{-1}(\delta_n)}. $$ To bound the second summand we recall the definition of $k_{n}$ to see that $$ \Theta_{\varphi}^{2}(s_{k_{n}}) = \psi^2\lr{s_{k_{n}}(\Lambda^{g})} > \max\set{\psi^2(1/n),\frac{k_{n}}{n}}\geq \frac{\delta_n^2}{\ci^2}. $$ Thus we see that~$s_{k_{n}} \geq \lr{\Theta_{\varphi}^{2}}^{-1}\lr{\frac{\delta_n^2}{\ci^2}}$, and consequently \begin{align*} \frac{\delta_n^{2}}{s_{k_{n}}} & \leq \frac{\delta_n^{2}}{\lr{\Theta_{\varphi}^{2}}^{-1}\lr{\frac{\delta_n^2}{\ci^2}}} = \ci^2 \frac{\Theta_\varphi^2\lr{\lr{\Theta_\varphi^2}^{-1}\lr{\frac{\delta_n^{2}}{\ci^2}}}}{\lr{\Theta_{\varphi}^{2}}^{-1}\lr{\frac{\delta_n^2}{\ci^2}}}\\ & = \ci^2 \varphi^2\lr{\lr{\Theta_\varphi^2}^{-1}\lr{\frac{\delta_n^{2}}{\ci^2}}} \leq \ci^2 \varphi^2\lr{\lr{\Theta_\varphi^2}^{-1}\lr{\delta_n^{2}}}, \end{align*} where for the last bound we used that both $\varphi$ and $\Theta_\varphi$ are non-decreasing. This completes the proof. \end{proof} The proof of Proposition~\ref{prop:relation} above consisted of three steps.
First, we used Assumption~\ref{ass:relations} in order to derive a bound for the modulus of continuity in terms of a decreasing (in~$k$) smoothness-dependent part, and a non-decreasing part. Then, each of the two terms was appropriately bounded by using the definition of~$k_{n}$. We follow a similar strategy in the next proof. \begin{proof} [Proof of Proposition~\ref{prop:relation-noncomm}] By virtue of Propositions~\ref{prop:validity-ass-relations} and~\ref{prop:main-bound}, we have the following error bound for the modulus of continuity: $$ \omega_{f_0}\lr{H^{-1/2}, X_{\kn},\delta_n} \lesssim {\varphi(s_{k_{n}+1}) + \frac{\delta_n}{\sqrt {s_{k_{n}}}}}. $$ By the definition of~$k_{n}$ from~(\ref{eq:kn-def}) we shall derive an upper bound for~$s_{k_{n}+1}$, and a lower bound for~$s_{k_{n}}$. For this we recall from~(\ref{eq:weyl-H-lmg}) that~$s_{j}^{2a+1} \asymp s_{j}(\Lambda^{g}),\ j=1,2,\dots$. So, by the choice of~$k_{n}$ and from~(\ref{eq:mild-ass}) we see $$ s_{k_{n} +1}^{2\mu + 1} \lesssim \psi^{2}\lr{s_{k_{n} +1}(\Lambda^{g})} \leq \frac 1 2 \delta_{n}^{2}, $$ such that we find~$s_{k_{n}+1}^{\mu} \lesssim \delta_{n}^{\mu/(\mu + 1/2)}$, bounding the decay of the smoothness-dependent term. It remains to lower bound~$s_{k_{n}}$. Again from~(\ref{eq:kn-def}) and~(\ref{eq:mild-ass}) we see that $$ \psi^{2}\lr{s_{k_{n}}(\Lambda^{g})} > \frac{1}{\ci^2}\delta_{n}^{2}, $$ which yields~$s_{k_{n}} \gtrsim \delta_{n}^{\frac 1 {\mu + 1/2}}$. This in turn yields~$\delta_{n}/\sqrt{s_{k_{n}}} \lesssim \delta_{n}^{\mu/(\mu + 1/2)}$, hence completing the proof. \end{proof}
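The exponent bookkeeping used repeatedly in the proofs above is easy to get wrong; the following symbolic sanity check (a sketch using sympy, not part of the proofs) confirms the main identities: the $\rho_k$ estimate, the two applications of Heinz' Inequality, and the final rate exponent.

```python
import sympy as sp

a, mu = sp.symbols('a mu', positive=True)
half = sp.Rational(1, 2)

# (1) rho_k estimate: with s_j(Lambda^f) ~ s_j(H)**(2a) (from the link
# H^(1/2) ~ (Lambda^f)^(1/(4a)) and Weyl's Monotonicity Theorem), the exponent
# (2a+1)/(4a) on s_{k+1}(Lambda^f) equals a + 1/2 on s_{k+1}(H).
assert sp.simplify((2*a + 1)/(4*a) * 2*a - (a + half)) == 0

# (2) Heinz with theta = (mu+1/2)/(a+1/2) applied to
# ||(A A*)^(a+1/2) g|| ~ ||(Lambda^g)^(1/2) g||: the two exponents rescale
# to mu + 1/2 on A A* and (mu+1/2)/(2a+1) on Lambda^g.
theta = (mu + half) / (a + half)
assert sp.simplify(theta * (a + half) - (mu + half)) == 0
assert sp.simplify(theta * half - (mu + half)/(2*a + 1)) == 0

# (3) Heinz with theta = 2(4a+1)/(12a) applied to the link in (eq:3over2),
# read here as ||H^(3a) f|| ~ ||(Lambda^f)^(3/2) f||: the exponents become
# 2a + 1/2 and (4a+1)/(4a), and theta = 1 exactly at a = 1/2.
theta2 = 2*(4*a + 1)/(12*a)
assert sp.simplify(theta2 * 3*a - (2*a + half)) == 0
assert sp.simplify(theta2 * sp.Rational(3, 2) - (4*a + 1)/(4*a)) == 0
assert theta2.subs(a, half) == 1

# (4) final rate: with s_{k_n} ~ delta**(1/(mu+1/2)), the exponents of
# s**mu and of delta/sqrt(s) coincide: mu/(mu+1/2) = 1 - (1/2)/(mu+1/2).
assert sp.simplify(mu/(mu + half) - (1 - half/(mu + half))) == 0
```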
\section{Introduction} Stochastic games were introduced in the 1950's by Shapley \cite{shapley53}. They are played by two opponents over a finite set of states: the state variable follows a Markov chain controlled by both players, to each state corresponds a matrix game that determines a stage payoff, and the first player maximises a discounted sum of the stage payoffs while the second player minimises the same amount. Stochastic games have a value $v^k_\lambda\in \ensuremath{\mathbb R}$ for any discount factor $\lambda\in(0,1]$ and any initial state $k$. Moreover, for each $\lambda$, the vector of values $v_\lambda=(v^1_\lambda,\dots,v^n_\lambda)\in \ensuremath{\mathbb R}^n$ is the unique fixed point of the so-called Shapley operator $\Phi(\lambda,\,\cdot\,):\ensuremath{\mathbb R}^n\to \ensuremath{\mathbb R}^n$, where $n\in \ensuremath{\mathbb N}^*$ denotes the number of states. The convergence of the values as $\lambda$ vanishes was established by Bewley and Kohlberg \cite{BK76} in the late 70's using Tarski-Seidenberg elimination theorem from mathematical logic and the Puiseux theorem. Three alternative proofs have been provided since then, by Szczechla, Connell, Filar and Vrieze \cite{SCFV97}, Oliu-Barton \cite{OB14} and Attia and Oliu-Barton \cite{AOB18a}. Besides the convergence, the latter provided a characterisation of the limit values. \\ But let us go back to matrix games. In the late 1920's, Von Neumann proved the celebrated minmax theorem: ``\emph{Every matrix game $G$ has a value, denoted by $\mathrm{val}(G)$, and both players have optimal strategies}''. 
The set of optimal strategies was characterised in the 1950's by Shapley and Snow \cite{SS50} as a polytope, and each of its extreme points corresponds to a square sub-matrix $\dot{G}$ of $G$, for which the following formula holds: \begin{equation} \label{formula_value} \mathrm{val}(G)=\frac{\det(\dot{G})}{S(\mathrm{co}(\dot{G}))} \end{equation} Here, for any square matrix $M$, $\mathrm{co}(M)$ denotes its co-factor matrix and $S(M)$ denotes the sum of its entries. The sub-matrices characterising the extreme points of the set of optimal strategies are the so-called \emph{Shapley-Snow kernels} of the game. Szczechla, Connell, Filar and Vrieze \cite{SCFV97} noted, in the late 1990's, that applying the theory of Shapley and Snow to stochastic games provides, for any fixed discount factor $\lambda$, a system of $n$ polynomial equalities (in $n$ variables) that is satisfied by the vector of values $v_\lambda$. Indeed, for each $1\leq k\leq n$ and $z\in \ensuremath{\mathbb R}^n$, the $k$-th coordinate of Shapley's operator $\Phi^k(\lambda,z)$ is the value of a matrix game, denoted by $\mathcal{G}^k(\lambda,z)$, whose entries depend polynomially on $(\lambda,z)$. By considering a Shapley-Snow kernel of each of these games at $z=v_\lambda$ and by setting: \begin{equation}\label{syst_1} P^k(\lambda,z):=\det(\dot{\mathcal{G}}^k(\lambda,z))-z^k S(\mathrm{co}(\dot{\mathcal{G}}^k(\lambda,z))) \end{equation} one deduces from \eqref{formula_value} that $v_\lambda$ satisfies the polynomial equality $P^k(\lambda,v_\lambda)=0$. Although initially defined for the variables $(\lambda,z)\in(0,1]\times \ensuremath{\mathbb R}^n$, the polynomial system \begin{equation}\label{aas}P^1(\lambda,z)=\dots=P^n(\lambda,z)=0\end{equation} can also be seen as an analytical variety in $\ensuremath{\mathbb C}^{n+1}$. To every choice of a square sub-matrix of $\mathcal{G}^k(\lambda,z)$ for each $1\leq k\leq n$ thus corresponds an analytical variety, so that their union $\mathcal{C}$ is an analytical variety too.
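Formula \eqref{formula_value} can be tried out numerically. A minimal sketch (the concrete game is made up for illustration): for a completely mixed $2\times 2$ game the optimal strategies are interior, so the full matrix is its own Shapley-Snow kernel and the formula returns the value directly.

```python
import numpy as np

# Completely mixed 2x2 game: both optimal strategies are interior, so the
# whole matrix is the unique Shapley-Snow kernel; val(G) = det(G)/S(co(G)).
G = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def cofactor_2x2(M):
    # co-factor matrix of [[a, b], [c, d]] is [[d, -c], [-b, a]]
    (a, b), (c, d) = M
    return np.array([[d, -c], [-b, a]])

value = np.linalg.det(G) / cofactor_2x2(G).sum()

# Direct check: the mixed strategy x = (1/3, 2/3) equalises both column
# payoffs at 2/3, which is the value of this game.
x = np.array([1/3, 2/3])
assert np.allclose(x @ G, [2/3, 2/3])
assert abs(value - 2/3) < 1e-9
```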
Szczechla et al. \cite{SCFV97} prove that the set $\{(\lambda,v_\lambda),\, \lambda\in(0,1]\}$ is a regular $1$-dimensional connected component of $\mathcal{C}$, from which they deduce the convergence of the values $v_\lambda$ as $\lambda$ vanishes.\\ In the present paper, we propose to apply the theory of Shapley and Snow to stochastic games in a different manner, namely through multiparameter eigenvalue problems (MEP), a terminology introduced by Atkinson in the 1960's. As we will show, our approach considerably simplifies the analysis of stochastic games and provides several new results. The connection between MEP, the theory of Shapley and Snow and stochastic games can be described as follows. First of all, represent the stochastic game by an $n\times (n+1)$ array of matrices: \begin{equation*}\label{aaas}D=\begin{pmatrix} M_0^1 & M_1^1 &\dots & M^1_n\\ \vdots & \vdots & \ddots & \vdots \\ M_0^n & M_1^n &\dots & M^n_n \end{pmatrix}\end{equation*} that contains all the relevant data of the stochastic game, namely, the matrices corresponding to each state, the transition probabilities and the discount factor. The array representation is reminiscent of MEP, except that the matrices in $D$ might be rectangular while MEP are only defined for arrays of square matrices. Indeed, if for each $1\leq k\leq n$, the matrices $M^k_0,\dots,M^k_n$ are square matrices of equal size, then $D$ defines a MEP, that is, the problem of finding a vector $z\in \ensuremath{\mathbb C}^n$ which satisfies: \begin{equation}\label{syst_2}\det({M}^k_0+ z^1 {M}^k_1+\dots+z^n {M}^k_n)=0,\quad 1\leq k\leq n \end{equation} MEP can be tackled by introducing $n+1$ auxiliary matrices, denoted by $\Delta^0,\dots,\Delta^n$, which allow one to transform \eqref{syst_2} into the following uncoupled system: \begin{equation}\label{syst_3}\det(\Delta^k-z^k \Delta^0)=0,\quad 1\leq k\leq n \end{equation} System \eqref{syst_3} is simpler to solve, as each variable appears in a separate equation.
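For $n=2$ the auxiliary matrices can be written down as operator determinants on the tensor-product space, following Atkinson's standard two-parameter construction (a sketch; the diagonal $2\times 2$ blocks below are made up so that all four solutions of \eqref{syst_2} can be checked by hand):

```python
import numpy as np

# Two-parameter problem det(M0 + z1*M1 + z2*M2) = 0 for two states, with
# diagonal 2x2 blocks: det(W1) = (1 + z1 + z2)(2 + z1 - z2) and
# det(W2) = (z1 - 2*z2)(-3 + z1 + 2*z2), so the four solutions are explicit.
A0, A1, A2 = np.diag([1., 2.]), np.diag([1., 1.]), np.diag([1., -1.])
B0, B1, B2 = np.diag([0., -3.]), np.diag([1., 1.]), np.diag([-2., 2.])

# One common sign convention for Atkinson's operator determinants:
D0 = np.kron(A1, B2) - np.kron(A2, B1)
D1 = np.kron(A2, B0) - np.kron(A0, B2)
D2 = np.kron(A0, B1) - np.kron(A1, B0)

# Uncoupled problems D1 u = z1 D0 u and D2 u = z2 D0 u; here every matrix is
# diagonal, so the generalized eigenvalues can be read off entrywise.
z1s = np.diag(D1) / np.diag(D0)
z2s = np.diag(D2) / np.diag(D0)
for z1, z2 in zip(z1s, z2s):
    # every uncoupled solution solves the original coupled system (syst_2)
    assert abs(np.linalg.det(A0 + z1 * A1 + z2 * A2)) < 1e-9
    assert abs(np.linalg.det(B0 + z1 * B1 + z2 * B2)) < 1e-9
```

In this instance $\Delta^0$ is invertible (its diagonal is $(-3,1,-1,3)$), which is exactly the kind of assumption under which Atkinson's equivalence between the coupled and the uncoupled systems holds.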
Moreover, Atkinson \cite{atkinson72} proved that \eqref{syst_2} and \eqref{syst_3} have the same solutions under suitable assumptions, such as the invertibility of the matrix $\Delta^0$. Applying the theory of MEP to stochastic games has two important consequences: on the one hand, it allows one to transform the polynomial system \eqref{aas} satisfied by the vector $v_\lambda$ into an uncoupled polynomial system, that is, a polynomial equation $P^k(\lambda,z^k)=0$ satisfied by $v^k_\lambda$, for each $1\leq k\leq n$; on the other hand, it provides new algebraic insight on the values. The bridge between MEP and stochastic games is provided by the theory of Shapley and Snow. Indeed, by considering a Shapley-Snow kernel of each of the games $\mathcal{G}^k(\lambda,v_\lambda)$, $1\leq k\leq n$, like in Szczechla et al. \cite{SCFV97}, we restrict our attention to an $n\times (n+1)$ array $\dot{D}$ of square (and relevant) matrices. \subsection{Main results} The combination of these two theories (Shapley and Snow \cite{SS50} and Atkinson \cite{atkinson72}) and their application to stochastic games is the main novelty of this paper, since our approach provides new tools, new results and simpler proofs of important known results. Our main results are the following: \paragraph{Result 1.} \emph{For any fixed $\lambda\in(0,1]$ and $1\leq k\leq n$ one has: \begin{itemize} \item[$(i)$] $v_\lambda^k$ is the unique $w\in \ensuremath{\mathbb R}$ satisfying $\mathrm{val}((-1)^n(\dot{\Delta}^k-w\dot{\Delta}^0))=0$. \item[$(ii)$] $\mathrm{rank}(\dot{\Delta}^k-v^k_\lambda \dot{\Delta}^0) < \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^k-w\dot{\Delta}^0)$. \item[$(iii)$] There exists a polynomial $P^k\in E^k$ such that $P^k(\lambda,v_\lambda^k)=0$. \item[$(iv)$] If the stage payoffs, the transition probabilities and the discount factor are rational, then $v^k_\lambda$ is algebraic of order at most $\mathrm{rank}(\dot{\Delta}^0)$.
\end{itemize} } In this result, $E^k$ denotes a finite set of bi-variate polynomials and $\dot{\Delta}^k$ and $\dot{\Delta}^0$ denote two square sub-matrices of $\Delta^k$ and $\Delta^0$ respectively. An explicit construction of $E^k$, $\dot{\Delta}^k$ and $\dot{\Delta}^0$ will be provided. Moreover, the bound of $(iv)$ is tight. \paragraph{Result 2.}\emph{ For any fixed $1\leq k\leq n$ one has: \begin{itemize} \item[$(i)$] Any accumulation point of $(v_\lambda^k)$, as $\lambda$ vanishes, belongs to a finite set $V^k$. As a consequence, the values converge and the limit $v^k_0:=\lim_{\lambda\to 0} v_\lambda^k$ belongs to $V^k$. \item[$(ii)$] If the stage payoffs, the transition probabilities and the discount factor are rational, then $v^k_0$ is algebraic of order at most $\mathrm{rank}(\dot{\Delta}^0)$. \end{itemize}} In this result, the set $V^k$ is constructed explicitly: it is the set of the real roots of finitely many polynomials obtained from the set of polynomials $E^k$. The set $V^k$ provides an alternative method to compute the value of $v^k_0$ exactly, provided that all the data of the game is rational. \paragraph{Result 3.} \emph{For any fixed $1\leq k\leq n$ one has: \begin{itemize} \item[$(i)$] There exists $P^k\in E^k$ and $\lambda^k_0>0$ such that $P^k(\lambda,v_\lambda^k)=0$, for all $\lambda\in (0,\lambda^k_0)$. \item[$(ii)$] There exists $\lambda^k_0>0$ and a finite set $W^k$ of Puiseux series on $(0,\lambda^k_0)$ such that the value function $\lambda\mapsto v^k_\lambda$, $\lambda\in(0,\lambda^k_0)$ belongs to $W^k$. \item[$(iii)$] As $\lambda$ vanishes one has: $|v^k_\lambda-v^k_0|=O(\lambda^{1/a})$, where $a=\mathrm{rank}(\dot{\Delta}^0)$. \end{itemize}} In this result, the main novelty is the explicit construction of $E^k$ and $W^k$, from which one can deduce new upper bounds for the speed of convergence of the discounted values as the discount factor vanishes.
Moreover, these results are obtained without invoking either the Tarski-Seidenberg elimination principle or the geometry of complex analytic varieties. \paragraph{Comments} \begin{itemize} \item Result 1 refines the characterisation of $v^k_\lambda$ provided by the authors in \cite{AOB18a}, which states that ``\emph{$v^k_\lambda$ is the unique $w\in \ensuremath{\mathbb R}$ satisfying $\mathsf{val}((-1)^n(\Delta^k-w\Delta^0))=0$}''. \item In the sequel, stochastic games with state-dependent action sets will be considered. If $K$ denotes some finite set of states and $p^k$ and $q^k$ denote the number of actions of Player 1 and 2, respectively, in state $k\in K$, then it is worth mentioning that $\dot{\Delta}^0$ is a square matrix whose size is bounded by $\prod_{k\in K} \min(p^k,q^k)$. Consequently, one has the following more explicit bound: $$\mathrm{rank}(\dot{\Delta}^0)\leq \prod\nolimits_{k\in K} \min(p^k,q^k)$$ \end{itemize} \subsection{Outline of the paper and notation} The paper is organised as follows. Section 2 is devoted to a brief presentation of the three theories we are concerned with: stochastic games, the theory of Shapley and Snow and multiparameter eigenvalue problems. For the first, we propose two presentations (a classical one, and a new one) and some relevant known results. Section 3 is devoted to establishing a link between the three theories, for a fixed discount factor, and to prove Result 1. In Section 4, we consider the case where the discount factor vanishes, and prove Results 2 and 3. Section 5 is devoted to some additional remarks concerning the tightness of the bounds, the exact computation of the limit values and an alternative construction of the so-called characterising polynomials. Section 6 is an Appendix.
\paragraph{Notation.} The following notation will be used throughout the paper: \begin{itemize} \item For any finite set $Z$, we denote its cardinality by $|Z|$ and $\Delta(Z)$ denotes the set of probability distributions over $Z$, i.e. $\{\alpha:Z\to [0,1],\ \sum_{z\in Z}\alpha(z)=1\}$. \item For any matrix $M$, $\transp{M}$ denotes its transpose, $S(M)$ denotes the sum of its entries and $\dot{M}$ denotes a sub-matrix of $M$. By $M\geq 0$ we indicate that all the entries of $M$ are nonnegative. \item For any \emph{square} matrix $M$, we denote its trace by $\ensuremath{\operatorname{tr}}(M):=\sum_i M_{ii}$ and its cofactor matrix by $\mathrm{co}( M)$. For each $(i,j)$, the $(i,j)$-th entry of $\mathrm{co}(M)$ is equal to $(-1)^{i+j}M_{ij}$, where $M_{ij}$ is the determinant of the matrix obtained by deleting the $i$-th row and $j$-th column. The matrices $M$ and $\mathrm{co}(M)$ are of same size and satisfy the following well-known formula: $$M \transp{\mathrm{co}(M)}=\det(M)\ensuremath{\operatorname{Id}}$$ where $\mathrm{co}(M)=1$ for any $1\times 1$-matrix $M$, by convention. \item We denote by $\textbf{1}$ and $U$, respectively, a column vector and a matrix of $1$'s. Their dimension will depend on the context. \item Any matrix $M$ is identified with a matrix game, also denoted by $M$. The value of $M$ is denoted by $\mathrm{val}(M)$. Mixed strategies of both players are considered as \emph{column} vectors. For any couple of mixed strategies $(x,y)$, the expected payoff is given by $\transp{x} M y$. \item Suppose that $M$ is a $p\times q$-matrix, and that $(x,y)\in \ensuremath{\mathbb R}^p\times \ensuremath{\mathbb R}^q$. For any sub-matrix $\dot{M}$ of $M$, we denote by $\dot{x}$ and $\dot{y}$ the restrictions of $x$ and $y$ to the row and column indices of $\dot{M}$, respectively.
\end{itemize} \section{Preliminaries} The aim of this section is to provide a brief presentation of stochastic games, the theory of Shapley and Snow and multiparameter eigenvalue problems. For the first, we propose two presentations: the classical one, due to Shapley \cite{shapley53}, along with the results obtained therein; and a new presentation recently proposed by the authors in a previous work \cite{AOB18a}. Some examples will be provided, in order to help the reader get acquainted with each of these theories. \subsection{Standard stochastic games}\label{ssg} Stochastic games are described by a tuple $\Gamma^k_\lambda=(K,I,J,g,q,\lambda,k)$, where $K$ is the set of states, $I$ and $J$ are the action sets of Player 1 and 2 respectively, $g:K\times I\times J\to \ensuremath{\mathbb R}$ is the payoff function, $q:K\times I\times J\to \Delta(K)$ is the transition function, $\lambda\in(0,1]$ is a discount factor and $k\in K$ is an initial state.\\ \noindent {We assume throughout the paper that $K$, $I$ and $J$ are finite sets}, and $K=\{1,\dots,n\}$.\\ \noindent The game $\Gamma^k_\lambda$ is defined as follows. At every stage $m\geq 1$, knowing the current state $k_m$, the players choose simultaneously and independently actions $i_m\in I$ and $j_m\in J$. The triplet $(k_m,i_m,j_m)$ has two effects: it produces a stage payoff $g_m=g(k_m,i_m,j_m)$ and determines the law $q(\,\cdot\,| k_m,i_m,j_m)$ of the state at stage $m+1$. Player $1$ maximises the expectation of $\sum_{m\geq 1}\lambda(1-\lambda)^{m-1} g_m$ given $k_1=k$, whereas Player $2$ minimises the same amount. By Shapley \cite{shapley53}, this game has a value, denoted by $v^k_\lambda$. For both players, a strategy is a mapping from the set of finite histories into his own set of mixed actions.
A strategy is optimal if it guarantees the value against any strategy of the opponent.\\ As already observed by Shapley \cite{shapley53}, the assumption that the current state is observed implies the existence of optimal stationary strategies, that is, strategies that depend only on the current state. For this reason, we will restrict our attention to stationary strategies throughout the paper. The sets of stationary strategies are denoted by $\Delta(I)^n$ and $\Delta(J)^n$, respectively. For any couple of stationary strategies $(x,y)$ and any initial state $k$, we denote by $\ensuremath{\mathbb P}^{k}_{x,y}$ the unique probability measure on the set of plays $(K\times I\times J)^\ensuremath{\mathbb N}$ induced by the couple $(x,y)$ on the $\sigma$-algebra generated by the cylinders. Similarly, we denote by $\ensuremath{\mathbb E}^k_{x,y}$ the expectation with respect to $\ensuremath{\mathbb P}^k_{x,y}$. Finally, we denote by $\gamma^k_\lambda(x,y)$ the corresponding expected payoff, i.e.: \begin{equation}\label{expected_payoff} \gamma^k_\lambda(x,y):=\ensuremath{\mathbb E}_{x,y}^{k}\left[\sum\nolimits_{m\ge 1} \lambda (1-\lambda)^{m-1}g(k_m,i_m,j_m)\right]\end{equation} The following useful notions were introduced in \cite{shapley53}. \begin{definition}\label{local_game} For each $\lambda\in(0,1]$, $1\leq k\leq n$ and $z\in \ensuremath{\mathbb R}^n$, the \defn{local game} $\mathcal{G}^k(\lambda,z)$ is an $I\times J$-matrix game whose entries are given by: \begin{equation*}(\mathcal{G}^k(\lambda,z))^{ij} :=\lambda g(k,i,j)+(1-\lambda)\sum_{\ell=1}^n q(\ell|k,i,j)z^{\ell}\end{equation*} \end{definition} \begin{definition}\label{Shapley_operator} For each $\lambda\in(0,1]$, the \defn{Shapley operator} $\Phi(\lambda,\,\cdot\,):\ensuremath{\mathbb R}^n\to \ensuremath{\mathbb R}^n$ is defined as follows.
For each $1\leq k\leq n$ and $z\in \ensuremath{\mathbb R}^n$: $$\Phi^k(\lambda,z):=\mathrm{val}(\mathcal{G}^k(\lambda,z))$$ \end{definition} The main results of \cite{shapley53} can be stated as follows: \begin{enumerate} \item For each $\lambda\in(0,1]$ and $1\leq k\leq n$, the stochastic game $\Gamma_\lambda^k$ has a value, denoted by $v^k_\lambda$, and both players have optimal stationary strategies. Moreover: \begin{equation*} v_{\lambda}^k=\max_{x \in \Delta(I)^n} \min_{y\in \Delta(J)^n} \gamma^k_\lambda(x,y)=\min_{y\in \Delta(J)^n} \max_{x \in \Delta(I)^n} \gamma^k_\lambda(x,y) \end{equation*} \item For each $\lambda\in(0,1]$, the vector of values $v_\lambda\in \ensuremath{\mathbb R}^n$ is the unique fixed point of $\Phi(\lambda,\,\cdot\,)$, which is a strict contraction of $\ensuremath{\mathbb R}^n$ with respect to the $L^\infty$-norm, i.e. $\max_k|\Phi^k(\lambda,z)-\Phi^k(\lambda,\bar{z})|\leq (1-\lambda)\max_k |z^k-\bar{z}^k|$, for all $z,\bar{z}\in \ensuremath{\mathbb R}^n$. \item For each $\lambda\in(0,1]$ and $1\leq k\leq n$, one has $|v^k_\lambda|\leq \|g\|:=\max_{(k,i,j)}|g(k,i,j)|$ and the map $\lambda\mapsto v^k_\lambda$ is $\|g\|$-Lipschitz continuous. \end{enumerate} \subsection{A new presentation of stochastic games}\label{newpres} It is customary to present stochastic games as a tuple $(K,I,J,g,q,\lambda,k)$, like we did in Section \ref{ssg}. 
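Before turning to the new presentation, note that Shapley's results already provide a practical algorithm: since $\Phi(\lambda,\,\cdot\,)$ is a $(1-\lambda)$-contraction, fixed-point iteration converges to $v_\lambda$ at a geometric rate. The following Python sketch (an illustration, not part of the formal development) runs this iteration on the absorbing game studied later in Example \ref{ex_2}, whose value is $(\frac{1}{1+\lambda},1)$; each application of the operator solves the local matrix games by linear programming.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (row player maximises), via LP."""
    m, n = A.shape
    # Variables (x_1, ..., x_m, v): maximise v subject to A^T x >= v*1,
    # x in the simplex.  linprog minimises, hence the -v objective.
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

def shapley_operator(lam, z, G, Q):
    """Phi^k(lam, z) = val(lam*G^k + (1-lam) * sum_l q(l|k,.,.) z^l)."""
    return np.array([matrix_game_value(lam * G[k] + (1 - lam) *
                     sum(Q[k][l] * z[l] for l in range(len(G))))
                     for k in range(len(G))])

# The absorbing game of Example ex_2: in state 1 the payoffs are diag(1, 1),
# (T,L) and (B,R) lead to the absorbing state 2, whose payoff is 1.
G = [np.array([[1., 0.], [0., 1.]]), np.array([[1.]])]
Q = [[np.array([[0., 1.], [1., 0.]]), np.array([[1., 0.], [0., 1.]])],
     [np.array([[0.]]), np.array([[1.]])]]
lam = 0.5
z = np.zeros(2)
for _ in range(80):               # geometric convergence at rate 1 - lam
    z = shapley_operator(lam, z, G, Q)
# z is now close to v_lambda = (1/(1+lam), 1) = (2/3, 1)
```

The iteration needs roughly $\log(1/\varepsilon)/\lambda$ steps for accuracy $\varepsilon$, which is why closed-form characterisations of $v_\lambda$, such as the ones developed below, are valuable for small $\lambda$.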
Consider now an alternative presentation of the game as the following $n\times (n+1)$ {array of matrices}: \begin{equation}\label{data} D(\lambda):=\begin{pmatrix} \lambda G^1 & (1-\lambda)Q^1_1-U & (1-\lambda) Q^1_2& \dots & (1-\lambda) Q^1_n \\ \lambda G^2 & (1-\lambda)Q^2_1 & (1-\lambda) Q^2_2 -U &\dots & (1-\lambda) Q^2_n \\ \vdots & \vdots & \vdots & \\ \lambda G^n & (1-\lambda)Q^n_1 &(1-\lambda) Q^n_2 & \dots &(1-\lambda) Q^n_n- U \end{pmatrix} \end{equation} where for each $1\leq k,\ell \leq n$, we have set $Q^k_\ell$ and $G^k$ to be the following $|I|\times |J|$-matrices: \begin{equation}\label{GQ}Q^k_\ell:=(q(\ell| k, i,j))_{i,j}\quad \text{ and }\quad G^k:=(g(k,i,j))_{i,j}\end{equation} and where $U$ stands for a $|I|\times |J|$ matrix of ones. We will refer to $D(\lambda)$ as the \emph{data array}, as it carries all the data of the game, just like the tuple $(K,I,J,g,q,\lambda)$, where the initial state is not specified. Let us go one step further. For any $1\leq k\leq n$ and $0\leq \ell \leq n$, set: \begin{equation}\label{Ms} {M}^k_\ell:=\begin{cases} \lambda {G}^k & \text{ if }\ell=0\\ (1-\lambda){Q}^k_k-U & \text{ if }k=\ell\\ (1-\lambda){Q}^k_\ell & \text{ if }1\leq k\neq \ell \leq n \end{cases}\end{equation} where the dependence on $\lambda$ has been omitted in order to simplify the notation. 
By doing so, the stochastic games $(K,I,J,g,q,\lambda,k)$, $1\leq k\leq n$ are now presented in the following form: \begin{equation}\label{atk_SM}D(\lambda)=\begin{pmatrix} M_0^1 & M_1^1 &\dots & M^1_n\\ \vdots & \vdots & \ddots & \vdots \\ M_0^n & M_1^n &\dots & M^n_n \end{pmatrix} \end{equation} Note that, by construction, the array $D(\lambda)$ satisfies the following two properties: \begin{itemize} \item[$(H1)$] For each $1\leq k\leq n$, the matrices $M^k_0,\dots,M^k_n$ are of same size \item[$(H2)$] {For all $1\leq k, \ell \leq n$ and $k\neq \ell$ one has:} $$M^k_k\leq 0,\quad M^k_\ell \geq 0\quad \text{ and }\quad M^k_1+\dots + M^k_n \leq -\lambda U$$ \end{itemize} \begin{remarque}\label{rem0} In our setting, all the $M^k_\ell$ are of same size, namely $|I|\times |J|$. However, for later purposes, it is more convenient to state the less restrictive property (H1), which corresponds to the situation where the sets of actions are state-dependent. \end{remarque} To the array $D(\lambda)$ one can associate $n+1$ auxiliary matrices, denoted by $\Delta^0,\dots,\Delta^n$. \begin{definition}\label{important_def1} For each $0\leq \ell \leq n$, let $D^{(\ell)}(\lambda)$ be the $n\times n$ array of matrices obtained by deleting the $(\ell+1)$-th column from $D(\lambda)$. Then, set: $$\Delta^\ell:=(-1)^\ell\det\nolimits_{\otimes} D^{(\ell)}(\lambda)$$ where $\det_\otimes$ stands for the Kronecker determinant. \end{definition} The Kronecker determinant is very similar to the usual determinant except that 1) the usual product of scalars is replaced by the so-called Kronecker product of matrices and 2) rows and columns do not play symmetric roles. We refer the reader to the Appendix A for more details on Kronecker products and determinants. 
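To get a feel for how the auxiliary matrices are computed in practice, here is a short Python sketch (illustrative only; it assumes the convention, consistent with the worked example of Section \ref{sec_atkinson}, that the Kronecker factors of each term are taken in row order). It expands $\det_\otimes$ recursively along the first row and recovers the auxiliary matrices of the $2\times 3$ array treated in Section \ref{sec_atkinson}:

```python
import numpy as np

def kron_det(D):
    """Kronecker determinant of an n x n array of matrices (a list of rows),
    expanded along the first row.  The Kronecker factors are always taken in
    row order, since rows and columns do not play symmetric roles."""
    n = len(D)
    if n == 1:
        return np.atleast_2d(np.asarray(D[0][0], dtype=float))
    total = 0
    for j in range(n):
        minor = [[row[l] for l in range(n) if l != j] for row in D[1:]]
        total = total + (-1) ** j * np.kron(np.atleast_2d(D[0][j]),
                                            kron_det(minor))
    return total

# The 2 x 3 array of Section sec_atkinson: first row scalar, second row 2x2.
M10, M11, M12 = 2, 1, 1
M20 = np.eye(2)
M21 = np.array([[-1., 0.], [-1., -1.]])
M22 = np.array([[2., 1.], [3., 2.]])

Delta0 = kron_det([[M11, M12], [M21, M22]])    # delete column 0
Delta1 = -kron_det([[M10, M12], [M20, M22]])   # delete column 1, sign (-1)^1
Delta2 = kron_det([[M10, M11], [M20, M21]])    # delete column 2
# Delta0 = [[3, 1], [4, 3]], Delta1 = [[-3, -2], [-6, -3]],
# Delta2 = [[-3, 0], [-2, -3]]
```

The recursion handles mixed sizes (here a scalar row and a matrix row), exactly the situation of Remark \ref{rem0}.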
The matrices $\Delta^0,\dots,\Delta^n$ are well-defined thanks to $(H1)$, are of equal size\footnote{If $p^k\times q^k$ denotes the common size of $M^k_0,\dots,M^k_n$ then $\Delta^0,\dots,\Delta^n$ are of equal size $\prod_{k=1}^n p^k\times \prod_{k=1}^n q^k$. In the present context, $p^k=|I|$ and $q^k=|J|$ for all $k$ so that $\Delta^0,\dots,\Delta^n$ are $|I|^n\times |J|^n$-matrices.} and each of their entries depends polynomially on $\lambda$, of degree at most $n$.\\ Let us recall two useful results from \cite{AOB18a}. The following elementary lemma, which is a consequence of the diagonal dominance expressed by $(H2)$, will be used in the sequel. \begin{lemme}\label{positivite} All the entries of $(-1)^n {\Delta}^0$ are greater than or equal to $\lambda^n$. \end{lemme} \begin{remarque}For any couple of matrices $A$ and $B$ of equal size, the fact that all the entries of $B$ are nonzero and of the same sign implies the existence of a unique $w\in \ensuremath{\mathbb R}$ such that $\mathrm{val}(A-wB)=0$. Thus, Lemma \ref{positivite} implies, in particular, that the equation $\mathrm{val}(\Delta^k-w\Delta^0)=0$ admits a unique solution.\end{remarque} The following result is the cornerstone of \cite{AOB18a} in obtaining a characterisation of the limit values. None of the results of the present manuscript relies on it; rather, a refinement is proposed in Theorem \ref{charRD}, stated as \emph{Result 1} in the introduction. \begin{theoreme}\label{charac1} Fix $\lambda\in(0,1]$ and $1\leq k\leq n$. Then $v_\lambda^k$ is the unique $w\in \ensuremath{\mathbb R}$ satisfying $\mathrm{val}((-1)^n(\Delta^k-w\Delta^0))=0$. \end{theoreme} \begin{remarque} The definition of the auxiliary matrices $\Delta^0,\Delta^1,\dots, \Delta^n$ was slightly different in \cite{AOB18a}. However, the two constructions coincide, up to a sign $(-1)^n$.
\end{remarque} \begin{exemple}\label{stand_ex} To illustrate the data array representation of a stochastic game, consider the following game with $4$ states, introduced by Kohlberg \cite{kohlberg74}. States $3$ and $4$ are \emph{absorbing}, that is, once they are reached, these states are never left. The payoff in these states is, respectively, $1$ and $-1$, regardless of the players' actions. For simplicity, we will assume that the players have only one action in these states. States $1$ and $2$ have action sets $I=\{T,B\}$ and $J=\{L,R\}$. The payoff functions are defined by $g(1,i,j)=\ensuremath{\mathds 1}_{\{(i,j)=(T,L)\}}$ and $g(2,i,j)=-\ensuremath{\mathds 1}_{\{(i,j)=(T,L)\}}$, and the transitions, which are all deterministic, are described as follows: \begin{center} \begin{tikzpicture}[xscale=1, yscale=1] \draw[fill=red!0] (0,0) rectangle (1.6,1.6); \draw[thin] (0,0.8)-- (1.6,0.8); \draw[thin] (0.8,0)-- (.8,1.6); \node[scale=1] at (0.4,1.2) {\textcolor{black!100}{$1$}}; \node[scale=1] at (1.2,1.2) {\textcolor{black!100}{$2$}}; \node[scale=1] at (0.4,0.4) {\textcolor{black!100}{$2$}}; \node[scale=1] at (1.2,0.4) {\textcolor{black!100}{$3$}}; \node [above] at (0.4,1.6) { \textcolor{black!100}{L}}; \node [above] at (1.2,1.6) {\textcolor{black!100}{R}}; \node [left] at (0,1.2) {\textcolor{black}{T}}; \node [left] at (0,0.4) {\textcolor{black}{B}}; \draw[fill=red!0] (4,0) rectangle (5.6,1.6); \draw[thin] (4,0.8)-- (5.6,0.8); \draw[thin] (4.8,0)-- (4.8,1.6); \node[scale=1] at (4.4,1.2) {\textcolor{black!100}{$2$}}; \node[scale=1] at (5.2,1.2) {\textcolor{black!100}{$1$}}; \node[scale=1] at (4.4,0.4) {\textcolor{black!100}{$1$}}; \node[scale=1] at (5.2,0.4) {\textcolor{black!100}{$4$}}; \node [above] at (4.4,1.6) { \textcolor{black!100}{L}}; \node [above] at (5.2,1.6) {\textcolor{black!100}{R}}; \node [left] at (4,1.2) {\textcolor{black}{T}}; \node [left] at (4,0.4) {\textcolor{black}{B}}; \node at (0.8,-0.5) {\textcolor{black}{1}};
\node at (4.8,-0.5) {\textcolor{black}{2}}; \end{tikzpicture} \end{center} where the numbers stand for states. In state $1$, for instance, both $(T,R)$ and $(B,L)$ lead to state $2$, whereas $(B,R)$ leads to state $3$ and $(T,L)$ induces no transition. The corresponding array $D(\lambda)$ is given by: $$ \begin{pmatrix} \begin{pmatrix} \phantom{-} \lambda\phantom{-} & \phantom{-}0\phantom{-} \\ 0 & 0\end{pmatrix} & \begin{pmatrix} -\lambda & \phantom{1}-1\phantom{-} \\ -1& -1\end{pmatrix} & \begin{pmatrix} 0& 1-\lambda \\ 1- \lambda & 0\end{pmatrix} & \begin{pmatrix} \phantom{-}0\phantom{-} &\phantom{-} 0\phantom{-} \\ 0 &1- \lambda\end{pmatrix} & \begin{pmatrix} \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} \\ 0 & 0\end{pmatrix}\\ \begin{pmatrix} -\lambda & 0 \\ \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-}\end{pmatrix} & \begin{pmatrix} 0 & 1-\lambda \\ 1-\lambda & 0\end{pmatrix} & \begin{pmatrix} -\lambda & \phantom{1}-1\phantom{-} \\ -1& -1\end{pmatrix} & \begin{pmatrix} \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} \\ \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-}\end{pmatrix}& \begin{pmatrix} \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} \\ 0 & 1-\lambda\end{pmatrix}\\ \phantom{-}\lambda & 0 & 0& -\lambda & \phantom{-}0\\ -\lambda & 0 & 0& \phantom{-}0& -\lambda \end{pmatrix}$$ where we have identified the $1\times 1$ matrices of the last two rows with scalars. The first auxiliary matrix is given by: \begin{eqnarray*}\label{de0}\Delta^0&=\lambda^2& \begin{pmatrix} \lambda^2 & \lambda& \lambda& \lambda(2 - \lambda) \\ \lambda &\lambda &\lambda(2 - \lambda)&1 \\ \lambda & \lambda(2 - \lambda) &\lambda&1 \\ \lambda(2 - \lambda)&1&1&1 \end{pmatrix} \end{eqnarray*} As states 1 and 2 are similar to each other, let us focus on state $1$.
The auxiliary matrix $\Delta^1$ is given by: \begin{eqnarray*} \Delta^1&=\lambda^2&\begin{pmatrix} \lambda^2 & \lambda &-\lambda(1-\lambda) & 0\\ \lambda & \lambda & 0 & -(1-\lambda)^2\\ -\lambda(1-\lambda) &0 & \lambda(1-\lambda) &1 \\ 0 & -(1-\lambda)^2 &1-\lambda &1-\lambda \end{pmatrix} \end{eqnarray*} Hence, for any $w\in \ensuremath{\mathbb R}$: $$\Delta^1-w\Delta^0=\lambda^2 \begin{pmatrix} \lambda^2 (1-w) & \lambda (1-w) & -\lambda(1-\lambda)-\lambda w & -(2\lambda-\lambda^2)w \\ \lambda (1-w) & \lambda (1-w) & -(2\lambda- \lambda^2)w & -(1-\lambda)^2-w\\ -\lambda(1-\lambda)-\lambda w& -(2\lambda-\lambda^2)w & \lambda(1-\lambda)-\lambda w &1-\lambda -w \\ -(2\lambda-\lambda^2)w &-(1-\lambda)^2-w &1-\lambda -w&1-\lambda -w \end{pmatrix} $$ We will come back to this example later on. \end{exemple} \subsection{The theory of Shapley and Snow}\label{sskth} The aim of this section is to briefly present the theory developed by Shapley and Snow \cite{SS50}. Throughout this section, $G$ will denote a fixed $|I|\times |J|$-matrix game with value $v=\mathrm{val}(G)$. The set of optimal strategies for player $1$ and $2$ in $G$ will be denoted by $X^*\subset \Delta(I)$ and $Y^*\subset \Delta(J)$, respectively. These sets are compact, non-empty polytopes, so that they can be described by their (finitely many) extreme points. A characterisation of the extreme points of $X^*\times Y^*$, called \emph{basic solutions} of $G$, was the main result in Shapley and Snow \cite{SS50}. The following theorem is a convenient restatement of their results. 
\begin{theoreme}[Shapley and Snow 1950] \label{SSK} A couple $(x,y)\in X^*\times Y^*$ is a basic solution of $G$ if and only if there exists a square sub-game $\dot{G}$ satisfying: \begin{enumerate} \item[$(1)$] $S(\mathrm{co}(\dot{G})) \neq 0$ \item[$(2)$] $\dot{x}= \frac{\mathrm{co}(\dot{G})}{S(\mathrm{co}(\dot{G}))} \dot{\mathbf{1}}$ and $\dot{y} = \frac{\transp{\mathrm{co}(\dot{G})}}{S(\mathrm{co}(\dot{G}))} \dot{\mathbf{1}}$ \end{enumerate} In this case, the following additional properties hold: \begin{enumerate} \item[$(3)$] $\mathrm{val}(G) = \mathrm{val}(\dot{G})=\frac{\det(\dot{G})}{S(\mathrm{co}(\dot{G}))}$ \item[$(4)$] $\transp{\dot{x}} \dot{G}=\mathrm{val}(G)\transp{\dot{\mathbf{1}}}$ and $\dot{G}\dot{y}=\mathrm{val}(G)\dot{\mathbf{1}}$ \end{enumerate} \end{theoreme} \begin{remarque} Note that the $\dot{x}$ and $\dot{y}$ appearing in Theorem \ref{SSK} are strategies, as their components are nonnegative and add up to 1 (see Theorem \ref{SSK} (2)). Hence, $(x,y)$ and $(\dot{x},\dot{y})$ are equal, up to completing the latter with zeros. \end{remarque} \begin{definition} A \defn{Shapley-Snow kernel} (SSK) of $G$ is a square sub-matrix $\dot{G}$ satisfying the four conditions of Theorem \ref{SSK}, for some basic solution $(x,y)$. Let $\dot{I}\subset I$ and $\dot{J}\subset J$ be the subsets of actions that define the sub-matrix $\dot{G}$. \end{definition} Theorem \ref{SSK} has many consequences. Among them, the next statement gathers those that will be used in the sequel. Its proof can be found in Section \ref{proofsssk}. For any vector $z\in \ensuremath{\mathbb R}^d$, we denote its span by $<z>:=\{tz,\, t\in \ensuremath{\mathbb R}\}$. \begin{proposition}\label{lemme_G-vU} Let $\dot{G}$ be a Shapley-Snow kernel of $G$, corresponding to a basic solution $(x,y)$.
Then: \begin{enumerate} \item[$(i)$] $S(\mathrm{co}(\dot{G}-v\dot{U})) \neq 0$ \item[$(ii)$] $\det(\dot{G}-v\dot{U})=\mathrm{val}(\dot{G}-v\dot{U})=\mathrm{val}({G}-v{U})=0$ \item[$(iii)$] $\mathrm{Ker}(\dot{G}-v\dot{U})=<\dot{y}>$ and $\mathrm{Ker}(\transp{(\dot{G}-v\dot{U})})=<\dot{x}>$ \item[$(iv)$] $\mathrm{co}(\dot{G}-v\dot{U})=S(\mathrm{co}(\dot{G}-v\dot{U}))\, \dot{x}\transp{\dot{y}}$. \end{enumerate} \end{proposition} The following example illustrates the notion of a Shapley-Snow kernel. \begin{exemple} Consider the following $3\times 3$ matrix game: $$G=\begin{pmatrix} 1 & 0& 1 \\ 0 & 1 & 2\\ 3 & 2 & 0\end{pmatrix} $$ Clearly, Player $1$ does not have any pure optimal strategy, so that none of the entries of $G$ is an SSK. Let us show that the following sub-matrix, obtained by setting $\dot{I}=\{2,3\}$ and $\dot{J}=\{1,3\}$, is one: $$\dot{G}=\begin{pmatrix} 0& 2 \\ 3 & 0\end{pmatrix} $$ By Theorem \ref{SSK}, it is enough to check that $\dot{G}$ satisfies $S(\mathrm{co}(\dot{G}))\neq 0$, and that completing $\dot{x}= \frac{\mathrm{co}(\dot{G})}{S(\mathrm{co}(\dot{G}))} \dot{\mathbf{1}}$ and $\dot{y} = \frac{\transp{\mathrm{co}(\dot{G})}}{S(\mathrm{co}(\dot{G}))}\dot{\mathbf{1}}$ with zeros (outside $\dot{I}$ and $\dot{J}$, respectively) gives a pair of optimal strategies. An easy computation gives: $$\mathrm{co}(\dot{G})=\begin{pmatrix} \phantom{-}0& -3 \\ -2 & \phantom{-}0\end{pmatrix}, \quad S(\mathrm{co}(\dot{G}))=-5, \quad \dot{x}=\left(\frac{3}{5},\frac{2}{5}\right), \quad \dot{y}=\left(\frac{2}{5},\frac{3}{5}\right)$$ A quick verification shows that, indeed, $x=(0,\frac{3}{5},\frac{2}{5})$ and $y=(\frac{2}{5},0,\frac{3}{5})$ are optimal strategies, so that $\dot{G}$ is an SSK corresponding to the basic solution $(x,y)$.
The value of $G$ can be obtained using the formula of Theorem \ref{SSK} $(3)$: $$\mathrm{val}(G)=\mathrm{val}(\dot{G})=\frac{\det(\dot{G})}{S(\mathrm{co}(\dot{G}))}=\frac{6}{5}$$ \end{exemple} \subsection{Multiparameter eigenvalue problems}\label{sec_atkinson} Consider an $n\times(n+1)$ array of real matrices: \begin{equation}\label{atk_SM_2}D=\begin{pmatrix} M_0^1 & M_1^1 &\dots & M^1_n\\ \vdots & \vdots & \ddots & \vdots \\ M_0^n & M_1^n &\dots & M^n_n \end{pmatrix} \end{equation} where for each $1\leq k\leq n$, the matrices $M^k_0,\dots,M^k_n$ are \emph{square matrices} of equal size. \noindent A multiparameter eigenvalue problem (MEP), a terminology introduced by Atkinson \cite{atkinson72}, is the problem of finding $z=(z^1,\dots,z^n)\in \ensuremath{\mathbb C}^{n}$ satisfying\footnote{It is worth mentioning that Atkinson \cite{atkinson72} considered the homogeneous version of this problem, namely the problem of finding $(z^0,\dots,z^n)\in \ensuremath{\mathbb C}^{n+1}$ satisfying $\det(z^0 {M}^k_0+ \dots+z^n {M}^k_n)=0$ for all $1 \leq k\leq n$. Solutions to a homogeneous MEP are determined only up to a multiplicative factor. Moreover, there is a one-to-one map between the solutions to \eqref{M_syst} and the solutions to the homogeneous MEP satisfying $z^0\neq 0$. For this reason, Atkinson's results can be easily transposed to the non-homogeneous case, which is more relevant for us.}: \begin{equation}\label{M_syst} \left \{ \TABbinary\tabbedCenterstack[l]{ \det({M}^1_0+ z^1 {M}^1_1+\dots+z^n {M}^1_n)&=&0\\ \vdots &&\\ \det({M}^n_0+ z^1 {M}^n_1+\dots+z^n {M}^n_n)&=&0 } \right. \end{equation} \begin{remarque} The array $D$ differs from the array representation $D(\lambda)$ of stochastic games in two aspects: \begin{itemize} \item In addition to $(H1)$, all matrices in the array are supposed to be square matrices. \item The array $D$ may or may not satisfy $(H2)$.
\end{itemize} \end{remarque} Each row $1\leq k\leq n$ of \eqref{M_syst} defines a polynomial equation in $n$ variables whose degree is the common size of $M^k_0,\dots, M^k_n$. In particular, when all the $M^k_\ell$ are of size $1\times 1$ (i.e. scalars) the system \eqref{M_syst} boils down to an affine system of equations. In this case, the auxiliary matrices are also scalars and \eqref{M_syst} admits a unique solution if and only if $\Delta^0\neq 0$. When this is the case, the unique $z\in \ensuremath{\mathbb R}^n$ satisfying \eqref{M_syst} is given by $z^k=\Delta^k / \Delta^0$ for $1\leq k \leq n$, by Cramer's rule.\\ The extension of Cramer's rule to an arbitrary $n\times (n+1)$ array of matrices, due to Atkinson \cite{atkinson72}, relies on the $n+1$ auxiliary matrices $\Delta^0,\dots,\Delta^n$ of Definition \ref{important_def1}. Namely, one introduces the so-called generalised MEP, which consists in finding $z\in \ensuremath{\mathbb C}^n$ satisfying: \begin{equation}\label{De_syst} \left \{ \TABbinary\tabbedCenterstack[l]{ \det({\Delta}^1- z^1 {\Delta}^0)&=&0\\ \, \vdots &&\\ \det({\Delta}^n- z^n {\Delta}^0)&=&0 } \right. \end{equation} Note that, unlike \eqref{M_syst}, where the unknown $z$ appears in every equation, in \eqref{De_syst} each coordinate of $z$ appears in a separate equation. In this sense, the latter system is an \emph{uncoupled system}, and thus much simpler to tackle. \\ Throughout this section, we denote by $S^M$ and $S^\Delta$ the sets of solutions of \eqref{M_syst} and \eqref{De_syst}, respectively.\\ One distinguishes between \emph{singular} and \emph{regular} MEP according to whether some of the polynomials $P^k(w):=\det({\Delta}^k- w {\Delta}^0)$ are identically zero or not. The case where $\Delta^0$ is invertible is the so-called nonsingular case, for which the problem can be easily solved. The following result can be found in Atkinson \cite[Chapter 6]{atkinson72}. \begin{theoreme} If $\Delta^0$ is invertible, then $S^M=S^\Delta$.
\end{theoreme} \begin{remarque} The set $S^M=S^\Delta$ can be easily described in this case. Indeed, the non-singularity of $\Delta^0$ implies that $z\in S^\Delta$ if and only if $\det(\Delta^k(\Delta^0)^{-1}- z^k \ensuremath{\operatorname{Id}} )=0$ for all $1\leq k\leq n$, so that $S^M$ is entirely described by the sets of eigenvalues of the matrices $\Delta^k(\Delta^0)^{-1}$, $1\leq k\leq n$. In particular, $S^M$ is a finite set and can be computed efficiently. \end{remarque} When some polynomial $P^k$ is identically zero, the problem is a singular MEP. This occurs, for instance, when $\mathrm{Ker}(\Delta^k)$ and $\mathrm{Ker}(\Delta^0)$ share a non-zero vector. The vacuous equality $P^k(w)=0$ is then replaced by the \emph{rank drop} condition: $$\mathrm{rank}(\Delta^k - z^k \Delta^0) < \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\Delta^k - w \Delta^0)$$ The auxiliary system \eqref{De_syst} is thus replaced by the following one: \begin{equation}\label{RP_syst} \left \{ \TABbinary\tabbedCenterstack[l]{ \mathrm{rank}(\Delta^1 - z^1 \Delta^0) &<& \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\Delta^1 - w \Delta^0)\\ \, \vdots &&\\ \mathrm{rank}(\Delta^n - z^n \Delta^0) &<& \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\Delta^n - w \Delta^0)} \right. \end{equation} Let $S^R$ denote the set of solutions of \eqref{RP_syst}. This set refines the set $S^\Delta$ in the sense that the two coincide in the nonsingular case, while the inclusion $S^R\subset S^\Delta$ holds in general. Muhi\v{c} and Plestenjak \cite[Theorems 3.5 and 3.7]{MP09} establish the equality $S^M=S^R$ under the assumption that $n=2$ and that $S^M$ has finitely many solutions, each of which is algebraically and geometrically simple\footnote{See \cite{MP09} for more details.}. Instead, we will use the following result, proved in the Appendix (Section \ref{proof37}). \begin{prop}\label{rank_drop} Suppose that all the entries of $\Delta^0$ are nonzero and of the same sign. Let $z\in S^M$.
Suppose that for each $1\leq k\leq n$ there exists a couple of vectors $(x^k,y^k)$ satisfying: $$\begin{cases} x^k\in \mathrm{Ker} \transp{(M^k_0+z^1 M^k_1+\dots+z^n M^k_n)}, & x^k>0\\ y^k\in \mathrm{Ker}(M^k_0+z^1 M^k_1+\dots+z^n M^k_n), & y^k> 0\end{cases}$$ Then $z\in S^R$. \end{prop} \noindent The following example illustrates the relation between MEP and generalised MEP. \begin{exemple} Consider the following $2\times 3$ array: $$D=\begin{pmatrix} M^1_0 & M^1_1 & M^1_2 \\ M^2_0 & M^2_1 & M^2_2 \end{pmatrix}= \begin{pmatrix} 2 & 1 & 1\\ \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix} & \begin{pmatrix} -1 & 0 \\ -1 & -1 \end{pmatrix} & \begin{pmatrix} 2 & 1 \\ 3 & 2\end{pmatrix}\end{pmatrix}$$ The associated MEP is the problem of finding $(u,w)\in \ensuremath{\mathbb C}^2$ satisfying: $$\det\begin{pmatrix} 2 +u + w \end{pmatrix}=0,\quad \quad \det\begin{pmatrix} 1-u +2w & w \\ -u+3w & 1-u+2w\end{pmatrix}=0$$ By definition, the auxiliary matrices $\Delta^0$, $\Delta^1$ and $\Delta^2$ are given by: $$\Delta^0=M^1_1\otimes M^2_2 - M^1_2\otimes M^2_1, \quad \Delta^1=-(M^1_0\otimes M^2_2 - M^1_2\otimes M^2_0), \quad \Delta^2=M^1_0\otimes M^2_1 - M^1_1\otimes M^2_0$$ The Kronecker product $A\otimes B$ coincides with the usual product when $A$ or $B$ (or both) are scalars, so that one can easily compute: $$\Delta^0=\begin{pmatrix} 3 & 1 \\ 4 & 3\end{pmatrix}, \quad \Delta^1=\begin{pmatrix} -3 & -2 \\ -6 & -3 \end{pmatrix}, \quad \Delta^2=\begin{pmatrix}-3 & 0\\ -2 & -3 \end{pmatrix}$$ The generalised MEP then consists in finding $(u,w)\in \ensuremath{\mathbb C}^2$ satisfying: $$\det\begin{pmatrix} 3+3u & 2+u\\ 6+4u & 3+3u \end{pmatrix}=0,\quad \quad \det\begin{pmatrix} 3+3w & w \\ 2+4w & 3+3w\end{pmatrix}=0$$ These equalities determine two separate polynomial equations of degree $2$ which have roots $(u_1,u_2)$ and $(w_1,w_2)$, respectively. The matrix $\Delta^0$ being nonsingular, the MEP and the generalised MEP have the same solutions, i.e.
$$S^M=S^R=S^\Delta=\{(u_1,w_1),(u_1,w_2),(u_2,w_1),(u_2,w_2)\}$$ \end{exemple} \section{Stochastic games, SSK and MEP} \label{SG_MEP} \subsection{From stochastic games to MEP}\label{reduced} Let $D(\lambda)$ be an $n\times (n+1)$ array representation of some stochastic game. Fix $1\leq k\leq n$. By Shapley and Snow \cite{SS50}, the local game $\mathcal{G}^k(\lambda,v^k_\lambda)$ introduced in Definition \ref{local_game} admits a Shapley-Snow kernel, defined by some subsets of actions $\dot{I}^k\subset I$ and $\dot{J}^k\subset J$ satisfying $|\dot{I}^k|=|\dot{J}^k|$. By construction, for any $z\in\ensuremath{\mathbb R}^n$ one has: \begin{equation}\label{GM}\mathcal{G}^k(\lambda,z)-z^k U=M^k_0+ z^1 M^k_1+\dots + z^n M^k_n\end{equation} Let $\dot{M}^k_0,\dot{M}^k_1,\dots,\dot{M}^k_n$ denote, respectively, the $\dot{I}^k\times \dot{J}^k$ sub-matrices of ${M}^k_0,{M}^k_1,\dots,{M}^k_n$, and let $\dot{D}(\lambda)$ be the corresponding $n\times (n+1)$ array of matrices, that is: \begin{equation}\label{atk_SM_dot}\dot{D}(\lambda)=\begin{pmatrix} \dot{M}_0^1 & \dot{M}_1^1 &\dots & \dot{M}^1_n\\ \vdots & \vdots & \ddots & \vdots \\ \dot{M}_0^n & \dot{M}_1^n &\dots & \dot{M}^n_n \end{pmatrix} \end{equation} Let $\dot{\Delta}^0,\dot{\Delta}^1,\dots,\dot{\Delta}^n$ be the auxiliary matrices associated to $\dot{D}(\lambda)$ which, by construction, are sub-matrices of $\Delta^0,\Delta^1,\dots,\Delta^n$, respectively. Moreover, they are all square and of equal size $\prod_{k=1}^n |\dot{I}^k|$. Note that their dependence on $\lambda$, which is polynomial of degree at most $n$, and on the choice of the couples $(\dot{I}^k,\dot{J}^k)$, $1\leq k\leq n$ is omitted from the notation. 
\\ \noindent Consider now the following four systems in the variable $z\in \ensuremath{\mathbb R}^n$, where $1\leq k\leq n$: $$\mathrm{val}(\dot{M}^k_0+ z^1 \dot{M}^k_1+\dots+z^n \dot{M}^k_n)=0$$ $$\det(\dot{M}^k_0+ z^1 \dot{M}^k_1+\dots+z^n \dot{M}^k_n)=0$$ $$\mathrm{rank}(\dot{\Delta}^k - z^k \dot{\Delta}^0) < \max\nolimits_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^k - w \dot{\Delta}^0)$$ $$\det(\dot{\Delta}^k- z^k \dot{\Delta}^0)=0$$ Let $T^{\dot{M}}$, $S^{\dot{M}}$, $S^{\dot{R}}$ and $S^{\dot{\Delta}}$ denote, respectively, the sets of solutions of each of these systems. The next paragraph is devoted to the relation between these subsets of $\ensuremath{\mathbb R}^n$. \subsection{From MEP to stochastic games}\label{reduced2} Applying the theory of MEP to stochastic games, one obtains the following result. \begin{prop}\label{SSK_atk} $\{v_\lambda\}= T^{\dot{M}}\subset S^{\dot{M}}$ and $\{v_\lambda\} \subset S^{\dot{R}}\subset S^{\dot{\Delta}}$. \end{prop} \begin{proof} By Shapley \cite{shapley53}, the discounted value $v_\lambda\in \ensuremath{\mathbb R}^n$ is the unique solution to the system: \begin{equation}\label{loc_shap}\mathrm{val}(\mathcal{G}^k(\lambda,z)-z^k U)=0,\quad 1\leq k\leq n\end{equation} Indeed, this system is a restatement of Shapley's fixed-point formulation $\Phi(\lambda,v_\lambda)=v_\lambda$ using the fact that $\mathrm{val}(M+w U)=\mathrm{val}(M)+w$ for any matrix $M$ and any $w\in \ensuremath{\mathbb R}$. Similarly, the system $$\mathrm{val}(\dot{M}^k_0+ z^1 \dot{M}^k_1+\dots+z^n \dot{M}^k_n)=0,\quad 1\leq k\leq n$$ has a unique solution, namely the value of the stochastic game in which, for each $1\leq k\leq n$, the players are restricted to play actions in the set $\dot{I}^k\times \dot{J}^k$. Hence $T^{\dot{M}}$ is a singleton. Now fix $1\leq k\leq n$. The game $\mathcal{G}^k(\lambda,z)-z^k U$ has the same set of optimal strategies as the local game $\mathcal{G}^k(\lambda,z)$ for all $(\lambda,z)$ (see Lemma B2 in the Appendix).
Therefore, $\dot{\mathcal{G}}^k(\lambda,v_\lambda)-v^k_\lambda \dot{U}=\dot{M}^k_0+ v_\lambda^1 \dot{M}^k_1+\dots+v_\lambda^n \dot{M}^k_n$ is a Shapley-Snow kernel of $\mathcal{G}^k(\lambda,v_\lambda)-v_\lambda^k U$. Consequently, by \eqref{loc_shap} and Proposition \ref{lemme_G-vU} $(ii)$ one has: \begin{eqnarray*} 0&=& \mathrm{val}(\mathcal{G}^k(\lambda,v_\lambda)-v^k_\lambda U)\\&=&\mathrm{val}(\dot{M}^k_0+ v_\lambda^1 \dot{M}^k_1+\dots+v_\lambda^n \dot{M}^k_n)\\& =&\det(\dot{M}^k_0+ v_\lambda^1 \dot{M}^k_1+\dots+v_\lambda^n \dot{M}^k_n)\end{eqnarray*} As these equalities hold for every $1\leq k\leq n$, it follows that $\{v_\lambda\}= T^{\dot{M}}\subset S^{\dot{M}}$. The inclusion $\{v_\lambda\}\subset S^{\dot{R}}$ follows from Proposition \ref{rank_drop}, which can be applied since: first, all the entries of $\dot{\Delta}^0$ are non-zero and of the same sign by Lemma \ref{positivite}; second, the optimal strategies $(\dot{x}^k,\dot{y}^k)\in \Delta(\dot{I}^k)\times\Delta(\dot{J}^k)$ corresponding to the Shapley-Snow kernel $\dot{\mathcal{G}}^k(\lambda,v_\lambda)-v_\lambda^k \dot{U}$ satisfy $\dot{x}^k>0$ and $\dot{y}^k>0$ because they are strategies; and third, by Proposition \ref{lemme_G-vU} $(iii)$ one has: $$\begin{cases}\dot{x}^k\in \mathrm{Ker}(\transp{(\dot{\mathcal{G}}^k(\lambda,v_\lambda)-v_\lambda^k\dot{U})})\\ \dot{y}^k\in \mathrm{Ker}(\dot{\mathcal{G}}^k(\lambda,v_\lambda)-v_\lambda^k\dot{U})\end{cases}$$ To prove the last inclusion $S^{\dot{R}}\subset S^{\dot{\Delta}}$, let $z\in S^{\dot{R}}$. By definition of $S^{\dot{R}}$, for all $1\leq k \leq n$ one has $\mathrm{rank}(\dot{\Delta}^k - z^k \dot{\Delta}^0) < \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^k - w \dot{\Delta}^0)$, so that $\dot{\Delta}^k - z^k \dot{\Delta}^0$ is not of full rank or, equivalently, $\det(\dot{\Delta}^k - z^k \dot{\Delta}^0)=0$. Consequently, $z\in S^{\dot{\Delta}}$.
\end{proof} \subsection{The inclusions in Proposition \ref{SSK_atk}}\label{sec_inclusions} Let us illustrate the relations obtained in Proposition \ref{SSK_atk} via an easy example in which $v_\lambda$ is the unique element of neither $S^{\dot{M}}$ nor $S^{\dot{R}}$, so that both inclusions are strict. \begin{remarque} These strict inclusions also hold for Example \ref{stand_ex}, but the analysis is more intricate. For this reason, we have preferred to illustrate this particular point with another example. \end{remarque} \begin{exemple} \label{ex_2} Consider the following \emph{absorbing} game: \begin{center} \begin{tikzpicture}[xscale=1, yscale=1] \draw[fill=red!0] (0,0) rectangle (1.6,1.6); \draw[thin] (0,0.8)-- (1.6,0.8); \draw[thin] (0.8,0)-- (.8,1.6); \node[scale=1] at (0.4,1.2) {\textcolor{black!100}{$1^*$}}; \node[scale=1] at (1.2,1.2) {\textcolor{black!100}{$0$}}; \node[scale=1] at (0.4,0.4) {\textcolor{black!100}{$0$}}; \node[scale=1] at (1.2,0.4) {\textcolor{black!100}{$1^*$}}; \node [above] at (0.4,1.6) { \textcolor{black!100}{L}}; \node [above] at (1.2,1.6) {\textcolor{black!100}{R}}; \node [left] at (0,1.2) {\textcolor{black}{T}}; \node [left] at (0,0.4) {\textcolor{black}{B}}; \end{tikzpicture} \end{center} where $*$ indicates an absorbing payoff. That is, the stage payoff is $0$ until $(T,L)$ or $(B,R)$ is played, in which case the stage payoffs are equal to $1$ forever after.
This game can be represented by the following array: $$D(\lambda)=\begin{pmatrix} \begin{pmatrix} \lambda & 0\\0& \lambda \end{pmatrix} & \begin{pmatrix} -1 & -\lambda \\ -\lambda & -1 \end{pmatrix} & \begin{pmatrix}1- \lambda & 0\\0& 1-\lambda \end{pmatrix} \\ \lambda & 0 & -\lambda \end{pmatrix}$$ The so-called ``normalised local games'' at $(u,w)\in\ensuremath{\mathbb R}^2$ are given by: $$\begin{cases}\mathcal{G}^1(\lambda,(u,w))-uU= \begin{pmatrix} \lambda+(1-\lambda)w-u & -\lambda u\\ -\lambda u & \lambda + (1-\lambda)w -u\end{pmatrix}\\ \mathcal{G}^2(\lambda,(u,w))-wU= \lambda(1-w) \end{cases}$$ Any square sub-matrix of $\mathcal{G}^1(\lambda,v_\lambda)$ is a possible candidate for being a Shapley-Snow kernel of this game, so that there are $5$ of them: the entire matrix and each of its entries. To see that the entire matrix is the unique SSK, we proceed as follows. Suppose that the top-left entry is a Shapley-Snow kernel, and define $\dot{D}(\lambda)$ and $S^{\dot{M}}$ accordingly. In this case, $(u,w)\in S^{\dot{M}}$ if and only if $\lambda+(1-\lambda)w-u=0$ and $\lambda(1-w)=0$, which has a unique solution $(1,1)$. But this cannot be the vector of values $v_\lambda$ because one has $v^1_\lambda\in(0,1)$ for each $\lambda\in(0,1]$; to see this, let Player $2$ choose $L$ and $R$ with equal probability and independently at every stage. A similar reasoning rules out the three other entries of the matrix, so that the unique kernel of the game is the entire matrix.\\ \noindent As $\dot{D}(\lambda)=D(\lambda)$, one can omit the ``dots'' from the notation. By definition, $S^M$ is given by the following system of equations in the unknown $(u,w)\in \ensuremath{\mathbb R}^2$ $$\begin{cases}\det\begin{pmatrix} \lambda+(1-\lambda)w-u & -\lambda u\\ -\lambda u & \lambda + (1-\lambda)w -u\end{pmatrix}&=0\\ \det(\lambda(1-w))&=0\end{cases}$$ Clearly, this system admits two solutions, namely $(\frac{1}{1+\lambda},1)$ and $(\frac{1}{1-\lambda},1)$. 
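The two roots can also be confirmed with a computer algebra system. The following SymPy sketch (illustrative only) solves the system at the particular discount factor $\lambda=1/2$, for which $v_\lambda=(\frac{2}{3},1)$:

```python
import sympy as sp

u, w = sp.symbols('u w')
lam = sp.Rational(1, 2)

# Normalised local game of state 1 and the absorbing equation of state 2.
normalised = sp.Matrix([[lam + (1 - lam) * w - u, -lam * u],
                        [-lam * u, lam + (1 - lam) * w - u]])
system = [normalised.det(), lam * (1 - w)]
sols = sp.solve(system, [u, w], dict=True)
# Two solutions: {u: 2/3, w: 1} and {u: 2, w: 1}.
```

Only $u=\frac{2}{3}=\frac{1}{1+\lambda}$ is the value; the other root $u=2$ corresponds to the spurious solution $\frac{1}{1-\lambda}$.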
Consider now the auxiliary systems $S^R$ and $S^\Delta$. An easy calculation gives: $$\Delta^0=\begin{pmatrix}\lambda & \lambda^2 \\ \lambda^2 & \lambda\end{pmatrix},\quad \Delta^1=\begin{pmatrix}\lambda & 0 \\ 0 & \lambda\end{pmatrix} ,\quad \text{ and }\quad \Delta^2=\Delta^0$$ The last equality is in fact a more general property: \emph{for any absorbing state $k$ with payoff $g^k$ one has $\Delta^k=g^k \Delta^0$}. Note that $\Delta^0$ is invertible here, so that the corresponding MEP is nonsingular and, consequently, one has $S^R=S^\Delta$. To compute this set one solves the following uncoupled system: $$\begin{cases}\det \begin{pmatrix} \lambda(1-u) & -\lambda^2u\\ -\lambda^2 u & \lambda(1- u)\end{pmatrix}&=0\\ \det((1-w)\Delta^0)&=0\end{cases}$$ which, again, has two solutions, $(\frac{1}{1+\lambda},1)$ and $(\frac{1}{1-\lambda},1)$. We have thus obtained: $$v_\lambda=\left(\frac{1}{1+\lambda},1\right)\in S^M=S^R=S^\Delta=\left\{\left(\frac{1}{1+\lambda},1\right), \left(\frac{1}{1-\lambda},1\right)\right\}$$ \end{exemple} \subsection{A characterising polynomial}\label{characpoly} In the previous example we showed that, for any $\lambda\in(0,1]$, the value $v_\lambda^1$ is one of the two real roots of the univariate polynomial: $$P^1(u)=\det \begin{pmatrix} \lambda(1-u) & -\lambda^2u\\ -\lambda^2 u & \lambda(1- u)\end{pmatrix}=\lambda^2((1-u)^2-\lambda^2 u^2)$$ The next result states that this property holds in general as a consequence of Proposition \ref{SSK_atk}. \begin{prop}\label{char2} Fix $\lambda\in(0,1]$ and $1\leq k\leq n$. Then, there exists a polynomial $\dot{P}^k$ satisfying: $$\dot{P}^k(v_\lambda^k)=0,\quad \dot{P}^k\not\equiv 0 \quad \text{ and }\quad \mathrm{deg} \dot{P}^k \leq \mathrm{rank}(\dot{\Delta}^0)$$ \end{prop} \begin{proof} By definition, the rank of $A$ is the size of the largest invertible square sub-matrix $\dot{A}$ of $A$. 
Similarly, for two matrices $A$ and $B$ of equal size, $\max_{w\in \ensuremath{\mathbb R}}\mathrm{rank}(A+wB)$ is the size of the largest square sub-matrix $\dot{A}+w\dot{B}$ of $A+wB$ such that the polynomial $w\mapsto \det(\dot{A}+w\dot{B})$ is not identically $0$. Let $r^k:=\max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^k - w \dot{\Delta}^0)$. By the definition of the rank, there exists some $r^k\times r^k$ sub-matrix of $\dot{\Delta}^k - w \dot{\Delta}^0$ whose determinant is a polynomial in $w$ which is not identically $0$. Denote this polynomial by $\dot{P}^k$. The inclusion $v_\lambda\in S^{\dot{R}}$ obtained in Proposition \ref{SSK_atk} implies $\mathrm{rank}(\dot{\Delta}^k - v_\lambda^k \dot{\Delta}^0)< r^k$, so that $\dot{P}^k(v^k_\lambda)=0$. The bound on the degree of $\dot{P}^k$ follows from \cite[Proposition 4.6]{demmel97} which states that, for any pair of square matrices $A$ and $B$ of equal size, the polynomial $P(w):=\det(A+w B)$ is either identically $0$, or of degree $\mathrm{rank}(B)$. By definition, $\dot{P}^k$ is the determinant of some sub-matrix of $\dot{\Delta}^k-w\dot{\Delta}^0$ so that its degree is bounded by the rank of $\dot{\Delta}^0$. \end{proof} \bigskip Determining the polynomial $\dot{P}^k$ of Proposition \ref{char2} may be difficult, as it requires knowing the value $v_\lambda\in \ensuremath{\mathbb R}^n$, computing a Shapley-Snow kernel of each local game $\mathcal{G}^k(\lambda,v^k_\lambda)$, and then finding a sub-matrix of maximal rank. One way to overcome this difficulty is to note that $\dot{P}^k$ is the determinant of some square sub-matrix of $\dot{\Delta}^k-w\dot{\Delta}^0$, which is a square sub-matrix of ${\Delta}^k-w{\Delta}^0$. Hence, by considering the determinants of all possible square sub-matrices of ${\Delta}^k-w{\Delta}^0$ one obtains a finite family of polynomials containing $\dot{P}^k$. Among them, let $E^k$ denote the set of polynomials which are nonzero and of degree at most $\mathrm{rank}(\dot{\Delta}^0)$.
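For the example above (with $\lambda=\frac{1}{2}$), this finite family can be enumerated directly. The sketch below stores polynomials in $u$ as coefficient lists and uses helper names of our own choosing; it lists all square sub-determinants of $\Delta^1-u\Delta^0$ and checks that the full determinant vanishes at both candidate values $\frac{1}{1\pm\lambda}$:

```python
from fractions import Fraction

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p, q = p + [Fraction(0)] * (n - len(p)), q + [Fraction(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def poly_eval(p, x):
    return sum(c * x**i for i, c in enumerate(p))

lam = Fraction(1, 2)
# Entries of Delta^1 - u Delta^0, each as a polynomial [constant, coeff of u]
M = [[[lam, -lam], [Fraction(0), -lam**2]],
     [[Fraction(0), -lam**2], [lam, -lam]]]

candidates = [M[i][j] for i in range(2) for j in range(2)]      # 1x1 minors
candidates.append(poly_sub(poly_mul(M[0][0], M[1][1]),          # full 2x2 determinant
                           poly_mul(M[0][1], M[1][0])))
candidates = [p for p in candidates if any(c != 0 for c in p)]
assert len(candidates) == 5

full = candidates[-1]   # lambda^2 (1-u)^2 - lambda^4 u^2
assert poly_eval(full, Fraction(1) / (1 + lam)) == 0   # u = 1/(1+lambda)
assert poly_eval(full, Fraction(1) / (1 - lam)) == 0   # u = 1/(1-lambda)
print("5 candidate polynomials; the full determinant vanishes at both values")
```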
\\ Summing up, we have obtained the following results, which refine the characterisation of the values given by Theorem \ref{charac1}. \begin{theoreme}\label{charRD} Fix $\lambda\in(0,1]$ and $1\leq k\leq n$. Then: \begin{itemize} \item[$(i)$] $v_\lambda^k$ is the unique $w\in \ensuremath{\mathbb R}$ satisfying $\mathrm{val}(\dot{\Delta}^k-w\dot{\Delta}^0)=0$. \item[$(ii)$] $\mathrm{rank}(\dot{\Delta}^k-v^k_\lambda \dot{\Delta}^0) < \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^k-w\dot{\Delta}^0)$. \item[$(iii)$] There exists $P^k\in E^k$ such that $P^k(v_\lambda^k)=0$. \item[$(iv)$] If all the entries of $D(\lambda)$ are rational, then $v^k_\lambda$ is algebraic of degree at most $\mathrm{rank}(\dot{\Delta}^0)$. \end{itemize} \end{theoreme} \begin{remarque} \label{biv2} The novelty in $(iii)$ with respect to the so-called semi-algebraic approach is the identification of the finite set of polynomials $E^k$. Also, it is important to note that the polynomial satisfying $(iii)$ depends on $\lambda$, so that different polynomials may correspond to different discount factors. Finally, note that, because $\Delta^k-w\Delta^0$ depends polynomially on $\lambda$, every coefficient of every polynomial in $E^k$ is a polynomial in $\lambda$, so that the elements of $E^k$ can be seen as bi-variate polynomials in $(\lambda,w)$. \end{remarque} \subsection{Consequences: back to the examples}\label{exmlpe} \emph{Back to Example \ref{ex_2}.} Let $P^1(\lambda,u)$ denote the characterising polynomial for state $1$, seen as a bi-variate polynomial. Namely, $$P^1(\lambda,u)=\lambda^2((1-u)^2-\lambda^2 u^2)$$ Let us show that the polynomial equality satisfied by $v^1_\lambda$, namely $P^1(\lambda,v^1_\lambda)=0$ for every $\lambda\in(0,1]$, implies its convergence.
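The two conclusions of this computation — $v^1_\lambda\to 1$ and a convergence rate of order $\lambda$ — also admit a direct numerical check, since the example gives the closed form $v^1_\lambda=\frac{1}{1+\lambda}$. A sketch (\texttt{P1} is our name for the characterising polynomial):

```python
def P1(lam, u):
    """The characterising polynomial P^1(lam, u) = lam^2 ((1-u)^2 - lam^2 u^2)."""
    return lam**2 * ((1 - u)**2 - lam**2 * u**2)

for lam in (1e-1, 1e-2, 1e-3):
    v = 1 / (1 + lam)                 # closed form for v^1_lambda, from the example
    assert abs(P1(lam, v)) < 1e-15    # P^1(lam, v^1_lambda) = 0
    assert abs(1 - v) <= lam          # |v^1_lambda - 1| = lam/(1+lam) = O(lam)
print("P^1 vanishes at v^1_lambda, which tends to 1 at rate lambda")
```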
First of all, note that $r:=\mathrm{deg}_\lambda P^1(\lambda,u)=4$, so that there exist unique univariate polynomials $P_0,\dots, P_4$ such that: $$P^1(\lambda,u)=P_0(u)+P_1(u)\lambda+P_2(u)\lambda^2+P_3(u)\lambda^3+P_4(u)\lambda^4$$ Namely, $P_0=P_1=P_3\equiv 0$, $P_2(u)=(1-u)^2$ and $P_4(u)=-u^2$. Hence, for any $u\in \ensuremath{\mathbb R}$: $$P^1(\lambda,u)=P_2(u)\lambda^2 + O(\lambda^4),\quad \text{ as }\lambda\to 0$$ Let us show that the polynomial $P_2$ and the term $O(\lambda^4)$ determine the limit value $v_0^1:=\lim_{\lambda\to 0} v^1_\lambda$, and the speed of convergence of $v_\lambda^1$ to $v^1_0$. Let $w_0$ be some accumulation point of $(v^1_\lambda)_\lambda$ along some vanishing sequence $(\lambda_m)$. Then: \begin{equation}\label{eqZ} \lim_{m\to +\infty}\frac{P^1(\lambda_m,v_{\lambda_m}^1)}{\lambda_m^{2}}=\lim_{m\to +\infty}\left(P_2(v^1_{\lambda_m})+O(\lambda_m^2)\right)=P_2(w_0)=0\end{equation} Consequently, $w_0$ is a root of $P_2$. As this is true for any accumulation point, and $P_2$ has a unique root at $1$, one obtains $\lim_{\lambda\to 0}v^1_\lambda=1$. Moreover, the relation $$0=\frac{P^1(\lambda,v^1_\lambda)}{\lambda^2}=(1-v_\lambda^1)^2+O(\lambda^2),\quad \text{ as }\lambda \to 0$$ implies $|v_\lambda^1-1|=O(\lambda)$. \paragraph{Comments} \begin{enumerate}\item The fact that $P_2$ has a unique root is not important. Indeed, suppose that $w_0<w_1$ are two different accumulation points. The continuity of $\lambda\mapsto v_\lambda^1$ implies then that every $w\in[w_0,w_1]$ is an accumulation point of $(v_\lambda^1)$. But, by \eqref{eqZ}, every accumulation point of $(v^1_\lambda)$ is a root of $P_2$. A contradiction, since by the choice of $P_2$, this polynomial is nonzero, and has thus finitely many roots. \item The bound on the convergence rate is given neither by the degree nor by the index of $P_2$. Rather, it is given by the algebraic order of $v^1_0$ as a root of $P_2$.
\item The fact that the values converge is not a surprise, as the values converge for any stochastic game. Note, however, that this approach differs from all previously known proofs of convergence\footnote{In chronological order, the convergence of the values has been established by Bewley and Kohlberg \cite{BK76}, Szczechla, Connell, Filar and Vrieze \cite{SCFV97}, Oliu-Barton \cite{OB14} and Attia and Oliu-Barton \cite{AOB18a}.}. A new proof of convergence, which extends the ideas exposed here, is provided in Section \ref{conv}. \end{enumerate} \bigskip \noindent For completeness, let us also illustrate these arguments on our standing example. \emph{Back to Example \ref{stand_ex}.} The characterising polynomial of state $1$ is given by: $$P^1(\lambda,u)=\det(\Delta^1-u\Delta^0)$$ For all $u\in \ensuremath{\mathbb R}$, this polynomial satisfies: $$P^1(\lambda,u)=16 u^2\lambda^{10} + O(\lambda^{11}),\quad \text{ as }\lambda\to 0$$ Therefore, the asymptotic behaviour of $v^1_\lambda$ can be deduced from $P_{10}(u)=16u^2$ and the term $O(\lambda^{11})$. As before, one obtains: $$\begin{cases}v^1_0:= \lim_{\lambda\to 0} v^1_\lambda=0\\ |v_\lambda^1-v_0^1|=O(\lambda^{1/2}) \end{cases}$$ \subsection{The rank drop condition} As the following example shows, the rank of ${\Delta}^k-w{\Delta}^0$ does not necessarily drop at $w=v^k_\lambda$. Hence, the rank drop property of Theorem \ref{charRD} $(ii)$, which concerns the reduced matrices, gives a tighter characterisation of the values. \begin{exemple} Consider the following array: $$ D=\begin{pmatrix} 1 & -\frac{3}{4} & \frac{1}{4} \\ A& \frac{1}{4} U & -\frac{3}{4} U\end{pmatrix} $$ where $A= \begin{pmatrix} \phantom{-}1 & -3 & -3 \\ -3 & \phantom{-}1 & -3 \end{pmatrix}$ and $U$ stands for a $2\times 3$ matrix of ones. It corresponds to a stochastic game with $2$ states and state-dependent action sets: both players have one action in state 1, while in state 2 the players have 2 and 3 actions, respectively.
More precisely, it is a specific instance of the array: $$ D(\lambda)=\begin{pmatrix} \lambda G^1 & (1-\lambda)Q^1_1-U & (1-\lambda)Q^1_2\\ \lambda G^2 & (1-\lambda)Q^2_1 & (1-\lambda)Q^2_2-U\end{pmatrix} $$ for $\lambda=\frac{1}{2}$, $G^1=2$, $Q^1_1=Q^1_2=\frac{1}{2}$, $Q^2_1=Q^2_2=\frac{1}{2} U$ and $G^2=2 A$. \\ \noindent A straightforward calculation gives: $$\Delta^0=\frac{1}{2} \begin{pmatrix} 1 & 1 &1 \\ 1 & 1 & 1\end{pmatrix}, \quad \Delta^1=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \quad \text{ and }\quad \Delta^2=\begin{pmatrix} \phantom{-}1 & -2 & -2 \\ -2 & \phantom{-} 1 & -2 \end{pmatrix} $$ Consequently, for all $w\in \ensuremath{\mathbb R}$ one has: $$\Delta^1-w\Delta^0=\begin{pmatrix} 1-\frac{w}{2} & -\frac{w}{2} &-\frac{w}{2} \\ -\frac{w}{2} &1- \frac{w}{2} &-\frac{w}{2} \end{pmatrix} \quad \text{ and }\quad \Delta^2-w\Delta^0=\begin{pmatrix} 1-\frac{w}{2} & -2 - \frac{w}{2} &-2-\frac{w}{2} \\ -2-\frac{w}{2} &1- \frac{w}{2} &-2-\frac{w}{2} \end{pmatrix} $$ Clearly, $\mathrm{rank}(\Delta^1-w\Delta^0)=\mathrm{rank}(\Delta^2-w\Delta^0)=2$ for all $w\in \ensuremath{\mathbb R}$, so that the rank drop condition is never satisfied. \\ Consider now the reduced array $\dot{D}$ constructed in Section \ref{SG_MEP}. Recall that, in order to compute a Shapley-Snow kernel for each local game, one needs to know the vector of values $v_\lambda=(v^1_\lambda,v^2_\lambda)$. To do so, first note that the third action is dominant for player $2$ in both games $\Delta^1-w\Delta^0$ and $\Delta^2-w\Delta^0$, for all $w\in \ensuremath{\mathbb R}$, so that: $$\mathrm{val}(\Delta^1-w\Delta^0)=-\frac{w}{2}\quad \text{ and }\quad \mathrm{val}(\Delta^2-w\Delta^0)=-2-\frac{w}{2}$$ As $\mathrm{val}(\Delta^1-v^1_\lambda\Delta^0)=\mathrm{val}(\Delta^2-v_\lambda^2\Delta^0)=0$ by Theorem \ref{charac1}, it follows that $v_\lambda=(0,-4)$. The local game at state $1$ is a scalar, so it is trivially a Shapley-Snow kernel.
At state $2$, the ``normalised local game'' $\mathcal{G}^2(\lambda,v_\lambda)-v^2_\lambda U$ is given by: $$A+ \frac{1}{4}Uv^1_\lambda-\frac{3}{4}Uv^2_\lambda=\begin{pmatrix} 4 & 0& 0 \\ 0&4 &0 \end{pmatrix}$$ which admits several Shapley-Snow kernels, the simplest being the scalar matrix $0$ corresponding to the top-right corner. By selecting this kernel one obtains the following reduced array (of scalars): $$\dot{D}=\begin{pmatrix} \phantom{-}1 & -\frac{3}{4} & \phantom{-} \frac{1}{4} \\ -3& \phantom{-}\frac{1}{4} & -\frac{3}{4}\end{pmatrix} $$ As $\dot{D}$ is a real matrix, the auxiliary matrices $\dot{\Delta}^0,\dot{\Delta}^1$ and $\dot{\Delta}^2$ are scalars. A calculation gives $\dot{\Delta}^0=\frac{1}{2}$, $\dot{\Delta}^1=0$ and $\dot{\Delta}^2=-2$ so that $\dot{\Delta}^1-w\dot{\Delta}^0=-\frac{w}{2}$ and $\dot{\Delta}^2-w\dot{\Delta}^0=-2-\frac{w}{2}$. Hence, the rank drops at the values: $$\begin{cases} 0=\mathrm{rank}(\dot{\Delta}^1)<\max\nolimits_{w\in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^1-w\dot{\Delta}^0)&=1\\ 0=\mathrm{rank}(\dot{\Delta}^2+4\dot{\Delta}^0)<\max\nolimits_{w\in \ensuremath{\mathbb R}} \mathrm{rank}(\dot{\Delta}^2-w\dot{\Delta}^0)&=1\end{cases}$$ \end{exemple} \section{Asymptotic behaviour of the values}\label{asym} The aim of this section is to derive from Theorem \ref{charRD} several consequences on the asymptotic behaviour of the discounted values. In order to state our results with the best possible bounds, we will no longer assume that the action sets are state-independent. Rather, let $I^k\times J^k$ denote the action set at state $k$, for all $1\leq k\leq n$, and let the payoff function $g$ and transition function $q$ be defined over the set $$Z:=\{(k,i,j)\,|\, 1\leq k\leq n,\ (i,j)\in I^k\times J^k\}$$ All the results obtained so far can be extended word for word to the case of state-dependent action sets.
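The rank computations of the example above can be reproduced in exact arithmetic. The following sketch (helper names \texttt{rank\_2x3} and \texttt{shift} are ours) confirms that $\Delta^1-w\Delta^0$ and $\Delta^2-w\Delta^0$ keep rank $2$ on a grid of rational values of $w$, while the reduced scalars vanish exactly at the values $(0,-4)$:

```python
from fractions import Fraction
from itertools import combinations

def rank_2x3(M):
    """Rank of a 2x3 matrix with Fraction entries, via its 2x2 minors."""
    for c1, c2 in combinations(range(3), 2):
        if M[0][c1] * M[1][c2] - M[0][c2] * M[1][c1] != 0:
            return 2
    return 1 if any(x != 0 for row in M for x in row) else 0

half = Fraction(1, 2)
D0 = [[half] * 3, [half] * 3]
D1 = [[1, 0, 0], [0, 1, 0]]
D2 = [[1, -2, -2], [-2, 1, -2]]

def shift(Dk, w):
    """The matrix Dk - w * D0."""
    return [[Fraction(Dk[i][j]) - w * D0[i][j] for j in range(3)] for i in range(2)]

# The full matrices never drop rank, not even at the values w = 0 and w = -4:
for w in [Fraction(n, 7) for n in range(-40, 41)]:
    assert rank_2x3(shift(D1, w)) == 2
    assert rank_2x3(shift(D2, w)) == 2

# The reduced (scalar) matrices drop rank exactly at the values:
dot0, dot1, dot2 = half, Fraction(0), Fraction(-2)
assert dot1 - Fraction(0) * dot0 == 0     # rank drop at v^1 = 0
assert dot2 - Fraction(-4) * dot0 == 0    # rank drop at v^2 = -4
print("rank drop occurs only after reduction")
```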
Let $D(\lambda)$ be the array representation of the game, which satisfies the properties $(H1)$ and $(H2)$, and let $\Delta^0,\dots, \Delta^n$ denote the corresponding auxiliary matrices, which are of equal size $\prod_{k=1}^n |I^k|\times \prod_{k=1}^n |J^k|$. \paragraph{Notation.} Set $L:=\prod_{k=1}^n \min(|I^k|,|J^k|)$\\[0.2cm] Set $g^-:=\min_{(k,i,j)\in Z} g(k,i,j)$ and $g^+:=\max_{(k,i,j)\in Z} g(k,i,j)$ \\ Let $\dot{D}(\lambda)$ be the reduced array, and let $\dot{\Delta}^0,\dots,\dot{\Delta}^n$ be the corresponding auxiliary matrices, which are square matrices of size at most $L$. Hence, in particular, $\mathrm{rank}(\dot{\Delta}^0)\leq \min(L,\mathrm{rank}(\Delta^0))$, a bound which does not require knowing the values, nor computing a Shapley-Snow kernel for each local game. \subsection{A new proof for the convergence of the values}\label{conv} Let $1\leq k\leq n$ be fixed throughout this section. As already noted, see Remark \ref{biv2}, the polynomials in $E^k$ are bi-variate polynomials in $(\lambda,w)$. By construction, for all $P^k\in E^k$ one has: $$\begin{cases} \mathrm{deg}_\lambda P^k(\lambda,w)\leq Ln \\ \mathrm{deg}_w P^k(\lambda,w)\leq \mathrm{rank}(\dot{\Delta}^0) \end{cases}$$ For each $P^k\in E^k$ there exist a unique integer $0\leq s\leq Ln$ and a unique (univariate) polynomial $\varphi(P^k)\not\equiv 0$ satisfying the following relation for all $w\in \ensuremath{\mathbb R}$: $$P^k(\lambda,w)=\lambda^s \varphi(P^k)(w)+o(\lambda^s),\quad \text{ as }\lambda \to 0$$ Indeed, as in Section \ref{exmlpe}, let $r:= \mathrm{deg}_\lambda P^k(\lambda,w)$ and let $P_0,\dots,P_{r}$ be the unique univariate polynomials satisfying $P^k(\lambda,w)=\sum_{\ell=0}^{r} P_\ell(w)\lambda^\ell$. Then, $\varphi(P^k)=P_s$, where $s$ is the smallest integer $m$ such that $P_m\not\equiv 0$.\\ Let $V^k$ denote the set of all roots of $\varphi(P^k)$ that lie in the interval $[g^-,g^+]$ as $P^k$ ranges over all polynomials of $E^k$.
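The extraction of $\varphi(P^k)$ — the lowest-order nonzero coefficient polynomial $P_s$ — is a purely mechanical step. A sketch (bivariate polynomials are stored as maps $(i,j)\mapsto$ coefficient of $\lambda^i w^j$; \texttt{phi} is our helper name), applied to $P^1(\lambda,u)=\lambda^2(1-u)^2-\lambda^4u^2$ from Section \ref{exmlpe}:

```python
from fractions import Fraction

def phi(P):
    """Given P as a map (i, j) -> coefficient of lambda^i w^j, return (s, P_s),
    where s is the smallest lambda-degree with a nonzero coefficient polynomial."""
    by_lambda = {}
    for (i, j), c in P.items():
        if c != 0:
            by_lambda.setdefault(i, {})[j] = c
    s = min(by_lambda)
    return s, by_lambda[s]

# P^1(lambda, u) = lambda^2 (1-u)^2 - lambda^4 u^2
P1 = {(2, 0): Fraction(1), (2, 1): Fraction(-2), (2, 2): Fraction(1),
      (4, 2): Fraction(-1)}
s, Ps = phi(P1)
assert s == 2
assert Ps == {0: Fraction(1), 1: Fraction(-2), 2: Fraction(1)}   # (1-u)^2
print("phi(P^1) = (1-u)^2, at order s = 2")
```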
The following result formalises what was obtained in the two examples of Section \ref{exmlpe}. \begin{proposition}\label{finite_set} The limit $v^k_0:=\lim_{\lambda\to 0}v^k_\lambda$ exists. Moreover, $v^k_0\in V^k$. \end{proposition} \begin{proof} Let $w_0$ be an accumulation point of $(v_\lambda^k)$ along the sequence $(\lambda_m)$, that is, $\lim_{m\to+\infty} \lambda_m=0$ and $\lim_{m\to +\infty} v^k_{\lambda_m}=w_0$. Accumulation points exist because $v^k_\lambda\in [g^-,g^+]$ for all $\lambda\in(0,1]$. By Theorem \ref{charRD}, for each $m\geq 1$ there exists a polynomial $P^k_m\in E^k$ such that $P_m^k(\lambda_m,v_{\lambda_m}^k)=0$. The set $E^k$ being finite, up to extracting a sub-sequence we can assume that $P^k_m=P^k$ for all $m\geq 1$ and some fixed polynomial $P^k\in E^k$. Hence: \begin{equation*}\label{eq2}P^k(\lambda_m, v^k_{\lambda_m})=0,\quad \forall m\geq 1\end{equation*} By the definition of $\varphi(P^k)$, for all $w\in \ensuremath{\mathbb R}$ one has: $$P^k(\lambda,w)=\lambda^s \varphi(P^k)(w)+o(\lambda^s),\quad \text{ as }\lambda \to 0$$ Consequently, dividing by $\lambda_m^s$ and letting $m\to +\infty$ one obtains: $$0=\lim_{m\to +\infty}\frac{P^k(\lambda_m,v_{\lambda_m}^k)}{\lambda_m^s}= \varphi(P^k)(w_0)$$ Thus, any accumulation point of $(v^k_\lambda)$ belongs to $V^k$. Yet, $\lambda\mapsto v_\lambda^k$ is a real continuous function, and the set of accumulation points of a real continuous function is either a singleton or an interval. The finiteness of $V^k$ implies that it is necessarily a singleton, which gives the desired result. \end{proof} The next result follows directly from Proposition \ref{finite_set}, and the fact that, for any $P^k\in E^k$ one has $\mathrm{deg} \, \varphi(P^k)\leq \mathrm{rank}(\dot{\Delta}^0)$. \begin{corollaire}\label{alg2} Suppose that the payoff function $g$ and the transition function $q$ take only rational values.
Then $v_0^k$ is algebraic of degree at most $\mathrm{rank}(\dot{\Delta}^0)$.\end{corollaire} \subsection{Speed of convergence and Puiseux expansion} Fix $1\leq k\leq n$ throughout this section. Let us start by recalling the definition of a Puiseux series. A map $f:(0,\varepsilon)\to \ensuremath{\mathbb C}$ is a \defn{Puiseux series} if there exist $N\in \ensuremath{\mathbb N}^*$, $m_0\in \ensuremath{\mathbb Z}$ and a sequence $(b_m)_{m\geq m_0}$ in $\ensuremath{\mathbb C}$ such that: $$f(\lambda)=\sum_{m\geq m_0}b_m \lambda^{m/N}$$ Any bounded Puiseux series satisfies $m_0\geq 0$, so that, in particular, it converges as $\lambda$ vanishes. The following result, due to Puiseux \cite{puiseux1850}, will be referred to as the \emph{Puiseux theorem}. \emph{For any bi-variate polynomial $P(\lambda,w)$ satisfying $\mathrm{deg}_w P\geq 1$, there exists $\lambda_0>0$ such that the roots of $P(\lambda,\,\cdot\,)$ are Puiseux series in the interval $(0,\lambda_0)$}.\\ By the Puiseux theorem, the set of roots of all polynomials $P^k\in E^k$ satisfying $\mathrm{deg}_w P^k\geq 1$ is a finite set of Puiseux series. Let this set of series be denoted by $W^k$. Our next result follows directly from Theorem \ref{charRD} $(iii)$ and the Puiseux theorem. \begin{proposition}\label{boundpuiseux} The following assertions hold: \begin{itemize} \item[$(i)$] There exist $P^k\in E^k$ and $\lambda^k_0>0$ satisfying: $P^k(\lambda,v_\lambda^k)=0$, for all $\lambda\in (0,\lambda^k_0)$ \item[$(ii)$] There exists $\lambda^k_0>0$ such that $\lambda\mapsto v_\lambda^k$ coincides with an element of $W^k$ on $(0,\lambda^k_0)$. \item[$(iii)$] As $\lambda$ vanishes one has: $|v^k_\lambda-v^k_0|=O(\lambda^{1/a})$, where $a=\mathrm{rank}(\dot{\Delta}^0)$. \end{itemize} \end{proposition} \begin{remarque} The main novelty of $(i)$ and $(ii)$ is the explicit construction of $E^k$ and $W^k$, and the fact that we use the Puiseux theorem directly, that is, without invoking the Tarski-Seidenberg elimination principle.
Concerning $(iii)$, not only is this bound sharper than all previously obtained bounds, but there are also good reasons to expect it to be tight (see Section \ref{tight}). \end{remarque} \begin{proof} $(i)$ and $(ii)$ By finiteness of the set $E^k$, there exists a common interval $(0,\varepsilon)$ where all the Puiseux series of $W^k$ are well-defined. By Theorem \ref{charRD} $(iii)$, for each $\lambda\in(0,1]$ there exists $P^k\in {E}^k$ satisfying $P^k(\lambda,v_\lambda^k)=0$. Consequently, for any $\lambda\in(0,\varepsilon)$, the point $(\lambda,v_\lambda^k)\in \ensuremath{\mathbb R}^2$ lies on the graph of one of the Puiseux series in $W^k$. The continuity of $\lambda\mapsto v^k_\lambda$ implies that, as $\lambda$ goes to $0$, $v^k_\lambda$ may change from one Puiseux series to another only at points where two series intersect. Since two different Puiseux series cannot intersect infinitely many times on $(0,\varepsilon)$, there exists some $0<\varepsilon'<\varepsilon$ such that any two Puiseux series are either identical or disjoint on $(0,\varepsilon')$. Consequently, $\lambda\mapsto v^k_\lambda$ coincides with one of them on $(0,\varepsilon')$, which proves $(i)$ and $(ii)$, for $\lambda^k_0:=\varepsilon'$. \\ $(iii)$ Let $P^k\in E^k$ and $\lambda^k_0$ be given by $(i)$. By definition, $\varphi(P^k)$ is a univariate polynomial whose degree is bounded by $\mathrm{deg}_w P^k$. Moreover, there exist $0\leq s\leq Ln$ and $t\geq 1$ such that, for all $w\in \ensuremath{\mathbb R}$: $$P^k(\lambda,w)=\lambda^s \varphi(P^k)(w)+O(\lambda^{s+t}),\quad \text{ as }\lambda \to 0$$ Since one also has $P^k(\lambda,v_\lambda^k)=0$ for all $\lambda\in(0,\lambda^k_0)$, it follows that $\varphi(P^k)(v^k_\lambda)=O(\lambda^{t})$ as $\lambda\to 0$. Thus, in particular, one has $\varphi(P^k)(v_0^k)=0$.
Consequently, there exist an integer $1\leq b\leq \mathrm{deg}\, \varphi(P^k)\leq \mathrm{rank}(\dot{\Delta}^0)$ and a polynomial $R^k$ such that $R^k(v^k_0)\neq 0$ and: $$\varphi(P^k)(w)=(w-v^k_0)^b R^k(w),\quad \forall w\in \ensuremath{\mathbb R}$$ Hence, taking $w=v^k_\lambda$ one has: \begin{eqnarray*} 0=P^k(\lambda,v^k_\lambda)=\lambda^s (v_\lambda^k-v^k_0)^b R^k(v_\lambda^k)+ O(\lambda^{s+t}),\quad \text{ as }\lambda \to 0\end{eqnarray*} which implies $|v^k_\lambda-v_0^k|= O(\lambda^{t/b})$ for $\lambda$ close to $0$. The result follows, as the exponent $t/b$ is smallest for $t=1$ and $b=\mathrm{rank}(\dot{\Delta}^0)$. \end{proof} \section{Concluding remarks} \subsection{Tightness of the bounds}\label{tight} \paragraph{Simple stochastic games.} A \emph{simple stochastic game} is one satisfying $$\min(|I^k|,|J^k|)=1,\quad \text{ for all } 1\leq k\leq n$$ In particular, Markov decision processes are simple stochastic games, as they can be modeled as stochastic games where $|J^k|=1$ for all $1\leq k\leq n$. For these games, one has $L=\prod_{k=1}^n\min(|I^k|,|J^k|)=1$ so that, by Proposition \ref{boundpuiseux}, $v_\lambda^k$ converges to $v^k_0$ at a rate $O(\lambda)$ and there exist polynomials $a^k_0(\lambda)$ and $a^k_1(\lambda)$ of degree at most $n$ such that $a^k_1(\lambda)v_\lambda^k+ a^k_0(\lambda)=0$ for all sufficiently small $\lambda$. \paragraph{Absorbing games. } An \emph{absorbing game} is one satisfying $|I^k|=|J^k|=1$ for all $2\leq k\leq n$, as one can assume with no loss of generality that state $1$ is the unique non-absorbing state and that both players have one action at every other state. Hence, $L=\min(|I^1|,|J^1|)$. By Proposition \ref{boundpuiseux} the characterising polynomial $P^1(\lambda,w)$ of $v_\lambda^1$ is of degree at most $L$ in $w$. The following example, due to Kohlberg \cite{kohlberg74}, shows that this bound is tight.
\begin{exemple}\em For any $p \geq 1$, consider the following absorbing game of size $p\times p$ introduced by Kohlberg \cite{kohlberg74}: $$ \begin{pmatrix} 1^* & 0^* & \dots & 0^*\\ 0 & 1^* & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0^*\\ 0 & \dots & 0 & 1^* \end{pmatrix}$$ where $c^*$ indicates a stage payoff of $c$ and a certain transition to an absorbing state with payoff $c$. For every $\lambda\in(0,1)$, the entire matrix is the unique Shapley-Snow kernel of $\mathcal{G}^1(\lambda,v^1_\lambda)$ so that the characterising polynomial for $v^1_\lambda$ is given by the following equation: $$P^1(\lambda,w)=\det\begin{pmatrix} 1-w & -w &\dots & -w \\ -\lambda w & 1-w& & \vdots\\ \vdots & & \ddots & -w\\ -\lambda w & \dots & -\lambda w & 1-w\end{pmatrix}=(1-w)^p+\lambda R(w)+o(\lambda) $$ for some univariate polynomial $R$ satisfying $R(1)\neq 0$. From the equality $P^1(\lambda,v^1_\lambda)=0$ one deduces: $$v_\lambda^1=1-\lambda^{1/p} (R(v_\lambda^1))^{1/p}+o(\lambda^{1/p})$$ Hence, $v^1_\lambda$ converges to $1$ at a rate $\lambda^{1/p}$. As $p=L$, this example hits the bound of Proposition \ref{boundpuiseux} $(iii)$, namely $ |v^1_\lambda - v^1_{0}| = O(\lambda^{\frac{1}{L}})$. \end{exemple} \paragraph{The general case.} Hansen et al. \cite{HKMT11} proved that, for a game with state-independent action sets $I$ and $J$ of common size $m$ and rational data, the algebraic degree of $v^k_\lambda$ (and, similarly, of the limit $v^k_0$) is bounded by $(2m+5)^n$, the best known bound so far. An example is also provided in \cite{HKMT11} of a game with $n+1$ states satisfying $|I^1|=|J^1|=1$ and $|I^k|=|J^k|=m$ for $2\leq k \leq n+1$, and where the algebraic degree of the discounted values is $m^n$. Note that $m^n$ coincides with $L:=\prod_{k=1}^{n+1}\min(|I^k|,|J^k|)$ in this example. Hence, there exists a family of stochastic games of arbitrary size, both in states and in actions, such that the algebraic degree of $v_\lambda^k$ is $L$.
In this sense, Theorem \ref{charRD} $(iv)$ provides a tight bound for the algebraic degrees of $v^k_\lambda$. Because the algebraic degree of $v_\lambda^k$, the algebraic degree of $v^k_0$ and the speed of convergence of $v_\lambda^k$ to $v_0^k$ are closely related to each other, it is natural to think that the bounds we have obtained for the latter two are tight as well. However, we have not been able to establish this. \subsection{Computing the exact values}\label{algo} Fix $1\leq k\leq n$. By Proposition \ref{finite_set}, the limit value $v^k_0$ belongs to $V^k$, which is a set of roots of finitely many polynomials. The finiteness of this set was crucial in obtaining a new proof for the convergence of the values. We argue here that the set $V^k$ can also be used for algorithmic purposes, namely, if we are looking for the \emph{exact value} of $v_0^k$ in the case where all the entries of $g$ and $q$ are rational. \paragraph{An efficient algorithm.} For an algebraic number $w\in \ensuremath{\mathbb R}$, its \emph{minimal polynomial} is the unique monic polynomial $P$ of least degree satisfying $P(w)=0$. By Kannan, Lenstra and Lovasz \cite{KLL88}, there exists an algorithm (referred to in the sequel as the KLL algorithm) that computes the minimal polynomial $P$ of $w$, given a bound on the degree of $P$, a bound on the bit-size\footnote{For any integer $p\in \ensuremath{\mathbb Z}$, its bit-size is given by $\mathrm{bit}(p):=\lfloor\log_2 |p|\rfloor+1$. For any rational number $p/q$ one defines $\mathrm{bit}(p/q):=\mathrm{bit}(p)+\mathrm{bit}(q)$.} of the coefficients of $P$ and an $\varepsilon$-approximation of $w$, for $\varepsilon$ small enough.
A precise upper bound for $\varepsilon$ is provided in \cite{KLL88}, as a function of the bounds on the coefficients and the degree of the minimal polynomial.\\ As already noted by one of the authors \cite{OB19}, the approach proposed in this manuscript yields the best known bounds concerning the algebraic degree of $v^k_0$ and the coefficients of its minimal polynomial. Plugging them into the KLL algorithm, together with some approximation $v^k_\lambda$, yields a method for computing the exact value of $v^k_0$. A precise upper bound for $\lambda$ so that $v^k_\lambda$ is a good enough approximation of $v^k_0$ in the KLL algorithm is obtained in \cite[Proposition 4.1]{OB19}. \paragraph{An alternative algorithm.} Consider now the following alternative method for computing $v^k_0$ exactly. First, compute the \emph{separation} of $V^k$, that is: $$\delta:=\min\{|w-w'|\,|\,w,w'\in V^k,\ w'\neq w\}$$ Second, compute a $\delta/2$-approximation for $v^k_0$, that is, $v^k_\lambda$ for an appropriate $\lambda$ such that $|v_\lambda^k-v^k_0|\leq \delta/ 2$, where a precise explicit expression for $\lambda$ is given in \cite[Proposition 4.1]{OB19}. By the choice of $\delta$ one clearly has: $$\left[v^k_\lambda-\frac{\delta}{2},v^k_\lambda+\frac{\delta}{2}\right]\cap V^k =\{v^k_0\}$$ Hence, the exact value of $v^k_0$ is obtained. \begin{remarque}Though the computation of $V^k$ may be problematic for large games, the alternative algorithm has the advantage of being simple and self-contained. \end{remarque} \subsection{Another characterising polynomial}\label{altpoly} For a given $\lambda\in(0,1]$ and $1\leq k\leq n$, we proved in Section \ref{characpoly} the existence of a characterising polynomial for $v^k_\lambda$, that is, one that satisfies $P^k(\lambda,v_\lambda^k)=0$.
Our construction requires two steps: first, we define a reduced matrix game $\dot{\Delta}^k-w\dot{\Delta}^0$ by taking a Shapley-Snow kernel of each local game $\mathcal{G}^k(\lambda,v_\lambda)$; second, we use the rank drop condition of the values for this game. In this paragraph, we propose the following different method for determining a characterising polynomial, based on 1) the theory of Shapley and Snow, but this time applied to the game $\Delta^k-w\Delta^0$ directly, and 2) the equality $\mathsf{val}((-1)^n(\Delta^k-v_\lambda^k\Delta^0))=0$ established by the authors in \cite{AOB18a} (see Theorem \ref{charac1}). \\ Fix $1\leq k\leq n$. Set $\mathcal{I}:=I^1\times \dots\times I^n$ and $\mathcal{J}:=J^1\times \dots\times J^n$. The game $\Delta^k-v_\lambda^k \Delta^0$ is an $\mathcal{I}\times \mathcal{J}$-matrix game, so that it admits a Shapley-Snow kernel. Let $\overline{\mathcal{I}}^k\subset \mathcal{I}$ and $\overline{\mathcal{J}}^k\subset \mathcal{J}$ be the subsets of actions that define one of its Shapley-Snow kernels. Let $\overline{\Delta}^0$ and $\overline{\Delta}^k$ be the $\overline{\mathcal{I}}^k\times \overline{\mathcal{J}}^k$ sub-matrices of $\Delta^0$ and $\Delta^k$, respectively, and for any $w\in \ensuremath{\mathbb R}$ set: $$\begin{cases}{G}^k(w):= (-1)^n({\Delta}^k-w{\Delta}^0)\\ \overline{G}^k(w):= (-1)^n(\overline{\Delta}^k-w\overline{\Delta}^0)\\ \overline{P}^k(w):=\det(\overline{\Delta}^k-w\overline{\Delta}^0)\end{cases}$$ The polynomial thus obtained is another characterising polynomial of $v_\lambda^k$. \begin{prop}\label{char3} The polynomial $\overline{P}^k$ satisfies: $$\overline{P}^k(v_\lambda^k)=0,\quad \overline{P}^k\not\equiv 0,\quad \text{ and }\quad \mathrm{deg} \overline{P}^k = \mathrm{rank}(\overline{\Delta}^0)$$ \end{prop} \begin{proof} By Theorem \ref{charac1}, one has $\mathsf{val}(G^k(v_\lambda^k))=0$.
By Proposition \ref{lemme_G-vU} $(iii)$ one then has: \begin{equation}\label{three}\mathsf{val}(G^k(v_\lambda^k))=\mathsf{val}(\overline{G}^k(v_\lambda^k))=\det(\overline{G}^k(v_\lambda^k))=0 \end{equation} Therefore, $\overline{P}^k(v_\lambda^k)=0$. To prove $\overline{P}^k\not\equiv 0$ it is enough to show that its derivative $(\overline{P}^k)'$ is not identically $0$. For any two square matrices $M$ and $H$ of the same size, by Jacobi's formula one has: $$\det(M+\varepsilon H)=\det(M)+\varepsilon \ensuremath{\operatorname{tr}}(\transp{\mathrm{co}(M)}H)+o(\varepsilon),\quad \text{ as } \varepsilon\to 0$$ Equivalently, the directional derivative of $\det(M)$ in the direction $H$ is $\ensuremath{\operatorname{tr}}(\transp{\mathrm{co}(M)}H)$. Applying this result to $M=\overline{\Delta}^k-w\overline{\Delta}^0$ and $H=-\overline{\Delta}^0$ one obtains: $$(\overline{P}^k)'(v_\lambda^k)= \ensuremath{\operatorname{tr}}(\transp{\mathrm{co}(\overline{G}^k(v^k_\lambda))}(-\overline{\Delta}^0))$$ By Proposition \ref{lemme_G-vU} $(i)$ and $(iv)$, the matrix $\mathrm{co}(\overline{G}^k(v^k_\lambda))$ is not identically zero and has all its entries of the same sign. Similarly, by Lemma \ref{positivite}, all the entries of $-\overline{\Delta}^0$ are non-zero and of the same sign. Consequently, $(\overline{P}^k)'(v_\lambda^k)\neq 0$, so that $\overline{P}^k\not\equiv 0$. To obtain the degree of $\overline{P}^k$, we proceed as in the proof of Proposition \ref{char2}. By \cite[Proposition 4.6]{demmel97}, for any square matrices $A$ and $B$ the polynomial $P(w):=\det(A+w B)$ is either identically $0$ or of degree $\mathrm{rank}(B)$. \end{proof} \begin{remarque} The bound on the degree of $\dot{P}^k$ is considerably better than the bound we obtained for $\overline{P}^k$.
Indeed, one has: $$\begin{cases} \mathrm{deg}\dot{P}^k\leq \mathrm{rank}(\dot{\Delta}^0)\leq \prod_{k=1}^n \min(|I^k|,|J^k|)\\ \mathrm{deg}\overline{P}^k=\mathrm{rank}(\overline{\Delta}^0)\leq \min(\prod_{k=1}^n|I^k|, \prod_{k=1}^n |J^k|)\end{cases}$$ \end{remarque} \begin{remarque} We have exhibited two different constructions that lead to a characterising polynomial for $v^k_\lambda$, that is: either we consider a Shapley-Snow kernel of the game $(-1)^n(\Delta^k-v_\lambda^k \Delta^0)$ and use the fact that this game has value $0$, or we consider a sub-matrix of maximal rank of the reduced game $\dot{\Delta}^k-v_\lambda^k\dot{\Delta}^0$ obtained by taking a Shapley-Snow kernel at each local game. If the following condition holds: $$S(\mathrm{co}(\dot{\Delta}^k-v_\lambda^k\dot{\Delta}^0)) \neq 0$$ then $(-1)^n(\dot{\Delta}^k-v_\lambda^k \dot{\Delta}^0)$ is a Shapley-Snow kernel of $(-1)^n({\Delta}^k-v_\lambda^k{\Delta}^0)$, in which case the same polynomial can be obtained with the two constructions. \end{remarque} \section{Appendix} \section*{Appendix A: Kronecker products} \label{kro} Let us start by recalling the definition of the Kronecker product of two matrices and of the Kronecker determinant of an array of matrices. \paragraph{Definition A1.} The \defn{Kronecker product} of two matrices $A$ and $B$ of sizes $m\times n$ and $p\times q$ respectively, denoted by $A\otimes B$, is an $mp \times nq$ matrix defined by blocks as follows: \[A \otimes B = \begin{pmatrix} a_{11} B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1} B & \cdots & a_{mn} B \end{pmatrix}\] \paragraph{Definition A2.} The \defn{Kronecker determinant} of an $n\times n$ array of matrices: $$ \begin{pmatrix} A^1_1& \dots & A^1_n\\ \vdots & \ddots & \vdots \\ A^n_1 & \dots & A^n_n\end{pmatrix}$$ is well-defined if and only if, for each $1\leq k\leq n$, the matrices $A^k_1,\dots,A^k_n$ are of the same size.
In this case, it is given by: $$ \det\nolimits_\otimes\begin{pmatrix} A^1_1& \dots & A^1_n\\ \vdots & \ddots & \vdots \\ A^n_1 & \dots & A^n_n\end{pmatrix}:= \displaystyle\sum_{\sigma\in \Sigma(n)} \epsilon(\sigma) A^1_{\sigma(1)} \otimes \cdots \otimes A^n_{\sigma(n)} $$ where $\Sigma(n)$ is the set of permutations of $\{1,\dots,n\}$ and $\epsilon(\sigma)$ is the signature of $\sigma$. \paragraph{Properties A3.} The following well-known properties have been used in this manuscript: \begin{itemize} \item[$(K1)$] The Kronecker product $\otimes$ is bilinear and associative, but not commutative. \item[$(K2)$] Let $A_1,\dots,A_n$ and $B_1,\dots,B_n$ be some matrices such that the products $A_kB_k$ are well-defined. Then $(A_1 \otimes \cdots \otimes A_n) (B_1 \otimes \cdots \otimes B_n) = (A_1 B_1) \otimes \cdots \otimes (A_n B_n)$. \item[$(K3)$] The Kronecker determinant $\det_\otimes$ has properties similar to those of the usual determinant, that is: it is multilinear and alternating, but only with respect to the columns. Indeed, because of the non-commutativity of the Kronecker product, rows and columns do not play the same role, and the determinant needs to be developed by columns. \end{itemize} \medskip In order to express the last two properties we need to introduce the \emph{canonical bijection} mapping the product set $\{1,\dots, p_1\}\times \dots\times \{1,\dots, p_n\}$ onto the set $\{1,\dots, \prod_{\ell=1}^n p_\ell\}$ using the lexicographical order. That is, for any $p_1,\dots,p_n\in \ensuremath{\mathbb N}^*$ set: \begin{eqnarray*} \{1,\dots,p_1\}\times \dots\times \{1,\dots,p_n\} &\rightarrow &\{1,\dots, \prod\nolimits_{\ell=1}^n p_\ell\}\\ (i_1,\dots, i_n) & \mapsto & (i_1-1) C_1+\dots+ (i_n-1)C_n+1 \end{eqnarray*} where $C_\ell:=\prod_{r=\ell+1}^n p_r$ for each $1\leq \ell< n$ and $C_n=1$. \medskip \begin{itemize} \item[$(K4)$] Let $A_k$ be a $p_k\times q_k$ matrix, for all $1\leq k\leq n$.
Let $r$ and $s$ be, respectively, the images of $(i_1,\dots,i_n)\in \{1,\dots,p_1\}\times \dots\times \{1,\dots, p_n\}$ and $(j_1,\dots,j_n)\in \{1,\dots,q_1\}\times \dots\times \{1,\dots, q_n\}$ with respect to the canonical bijection. Then the entry $(r,s)$ of $A_1 \otimes \cdots \otimes A_n$ is given by: $$(A_1 \otimes \cdots \otimes A_n)^{r s}= A_{1}^{i_1j_1}\cdots A_{n}^{i_n j_n}$$ \item[$(K5)$] Let $A_{11},\dots, A_{nn}$ be an $n\times n$ array of matrices such that for all $1\leq k\leq n$, the matrices on the $k$-th row $A_{k1},\dots,A_{kn}$ are of the same size $p_k\times q_k$. Let $r$ and $s$ be, respectively, the images of $(i_1,\dots,i_n)\in \{1,\dots,p_1\}\times \dots\times \{1,\dots, p_n\}$ and $(j_1,\dots,j_n)\in \{1,\dots,q_1\}\times \dots\times \{1,\dots, q_n\}$ with respect to the canonical bijection. Then the entry $(r,s)$ of the matrix $\det_\otimes (A_{11},\dots, A_{nn})$ satisfies: $$\det\nolimits_\otimes \begin{pmatrix} A_{11}& \dots & A_{1n}\\ \vdots & \ddots & \vdots \\ A_{n1} & \dots & A_{nn}\end{pmatrix}^{r s}= \det \begin{pmatrix} A^{i_1 j_1}_{11}& \dots & A^{i_1j_1}_{1n}\\ \vdots & \ddots & \vdots \\ A^{i_nj_n}_{n1} & \dots & A^{i_nj_n}_{nn}\end{pmatrix} $$ \end{itemize} \section*{Appendix B: Proof of Proposition \ref{lemme_G-vU}}\label{proofsssk} Let us start by recalling its statement (see Section \ref{sskth}). \paragraph{Proposition \ref{lemme_G-vU}.} \emph{ Let $\dot{G}$ be a Shapley-Snow kernel of $G$, corresponding to a basic solution $(x,y)$. Then: \begin{enumerate} \item[$(i)$] $S(\mathrm{co}(\dot{G}-v\dot{U})) \neq 0$ \item[$(ii)$] $\det(\dot{G}-v\dot{U})=\mathrm{val}(\dot{G}-v\dot{U})=\mathrm{val}({G}-v{U})=0$ \item[$(iii)$] $\mathrm{Ker}(\dot{G}-v\dot{U})=<\dot{y}>$ and $\mathrm{Ker}(\transp{(\dot{G}-v\dot{U})})=<\dot{x}>$ \item[$(iv)$] $\mathrm{co}(\dot{G}-v\dot{U})=S(\mathrm{co}(\dot{G}-v\dot{U}))\, \dot{x}\transp{\dot{y}}$. \end{enumerate}} \bigskip \noindent The proof is based on three easy lemmas.
\paragraph{Lemma B1.} \emph{ For any square matrix $M$ one has: \begin{itemize} \item[$(i)$] $\det(M+wU)=\det(M)+w S(\mathrm{co}(M))$, for all $w\in \ensuremath{\mathbb R}$ \item[$(ii)$] The map $w\mapsto S(\mathrm{co}(M+w U))$ is constant \item[$(iii)$] The maps $w\mapsto \mathrm{co}(M+wU)\textbf{1}$ and $w\mapsto \transp{\mathrm{co}(M+wU)}\textbf{1}$ are constant\\ \end{itemize} } \begin{proof} $(i)$ Let $M$ be some square matrix. The function $w\mapsto \det(M+w U)$ is a polynomial in $w$. Subtracting one row from all other rows of $M+wU$, it is clear that its degree is at most $1$. From the identity $\ensuremath{\operatorname{tr}}(\transp{M}U)=S(M)$ and from Jacobi's formula: $$\det(M+\varepsilon H)=\det(M)+\varepsilon\ensuremath{\operatorname{tr}}(\transp{\mathrm{co}(M)}H)+o(\varepsilon), \quad \text{ as } \varepsilon \to 0$$ both of which hold for any square matrix $M$, one deduces $\frac{\partial}{\partial w}\det(M+w U)(0)=S(\mathrm{co}(M))$. Hence, $\det(M+w U)=\det(M)+ w S(\mathrm{co}(M))$ for any square matrix $M$ and any $w\in \ensuremath{\mathbb R}$.\\ $(ii)$ Applying $(i)$ to $M+wU$ and $-w$ yields: $$\det(M)=\det((M+wU)-wU)=\det(M+wU)-w S(\mathrm{co}(M+wU))$$ Comparing with $(i)$, one obtains $S(\mathrm{co}(M))=S(\mathrm{co}(M+wU))$ for any $M$ and $w$.\\ $(iii)$ By the symmetric role of both players, it is enough to prove the first statement. Let $m\in\ensuremath{\mathbb N}^*$ be the size of $M$, and let $M_1,\dots,M_m$ be its rows. Then, for each $\ell=1,\dots,m$ the $\ell$-th component of the vector $\mathrm{co}(M)\textbf{1}$ satisfies: \begin{eqnarray*} (\mathrm{co}(M)\textbf{1})^\ell&=&\det(M_1,\dots,M_{\ell-1},\textbf{1},M_{\ell+1},\dots,M_m)\\ &=& \det(M_1+w\textbf{1},\dots,M_{\ell-1}+w\textbf{1},\textbf{1},M_{\ell+1}+w\textbf{1},\dots,M_m+w\textbf{1})\\ &=& (\mathrm{co}(M+wU)\textbf{1})^\ell \end{eqnarray*} where the second equality follows from the properties of the determinant, as we added $w$ times the $\ell$-th row (which equals $\textbf{1}$) to the other rows.
\end{proof} \paragraph{Lemma B2.} \emph{ Let $\dot{G}$ be a Shapley-Snow kernel for $G$. Then $\dot{G}+w\dot{U}$ is a Shapley-Snow kernel of the translated game $G+wU$, for any $w\in \ensuremath{\mathbb R}$.}\\ \begin{proof} To prove that $\dot{G}+w\dot{U}$ is a Shapley-Snow kernel, it is enough to check properties $(1)$ and $(2)$ of Theorem \ref{SSK}. On the one hand, $S(\mathrm{co}(\dot{G}+w\dot{U}))=S(\mathrm{co}(\dot{G}))\neq 0$ by Lemma B1 $(ii)$. On the other hand, it follows from Lemma B1 $(iii)$ that the following strategies do not depend on $w$: $$\dot{x}(w)= \frac{\mathrm{co}(\dot{G}+w\dot{U})}{S(\mathrm{co}(\dot{G}+w\dot{U}))}\mathbf{1}, \quad \dot{y}(w)= \frac{\transp{\mathrm{co}}(\dot{G}+w\dot{U})}{S(\mathrm{co}(\dot{G}+w\dot{U}))}\mathbf{1} $$ which completes the proof. \end{proof} \paragraph{Lemma B3.}\emph{ Let $M$ be a square matrix of size $a\in \ensuremath{\mathbb N}^*$ and rank $a-1$, and let $x$ and $y$ be such that $\mathrm{Ker}(\transp{M})=<x>$ and $\mathrm{Ker}(M)=<y>$. Then there exists a constant $\alpha\neq 0$ such that $\mathrm{co}(M)=\alpha \, x \transp{y}$.}\\ \begin{proof} Using the relation $\transp{A} \ \mathrm{co}(A) = \det(A) \ensuremath{\operatorname{Id}}$, which is valid for any square matrix $A$, and $\det(M) = 0$, we get $\transp{M} \mathrm{co}(M) = 0$. Hence all the columns of $\mathrm{co}(M)$ belong to $\mathrm{Ker}(\transp{M})=<x>$, so that there exists a vector $y'$ with $\mathrm{co}(M) = x \transp{y'}$. This equality shows that the rows of $\mathrm{co}(M)$ are proportional to $\transp{y'}$, and a symmetric argument (using $\mathrm{co}(M)\transp{M}=0$ and $\mathrm{Ker}(M)=<y>$) gives that the rows are proportional to $\transp{y}$. Therefore, $y$ and $y'$ are proportional. Let $\alpha\in \ensuremath{\mathbb R}$ be such that $y'=\alpha y$ so that $\mathrm{co}(M) = \alpha x \transp{y}$. As $M$ is of rank $a-1$, the matrix $\mathrm{co}(M)$ is non-zero, so that $\alpha\neq 0$, which proves the result.
\end{proof} \paragraph{Proof of Proposition \ref{lemme_G-vU}.} $(i)$ By Lemma B2, $\dot{G}-v\dot{U}$ is a Shapley-Snow kernel for $G-vU$ so that, in particular, $S(\mathrm{co}(\dot{G}-v\dot{U}))\neq 0$. \\[0.2cm] $(ii)$ For any matrix $M$ and $z\in \ensuremath{\mathbb R}$, clearly $\mathrm{val}(M+zU)=\mathrm{val}(M)+z$. Hence, the formulae of Theorem \ref{SSK} $(3)$ yield: $$0=\mathrm{val}({G}-vU)=\mathrm{val}(\dot{G}-v\dot{U})=\frac{\det(\dot{G}-v\dot{U})}{S(\mathrm{co}(\dot{G}-v\dot{U}))}$$ $(iii)$ By the symmetric role of both players, it is enough to prove the first statement. The matrix $\dot{G}-v\dot{U}$ is not invertible by $(ii)$, and its matrix of cofactors $\mathrm{co}(\dot{G}-v\dot{U})$ is non-zero, thanks to $(i)$. Hence, if $1\leq b\leq \min(|I|,|J|)$ denotes its size, one has $\mathrm{rank}(\dot{G}-v\dot{U})=b-1$ or, equivalently $\mathrm{dim}(\mathrm{Ker}(\dot{G}-v\dot{U}))=1$. Yet by Theorem \ref{SSK} $(4)$, one has $\dot{G} \dot{y} =v \dot{\textbf{1}}$ and $\dot{y}\neq 0$, so that $(\dot{G}-v \dot{U})\dot{y}=0$. Consequently $\mathrm{Ker}(\dot{G}-v\dot{U})=<\dot{y}>$. \\[0.2cm] $(iv)$ It follows from Lemma B3, as the hypotheses are satisfied thanks to $(iii)$. Hence, $\mathrm{co}(\dot{G}-v\dot{U})=\alpha\, \dot{x}\transp{\dot{y}}$ for some $\alpha\neq 0$, so that $S(\mathrm{co}(\dot{G}-v\dot{U}))= S(\alpha\, \dot{x}\transp{\dot{y}})=\alpha$ because $S(\dot{x}\transp{\dot{y}})=1$. $\hfill \blacksquare$ \section*{Appendix C: Proof of Proposition \ref{rank_drop}}\label{proof37} Let us start by recalling the statement of this result (see Section \ref{sec_atkinson}). Recall that an $n\times (n+1)$ array of matrices $D=(M^k_\ell)$ is given, where for all $1\leq k\leq n$ the matrices $M^k_0,\dots,M^k_n$ are square and of equal size.
\paragraph{Proposition \ref{rank_drop}.} \emph{Suppose that all the entries of $(-1)^n\Delta^0$ are strictly positive, and that there exists $z\in S^M\subset \ensuremath{\mathbb R}^n$ and $(x^1,y^1),\dots, (x^n,y^n)$ satisfying, for each $1\leq k\leq n$: $$\begin{cases} x^k\in \mathrm{Ker} \transp{(M^k_0+z^1 M^k_1+\dots+z^n M^k_n)}, & x^k>0\\ y^k\in \mathrm{Ker}(M^k_0+z^1 M^k_1+\dots+z^n M^k_n), & y^k> 0\end{cases}$$ Then $z\in S^R$.}\\ Our proof relies on two lemmas: the first one comes from Muhi\v{c} and Plestenjak \cite[Lemma 3.4]{MP09}; the second is borrowed from Atkinson \cite[Chapter 6]{atkinson72}. We include both proofs for completeness, as the results are stated slightly differently in \cite{MP09} and \cite{atkinson72}. \paragraph{Lemma C1.} \emph{ Let $A,B$ be two square matrices of the same size $m$, and let $v\in \ensuremath{\mathbb R}$ be such that $\det(A + v B) = 0$. Suppose there exists $x\in \mathrm{Ker}(\transp{(A + v B)})$ and $y\in \mathrm{Ker}(A + v B)$ such that $\transp{x} B y \neq 0$. Then $ \mathrm{rank}(A + v B) < \max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(A + wB)$. }\\ \begin{proof} Let $r:= \mathrm{rank}(A + v B)$ and suppose that $r=\max_{w \in \ensuremath{\mathbb R}} \mathrm{rank}(A + wB)$. Note that $r<m$, as the kernel of $A + v B$ contains at least one non-zero vector. As the rank of a matrix is the size of its largest invertible square sub-matrix, $A+v B$ admits some invertible $r\times r$ sub-matrix. By the continuity of the determinant, there exists $\varepsilon>0$ such that this sub-matrix remains invertible for all $w$ in the interval $(v-\varepsilon,v+\varepsilon)$, so that $\mathrm{rank}(A + wB)\geq r$ in this interval. As we have supposed that $r$ is the maximal rank, the converse inequality also holds, so that $\mathrm{rank}(A + wB)=r$ on $(v-\varepsilon,v+\varepsilon)$.
This implies the existence of a vector $y(w)\in \ensuremath{\mathbb R}^m$ with polynomial entries satisfying $(A+w B)y(w)=0$ on $(v-\varepsilon,v+\varepsilon)$ and $y(v)=y$. Differentiating the first equality with respect to $w$, one obtains: $$ B y(w)+ (A+w B)y'(w)=0$$ Multiplying by $\transp{x}$ and taking $w=v$ then yields: $$ \transp{x} B y+ \transp{x}(A+v B)y'(v)=0$$ where $\transp{x}(A+v B)=0$ by the choice of $x$. Hence $\transp{x} B y=0$, a contradiction. \end{proof} \paragraph{Lemma C2.} \emph{ Let $z\in S^M$ and let $y^k\neq 0$ belong to $\mathrm{Ker}(M^k_0+z^1M^k_1+\dots+z^n M^k_n)$, for all $1\leq k\leq n$. Then $z\in S^\Delta$ and $(y^1\otimes\dots\otimes y^n) \in \mathrm{Ker}(\Delta^k-z^k \Delta^0)$ for all $1\leq k\leq n$.}\\ \begin{proof} Let $z=(z^1,\dots, z^n)\in S^M$. The existence of $y^k\neq 0$ such that $y^k \in \mathrm{Ker}(M^k_0+z^1M^k_1+\dots+z^n M^k _n)$ for all $1\leq k\leq n$ follows from the fact that the matrices $M^k_0+z^1M^k_1+\dots+z^n M^k _n$ are singular. Moreover, one has $y^1\otimes\dots\otimes y^n \neq 0$, as $A\otimes B=0$ if and only if either $A=0$ or $B=0$. Fix $1\leq k\leq n$ and let $\ \widehat{}\ $ denote the omission of the $k$-th column.
Then: \begin{eqnarray*} \Delta^k (y^1\otimes\dots\otimes y^n) &=& (-1)^k \det\nolimits_{\otimes} \begin{pmatrix} M_0^1 y^1 & \dots &\widehat{M^1_k y^1} & \dots & M^1_n y^1\\ \vdots & &\vdots & & \vdots \\ M_0^n y^n &\dots &\widehat{M^n_k y^n} & \dots & M^n_n y^n \end{pmatrix} \\ &=& (-1)^{k+1} \sum_{\ell=1}^n z^\ell \det\nolimits_{\otimes} \begin{pmatrix} M_\ell^1 y^1 & \dots &\widehat{M^1_k y^1} & \dots & M^1_n y^1\\ \vdots & &\vdots & & \vdots \\ M_\ell^n y^n &\dots &\widehat{M^n_k y^n} & \dots & M^n_n y^n\end{pmatrix}\\ &=& (-1)^{k+1} z^k \det\nolimits_{\otimes} \begin{pmatrix} M_k^1 y^1 & \dots &\widehat{M^1_k y^1} & \dots & M^1_n y^1\\ \vdots & &\vdots & & \vdots \\ M_k^n y^n &\dots &\widehat{M^n_k y^n} & \dots & M^n_n y^n\end{pmatrix} \\[0.25cm] &=& z^k \Delta^0 (y^1\otimes\dots\otimes y^n) \end{eqnarray*} Indeed, the first equality follows from $(K2)$, the second is a consequence of $(K3)$ and the equalities $M^{\ell'}_0 y^{\ell'}= -\sum_{\ell=1}^n z^\ell M^{\ell'}_\ell y^{\ell'}$ which hold for all $1\leq \ell' \leq n$, the third follows from the fact that, for all $\ell\neq k$, the array of matrices has two equal columns so that its Kronecker determinant vanishes, and finally the last equality is obtained by taking a cyclic permutation of the columns (of matrices) which has signature $(-1)^{k+1}$. Hence $$(\Delta^k-z^k \Delta^0)(y^1\otimes\dots\otimes y^n)=0$$ or, equivalently, $y^1\otimes\dots\otimes y^n\in \mathrm{Ker}(\Delta^k-z^k\Delta^0)$ and $\det(\Delta^k-z^k\Delta^0)=0$. The result follows as this holds for every $1\leq k\leq n$. \end{proof} \bigskip We are now ready to prove Proposition \ref{rank_drop}. For any matrix $M$, we write $M>0$ to indicate $M\geq 0$ and $M\neq 0$. \paragraph{Proof of Proposition \ref{rank_drop}.} On the one hand, ${x}^1 \otimes \cdots \otimes {x}^n>0$ since, for any pair of matrices, $A>0$ and $B>0$ imply $A\otimes B> 0$. Similarly, ${y}^1 \otimes \cdots \otimes {y}^n>0$.
Together with the assumption that all entries of $(-1)^n\Delta^0$ are strictly positive, it follows that: $$\transp{({x}^1 \otimes \cdots \otimes {x}^n)}\Delta^0 (y^1 \otimes \cdots \otimes {y}^n)\neq0$$ On the other hand, by Lemma C2, one has $z\in S^\Delta$ and ${y}^1 \otimes \cdots \otimes {y}^n\in \mathrm{Ker}({\Delta}^k-z^k{\Delta}^0)$ for all $1\leq k\leq n$. Reversing the roles of the players, one similarly has ${x}^1 \otimes \cdots \otimes {x}^n \in \mathrm{Ker}(\transp{({\Delta}^k-z^k{\Delta}^0)})$ for all $1\leq k\leq n$. The result then follows from Lemma C1, applied to $\Delta^k$, $\Delta^0$, $-z^k$, ${x}^1 \otimes \cdots \otimes {x}^n$ and $y^1 \otimes \cdots \otimes {y}^n$, for all $1\leq k\leq n$.$ \hfill \blacksquare$ \section*{Acknowledgements} The authors are very grateful for the comments and insight brought by the Editor and the two anonymous referees. \\ The second author gratefully acknowledges the support of the French National Research Agency, under grant ANR CIGNE (ANR-15-CE38-0007-01). \bibliographystyle{amsplain}
\section{Introduction} The b-quark mass is a fundamental parameter of QCD, and its accurate knowledge is needed for theoretical predictions of B meson decay rates. The understanding of the latter is a very active field of high energy physics research. At the same time the B meson decay constant plays a crucial role in the description of these phenomena. We focus our attention on the pseudoscalar $\mrm{B_s}$ meson, a system characterized by two different scales: the heavy quark mass ($m_{\rm b}\sim 5$~GeV) and the typical QCD scale. The mass of the strange quark is around or below the latter. We fix it to its physical value through $\mk$ as in \cite{mbar:pap3}. \section{HQET and step scaling method (SSM)} We deal with these two scales in (quenched) lattice QCD with the SSM introduced in \cite{deDivitiis:2003iy,deDivitiis:2003wy}, but constraining the large mass behaviour by HQET~\cite{hqet:pap1}. The computation of an observable $O(m_{\rm h})$ using the SSM is based on the identity \be\label{eq:SSM_identity} O(m_{\rm h},L_\infty)=O(m_{\rm h},L_0)\, \frac{\displaystyle O(m_{\rm h},L_1)}{\displaystyle O(m_{\rm h},L_0)}\, \cdots\, \frac{\displaystyle O(m_{\rm h},L_N)}{\displaystyle O(m_{\rm h},L_{N-1})}\, \frac{\displaystyle O(m_{\rm h},L_\infty)}{\displaystyle O(m_{\rm h},L_N)}\,, \ee where $m_{\rm h}$ stands generically for a heavy quark mass whose precise definition is needed only later. In order to be able to extract each factor in the continuum limit, the starting volume $L_0$ has to be small enough to properly account for the dynamics of the b-quark, using a relativistic $\rmO(a)$-improved action. A good choice is $L_0=0.4$~fm~\cite{deDivitiis:2003iy,deDivitiis:2003wy}, where lattice spacings of $a\approx0.012$~fm can easily be used. (Physical units are set using $r_0=0.5$~fm~\cite{pot:r0,Necco:2001gh,Guagnelli:2002ia}.) Furthermore, $L_\infty$ has to be large enough such that finite size effects in $O(m_{\rm h},L_\infty)$ are negligible.
In practice we will use $L_\infty \approx L_2=1.6$~fm. We will choose a fixed ratio $s=L_i/L_{i-1}$ in the step scaling functions \be\label{eq:SSM_def} \sigma_{O}(m_{\rm h},L_i)= \frac{\displaystyle O(m_{\rm h},L_i)}{\displaystyle O(m_{\rm h},L_{i-1})}\,. \ee The number $N$ and the scale ratio $s$ of the steps are in principle dependent on the considered observable and on the desired level of accuracy. It has been seen~\cite{deDivitiis:2003iy,deDivitiis:2003wy} that $(N,s)=(2,2)$ is a suitable choice for the mass and decay constant of the ${\rm B_s}$ meson. In HQET the step scaling functions are expanded as \be\label{eq:SSM_expansion} \sigma_{O}(m_{\rm h},L_i)=\sigma_{O}^{(0)}(L_i) +\frac{\displaystyle \sigma_{O}^{(1)}(L_i)}{\displaystyle L_im_{\rm h}} +{\rm O}\left(\frac{\displaystyle 1}{\displaystyle \left(L_im_{\rm h}\right)^2}\right)\, \ee at fixed $L_i$. We will see that the correction terms to the leading order are small for the masses of interest.\\ We first consider the case of a finite volume pseudoscalar meson mass, $O(m_{\rm h},L)=M_{\rm PS}(m_{\rm h},L)$, which will be defined in the following section. In this case, $\sigma_{O}^{(0)}=1$ and the first non-trivial term $\sigma_{O}^{(1)}$ is computable in the static approximation of HQET. We further define \be\label{eq:def_x} x(m_{\rm h},L)\equiv\frac{\displaystyle 1}{\displaystyle LM_{\rm PS}(m_{\rm h},L)} = \frac{\displaystyle 1}{\displaystyle Lm_{\rm h}} + \rmO\left(\frac{\displaystyle 1}{\displaystyle \left(Lm_{\rm h}\right)^2}\right)\,, \ee as the natural non-perturbative dimensionless mass variable. The step scaling function for the meson mass is then written as \be\label{eq:sigma_m_2} \sigma_{\rm m}(x,L_i)\equiv \frac{\displaystyle M_{\rm PS}(m_{\rm h},L_i)}{\displaystyle M_{\rm PS}(m_{\rm h},L_{i-1})}= 1+\sigma_{\rm m}^{\rm stat}(L_i)\cdot x+\Or(x^2)\,, \quad x=x(m_{\rm h},L_i)\,. \ee It is defined for all $x,L$.
The idea for its numerical evaluation is to compute $\sigma_{\rm m}^{\rm stat}(L)$ explicitly in the static approximation and fix the small remainder by the relativistic QCD data with quarks of masses of the physical charm quark and higher. In other words we interpolate to the physical b-quark mass. With the experimental mass of the $\mrm{B_s}$ meson, $\MBs=5.3675(18)\,\GeV$, we fix $x_2 = 1/(L_2\MBs)$ and the physical points corresponding to the b-quark are then given by \be\label{eq:x1_star} x_2 = 1/(L_2\MBs)\,,\quad x_{i-1}=2\sigma_{\rm m}(x_i,L_i)\cdot x_i\,. \ee The numerical results will have to be evaluated at these points. In the smallest volume we relate the meson mass to the renormalization group invariant (RGI) quark mass, $M_{\rm h}$, defining \be\label{eq:rho} \rho(x,L_0) \equiv \frac{\displaystyle M_{\rm PS}(m_{\rm h},L_0)}{\displaystyle M_{\rm h}}= \rho^{(0)}(L_0)+\rho^{(1)}(L_0)\cdot x+\Or(x^2)\,. \ee We thus have the connection of the $\mrm{B_s}$ meson mass and the RGI b-quark mass \be\label{eq:b_mass} M_{\rm b}=\frac{\displaystyle \MBs}{\displaystyle \rho(x_0,L_0)\cdot \sigma_{\rm m}(x_1,L_1)\cdot \sigma_{\rm m}(x_2,L_2)}\,. \ee For the decay constant the step scaling function \bea \sigma_{\rm f}(x,L_i)&\equiv& \frac{\displaystyle f_{\rm PS}(m_{\rm h},L_i )\sqrt{M_{\rm PS}(m_{\rm h},L_i)}}{ \displaystyle f_{\rm PS}(m_{\rm h},L_{i-1})\sqrt{M_{\rm PS}(m_{\rm h},L_{i-1})}} =\sigma_{\rm f}^{\rm stat}(L_i) +\sigma_{\rm f}^{(1)}(L_i)\cdot x+\Or(x^2) \label{eq:sigma_dc}\, \eea yields straightforwardly the connection between the finite volume decay constant and the infinite volume one. Note that the only approximation made in the above equations is to neglect finite size effects on mass and decay constant in the volume of linear extent $L_2$.
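As a quick numerical illustration of \eq{eq:b_mass}, the following Python sketch combines the interpolated central values quoted in the results section of this paper ($\rho=0.748$, $\sigma_{\rm m}(x_1,L_1)=1.0092$, $\sigma_{\rm m}(x_2,L_2)=1.0328$) with the experimental $\mrm{B_s}$ mass; it is a central-value check only, with no error propagation attempted.

```python
# Central-value check of M_b = M_Bs / (rho * sigma_m(x1,L1) * sigma_m(x2,L2)).
# Inputs are the interpolated results quoted in this paper; uncertainties ignored.
M_Bs = 5.3675        # GeV, experimental B_s meson mass
rho = 0.748          # rho(x_0, L_0): small-volume matching factor
sigma_m_1 = 1.0092   # sigma_m(x_1, L_1): intermediate-volume step
sigma_m_2 = 1.0328   # sigma_m(x_2, L_2): large-volume step

M_b = M_Bs / (rho * sigma_m_1 * sigma_m_2)
print(f"RGI b-quark mass: M_b = {M_b:.3f} GeV")
```

With these rounded inputs one recovers $M_{\rm b}\simeq 6.88$~GeV, compatible with the quoted $6.888(105)$~GeV; the small difference comes from rounding the intermediate factors.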
\section{Finite volume observables} \subsection{Relativistic QCD} Suitable finite volume observables are defined in the QCD \SF~\cite{SF:LNWW,SF:stefan1} with a space-time topology $L^3\times T$, where $T=2L$ and $C=C'=0$ is chosen for the boundary gauge fields, and $\theta=0$ for the phase in the spatial quark boundary conditions.\\ The \Oa-improved correlation functions $f_{\rm A}(m_{\rm h},L,x_0), f_{\rm P}(m_{\rm h},L,x_0)$ and $f_1(m_{\rm h},L)$ are defined and renormalized as in~\cite{deDivitiis:2003iy}, allowing one to compute the pseudoscalar meson decay constant \be\label{eq:dc_qcd} f_{\rm PS}(m_{\rm h},L)=\frac{\displaystyle -2}{\displaystyle \sqrt{L^3 M_{\rm PS}(m_{\rm h},L)}} \frac{\displaystyle f_{\rm A}(m_{\rm h},L,L)}{\displaystyle \sqrt{f_1(m_{\rm h},L)}}\stackrel{m_{\rm h}\to m_{\rm b}}{=} \fBs(L)\stackrel{L\to\infty}{=}\fBs \ee and the pseudoscalar meson mass \be\label{eq:meson_mass_qcd} M_{\rm PS}(m_{\rm h},L)=\frac{\displaystyle 1}{\displaystyle 2a} \ln{\left[\frac{\displaystyle f_{\rm A}(m_{\rm h},L,L-a)}{\displaystyle f_{\rm A}(m_{\rm h},L,L+a)}\right]} \stackrel{m_{\rm h}\to m_{\rm b}}{=}\MBs(L)\stackrel{L\to\infty}{=}\MBs\,. \ee For all observables computed in relativistic (quenched) QCD we employ the non-perturbatively \Oa-improved Wilson action\cite{impr:pap1,impr:pap3}. The data at finite heavy quark mass were published in~\cite{deDivitiis:2003iy,deDivitiis:2003wy}. They have been reanalyzed, taking into account the correlation between observables computed on the same gauge configurations. The statistical uncertainties on the renormalization constants and the lattice spacing are included before performing the continuum limit extrapolations; they do not appear as a separate uncertainty. \subsection{HQET} In the static approximation of HQET, unrenormalized correlation functions $\fastat$ and $\fonestat$ are defined in complete analogy to the relativistic ones, see \cite{Heitger:2003xg}.
As in this reference, we use the RGI static axial current, related to the bare one by a factor $\ZRGI$. It serves to define the RGI ratio \be \YRGI(L)=\ZRGI\frac{\displaystyle \fastat(L,L)}{\displaystyle\sqrt{\fonestat(L)}}\,, \ee which is related to the QCD decay constant $f_{\rm PS}$ via \be\label{eq:fb_hqet} f_{\rm PS}(m_{\rm h},L)\sqrt{L^{3}M_{\rm PS}(L)}= -2\Cps(\Lambda_\msbar/M_{\rm h})\times\YRGI(L)+\rmO(1/m_{\rm h})\,. \ee The function $\Cps(\Lambda_\msbar/M_{\rm h})$, defined in \cite{Heitger:2003xg}, can be accurately evaluated in perturbation theory; we use the 3-loop anomalous dimension $\gamma^{\rm PS}$ computed in \cite{Chetyrkin:2003vi}. Just like $\ZRGI$, it is needed only for $f_{\rm PS}(m_{\rm h},L_0)$; it cancels out in the step scaling functions. In analogy to \eq{eq:meson_mass_qcd} we further define $ \Gamma_{\rm stat}(L)=\frac{1}{ 2a} \ln\left[{ \fastat(L,L-a)}/{\fastat(L,L+a)}\right]\,. $ The static step scaling functions then read \be\label{eq:stat_sig} \sigma_{\rm f}^{\rm stat}(L_i)=\frac{\displaystyle 1}{\displaystyle 2^{3/2}} \frac{\displaystyle \YRGI(L_i)}{\displaystyle \YRGI(L_{i-1})},\quad \sigma_{\rm m}^{\rm stat}(L_i)=L_i\,[\Gamma_{\rm stat}(L_i)-\Gamma_{\rm stat}(L_{i-1})]\,, \quad L_i = 2L_{i-1}\,. \ee These quantities will be precisely computed by using the static action denoted by HYP2 in \cite{DellaMorte:2005yc} (see also \cite{HYP}), and the corresponding $\Oa$-improvement coefficients for the static axial current. The regularization independent part of the factor $\ZRGI$ is known from \cite{Heitger:2003xg}, while the regularization dependent one is computed in this work. \section{Numerical results for the $\mathbf{b}$ quark mass} The computation of $\sigma_{\rm m}(x,L_2)$ is performed at finite quark mass on lattices with $\beta=5.960$, $6.211,\ 6.420\ $ and resolutions $L_2/a=16,\ 24,\ 32$; the continuum limits for the three heaviest quark masses are shown on the left of \Fig{fig:sigma2_m}.
For the static step scaling function we took the results for $L=L_2$ from an extension\cite{Estat:me} of the work of the ALPHA collaboration\cite{stat:letter}, while in the intermediate volume ($L_1$) we simulated lattices with $5.960\leq\beta\leq6.737$. The continuum limit \be\label{eq:CL_S2_stat} \sigma_{\rm m}^{\rm stat}(L_2)=1.549(33)\,, \ee is used in the interpolation of $\sigma_{\rm m}(x,L_2)$ between values of $x$ corresponding to about the mass of the charm quark and the limit $\sigma_{\rm m}(0,L_2)=1$. It constrains the slope of the fitting curve to the cone shown in \Fig{fig:sigma2_m}. The result of the quadratic fit in $x$ reads \be\label{eq:res_s2m} \sigma_{\rm m}(x_2,L_2)=1.0328(11)\,, \ee hardly distinguishable from a purely static result. Analogously the interpolation of the step scaling function for the intermediate volume gives \be\label{eq:res_s1m} \sigma_{\rm m}(x_1,L_1)=1.0092(18)\,. \ee \begin{figure}[t] \vspace{-0.4cm} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.4]{PLOTS/CL_s2_MPS_pos06.eps} & \includegraphics[scale=0.4]{PLOTS/fig8_constraint_pos06.eps} \qquad \\ \end{tabular} \end{center} \vspace{-0.5cm} \caption[]{\label{fig:sigma2_m} {Continuum limit extrapolation and interpolation of $\sigma_\mrm{m}(x,L_2)$}} \end{figure} In the small volume only the relativistic data are needed to establish a finite volume relationship between the meson and the heavy quark masses. The renormalization is non-perturbatively achieved through the renormalization factor $\zM(g_0)$ and the $\Oa$-improvement terms computed in \cite{Capitani:1998mq,impr:babp,Heitger:2003ue}. Using \eq{eq:b_mass}, the interpolated value \be\label{eq:res_rho} \rho(x_0,L_0)=0.748(11)\,, \ee is combined with the above step scaling functions to find the scale and scheme independent number \be\label{eq:res_b_mass} M_{\rm b}=6.888(105)\,\GeV\hspace{0.3cm}\Rightarrow\hspace{0.3cm}\mbMSbar(\mbMSbar)=4.421(67)\,\GeV. 
\ee \section{Numerical results for the decay constant} For the computation of $\sigma_{\rm f}(x,L_2)$ the relativistic data originate from the same gauge configurations used earlier, while in the static case the decay constant in the bigger volume, \be\label{eq:dc_stat_Linfty} \YRGI(L_2)=-4.63(19)\,, \ee was again computed and extrapolated to the continuum limit as an extension \cite{Estat:me} of \cite{stat:letter}. The continuum extrapolation of the same quantity in the intermediate volume ($L=L_1$) is shown on the left of \Fig{fig:sigma2_dc}. The result \be\label{eq:YRGI_2L0} \YRGI(L_1)=-1.628(19) \ee is used together with (\ref{eq:dc_stat_Linfty}) and the relativistic data, as shown on \Fig{fig:sigma2_dc} (right), to get \be\label{eq:S2} \sigma_{\rm f}^{\rm stat}(L_2)=1.006(44),\hspace{1.0cm} \sigma_{\rm f}(x_2,L_2)=0.974(30)\,. \ee Similarly, but by extrapolating the step scaling function to the continuum limit rather than $\YRGI(L_1)$ and $\YRGI(L_0)$ separately, we obtain \be\label{eq:S1} \sigma_{\rm f}^{\rm stat}(L_1)=0.4337(44),\hspace{1.0cm} \sigma_{\rm f}(x_1,L_1)=0.4260(31)\,. \ee {\flushleft With the small volume results (see \Fig{fig:sv_dc})} \be\label{eq:SV_DC} \YRGI(L_0)=-1.347(13),\hspace{1.0cm} Y_{\rm PS}(x_0,L_0)= \frac{\displaystyle -\fBs(L_0)\sqrt{L_0^3 M_{\rm PS}(L_0)}}{\displaystyle 2\Cps(\Lambda_\msbar/M_{\rm b})} =-1.280(17)\,, \ee we finally arrive at the result \be\label{eq:res_fbs} \fBs=191(6)\,\MeV \,. \ee \section{Conclusions} The combination of the Tor Vergata strategy to compute properties of heavy-light mesons~\cite{deDivitiis:2003iy,deDivitiis:2003wy} with the expansion of all quantities in HQET~\cite{hqet:pap1}, changes extrapolations in the former computations into interpolations. As expected, our numerical results demonstrate that these are very well behaved. Indeed the higher order mass dependence of the step scaling functions is very weak, and in all but one step the static approximation alone gives very accurate results.
In the one exception ($Y_{\rm PS}(x_0,L_0)$, \fig{fig:sv_dc}) the $\rmO(1/\mbeauty)$ corrections are around 5\%. Our results do not suffer from any systematic errors apart from the use of the quenched approximation; small systematic errors quoted in~\cite{deDivitiis:2003iy,deDivitiis:2003wy} for the extrapolation uncertainties have been eliminated. Our results are in agreement with those of~\cite{deDivitiis:2003iy,deDivitiis:2003wy,stat:letter,mb:nf0}, within the errors.\\ Concerning dynamical fermion computations, the challenge in this strategy is to simulate in a large volume (such as $L_2$) with small enough lattice spacings, where quark masses of around $m_{\rm charm}$ and higher can be simulated with confidence.\\ \noindent {\bf Acknowledgement.} We thank Michele Della Morte, Stephan D\"urr, Jochen Heitger and Andreas J\"uttner for useful discussions and the permission to use results of \cite{Estat:me} prior to publication. \begin{figure}[t] \vspace{-0.4cm} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.4]{PLOTS/sigma2_pos06.eps} & \includegraphics[scale=0.4]{PLOTS/DC_S2_pos06.eps} \qquad \\ \end{tabular} \end{center} \vspace{-0.5cm} \caption[]{\label{fig:sigma2_dc} {Continuum extrapolation of $Y_{\rm SF}(L_1)$ and interpolation of $\sigma_\mrm{f}(x,L_2)$}} \end{figure} \begin{figure}[t] \vspace{-0.4cm} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.4]{PLOTS/Fig3_new_pos06.eps} & \includegraphics[scale=0.4]{PLOTS/DC_SV_pos06.eps} \qquad \\ \end{tabular} \end{center} \vspace{-0.5cm} \caption[]{\label{fig:sv_dc} {Continuum extrapolation of $Y_{\rm SF}(L_0)$ and interpolation of the decay constant on the small volume}} \end{figure} \bibliographystyle{h-elsevier}
\section{Introduction} Close massive black hole (MBH) binaries are expected to form in large numbers following the hierarchical assembly of massive galaxies \citep[e.g.][]{begelman80,volonteri03}, but their merger history remains poorly understood. Few observational probes of the processes that lead to and accompany the shrinking and inspiral of a MBH binary have been proposed to date: 1) the gravitational slingshot ejection of hypervelocity stars from the Galactic Center into the halo \citep[e.g.][]{yu03,levin06,sesana06,sesana07,brown09,perets09}; 2) interruption or redirection of jets due to MBH binary-accretion disk interaction or MBH coalescence \citep{merritt02,liu03,liu04,liu07}; 3) the coalescence of MBH pairs with masses in the range $(10^4-10^7)/(1+z)\,\,{\rm M_\odot}$ giving origin to gravitational wave events that are one of the primary targets for the planned {\it Laser Interferometer Space Antenna} \citep[{\it LISA}; e.g.][]{haehnelt94,hughes02,wyithe03,sesana04,sesana05}; 4) the electromagnetic afterglow from a circumbinary accretion disk that would follow such coalescence \citep[e.g.][]{milos05,dotti06,lippai08,shields08}; and 5) the high-velocity recoil experienced by the plunging binary due to the asymmetric emission of gravitational waves \citep[e.g.][and references therein]{baker08}. A recoiling hole that retains the inner parts of its accretion disk may have fuel for a long-lasting luminous phase along its trajectory, and shine as an off-center AGN \citep[e.g.][]{madau04,blecha08,volonteri08}. In this {\it Letter} we return to the dynamical processes that determine the decay of MBH binaries in a stellar background, prior to the gravitational wave regime, and put forward another possible observational signature of close binaries in the nuclei of galaxies. 
Using results from scattering experiments we show that gravitational slingshot interactions between an unequal-mass ``hard'' binary and a bound stellar cusp will inevitably be accompanied by a burst of stellar tidal disruptions, at a rate that can be {\it several orders of magnitude larger} than that appropriate to a single MBH fed by two-body relaxation. The duration of the phase of enhanced tidal disruption is of the order of $10^4-10^5\,$ yr. \section{Scattering experiments} Analytical techniques have been used by \citet{ivanov05} to study the enhanced stellar disruption rates induced by the secular non-resonant interaction with a non-evolving MBH binary. Here, we perform detailed numerical experiments of the close encounters between stars and the pair of MBHs, collisions that perturb stellar orbits in a chaotic way and scatter stars initially bound to the primary MBH into its tidal disruption loss cone. Consider a MBH binary of mass $M=M_1+M_2=M_1(1+q)$ ($M_2\ll M_1$) and semi-major axis $a$, orbiting in a background of stars of mass $m_*$, radius $R_*$, and velocity dispersion $\sigma_*$. When $a\lesssim a_h\equiv GM_2/4\sigma_*^2$, the ``hard'' binary loses orbital energy by three-body slingshot interactions \citep{quinlan96,sesana06,sesana07}. For unequal-mass pairs, the radius of influence $r_{\rm inf} \equiv G(M_1+M_2)/(2\sigma_*^2)$ is much larger than the hardening radius $a_h$, and almost all interacting (low angular-momentum) stars are bound to $M_1$.\footnote{Note that, for extreme mass ratios $q\ll 1$, the encounter is essentially a two-body scattering as the star and the secondary hole move in the static potential of the primary.}\ In the case of an isothermal stellar distribution around $M_1$, the total stellar mass within $a_h$ is equal to $M_2/2$.
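For orientation, the characteristic scales of the problem are easily evaluated numerically. The Python sketch below uses fiducial values only ($M_1/m_*=10^7$ and $\sigma_*=100\,\,{\rm km\,s^{-1}}$ as in Fig.~1, solar-type stars, and an assumed illustrative secondary mass $M_2=10^5\,\,{\rm M_\odot}$ that is not tied to the scattering runs) to evaluate the tidal radius $r_t=r_*(M/m_*)^{1/3}$ and the hardening radius $a_h=GM_2/4\sigma_*^2$:

```python
# Fiducial scales for a hard MBH binary (illustrative numbers, not the paper's runs):
# tidal radius r_t = r_* (M/m_*)^(1/3) and hardening radius a_h = G M_2 / (4 sigma^2).
G = 4.301e-3          # gravitational constant in pc (km/s)^2 / Msun
R_SUN_PC = 2.25e-8    # solar radius expressed in parsec

def tidal_radius_pc(M_over_mstar, r_star_pc=R_SUN_PC):
    """Tidal radius in pc for a black hole of mass M = M_over_mstar * m_*."""
    return r_star_pc * M_over_mstar ** (1.0 / 3.0)

def hardening_radius_pc(M2_msun, sigma_kms):
    """Hardening radius a_h = G M_2 / (4 sigma_*^2) in pc."""
    return G * M2_msun / (4.0 * sigma_kms**2)

M1_over_mstar = 1e7   # primary mass in units of m_* (as in Fig. 1)
M2 = 1e5              # assumed secondary mass in Msun (illustrative)
sigma = 100.0         # stellar velocity dispersion in km/s

print(f"r_t1 ~ {tidal_radius_pc(M1_over_mstar):.1e} pc")   # a few 1e-6 pc
print(f"a_h  ~ {hardening_radius_pc(M2, sigma):.1e} pc")   # of order 1e-2 pc
```

For these values the tidal radius lies several orders of magnitude inside the hardening radius, consistent with the hierarchy of scales marked by the short vertical lines in Fig.~1.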
\begin{figure*} \plotone{fig1.eps} \caption{{\it Upper solid line:} Close-encounter probability for bound stars interacting with the primary member (of mass $M_1$) of a MBH binary of mass ratio $q$, eccentricity $e$, and separation $a=a_h$. The vertical axis shows the fraction of stars $N/N_{\rm tot}$ with closest approach distance $r_{\rm min}<r$. {\it Lower solid line:} same for $M_2$. {\it Dotted line:} same in the case of an isolated MBH of mass $M_1$. Stars are drawn from a spherical isotropic distribution bound to $M_1$ and have semi-major axis in the range $a/2<a_*<2a$. The short vertical lines mark the tidal radii of $M_1$ and $M_2$ and the Schwarzschild radius of $M_1$ (all in units of $a$) for $M_1/m_*=10^7$, $r_*=\rsun$, and $\sigma_*=100\,\,{\rm km\,s^{-1}}$. From the top left to the bottom right, the fraction of stars tidally disrupted ($r_{\rm min}<r_{t1}$) by $M_1$ is 0.24, 0.19, 0.39, and 0.48, respectively. } \label{fig1} \vspace{0.2cm} \end{figure*} The integration of the three-body encounter equations is performed in a coordinate system centered at the location of $M_1$. Initially the binary (of mass ratio $q$) has eccentricity $e$ and a randomly-oriented orbit with $M_2$ at its pericenter. Stars initially move in the $x-y$ plane with pericenters along the positive $x$-axis and random orbital phases. The initial conditions of the restricted three-body problem are then completely defined by 6 variables, 3 for the binary and 3 for the star: 1) the inclination of the binary orbit, $\theta$; 2) the longitude of $M_2$'s ascending node, $l$; 3) the argument of $M_2$'s pericenter, $\phi$; 4) the semi-major axis of the stellar orbit, $a_*$; 5) the specific angular momentum of the star, $j_*$; and 6) the orbital phase of the star, $p_*$. We start each scattering experiment by generating $6$ random numbers, with $\cos\theta$ evenly sampled in the range $[-1,1]$, and $l$ and $\phi$ uniformly distributed in the range $[0,2\pi]$.
We sample $a_*$ logarithmically around $a$ in the range $[a/2,2a]$ where three-body interactions are strongest, $j_*^2$ randomly between 0 and 1 (corresponding to an isotropic distribution), and $p_*$ evenly between $0$ and $1$. The equations of motion are integrated using an explicit Runge-Kutta method of order 8 \citep{hairer87}, with a fractional error per step in position and velocity set to $10^{-13}$. We have tested our code by reproducing Figures 4 and 6 in \citet{sesana08} and found excellent agreement, and run $10^4$ scattering experiments for each binary configuration. During each experiment the minimum separation $r_{\rm min}$ between the star and the MBH pair is measured and stored: stars on orbits intersecting the tidal radius of hole $M_i$ ($i=1,2$), \begin{equation}\label{rt} r_{ti}=r_*\left(\frac{M_i}{m_*}\right)^{1/3} \simeq (2.3\times10^{-6}~{\rm pc}) ~\left(\frac{r_*}{\rsun}\right) \left(\frac{M_i}{10^6 m_*}\right)^{1/3}, \end{equation} will be tidally disrupted at pericenter passage (neglecting general relativistic effects that set in when $M_1\gg 10^7\,\,{\rm M_\odot}$, see \citealt{hills75}). The results of our numerical experiments are shown in Figure \ref{fig1} for binaries with different mass ratios and eccentricities, all at separation $a=a_h$. The fraction of interacting stars that are scattered by $M_2$ to within a pericenter distance $r_{\rm min}<r_{t1}$ from $M_1$ and are tidally disrupted can be, for very unequal mass binaries, orders of magnitude higher than the corresponding number were $M_1$ not in a binary. The latter is simply given by all the bound stars within $M_1$'s ``tidal loss cone'', the region in the $(a_*-j_*)$ phase-space bounded by \begin{eqnarray} j_{\rm lc}^2&=& \left\{ \begin{array}{ll} 1\,\,\,\,\,\,\,\,(a_*<r_{t1})\\ 2(r_{t1}/a_*)^2(a_*/r_{t1}-1/2)\,\,\,\,\,\,\,\,(a_*\ge r_{t1}), \end{array} \right.
\end{eqnarray} where $j_*$ is the specific angular momentum of the star normalized to the angular momentum of a circular orbit with the same semi-major axis. For a binary with $q=1/81$, $e=0.1$, the probability that a close encounter with a star having $1/2a<a_*<2a$ results in a tidal disruption by $M_1$ ($r_{\rm min}<3\times 10^{-4}a\approx r_{t1}$) is more than 2 orders of magnitude larger than if $M_1$ were single. The figure also shows that: 1) many more stars are disrupted by $M_1$ than by $M_2$; 2) the disruption probability decreases with increasing $q$. This is both because the ratio $r_{t1}/a_h$ decreases with increasing $q$, and because, as the perturbing force of the secondary increases, more stars are ejected altogether rather than disrupted; and 3) more stars are scattered into $M_1$'s tidal loss cone with decreasing binary eccentricity. \begin{figure*} \plotone{fig2.eps} \caption{{\it Top and bottom left panels:} Example of a chaotic three-body scattering leading to a tidal disruption. A star on a bound orbit of semi-major axis $a_*=1.2a$ and eccentricity $e_*=0.5$ interacts with a MBH binary of parameters $M_1=10^7~\,{\rm M_\odot}$, $q=1/81$, $e=0.3$, $a=a_h$. {\it Top left:} stellar ({\it black}) and $M_2$ ({\it green}) trajectories in the $x-y$ plane. The primary hole $M_1$ is located at the origin. {\it Top right:} same projected onto the $x-z$ plane. {\it Bottom left:} separation between the star and $M_1$ as a function of time (in units of the binary period $P$). The dotted line marks the tidal radius $r_{t1}$. {\it Bottom right panel:} Bound stars in the $(a_*-j_{z*})$ plane that are tidally disrupted during the interaction with a MBH binary of parameters $M_1=10^7~\,{\rm M_\odot}$, $q=1/81$, $e=0.1$, $a=a_h$.
The solid lines mark the boundaries of the Kozai wedge $|j_{z*}|<j_{\rm lc}$.} \label{fig2} \vspace{0.8cm} \end{figure*} \section{Basic theory and disruption rates} Figure \ref{fig2} shows an example of a three-body scattering leading to a tidal disruption after many pericenter passages. In the presence of a secondary black hole, strongly interacting ($a_*\sim a$) stars in nearly circular, highly inclined orbits relative to the binary orbital plane undergo a slow secular evolution that periodically increases their eccentricity (in exchange for a lower inclination) and eventually drives them within the tidal disruption loss cone of the primary hole \citep{ivanov05}. Our scattering experiments show that this mechanism -- analogous to the so-called ``Kozai effect" of celestial mechanics \citep{kozai62} -- contributes but does not dominate the fueling of the tidal disruption loss cone in the case of very unequal mass binaries. The stars supplied to the disruption loss cone by the Kozai effect are all those having normal component of the angular momentum, $j_{z*}$, within the loss cone, i.e. all those stars within a wedge-like region in phase space where $|j_{z*}|<j_{\rm lc}$. For a given stellar semi-major axis, $a_*\ll a$, the fraction of stars that lie outside the tidal disruption loss cone but inside the ``Kozai wedge'' is \begin{equation} f_K(r_{t1},a_*)=\int_{j_{\rm lc}}^{1}dj_*\int_{-j_{\rm lc}}^{j_{\rm lc}}dj_{z*}= 2j_{\rm lc}-2j_{lc}^2. \end{equation} When $r_{t1}\ll a_*$, $f_K\simeq 2\sqrt{2r_{t1}/a_*}$, which is $\sqrt{2a_*/r_{t1}}$ times larger than the fraction of stars already in the tidal loss cone, $j_{lc}^2\simeq 2r_{t1}/a_*$. This result explains why the probability of stellar disruption for bound stars is much higher in MBH binaries than in single black hole systems. 
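As an illustrative numerical check of this enhancement (our own script, assuming the fiducial ratio $r_{t1}/a_*\simeq 3\times10^{-4}$ quoted above for the $q=1/81$ binary):

```python
import math

# Sketch comparing the Kozai-wedge fraction f_K with the bare loss-cone
# fraction j_lc^2, for the fiducial ratio r_t1/a_* ~ 3e-4 quoted in the
# text (an assumption for illustration).
def f_kozai(jlc):
    """Fraction inside the Kozai wedge but outside the loss cone:
    integral over j in (j_lc, 1), j_z in (-j_lc, j_lc) = 2 j_lc - 2 j_lc^2."""
    return 2.0 * jlc - 2.0 * jlc**2

rt1_over_a = 3.0e-4
jlc = math.sqrt(2.0 * rt1_over_a)   # j_lc^2 ~ 2 r_t1 / a_* for a_* >> r_t1

f_K = f_kozai(jlc)
f_lc = jlc**2                       # fraction already in the loss cone

print(f"f_K = {f_K:.3f}")                               # ~ 0.05
print(f"f_K / f_lc = {f_K / f_lc:.0f}")                 # ~ 80-fold enhancement
print(f"sqrt(2 a_*/r_t1) = {math.sqrt(2.0 / rt1_over_a):.0f}")
```

The resulting $f_K\simeq 0.05$ and the $\sim 80$-fold enhancement over the bare loss-cone fraction agree with the estimates given in the text.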
It also predicts that only a fraction $2\sqrt{2r_{t1}/a_*}= (0.05, 0.09, 0.15)$ for $q=(1/81, 1/243, 1/729)$ of all stars with $a/2<a_*<2a$ will be supplied to the tidal loss cone by the Kozai effect. Figure \ref{fig1} shows that for very unequal mass binaries the tidal disruption probability is much larger than the above estimate. This discrepancy highlights the importance of close, resonant encounters with the secondary hole, which change the stellar orbital parameters in a chaotic way and fuel the tidal loss cone. Figure \ref{fig2} (bottom right panel) depicts the initial distribution in the $(a_*-j_{z*})$ plane of all the stars that are disrupted in our numerical experiments. It is clear that the majority of disrupted stars lie outside the Kozai wedge. We now show that the ejection of ambient stars by a hard MBH binary will be accompanied by a burst of stellar tidal disruptions, at a rate that may be orders of magnitude larger than that appropriate for a single MBH. In our numerical experiments the time when a star first crosses the tidal radius of the primary hole is stored and used to calculate a disruption frequency. To translate this number into a stellar disruption rate in physical units one needs to specify the parameters of the MBH binary and its stellar cusp. At the hardening radius \begin{equation} a_h\equiv \frac{GM_2}{4\sigma_*^2}\simeq (1.1{~\rm pc})~\sigma_{100}^{-2}q~M_7 \end{equation} the orbital period of the binary is \begin{equation}\label{pah} P_h=2\pi\sqrt{\frac{a_h^3}{G(M_1+M_2)}}\simeq (3.3\times10^4~{\rm yr})~ \sigma_{100}^{-3} M_7\left (\frac{q^3}{1+q}\right)^{1/2}, \end{equation} where $\sigma_{100}\equiv \sigma_*/100\,\,{\rm km\,s^{-1}}$ and $M_7\equiv M_1/10^7\,\,{\rm M_\odot}$. If we assume now, for simplicity, that the stars bound to $M_1$ follow an isothermal distribution, $\rho_*(r)=\sigma_*^2/(2\pi Gr^2)$, the total stellar mass between $a_h/2$ and $2a_h$ is $M_*=3qM_1/4$.
Normalizing the interacting mass in our numerical experiment to the isothermal case and rescaling the stored disruption times according to equation (\ref{pah}), we obtain the stellar disruption rates shown in Figure~\ref{fig3}. Although the standard Kozai theory does not strictly apply to strongly interacting stars, we use it here to derive an analytical scaling for the stellar disruption rates. The Kozai timescale at $a_h$ is approximately \begin{equation}\label{tk} T_{\rm K}=\frac{2}{3\pi q}\left(\frac{a_*}{a_h}\right)^{-3/2}P_h \end{equation} \citep{innanen97,kiseleva98}. The stellar disruption rate can then be estimated as \begin{eqnarray} {\dot N}_*&=&\frac{\lambda f_K M_*}{m_*T_K}e^{-t/T_K} \nonumber \\ &\simeq& (6~\,{\rm yr^{-1}})~\lambda(1+q)^{1/2}\sigma_{100}^4M_7^{-1/3}e^{-t/T_K}, \label{drate} \end{eqnarray} where $f_K$ is the fraction of stars in the Kozai wedge and $\lambda\simeq0.2$ is a correction factor accounting for the uncertainty in $T_K$ and for stars that are actually ejected before disruption, as well as for properly weighting our scattering experiments for the case of an isothermal profile. The numbers provided by equation (\ref{drate}) are in good agreement with the numerical rates. The tidal disruption plateau, however, lasts much longer than $T_K$ (indicated by the vertical ticks in Fig. \ref{fig3}), because at late times the disruption rate is dominated by chaotic scatterings. The tidal disruption rates we compute are many orders of magnitude higher than those, \begin{equation} \dot{N}_*\simeq (2\times10^{-4}~\,{\rm yr^{-1}})~\sigma_{100}^{7/2}M_7^{-1}\left(\frac{m_*}{\,{\rm M_\odot}}\right)^{-1/3} \left(\frac{r_*}{\rsun}\right)^{1/4}, \end{equation} derived for a single MBH fed by two-body relaxation \citep{wang04}. Note that in our calculations we have only considered stars with $a/2<a_*<2a$. Taking into account stars in a larger range of semi-major axis would further boost the binary disruption rates.
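A direct evaluation of the ingredients of equation (\ref{drate}) reproduces its normalization. The sketch below is our own check, assuming the fiducial parameters $M_1=10^7\,{\rm M_\odot}$, $m_*=1\,{\rm M_\odot}$, $r_*=\rsun$, $\sigma_*=100\,{\rm km\,s^{-1}}$, $q=1/81$, and $a_*=a_h$:

```python
import math

# Back-of-the-envelope evaluation of the t=0 disruption rate,
# Ndot = lambda * f_K * M_* / (m_* * T_K), with all constants assumed.
G = 4.30e-3           # pc (km/s)^2 / Msun
RSUN_PC = 2.254e-8    # solar radius in pc
KMS_PC_YR = 1.023e-6  # 1 km/s in pc/yr

M1, m_star, sigma, lam = 1.0e7, 1.0, 100.0, 0.2
q = 1.0 / 81.0

a_h = G * q * M1 / (4.0 * sigma**2)                                   # pc
P_h = 2.0 * math.pi * math.sqrt(a_h**3 / (G * M1 * (1 + q))) / KMS_PC_YR  # yr
T_K = 2.0 / (3.0 * math.pi * q) * P_h                                 # Kozai time at a_* = a_h

r_t1 = RSUN_PC * (M1 / m_star) ** (1.0 / 3.0)                         # tidal radius, pc
f_K = 2.0 * math.sqrt(2.0 * r_t1 / a_h)                               # Kozai-wedge fraction
M_star_tot = 0.75 * q * M1                                            # cusp mass in a_h/2 < a_* < 2 a_h

Ndot0 = lam * f_K * M_star_tot / (m_star * T_K)                       # yr^-1
print(f"Ndot(t=0) ~ {Ndot0:.2f} /yr")  # ~ 6.5*lambda, close to eq. (6)'s 6*lambda
```

With $\lambda=0.2$ this gives ${\dot N}_*\sim 1.3\,{\rm yr^{-1}}$ at $t=0$, consistent with the $\simeq 6\lambda$ normalization of equation (\ref{drate}).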
\begin{figure*} \plotone{fig3.eps} \caption{Numerical tidal disruption rates as a function of time for a hard MBH binary of mass ratio $q$, eccentricity $e$, and separation $a=a_h$, embedded in an isothermal stellar cusp. The derivation assumes $M_1=10^7\,\,{\rm M_\odot}$, $m_*=1\,\,{\rm M_\odot}$, $r_*=\rsun$, and $\sigma_*=100\,\,{\rm km\,s^{-1}}$. The short vertical lines mark the Kozai timescale for $a_*=a=a_h$. } \label{fig3} \end{figure*} \section{Conclusions} We have used results from numerical scattering experiments and shown that the tidal disruption rate in a stellar cusp containing a $10^7\,\,{\rm M_\odot}$ MBH binary can be as large as $1\,\,{\rm yr^{-1}}$ over a timescale of $\sim 10^5\,$yr. {\it This is orders of magnitude larger than expected in the case of single MBHs}. After a tidal disruption, about half of the debris will be spewed into eccentric bound orbits and fall back onto the hole, giving rise to a bright UV/X-ray outburst that may last for a few years \citep[e.g.][]{Rees1988}. ``Tidal flares" from MBHs may have been observed in several nearby inactive galaxies \citep{komossa02,esquej07}. The inferred stellar disruption frequency is $\sim10^{-5}~{\rm yr^{-1}}$ per galaxy (with an order of magnitude uncertainty, \citealt{donley02}). The much enhanced disruption rates we have found here for MBH binaries can then be used to constrain the abundance of close MBH pairs in nearby galaxy nuclei \citep{chen08}. It is interesting to scale our results to the scattering of stars bound to Sgr A$^*$, the massive black hole in the Galactic Center, by a hypothetical inspiraling companion of intermediate mass \citep{yu03,sesana07}. The stellar density profile around the Galactic Center can be described as a double power-law, with outer slope $\simeq-2$ and inner slope $\simeq-1.5$ \citep{schodel07}. 
If the density profile inside the influence radius of $M_1$ is shallower than isothermal, \begin{equation} \rho_*(r<r_{\rm inf})=\rho_*(r_{\rm inf})\left(\frac{r}{r_{\rm inf}}\right)^{-\gamma} \end{equation} with $\gamma<2$, then the stellar mass between $2a_h$ and $a_h/2$ decreases by a factor $(a_h/r_{\rm inf})^{2-\gamma}$, and the stellar disruption rate in equation (\ref{drate}) decreases by a factor $(q/4)^{2-\gamma}$ relative to the isothermal case. Using $M_1=4\times 10^6\,\,{\rm M_\odot}$, $\sigma_*=100\,\,{\rm km\,s^{-1}}$, $\gamma=1.5$, and $1/243<q<1/81$, yields rates in the range ${\dot N}_*\simeq 0.05-0.1\,{\rm yr^{-1}}$. There are a number of uncertainties in our calculations that require clarification before a firm statement can be made on the rates and duration of stellar tidal disruptions expected in galaxy nuclei hosting MBH binaries, and on the constraints imposed by the very low level of activity observed in the Galactic Center. First and foremost, our estimates of the tidal disruption rate assume a fixed binary separation $a$. In reality, both binary separations and eccentricities will evolve due to three-body slingshots \citep{sesana08}. This changes the Kozai timescale of the system and replenishes the supply of strongly interacting stars. According to Figure~7 of \citet{sesana08}, the evolutionary timescale of a binary with initial eccentricity 0.1 embedded in an isothermal cusp is $t_h\sim q^{-3/2}P$. At $a=a_h$ we derive \begin{equation}\label{th} t_h\sim (3.3\times10^4~{\rm yr})~\sigma_{100}^{-3} M_7(1+q)^{-1/2}. \end{equation} This is comparable to the duration of the plateau in the disruption rates shown in Figure \ref{fig3}, implying that binary evolution should not qualitatively change the plateau values. A more sophisticated calculation that couples the results of numerical scattering experiments with an evolving binary will be the subject of a subsequent paper.
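The $(q/4)^{2-\gamma}$ suppression quoted above can be checked directly. The sketch below (ours, with the Galactic Center parameters assumed in the text) reproduces the ${\dot N}_*\simeq 0.05-0.1\,{\rm yr^{-1}}$ range:

```python
# Scaling of the disruption rate to a shallower-than-isothermal cusp:
# Ndot = 6 lambda (1+q)^(1/2) sigma_100^4 M_7^(-1/3) * (q/4)^(2-gamma).
# Parameter values follow the Sgr A* estimate in the text.
lam, gamma = 0.2, 1.5
M7 = 0.4          # M_1 = 4e6 Msun
sigma100 = 1.0    # sigma_* = 100 km/s

def ndot(q):
    base = 6.0 * lam * (1.0 + q) ** 0.5 * sigma100**4 * M7 ** (-1.0 / 3.0)
    return base * (q / 4.0) ** (2.0 - gamma)

for q in (1.0 / 81.0, 1.0 / 243.0):
    print(f"q = 1/{round(1/q)}: Ndot ~ {ndot(q):.3f} /yr")
```

The two limiting mass ratios bracket the quoted $0.05-0.1\,{\rm yr^{-1}}$ range.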
\acknowledgments Support for this work was provided by NASA through grant NNX08AV68G (P.M.). X.C. and F.K.L. thank the Chinese national 973 program (2007CB815405) and the China Scholarship Council for financial support. We are grateful to F. Haardt for early discussions on this topic.
\section{Introduction} This paper reports the findings of a seven-month dissertation project carried out at Sheffield University, investigating steganographic techniques for hiding data in video files encoded using the popular H.264 format \citep{Ridgway13}. During the course of the project several key tools were developed, and these are described below together with experimental findings. All of the materials developed for the project can be accessed online at \url{http://www.steganosaur.us}. A supporting video is also available online at \url{http://www.youtube.com/watch?v=YhnlHmZolRM}. \label{litsurvey} \label{sec:common-techniques} We begin by reviewing various existing approaches to digital steganography, before explaining in section \ref{section:development} the specific issues that need to be addressed when developing these techniques for video container files. In section \ref{sec:conclusion}, we describe our experimental findings, and highlight areas where further research and development might be beneficial. \subsection{Background} Whereas encryption seeks to make a message uninterpretable to unauthorised eavesdroppers, steganography attempts instead to make the very existence of the message unsuspected (the two techniques can of course be combined; see section \ref{sec:crypto-subsystem}). Steganography has a very long history: Herodotus explains how Histi{\ae}us, the `tyrant' of Miletus, who was then staying with his overlord (the Persian emperor, Darius I), had a message tattooed onto a slave's shaved head some 2500 years ago (499 BCE). Once the hair had grown back, the slave was sent to Miletus, where his nephew re-shaved the slave to find an instruction telling him to revolt against Darius \citep{Herodotus1}. This idea, of hiding information covertly so that its presence is unsuspected even by eavesdroppers with access to the manipulated `container' (in this case, the slave), can of course be adapted to modern digital communications. 
Perhaps the simplest approach to digital steganography is to \emph{inject} data into redundant sections of a file. For example, because \texttt{EXE} files use an \emph{end of file} (EOF) marker, adding additional data to the end of the file doesn't affect executable behaviour. Other file types, e.g. WAV files, specify their intended size in a header \citep{Ibm04}, and additional data is again ignored. Although easy to implement, such injection techniques are extremely insecure, since direct analysis of the file can easily reveal the presence of unwarranted additional data. In contrast, \emph{substitution} techniques embed data in those sections of the file that are -- relative to some appropriate metric -- of least relevance, without affecting overall file size. This avoids, e.g., the tell-tale size inflation associated with injection techniques, but the \emph{steganographic capacity} of the container is limited by the amount of `irrelevant' data present. A particularly common substitution technique for audio and image files is \emph{LSB manipulation} \citep{Johnson03,Cole03,Fridrich10} (Fig. \ref{fig:lsb1}), in which the least significant bit of each byte of an image, say, is manipulated so as to embed information without discernably changing the image as viewed on-screen. However, such techniques are inherently at odds with the lossy compression algorithms used by various digital encoding formats, since these specifically seek to disregard the same `irrelevant' segments of a file, i.e., they `throw away' precisely the segments where we want to hide our message. \emph{Transform-domain} techniques can sometimes overcome this problem (section \ref{transformdomain}).
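The LSB substitution just described (cf. Fig. \ref{fig:lsb1}) can be sketched in a few lines of Python. The function names are ours and purely illustrative; each message bit simply replaces the least significant bit of one container byte:

```python
# Minimal LSB-substitution sketch: embed message bits into the least
# significant bit of successive container bytes, then recover them.
def lsb_embed(container: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(container):
        raise ValueError("message exceeds steganographic capacity")
    out = bytearray(container)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear LSB, write message bit
    return out

def lsb_extract(container: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in container[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytearray(range(256)) * 2          # stand-in for image sample bytes
stego = lsb_embed(cover, b"Hello")
print(lsb_extract(stego, 5))               # message recovered intact
```

Note that no stego byte differs from its cover byte by more than one unit, which is why the change is visually imperceptible, yet a lossy re-encode of the container would destroy exactly these bits.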
\begin{figure} \begin{center} \fbox{\includegraphics*[natwidth=4in,natheight=1.42in]% {LSB1.pdf}} \caption{\small LSB encoding of the word ``Hello'' inside an arbitrary container file.\label{fig:lsb1}} \end{center} \end{figure} \subsection{Video Steganography in the Literature}\label{lit:videosteg} From a steganographic standpoint, video files have distinct advantages over stand-alone audio or image containers. In the first place, video files are typically much larger than other container files, and have far greater steganographic capacity. But in addition, video modification is also significantly harder for humans to detect than stand-alone image manipulation, because each video frame is only visible for a fraction of a second during normal playback, and moreover, video frames rarely include sharply focused images \citep{Al-Frajat10}. Surprisingly, however, the number of articles addressing video steganography appears to be rather limited, and those that exist in the literature generally give high level descriptions with only limited lower-level detail. Of those authors who specifically address the topic, Noda et al. (\citeyear{Noda04}) suggest a \emph{bit plane complexity segmentation} technique that hides data in wavelet-compressed video, and Jalab et al. (\citeyear{Jalab09}) give a related method for embedding data in MPEG video. Eltahir et al. (\citeyear{Eltahir09}) discuss LSB manipulation of video, but do not address security. Finally, \cite{Singh2} specifically address the hiding of an image in a video, and while their method centres around LSB manipulation, this is one of the few papers that exploits the multi-dimensional aspect of a video as a container file; they claim, moreover, that the proposed technique is ``very useful in sending sensitive information securely'' but unfortunately they provide little supporting evidence for this claim, or for the effectiveness of the proposed technique.
\subsection{Generation Techniques} \label{generation-techniques} Generation techniques involve generating a bespoke file from scratch by exploiting shared characteristics of the agents involved. For example, if Alice and Bob are both car enthusiasts, it is unlikely that the exchange of images showing pictures of the latest models will arouse much suspicion. They could therefore exchange a message covertly by creating an image of a race (say) in which the positions of the cars, spectators, or any other agreed component are used to encode the required information. Doing so may, of course, be time consuming, but this need not be an issue unless the information needs to be encoded and transmitted in real time. Such techniques have, moreover, a unique advantage over other steganographic methods, in that there is no underlying container file against which the transmitted file can be compared for steganalytic purposes. Reported research into generation techniques is, however, extremely limited, perhaps because the message construction process, with its heavy dependence on the shared interests of the specific agents involved, is necessarily ad hoc. \subsection{Transform Techniques} \label{transformdomain} \label{lit:transform} \emph{Discrete cosine transform} (DCT) techniques are often used with compressed image files, and these can be applied, to some extent, to individual images within certain video streams provided the frames to be manipulated are chosen appropriately. Informally, the discrete cosine transform takes image descriptions given in terms of pixel intensities, and re-expresses them in terms of frequencies, storing coefficients losslessly; these techniques are commonly used with, e.g., JPEGs \citep{Anderson96, Zhao95, Ruanaidh96}.
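To make the pixel-to-frequency re-expression concrete, the following sketch computes a 2-D DCT-II for an $8\times 8$ block using only the Python standard library (function names are ours; production codecs use fast factorised transforms rather than this direct form):

```python
import math

# Direct 2-D DCT-II of an 8x8 block of pixel intensities.
N = 8

def alpha(k):
    # Orthonormalisation factors for the DCT-II basis.
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    return [
        [
            alpha(u) * alpha(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N)
            )
            for v in range(N)
        ]
        for u in range(N)
    ]

# A flat (constant) block concentrates all its energy in the DC coefficient:
flat = [[100.0] * N for _ in range(N)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))   # DC term: 8 * 100 = 800; all AC terms vanish
```

Because typical image blocks are dominated by low frequencies, most high-frequency coefficients are near zero after the transform, which is what makes both compression and coefficient-based embedding possible.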
Many existing steganographic systems make use of DCT coefficients, including the \emph{F5} \citep{Westfeld01} and \emph{Outguess} \citep{Provos01} algorithms, together with other \textit{model-based} \citep{Sallee03,Sallee05}, \textit{modified matrix} \citep{Kim06} and \textit{perturbed quantization} methods \citep{Fridrich05}. Of more significance for our purposes, Prabhakaran and Shanthi (\citeyear{Prabhakaran12}) describe a hybrid crypto-steganography method, which \cite{Shanableh12} extends by encoding data in the motion vector and quantisation scales (section \ref{lit:videoencoding}); their technique increases the steganographic capacity of the file, but is generally limited to raw video. Fang and Chang (\citeyear{Fang06}) focus on modifying the motion vectors of fast-moving objects (since such changes are relatively undetectable). In contrast, \cite{Aly11} examines macroblocks to determine which are most suitable for LSB embedding; both papers report that the resultant video quality remains good, but this is not quantified. \subsection{Video Encoding}\label{lit:videoencoding} In this section we briefly describe the structure of a typical video file. Different types of video frame serve different purposes, and it is essential for steganographic purposes that only certain kinds of frame are manipulated; attempting to hide data within the wrong frames typically causes the covert message to become garbled during re-extraction. We consider these practical issues in more detail in section \ref{section:development} below. \subsubsection{Coding Concepts} An encoder (compressor) and decoder (decompressor) forming a complementary pair is known as a \emph{codec} (en\emph{co}der/\emph{dec}oder). The encoder is used to store or transmit video by converting the original raw video format to an alternative (compressed) representation. The decoder converts the compressed form back to the original video. 
In general terms, a video encoder implements three main components: a \emph{temporal model}, a \emph{spatial model} and an \emph{entropy encoder}. The temporal model reads in a sequence of video frames and attempts to reduce redundancy by identifying similarities between neighbouring frames -- this analysis usually involves computing a prediction of the current video frame. With H.264 the prediction can be computed from multiple previous or future frames. The prediction is improved by means of compensation for differences between the frames -- this is known as motion compensation prediction. The temporal model outputs a residual frame and a set of \emph{motion vectors}. The residual frame is computed by subtracting the prediction from the current frame, and motion vectors are used to describe how the motion was compensated. The residual frame from the temporal model is then fed into the spatial model. This step is again concerned with removing redundancy, but in this case by analysing neighbouring samples within the frame itself. Spatial reduction in H.264 is achieved by applying a transform followed by a \emph{quantisation} process. Quantisation is the process of scaling down the range of symbols that are used in a representation. For instance, the DCT transform produces a matrix of coefficients whose values may range between $-223$ and $150$, but after quantisation, these values may only range between $10$ and $130$. This reduced range means that fewer bits are needed to code the representation than the original range, and this can lead to significant (but lossy) compression (section \ref{sec:compression}). Quantisation parameters for multimedia formats are chosen based on how individual components affect the average human perception \citep{Mukhopadhyay1}. The transform step produces a set of transform coefficients which are then quantised, removing insignificant values, and returning the quantised transform coefficients as the output of the spatial model. 
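The quantisation step described above can be sketched as simple scalar division and rescaling. The step size below is an illustrative assumption (real codecs derive it from a quantisation parameter), but the lossy behaviour is representative:

```python
# Scalar quantisation sketch: coefficients are divided by a step size Q,
# transmitted as small integers, and rescaled at the decoder.
Q = 16  # illustrative step size (assumption)

def quantise(coeffs, q=Q):
    return [round(c / q) for c in coeffs]

def dequantise(levels, q=Q):
    return [lvl * q for lvl in levels]

coeffs = [-223, 150, 37, -5, 3, 0, -1, 0]
levels = quantise(coeffs)       # small coefficients collapse to zero
rec = dequantise(levels)        # large ones survive only approximately

print(levels)   # [-14, 9, 2, 0, 0, 0, 0, 0]
print(rec)      # [-224, 144, 32, 0, 0, 0, 0, 0]
```

The reconstruction error is bounded by half the step size, which is why quantisation is only `roughly reversible': the runs of zeros it creates are what the entropy encoder subsequently exploits.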
Finally, the entropy encoder produces an encoded output from the results of the spatial and temporal models. It processes the motion vectors from the temporal model and the coefficients from the spatial model to produce a compressed bit stream consisting of motion vectors, residual transform coefficients and header information. Although the quantisation stage causes a loss of information, this process is roughly reversible, and the decoder mechanism essentially `works in reverse' to retrieve the original video. Nonetheless, the output produced by the decoder mechanism will only ever (in the case of H.264) be an approximation to the original input because of the quantisation stages. \subsubsection{Temporal Model} The residual frame output by the temporal model is produced by subtracting the predicted frame from the actual video frame, and its size is dependent on the accuracy of the prediction process -- the smaller the residual frame, the fewer bits needed to code it. Prediction accuracy can be improved by calculating and propagating compensation for motion from the reference frame(s) through to the current frame. Motion compensation can significantly improve prediction calculations because two successive video frames are usually highly correlated: most of the information captured in successive residual frames relates to the movement of objects in an essentially static scene. These changes directly correspond to the movement of pixels between frames, a feature known as \emph{optical flow} \citep{Ahmad1}. In theory, knowing the optical flow allows us to predict the majority of the pixels in the current frame, simply by displacing pixels in the preceding frame as required. Unfortunately, this is a very computationally intensive process, as each pixel will have to be transformed, and each frame decoded, on a pixel-by-pixel basis using the optical flow vectors.
Whilst workable in theory, this would result in a large amount of residual data, which is at odds with the desirability of a compact residual frame. \subsubsection{Macroblock Motion Estimation} A macroblock is typically a $16 \times 16$ pixel block of the current frame, although in the wider context of block-based motion estimation other suitably-sized $N \times M$ samples might be used. Macroblocks are used by a variety of codecs including MPEG-1, MPEG-2, H.261, H.263 and H.264. Macroblock motion estimation starts by dividing frames into macroblocks. Each macroblock is taken in turn, and a previously selected `reference frame' is searched for a matching macroblock. Macroblocks from the reference frame are paired with macroblocks in the current frame by choosing a candidate block that minimises the difference between the macroblock in the current frame and itself -- this process provides a \emph{residual block}. Finally, the residual block is encoded and stored, together with the associated motion vector. Using a $16 \times 16$ size macroblock can cause some problems with certain motions and object outlines. If a macroblock and its matching macroblock differ greatly, the number of bits required for the encoding increases and inflates the bit-rate. This issue can be addressed by decomposing a macroblock into smaller $8 \times 8$, or even $4 \times 4$ macroblock size, but this results in a larger number of blocks, each requiring its own motion vector, which can be disadvantageous. The H.264 codec overcomes this problem to some extent by adopting an `adaptive' block size approach. \subsection{Compression} \label{sec:compression} An image in a video stream can be thought of as a function which maps each point of a 2D spatial domain to a three-dimensional RGB colour vector. Consequently, if we were to store a single $1920 \times 1080$ image in raw format, just over 2 million RGB triples would need to be stored. This is a substantial amount of information to store for a single image.
Given that videos typically run at between 24 and 30 frames per second, a single second of video footage at $1920 \times 1080$ resolution would require the storage of around 50--60 million RGB triples. Storing and/or streaming this data in a raw, uncompressed format is consequently impractical for most situations, and as a result image and video formats typically use an alternative, compressed, representation. \subsubsection{Compression Considerations}\label{lit:motvec} Data compression inevitably involves trade-offs between computational costs, storage requirements, and accuracy of representation. For images and video, an approximation of the original source is often sufficient, which means that lossy compression schemes can be used, but if the given application requires complete accuracy then a lossless representation must be used. In section \ref{lit:transform} we saw how coefficient-based transforms such as DCT can be used to represent an image, and these techniques can be used to assist the image compression process. However, simply compressing each frame in turn is inefficient, since it fails to take into account the temporal cohesion between consecutive frames. Examining the correlation between consecutive frames typically allows a more concise representation to be used \citep{Mukhopadhyay1, Richardson1, Gall1}. A common approach is to use a GOP (``group of pictures'') structure. In a GOP, a reference frame is chosen, which is called an \textit{Intra Frame} or \textit{I-Frame}. Other frames are then predicted from this, where predictions are represented as changes (deltas) from the preceding frame -- these are known as \textit{P-Frames}. Some frames are predicted using both the preceding and subsequent neighbours, and these are known as bi-directionally predicted frames or \textit{B-Frames} \citep{Mukhopadhyay1, Richardson1}.
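The macroblock matching described earlier -- searching a reference frame for the displacement that best predicts each block of the current frame -- can be illustrated with a toy sum-of-absolute-differences (SAD) search. The names and parameters below are ours, and real encoders use much larger blocks, sub-pixel refinement and fast search patterns:

```python
# Toy full-search block matching: for a block of the current frame, find
# the displacement (dx, dy) within a small search window that minimises
# the SAD against the reference frame.
def sad(ref, cur, bx, by, dx, dy, n):
    return sum(
        abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
        for j in range(n) for i in range(n)
    )

def best_motion_vector(ref, cur, bx, by, n=4, search=2):
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if 0 <= bx + dx and bx + dx + n <= w and 0 <= by + dy and by + dy + n <= h:
                cost = sad(ref, cur, bx, by, dx, dy, n)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best

# Reference frame with a bright 4x4 patch; the current frame shifts it right by 1.
ref = [[0] * 12 for _ in range(12)]
cur = [[0] * 12 for _ in range(12)]
for j in range(4, 8):
    for i in range(4, 8):
        ref[j][i] = 255
        cur[j][i + 1] = 255

cost, mv = best_motion_vector(ref, cur, bx=5, by=4)
print(mv, cost)   # (-1, 0) 0 -- an exact match, so the residual block is zero
```

A zero-cost match means the residual block is all zeros and only the motion vector need be coded, which is exactly the saving the temporal model is designed to exploit.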
\section{Development Issues} \label{section:development} Given the experimental focus of this work, we adopted an `agile' methodology and developed various auxiliary tools for embedding messages in audio and image files. In order to ensure usability, we designed the system as a user-friendly GUI, interacting with a lower-level suite of tools housing the core steganographic logic. We had intended using the same programming language throughout, but this proved infeasible, so different parts of the system had to be constructed using different programming languages. In particular, there was insufficient time to develop a complete video coding tool from scratch, so we adopted third-party libraries. We began with Xuggler,\footnote{\url{http://www.xuggle.com/xuggler}} a Java wrapper for FFmpeg,\footnote{\url{http://ffmpeg.org/}} but unfortunately Xuggler proved insufficiently flexible, and we found it necessary to work directly with the FFmpeg library using code written in C. Even so we needed to re-implement part of the FFmpeg library -- we are grateful to Michael Niedermayer, one of FFmpeg's developers, for vital feedback at this time (personal communication). Since we were already using C for the low-level video coding tools, we considered using it for the GUI as well, but this would have required using the GTK+ library,\footnote{\url{http://www.gtk.org/}} which has poor Mac OS integration. Moreover, good GUIs should be multithreaded to ensure the display updates regularly, but C has no uniform cross-platform API for managing threads. We therefore re-adopted Java and Xuggler for GUI development (Java includes GUI design packages, while Xuggler can be used for video playback). \subsection{Choice of codec} For development purposes we limited our choice of codecs to those supported by FFmpeg, focussing eventually on `H.264', since this is the most common video codec used by modern cameras \citep{Pcworld} and for online videos \citep{Techcrunch}. 
\subsection{Transcode Mechanism} Video transcoding involves demultiplexing the original input file to distinguish audio from video data. The separate streams are then processed independently and re-encoded back into the output file. Our transcoder uses FFmpeg's \texttt{avcodec} and \texttt{avformat} libraries. The input is scanned for audio and video streams, and relevant codecs are loaded into memory. A header is written to the output file, and we iterate over input data packets. Finally, a suitable footer is appended. Though simple in theoretical terms, decoding input packets proved somewhat problematic in practice. Frames produced by FFmpeg's decoder methods \texttt{avcodec\allowbreak\_decode\allowbreak\_video2} and \texttt{avcodec\allowbreak\_decode\allowbreak\_audio4} cannot be passed on to the encoder directly without preparation, because they lack appropriate timestamps.\footnote{A timestamp mechanism is used to synchronise different streams in a video file.} Failure to set these timestamps results in either no video image, or else a lack of synchronisation between audio and video streams. Frames decoded by \texttt{avcodec\allowbreak\_decode\allowbreak\_video2} also contained data that interfered with the encoding, causing the image to be heavily pixelated. We eventually solved this problem by copying raw image data to a new \texttt{AVFrame} instance, which allowed us to carry out the requisite preparation before re-encoding. \subsubsection{Data encoding/decoding} Data encoding and decoding were performed by modifying motion vectors using a callback (figure \ref{fig:transcoding}). This approach makes it easy to implement additional steganographic schemes: the motion vector and frame number are passed to our bespoke callback method, \texttt{stegEncodeMv} (invoked from inside the \texttt{avcodec} library), which chains a callback to the relevant encoder modules, as determined by the current \texttt{getStegEncoderMode} (figure \ref{fig:uml_encoder_decoder}).
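The timestamp pitfall can be illustrated in miniature (all classes below are our own stand-ins, not FFmpeg API): the toy decoder, like our build of the real one, returns frames with no presentation timestamp, so the transcode loop must copy the packet's timestamp onto the frame before re-encoding, or the output streams drift out of sync.

```python
# Toy transcode loop showing timestamp propagation. Packet/Frame are
# illustrative stand-ins for AVPacket/AVFrame, not the FFmpeg structures.

class Packet:
    def __init__(self, pts, payload):
        self.pts, self.payload = pts, payload

class Frame:
    def __init__(self, data):
        self.data = data
        self.pts = None  # the decoder leaves this unset

def decode(packet):
    return Frame(packet.payload)

def transcode(packets):
    out = []
    for pkt in packets:
        frame = decode(pkt)
        frame.pts = pkt.pts  # the crucial preparation step
        out.append(Packet(frame.pts, frame.data))
    return out
```

Dropping the `frame.pts = pkt.pts` line leaves every output timestamp as `None`, the toy analogue of the missing image and audio/video desynchronisation described above.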
The decoding process is somewhat simpler -- it uses a single \texttt{Decoder} component (figure \ref{fig:uml_decoder}), iterating the relevant decoding method over the packets of the specified video file. \begin{figure} \centering \subfigure[Main process]{ \includegraphics*[natwidth=379pt,natheight=823pt,width=.4\textwidth]{transcoding.pdf} \label{fig:transcode_flow_chart} } \subfigure[\texttt{decodePacket} subprocess]{ \includegraphics*[natwidth=323pt,natheight=483pt,width=.35\textwidth]{decoding.pdf} \label{fig:transcode_decode_packet_flow_chart} } \caption{Transcode process flow charts} \label{fig:transcoding} \end{figure} \begin{figure} \centering \subfigure[encoder]{ \includegraphics*[natwidth=335pt,natheight=512pt,width=.35\textwidth]{encoder.pdf} \label{fig:uml_encoder} } \subfigure[decoder]{ \includegraphics*[natwidth=432pt,natheight=359pt,width=.45\textwidth]{decoder.pdf} \label{fig:uml_decoder} } \caption{Encoder and decoder architecture overviews} \label{fig:uml_encoder_decoder} \end{figure} \subsection{Steganographic encoding/decoding} We developed a process to modify the \texttt{AVFrame} passed to the \texttt{avcodec\allowbreak\_encode\allowbreak\_video2} method for coding into a compressed packet, but despite our best efforts the alterations made to \texttt{motion\allowbreak\_val} were not reflected in the output. After dissecting the 850,000+ lines of code comprising the FFmpeg codebase, we deduced that adjusting the behaviour of \texttt{ff\allowbreak\_estimate\allowbreak\_p\allowbreak\_frame\allowbreak\_motion} in \texttt{libavcodec/motion\allowbreak\_est.c} would let us manipulate the motion vectors of macroblocks in P-Frames. Initially, our encoder modified vectors using a bit mask, but tests showed that only around 30 characters could be embedded, and roughly 50\% of embedded bits would `flip', causing inaccurate decoding of the hidden message. Using different bit masks provided little improvement.
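One plausible mechanism for such bit flips (the sketch below is our own reconstruction -- the shift width and helper names are ours, not FFmpeg's) is a later right shift in the vector coding stage: a payload bit written into the lowest bit of a vector component is discarded by the shift, while a bit placed one position higher survives.

```python
SHIFT = 1  # assumed number of bits discarded by the vector coding stage

def codec_store(component):
    """Stand-in for the coding stage's additional right shift."""
    return component >> SHIFT

def embed_bit(component, bit):
    """Set the payload bit just above the bits the shift discards."""
    mask = 1 << SHIFT
    return component | mask if bit else component & ~mask

def extract_bit(stored):
    """After the shift, the payload sits in the lowest stored bit."""
    return stored & 1
```

With the shifted mask the round trip embed--store--extract is exact, whereas a naive mask at bit 0 is destroyed whenever the shift runs.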
Eventually, we discovered that implementing our callback in the \texttt{ff\allowbreak\_estimate\allowbreak\_p\allowbreak\_frame\allowbreak\_motion} method of \texttt{libavcodec/motion\allowbreak\_est.c} caused data to be embedded prior to quantisation, on occasion obliterating our changes. We therefore moved our callback to the \texttt{encode\_mb\_internal} method of \texttt{libavcodec/\allowbreak{}mpegvideo\allowbreak\_enc.c}. Further analysis revealed that certain values returned by the decoder varied seemingly at random. We deduced that the vector coding process performs an additional right-shift operation, and therefore introduced a shifted encoding mask. The problem was fully resolved when we discovered that certain macroblocks were flagged as having no motion vector. After moving the encoder callback, compensating for bit shifts and avoiding vector-less macroblocks, our transcoding mechanism was finally capable of embedding data in video. The decoder is far simpler, and operates by passing \texttt{AVPacket}s to \texttt{avcodec\allowbreak\_decode\allowbreak\_video2} until a complete \texttt{AVFrame} has been returned. \subsection{Cryptography Subsystem} \label{sec:crypto-subsystem} It was not feasible, given the lifespan of the project, to determine whether our system exhibits long-term security. We therefore incorporated well-established cryptographic methods to ensure a minimum level of communications security. Our system fully implements the AES cryptosystem \citep{AES}, and comprehensive unit testing was employed to ensure that our implementation conforms to the relevant FIPS-197 specification \citep{FIPS197}.
While encryption and steganography serve fundamentally different purposes, encryption can nonetheless be used \emph{with} steganography to obfuscate embedded ASCII messages: figure \ref{fig:lsb3} shows a significant increase in frequency for key ASCII values (especially spaces and alphabetic characters), but this tell-tale signature is removed by encrypting prior to embedding (figure \ref{fig:lsb4}). \begin{figure} \centering \subfigure[original image]{ \includegraphics*[keepaspectratio,height=1.2in]{before.pdf} \label{fig:lsb2} } \subfigure[unencrypted embedding]{ \includegraphics*[keepaspectratio,height=1.2in]{after.pdf} \label{fig:lsb3} } \subfigure[encrypted embedding]{ \includegraphics*[keepaspectratio,height=1.2in]{encrypted.pdf} \label{fig:lsb4} } \caption{ASCII distributions from the LSB string of a PNG image (a) before and (b) after unencrypted data is embedded; (c) the obfuscating effect of encrypting data prior to embedding.} \label{fig:lsb} \end{figure} \section{Evaluation and Conclusion} \label{sec:conclusion} Our initial intention was to develop numerous schemes for embedding and extracting data within video streams. This was overly ambitious, as we significantly underestimated the complexities associated with manipulating video. We nonetheless managed to develop two different methods for embedding and re-extracting data from video. Our final solution uses motion vector based approaches. This project also proved substantially more experimental than initially predicted. Our initial literature survey and preliminary research provided surprisingly little insight into how to design and implement a practical steganographic system. We therefore adopted an agile philosophy, and as the project progressed numerous design and implementation changes were undertaken and important lessons were learned. 
Although time constraints prevented us from exploring more detailed steganographic schemes, our tools can take an object file in any format, apply AES encryption, and embed the data so that the resultant video is indistinguishable from the original and is capable of normal video playback. As part of testing and evaluation we made binary executables publicly available\footnote{\url{http://steganosaur.us/download}}. \subsection{Technical limitations} Our main goal was to develop and research motion-vector-based techniques for embedding data. While this has ultimately proven successful, we encountered notable setbacks. We thought it would be possible to embed data in motion vectors of any P- or B- frame, but discovered in some cases that a macroblock can be coded as having \emph{no} motion vector (as opposed to one of zero magnitude). This is an important distinction that needs to be considered carefully when choosing the specific frames in which to embed covert data. Another limitation is our inability to determine steganographic capacity in advance. Motion vectors describe the spatial translation of a pixel block between frames, so modifying one frame will impact its neighbours. Moreover, the number of encodable macroblocks changes with the object to be embedded, and can also vary due to keyframe positions or GOP sizes. This sometimes changes B- or P- frames to I-frames, which carry no motion vectors, dramatically changing the number available. Whilst we could change our system to preserve I-frame positions, we found that regular GOP sizes provide better error correction. Nonetheless, we have produced a system that achieves most of the goals we set (see appendix \ref{app:images} for examples of some of our outputs). The project was considerably more complex than originally envisaged, but we have largely been able to identify and overcome the major hurdles we encountered.
\section*{Abstract}{\small The Panchromatic Hubble Andromeda Treasury (PHAT) is an on-going Hubble Space Telescope (HST) multi-cycle program that will image one-third of the M31 disk at high resolution, with wavelength coverage from the ultraviolet through the near-infrared. This dataset will allow for the construction of the most complete catalog of stellar clusters obtained for a spiral galaxy. Here, we provide an overview of the PHAT survey, a progress report on the status of observations and analysis, and preliminary results from the PHAT cluster program. Although only $\sim$20\% of the survey is complete, the superior resolution of HST has allowed us to identify hundreds of new intermediate and low mass clusters. As a result, the size of the cluster sample within the Year 1 survey footprint has grown by a factor of three relative to previous catalogs. \normalsize} \end{minipage} \section{Introduction \label{intro}} The Andromeda galaxy (M31) is an exquisite laboratory for studying stellar clusters. Its proximity \citep[785 kpc;][]{McConnachie05} allows for the detailed study of individual stars in clusters, while simultaneously providing a large, galaxy-wide sample of objects. Studies of M31's stellar cluster system have been on-going since the work of \citet{Hubble32}. The Panchromatic Hubble Andromeda Treasury (PHAT) survey, described in Sec.~\ref{phat}, represents a significant step forward in the study of Andromeda's cluster system, extending the sample of known clusters well into the intermediate and low mass regimes. This dataset will inform our understanding of cluster evolution as a whole, through its wide sampling of age, mass, and galactic environment parameter space. The survey's cluster science goals include placing constraints on cluster disruption behavior, assessing environmental dependencies of cluster formation and destruction, and measuring the star cluster initial mass function, among many others. 
The first step to achieving these goals lies in the accurate identification and characterization of M31's cluster system. We describe our current progress on this task in Sec.~\ref{clusterwork}, and direct the reader to our forthcoming paper (Johnson et al., in prep.) for complete details. \section{The PHAT Survey\label{phat}} The PHAT survey\footnote{Project Website: \href{http://www.astro.washington.edu/groups/phat}{http://www.astro.washington.edu/groups/phat}} (PI: Dalcanton) is a Hubble Space Telescope (HST) multi-cycle treasury program that will image one-third of M31's stellar disk. The survey region extends across the northeast half of M31, resulting in areal coverage stretching from the galactic nucleus out to the edge of the star-forming disk at galactocentric radii of $\sim$20 kpc. The full survey footprint is shown in Fig.~\ref{footprint}. Imaging across $\sim$0.5 deg$^2$ of contiguous spatial coverage will be obtained using three different HST instruments (WFC3-UVIS, ACS-WFC, WFC3-IR) in six filters ranging from the ultraviolet to the near-infrared (UV to NIR; F275W, F336W, F475W, F814W, F110W, F160W). Observations are grouped into 23 units known as ``bricks", each of which is made up of 18 tiled fields-of-view arranged in two side-by-side, 3$\times$3 half-brick arrays. Bricks are observed over two epochs, separated by six months, in which the optical imaging is collected for one of the half-brick arrays, and the UV and NIR images in the other. Six months later, these wavelength assignments are reversed in accordance with a 180-degree difference in HST roll angle, and six-filter imaging coverage for the brick is completed. For a complete survey and data reduction description, please see Dalcanton et al. (in prep.). The PHAT survey has been allocated more than 800 HST orbits, which will be executed over the course of four years. 
As of May 2011, four full bricks and two additional half-bricks have been observed, representing $\sim$20\% of the total expected survey data. The spatial positions of these Year 1 bricks are shown in Fig.~\ref{footprint}, and we discuss the cluster analysis results from these data in Sec.~\ref{clusterwork}. \begin{figure} \center \includegraphics[scale=0.42]{Johnson_LC_Fig1.pdf} \caption{\label{footprint} The footprint of the PHAT survey region (magenta) displayed on a GALEX NUV image of the northeast half of M31. Green rectangles represent the ``bricks" that make up the Year 1 imaging data. Blue circles show the spatial distribution of cluster identifications resulting from our Year 1 by-eye search.} \end{figure} In addition to the HST data, there is a remarkable amount of ancillary imaging and spectroscopy of M31 that is already available or will be obtained as part of the PHAT survey. Imaging datasets from GALEX, Swift/UVOT, Spitzer, and Herschel extend wavelength coverage further into the UV and IR, while observations from CARMA and the EVLA will provide important gas-phase diagnostics. Finally, red-sensitive spectroscopy of RGB stars from Keck/DEIMOS will improve constraints on galactic kinematics, while MMT/Hectospec observations will provide critical follow-up cluster spectroscopy. The stellar cluster results will also contribute to other science goals of the PHAT survey, which include placing constraints on the high-mass ($>$ 5 $\rm M_{\odot}$) stellar initial mass function and the calibration of stellar evolution models. Additionally, the survey goal to derive a spatially-resolved star formation history of the field population of M31 will allow for an interesting comparison between field and cluster age distributions and formation history. \section{Stellar Cluster Survey\label{clusterwork}} Our study of M31 stellar clusters is currently underway, with early work focusing on cluster identification and characterization. 
For PHAT, we chose to begin the cluster identification process with a systematic by-eye image search, and are currently implementing two automated search techniques to complement the by-eye work. This approach allows for the careful construction of a well-vetted cluster catalog \citep[following comparable work in M31 by][]{Krienke07, Krienke08, Hodge09, Hodge10}, while simultaneously providing an optimal training set to help refine the automated identification techniques. Proper calibration of the automated cluster finding routines is required, because unlike most automated extragalactic cluster searches, our clusters are resolved into individual stars (as faint as $\rm M_{F814W} \sim +0$). For an example of the cluster images provided by PHAT, see Fig.~\ref{cluster}. Instead of searching for slightly extended point sources, our search routines must account for both cluster components: the resolved stars and the underlying unresolved light. Our automated identification routines will take advantage of both the resolved and unresolved light components to obtain the best possible identification results. \begin{figure} \center \includegraphics[scale=0.48]{Johnson_LC_Fig2.pdf} \caption{\label{cluster} Image cutouts and optical CMD of cluster B256D showing the data quality available from the PHAT survey imaging. Image cutouts are 10"$\times$10". We also include an optical color composite of ground-based imaging from the Local Group Survey \citep{Massey06} for comparison.} \end{figure} The first results of our PHAT by-eye search show the great potential of this dataset. From the Year 1 imaging, we have preliminarily identified $\sim$500 likely clusters, whose spatial distribution is shown in Fig.~\ref{footprint}. Our current cluster catalog represents a factor of $\sim$3 increase in the number of known clusters for the same area \citep[previously 139 clusters;][and references therein]{Caldwell09}. 
As illustrated in Fig.~\ref{histogram}, we find that previous ground-based catalogs were generally complete for bright clusters ($\rm M_{F475W} < -6$), whereas HST imaging allows us to identify clusters more than $\sim$3 mag fainter. At the faint end of the distribution, we have greatly increased the number of cataloged clusters thanks to the marked expansion of HST imaging coverage that PHAT provides; previous targeted HST observations had uncovered only a limited number of such objects. Extrapolating from these results, we estimate the final PHAT stellar cluster sample will include $\sim$2000 clusters, sampling a mass range down to $<$1000 $\rm M_{\odot}$. \begin{figure} \center \includegraphics[scale=0.39]{Johnson_LC_Fig3.pdf} \caption{\label{histogram} A histogram showing the F475W absolute magnitude distributions of our PHAT by-eye clusters (black) and previously cataloged clusters (red) that lie within the Year 1 imaging. The subset of previously known clusters that were discovered using ground-based data are represented by the dotted histogram, while the remaining, fainter known clusters were discovered using limited HST imaging available before PHAT.} \end{figure} Understanding the completeness characteristics of our cluster sample is a high priority considering the population-wide questions we plan to address. Sample completeness will be easier to quantify once we have incorporated automated cluster-finding routines, but even as part of the by-eye search, we employ artificial cluster tests to make completeness measurements. Following the philosophy of completeness testing developed for stellar photometry, we insert samples of artificial clusters into the same reduction and analysis pipeline we use for the by-eye search. This allows us to make an assessment of sample completeness as a function of cluster age, mass, and galactic environment (due to the effects of crowding and extinction).
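To put these absolute magnitudes in context (our own back-of-the-envelope arithmetic, not a survey result): at M31's distance of 785 kpc the distance modulus is $\mu \approx 24.5$, so the quoted ground-based limit and the $\sim$3 mag fainter HST limit correspond to apparent magnitudes near 18.5 and 21.5.

```python
import math

# Apparent magnitude at M31's distance via the distance modulus
# mu = 5 log10(d / 10 pc). Purely illustrative arithmetic.

def apparent_magnitude(abs_mag, distance_pc):
    return abs_mag + 5 * math.log10(distance_pc / 10.0)

mu = apparent_magnitude(0.0, 785e3)      # distance modulus, ~24.5 mag
m_limit = apparent_magnitude(-6, 785e3)  # ground-based limit, ~18.5 mag
m_faint = apparent_magnitude(-3, 785e3)  # ~3 mag fainter, ~21.5 mag
```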
In addition to understanding sample completeness, our survey has placed great importance on deriving robust age and mass measurements. One advantage of the PHAT dataset is that it provides us the opportunity to determine cluster characteristics using multiple techniques, and to test for potential systematic differences in the results. Specifically, we plan to compare age and mass determinations obtained by fitting integrated light measurements to stochastically sampled models \citep[e.g.,][]{Fouesneau10}, through fitting cluster spectroscopy \citep[e.g.,][]{Caldwell09, Caldwell11}, and using color-magnitude diagram (CMD) analysis of individual resolved stars \citep[e.g., ][]{Dolphin02}. Using this multi-method approach, we will test for consistency between the different techniques, and determine the best methods to derive accurate ages and masses for these clusters, especially in the low to intermediate mass regime. We expect to publish the first PHAT cluster results in the coming months. Our Year 1 cluster catalog and accompanying six-band integrated photometry will be included in Johnson et al. (in prep.), while first age and mass assessments will appear in a subsequent paper (Fouesneau et al., in prep.). After these publications, we plan to periodically expand and update the cluster catalog as additional data arrives over the next three years. Our team looks forward to reporting the results of our work studying the cluster initial mass function, cluster disruption processes, and the environmental dependencies of cluster formation and destruction, among many other studies made possible by the PHAT survey data. \small \section*{Acknowledgments} Support for this work was provided by NASA through grant number HST-GO-12055 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. \bibliographystyle{aa}
\section{Introduction} One of the central lessons of the past few years is that the semiclassical gravitational path integral knows about the encoding of the interior of the black hole in its Hawking radiation. Black hole horizons are ubiquitous in our universe, as they are believed to exist at the center of almost every galaxy. Perhaps even more ubiquitous is the cosmic horizon. Unlike a black hole, this horizon surrounds us, but similar to a black hole, it is believed to Hawking radiate at a characteristic temperature set by the size of the horizon. Furthermore, cosmic horizons have a thermodynamic entropy \cite{Gibbons:1976ue} \begin{equation}\label{gh} S = \frac{A}{4G}\,, \end{equation} given by the same formula as the black hole entropy. Trying to understand the encoding of spacetime beyond the cosmic horizon from the finite cavity within it is a difficult problem; what would help is an ``exterior" or ``bird's eye" view of cosmology. Assuming an exit from inflation \cite{Freivogel:2006xu, Chen:2020tes, Hartman:2020khs} or fixing future boundary conditions at $\mathcal{I}^+$ \cite{Maldacena:2002vr, Strominger:2001pn} are closely related versions of providing this exterior view. They provide us with an infinite Hilbert space in which we can make arbitrarily precise measurements and therefore put the problem on a more similar footing to that of black holes.\footnote{The finite dimensionality of the Hilbert space for a de Sitter universe was first proposed in \cite{Bousso:1999dw, Banks:2000fe, Fischler} and developed in \cite{Bousso:2000nf, Banks:2001yp, Parikh:2002py, Dyson:2002nt, Banks:2002wr, Banks:2003pt, Banks:2006rx, Banks:2015iya, Banks:2018ypk, Banks:2020zcr, Susskind:2021yvs, Shaghoulian:2021cef}.} In this paper, we will take a similar exterior view of cosmology. Our model will be Jackiw-Teitelboim gravity with a positive cosmological constant, which has de Sitter space as a solution. 
It also has a black hole in de Sitter space as a solution, which can be thought of as a dimensional reduction of the Nariai black hole. Like in higher dimensions, this black hole solution admits an arbitrarily large analytic extension, which can have as many black holes and as many inflating regions as one desires. This analytic extension is often assumed to be a mathematical curiosity, but we will see that it passes some consistency checks. It will allow us to formulate our paradox, which we briefly outline below. In our spacetime with inflating regions separated by black hole regions, we will consider two observers -- Alice and Bob -- in distinct inflating regions. We will assume these observers' local patches have exited from inflation, and they are in a region where gravity is weak, such that the spacetime background can be fixed. We will refer to such regions where the spacetime geometry is fixed as \emph{frozen}.\footnote{Note that because gravity is turned off in these frozen regions, we can really think of Alice and Bob's Hilbert spaces as tensor factorizing. This should be contrasted with the idea that when the different patches are weakly gravitating there is a single, non-factorizing Hilbert space which represents them both.} This will be our exterior view. A picture is provided below in figure \ref{threeregions}. The theory governing where Alice and Bob live is some quantum field theory (QFT) on a curved background; the gravitational part of the spacetime simply prepares an initial state for the evolution of the QFT. Since the two QFTs are spacelike separated, all operators in Alice's system commute with all operators in Bob's system. But notice that the exteriors of Alice's and Bob's horizons overlap. If the exterior of Alice's horizon is encoded by the data in her cavity, and similarly with Bob, that means that both Alice and Bob encode the same piece of spacetime. 
This violates the no-cloning theorem, since they can both independently extract a bit from beyond the horizon without affecting each other's ability to do so. We will elaborate on this paradox in section \ref{sec:paradox}. Our resolution to this paradox, which we will describe in more depth in section \ref{sec:resolution}, will be that whether the geometry in Bob's distinct inflating region is taken to be frozen or not can have a drastic effect on Alice's ability to reconstruct operators in Bob's region (and vice versa for Bob). Furthermore, we will find that for most ``natural'' choices of state on the two asymptotic regions, the dominant saddle in the semi-classical path integral is one where Alice and Bob's regions exist in their own, disconnected spacetimes. In this case, each observer only encodes the region beyond their horizon \emph{within their connected portion of the universe}. Thus there is no overlap and no violation of no-cloning. In order to make the dominant saddle the one hosting both Alice and Bob in the same connected universe, one must first change the path integral prescription (act with an operator) which entangles the two asymptotic regions. In that case, both Alice and Bob will find the microscopic state of their inflating regions to be mixed and they will not be able to encode each other's regions. What we are describing is similar to a ``time-like homology constraint'', which disallows consideration of entanglement wedges which are in the past of a portion of frozen spacetime in the \emph{semi-classical} saddle. The point of the present work will be to justify this constraint in an explicit example by illustrating how the Euclidean path integral disallows such quantum extremal surface saddles from contributing. We now briefly outline the paper. In section \ref{sec:setup}, we will describe the set-up of JT gravity coupled to 2d conformal matter.
We will also describe the analytically extended nearly-Nariai geometries with multiple black hole and cosmological horizons. We will describe how quantum corrections are important for understanding this spacetime.\footnote{Analytic extensions of these near-Nariai geometries were recently discussed in \cite{Aguilar-Gutierrez:2021bns}, but the authors in that work did not account for the backreaction due to quantum matter, which we include.} We will also discuss the relevant boundary conditions and how to compute physical quantities in the frozen regions. In section \ref{sec:computation}, we describe various quantum extremal surface saddles. There we also discuss the entropy of a single inflating region. This leads us to a paradox which we discuss in section \ref{sec:paradox}. Then in section \ref{sec:resolution} we propose a resolution of this paradox via the gravitational path integral. In section \ref{sec:discussion} we end with some discussion and speculations about encoding a closed universe with inflating regions in a quantum system via the gravitational path integral. \section{JT gravity coupled to conformal matter in dS$_2$}\label{sec:setup} We will consider Jackiw-Teitelboim gravity with positive cosmological constant minimally coupled to conformal matter: \begin{equation}\label{JTact} I = - \frac{\phi_0}{4\pi}\left[\int_{\Sigma_2} R + 2 \int_{\partial\Sigma_2} K \right] -\frac{1}{4\pi} \left[\int_{\Sigma_2} \phi (R-2) + 2\phi_b \int_{\partial \Sigma_2} (K-1) \right] + I_{\text{CFT}}\,. \end{equation} The path integral over the dilaton fixes us to dS$_2$, \begin{equation} ds^2 = \frac{ - d\s^2+d\varphi^2}{\cos^2 \s}\,,\qquad \varphi\sim \varphi + L \,,\qquad \s \in (-\pi /2, \pi /2). \end{equation} We have fixed the de Sitter length to $1$. The metric equation of motion is \begin{equation}\label{eom} \left(g_{\m\n}\nabla^2 - \nabla_\m\nabla_\n+g_{\m\n}\right) \phi = 2\pi T_{\m\n} \,.
\end{equation} The stress tensor on metric $ds^2 = e^{2\omega} ds^2_{\hat{g}} = e^{2\omega} (-d\s^2 + d\varphi^2)$ is given by \begin{align} T_{\m\n} &= T_{\m\n}^{\hat{g}} -\frac{c}{12\pi}\left(\hat{\nabla}_\m \omega \hat{\nabla}_\n \omega - \frac 1 2 \hat{g}_{\m\n} (\hat{\nabla}\omega)^2 - \hat{\nabla}_\n \hat{\nabla}_\m \omega + \hat{g}_{\m\n}\hat{\nabla}^2 \omega\right),\\ &= T_{\m\n}^{\hat{g}} +\frac{c}{24\pi} \d_{\m\n} +\frac{c}{24\pi} g_{\m\n}\,.\label{tfin} \end{align} In the final line, the second term is traceless and the last term is proportional to the metric. Picking the periodicity $\varphi \sim \varphi + 2\pi$ means the stress tensor on the cylinder $-d\s^2 + d\varphi^2$ is given by $T_{\m\n}^{\hat{g}} = -\frac{c}{24\pi} \d_{\m\n}$, precisely canceling the piece $\frac{c}{24\pi} \d_{\m\n}$ above. This leaves only the term proportional to the metric, which can be absorbed into a constant shift in $\phi$.\footnote{The only term in the equation of motion \eqref{eom} sensitive to constant shifts in $\phi$ is the $g_{\m\n}\phi$ term, which can therefore cancel a contribution to the stress tensor that is proportional to $g_{\m\n}$.} In this case we have a dilaton solution \begin{equation} \phi = \phi_r \frac{\cos \varphi}{\cos \s}. \end{equation} We can consider a different periodicity for $\varphi$, $\varphi \sim \varphi + L$, in which case \begin{equation} T_{\m\n}^{\hat{g}} = -\frac{\pi c}{6L^2} \d_{\m\n},\ \ \varphi \sim \varphi + L. \end{equation} A $\varphi$-independent solution is given by \begin{equation} \phi = -2\pi T_{\s\s}^{\text{traceless}}\left(1+ (\g+\s)\tan \s\right)\, , \end{equation} where $T_{\m\n}^{\text{traceless}}= T_{\mu \nu}^{\hat{g}} + \frac{c}{24\pi} \delta_{\mu \nu}$ is the traceless part of $T_{\mu \nu}$. Since this satisfies the sourced equation (i.e. 
$T_{\m\n}^{\text{traceless}} \neq 0$), we can add it to our previous sourceless solution to obtain \begin{equation} \phi = \tilde{\phi}_r\frac{\cos \varphi}{\cos \s} -2\pi T_{\s\s}^{\text{traceless}}\left(1+ (\g+\s)\tan \s\right) \end{equation} for some free constant $\tilde{\phi}_r$ and where now $\varphi \sim \varphi + L$ for general $L$. We pick $L = 2\pi n $ with $n \in \mathbb{Z}^+$ to ensure periodicity of our dilaton $\phi$, obtaining \begin{equation} \phi = \tilde{\phi}_r\frac{\cos \varphi}{\cos \s} -\frac{c}{12}\left(1-\frac{1}{n^2}\right)\left(1+ (\g+\s)\tan \s\right). \end{equation} For a time-symmetric solution around $\s = 0$ we pick $\g = 0$, and to ensure we are inflating in at least some region of $\mathcal{I}^+$ we pick $\tilde{\phi}_r > \frac{\pi c}{24}(1-1/n^2)$. We also drop the constant piece in $\phi$, giving altogether \begin{equation} \phi = \tilde{\phi}_r \frac{\cos \varphi}{\cos \sigma} - \frac{c}{12} \left(1- \frac{1}{n^2}\right)\sigma \tan \sigma. \end{equation} Expanding near $\s = \pi/2$, with $\epsilon \equiv \pi/2 - \s$, gives \begin{equation} \phi \approx \frac{\tilde{\phi}_r \cos \varphi - \frac{\pi c}{24}\left(1-\frac{1}{n^2}\right)}{\epsilon}\,. \end{equation} We will refer to a region where $\phi \rightarrow -\infty$ as a crunching region (i.e. a black hole interior) and $\phi \rightarrow +\infty$ as an inflating region. We therefore see that this family of sourced solutions shrinks the inflating regions and grows the crunching regions as compared to the unsourced solutions. The inflating regions remain out of causal contact with one another, i.e. the (black hole) wormhole grows and therefore remains nontraversable. This is reasonable since we are reducing the magnitude of the Casimir energy. The case of $n=3$ is shown in figure \ref{threeregions}. This model of a universe with extended periodicity was suggested in \cite{Hartman:2020khs} and studied in \cite{Aguilar-Gutierrez:2021bns}.
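Several intermediate steps in checking this solution are suppressed above. As an illustrative aside (our sketch, not part of the original derivation; the variable names are arbitrary), one can verify symbolically that the full dilaton profile, with $\g$ kept general, solves the equation of motion \eqref{eom} component by component:

```python
import sympy as sp

# Global dS2 coordinates (sigma, varphi) and parameters of the solution.
s, v, gam = sp.symbols('sigma varphi gamma', real=True)
c, n, ptr = sp.symbols('c n phi_r', positive=True)

coords = [s, v]
g = sp.diag(-1/sp.cos(s)**2, 1/sp.cos(s)**2)   # ds^2 = (-dsigma^2 + dvarphi^2)/cos^2(sigma)
ginv = g.inv()

def christoffel(a, b, k):
    """Gamma^a_{bk} of the dS2 metric."""
    return sp.Rational(1, 2)*sum(
        ginv[a, d]*(sp.diff(g[d, b], coords[k]) + sp.diff(g[d, k], coords[b])
                    - sp.diff(g[b, k], coords[d]))
        for d in range(2))

# Traceless Casimir stress tensor on the 2*pi*n cylinder: T_{ss} = T_{vv}.
Tss = c/(24*sp.pi)*(1 - 1/n**2)
T = sp.diag(Tss, Tss)

# Sourced dilaton solution (gamma kept general).
phi = ptr*sp.cos(v)/sp.cos(s) - 2*sp.pi*Tss*(1 + (gam + s)*sp.tan(s))

# Covariant Hessian nabla_a nabla_b phi and Laplacian.
hess = sp.Matrix(2, 2, lambda a, b: sp.diff(phi, coords[a], coords[b])
                 - sum(christoffel(d, a, b)*sp.diff(phi, coords[d]) for d in range(2)))
box = sum(ginv[a, b]*hess[a, b] for a in range(2) for b in range(2))

# Check (g_{ab} box - nabla_a nabla_b + g_{ab}) phi = 2 pi T_{ab}, component by component.
for a in range(2):
    for b in range(2):
        eom = g[a, b]*box - hess[a, b] + g[a, b]*phi - 2*sp.pi*T[a, b]
        assert sp.simplify(eom) == 0
print("equation of motion verified")
```

The check confirms that the piece $\tilde{\phi}_r \cos\varphi/\cos\s$ drops out of the left-hand side, while the $\varphi$-independent piece sources exactly $2\pi T_{\m\n}$.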
\begin{figure} \hspace{-10mm}\includegraphics[scale = .2]{dSschwarzschild3.pdf} \caption{Solution to JT gravity with $n=3$. The magnitude of the Casimir energy due to matter decreases as $n$ increases, leading to crunching regions which are larger than inflating regions. The Penrose diagram is periodically identified, making the spatial topology that of a circle.}\label{threeregions} \end{figure} \subsection{Matter entropy}\label{matent} We will also need the matter entropy on our $2\pi n$-sized universe. The quantum state of matter will be given by a Weyl transformation of the vacuum state on the flat cylinder of size $2\pi n$. We therefore write \begin{equation} ds^2 = \frac{dx d\bar{x}}{\Omega^2}\,,\qquad \Omega = \frac{1}{2n} (x \bar{x})^{(1-n)/2} (1+ (x \bar{x})^n) \,, \end{equation} with a map to the dS metric given by \begin{equation} x = e^{-i(\s-\varphi)/n}\,,\qquad \bar{x} = e^{-i(\s+\varphi)/n}\,. \end{equation} This gives the CFT entropy as \begin{equation}\label{eqn:entropyindS} S_{CFT} = \frac c 6 \log \left(\frac{(x_2-x_1)(\bar{x}_2-\bar{x}_1)}{\epsilon_{UV}^2\Omega(x_1)\Omega(x_2)} \right)= \frac c 6 \log \left( 2n^2 \frac{\cos\left( \frac{\s_2-\s_1}{n} \right)- \cos\left( \frac{\varphi_2 - \varphi_1}{n}\right)}{\epsilon_{UV}^2\cos \s_1 \cos \s_2}\right), \end{equation} where $\epsilon_{UV}$ is an arbitrary cutoff to make the argument of the logs dimensionless. Notice that the Euclidean background is singular for $n>1$, since the angular coordinate of the sphere satisfies $\varphi \sim \varphi + 2\pi n$. The state of matter is still well-defined due to the Weyl equivalence with a smooth background. The inclusion of the gravitational sector breaks this equivalence, although to formulate our paradox we will assume there exists a reasonable state for the gravitational sector which allows us to use the island rule. We will return to this point in section \ref{sec:resolution}. 
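As a sanity check of the Weyl map and the entropy formula \eqref{eqn:entropyindS}, one can verify the identity for the argument of the logarithm numerically. The sketch below is our illustration (not part of the paper's derivation); $\Omega$ is evaluated on the principal branch, which is valid for $\s \in (-\pi/2,\pi/2)$:

```python
import cmath, math, random

def Omega(sig, n):
    """Weyl factor Omega = (1/2n) (x xbar)^{(1-n)/2} (1 + (x xbar)^n),
    with x xbar = exp(-2 i sigma / n) evaluated on the principal branch."""
    xxb = cmath.exp(-2j*sig/n)
    return (1/(2*n)) * cmath.exp(-1j*sig*(1 - n)/n) * (1 + xxb**n)

def x(sig, phi, n):  return cmath.exp(-1j*(sig - phi)/n)
def xb(sig, phi, n): return cmath.exp(-1j*(sig + phi)/n)

random.seed(0)
n = 3
for _ in range(100):
    s1, s2 = (random.uniform(-1.4, 1.4) for _ in range(2))
    v1, v2 = (random.uniform(-math.pi*n, math.pi*n) for _ in range(2))
    # Omega collapses to (1/n) e^{-i sigma/n} cos(sigma):
    assert abs(Omega(s1, n) - cmath.exp(-1j*s1/n)*math.cos(s1)/n) < 1e-12
    # argument of the log in the entropy formula:
    lhs = (x(s2, v2, n) - x(s1, v1, n))*(xb(s2, v2, n) - xb(s1, v1, n)) \
          / (Omega(s1, n)*Omega(s2, n))
    rhs = 2*n**2*(math.cos((s2 - s1)/n) - math.cos((v2 - v1)/n)) \
          / (math.cos(s1)*math.cos(s2))
    assert abs(lhs - rhs) < 1e-8*max(1.0, abs(rhs))
print("entropy kernel identity verified")
```

The same check confirms the simplification $\Omega = \frac{1}{n} e^{-i\s/n}\cos\s$, which is used implicitly in passing from the cross-ratio to the closed-form expression above.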
\subsection{Backreacted and extended Nariai solution} Recall that JT gravity in dS$_2$ can be obtained by a dimensional reduction of near-extremal black holes in dS$_d$. These near-extremal black holes arise from the Schwarzschild-de Sitter black hole in the limit where the black hole horizon approaches the cosmic horizon, called the Nariai limit. In the higher-dimensional picture, this spacetime has an analytic extension which puts in as many inflating and crunching regions as desired. Figure \ref{threeregions} is simply a dimensional reduction of one of these possible analytic extensions, for the case $n=3$. Usually, the extension to additional inflating regions is considered a mathematical curiosity; what are all these other universes? A sharper objection is that of Kay and Wald \cite{Kay:1988mu}, who argued that there are no reasonable quantum states for quantum fields on the extended Schwarzschild-de Sitter spacetime which respect the isometries of the spacetime. They proved this in two different ways. The first is effectively the statement that Schwarzschild-de Sitter is out of equilibrium, since the black hole horizon and cosmic horizon have different temperatures. This disallows a standard Euclidean preparation of a state. This argument does not apply to the Nariai limit we are concerned with, where the two horizons have the same temperature. The second proof uses monogamy of entanglement. If you line up several bifurcate horizons in a row, then a single diamond between the black hole and cosmic horizons has to purify both the diamond to its left and the diamond to its right in a state which respects the de Sitter symmetries. But this is impossible unless the left and right diamonds are the same, as occurs in the $n=1$ spacetime. The way the state described in the previous subsection evades this argument is that the bifurcate horizons disappear once we consider quantum corrections to the spacetime solution.
This suggests that the quantum state for matter we discussed above -- and the analytically extended spacetime -- may be fact instead of fiction. It would be nice to study the same issue in higher dimensions. A time slice of the Nariai geometry is $S^1 \times S^2$, and for thermal periodicity conditions along the spatial $S^1$ we expect a negative Casimir energy, whose magnitude decreases as we grow the size of the $S^1$. This provides a contribution which wants to make the black hole wormhole grow, as in two dimensions. We derive monotonicity of the Casimir energy with the length of the circle $L$ and some further constraints for a conformal field theory on $S^1 \times S^2$ in appendix \ref{app:casimir}. (The Nariai geometry is time-dependent, so here we are talking about the instantaneous energy, say at $\s = 0$.) We need not take the other universes in the analytic extension seriously as a phenomenological model for what happens in our universe. Indeed, black holes formed from collapse do not look like this. But similar to the thermofield double in anti-de Sitter space, it is a useful theoretical model to probe various questions about horizons. \subsection{Boundary conditions} If we want to involve our saddle in a Hartle-Hawking-like path integral prescription, we need to know what boundary conditions to impose at the future boundary. We will cut off the spacetime and glue it to flat space along the curve defined by \begin{align} \phi(x) = \frac{\phi_r}{\epsilon}\,, \end{align} and we will fix the induced metric on this curve to be \begin{align} ds^2 = \frac{dx^2}{\epsilon^2}\,. \end{align} The flat-space metric will be \begin{align}\label{eqn:hatmetric} ds^2_{hat} = \frac{-dt^2 + dx^2}{\epsilon^2}\,. \end{align} We will often refer to this flat-space region as a \emph{hat}, represented by the triangles at the top of figure \ref{newcoords}.
If this gluing occurs near $\mathcal{I}^+$, then the boundary condition on the dilaton picks out a curve $\sigma(\varphi)\approx \frac{\pi}{2} - \d \s(\varphi)$ in global coordinates which obeys the equation \begin{align} \tilde{\phi}_r \frac{ \cos \varphi}{\delta \sigma(\varphi)} - \frac{c}{12}\left(1-\frac{1}{n^2}\right)\frac{\frac{\pi}{2} -\delta \sigma(\varphi)}{\delta \sigma(\varphi)} = \frac{\phi_r}{\epsilon}. \end{align} Solving for $\delta \sigma(\varphi)$ we have \begin{align}\label{eqn:cutoffglobal} \delta \sigma(\varphi) = \epsilon\, \frac{\tilde{\phi}_r}{\phi_r} \left( \cos(\varphi) - \alpha_n\right),\ \ \alpha_n \equiv \frac{c\pi(1-1/n^2)}{24\tilde{\phi}_r}. \end{align} As of now, the ratio $\tilde{\phi}_r/\phi_r$ is an undetermined constant (analogous to $\eta_c/\epsilon$ in \cite{Chen:2020tes}) which in principle will need to be fixed in some auxiliary manner. We will return to this point shortly. Note that $\delta \sigma(\varphi)$ goes to zero when $\cos \varphi_* = \alpha_n>0$. This means that the inflating region goes from $\varphi \in (-\arccos \alpha_n, \arccos \alpha_n)$ with $\arccos \alpha_n <\pi/2$. In other words, the backreaction of the quantum fields on the $n>1$ universe causes the inflating region to shrink and the wormhole to grow, as was discussed above. There is a natural Milne-like wedge which covers the causal past of the portion of $\mathcal{I}^+$ that is inflating. Focusing on the inflating region which is centered about $\varphi = 0$, we can define coordinates which cover this wedge by the equations \begin{align}\label{eqn:milnecoords} &\tanh(\tilde{\chi}) = \sqrt{1-\alpha_n^2} \frac{\sin \varphi}{\sin \sigma - \alpha_n \cos \varphi} \nonumber \\ & \tanh(\tilde{\eta}) =\sqrt{1-\alpha_n^2} \frac{\cos \sigma}{\alpha_n \sin \sigma -\cos \varphi}. 
\end{align} One can also check that the dS$_2$ metric in these coordinates is just the familiar de-Sitter metric in the Milne wedge \begin{align}\label{eqn:milnemetric} ds^2 = \frac{-d\tilde{\eta}^2 + d\tilde{\chi}^2}{\sinh^2 \tilde{\eta}}, \end{align} and furthermore note that the cutoff surface given by $\delta \sigma(\varphi)$ in \eqref{eqn:cutoffglobal} is at constant $\tilde{\eta}$ given by \begin{align} \tanh \tilde{\eta}_c \approx \tilde{\eta}_c = -\epsilon\sqrt{1- \alpha_n^2} \frac{\tilde{\phi}_r}{\phi_r}. \end{align} We see that we can continuously match $\tilde{\eta}$ and $\tilde{\chi}$ with the $t$ and $x$ coordinates of \eqref{eqn:hatmetric} to get the metric in the hat \begin{align}\label{eqn:hatmilne} &ds_{hat}^2 = \frac{-d\tilde{\eta}^2 + d\tilde{\chi}^2}{\tilde{\eta}_c^2}. \end{align} As discussed in \cite{Chen:2020tes}, the ratio $\tilde{\eta}_c/\epsilon$ (or $\tilde{\phi}_r/\phi_r$) has to do with the re-scaling between the flat space $x$ coordinate and the Milne $\tilde{\chi}$ coordinate. The parameterization of the global manifold in terms of these coordinates (and their analytic continuations) is presented in figure \ref{newcoords} for the case $n=2$. \begin{figure}\centering \hspace{-10mm}\includegraphics[scale = .3]{newcoords.png} \caption{The lines in this diagram illustrate constant $\tilde{\eta}$ and $\tilde{\chi}$ surfaces. }\label{newcoords} \end{figure} To fix this undetermined parameter, one should compute the norm of the multi-hat state using path integral methods. If our saddle dominates this path integral, then the multi-hat state's norm will depend explicitly on $\tilde{\eta}_c/\epsilon$. One can then extremize this norm over all possible values of $\tilde{\eta}_c/\epsilon$. See \cite{Chen:2020tes} for further discussion of this point. As is discussed below in section \ref{sec:resolution}, the saddles we have discussed here do not naturally dominate the path integral. 
Without a specific prescription for making this multi-hat saddle dominant, we cannot explicitly fix the ratio $\tilde{\eta}_c/\epsilon$. Thus, for the remainder of this work, we will leave it as an unfixed parameter, keeping in mind that in principle it will be fixed to a specific value. We can use our coordinate transformation in \eqref{eqn:milnecoords} to write the metric \eqref{eqn:hatmilne} in terms of $\sigma, \varphi$ coordinates. It takes the form \begin{align} &ds_{hat}^2 = \frac{\sinh^2 \tilde{\eta}(\sigma,\varphi)}{\tilde{\eta}_c^2 \cos^2 \sigma} \left(-d\sigma^2 + d\varphi^2\right) \nonumber \\ & = \Omega^2(\sigma,\varphi) \left(-d\sigma^2 + d\varphi^2\right), \end{align} where \begin{align}\label{eqn:hatWeyl} \Omega^2(\sigma, \varphi) = \frac{1}{\tilde{\eta}_c^2} \frac{1-\alpha_n^2}{(\cos \varphi - \alpha_n \sin \sigma)^2-(1-\alpha_n^2) \cos^2 \sigma}. \end{align} We can then find entanglement entropies for regions with one endpoint in the hat and the other in the de Sitter region by simply replacing one of the $\Omega$'s in \eqref{eqn:entropyindS} by the $\Omega$ in \eqref{eqn:hatWeyl}. Since $\Omega$ is local to the endpoint in the hat region, this will just shift the entropy in \eqref{eqn:entropyindS} by an overall additive constant, independent of the position of the endpoint that is in the de Sitter region. Finally, an important but potentially confusing point is that the quantum fields of the CFT living in the back-reacted Milne wedge covered by the coordinates in \eqref{eqn:milnecoords} will \emph{not} be thermal with respect to Milne time $\tilde{\eta}$. There is, however, a different set of coordinates which one can choose for the same back-reacted wedge with respect to which the CFT state is thermal. We can find these coordinates by first conformally mapping the interval $\sigma = \pi/2$, $\varphi \in (-\varphi_*, \varphi_*)$ to the half-circle $\sigma = \pi/2$, $\varphi \in (0,\pi n)$.
The wedge associated to this half-circle can then be viewed as a Rindler wedge of the Poincare patch of the Lorentzian cylinder covered by $\sigma, \varphi$. Following this procedure, the Rindler coordinates covering this Rindler wedge are related to $\sigma, \varphi$ by \begin{align} &\tanh x_{\text{th}} = \sqrt{1-\beta_n^2} \frac{\sin \frac{\varphi}{n}}{\cos \frac{\sigma - \pi/2}{n} - \beta_n \cos \frac{\varphi}{n}}\nonumber \\ &\tanh t_{\text{th}} = -\sqrt{1-\beta_n^2} \frac{\sin \frac{\sigma - \pi/2}{n}}{\beta_n \cos \frac{\sigma - \pi/2}{n} - \cos \frac{\varphi}{n}} \end{align} where $\beta_n = \cos \frac{\varphi_*}{n}$ with $\varphi_*$ defined by $\alpha_n = \cos \varphi_*$. Just as before, one can check that the coordinates $t_{\text{th}}$ and $x_{\text{th}}$ cover the wedge associated to the central inflating region. To reiterate, the state of the fields \emph{is} thermal with respect to $t_{\text{th}}$ but \emph{not} $\tilde{\eta}$. \begin{figure} \hspace{-10mm}\includegraphics[scale = .2]{dSschwarzschild3ent.pdf} \caption{Island region $I$ for entropy of region $R$. }\label{3island} \end{figure} \section{Island computation}\label{sec:computation} With the gravitational solutions at hand, we want to consider the generalized entropy of an interval in one of the inflating regions, analogous to the computations in \cite{Chen:2020tes, Hartman:2020khs, Aguilar-Gutierrez:2021bns}. We will slightly modify the solution above by appending flat-space regions to each of the inflating regions. Since the dilaton diverges toward the inflating boundary, the gravitational coupling is approaching zero there. Therefore in the flat-space region we will assume gravitational effects can be completely ignored. We will assume that the island region is as depicted in figure \ref{3island}, and that we are in an OPE limit such that the entropy in the complement channel -- which is the union of two intervals -- factorizes. 
We will do the computation for a region $R$ close to but below $\mathcal{I}^+$, where we will ignore the effects of gravity; as discussed below \eqref{eqn:hatWeyl}, moving $R$ into the hat as in figure \ref{3island} simply introduces an additive constant due to the distinct Weyl factor in the hat. The gravitational entropy at each endpoint is $\phi_0+\phi$, which when combined with the matter entropy in section \ref{matent} gives the generalized entropy as: \begin{equation} S_{gen} = 2\phi_0 +2 \tilde{\phi}_r \frac{\cos \varphi_I}{\cos\s_I} - \frac{c}{6}\left(1-\frac{1}{n^2}\right) \s_I \tan \s_I+\frac c 3 \log \left(2n^2 \frac{\cos\left( \frac{\s_I-\s_R}{n} \right)- \cos\left( \frac{\varphi_I - \varphi_R}{n}\right)}{\epsilon^2\cos \s_I \cos \s_R}\right). \end{equation} The overall factor of two is for both intervals. We want to extremize this answer with respect to the $\{\s_I, \varphi_I\}$ endpoint. It is a little difficult to extremize this directly, but in the limit of small backreaction $c/\tilde{\phi}_r \ll 1$, the $n>1$ saddles are not very different from the $n=1$ saddle, at least as long as we choose the endpoints of region $R$ to be near the interfaces between the crunching and inflating regions on $\mathcal{I}^+$. The resulting island is as in figure \ref{3island}. The above situation presents us with a puzzle. While an observer Alice with access to region $R$ can reconstruct the rest of the universe, the same would apply to an observer Bob in the right or left patches. In particular, Alice and Bob would have overlapping islands (and would be in each other's island). This is inconsistent with complementary recovery, and leads to a violation of the no-cloning theorem in quantum mechanics. \subsection{Entropy of an entire hat} We can also take region $R$ to be an entire inflating region. In this case, the natural answer for the entropy, analogous to the island we found in the previous section, is to extend region $R$ into the bulk such that it covers the entire spacetime.
This is displayed in figure \ref{2hats} and gives $S = 0$. As in the previous section, if we compute the entropy of the other hat, we will find again that the island region is the rest of the spacetime, giving $S = 0$ again. This can also be seen more directly from the replica analysis, which we discuss in section \ref{sec:resolution}. Now that we have argued for overlapping island regions, let us move on to the paradox that arises. \begin{figure} \centering \includegraphics[scale = .2]{dSschwarzschild2try2ent.pdf} \caption{The ``purity'' saddle which gives $S = 0$ for the entropy of one of the two hats.}\label{2hats} \end{figure} \section{A paradox} \label{sec:paradox} In this section we will carefully state our assumptions and the inconsistency they lead to. We will see that the following two assumptions are in contradiction with each other: \begin{enumerate} \item For the two (or multi) hat state, the entanglement wedge of either hat is the entire universe. \item Any operator in Hat$_1$ commutes with any operator in Hat$_2$. \end{enumerate} We now argue by contradiction that these cannot both be true. By the first assumption, the entanglement wedges of the two hats overlap, for example in either of the black hole interiors. For operators $\phi$ and $\pi$ in the black hole interior such that $[\phi, \pi] \neq 0$, we then have \begin{equation} \langle \psi|[\phi, \pi] |\psi\rangle =\langle \psi| [O_1, O_2] |\psi \rangle = 0 \end{equation} where in the first equality we used entanglement wedge reconstruction to represent $\phi$ in Hat$_1$ with $O_1$ and $\pi$ in Hat$_2$ with $O_2$. The second equality follows by assumption 2, but then we reach a contradiction since we assumed $[\phi, \pi] \neq 0$. \subsection*{Connection to no cloning} Note that this contradiction is very similar to the statement that if two complementary regions had overlapping entanglement wedges, then we could clone quantum information. The proof for this is as follows.
Suppose we have a quantum error-correcting code with overlapping entanglement wedges for complementary regions. Denote the two complementary regions by $A$ and $\bar{A}$. Suppose our code subspace $\mathcal{H}_C$ is spanned by the states $\lbrace \ket{i} \rbrace$ indexed by $i$. This subspace could be, for example, the Hilbert space of a qubit in the past of Hat$_1$ or Hat$_2$. Then suppose that both of these regions can reconstruct the code subspace. Reconstructability is equivalent to the existence of a decoding isometry which isolates the code subspace state onto a sub-factor of $\mathcal{H}_A$ with dimension equal to that of the code subspace. In other words, using the conventions and notations of \cite{Harlow_2017}, this means that there is an isometry $U_A$ acting on $\mathcal{H}_A$ such that \begin{align}\label{eqn:decode} U_A \ket{i}_{A\bar{A}} = \ket{i}_{A_1} \otimes \ket{\chi}_{A_2 \bar{A}} \end{align} for all $\ket{i}$ in the code subspace and for some division $A_1$ and $A_2$ such that $|A_1| = \dim \mathcal{H}_C$ and $|A_2| = \dim \mathcal{H}_A/\dim \mathcal{H}_C$. See \cite{Harlow_2017} for the slight modification of this formula if $\dim \mathcal{H}_C$ is not a divisor of $\dim \mathcal{H}_A$, although this is unimportant for us. Here $\ket{\chi}_{A_2\bar{A}}$ is some state which is independent of $\ket{i}$, which is essential. Now by assumption there is also a similar equality for $U_{\bar{A}}$ on the complement region. Putting eq. \eqref{eqn:decode} together with its complementary version, and using that $U_A$ and $U_{\bar{A}}$ commute, we have \begin{align}\label{eqn:paradox} U_{\bar{A}} U_A\ket{i}_{A\bar{A}} = \ket{i}_{A_1} \otimes U_{\bar{A}} \ket{\chi}_{A_2 \bar{A}} = \ket{i}_{\bar{A}_1} \otimes U_{A} \ket{\chi}_{\bar{A}_2 A}. \end{align} The latter equality tells us that $\chi$ is in fact dependent on $i$, violating the assumption of reconstructability.
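The tension with linearity can be made concrete in a two-qubit toy model (our illustration, not part of the argument; the spectator state $\ket{\chi}$ factors out and is omitted). If \eqref{eqn:decode} held on both sides for the basis code states $i=0,1$, then acting on the code state $\ket{+} \propto \ket{0}+\ket{1}$ the commuting isometries would produce a Bell-type ``cloned'' state on $A_1\bar{A}_1$, whose marginal on $A_1$ is maximally mixed rather than the pure marginal $\ket{+}\bra{+}$ demanded by \eqref{eqn:decode}:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1)/np.sqrt(2)

# If the decoding held on BOTH sides for the basis code states i = 0, 1,
# linearity would send the code state |+> to the "cloned" state
# (|00> + |11>)/sqrt(2) on A_1 x Abar_1 (the spectator |chi> is dropped).
cloned = (np.kron(ket0, ket0) + np.kron(ket1, ket1))/np.sqrt(2)

# Marginal on A_1 of the cloned state:
M = cloned.reshape(2, 2)          # psi = sum_{ab} M[a,b] |a>_{A1} |b>_{Abar1}
rho_A1 = M @ M.conj().T

# But the decoding applied directly to |+> demands the *pure* marginal |+><+|.
target = np.outer(plus, plus)

assert abs(np.trace(rho_A1 @ rho_A1).real - 0.5) < 1e-12   # maximally mixed
assert np.linalg.norm(rho_A1 - target) > 0.4               # nowhere near |+><+|
print("cloning-type decoding is inconsistent with linearity")
```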
Note that if this string of equalities were true then we could clone quantum information. This is because the second equality in eq. \eqref{eqn:paradox} tells us that the reduced density matrix of $U_{\bar{A}} U_A \ket{i}_{A\bar{A}}$ on $A$ is $\ket{i}\bra{i}_{A_1} \otimes \chi_{A_2}$, where $\chi_{A_2} = \text{Tr}_{\bar{A}} \ket{\chi} \bra{\chi}_{A_2 \bar{A}}$, and analogously for $\bar{A}$, but the only pure state on $A\bar{A}$ with these reduced density matrices is \begin{align} U_{A}U_{\bar{A}} \ket{i}_{A\bar{A}} = \ket{i}_{A_1} \otimes \ket{i}_{\bar{A}_1} \otimes \ket{\chi}_{A_2 \bar{A}_2}. \end{align} Thus, the joint isometry $U_AU_{\bar{A}}$ would allow one to clone quantum information onto the $\mathcal{H}_{A_1} \otimes \mathcal{H}_{\bar{A}_1}$ subfactor of $\mathcal{H}_{A\bar{A}}$, which of course is impossible. \section{Resolution}\label{sec:resolution} Our proposed resolution to the paradox above is that the quantum extremal surface saddles in figures \ref{3island} and \ref{2hats} are actually \emph{incorrect} for the problem as posed. The reason is, roughly, a ``time-like'' homology constraint. To illustrate this, we first examine a slightly different set-up from the one we consider with multiple disconnected hats. \begin{figure} \centering \includegraphics[scale = 1.2]{twoint.png} \caption{We imagine taking global de Sitter and freezing two regions near $\mathcal{I}^+$ pictured in red. We integrate over the geometry away from these intervals. Naively, there is a puzzle since the entanglement wedge for one of the intervals appears to encompass the whole universe, since the spatial cross sections are compact. The naive entanglement wedge for $R_2$ is pictured in blue. }\label{fig:twoint} \end{figure} \subsection{A toy model of a toy model}\label{toytoy} We briefly discuss a slightly simpler set-up where a similar confusion arises. Consider global $dS_{d+1}$, without black holes, illustrated in figure \ref{fig:twoint}.
We can imagine ``freezing'' the geometry in two regions $R_1$ and $R_2$ close to $\mathcal{I}^+$. By freezing here, we mean that in defining the quantum state near $\mathcal{I}^+$ we only integrate over quantum fields while fixing the metric in the frozen regions.\footnote{One might have in mind here that the two frozen regions $R_1$ and $R_2$ correspond to two boundary quantum field theories. The state of the system on $R_1 \cup R_2$ is then prepared via the path integral over geometries in its past. To determine this state, one could use the Hartle-Hawking prescription or perhaps a modified prescription to produce a different state, as we will discuss below.} If we take the saddle-point in figure \ref{fig:twoint} seriously, we run into a paradox similar to the one described in section \ref{sec:paradox}. This is because again the spatial cross-sections of global $dS_{d+1}$ are topologically just $S^d$, and so the entanglement wedge for either $R_1$ or $R_2$ is determined by the trivial quantum extremal surface, i.e. the entanglement wedge is the whole universe. If this were true, there would be operators encoded in $R_1$ which do not commute with operators in $R_2$. \begin{figure} \centering \includegraphics[scale = 1.3]{twointdisconn.png} \caption{The true saddle for the problem of two regions near $\mathcal{I}^+$ is actually two disconnected universes, with each region in its own copy of the original spacetime.}\label{fig:twointdisconn} \end{figure} We can see how the Hartle-Hawking prescription resolves this confusion, however. The HH prescription says that to compute the dominant contribution to the wavefunction we just fix the two regions and then sum over all no-boundary geometries in the past which end on these intervals.\footnote{Note that, as always, there are contributions from closed universes which contain neither $R_1$ nor $R_2$.
These are only relevant for computing the norm of the state but will divide out when we compute normalized quantities.} When we do this, however, the dominant saddle is \emph{not} the one pictured in figure \ref{fig:twoint}, but rather one where the two regions are in their own separate universes as in figure \ref{fig:twointdisconn}. This is because this saddle is enhanced by a factor of $e^{\frac{1}{16 \pi G_N} \int d^{d+1}x \sqrt{g}\, R} \equiv e^{S_0}$ (due to the additional universe) relative to the saddle where both intervals are in the same universe. In this saddle, there is no confusion: the two regions just encode their own copies of the universe. We see that without modification, the HH prescription produces a state of the two regions which is naturally disentangled. One could ask if there is a modification of the HH prescription -- in other words, a different state -- where the saddle in figure \ref{fig:twoint} \emph{is} the dominant saddle. Rather than delving into this question here, we turn to the main set-up of interest. \subsection{Back to the multiple black hole set-up}\label{sec:multihatdom} \begin{figure} \centering \includegraphics[scale = .8]{twohat3d.png} \caption{The connected, two-hat saddle which we are interested in studying. Here we have illustrated the Lorentzian to Euclidean continuation of this saddle from the $\sigma = 0$ line. This is a hemisphere of curvature $R=2$ but with a conical excess at $\sigma = i\infty$, the south pole of the hemisphere. The conical singularity has an opening angle of $4\pi$. For $n$ hats, it would have an opening angle of $2\pi n$.}\label{fig:twohat3d} \end{figure} We return to our model of 2d Schwarzschild-de Sitter with a $2\pi n$-sized universe for $n>1$. As mentioned at the end of section \ref{matent}, the Euclidean manifold which would prepare the Hartle-Hawking state has a conical singularity.
In other words, although the configuration in section \ref{sec:setup} is a solution everywhere in Lorentzian signature, the analytic continuation of these geometries to Euclidean signature is not everywhere a solution to the JT saddle-point equations. The spatial cross section of the multi-hat saddle depicted in figure \ref{2hats} has a total angle of $2n\pi$ for $n$ hats, which when continued into Euclidean signature leads to a conical singularity in the past ($\sigma = +i\infty$ in global coordinates), at which point the constraint $R=2$ is no longer obeyed. This is illustrated in figure \ref{fig:twohat3d}. Suppose we have a UV-complete theory where this conical singularity is regularized somehow. Then using the argument in section \ref{toytoy}, we see that if we freeze $m < n$ hats, then the dominant saddle will be $m$ disconnected universes, and we will not run into a paradox of overlapping entanglement wedges. Without such a UV-complete theory, the resolution is even simpler: these problematic spacetimes are simply not saddles. They instead appear as though an operator has been inserted at some point in the Euclidean past. Thus they do not contribute to -- let alone dominate -- the Hartle-Hawking path integral without operator insertions. \begin{figure} \centering \includegraphics[scale = .8]{3ddisconn.png} \caption{If we insert a conical singularity at a position which references only one of the hats, instead of both simultaneously, the dominant saddle will be two disconnected universes illustrated here. If we define the conical singularity with respect to Hat$_L$, then Hat$_L$ will be in a universe with angle $4\pi$ and another, fluctuating asymptotically inflating region. Hat$_R$ will be off in its own $2\pi$ universe. We have schematically illustrated the procedure of tying the conical singularity to a hat by the green dashed line.
To prevent the saddle illustrated here from dominating the path integral, we need to tie the position of the conical singularity to both hats simultaneously. We discuss some ways of doing this in the main text.}\label{fig:3ddisconn} \end{figure} One could then ask: can we include operator insertions such that the connected spacetime becomes a solution, and in fact dominates the path integral? For this to occur, we need a nonperturbative definition of the location of the insertion, i.e. the location of the conical singularity. A natural way of discussing the location is to define it relative to the future boundary conditions. For example, one might geodesically ``dress'' this point to the future asymptotic boundary. Then it is not hard to see that to make the connected two-hat saddle dominant, we need the position of this singularity to be defined relative to not just one of the hats \emph{but to both simultaneously}. To understand this, imagine that we referenced just one of the hats, say the left hat Hat$_L$. Then, similarly to the previous subsection, the dominant saddle will again be one where the two hats sit in their own, disconnected de Sitter universes, since this is enhanced relative to the connected saddle by factors of $e^{S_0}$. Here the universe with Hat$_L$ has an extra asymptotically de Sitter region due to the conical singularity in the Euclidean region of the manifold. This is illustrated in figure \ref{fig:3ddisconn}. This again shows that Hartle-Hawking-like prescriptions naturally want to disconnect all asymptotically de Sitter regions, unless we force them to connect. Thus we see that to force them to connect we need to define the position of the conical singularity with respect to \emph{both} hats simultaneously. In this case the disconnected saddles no longer contribute since the dressing of the conical singularity only includes geometries which are connected in the path integral.
\subsection{Inserting the conical singularity relative to both hats and resolution of the paradox} The discussion so far has been abstract, since we have not given any concrete method by which to actually insert this conical singularity in the Euclidean past. We now discuss two possible options. \subsubsection{Freezing by hand}\label{sec:frozen} One option is to follow in the footsteps of \cite{Almheiri_2021} and just freeze more of the geometry by hand. Instead of freezing only the asymptotic regions where the dilaton is becoming large and gravity is becoming weak, we could choose to freeze all regions of the geometry where the dilaton $\phi(x)$ is bigger than some value $\phi_*$. For instance, one could freeze all parts of the geometry (Lorentzian or Euclidean) where the dilaton is $\phi(x) \geq 0$, which corresponds to all regions where the total dilaton is greater than its extremal value $\phi_0$. Note that this is not the value of the dilaton at the de Sitter horizon, which is instead $\phi(x) = \tilde{\phi}_r(1+ \mathcal{O}(c/\tilde{\phi}_r))$. Ignoring the quantum corrections of order $c/\tilde{\phi}_r$, we see that this amounts to freezing everything in the geometry with angular variable $\varphi \in [-\pi/2, \pi/2] \cup [3\pi/2, 5\pi/2]$. This is illustrated in figure \ref{fig:twohat3dfrozen}. We see that the conical singularity is just barely included in the Euclidean past frozen region. With quantum corrections, the dilaton in fact linearly blows up to $+\infty$ at the conical singularity and so the frozen region, where $\phi \geq 0$, includes a neighborhood of the conical singularity. Since the geometry is frozen in what used to be the ``bulk'' of the spacetime, we are considering a genuinely \emph{different} state from the HH state on the Hilbert space $\mathcal{H} = \mathcal{H}_{\text{Hat}_L} \otimes \mathcal{H}_{\text{Hat}_R}$.
Given this frozen region, we then compute the wavefunction of this state by summing over all geometries which end on this mixed-signature manifold. Clearly, our connected saddle geometry is a solution to this problem and likely nothing else is. \begin{figure} \centering \includegraphics[scale = .79]{twohat3dfrozen2.pdf} \caption{One could make the two-hat connected saddle dominate the path integral by freezing some portion of the geometry in the interior. A natural choice is to freeze all regions with some dilaton value $\phi \geq 0$. Here we show the saddle where we have frozen everything filled in with gray. Note that when we include $c/\tilde{\phi}_r$ corrections to this solution then the frozen region actually includes a neighborhood of the conical singularity.}\label{fig:twohat3dfrozen} \end{figure} \textbf{Resolution of the paradox for frozen geometries}: If we freeze portions of the geometry by hand, then the resolution to our paradox is simple; since the frozen region extends into the Lorentzian past of both saddles, the naive QES for Hat$_L$, which includes the whole universe, can no longer work because it would manifestly include a piece of the frozen region in the past of Hat$_R$. We can be more explicit and review how the Euclidean path integral implements the constraint that the entanglement wedge for a frozen region $R$ should not include any other frozen region besides $R$. To illustrate how the gravity path integral enforces this constraint, we can compute the Renyi-2 entropy, $S_2 = -\log \text{Tr} \rho^2$, of Hat$_L$ in the HH state prepared by JT gravity along with the extra boundary condition of freezing the portions of its geometry in figure \ref{fig:twohat3dfrozen}. We will call the state of the two hats prepared via this path integral $\ket{HH_f}$ where the subscript $f$, for ``frozen,'' denotes that we are working with the extra boundary conditions. 
What we want to argue is that $S_2 >0$ since the saddle which gives $S_2 = 0$ does not obey the boundary conditions. \begin{figure} \centering \includegraphics[scale = 1]{swaptrickwsing.png} \caption{This figure illustrates what happens if the frozen portion of the geometry is only connected to one hat, in this case Hat$_L$. The green-dashed lines schematically represent the frozen portion of the geometry. In this case, Hat$_L$ from replica 1 can be swapped with Hat$_L$ from replica 2 and so the dominant saddle will just be two copies of the saddle which dominates the norm, $\braket{HH_f|HH_f}$. In other words, $\text{Tr}[\rho^2] =\braket{HH_f|HH_f}^2$ where $\rho$ is the unnormalized density matrix. The double-headed arrow indicates that the left hats from each replica have been swapped.}\label{fig:swaptrick} \end{figure} To compute $S_2$, we prepare two replica copies of the pure state density matrix $\ket{HH_f}\bra{HH_f} \equiv \rho_{LR}^{HH_f}$, trace out $R$ and then compute $\text{Tr}[\rho_L^2]$. This amounts to computing the path integral with 4 boundaries or 8 asymptotic hat regions, gluing all the $R$ kets to their bra partners in the same replica and then gluing the $L$ kets to the $L$ bras in the other replica. This is illustrated in figure \ref{fig:purityreptrick}. To compute the purity, we are then instructed to sum over all JT saddles that have these boundary conditions discussed in the previous section. If the frozen region does not geometrically connect Hat$_L$ to Hat$_R$, then we can effectively swap the bra for Hat$_L$ in replica copy 1 with the bra in Hat$_L$ for replica 2 as in figure \ref{fig:swaptrick}. The dominant saddle would then just be two copies of the saddle that dominates when computing $\braket{HH_f|HH_f}$. In other words, we would get $S_2 = 0$. 
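In equations: writing $\hat{\rho}_L = \rho_L/\braket{HH_f|HH_f}$ for the normalized density matrix, the swapped saddle would give
\begin{align}
\text{Tr}\,\hat{\rho}_L^{\,2} = \frac{\text{Tr}[\rho_L^2]}{\braket{HH_f|HH_f}^2} = 1 \quad \Longrightarrow \quad S_2 = -\log \text{Tr}\,\hat{\rho}_L^{\,2} = 0\,,
\end{align}
which is the statement that Hat$_L$ would be in a pure state.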
If, on the other hand, the frozen region connects both boundaries as in figure \ref{fig:twohat3dfrozen}, then we cannot swap the replica copies because Hat$_L$ and Hat$_R$ are effectively tied to each other within each replica by the path integral over the frozen region. In other words, one can only get saddles which connect between bras (or kets) of the same replica. This is not the same as $\braket{HH_f|HH_f}^2$ and so $S_2 \neq 0$. Indeed, we expect it to be of order $\phi_0$. As we will see in the next section, there is another quantum extremal surface which we have thus far ignored that appears for $n >1$ and which gives an answer $S_2 \approx \phi_0$. \begin{figure} \centering \includegraphics[scale = 1]{bcswithsingularity.png} \caption{We illustrate the boundary conditions associated with computing the purity, $\text{Tr}[\rho^2_L]$, for the state on Hat$_L$ in the Hartle-Hawking state with modified boundary conditions, where the conical singularity (marked by ``X") is inserted in the frozen portion of the geometry. The prime denotes bra vs. ket and the subscript denotes replica number. Since this dressing ties hats of the same replica number to each other, we see that the quantum extremal surface (or its Renyi-2 analog), which gave an answer of $S_2 = 0$ for each hat, is excluded. The only possibility is the analog of the ``Hawking" saddle, which gives a non-zero answer for $S_2$.}\label{fig:purityreptrick} \end{figure} \subsubsection{Modifying the JT action} Freezing a large portion of the interior geometry is a rather drastic way of making the multi-hat saddle dominate the path integral. One might hope that there is a less severe way of accomplishing this goal. One method might be to modify the bulk theory so as to insert a conical singularity at the correct point.
More explicitly, one could attempt to produce this saddle by modifying the JT action from \begin{align}\label{eqref:modJT} \int d^2 x \sqrt{-g}\, \phi(R-2) \to \int d^2 x \sqrt{-g} \phi \left( R-2-\frac{\alpha}{\sqrt{-g}} \delta^2(x-x_*) \right) \end{align} where $\alpha$ is an independent parameter that can be tuned to be $\alpha = \frac{2(n-1)}{n}$ with $n$ the number of asymptotic hats. Here we have in mind that $x_*$ is defined in some way that makes sense non-perturbatively, and, when evaluated on the metric $g_0$ with a conical singularity at the past Euclidean south pole (i.e. $\sigma = i \infty$ in global coordinates), we find that $x_*^{\mu}(g_0) = (\sigma = i\infty, \varphi )$. For more general metrics, $x_*$ might depend on the background metric and dilaton: $x_* = x_*(\phi, g)$. Suppose there exists some method to specify $x_*$ that depends only upon the background metric $g$ and not the dilaton. For example, we might imagine geodesically dressing the point $x_*$ to the future asymptotic boundary. We can then integrate over the dilaton and find that the geometries localize to those with positive curvature and a delta function singularity at $x = x_*$. The metric equations also get modified to be \begin{align}\label{eqn:dileqnmod} \left(g_{\m\n}\nabla^2 - \nabla_\m\nabla_\n+g_{\m\n}\right) \phi = 2\pi T_{\m\n} +\alpha \frac{\delta x^{\beta}_*(g)}{\delta g^{\m \n}(x)} \partial_{\beta} \phi(x_*)\,. \end{align} Note that for a point $x_*$ which is extremal with respect to the dilaton, this extra source term on the right hand side vanishes. This is why the fixed area states of \cite{Akers:2018aa, Dong:2018aa} have been considered for extremal surfaces only. More generally, however, this term may not vanish. Furthermore, depending on how specifically $x_*$ is defined in terms of $g$, $\delta x_*/\delta g$ might be quite hard to calculate. Regardless of the specific details of the dilaton solution to eq.
\eqref{eqn:dileqnmod}, as long as there remain $n$ asymptotically inflating regions after accounting for the term proportional to $\alpha$, one can still discuss a version of our paradox in this modified saddle point. This paradox for the modified saddle described by \eqref{eqn:dileqnmod} will be resolved in the same way, as follows. Figures \ref{fig:swaptrick} and \ref{fig:purityreptrick} can be used again in this context just by re-interpreting the green dashed lines as schematically denoting the dressing of the conical singularity with respect to the asymptotic hats, i.e. a prescription for defining $x_*(g)$. The same argument then goes through. When we dress the conical singularity to only one hat, the dominant saddle is the disconnected one. When we dress the conical singularity to both simultaneously, we can no longer swap hats in different replicas, since they are ``tied together'' by the green lines in figure \ref{fig:purityreptrick}. We see that the resolution is effectively the same, regardless of the details of how we prepared the entangled state of the two hats via the gravitational path integral. \subsection{Entropy of an entire hat revisited}\label{hatentropy} Having argued that whichever prescription we use to make the connected saddle for $n >1$ dominate the path integral will necessarily preclude the zero-entropy saddle, we are then left with the question: what exactly is the entropy of one of the hats in these connected saddles? In the case of figure \ref{3island}, if the saddle drawn is incorrect, then we are left only with the trivial saddle with vanishing island. Thus the fine-grained entropy of region $R$ is simply the semiclassical matter entropy. But in figure \ref{2hats} we still need to extend the endpoints of region $R$ into the bulk. If they do not run across the entire universe, then where do they end? As it turns out, there exists a nontrivial QES, shown in figure \ref{2hatsqes}.
\begin{figure} \centering \includegraphics[scale = .2]{nontrivial.pdf} \caption{The impure saddle for region $R$ with $n=2$.}\label{2hatsqes} \end{figure} This can be found explicitly as an extremum of our generalized entropy functional. When $n=2$, it suffices to note that (a) there is a time-reflection $\mathbb{Z}_2$ symmetry around $\s = 0$, (b) the QES for the region $R$ in one hat has to be the same as the QES for the same region in the neighboring hat, by purity of the semiclassical state, and (c) the generalized entropy of a region $R \cup I$ is invariant under $2\pi$ shifts. Fact (a) locates the QES at $\s = 0$, whereas facts (b)-(c) locate the QES at the center of the crunching regions. For $n > 2$ we do not generally have fact (b). But we do know that the center of the crunching regions is classically extremal, so for $\varepsilon=c/\tilde{\phi}_r \ll 1$ the quantum extremal surface will be located nearby. It will still be at $\s = 0$ due to fact (a). By explicit extremization of the generalized entropy functional from a point $\{\s_1, \varphi_1\}$ to $\{\s_2, \varphi_2\}$ in the gravitating part of the spacetime, we find the quantum extremal surface is the pair of points \begin{equation} \s_1 = \s_2 = 0\,,\qquad \varphi_1 = \pi - \varepsilon, \,\,\varphi_2 = -\pi + \varepsilon\,,\qquad \varepsilon = \frac{c \cot \frac{\pi}{n}}{6 \tilde{\phi}_r n} + O(c^2/\tilde{\phi}_r^2)\,. \end{equation} To the same order in $\varepsilon = c/\tilde{\phi}_r$, the nearby black hole horizons at $\s = 0$ are located at $\varphi =\pm \pi \mp \frac{c \pi (1-1/n^2)}{24\tilde{\phi}_r}$. This means that the QES lives beyond the horizon, ensuring that the entanglement wedge of $R$ is larger than its causal past. This is represented for $n=3$ in figure \ref{nontrivial3}. Notice that given the symmetry around $\s = 0$, the growth of the wormhole was necessary to get a nontrivial QES which bounded a spacelike region. 
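As a quick consistency check of the claim that the QES lies beyond the horizon, take $n=3$: using $\cot(\pi/3) = 1/\sqrt{3}$, the QES displacement from $\varphi = \pm\pi$ is smaller than that of the horizon,
\begin{align}
\varepsilon = \frac{c \cot(\pi/3)}{18\, \tilde{\phi}_r} = \frac{c}{18\sqrt{3}\,\tilde{\phi}_r} \approx 0.032\, \frac{c}{\tilde{\phi}_r} \; < \; \frac{c \pi (1-1/9)}{24\,\tilde{\phi}_r} = \frac{c\pi}{27\,\tilde{\phi}_r} \approx 0.116\, \frac{c}{\tilde{\phi}_r}\,,
\end{align}
so the QES at $\varphi_1 = \pi - \varepsilon$ indeed sits at larger $\varphi$ than the horizon, i.e.\ beyond it.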
The existence of these saddles is a good sign, because otherwise we would not be able to ascribe an entropy to region $R$. (One option would have been to declare it to be the semiclassical entropy of region $R$, but the fact that its endpoints end at the interface of the gravitating and non-gravitating regions would have been puzzling; in particular we would have to exclude by fiat the gravitational contribution to the entropy here.) We take this as an indication that our setup is self-consistent. \begin{figure} \hspace{-10mm}\includegraphics[scale = .2]{nontrivial3.pdf} \caption{The impure saddle for region $R$ with $n=3$.}\label{nontrivial3} \end{figure} A somewhat surprising feature of our setup and this result for the entropy is that it also suggests a fine-grained entropy for a region between the two red QES's, which is \emph{entirely in the gravitationally fluctuating region of the spacetime} (here we are assuming that whatever prescription makes this saddle dominant does not freeze the region $\s < \pi/2$). In other words, we can unitarily deform the region $R \cup I$ in figure \ref{nontrivial3} downward while keeping the red endpoints fixed without changing the fine-grained entropy. This is qualitatively different from the usual scenario in AdS/CFT, where every time slice that includes a Cauchy slice of the entanglement wedge also includes a frozen region. Also, when we normally apply the island rule directly to a region in a gravitating spacetime we find that the region shrinks to zero size and vanishes once we extremize with respect to the region's boundary. In this case we have a ``floating" island which is not anchored to any boundary and does not have an auxiliary region $R$; its endpoints are quantum extremal on their own. It seems consistent to ascribe such a region a fine-grained entropy equal to its generalized entropy. Of course, even if this is possible it is related to the fact that gravity was frozen somewhere else in the spacetime. 
Note that if we froze spacetime to the past of $\s = \pi/2$ as in section \ref{sec:frozen}, then unitarily deforming $R \cup I$ downward from the hat would not lead to any surprises, since any unitary deformation of the region would include a frozen region. \section{Discussion}\label{sec:discussion} The observer-dependence of cosmic horizons makes the concept of encoding the region beyond the horizon more subtle than in the context of black holes. In particular, two observers outside of a black hole will agree on the black hole event horizon, and only one of them will be able to encode the interior. However, two observers in different places in the universe will have different cosmic horizons: if they each encode the region beyond their horizon, then those regions can overlap. If this were realized it would lead to inconsistencies with quantum mechanics. The resolution to this problem is quite simple in our setup. It depends on carefully defining the microscopic description. Our microscopic description is given by a CFT on the ``frozen" regions (i.e. the regions where the quantum effects of gravity are ignored) with a boundary condition at an initial time. The initial boundary condition is prepared by a gravitational path integral, and in this paper we have looked for various saddles which dominate this path integral. To reach a potential paradox, we considered freezing two disconnected regions, corresponding to two different observers. Now, if we do nothing to entangle these two regions in the microscopic description, then the dominant saddle which fills them in will correspond to two distinct universes, one for each frozen region. In such a situation we will not have overlapping entanglement wedges, and therefore no paradox (each observer will encode the region beyond their horizon but within their connected piece of the universe). 
However, if we entangle the two frozen regions in the microscopic description, then we can have the leading saddle be a single universe hosting both frozen regions. But entangling the two frozen regions has to be a process which references both, so in a computation of the purity, the replica wormhole which would give a pure state for either frozen region is disallowed. It is disallowed since it would require swapping one of the two frozen hats with its replica, but this would violate the procedure that entangled the two regions, e.g. the freezing procedure of section \ref{sec:frozen}. The entanglement wedge of either frozen region does not run across the entire universe; we instead have complementary entanglement wedges for each frozen region, as expected. \begin{figure} \centering \includegraphics[scale = .2]{2hats2wings.pdf}\vspace{2mm} \caption{On the left we have reproduced figure \ref{fig:twohat3dfrozen}, corresponding to the Euclidean preparation of a $4\pi$ universe with frozen regions denoted in gray. On the right we have the Euclidean preparation of the thermofield double in AdS coupled to frozen flat-space ``wing" regions. The solid black line in the middle is the time-reflection symmetric point where the Euclidean manifold is pasted to the Lorentzian manifold. These two situations are analogous.}\label{tfd} \end{figure} It is important in the situations we have analyzed that the frozen regions define the microscopic description, and therefore the entropy. This is true even if the frozen regions are somewhere else in the universe. A simple analogy is the thermofield double (TFD) in AdS, coupled to non-gravitating flat-space wings. This is prepared by the Euclidean path integral on the right-hand-side of figure \ref{tfd}. Notice that the left and right wings, which would have been disconnected in the Lorentzian manifold, are connected by a frozen region in the Euclidean manifold.
This is just as in figure \ref{fig:twohat3dfrozen} with our frozen flat-space hats, reproduced on the left-hand-side of figure \ref{tfd} (the two shield-like regions meet in the vicinity of the conical singularity in the Euclidean past). So in the computation of the entropy of a hat or wing using the island rule, the region cannot be extended to the entire universe since it will run into another frozen region. This means $S \neq 0$. In a replica computation, like in section \ref{sec:frozen}, the connected frozen regions must remain connected, disallowing the purity saddle with $S = 0$ which swaps hats/wings. In the TFD the entropy of either the left or right system is given by the black hole entropy. We saw a similar result for the entropy of a single hat in section \ref{hatentropy}. However, the two wings, or two hats, can be disentangled by unfreezing one of the two. In the TFD this can be done by inserting an end-of-the-world brane behind the horizon. Then the full microscopic description is just given by one side, and its entropy now vanishes; see the right-hand-side of figure \ref{eow}. This corresponds to ``unfreezing" the left region, in which case there is no problem with the right region encoding the interior (and what used to be the left exterior) on its own. This would be the same as if we only froze one of the two hat regions, shown on the left-hand-side of figure \ref{eow}: the entropy of the theory in the hat now vanishes, and it encodes the entire universe on its own. (The universe drawn in figure \ref{eow} has size $4\pi$ because we imagine the frozen region includes the conical singularity of particular opening angle in the Euclidean past, which fixes the size of the universe. If it were not included then the frozen hat would live in a $2\pi$ universe.) A complicating feature of our analysis was the conical singularity in Euclidean signature that prohibits a conventional preparation of the Hartle-Hawking state.
One option to deal with this, although there is not an obvious candidate, is to find a bra-ket wormhole in global-like coordinates which avoids the universe capping off in the Euclidean section. This would avoid the conical singularity. \begin{figure} \centering \includegraphics[scale = .18]{1hat1wing.pdf}\vspace{2mm} \caption{On the right we have a pure-state black hole with an end-of-the-world brane behind the horizon. This means that the single wing system (plus boundary) encodes the entire bulk, and has fine-grained entropy $S = 0$. This corresponds to unfreezing one of the two frozen regions in the TFD in figure \ref{tfd}. It is analogous to a single frozen hat region in dS, shown on the left. In this case the theory in the hat has fine-grained entropy $S = 0$ and encodes the entire universe on its own.}\label{eow} \end{figure} \subsection{Comments on quantum cosmology} The fact that our observers tend to split up into disconnected universes depends crucially on using the Hartle-Hawking wavefunction. Proposals like Vilenkin's tunneling wavefunction \cite{Vilenkin:1983xq} seem to give the inverse answer for the amplitude in these simple setups \cite{Vilenkin:1984wp, Feldbrugge:2017kzv}. In such a situation we would find a preference for our observers to be in the same connected piece of the universe. It would be interesting to analyze whether there remains a paradox in this case. Another interesting aspect of our analysis is that we see a semiclassical avatar of the picture advocated in \cite{Hartle:2010dq, Hartle:2016tpo, Aguilar-Gutierrez:2021bns}, where one can think of distinct observers as living in their own universe due to coarse-graining beyond their horizons. \subsection{Encoding our universe in a CFT$_2$?} A curious fact about our solution described in section \ref{sec:setup} is that after accounting for the back-reaction from the quantum matter, our dilaton solution actually blows up to $+\infty$ as one approaches the conical singularity. 
To see this, recall that the backreacted dilaton takes the form in global dS$_2$ coordinates \begin{align} \phi(\sigma, \varphi) = \tilde{\phi}_r \frac{\cos \varphi}{\cos \sigma} - \frac{c}{12} \left(1- \frac{1}{n^2}\right)\sigma \tan \sigma. \end{align} Continuing $\sigma \to is$ and taking $s \to \pm \infty$, we see that the dilaton actually blows up linearly in $s$ to $+\infty$. This means that gravitational effects are becoming weak near the north and south poles of the sphere. It is natural to introduce another boundary at these locations to cut off the spacetime. In particular, we could imagine cutting out small holes around the conical singularities and gluing in two semi-infinite cylinders, each ending on one of the holes. We can imagine the bulk matter CFT continues to propagate along these cylinders. This is illustrated in figure \ref{fig:CFTvac}. \begin{figure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[width=.5\linewidth]{EncodeCFTvac.png} \caption{} \label{fig:CFTvac} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[width=.75\linewidth]{EncodeCFTthermal.png} \caption{} \label{fig:CFTthermal} \end{subfigure} \setlength{\belowcaptionskip}{-20pt} \caption{In the figure on the left, we imagine attaching two semi-infinite cylinders to holes at the north and south poles of the sphere. Here we have exaggerated the size of the holes relative to the size of the sphere. This prepares the CFT matter state in the vacuum on the circle, as was considered in section \ref{sec:setup}. On the right, we imagine making the CFT state thermal. This will raise the energy in the spacetime and make the wormholes longer upon analytic continuation. The red dots denote points of maximal dilaton and the black dots denote points of minimal dilaton along the $\sigma =0$ time slice.} \label{fig:CFTencode} \end{figure} We could further imagine connecting these two cylinders together to form a portion of a torus, with some length $\tau_0$. 
Doing so modifies the stress energy in the bulk portion of the spacetime, putting the quantum fields in a thermal state with temperature set by $\tau_0$. This is also illustrated in figure \ref{fig:CFTthermal}. The resulting bulk saddle will still have a moment of time-reflection symmetry at $s=i\sigma =0$. We can then analytically continue the saddle from this moment of symmetry and the picture we get is that of multiple inflating patches connected via wormholes similar to the saddles discussed in this work, where the full universe is entangled with a 2d CFT on a circle. This is illustrated in figure \ref{fig:CFTLorentzian}. By the island rule, the microscopic entropy of the CFT is zero, since the island just includes the full closed universe. In other words, the CFT will encode certain observables in the inflating patches. This part is not surprising, and has been explored in previous work \cite{Almheiri:2019tt, Balasubramanian:2020ue}. \begin{figure} \centering \includegraphics[width=.5\textwidth]{EncodeCFTLorentzian.png} \caption{Here we illustrate the Lorentzian interpretations of the Euclidean saddles shown in figure \ref{fig:CFTencode}. The idea is that for finite $\tau_0$ these saddles describe a situation where a closed dS$_2$ universe with multiple inflating and crunching regions is encoded in a CFT$_{2}$ on a spatial circle. The dashed lines signify the thermal entanglement between the closed universe and the CFT.}\label{fig:CFTLorentzian} \end{figure} But a construction like the one above would allow us to probe subsystem encoding, i.e. to see if subregions of the CFT$_2$ encode subregions of the de Sitter universe.\footnote{What one would like is an island region that includes part of the inflating patch of the spacetime. But insofar as the endpoints of this region are quantum extremal cousins of the classically extremal cosmic horizon, one will run into issues with entanglement wedge nesting \cite{Shaghoulian:2022aa}. 
The basic issue is that the bifurcate cosmic horizon is a ``minimax" surface (minimum in time, maximum in space) as opposed to a maximin surface like the bifurcate black hole horizon. Here there would be both cosmic and black hole horizons in the closed universe and so there is hope of finding a maximin QES in the closed universe, presumably close to the black dots in figure \ref{fig:CFTLorentzian}.} This encoding is similar in spirit to what happens for the encoding of the black hole interior in the radiation after a black hole has fully evaporated and also to scenarios recently discussed in \cite{Chen:2020tes, Cooper:2018aa, Raamsdonk:2021aa, Antonini:2022aa}. The difference for us is that now we have the possibility of encoding asymptotically inflating regions in the dual CFT. This is reminiscent of proposals for ``making a universe in the lab'' by creating inflating regions behind the black hole horizon in AdS/CFT \cite{Freivogel_2006}.\footnote{The authors of \cite{Freivogel_2006} argued against the boundary CFT describing the inflating region. Their claim was that the boundary CFT had to be in a mixed state, obtained by tracing out the degrees of freedom which describe the inflating region. With only causal wedge reconstruction, this is a reasonable conclusion. But given the modern understanding of entanglement wedge reconstruction -- and in particular the encoding of ``bag-of-gold" type geometries (like an inflating region behind a black hole horizon) -- there is no obstruction to such a configuration being described by a pure state in the boundary CFT.} It would be interesting to understand this encoding in more detail and we hope to do so in the future. \bigskip\bigskip\bigskip \noindent \textbf{Acknowledgments} We would like to thank Raphael Bousso, Daniel Harlow, Tom Hartman, Thomas Hertog, Juan Maldacena, Don Marolf and Henry Maxfield for useful conversations.
ES is supported by the Simons Foundation It from Qubit collaboration (385592) and the DOE QuantISED grant DE-SC0020360. AL acknowledges support from NSF grant PHY-1911298 and Carl P. Feinberg. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.
\section{Introduction} \label{sec: introduction} Convolutional Neural Networks (CNNs) have made extraordinary progress in various computer vision tasks, with image classification as a most representative one. The trained models generally perform well on the testing data which shares similar data distribution to that of the training data. However, in many practical scenarios, drastic performance degradation is observed when applying such trained models to new domains with \textit{domain shift} \cite{torralba2011unbiased}, where the data distributions between the training and testing domains are different. Fine-tuning on labeled target data is a direct solution but is costly due to the requirement of target sample annotations. In contrast, unsupervised domain adaptation (UDA) requires only the labeled source data and \emph{unlabeled} target data to enhance the model's performance on the target domain, which has attracted increasing interest in both academia \cite{ben2007analysis, ben2010theory, zhong2020doesemix, tzeng2017adversarialadda, huang2021model, kundu2020towardsINHERITABLE} and industry \cite{wang2020differential, james2019simtoreal}. \begin{figure*}[t] \centering \includegraphics[width=1 \textwidth]{pipeline.png} \vspace{-0.3cm} \caption{\textbf{Illustration of adversarial learning based (a) \textit{Baseline} and (b) our proposed \emph{ToAlign}}. $D$ and $C$ denote domain discriminator and image classifier respectively. (a) \textit{Baseline} (\egno, DANN \cite{ganin2016domaindann}) directly aligns the target feature $\mathbf{f}^t$ with the holistic source feature $\mathbf{f}^s$. Domain alignment and image classification tasks are optimized in parallel. (b) Our proposed \emph{ToAlign}~makes the domain alignment proactively serve the classification task, where target feature $\mathbf{f}^t$ is aligned with source task-discriminative "positive" feature $\mathbf{f}^s_p$ which is obtained under the guidance of meta-knowledge induced from the classification task. 
$\odot$ denotes Hadamard product.} \label{fig: pipeline} \end{figure*} There has been a large spectrum of UDA methods. Supported by the theoretical analysis~\cite{ben2007analysis}, the overwhelming majority of methods tend to align the distributions of source and target domains. A line of works \cite{borgwardt2006integratingmmd, zellinger2017central, peng2019momentm3sda, sun2016return, sun2016deep} explicitly align the distributions based on domain discrepancy measurements, \egno, Maximum Mean Discrepancy (MMD) \cite{borgwardt2006integratingmmd}. Another line of alignment-based UDAs borrow ideas from Generative Adversarial Networks \cite{goodfellow2014generative} and use domain adversarial training to learn domain-aligned/invariant features, which dominate in the top performance methods. In the seminal work Domain Adversarial Neural Network (DANN) \cite{ganin2015dann, ganin2016domaindann}, a domain discriminator is trained to distinguish the target features from source features while a feature extractor (generator) is trained to generate domain-invariant features to fool this discriminator. Following DANN, a plethora of variants have been proposed \cite{tzeng2017adversarialadda,long2018cdane,sankaranarayanan2018generatetoadapt, chen2018reweighted,saito2018maximummcd,volpi2018adversarialfeature, liu2019transferabletat,lu2020stochastic,cui2020gradually,chen2020adversarialadla, wei2021metaalign}. It is noteworthy that the goal of alignment in UDA is to alleviate the adverse effect of domain shift to improve the \textit{classification} performance on unlabeled target data. Even though impressive progress has been made, there is a common intrinsic limitation, \textit{i}.\textit{e}., \textbf{alignment is still not deliberately designed to dedicatedly/proactively serve the final image classification task}. In many previous UDAs, as shown in Figure~\ref{fig: pipeline}~(a), the alignment task is in parallel with the ultimate classification task. 
The assumption is that learning domain-invariant features (via alignment) reduces the domain gap and thus makes the image classifier trained on source readily applicable to target \cite{ben2007analysis}. However, with alignment treated as a parallel task, there is a lack of mechanism to make it explicitly assist classification, where the alignment may contaminate the discriminative features for classification \cite{jin2020feature}. Previous works (\egno, CDAN \cite{long2018cdane}) exploit class information (\egno, predicted class probability) as a condition to the discriminator. MADA \cite{pei2018multimada} implements class-level domain alignment by applying one discriminator per class. Their purpose is to provide additional helpful information to the discriminator \cite{long2018cdane} or perform class-level alignment \cite{pei2018multimada}, but they are still short of explicitly making alignment assist classification. Some works move a step forward and investigate what features the networks should align for better adaptation. \cite{wang2019transferabletada, kurmi2019attendingCADA} focus on transferable local regions, which are selected based on the uncertainty or entropy of the domain discriminator, for alignment. However, such self-induced feature selection is still not specific to the optimization of classification task; instead, it is based on the alignment task itself. There is no guarantee that alignment positively serves the classification task. Hsu \textit{et al.}~\cite{hsu2020everyepm} carry out object centerness-aware alignment by aligning the center part of the objects to exclude the background distraction/noise for domain adaptive object detection. However, the feature in object center position could be task-irrelevant and thus is not suited for alignment. Moreover, regarding such centerness feature as alignment objective is somewhat ad-hoc, which is still not designed directly from the perspective of assisting classification. 
\begin{figure*}[t] \centering \includegraphics[width=0.82 \textwidth]{conceptual.png} \vspace{-0.2cm} \caption{Conceptual comparison between (a) previous alignment and (b) our proposed task-oriented alignment. $\{ \mathbf{f}^t\}$ and $\{ \mathbf{f}^s\}$ denote the sets of target features and source features, respectively. (a) Previous methods take each source feature as a holistic one for alignment with target features. (b) We decompose each source feature $\mathbf{f}^s$ into a task-discriminative positive feature $\mathbf{f}^s_p$ and a task-irrelevant negative feature $\mathbf{f}^s_n$, and make the target features align with the positive source features $\{ \mathbf{f}^s_p\}$ while avoiding alignment with the negative source features $\{ \mathbf{f}^s_n\}$.} \label{fig:concept} \end{figure*} \emph{We pinpoint that the selection of "right" features to achieve task-oriented alignment is important.} For classification, the essence is to train the network to extract class-discriminative features. Similarly, for UDA classification, it is also desired to assure strong discrimination of the target domain features without class label supervision. Thus, we intend to align target features to the task-discriminative source features while ignoring the task-irrelevant ones. Note that the feature of a source sample contains both task/classification-discriminative and task-irrelevant information, because the network is in general not able to perfectly suppress non-discriminative feature responses (\egno, responses unrelated to the image class or those related to other tasks such as alignment) \cite{selvaraju2017grad, chattopadhay2018gradcamplus}. Aligning target features with task-irrelevant source features would prevent alignment from serving classification and lead to poor adaptation.
Intuitively, for example, image style, which is a non-causal factor for classification, can be considered task-irrelevant information, and a bias towards such a factor in alignment may hurt the classification task. We demonstrate this by conducting experiments where only the source \textbf{t}ask-\textbf{i}rrelevant features are utilized to align with the target, \textit{i}.\textit{e}., the scheme \emph{Baseline+TiAlign} in Figure \ref{fig: acc_curve_r2c}. The performance of \emph{Baseline+TiAlign} (in purple) on the target test set drops drastically compared to the source-only method, which does not incorporate any alignment technique. This corroborates that aligning with task-irrelevant features is even harmful to the classification on the target domain. Motivated by this, in this paper, we propose an effective UDA method named \textit{\textbf{T}ask-\textbf{o}riented \textbf{Align}ment} (\emph{ToAlign}) to make the domain alignment explicitly serve classification. We achieve this by performing feature alignment guided by the meta-knowledge induced from the classification task, making the target features align with the task-discriminative source features (\textit{i}.\textit{e}., "positive" features) to avoid the interference from task-irrelevant features (\textit{i}.\textit{e}., "negative" features). Figure~\ref{fig:concept} conceptually illustrates the comparison between our proposed alignment and the previous one. Particularly, as illustrated in Figure~\ref{fig: pipeline}~(b), to obtain the suitable feature from a source sample for alignment with target samples, we leverage the classification task to guide the extraction/distillation of the task-related/discriminative feature $\mathbf{f}_p^s$ from the original feature $\mathbf{f}^s$. Correspondingly, for the domain alignment task, we enforce aligning target features with the source positive features by domain adversarial training to achieve task-oriented alignment. In this way, the domain alignment will better assist the classification task.
We summarize our main contributions as follows: \begin{itemize}[leftmargin=*,noitemsep,nolistsep] \item We pinpoint that the selection of "right" features to achieve task-oriented alignment is important for adaptation. \item We propose an effective UDA approach named \textbf{\emph{ToAlign}} which enables the alignment to explicitly serve classification. We decompose a source feature into a task-relevant/discriminative one and a task-irrelevant one under the guidance of classification meta-knowledge for performing classification-oriented alignment, which explicitly guides the network on which features should be aligned. \end{itemize} Extensive experimental results demonstrate the effectiveness of \emph{ToAlign}. \emph{ToAlign}~is generic and can be applied to different adversarial learning based UDAs to enhance their adaptation capability, which helps achieve the state-of-the-art performance with a negligible increase in training complexity and no increase in inference complexity. \section{Related Work} \label{sec: related_work} \begin{figure*}[t] \begin{minipage}[r]{0.45\textwidth} \centering \includegraphics[width=1.0\textwidth]{Acc_R2C_v1.png} \vspace{-0.6cm} \captionof{figure}{Classification accuracy on target (Rw$\rightarrow$Cl in Office-Home) for different methods. \textit{TiAlign} denotes aligning target features with \textbf{t}ask-\textbf{i}rrelevant source features.} \label{fig: acc_curve_r2c} \end{minipage} \hfill \begin{minipage}[l]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{vis_p_n_mini.png} \vspace{-0.3cm} \captionof{figure}{Visualization of task-discriminative and task-irrelevant features. The positive features generally focus on the foreground objects which provide the most discriminative information for classification, while the negative ones focus on non-discriminative background regions.
The images are sampled from Office-Home.} \label{fig: vis_p_n} \end{minipage} \vspace{-0.5cm} \end{figure*} \textbf{Unsupervised Domain Adaptation} aims to transfer the knowledge from labeled source domain(s) to an unlabeled target domain. Ben-David \textit{et al.}~\cite{ben2007analysis} theoretically reveal that learning domain-invariant representations helps make the image classifier trained on the source domain applicable to the target domain. Various works learn domain-invariant features by aligning the source and target distributions measured by some metrics \cite{borgwardt2006integratingmmd,zellinger2017central,peng2019momentm3sda,sun2016return,sun2016deep,peng2018synthetic}, or by domain adversarial learning \cite{tzeng2017adversarialadda,long2018cdane, sankaranarayanan2018generatetoadapt, chen2018reweighted,saito2018maximummcd,volpi2018adversarialfeature, liu2019transferabletat,zhang2019domainsymnet,tang2020dada,lu2020stochastic,cui2020gradually, chen2020adversarialadla, wei2021metaalign, cao2018dida, kang2018deepattentionalign}. The latter has been overwhelmingly popular in recent years owing to its superiority in dealing with distribution problems \cite{goodfellow2014generative}. Note that our proposed method is designed to enhance the capability of the widely used domain adversarial learning based approaches. In domain adversarial learning based approaches (\egno, DANN~\cite{ganin2015dann, ganin2016domaindann}), in general, a domain discriminator is trained to distinguish the source domain from the target domain, while a feature extractor is trained to learn domain-invariant features. Many variants of DANN have been proposed \cite{long2018cdane, cui2020gradually, tang2020dada, chen2020adversarialadla, cui2020hda, zhang2019domainsymnet, li2020dcan, bermudez2020multibranchuda}. CDAN \cite{long2018cdane} further conditions the discriminator on the image class information conveyed in the classifier predictions.
MADA \cite{pei2018multimada} implements class-wise alignment with multiple discriminators. GSDA \cite{hu2020unsupervisedgsda} performs class-, group- and domain-wise alignments simultaneously, where the three types of alignment are enforced to be consistent in their gradients for more precise alignment. HDA \cite{cui2020hda} leverages domain-specific representations as heuristics to obtain domain-invariant representations from a heuristic search perspective. CMSS \cite{yang2020curriculumCMSS} exploits Curriculum Learning (CL) \cite{bengio2009curriculum} to align target samples with dynamically selected source samples to exploit the different transferability of the source samples. However, in these methods, the domain alignment is designed as a task in parallel with the image classification task. It does not explicitly take serving classification as its mission, and such alignment may result in a loss of discriminative information. Jin \textit{et al.}~\cite{jin2020feature} remedy the loss of discriminative information caused by alignment via incorporating a restoration module. Wei \textit{et al.}~\cite{wei2021metaalign} pinpoint that alignment and classification are not well coordinated in optimization, where they may contradict each other. They thus propose to use meta-learning to coordinate their optimization directions. In this paper, to make alignment explicitly serve classification, we propose a task-oriented alignment. Guided by the classification meta-knowledge, task-discriminative sub-features are selected for alignment. Different from \cite{wei2021metaalign}, we investigate what features should be aligned to assist classification and intend to provide more interpretable alignment. We are the first to perform \emph{task-oriented alignment} by decomposing each source feature into a task-discriminative feature and a task-irrelevant one, explicitly guiding the network on which sub-features should be aligned.
Note that Huang \textit{et al.}~\cite{huang2020udareid} propose to decouple features into domain-invariant and domain-specific features, where the former are aligned for unsupervised person re-identification. \cite{peng2019domainDADA, cai2019learningDSR} exploit the VAE framework with several complex losses to perform the disentanglement from the perspective of domain and semantics simultaneously, and only use domain-invariant semantics for inference, leaving domain-specific but task-related information underexplored. In contrast to focusing on the domain level, our decomposition strategy focuses on the task level, guided by the image classification task, where we further enable domain alignment on the task-discriminative features to proactively serve image classification. \section{Task-Oriented Alignment for UDA} \label{sec: toalign for uda} Unsupervised domain adaptation (UDA) for classification aims to train a classification model on a labeled source domain image set $\mathbf{X}_s$ and an unlabeled target domain image set $\mathbf{X}_t$ to obtain high classification accuracy on a target domain test set. Most popular adversarial learning based UDAs attempt to align the features of the source and target domains to alleviate the domain gap and thereby improve the classification performance on the target domain. As mentioned before, aligning based on holistic features is sub-optimal, since such alignment does not explicitly serve classification. To address this, as illustrated in Figure~\ref{fig: pipeline}~(b), we propose an effective task-oriented alignment to explicitly make the alignment serve classification. Particularly, we propose to decompose a source sample feature, based on the classification meta-knowledge, into a task-discriminative one that should be aligned and a task-irrelevant one that should be ignored.
\emph{Then, we perform alignment between the target features and the positive source features, which is consistent with the essence of the classification task, \textit{i}.\textit{e}., focusing on discriminative features}. In Sec. \ref{sec: recap}, to be self-contained, we briefly introduce adversarial learning based UDAs. We answer the question of what features should be aligned to better serve classification and introduce our task-oriented feature decomposition and alignment in Sec. \ref{sec: feature_decomp}. \subsection{Recap of Domain Adversarial UDAs} \label{sec: recap} Domain adversarial learning based UDAs typically train a domain discriminator $D$ to distinguish which domain (\textit{i}.\textit{e}., source or target) a sample belongs to, and adversarially train a feature extractor $G$ to fool the discriminator $D$ in order to learn domain-invariant feature representations. The network is also trained under the supervision of image classification on the labeled source samples. Particularly, $D$ is optimized to minimize the domain classification loss $\mathcal{L}_{D}$ (\textit{i}.\textit{e}., a binary cross entropy loss). Meanwhile, $G$ is optimized to maximize the domain classification loss $\mathcal{L}_{D}$ and minimize the image classification loss $\mathcal{L}_{cls}$ (\textit{i}.\textit{e}., a cross entropy loss): \begin{equation} \begin{split} & \operatorname*{argmin}_{D} \mathcal{L}_{D}, \\ & \operatorname*{argmin}_{G} \mathcal{L}_{cls} - \mathcal{L}_{D}.\\ \end{split} \end{equation} To achieve adversarial training, a gradient reversal layer (GRL)~\cite{ganin2015dann, ganin2016domaindann} connecting $G$ and $D$ is usually used; it multiplies the gradient from $D$ by a negative constant during the back-propagation to $G$.
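To make the adversarial recipe above concrete: the GRL flips gradient signs between $D$ and $G$, and $\mathcal{L}_D$ is a standard binary cross-entropy over the two domains. A minimal NumPy sketch of these two ingredients (the function names and the constant \texttt{lam} are illustrative, not from the paper):

```python
import numpy as np

def grl_backward(grad_output, lam=1.0):
    """Gradient reversal layer (GRL): identity in the forward pass;
    in the backward pass the gradient from D is multiplied by -lam,
    so minimizing L_D w.r.t. D simultaneously maximizes it w.r.t. G."""
    return -lam * grad_output

def domain_loss(d_source, d_target, eps=1e-12):
    """Binary cross-entropy domain loss
    L_D = -E[log D(G(x_s))] - E[log(1 - D(G(x_t)))],
    given discriminator outputs in (0, 1) for the two mini-batches."""
    d_source = np.clip(d_source, eps, 1 - eps)
    d_target = np.clip(d_target, eps, 1 - eps)
    return -np.mean(np.log(d_source)) - np.mean(np.log(1.0 - d_target))

# An uncertain discriminator (all outputs 0.5) yields L_D = 2 log 2;
# the reversed gradient simply flips sign before reaching G.
loss = domain_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
rev = grl_backward(np.array([0.1, -0.2]))
```

In practice the sign flip is realized as a custom backward function inside the autograd engine, so that a single backward pass updates $D$ and $G$ with opposite objectives.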
$\mathcal{L}_{D}$ is typically defined as \cite{ganin2016domaindann, long2018cdane, cui2020gradually}: \begin{equation} \begin{split} \mathcal{L}_{D} (\mathbf{X}_s, \mathbf{X}_t)= -\mathbb{E}_{\mathbf{x}_s\sim \mathbf{X}_s}\left[\log(D(G(\mathbf{x}_s)))\right] -\mathbb{E}_{\mathbf{x}_t\sim \mathbf{X}_t}\left[\log(1-D(G(\mathbf{x}_t)))\right]. \label{eqn:l_d_naive} \end{split} \end{equation} \subsection{Task-oriented Feature Decomposition and Alignment} \label{sec: feature_decomp} In adversarial learning based UDAs, the holistic feature of a source or target sample ingested by $D$ in general contains both task/classification-discriminative information and task-irrelevant information. Intuitively, aligning the task-irrelevant features would not effectively reduce the domain gap of the task-discriminative features and thus brings no obvious benefit for the classification task. \textbf{Mistakenly aligning the target features with the source task-irrelevant features would hurt the discrimination power of the target features}. We also experimentally confirm this in Figure \ref{fig: acc_curve_r2c}, \textit{i}.\textit{e}., aligning with task-irrelevant features (\textit{TiAlign}, line in purple) drastically reduces the classification accuracy on the target domain. \textbf{Therefore, we propose to decompose a holistic feature of each source sample into a task-discriminative feature and a task-irrelevant feature to enable the task-oriented alignment with the target features.} Particularly, we softly select/re-weight (based on Grad-CAM~\cite{selvaraju2017grad}) the feature vector $\mathbf{f}^s$ of a source sample to obtain the task-discriminative feature $\mathbf{f}_p^s$ that is discriminative for identifying the ground-truth class, which we refer to as the \emph{positive} feature. Correspondingly, the task-irrelevant feature $\mathbf{f}_n^s$ can be obtained simultaneously, which we refer to as the \emph{negative} feature.
\noindent\textbf{Task-Oriented Feature Decomposition.} \label{sec: decomposition} Grad-CAM \cite{zhou2016learningCAM, selvaraju2017grad, chattopadhay2018gradcamplus} is a widely used technique to localize the most important features for classification in a convolutional neural network model. As analyzed in \cite{zhou2016learningCAM, selvaraju2017grad, chattopadhay2018gradcamplus, simonyan2014deepinside}, the gradients (\textit{w.r.t.} the feature for classification) of the final predicted score corresponding to the ground-truth class convey the task-discriminative information, which identifies the relevant features to recognize the image class correctly. It is noteworthy that such task-discriminative information is, in general, highly related (but not limited to) the foreground object in the classification task. \emph{In this work, motivated by Grad-CAM, we propose to use the gradients of the predicted score corresponding to the ground-truth class as the attention weights to obtain the task-discriminative features}. As illustrated in Figure~\ref{fig: pipeline}, we obtain a feature map $F\in\mathbb{R}_{+}^{H\times W \times M}$ (\textit{i}.\textit{e}., a tensor of non-negative real numbers, with height $H$, width $W$, and $M$ channels) from the final convolutional block (with ReLU layer) of the feature extractor. After spatial-wise global average pooling (GAP), we have a feature vector ${\mathbf{f}} = pool(F) \in \mathbb{R}^M$. The logits for all classes are predicted via the classifier $C(\cdot)$. Based on the response $C({\mathbf{f}})$, we can derive the gradient $\mathbf{w}^{cls}\in\mathbb{R}^{M}$ of $y^k$ \textit{w.r.t.} $\mathbf{f}$: \begin{equation} \mathbf{w}^{cls}= \frac{\partial {y^k}}{\partial \mathbf{f}}, \end{equation} where $y^k$ is the predicted score corresponding to the ground-truth class $k$.
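Since the classifier $C$ used in this work is a single fully connected layer (see the implementation details), $C(\mathbf{f}) = W\mathbf{f} + \mathbf{b}$ and the gradient $\mathbf{w}^{cls} = \partial y^k / \partial \mathbf{f}$ is exactly the $k$-th row of $W$, so no extra backward pass is strictly needed. A NumPy sketch with a finite-difference sanity check (all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 5                      # feature dimension, number of classes
W = rng.normal(size=(K, M))      # weights of the single-FC-layer classifier C
b = rng.normal(size=K)
f = rng.normal(size=M)           # pooled feature vector f
k = 2                            # ground-truth class index

def logits(feat):
    """C(f): class scores of the linear classifier."""
    return W @ feat + b

# Analytic gradient w^cls = dy^k/df: for a linear classifier, row k of W.
w_cls = W[k]

# Finite-difference sanity check of dy^k/df.
eps = 1e-6
num = np.array([(logits(f + eps * np.eye(M)[m])[k]
                 - logits(f - eps * np.eye(M)[m])[k]) / (2 * eps)
                for m in range(M)])
assert np.allclose(w_cls, num, atol=1e-4)
```

For a deeper classifier head, $\mathbf{w}^{cls}$ would instead be obtained by one backward pass of $y^k$ with respect to $\mathbf{f}$, as in Grad-CAM.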
As analyzed in \cite{selvaraju2017grad, chattopadhay2018gradcamplus, simonyan2014deepinside}, the gradient $\mathbf{w}^{cls}$ conveys the channel-wise importance information of feature $\mathbf{f}$ for classifying the sample into its ground-truth class $k$. We draw inspiration from Grad-CAM, which uses $\mathbf{w}^{cls}$ to modulate the feature map channel-wise to find the classification-discriminative features. Similarly, modulated with $\mathbf{w}^{cls}$, we can obtain the task-discriminative (\textit{i}.\textit{e}., positive) feature as: \begin{equation} \mathbf{f}_{p}= \mathbf{w}^{cls}_p\odot\mathbf{f} = s \mathbf{w}^{cls}\odot\mathbf{f}, \end{equation} where $\odot$ represents the Hadamard product and the attention weight vector $\mathbf{w}^{cls}_p = s \mathbf{w}^{cls}$, with $s\in \mathbb{R}_+$ an adaptive non-negative parameter that modulates the energy $\mathcal{E}(\mathbf{f}_p)=||\mathbf{f}_p||_2^2$ of $\mathbf{f}_p$ such that $\mathcal{E}(\mathbf{f}_p)=\mathcal{E}(\mathbf{f})$: \begin{equation} s = \sqrt{ \frac{||\mathbf{f}||_2^2}{||\mathbf{w}^{cls}\odot\mathbf{f}||_2^2} } = \sqrt{ \frac{\sum_{m=1}^M f_m^2}{\sum_{m=1}^M (w^{cls}_mf_m)^2} }. \label{eqa: scale} \end{equation} Motivated by the counterfactual analysis in \cite{selvaraju2017grad}, the task-irrelevant (\textit{i}.\textit{e}., negative) feature can be represented as $\mathbf{f}_{n}=-\mathbf{w}^{cls}_p\odot\mathbf{f}$, where $-\mathbf{w}^{cls}_p$ suppresses the task-discriminative channels, since the task-discriminative channels (with larger values in $\mathbf{w}^{cls}_p$) correspond to ones with smaller values in $-\mathbf{w}^{cls}_p$. To better understand and validate the discriminativeness of the positive and negative features, we visualize the spatial maps $F$ with channels modulated by $\mathbf{w}^{cls}$ and $-\mathbf{w}^{cls}$ following \cite{selvaraju2017grad, zhou2016learningCAM}.
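The decomposition itself reduces to a channel-wise re-weighting with the energy-preserving scale $s$ of Eq.~(\ref{eqa: scale}). A minimal NumPy sketch (assuming $\mathbf{f}$ and $\mathbf{w}^{cls}$ are already computed; all names are illustrative):

```python
import numpy as np

def decompose(f, w_cls):
    """Split a source feature f into a task-discriminative (positive)
    feature f_p = s * w_cls * f (elementwise) and a task-irrelevant
    (negative) feature f_n = -f_p, with the scale s chosen so that
    the energy is preserved: ||f_p||^2 = ||f||^2."""
    s = np.sqrt(np.sum(f ** 2) / np.sum((w_cls * f) ** 2))
    f_p = s * w_cls * f
    f_n = -f_p
    return f_p, f_n

rng = np.random.default_rng(1)
f = np.abs(rng.normal(size=16))   # non-negative pooled feature (after ReLU + GAP)
w_cls = rng.normal(size=16)       # channel-wise gradient weights dy^k/df
f_p, f_n = decompose(f, w_cls)
assert np.isclose(np.sum(f_p ** 2), np.sum(f ** 2))   # energy preserved
```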
As shown in Figure \ref{fig: vis_p_n}, the positive information is more related to the foreground objects that provide the discriminative information for the classification task, while the negative one is more associated with the non-discriminative background regions. \noindent\textbf{Task-oriented Domain Alignment.} As discussed above, we expect the domain alignment to explicitly serve the final classification task. Given the source task-discriminative features obtained based on the classification meta-knowledge, we can guide the target features to be aligned with the source task-discriminative features $\mathbf{f}_{p}$ through different domain adversarial learning based alignment methods \cite{ganin2015dann, ganin2016domaindann, cui2020hda}. The procedure is almost the same as that in the UDAs discussed in Sec. \ref{sec: recap}, except that the input source feature $\mathbf{f}^s$ to the final domain discriminator is replaced by the positive feature $\mathbf{f}^s_p$ of this source sample. Thus, the domain classification loss is defined with a small modification on Eq. (\ref{eqn:l_d_naive}): \begin{equation} \begin{split} \mathcal{L}_{D} (\mathbf{X}_s, \mathbf{X}_t)= -\mathbb{E}_{\mathbf{x}_s\sim \mathbf{X}_s}\left[\log(D(G^p(\mathbf{x}_s)))\right] -\mathbb{E}_{\mathbf{x}_t\sim \mathbf{X}_t}\left[\log(1-D(G(\mathbf{x}_t)))\right], \label{l_d_pos} \end{split} \end{equation} where $G^p(\mathbf{x}_s) = \mathbf{f}_{p}^s$ denotes the positive feature of source sample $\mathbf{x}_s$. \textbf{Understanding from the Meta-knowledge Perspective.} To better understand why \emph{ToAlign}~works well, we analyse it from the perspective of meta-learning with meta-knowledge. In an adversarial UDA framework, the image classification task and the domain alignment task can be considered a \textit{meta-train} task $\mathcal{T}^{tr}$ and a \textit{meta-test} task $\mathcal{T}^{te}$, respectively. \emph{ToAlign}~actually introduces knowledge communication from $\mathcal{T}^{tr}$ to $\mathcal{T}^{te}$. In the meta-training stage, we can obtain the prior/meta-knowledge $\phi^{tr}$ of $\mathcal{T}^{tr}$. Without effective communication between $\mathcal{T}^{tr}$ and $\mathcal{T}^{te}$, the optimization of $\mathcal{T}^{te}$ may contradict that of $\mathcal{T}^{tr}$, considering that they have different optimization goals. To improve the knowledge communication from $\mathcal{T}^{tr}$ to $\mathcal{T}^{te}$, certain meaningful prior/meta-knowledge $\phi^{tr}$ is helpful for a more effective $\mathcal{T}^{te}|_{\phi^{tr}}$. A typical implementation of passing meta-knowledge from $\mathcal{T}^{tr}$ to $\mathcal{T}^{te}$ is based on gradients \cite{liu2018metamulti, finn2017maml, li2017learningmldg, wei2021metaalign, li2020onlineMetaMSDA}, \textit{i}.\textit{e}., $\nabla\mathcal{T}^{tr}$, which provides knowledge of $\mathcal{T}^{tr}$. Other mechanisms, \egno, leveraging a parameter regularizer in the form of weight decay, are also exploited~\cite{balaji2018metareg, zhao2020knowledgeasprior}.
In our \emph{ToAlign}, instead of encoding the meta-knowledge $\phi^{tr}$ into the gradients \textit{w.r.t.} the parameters, we use $\mathcal{T}^{tr}$ to learn/derive attention weights for identifying $\mathcal{T}^{tr}$-related sub-features in the feature space and then pass such prior/meta-knowledge $\phi^{tr}$ to $\mathcal{T}^{te}$ to make the meta-test task $\mathcal{T}^{te}_{\phi^{tr}}$ adapt its optimization based on $\phi^{tr}$. In this work, we are motivated by the reliable human prior knowledge of \emph{what} should be aligned across domains to better assist the classification task for UDA (\textit{i}.\textit{e}., task/classification-discriminative features), while excluding the interference from task-irrelevant ones. Accordingly, in our design, we obtain the prior/meta-knowledge for identifying task-discriminative features from the classification task (meta-train) and apply it to the domain alignment task (meta-test) to achieve task-\textit{oriented} alignment. \section{Experiments} \label{sec: experiments} To evaluate the effectiveness of \textit{ToAlign}, we conduct comprehensive experiments under three domain adaptation settings, \textit{i}.\textit{e}., single-source unsupervised domain adaptation (SUDA), multi-source unsupervised domain adaptation (MUDA), and semi-supervised domain adaptation (SSDA). For SSDA, domain adaptation is performed from a labeled source domain to a \textit{partially} labeled target domain \cite{donahue2013semissda}. \subsection{Datasets and Implementation Details} \label{sec: details} \textbf{Datasets.} We use two commonly used benchmark datasets (\textit{i}.\textit{e}., Office-Home~\cite{venkateswara2017deep-officehome} and VisDA-2017~\cite{peng2017visda}) for SUDA and a large-scale dataset DomainNet~\cite{peng2019momentm3sda} for MUDA and SSDA.
1) \begin{table*}[t] \renewcommand\arraystretch{1.5} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}>{\columncolor{white}[0pt][\tabcolsep]}l@{}|cccccccccccc>{\columncolor{white}[\tabcolsep][0pt]}c@{}} \toprule Method& Ar$\rightarrow$Cl & Ar$\rightarrow$Pr & Ar$\rightarrow$Rw & Cl$\rightarrow$Ar & Cl$\rightarrow$Pr & Cl$\rightarrow$Rw & Pr$\rightarrow$Ar & Pr$\rightarrow$Cl & Pr$\rightarrow$Rw & Rw$\rightarrow$Ar & Rw$\rightarrow$Cl & Rw$\rightarrow$Pr & Avg \\ \hline Source-Only \cite{he2016deep} & 34.9 & 50.0 & 58.0 & 37.4 & 41.9 & 46.2 & 38.5 & 31.2 & 60.4 & 53.9 & 41.2 & 59.9 & 46.1 \\ \rowcolor{Gray}MCD(CVPR'18) \cite{saito2018maximummcd} &48.9 &68.3 &74.6 &61.3 &67.6 &68.8 &57.0 &47.1 &75.1 &69.1 &52.2 &79.6 &64.1 \\ CDAN(NeurIPS'18) \cite{long2018cdane} & 50.7 & 70.6 & 76.0 & 57.6 & 70.0 & 70.0 & 57.4 & 50.9 & 77.3 & 70.9 & 56.7 & 81.6 & 65.8 \\ \rowcolor{Gray}ALDA(AAAI'20) \cite{chen2020adversarialadla} & 53.7 & 70.1 & 76.4 & 60.2 & 72.6 & 71.5 & 56.8 & 51.9 & 77.1 & 70.2 & 56.3 & 82.1 & 66.6 \\ SymNets(CVPR'19) \cite{zhang2019domainsymnet} & 47.7 & 72.9 & 78.5 & 64.2 & 71.3 & 74.2 & 63.6 & 47.6 & 79.4 & 73.8 & 50.8 & {82.6} & 67.2 \\ \rowcolor{Gray}TADA(AAAI'19) \cite{wang2019transferabletada} & 53.1 & 72.3 & 77.2 & 59.1 & 71.2 & 72.1 & 59.7 & 53.1 & 78.4 & 72.4 & 60.0 & 82.9 & 67.6 \\ MDD(ICML'19) \cite{zhang2019bridgemdd} & 54.9 & 73.7 & 77.8 & 60.0 & 71.4 &71.8 & 61.2 & 53.6 & 78.1 & 72.5 & 60.2 & 82.3 & 68.1 \\ \rowcolor{Gray} BNM(CVPR'20) \cite{cui2020towardsbnm} & 56.2 & 73.7 & 79.0 & 63.1 & 73.6 & 74.0 & 62.4 & 54.8 & 80.7 & 72.4 & 58.9 & 83.5 & 69.4 \\ GSDA(CVPR'20) \cite{hu2020unsupervisedgsda} & \textbf{61.3} & 76.1 & 79.4 & 65.4 & 73.3 & 74.3 & 65.0 & 53.2 & 80.0 & 72.2 & 60.6 & 83.1 & 70.3 \\ \rowcolor{Gray}GVB(CVPR'20) \cite{cui2020gradually} & 57.0 & 74.7 & 79.8 & 64.6 & 74.1 & 74.6 & 65.2 & 55.1 & 81.0 & 74.6 & 59.7 & 84.3 & 70.4 \\ E-Mix(AAAI'21) \cite{zhong2020doesemix} & 57.7 & 76.6 & 79.8 & 63.6 & 74.1 & 75.0 & 63.4 & 56.4
& 79.7 & 72.8 & \textbf{62.4} & \textbf{85.5} & 70.6\\ \rowcolor{Gray} MetaAlign(CVPR'21) \cite{wei2021metaalign} & 59.3 & 76.0 & 80.2 & 65.7 & 74.7 & 75.1 & 65.7 & 56.5 & 81.6 & 74.1 & 61.1 & 85.2 & 71.3 \\ \hline \hline DANNP \cite{wei2021metaalign} & 54.2 & 70.0 & 77.6 & 62.3 & 72.4 & 73.1 & 61.3 & 52.7 & 80.0 & 72.0 & 56.8 & 83.1 & 67.9 \\ \rowcolor{Gray} DANNP+ToAlign & ~~$56.8_\uparrow$ & ~~$74.8_\uparrow$ & ~~$79.9_\uparrow$ & ~~$64.0_\uparrow$ & ~~$73.9_\uparrow$ & ~~$75.3_\uparrow$ & ~~$63.8_\uparrow$ & ~~$53.7_\uparrow$ & ~~$81.1_\uparrow$ & ~~$73.1_\uparrow$ & ~~$58.2_\uparrow$ & ~~$84.0_\uparrow$ & ~~$69.9_\uparrow$ \\ \hline HDA(NeurIPS'20) \cite{cui2020hda} & 56.8 & 75.2 & 79.8 & 65.1 & 73.9 & 75.2 & 66.3 & 56.7 & 81.8 & \textbf{75.4} & 59.7 & 84.7 & 70.9 \\ \rowcolor{Gray} HDA+ToAlign & ~~$57.9_\uparrow$ & ~~$\textbf{76.9}_\uparrow$ & ~~$\textbf{80.8}_\uparrow$ & ~~$\textbf{66.7}_\uparrow$ & ~~$\textbf{75.6}_\uparrow$ & ~~$\textbf{77.0}_\uparrow$ & ~~$\textbf{67.8}_\uparrow$ & ~~$\textbf{57.0}_\uparrow$ & ~~$\textbf{82.5}_\uparrow$ & ~~$75.1_\downarrow$ & ~~$60.0_\uparrow$ & ~~$84.9_\uparrow$ & ~~$\textbf{72.0}_\uparrow$ \\ \thickhline \end{tabular} } \captionof{table}{ Accuracy (\%) of different UDAs on Office-Home with ResNet-50 as backbone. Best in bold.} \label{table: uda_office-home} \end{center} \end{table*} \textbf{Office-Home} \cite{venkateswara2017deep-officehome} consists of images from four different domains: Art (Ar), Clipart (Cl), Product (Pr), and Real-World (Rw). \begin{table*}[t] \small \renewcommand\arraystretch{1.5} \begin{center} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{@{}>{\columncolor{white}[0pt][\tabcolsep]}l@{}|c c c c c c >{\columncolor{white}[\tabcolsep][0pt]}c@{}} \hline Methods & Clipart & Infograph & Painting & Quickdraw & Real & Sketch & Avg. 
\\ \hline Source-Only \cite{he2016deep} & 47.6$_{\pm0.52}$ & 13.0$_{\pm0.41}$ & 38.1$_{\pm0.45}$ & 13.3$_{\pm0.39}$ & 51.9$_{\pm0.85}$ & 33.7$_{\pm0.54}$ & 32.9$_{\pm0.54}$ \\ \rowcolor{Gray}ADDA(CVPR'17)~\cite{tzeng2017adversarialadda} & 47.5$_{\pm0.76}$ & 11.4$_{\pm0.67}$ & 36.7$_{\pm0.53}$ & 14.7$_{\pm0.50}$ & 49.1$_{\pm0.82}$ & 33.5$_{\pm0.49}$ & {32.2}$_{\pm0.63}$ \\ DANN(ICML'15)~\cite{ganin2015dann} & 45.5$_{\pm0.59}$ & 13.1$_{\pm0.72}$ & 37.0$_{\pm0.69}$ & 13.2$_{\pm0.77}$ & 48.9$_{\pm0.65}$ & 31.8$_{\pm0.62}$ & 32.6$_{\pm0.68}$ \\ \rowcolor{Gray}DCTN(CVPR'18)~\cite{xu2018deepDCTN} & 48.6$_{\pm0.73}$ & 23.5$_{\pm0.59}$ & 48.8$_{\pm0.63}$ & 7.2$_{\pm0.46}$ & 53.5$_{\pm0.56}$ & 47.3$_{\pm0.47}$ & 38.2$_{\pm0.57}$ \\ MCD(CVPR'18)~\cite{saito2018maximummcd} & 54.3$_{\pm0.64}$ & 22.1$_{\pm0.70}$ & 45.7$_{\pm0.63}$ & 7.6$_{\pm0.49}$ & 58.4$_{\pm0.65}$ & 43.5$_{\pm0.57}$ & 38.5$_{\pm0.61}$ \\ \rowcolor{Gray}{M$^{3}$SDA(ICCV'19)~\cite{peng2019momentm3sda}} & 57.2$_{\pm0.98}$ & 24.2$_{\pm1.21}$ & 51.6$_{\pm0.44}$ & 5.2$_{\pm0.45}$ & 61.6$_{\pm0.89}$ & 49.6$_{\pm0.56}$ & 41.5$_{\pm0.74}$ \\ {{M}$^{3}${SDA}-$\beta$(ICCV'19)~\cite{peng2019momentm3sda}} & 58.6$_{\pm0.53}$ & 26.0$_{\pm0.89}$ & 52.3$_{\pm0.55}$ & 6.3$_{\pm0.58}$ & 62.7$_{\pm0.51}$ & 49.5$_{\pm0.76}$ & 42.6$_{\pm0.64}$ \\ \rowcolor{Gray}MDAN(NeurIPS'18)~\cite{zhao2018adversarialMDAN} & 60.3$_{\pm0.41}$ & 25.0$_{\pm0.43} $ & 50.3$_{\pm0.36}$ & 8.2$_{\pm1.92}$ & 61.5$_{\pm0.46}$ & 51.3$_{\pm0.58}$ & 42.8$_{\pm0.69}$ \\ MLMSDA(Arxiv'20)~\cite{li2020mutualMLMSDA} & 61.4$_{\pm0.79}$ & 26.2$_{\pm0.41}$ & 51.9$_{\pm0.20}$ & \textbf{19.1}$_{\pm0.31}$ & 57.0$_{\pm1.04}$ & 50.3$_{\pm0.67}$ & 44.3$_{\pm0.57}$ \\ \rowcolor{Gray}GVBG(CVPR'20)~\cite{cui2020gradually} & 61.5$_{\pm0.44}$ & 23.9$_{\pm 0.71}$ & 54.2$_{\pm0.46}$ & 16.4$_{\pm 0.57}$ & 67.8$_{\pm0.98}$ & 52.5$_{\pm0.62}$ & 46.0$_{\pm0.63}$ \\ CMSS(ECCV'20) \cite{yang2020curriculumCMSS} & 64.2$_{\pm0.18}$ & \textbf{28.0}$_{\pm0.20}$ & 53.6$_{\pm0.39}$ & 
16.0$_{\pm0.12}$ & 63.4$_{\pm0.21}$ & 53.8$_{\pm0.35}$ & 46.5$_{\pm0.24}$ \\ \rowcolor{Gray}HDA(NeurIPS'20)~\cite{cui2020hda} & 63.6$_{\pm0.35}$ & 25.9$_{\pm0.16}$ & 56.1$_{\pm0.38}$ & 16.6$_{\pm0.54}$ & 69.1$_{\pm0.42}$ & 54.3$_{\pm0.26}$ & 47.6$_{\pm0.40}$ \\ \hline \hline Baseline & 66.4$_{\pm0.24}$ & 24.7$_{\pm0.16}$ & 57.3$_{\pm0.10}$ & 11.5$_{\pm0.17}$ & 69.2$_{\pm0.21}$ & 55.2$_{\pm0.13}$ & 47.3$_{\pm0.19}$ \\ \rowcolor{Gray}Baseline+ToAlign & $_\uparrow$\textbf{67.0}$_{\pm0.22}$~~ & $_\uparrow$25.9$_{\pm0.20}$~~ & $_\uparrow$\textbf{57.8}$_{\pm0.32}$~~ & $_\uparrow$12.2$_{\pm0.14}$~~ & $_\uparrow$\textbf{70.7}$_{\pm0.25}$~~ & $_\uparrow$\textbf{56.0}$_{\pm0.18}$~~ & $_\uparrow$\textbf{48.2}$_{\pm0.22}$~~ \\ \hline \end{tabular} } \captionof{table}{ Accuracy (\%) of different MUDA methods on DomainNet with ResNet-101 as backbone. Best in bold.} \label{table: msda_domainnet} \end{center} \vspace{-0.5cm} \end{table*} Each domain contains 65 object categories in office and home environments. Following the typical settings \cite{cui2020gradually, cui2020hda, wei2021metaalign, long2018cdane}, we evaluate methods on one-source to one-target domain adaptation, resulting in 12 adaptation cases in total. 2) \textbf{VisDA-2017} \cite{peng2017visda} is a synthetic-to-real dataset for domain adaptation with over 280,000 images across 12 categories, where the source images are synthetic and the target images are real collected from MS COCO dataset \cite{lin2014microsoftcoco}. 3) \textbf{DomainNet} \cite{peng2019momentm3sda} is a large-scale dataset containing about 600,000 images across 345 categories, which span 6 domains with large domain gap: Clipart (C), Infograph (I), Painting (P), Quickdraw (Q), Real (R), and Sketch (S). 
For MUDA, following \begin{figure*}[t] \hspace{0.02\textwidth} \begin{minipage}[h]{0.38\textwidth} \centering \footnotesize \resizebox{0.95\textwidth}{!}{ \renewcommand\arraystretch{1.4} \begin{tabular}{@{}c@{}|c|c} \hline \multicolumn{2}{c|}{Method} & Acc. \\ \hline \multicolumn{2}{c|}{DANNP} & 67.9 \\ \hline \multirow{7}{*}{\makecell{DANNP+ToAlign}} & $s=$1 & 59.7 \\ &$s=$8 & 68.8 \\ &$s=$16 & 69.7 \\ &$s=$64 & 70.0 \\ &$s=$128 & 69.8 \\ & Adaptive $s$ & 69.9 \\ \hline \end{tabular} } \captionof{table}{Ablation study on the influence of $s$ in Eq. \ref{eqa: scale}. } \label{table: ablation_scale} \end{minipage} \hfill \begin{minipage}[h]{0.5\textwidth} \centering \resizebox{\textwidth}{!}{ \renewcommand\arraystretch{1.7} \begin{tabular}{@{}l|c|c|c@{}} \hline Method & Time/ms & GPU mem./MB & Acc./\%\\ \hline DANNP & 550 & 6,660 & 67.9\\ \hline \makecell[l]{DANNP+\\MetaAlign\cite{wei2021metaalign}} & 1,000 & 10,004 & 69.5 \\ \hline \makecell[l]{DANNP+\\ToAlign} & 590 & 6,668 & 69.9 \\ \hline \end{tabular} } \vspace{0.3cm} \captionof{table}{Training complexity comparison (on GTX TITAN X GPU) in terms of computational time (of one iteration) and GPU memory for a mini-batch with batch size 32.} \label{table: computation_cost} \end{minipage} \hspace{0.02\textwidth} \end{figure*} the settings in \cite{peng2019momentm3sda, yang2020curriculumCMSS, cui2020hda, li2020onlineMetaMSDA, venkat2021yourSImpAl}, we evaluate methods on five-source to one-target domain adaptation, resulting in 6 MUDA cases in total. For SSDA, we take the typical protocol in \cite{hospedales2020meta, saito2019semiMME, cui2020hda}, where there are 7 SSDA cases conducted on the 4 sub-domains (\textit{i}.\textit{e}., C, R, P and S) with 126 sub-categories selected from DomainNet. All methods are evaluated under the one-shot/three-shot setting respectively, where besides unlabeled samples, one/three sample(s) per class in the target domain are available during training.
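The adaptation protocols above can be enumerated explicitly. The following sketch (based only on the domain names, not on the authors' code) builds the 6 five-source-to-one-target MUDA cases on DomainNet and the 12 ordered SUDA pairs on the four Office-Home domains:

```python
# Illustrative enumeration of the evaluation cases described in the text.
domains = ["Clipart", "Infograph", "Painting", "Quickdraw", "Real", "Sketch"]

# MUDA: five sources -> one target, one case per choice of the target (6 cases).
muda_cases = [([d for d in domains if d != t], t) for t in domains]
assert len(muda_cases) == 6

# SUDA on Office-Home: one source -> one target over 4 domains (12 ordered pairs).
office_home = ["Art", "Clipart", "Product", "RealWorld"]
suda_cases = [(s, t) for s in office_home for t in office_home if s != t]
assert len(suda_cases) == 12

for sources, target in muda_cases:
    print("+".join(d[0] for d in sources), "->", target[0])
```

The SSDA protocol then keeps only the C, R, P, and S sub-domains and 126 sub-categories, yielding the 7 cases listed in the SSDA tables.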
\textbf{Implementation Details.} We apply our \textit{ToAlign} on top of two different baseline schemes: \textit{DANNP}~\cite{cui2020gradually, wei2021metaalign} and \textit{HDA}~\cite{cui2020hda}. \textbf{\textit{DANNP}} is an improved variant of the classical adversarial learning based adaptation method DANN \cite{ganin2015dann, ganin2016domaindann}, where the domain discriminator $D$ is conditioned on the predicted class probabilities. \textbf{\textit{HDA}} is a state-of-the-art adversarial training based method which leverages the domain-specific representations as heuristics to obtain domain-invariant representations. We use the ResNet-50 \cite{he2016deep} pre-trained on ImageNet \cite{krizhevsky2012imagenet} as the backbone for SUDA, while using ResNet-101 and ResNet-34 for MUDA and SSDA respectively. Following ~\cite{wei2021metaalign, long2018cdane, cui2020hda}, the image classifier $C$ is composed of one fully connected layer. The discriminator $D$ consists of three fully connected layers with inserted dropout and ReLU layers. We follow \cite{zhang2019domainsymnet} to adopt an annealing strategy to set the learning rate $\eta$, \textit{i}.\textit{e}., $\eta_t = \frac{\eta_0}{(1+\gamma p)^\tau}$, where $p$ indicates the progress of training that increases linearly from 0 to 1, $\gamma=10$, and $\tau=0.75$. The initial learning rate $\eta_0$ is set to $10^{-3}$, $3\times 10^{-4}$, $3\times 10^{-4}$, and $10^{-3}$ for SUDA on Office-Home, SUDA on VisDA-2017, MUDA on DomainNet, and SSDA on DomainNet, respectively. All reported experimental results are the average of three runs with different seeds. \subsection{Ablation Study} \textbf{Effectiveness of \emph{ToAlign}~on Different Baselines.} Our proposed \emph{ToAlign}~is generic and applicable to different domain adversarial training based baselines, where we focus on what features to align instead of the alignment methods. The last four rows in Table~\ref{table: uda_office-home} show the ablation comparisons on Office-Home.
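The annealing schedule for $\eta$ quoted above is simple enough to sketch directly (illustrative code, not the authors' implementation):

```python
# Learning-rate annealing: eta_t = eta_0 / (1 + gamma * p) ** tau,
# with p the training progress increasing linearly from 0 to 1.
def annealed_lr(eta0: float, step: int, total_steps: int,
                gamma: float = 10.0, tau: float = 0.75) -> float:
    p = step / total_steps  # training progress in [0, 1]
    return eta0 / (1.0 + gamma * p) ** tau

eta0 = 1e-3                           # initial LR used for SUDA on Office-Home
print(annealed_lr(eta0, 0, 1000))     # eta_0 at the start of training
print(annealed_lr(eta0, 1000, 1000))  # eta_0 / 11 ** 0.75 at the end
```

The schedule decays smoothly by roughly one order of magnitude over training, a common choice for adversarial adaptation.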
Our \emph{ToAlign}~improves the accuracy of baseline \textit{DANNP} and \textit{HDA} by \textbf{2.0\%} and \textbf{1.1\%} respectively. As can be seen from the results in Table \ref{table: uda_office-home}, Table \ref{table: msda_domainnet}, Table \ref{table: ssda_domainnet_one_shot} and Table \ref{table: ssda_domainnet}, our \emph{ToAlign}~can consistently bring significant improvement over the baseline schemes under different domain adaptation settings, \textit{i}.\textit{e}., SUDA, MUDA and SSDA. \emph{ToAlign}~enables the domain alignment task to proactively serve the classification task, resulting in more effective feature alignment for image classification. \textbf{Effectiveness of Different Ways to Obtain Positive Features.} As mentioned in Sec. \ref{sec: decomposition}, we use $\mathbf{w}^{cls}_p = s \mathbf{w}^{cls}$ as the attention weight (which conveys the classification prior/meta-knowledge) to derive positive feature $\mathbf{f}_p$, where $s$ is a parameter to modulate the energy of $\mathbf{f}_p$. We study the influence of $s$ under the setting of Rw$\rightarrow$Cl on Office-Home for our scheme \emph{DANNP+}\emph{ToAlign}~and illustrate the results in Table~\ref{table: ablation_scale}. As discussed around Eq.~(\ref{eqa: scale}), we can use an adaptively calculated $s$, which achieves 2\% improvement over the baseline on target test data. Moreover, we can treat $s$ as a preset hyper-parameter. We found that the performance drops drastically if $s$ is too small (\egno, $s=1$). That is because the energy of the source positive feature will get too weak when $s$ gets too small (\egno, the source feature $\mathbf{f}$'s average energy $\mathcal{E}(\mathbf{f})$ is about 800; if $s=1$, the source positive feature's average energy $\mathcal{E}(\mathbf{f}_p)$ is about 2). Then, it would be ineffective to align the target with the source positive features. 
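The construction of the positive feature and the dependence of its energy on $s$ can be illustrated with a toy sketch. The feature dimensions and the energy-matching rule for the adaptive $s$ below are our assumptions for illustration, not the paper's exact Eq.~\ref{eqa: scale}:

```python
# Toy sketch of f_p = (s * w_cls) ⊙ f and its energy (sum of squared entries).
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(2048) * 1.25        # pooled source feature (assumed shape)
w_cls = rng.random(2048) / 2048    # classification attention weight, small scale

def positive_feature(f, w_cls, s):
    return (s * w_cls) * f         # channel-wise re-weighting

def energy(x):
    return float(np.sum(x ** 2))

# One plausible adaptive choice: pick s so that f_p keeps the energy of f.
s_adaptive = np.sqrt(energy(f) / energy(w_cls * f))
f_p = positive_feature(f, w_cls, s_adaptive)
assert np.isclose(energy(f_p), energy(f))

# With s = 1 the positive feature's energy collapses, matching the ablation
# observation that too small an s makes the alignment target too weak.
print(energy(f), energy(positive_feature(f, w_cls, 1.0)))
```

This mirrors the ablation: the alignment only sees a usable signal when the positive feature's energy is restored to the scale of the original feature.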
When $s$ is larger than 16, the performance significantly outperforms the baseline and approaches the result of using adaptive $s$. As an optional design choice, we could transform the weight $\mathbf{w}^{cls}$ with certain activation function $\sigma(\cdot)$ such as Sigmoid or Softmax followed by a best selected scaling factor $s$, \textit{i}.\textit{e}., $\mathbf{w}^{cls}_p = s \sigma(\mathbf{w}^{cls})$. We found the results (\textit{i}.\textit{e}., 69.6/69.7 for Sigmoid/Softmax) are close to that without activation function. We reckon that what is more important is the relative importance among the elements in $\mathbf{w}^{cls}$. For simplicity, we finally take the adaptive $s$ (cf. Eq.~\ref{eqa: scale}) for all experiments. \begin{table*}[t] \centering \footnotesize \resizebox{0.72\textwidth}{!}{ \renewcommand\arraystretch{1.3} \begin{tabular}{@{}>{\columncolor{white}[0pt][\tabcolsep]}l@{}|c c c c c c c>{\columncolor{white}[\tabcolsep][0pt]}c@{}} \hline Methods & R$\rightarrow$C & R$\rightarrow$P & P$\rightarrow$C & C$\rightarrow$S & S$\rightarrow$P & R$\rightarrow$S & P$\rightarrow$R & Avg. 
\\ \hline Source-Only \cite{he2016deep} & 55.6 & 60.6 & 56.8 & 50.8 & 56.0 & 46.3 & 71.8 & 56.9 \\ DANN(ICML'15)~\cite{ganin2015dann} & 58.2 & 61.4 & 56.3 & 52.8 & 57.4 & 52.2 & 70.3 & 58.4 \\ \rowcolor{Gray}ADR(ICLR'18)~\cite{saito2018ADR} & 57.1 & 61.3 & 57.0 & 51.0 & 56.0 & 49.0 & 72.0 & 57.6 \\ CDAN(NeurIPS'18)~\cite{long2018cdane} & 65.0 & 64.9 & 63.7 & 53.1 & 63.4 & 54.5 & 73.2 & 62.5 \\ \rowcolor{Gray}ENT(NeurIPS'05)~\cite{grandvalet2005semiENT} & 65.2 & 65.9 & 65.4 & 54.6 & 59.7 & 52.1 & 75.0 & 62.6 \\ MME(ICCV'19)~\cite{saito2019semiMME} & 70.0 & 67.7 & 69.0 & 56.3 & 64.8 & 61.0 & 76.1 & 66.4 \\ CANN(Arxiv'20)~\cite{qin2020oppositeCANN} & 72.7 & 70.3 & 69.8 & 60.5 & 66.4 & 62.7 & 77.3 & 68.5 \\ \rowcolor{Gray}GVBG(CVPR'20)~\cite{cui2020gradually} & 70.8 & 65.9 & 71.1 & 62.4 & 65.1 & \textbf{67.1} & 76.8 & 68.4 \\ \hline \hline HDA(NeurIPS'20) \cite{cui2020hda} & 72.4 & 71.0 & 71.0 & \textbf{63.6} & 68.8 & 64.2 & 79.9 & 70.0 \\ \rowcolor{Gray}HDA+ToAlign & ~~\textbf{73.0}$_\uparrow$ & ~~\textbf{72.0}$_\uparrow$ & ~~\textbf{71.7}$_\uparrow$ & ~~63.0$_\downarrow$ & ~~\textbf{69.3}$_\uparrow$ & ~~64.6$_\uparrow$ & ~~\textbf{80.8}$_\uparrow$ & ~~\textbf{70.6}$_\uparrow$ \\ \hline \end{tabular} } \captionof{table}{Accuracy (\%) of different one-shot SSDA methods on DomainNet with ResNet-34 as backbone. Best in bold.} \label{table: ssda_domainnet_one_shot} \end{table*} \begin{table*}[t] \centering \footnotesize \resizebox{0.72\textwidth}{!}{ \renewcommand\arraystretch{1.3} \begin{tabular}{@{}>{\columncolor{white}[0pt][\tabcolsep]}l@{}|c c c c c c c>{\columncolor{white}[\tabcolsep][0pt]}c@{}} \hline Methods & R$\rightarrow$C & R$\rightarrow$P & P$\rightarrow$C & C$\rightarrow$S & S$\rightarrow$P & R$\rightarrow$S & P$\rightarrow$R & Avg. 
\\ \hline Source-Only \cite{he2016deep} & 60.0 & 62.2 & 59.4 & 55.0 & 59.5 & 50.1 & 73.9 & 60.0 \\ \rowcolor{Gray}ADR(ICLR'18)~\cite{saito2018ADR} & 60.7 & 61.9 & 60.7 & 54.4 & 59.9 & 51.1 & 74.2 & 60.4 \\ CDAN(NeurIPS'18)~\cite{long2018cdane} & 69.0 & 67.3 & 68.4 & 57.8 & 65.3 & 59.0 & 78.5 & 66.5 \\ \rowcolor{Gray}ENT(NeurIPS'05)~\cite{grandvalet2005semiENT} & 71.0 & 69.2 & 71.1 & 60.0 & 62.1 & 61.1 & 78.6 & 67.6 \\ MME(ICCV'19)~\cite{saito2019semiMME} & 72.2 & 69.7 & 71.7 & 61.8 & 66.8 & 61.9 & 78.5 & 68.9 \\ \rowcolor{Gray}MetaMME(ECCV'20) \cite{li2020onlineMetaMSDA} & 73.5 & 70.3 & 72.8 & 62.8 & 68.0 & 63.8 & 79.2 & 70.1 \\ GVBG(CVPR'20)~\cite{cui2020gradually} & 73.3 & 68.7 & 72.9 & 65.3 & 66.6 & \textbf{68.5} & 79.2 & 70.6 \\ \rowcolor{Gray}CANN(Arxiv'20)~\cite{qin2020oppositeCANN} & 75.4 & 71.5 & 73.2 & 64.1 & 69.4 & 64.2 & 80.8 & 71.2 \\ \hline \hline HDA(NeurIPS'20) \cite{cui2020hda} & 74.5 & 71.5 & 73.9 & 65.9 & 70.1 & 65.9 & 81.9 & 71.8 \\ \rowcolor{Gray}HDA+ToAlign & ~~\textbf{75.7}$_\uparrow$ & ~~\textbf{72.9}$_\uparrow$ & ~~\textbf{75.6}$_\uparrow$ & ~~\textbf{66.2}$_\uparrow$ & ~~\textbf{71.1}$_\uparrow$ & ~~66.4$_\uparrow$ & ~~\textbf{83.0}$_\uparrow$ & ~~\textbf{73.0}$_\uparrow$ \\ \hline \end{tabular} } \captionof{table}{Accuracy (\%) of different three-shot SSDA methods on DomainNet with ResNet-34 as backbone. Best in bold.} \label{table: ssda_domainnet} \end{table*} \subsection{Comparison with the State-of-the-arts} \textbf{Single Source Unsupervised Domain Adaptation (SUDA).} We incorporate our \emph{ToAlign}~into the recent state-of-the-art UDA method \textit{HDA} \cite{cui2020hda}, denoted as \textit{HDA+ToAlign}. Table \ref{table: uda_office-home} shows the comparisons with the previous state-of-the-art methods on Office-Home. \textit{HDA+ToAlign} outperforms all the previous methods and achieves the state-of-the-art performance. 
It is noteworthy that \textit{HDA+ToAlign} achieves the best adaptation results on almost all the one-source to one-target adaptation cases thanks to the effective feature alignment for classification. The results on VisDA-2017 can be found in the Appendix, where \textit{HDA+ToAlign} outperforms \textit{HDA} by 0.9\%. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{vis_target.png} \vspace{-0.2cm} \captionof{figure}{Visualization of the feature response maps on target test images. First row: Art of Office-Home. Second row: Painting of DomainNet.} \label{fig: vis_target} \end{figure*} \textbf{Multi-source Unsupervised Domain Adaptation (MUDA).} Table \ref{table: msda_domainnet} shows the results on DomainNet, where all the methods take ResNet-101 as the feature extractor. We build our \textit{Baseline} based on \textit{HDA}~\cite{cui2020hda}. For simplicity, we replace the multi-class domain discriminator in the original \textit{HDA} by a two-class one as in \cite{tzeng2017adversarialadda, ganin2015dann, yang2020curriculumCMSS}. Note that CMSS \cite{yang2020curriculumCMSS} selects suitable source samples for alignment while our \emph{ToAlign}~selects the task-discriminative sub-feature of each sample for task-oriented alignment. Compared with \textit{Baseline}, \emph{ToAlign}~brings about 0.9\% improvement and helps to achieve the best performance on this more challenging dataset. \textbf{Semi-supervised Domain Adaptation (SSDA).} Table \ref{table: ssda_domainnet_one_shot} and Table \ref{table: ssda_domainnet} show the results on one-shot and three-shot SSDA respectively, where all the methods use ResNet-34 as the backbone. To compare with previous methods, we apply \emph{ToAlign}~on top of \textit{HDA}. \textit{HDA+ToAlign} outperforms \textit{HDA} by 0.6\%/1.2\% for the one-/three-shot settings, and surpasses all previous SSDA methods.
\subsection{Complexity} \label{sec: complexity} In Table \ref{table: computation_cost}, we compare the training complexity and performance of \emph{ToAlign}~with the baseline \emph{DANNP}, and with \emph{DANNP+MetaAlign} \cite{wei2021metaalign}, which incorporates meta-learning to coordinate the optimization of domain alignment and image classification. In contrast, inspired by the prior knowledge of what feature should be aligned to serve the classification task, we distill such meta-knowledge from the classification task and explicitly pass it to the alignment task for classification-oriented alignment, eschewing complex optimization. Compared with the baseline, \emph{ToAlign}~introduces negligible additional computational cost~(only 7\%) and occupies almost the same GPU memory as the baseline, which is much smaller than that of \emph{DANNP+MetaAlign}, which almost doubles the computational cost due to its complex meta-optimization. Thanks to our explicit design which makes domain alignment effectively serve the classification task, our \emph{ToAlign}~achieves superior performance to \emph{MetaAlign}. \subsection{Feature Visualization} \begin{comment} We visualize the learned source (marked by red) and target (marked by blue) feature representations (\textit{i}.\textit{e}., $\mathbf{f}_s$ and $\mathbf{f}_t$) using t-SNE \cite{maaten2008visualizingtsne} for different methods in Figure \ref{fig: tsne}. Figure \ref{fig: tsne}~(a) shows the embedded features of the Source-Only method where no adaptation technique is used. We can see that the samples are very scattered. In comparison, the samples for \emph{HDA}~\cite{cui2020hda} (see Figure \ref{fig: tsne}~(b)) and our \emph{HDA+ToAlign}~(see Figure~\ref{fig: tsne}~(c)) form more compact clusters, where the clusters of ours are more compact and the target samples are located closer to the source samples than in HDA.
\end{comment} We visualize the target feature response maps $F$ (which will be pooled to be the input of the image classifier) of the \emph{Baseline} (DANNP) and \emph{ToAlign}~in Figure~\ref{fig: vis_target}. \textit{Baseline} sometimes focuses on the background features which are useless to the image classification task, since it aligns the holistic features without considering the discriminativeness of different channels/sub-features. Thanks to our task-oriented alignment, in \emph{ToAlign}, the features with higher responses are in general related to task-discriminative features, which is more consistent with human perception. More results can be found in the Appendix. \section{Conclusion} \label{sec: conclusion} In this paper, we study what features should be aligned across domains for more effective unsupervised domain adaptive image classification. To make the domain alignment task proactively serve the classification task, we propose an effective task-oriented alignment (\emph{ToAlign}). We explicitly decompose a feature in the source domain into a task-related feature that should be aligned and a task-irrelevant one that should be ignored, under the guidance of the meta-knowledge induced from the classification task itself. Extensive experiments on various datasets demonstrate the effectiveness of our \emph{ToAlign}. In our future work, we will extend \emph{ToAlign}~to tasks beyond image classification, \egno, object detection and segmentation. \begin{ack} This work was supported in part by the National Key Research and Development Program of China 2018AAA0101400 and NSFC under Grant U1908209, 61632001, and 62021001. \end{ack} {\small \bibliographystyle{abbrv}
\section*{Introduction} The current knowledge of the nature of dark matter is scarce. However, the cumulative evidence seems to favor the scenario of dark matter as a non-interacting form of matter instead of some modified gravity theory. This is true in particular when considering phenomena such as cluster collisions (e.g. Refs.~\cite{Ref-DM-Bullet,Ref-DM-Cluster2}). Our ignorance of the physical behavior of dark matter implies a lack of knowledge of its features as a source of gravity. In particular, it is unknown whether or not the spin tensor of dark matter vanishes. The issue is relevant since a non-vanishing spin tensor is a source of torsion, and torsion requires going beyond General Relativity (GR). The closest framework to General Relativity is the Einstein-Cartan-Sciama-Kibble (ECSK) theory of gravity. There are many other alternative theories of gravity with torsion, but ECSK is probably the simplest one that includes spinning matter and torsion. The ignorance regarding the physical nature of dark matter is in sharp contrast with the knowledge of the behavior of Standard Model matter. For instance, from the Yang-Mills (YM) Lagrangian \begin{equation} \mathcal{L}_{\mathrm{YM}}=-\frac{1}{4}F^{A}{}_{\mu\nu}F^{B\mu\nu}\,\mathrm{tr}\left( \boldsymbol{T}_{A}\boldsymbol{T}_{B}\right) , \end{equation} it is straightforward to see that the spin tensor of Yang-Mills bosons vanishes. Therefore, Yang-Mills bosons are not a source of torsion, and there is no Yang-Mills boson-torsion interaction. In contrast, the Standard Model fermions have a non-vanishing spin tensor, and therefore they are a source of torsion. They should interact with torsion, but the effect is so feeble that it is hard to foresee any particle-physics experiment capable of detecting it (see Chap. 8.4 of Ref.~\cite{Ref-SUGRA-Van-Proeyen}). Torsion does not interact with matter at a classical level (see Ref.~\cite{Ref-Hehl-GravProbeB}), nor with electromagnetic phenomena.
For instance, regardless of background torsion, classical point particles should follow torsionless geodesics, and electromagnetic waves should travel through torsionless null geodesics. To Standard Model matter, torsion is \textquotedblleft dark.\textquotedblright\ Perhaps the only realistic way of detecting torsion could be through precise measurement of the polarization of gravitational waves (see Refs.~\cite{Ref-Nos-2019-GW-Polarization} and~\cite{Ref-Nos-2019-GW-Torsion}). Even further, due to interaction and decoherence, Standard Model baryons are highly localized, and they form astrophysical structures. In the context of ECSK theory, torsion is not able to propagate in a vacuum (in glaring contrast to the behavior of Riemannian curvature). Therefore, given both the granular nature of the baryonic matter in the current epoch of the Universe and the fact that torsion vanishes in the vacuum, it seems incorrect to associate an effective non-vanishing spin tensor to Standard Model matter on cosmological scales in modern times. In other words, it seems unrealistic (see Ref.~\cite{Ref-Anti-Weyssenhof}) to consider Standard Model baryons as a spin fluid on a cosmological scale: the effective spin tensor of a gas of galaxies vanishes on large scales. In contrast, the spin tensor of Standard Model matter is a relevant source of torsion in a Universe filled with a high-density plasma of Standard Model fermions. That is the case of bounce models of the very early Universe (see Ref.~\cite{Ref-Poplawski-Big-Bounce}). In this model, the torsion created by the high-density fermion plasma gives rise to inflation-like behaviors at very early times.
Even more, this picture of dark matter fits well with its distribution being broader and less localized than that of Standard Model matter. Therefore, if dark matter has a non-vanishing spin tensor, it seems natural to expect that it could give rise to torsion on cosmological scales. The torsion created through this mechanism would be as dark as its source: Standard Model matter would not be able to interact with it. When moving these \textquotedblleft dark torsion\textquotedblright\ terms to the right-hand side of the field equations, they behave just as an extra (and dark) source of standard torsionless Riemannian curvature. From an observational point of view, it is possible to measure only the Riemannian curvature and not the torsion. Therefore, in this scenario, the observed gravitational dark matter effects correspond to the ones created by the \textquotedblleft bare\textquotedblright\ dark matter plus the torsional \textquotedblleft dark dress\textquotedblright\ it creates through its spin tensor. The current article explores the idea of how \textquotedblleft dark torsion\textquotedblright\ could amplify the effects of a small amount of dark matter in a cosmological setting. Given the disparity between the amount of dark matter and Standard Model baryons in the Universe, a mechanism such as this one may seem of interest. Section~1 briefly reviews ECSK gravity and shows how torsion amplifies the effects of \textquotedblleft bare dark matter\textquotedblright,\ creating a higher effective energy density. This total torsion-dressed density would correspond to the observed dark matter density instead of the bare piece. In the case of Standard Model fermions, the canonical approach is to describe their spin tensor as a Weyssenhof fluid. However, given the lack of dark matter self-interaction, this Ansatz does not seem correct. In that Section, we offer a different Ansatz for the spin tensor of dark matter using symmetry and dimensional analysis arguments.
Sections~1.1 and 2 use the generalized Friedmann equations to analyze the cosmological consequences of torsion and its dark matter amplification effect. Section~3 studies the thermodynamical effects of the torsional dress of dark matter. Finally, in Section~\ref{Sec_TheEnd} we present some conclusions and possible further works. \section{Dark Matter and Dark Torsion} \label{Sec_DM-DT} There are many works in the context of cosmology using alternative theories of gravity involving a non-vanishing torsion (\cite{Ref-Poplawski-Big-Bounce,Ref-Pasmatsiou,Ref-Kranas,Ref-Cabral,Ref-Magueijo,Ref-Alexander,Ref-Nos-2018-CosmoHorndsk}). The present work focuses on the most straightforward approach, i.e., ECSK theory. It is also the closest to standard GR. The idea is to have a taste of some of the consequences of a non-vanishing spin tensor for dark matter in the simplest context before considering more exotic approaches. Let us consider a four-dimensional spacetime with $\left( -,+,+,+\right) $ signature described by the Einstein--Cartan geometry, i.e., the metric $g_{\mu\nu}$ and the connection $\Gamma_{\mu\nu}^{\lambda}$ are independent degrees of freedom. The ECSK action principle corresponds to \begin{equation} \mathcal{S}=\int\sqrt{\left\vert g\right\vert }\mathrm{d}^{4}x\left( \mathcal{L}_{\mathrm{G}}+\mathcal{L}_{\mathrm{b}}+\mathcal{L}_{\mathrm{DM}}\right) , \label{Eq_Action} \end{equation} where we are using units $c=8\pi G=k_{\mathrm{B}}=1$. In Eq.~(\ref{Eq_Action}), $\mathcal{L}_{\mathrm{b}}$ stands for the Lagrangian for baryonic matter and $\mathcal{L}_{\mathrm{DM}}$ corresponds to an unknown Lagrangian for dark matter.
The gravity Lagrangian $\mathcal{L}_{\mathrm{G}}$ corresponds to the standard Einstein--Hilbert term \`{a} la Palatini, i.e., without imposing the torsionless condition (and therefore with the metric and the connection as independent degrees of freedom) \begin{equation} \mathcal{L}_{\mathrm{G}}\left( g,\Gamma,\partial\Gamma\right) =\frac{1}{2}R\left( g,\Gamma,\partial\Gamma\right) -\Lambda. \end{equation} Here $R=g^{\sigma\nu}R^{\mu}{}_{\sigma\mu\nu}$ is the generalization of the Ricci scalar constructed from the generalized Riemann tensor (or Lorentz curvature) \begin{equation} R^{\rho}{}_{\sigma\mu\nu}=\partial_{\mu}\Gamma_{\nu\sigma}^{\rho}-\partial_{\nu}\Gamma_{\mu\sigma}^{\rho}+\Gamma_{\mu\lambda}^{\rho}\Gamma_{\nu\sigma}^{\lambda}-\Gamma_{\nu\lambda}^{\rho}\Gamma_{\mu\sigma}^{\lambda}, \end{equation} where $\Gamma_{\nu\sigma}^{\rho}$ is a general connection (not necessarily the Christoffel one). The action principle, Eq.~(\ref{Eq_Action}), may seem general. However, it is fair to remark that the Lagrangian choice of Eq.~(\ref{Eq_Action}) assumes minimal coupling between dark matter, baryons, and gravity, and it does not include torsional terms such as the Holst term. Non-minimal couplings with gravitational terms are sources of torsion, even for scalar bosonic fields~\cite{Ref-Nos-2017-Horndeski} and in cosmological settings~\cite{Ref-Nos-2018-CosmoHorndsk}. Similarly, non-minimal couplings within the Standard Model piece give rise to axions, which may be a promising dark matter candidate. Therefore, it is worth remembering that Eq.~(\ref{Eq_Action}) corresponds to the simplest ECSK case and that there are many other more exotic choices.
The antisymmetric part of the connection $\Gamma_{\mu\nu}^{\lambda}$ defines the torsion tensor as \begin{equation} T^{\lambda}{}_{\mu\nu}=\Gamma_{\mu\nu}^{\lambda}-\Gamma_{\nu\mu}^{\lambda}, \end{equation} and the difference between the general connection $\Gamma_{\mu\nu}^{\lambda}$ and the canonical Christoffel connection $\mathring{\Gamma}_{\mu\nu}^{\lambda}=\left( 1/2\right) g^{\lambda\rho}\left( \partial_{\mu}g_{\nu\rho}+\partial_{\nu}g_{\mu\rho}-\partial_{\rho}g_{\mu\nu}\right) $ is given by \begin{equation} \Gamma_{\mu\nu}^{\lambda}-\mathring{\Gamma}_{\mu\nu}^{\lambda}=K^{\lambda}{}_{\nu\mu}, \end{equation} where the right-hand side corresponds to the contorsion\footnote{It seems there is no agreement in the literature on the name of this tensor. Some authors call it \textquotedblleft contortion,\textquotedblright\ while others use \textquotedblleft contorsion.\textquotedblright\ We have chosen to use the latter one because it sounds closer to torsion. The word \textquotedblleft contortion\textquotedblright\ may also be confused with a twisting motion.} tensor \begin{equation} K_{\mu\nu\lambda}=\frac{1}{2}\left( T_{\nu\mu\lambda}-T_{\mu\nu\lambda}+T_{\lambda\mu\nu}\right) . \label{Eq_K=T} \end{equation} It is possible to decompose the generalized curvature in terms of the contorsion as \begin{equation} R^{\alpha\beta}{}_{\mu\nu}=\mathring{R}^{\alpha\beta}{}_{\mu\nu}+\mathring{\nabla}_{\mu}K^{\alpha\beta}{}_{\nu}-\mathring{\nabla}_{\nu}K^{\alpha\beta}{}_{\mu}+K^{\alpha}{}_{\lambda\mu}K^{\lambda\beta}{}_{\nu}-K^{\alpha}{}_{\lambda\nu}K^{\lambda\beta}{}_{\mu}, \label{Eq_R+DK+K2} \end{equation} where $\mathring{R}^{\alpha\beta}{}_{\mu\nu}$ is the canonical torsionless Riemann tensor in terms of the Christoffel connection $\mathring{\Gamma}_{\mu\nu}^{\lambda}$, and $\mathring{\nabla}_{\mu}$ is the standard torsionless covariant derivative in terms of it.
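A quick numerical check (illustrative, using the identity metric so that index position is immaterial) confirms that the contorsion defined above is antisymmetric in its first two indices and reproduces the torsion through $T^{\lambda}{}_{\mu\nu}=K^{\lambda}{}_{\nu\mu}-K^{\lambda}{}_{\mu\nu}$:

```python
# Sanity check of the torsion <-> contorsion relations with random components.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4, 4))
T = A - np.swapaxes(A, 1, 2)  # torsion: T[l, m, n] = -T[l, n, m]

# Contorsion: K_{mu nu la} = (T_{nu mu la} - T_{mu nu la} + T_{la mu nu}) / 2
K = 0.5 * (np.einsum('nml->mnl', T) - T + np.einsum('lmn->mnl', T))

# Metric compatibility makes the contorsion antisymmetric in its first two indices...
assert np.allclose(K, -np.swapaxes(K, 0, 1))

# ...and the connection difference recovers the torsion:
# T_{la mu nu} = K_{la nu mu} - K_{la mu nu}
assert np.allclose(T, np.einsum('lnm->lmn', K) - K)
print("contorsion consistency checks passed")
```

This is merely a component-level consistency check of the definitions, not part of the derivation itself.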
The metric equations of motion are given by \begin{equation} R_{\mu\nu}^{+}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=\mathcal{T}_{\mu\nu}^{\left( \mathrm{b}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }, \label{Eq_Field_Metric} \end{equation} where $R_{\mu\nu}^{+}$ is the symmetric part of the generalized Ricci tensor\footnote{In the case of non-vanishing torsion, the generalized Ricci tensor has an antisymmetric part given by $R_{\mu\nu}^{-}=\frac{1}{2}\left( R_{\mu\nu}-R_{\nu\mu}\right) =-\frac{1}{2}\left( \nabla_{\lambda}T^{\lambda}{}_{\mu\nu}+\nabla_{\nu}T^{\lambda}{}_{\lambda\mu}-\nabla_{\mu}T^{\lambda}{}_{\lambda\nu}+T^{\lambda}{}_{\rho\lambda}T^{\rho}{}_{\mu\nu}+T^{\rho}{}_{\lambda\mu}T^{\lambda}{}_{\rho\nu}-T^{\rho}{}_{\lambda\nu}T^{\lambda}{}_{\rho\mu}\right) .$} and $\mathcal{T}_{\mu\nu}^{\left( \mathrm{b}\right) }$ and $\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }$ are the stress-energy tensors associated with $\mathcal{L}_{\mathrm{b}}$ and $\mathcal{L}_{\mathrm{DM}}$. The affine equations of motion are given by \begin{equation} T_{\lambda\mu\nu}-g_{\lambda\mu}T^{\rho}{}_{\rho\nu}+g_{\lambda\nu}T^{\rho}{}_{\rho\mu}=\sigma_{\lambda\mu\nu}^{\left( \mathrm{b}\right) }+\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }, \label{Eq_Field_Affine} \end{equation} where $\sigma_{\lambda\mu\nu}^{\left( \mathrm{b}\right) }=-\sigma_{\lambda\nu\mu}^{\left( \mathrm{b}\right) }$ and $\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }=-\sigma_{\lambda\nu\mu}^{\left( \mathrm{DM}\right) }$ are the spin tensors\footnote{The spin tensor is the variation of the matter Lagrangian with respect to the connection, in the same way as the stress-energy tensor is the variation of the matter Lagrangian with respect to the metric. The spin tensor of classical matter (e.g., dust) vanishes, but the spin tensor of a fermionic particle does not.
For instance, the spin tensor of an electron is proportional to its axial current.} associated with $\mathcal{L}_{\mathrm{b}}$ and $\mathcal{L}_{\mathrm{DM}}$. We would like to end this brief review of ECSK pointing out that in general $\nabla^{\mu}\left( R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\right) \neq0$ and therefore the right-hand side of Eq.~(\ref{Eq_Field_Metric}) is no longer \textquotedblleft conserved\textquotedblright. It is possible to write down a genuine conservation law using Eq.~(\ref{Eq_R+DK+K2}) to move all the torsional terms to the right-hand side \begin{equation} \mathring{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\mathring{R}+\Lambda g_{\mu\nu}=\mathcal{T}_{\mu\nu}^{\left( \mathrm{b}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }, \label{Eq_TorsionRightHandSide} \end{equation} where $\mathring{R}_{\mu\nu}$ is the standard torsionless Ricci tensor and $\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }$ is the effective stress-energy tensor for torsion given by \begin{align} \mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) } & =g_{\mu\nu}\left( \mathring{\nabla}_{\alpha}K^{\alpha\rho}{}_{\rho}+\frac{1}{2}\left[ K^{\alpha}{}_{\lambda\alpha}K^{\lambda\rho}{}_{\rho}-K^{\alpha}{}_{\lambda\rho}K^{\lambda\rho}{}_{\alpha}\right] \right) +\nonumber\\ & +\frac{1}{2}\left( \mathring{\nabla}_{\nu}K^{\alpha}{}_{\mu\alpha}+\mathring{\nabla}_{\mu}K^{\alpha}{}_{\nu\alpha}+K^{\alpha}{}_{\lambda\mu}K^{\lambda}{}_{\nu\alpha}+K^{\alpha}{}_{\lambda\nu}K^{\lambda}{}_{\mu\alpha}-\left[ \mathring{\nabla}_{\lambda}+K^{\alpha}{}_{\lambda\alpha}\right] \left[ K^{\lambda}{}_{\mu\nu}+K^{\lambda}{}_{\nu\mu}\right] \right) .
\label{Eq_T_eff_torsion} \end{align} Doing this, the \textquotedblleft conservation law\textquotedblright\ takes the form \begin{equation} \mathring{\nabla}^{\mu}\left( \mathcal{T}_{\mu\nu}^{\left( \mathrm{b}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }\right) =0. \end{equation} As mentioned in the Introduction, the baryon spin tensor $\sigma_{\lambda\mu\nu}^{\left( \mathrm{b}\right) }$ and the torsion associated with it could have been relevant under the extremely high fermion densities of the very early Universe~\cite{Ref-Poplawski-Big-Bounce}. However, in current times $\sigma_{\lambda\mu\nu}^{\left( \mathrm{b}\right) }=0$ should be an excellent approximation for any cosmological purpose. Considering $\sigma_{\lambda\mu\nu}^{\left( \mathrm{b}\right) }=0$ and tracing the affine equation of motion~(\ref{Eq_Field_Affine}), it is clear that on cosmological scales we should have \begin{equation} T_{\lambda\mu\nu}=\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }+\frac{1}{2}\left[ g_{\lambda\nu}\sigma^{\rho}{}_{\rho\mu}^{\left( \mathrm{DM}\right) }-g_{\lambda\mu}\sigma^{\rho}{}_{\rho\nu}^{\left( \mathrm{DM}\right) }\right] , \label{Eq_T=DM} \end{equation} which means that torsion vanishes in the absence of dark matter. In the context of ECSK, torsion cannot propagate in a vacuum. To have a propagating torsion, we must make a different action choice than Eq.~(\ref{Eq_Action}), for instance, the Holst action or the Horndeski generalization of Refs.~\cite{Ref-Nos-2017-Horndeski,Ref-Nos-2018-CosmoHorndsk}.
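The inversion above can be verified numerically: substituting the torsion written in terms of the dark matter spin tensor back into the left-hand side of the affine equations of motion (with vanishing baryonic spin tensor and, for simplicity of the illustration, the identity metric in four dimensions) recovers the spin tensor source:

```python
# Numerical check that T_{lmn} = sigma_{lmn} + (g_{ln} tr_m - g_{lm} tr_n)/2
# solves T_{lmn} - g_{lm} T^r_{rn} + g_{ln} T^r_{rm} = sigma_{lmn}.
import numpy as np

rng = np.random.default_rng(2)
B = rng.random((4, 4, 4))
sigma = B - np.swapaxes(B, 1, 2)  # spin tensor: sigma_{lmn} = -sigma_{lnm}
g = np.eye(4)                     # identity metric: index position immaterial

tr = np.einsum('rrm->m', sigma)   # trace sigma^rho_{rho mu}

# Torsion expressed through the dark matter spin tensor:
T = sigma + 0.5 * (np.einsum('ln,m->lmn', g, tr) - np.einsum('lm,n->lmn', g, tr))

# Left-hand side of the affine equations of motion:
Ttr = np.einsum('rrn->n', T)      # trace T^rho_{rho nu} = -sigma^rho_{rho nu}/2
lhs = T - np.einsum('lm,n->lmn', g, Ttr) + np.einsum('ln,m->lmn', g, Ttr)

assert np.allclose(lhs, sigma)    # the spin tensor source is recovered
print("affine field equation inverted correctly")
```

As a byproduct, the trace relation $T^{\rho}{}_{\rho\nu}=-\frac{1}{2}\sigma^{\rho}{}_{\rho\nu}^{(\mathrm{DM})}$ in four dimensions follows from the same contraction.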
Since torsion is dark for baryonic matter, Eq.~(\ref{Eq_Field_Metric}) can be regarded as
\begin{equation}
\mathring{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\mathring{R}+\Lambda g_{\mu\nu}=\mathcal{T}_{\mu\nu}^{\left( \mathrm{b}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{eff-DM}\right) },
\end{equation}
where the effective dark matter stress-energy tensor $\mathcal{T}_{\mu\nu}^{\left( \mathrm{eff-DM}\right) }=\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }$ causes the observed effects of dark matter. Here $\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }$ corresponds to the stress-energy tensor of \textquotedblleft bare\textquotedblright\ dark matter. In the next sections, we show that torsion amplifies its weight through the effective torsional stress-energy tensor $\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }$ from Eq.~(\ref{Eq_T_eff_torsion}). Since the interaction of torsion with baryonic matter is negligible in current times, torsion would be as dark as its source. Since current observations are only able to detect the Riemannian piece of the geometry, they would be sensitive to the combined or \textquotedblleft dressed\textquotedblright\ effect of $\mathcal{T}_{\mu\nu}^{\left( \mathrm{eff-DM}\right) }=\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }+\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }$, but they will not be able to distinguish bare dark matter from torsion. Perhaps only a careful measurement of the propagation of the polarization of gravitational waves could distinguish bare dark matter from its \textquotedblleft torsional dress\textquotedblright~\cite{Ref-Nos-2019-GW-Polarization}. At this point, the lack of precise knowledge of the nature of dark matter creates what may seem like an insurmountable problem when trying to model its spin tensor. There are some usual Ans\"{a}tze for the spin tensor, such as the Weyssenhoff fluid~\cite{Ref-Weyssenhof, Ref-Obukhov-Weyssenhof}.
However, given our ignorance of the physics of dark matter, any Ansatz for $\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }$ may seem excessive. The problem is that, since we do not have any information on the spin tensor of dark matter $\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }$, we cannot use the field equation~(\ref{Eq_T=DM}). Without this field equation we have no information on torsion, and it seems impossible to solve the system. In what follows this problem is treated in a cosmological setting. The general idea is that, whatever dark matter is, it is possible to use symmetry arguments and dimensional analysis to arrive at a general Ansatz for an effective $\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }$ on cosmological scales. Using this Ansatz, it becomes possible to study the effects of the torsion created by dark matter on cosmic evolution. Let us start by considering the canonical FLRW metric with a homogeneous, isotropic and Riemannian flat spatial section,
\begin{equation}
\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}\left( t\right) \left( \mathrm{d}x^{2}+\mathrm{d}y^{2}+\mathrm{d}z^{2}\right) .
\label{Eq_FLRW_Metric}
\end{equation}
Despite not knowing the Lagrangian $\mathcal{L}_{\mathrm{DM}}$, we know that the only stress-energy tensor compatible with the cosmological symmetries $\pounds _{\zeta}\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }=0$ is the canonical one,
\begin{equation}
\mathcal{T}_{\mu\nu}^{\left( \mathrm{DM}\right) }=\left( \rho_{\mathrm{DM}}+p_{\mathrm{DM}}\right) U_{\mu}U_{\nu}+p_{\mathrm{DM}}g_{\mu\nu},
\end{equation}
where $\rho_{\mathrm{DM}}$ and $p_{\mathrm{DM}}$ are the dark matter density and pressure. The same can be done with the spin tensor.
The most general spatially isotropic and homogeneous spin tensor for dark matter, $\pounds _{\zeta}\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }=0$, must have the \textquotedblleft Cartan staircase\textquotedblright\ form
\begin{equation}
\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }=-2\left( g_{\mu\lambda}g_{\nu\rho}-g_{\mu\rho}g_{\nu\lambda}\right) h^{\rho}\left( t\right) -2\sqrt{\left\vert g\right\vert }\epsilon_{\lambda\mu\nu\rho}f^{\rho}\left( t\right) ,
\label{Ec_Spin_Tensor_Escalera_Cartan}
\end{equation}
with the 4-vectors $h^{\rho}\left( t\right) $ and $f^{\rho}\left( t\right) $ having the form $h^{\rho}\left( t\right) =-h\left( t\right) U^{\rho}$ and $f^{\rho}\left( t\right) =-f\left( t\right) U^{\rho}$. In terms of components,
\begin{align}
\sigma_{0\mu\nu}^{\left( \mathrm{DM}\right) } & =0,\label{Ec_Spin_Tensor-0}\\
\sigma_{ij0}^{\left( \mathrm{DM}\right) } & =2g_{ij}h\left( t\right) ,\label{Ec_Spin_Tensor-h}\\
\sigma_{ijk}^{\left( \mathrm{DM}\right) } & =2\sqrt{\left\vert g\right\vert }\epsilon_{ijk}f\left( t\right) , \label{Ec_Spin_Tensor-f}
\end{align}
where $i,j,k=1,2,3$ and $\lambda,\mu,\nu=0,1,2,3$. From Eqs.~(\ref{Ec_Spin_Tensor-0}-\ref{Ec_Spin_Tensor-f}) it is already clear that dark matter spatial isotropy and homogeneity are not fully compatible with usual models such as the Weyssenhoff spin fluid. At this point, we start to notice how different it is to model a high-density fermionic plasma as a source of spin and torsion versus non-interacting dark matter. A high-density fermionic plasma in the early Universe is well modeled as a Weyssenhoff spin fluid because its strong interactions create a rapidly changing spin tensor on short scales. It respects the Copernican principle because on longer cosmological scales only the average of these local spin anisotropies matters. The same arguments do not seem to hold for dark matter, considering it as a non-interacting fluid extended over cosmological distances in the current epoch.
That is why the Ansatz of Eq.~(\ref{Ec_Spin_Tensor_Escalera_Cartan}) for the spin tensor of dark matter may be a far better choice than the standard Weyssenhoff spin fluid. The spin tensor, torsion and contorsion are all algebraically related through Eqs.~(\ref{Eq_T=DM}) and~(\ref{Eq_K=T}). From them, it is straightforward to conclude that, whatever $h\left( t\right) $ and $f\left( t\right) $ are,
\begin{align}
T_{\lambda\mu\nu} & =\left( g_{\mu\lambda}g_{\nu\rho}-g_{\mu\rho}g_{\nu\lambda}\right) h^{\rho}\left( t\right) -2\sqrt{\left\vert g\right\vert }\epsilon_{\lambda\mu\nu\rho}f^{\rho}\left( t\right) ,\label{Eq_FLRW_T}\\
K_{\mu\nu\lambda} & =\left( g_{\mu\lambda}g_{\nu\rho}-g_{\mu\rho}g_{\nu\lambda}\right) h^{\rho}\left( t\right) +\sqrt{\left\vert g\right\vert }f^{\rho}\left( t\right) \epsilon_{\rho\mu\nu\lambda}, \label{Eq_FLRW_K}
\end{align}
i.e., the same (still unknown) functions $h\left( t\right) $ and $f\left( t\right) $ describe both torsion and contorsion. Using the expressions~(\ref{Eq_FLRW_Metric},\ref{Eq_FLRW_T},\ref{Eq_FLRW_K}), it is possible to calculate the Lorentz curvature components~(\ref{Eq_R+DK+K2}), and from them the field equations~(\ref{Eq_Field_Metric}) lead to the generalized Friedmann relations
\begin{align}
3\left[ \left( H+h\right) ^{2}-f^{2}\right] & =\rho_{\mathrm{DM}},\label{Eq_Friedmann_Gen_density}\\
2\left( \dot{H}+\dot{h}\right) +\left( 3H+h\right) \left( H+h\right) -f^{2} & =-p_{\mathrm{DM}}. \label{Eq_Friedmann_Gen_p}
\end{align}
To solve Eqs.~(\ref{Eq_Friedmann_Gen_density}) and~(\ref{Eq_Friedmann_Gen_p}), we need to know the dependence of $f$ and $h$ on other physical variables, such as the dark matter density and pressure. The next section shows how to find an Ansatz for these \textquotedblleft torsional equations of state\textquotedblright\ and how to solve the system.
\subsection{Torsional dressing of Dark Matter and Dark Energy}
At this point, instead of making some standard conjecture (Weyssenhoff fluid, Frenkel condition, Tulczyjew condition, etc.) on the physical nature of the dark matter spin tensor $\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }$, we adopt a simpler approach. We may not have an understanding of $\sigma_{\lambda\mu\nu}^{\left( \mathrm{DM}\right) }$ from first principles, but we have some clues about its form. On the one hand, replacing Eq.~(\ref{Eq_FLRW_K}) in Eq.~(\ref{Eq_T_eff_torsion}) we can get an effective stress-energy tensor $\mathcal{T}_{\mu\nu}^{\left( \mathrm{T}\right) }$ in terms of $f$ and $h$. Using dimensional analysis on it, it is clear that, at least in what concerns units, we have
\begin{align}
f & \sim\sqrt{\mathrm{energy}\text{ }\mathrm{density}},\\
h & \sim\sqrt{\mathrm{energy}\text{ }\mathrm{density}},\\
\mathring{\nabla}_{\mu}h^{\mu} & \sim\mathrm{energy}\text{ }\mathrm{density}.
\end{align}
On the other hand, it is clear that in a dark matter vacuum ($\rho_{\mathrm{DM}}=0$) its spin tensor vanishes and $h\left( t\right) =f\left( t\right) =0$. Similarly, it seems reasonable to expect $h\left( t\right) $ and $f\left( t\right) $ to grow for higher values of $\rho_{\mathrm{DM}}$. For this reason, it seems natural to propose an Ansatz of \textquotedblleft barotropic relations\textquotedblright\ between $h\left( t\right) $, $f\left( t\right) $ and the dark matter energy density $\rho_{\mathrm{DM}}$ of the form
\begin{align}
f & \sim\sqrt{\rho_{\mathrm{DM}}},\label{Eq_Protobaro_f}\\
h & \sim\sqrt{\rho_{\mathrm{DM}}}. \label{Eq_Protobaro_Dh}
\end{align}
Of course, much more complex relationships are possible, but these seem to be the simplest torsional equations of state.
Let us consider a standard barotropic relation for the dark matter pressure, $p_{\mathrm{DM}}=\omega_{\mathrm{DM}}\rho_{\mathrm{DM}}$, and let us write the barotropic Ansatz for $f$ as
\begin{equation}
f=\alpha_{f}\sqrt{\frac{\rho_{\mathrm{DM}}}{3}},
\end{equation}
where $\alpha_{f}$ is a constant. In terms of $\alpha_{f}$ it proves practical to define the \textquotedblleft semi-dressed\textquotedblright\ dark matter energy density and pressure,
\begin{align}
\rho_{f} & =\rho_{\mathrm{DM}}+3f^{2}=\left( 1+\alpha_{f}^{2}\right) \rho_{\mathrm{DM}},\label{Eq_Semidressed_Density}\\
p_{f} & =p_{\mathrm{DM}}-f^{2}=\left( \omega_{\mathrm{DM}}-\frac{1}{3}\alpha_{f}^{2}\right) \rho_{\mathrm{DM}}. \label{Eq_Semidressed_Pressure}
\end{align}
In terms of $\rho_{f}$ and $p_{f}$, Eqs.~(\ref{Eq_Friedmann_Gen_density}) and~(\ref{Eq_Friedmann_Gen_p}) take the simpler form
\begin{align}
3\left( H+h\right) ^{2} & =\rho_{f}, \label{Eq_Friedmann_Density_Seminaked}\\
2\left( \dot{H}+\dot{h}\right) +\left( 3H+h\right) \left( H+h\right) & =-p_{f}, \label{Eq_Friedmann_Pressure_Seminaked}
\end{align}
where the \textquotedblleft semi-dressed\textquotedblright\ pressure $p_{f}$ obeys the barotropic relation $p_{f}=\omega_{f}\rho_{f}$ with
\begin{equation}
\omega_{f}=\frac{\omega_{\mathrm{DM}}-\alpha_{f}^{2}/3}{1+\alpha_{f}^{2}}.
\end{equation}
In short, the $f$-component of the spin tensor has the effect of replacing the original \textquotedblleft bare\textquotedblright\ dark matter density $\rho_{\mathrm{DM}}$ by an amplified \textquotedblleft semi-dressed\textquotedblright\ energy density $\rho_{f}$, Eq.~(\ref{Eq_Semidressed_Density}). The pressure $p_{\mathrm{DM}}$ is replaced by a smaller \textquotedblleft semi-dressed\textquotedblright\ pressure $p_{f}$, Eq.~(\ref{Eq_Semidressed_Pressure}). It is worth noticing that the case of cold dark matter, $\omega_{\mathrm{DM}}=0$, leads to an effective negative pressure with $-1/3<\omega_{f}\leq0$.
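The algebra above is simple enough to check mechanically. The following snippet is only an illustrative aid (not part of the derivation; the parameter values are arbitrary) verifying the semi-dressed relations and the range $-1/3<\omega_{f}\leq0$ for cold dark matter:

```python
from fractions import Fraction

def omega_f(omega_dm, alpha_f):
    # Effective "semi-dressed" barotropic index,
    # omega_f = (omega_DM - alpha_f^2/3) / (1 + alpha_f^2).
    af2 = Fraction(alpha_f) ** 2
    return (Fraction(omega_dm) - af2 / 3) / (1 + af2)

# Cold dark matter (omega_DM = 0): omega_f = 0 at alpha_f = 0 and
# approaches -1/3 from above as alpha_f grows, so -1/3 < omega_f <= 0.
assert omega_f(0, 0) == 0
for af in (Fraction(1, 2), 1, 5, 100):
    assert Fraction(-1, 3) < omega_f(0, af) < 0

# rho_f = (1 + alpha_f^2) rho_DM with p_f = omega_f rho_f reproduces
# p_f = (omega_DM - alpha_f^2/3) rho_DM, i.e. the semi-dressed pressure.
rho_dm, af = Fraction(7, 3), Fraction(2)
assert omega_f(0, af) * (1 + af**2) * rho_dm == -(af**2) / 3 * rho_dm
```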
This way, torsion can easily produce an effective \textquotedblleft non-particle\textquotedblright\ negative pressure $p_{f}$ from canonical cold dark matter $\omega_{\mathrm{DM}}=0$. From Eqs.~(\ref{Eq_Friedmann_Density_Seminaked}) and~(\ref{Eq_Friedmann_Pressure_Seminaked}), we may feel compelled to define a generalized Hubble parameter $H+h$. However, it is important to remember that our observations describe the behavior of classical particles (i.e., galaxies). Classical particles are sensitive only to the Riemannian piece of the geometry and oblivious to torsion, and therefore observations measure $H$ and not $H+h$. For this reason, it is convenient to write down Eqs.~(\ref{Eq_Friedmann_Density_Seminaked}) and~(\ref{Eq_Friedmann_Pressure_Seminaked}) as
\begin{align}
3H^{2} & =\rho_{f}+\rho_{h},\label{Eq_Friedmann_rho_h}\\
2\dot{H}+3H^{2} & =-\left( p_{f}+p_{h}\right) , \label{Eq_Friedmann_p_h}
\end{align}
where $\rho_{h}$ and $p_{h}$ are the effective density and pressure originated when moving all the $h\left( t\right) $ terms to the right-hand side of the field equations,
\begin{align}
\rho_{h} & =-3\left( h+2H\right) h,\label{Eq_Def_rho_h}\\
p_{h} & =h^{2}+4Hh+2\dot{h}.
\end{align}
The total dark matter and torsion weight is described by the effective \textquotedblleft dressed\textquotedblright\ density and pressure,
\begin{align}
\rho_{\mathrm{dressed}} & =\rho_{f}+\rho_{h},\\
p_{\mathrm{dressed}} & =p_{f}+p_{h}.
\end{align}
At this point, using the relation~(\ref{Eq_Protobaro_Dh}) we propose the \textquotedblleft barotropic\textquotedblright\ Ansatz
\begin{equation}
h=\alpha_{h}\sqrt{1+\alpha_{f}^{2}}\sqrt{\rho_{\mathrm{DM}}}=\alpha_{h}\sqrt{\rho_{f}},
\end{equation}
and from here the behavior of the dark matter spin tensor and its torsion becomes clearer. The two functions $f$ and $h$ parametrize the dark matter spin tensor, and they create an effective \textquotedblleft torsional dress\textquotedblright\ for $\rho_{\mathrm{DM}}$ and $p_{\mathrm{DM}}$.
For instance, a small $\rho_{\mathrm{DM}}$ may be amplified by torsion and create a much bigger $\rho_{\mathrm{dressed}}=\rho_{f}+\rho_{h}$. Current observations would measure the effective $\rho_{\mathrm{dressed}}$ and not the original dark matter density $\rho_{\mathrm{DM}}$. On the other hand, the torsional-dressed density $\rho_{\mathrm{dressed}}=\rho_{f}+\rho_{h}$ has a more complex behavior. From Eq.~(\ref{Eq_Friedmann_Density_Seminaked}), it is clear that $6\left( H+h\right) \left( \dot{H}+\dot{h}\right) =\dot{\rho}_{f}$. Replacing this in Eq.~(\ref{Eq_Friedmann_Pressure_Seminaked}), and considering that dark matter does not interact with SM matter, Eq.~(\ref{Eq_Conservation_SM}), it is possible to prove that the two dark matter-torsion modes, the $f$-dressed density $\rho_{f}$ and the $h$-dressed density $\rho_{h}$, interchange energy among them,
\begin{align}
\dot{\rho}_{f}+3H\left( \rho_{f}+p_{f}\right) & =-Q,\label{Eq_Q1}\\
\dot{\rho}_{h}+3H\left( \rho_{h}+p_{h}\right) & =Q, \label{Eq_Q2}
\end{align}
where
\begin{equation}
Q=\left( 1+3\omega_{f}\right) h\rho_{f}, \tag{46}
\end{equation}
and therefore the effective density $\rho_{\mathrm{dressed}}=\rho_{f}+\rho_{h}$ obeys the canonical conservation relation
\begin{equation}
\dot{\rho}_{\mathrm{dressed}}+3H\left( \rho_{\mathrm{dressed}}+p_{\mathrm{dressed}}\right) =0, \tag{47}
\end{equation}
with $p_{\mathrm{dressed}}$ obeying a non-trivial equation of state $p_{\mathrm{dressed}}=p_{\mathrm{dressed}}\left( \rho_{f},\rho_{h}\right) $. The next section analyses the phenomenology of this system for some important particular cases.
\label{Sec_Cosmo_DM_DT}

\section{Cosmological consequences of Dark Torsion}
The two torsional modes $h$ and $f$ create very distinctive phenomenology in the context of cosmic evolution. The simplest case is $h=\alpha_{h}=0$, leading us to a system of the canonical form
\begin{align}
3H^{2} & =\rho_{f},\tag{48}\\
\dot{\rho}_{f}+3H\left( 1+\omega_{f}\right) \rho_{f} & =0.
\tag{49}
\end{align}
When $h=0$, the effective density $\rho_{f}$ packs dark matter and torsion altogether. The only difference with the standard torsionless $\mathrm{\Lambda CDM}$ case is that for cold dark matter $\omega_{\mathrm{DM}}=0$, the effective barotropic constant $\omega_{f}=\left( \omega_{\mathrm{DM}}-\alpha_{f}^{2}/3\right) /\left( 1+\alpha_{f}^{2}\right) $ has the allowed range $-1/3<\omega_{f}\leq0$. Since $\rho_{f}=\left( 1+\alpha_{f}^{2}\right) \rho_{\mathrm{DM}}$, it means that for large values of $\alpha_{f}$ a small quantity of dark matter can get significantly amplified. Now consider the case $h\neq0$. From Eq.~(\ref{Eq_Friedmann_Density_Seminaked}) we can obtain
\begin{equation}
H\left( t\right) =\left( \sqrt{\frac{1}{3}}\frac{\mathrm{s}_{H+h}}{\left\vert \alpha_{h}\right\vert }-\mathrm{s}_{h}\right) \left\vert h\left( t\right) \right\vert , \tag{50}
\end{equation}
where we are using the shortcut notation $\mathrm{s}_{X}=\mathrm{sign}\left( X\right) $. Therefore the $\rho_{h}$ density corresponds to
\begin{equation}
\rho_{h}=3\left( \left\vert \alpha_{h}\right\vert -\frac{2}{\sqrt{3}}\mathrm{s}_{h}\mathrm{s}_{H+h}\right) \frac{h^{2}}{\left\vert \alpha_{h}\right\vert }, \tag{51}
\end{equation}
and its sign depends on the signs $\mathrm{s}_{h}$ and $\mathrm{s}_{H+h}$:
\begin{align}
\mathrm{s}_{h} & =-1\text{ \ },\text{ \ }\mathrm{s}_{H+h}=1\Longrightarrow H\left( t\right) =\left( \sqrt{\frac{1}{3}}\frac{1}{\left\vert \alpha_{h}\right\vert }+1\right) \left\vert h\left( t\right) \right\vert \rightarrow\frac{\left\vert h\left( t\right) \right\vert }{H\left( t\right) }<1\text{\ },\text{ \ }\rho_{h}=3\left( \left\vert \alpha_{h}\right\vert +\frac{2}{\sqrt{3}}\right) \frac{h^{2}}{\left\vert \alpha_{h}\right\vert },\tag{52}\\
\mathrm{s}_{h} & =-1\text{ \ },\text{ \ }\mathrm{s}_{H+h}=-1\Longrightarrow H\left( t\right) =\left( 1-\sqrt{\frac{1}{3}}\frac{1}{\left\vert \alpha_{h}\right\vert }\right) \left\vert h\left( t\right) \right\vert \Longrightarrow\left\vert \alpha_{h}\right\vert >\sqrt{\frac{1}{3}}
\rightarrow\rho_{h}=3\left( \left\vert \alpha_{h}\right\vert -\frac{2}{\sqrt{3}}\right) \frac{h^{2}}{\left\vert \alpha_{h}\right\vert },\tag{53}\\
\mathrm{s}_{h} & =1\text{ \ },\text{ \ }\mathrm{s}_{H+h}=1\Longrightarrow H\left( t\right) =\left( \sqrt{\frac{1}{3}}\frac{1}{\left\vert \alpha_{h}\right\vert }-1\right) \left\vert h\left( t\right) \right\vert \Longrightarrow\left\vert \alpha_{h}\right\vert <\sqrt{\frac{1}{3}}\rightarrow\rho_{h}=3\left( \left\vert \alpha_{h}\right\vert -\frac{2}{\sqrt{3}}\right) \frac{h^{2}}{\left\vert \alpha_{h}\right\vert }<0, \tag{54}
\end{align}
and the last two cases lead to $\left\vert h\left( t\right) \right\vert /H\left( t\right) >1$. Additionally, we note that the case $\mathrm{s}_{h}=-1$, $\mathrm{s}_{H+h}=1$ leads directly to $\rho_{h}>0$, consistent with the weak energy condition. But is it reasonable to expect $\left\vert h\left( t\right) \right\vert /H\left( t\right) <1$ or $\left\vert h\left( t\right) \right\vert /H\left( t\right) \ll1$ during the cosmic evolution? This is an open question. Now, after replacing $H$ and $h=\alpha_{h}\sqrt{\rho_{f}}$ into $\dot{\rho}_{f}+3H\left( 1+\omega_{f}\right) \rho_{f}=-\left( 1+3\omega_{f}\right) h\rho_{f}$, we obtain the following solution for $h$, with $\mathrm{s}_{h}=-1$ and $\mathrm{s}_{H+h}=1$:
\begin{equation}
\frac{h\left( t\right) }{h\left( t_{0}\right) }=\left[ 1+\Delta\left( t-t_{0}\right) \right] ^{-1}, \tag{55}
\end{equation}
where $t_{0}$ is today and
\begin{equation}
\Delta=\left[ \left\vert \alpha_{h}\right\vert -\frac{\sqrt{3}}{2}\left( 1+\omega_{f}\right) \right] \frac{\left\vert h\left( t_{0}\right) \right\vert }{\left\vert \alpha_{h}\right\vert }.
\tag{56}
\end{equation}
Recalling that $\omega_{\mathrm{DM}}=0\rightarrow\omega_{f}=-\alpha_{f}^{2}/\left[ 3\left( 1+\alpha_{f}^{2}\right) \right] $, we write
\begin{equation}
\Delta=\left[ \left\vert \alpha_{h}\right\vert -\frac{\sqrt{3}}{2}\left( \frac{1+2\alpha_{f}^{2}/3}{1+\alpha_{f}^{2}}\right) \right] \frac{\left\vert h\left( t_{0}\right) \right\vert }{\left\vert \alpha_{h}\right\vert }, \tag{57}
\end{equation}
so that
\begin{align}
\mathrm{s}_{\Delta} & =1\Longrightarrow\Delta>0\rightarrow\text{standard scheme},\tag{58}\\
\mathrm{s}_{\Delta} & =-1\Longrightarrow\Delta<0\rightarrow\text{phantom evolution!} \tag{59}
\end{align}
But
\begin{equation}
\Delta<0\longleftrightarrow\left\vert \alpha_{h}\right\vert <\frac{\sqrt{3}}{2}\left( \frac{1+2\alpha_{f}^{2}/3}{1+\alpha_{f}^{2}}\right) <1, \tag{60}
\end{equation}
and so
\begin{equation}
h\left( t\right) =\frac{h\left( t_{0}\right) }{\left\vert \Delta\right\vert }\left( t_{s}-t\right) ^{-1}\text{ \ },\text{ \ }t_{s}=t_{0}+\frac{1}{\left\vert \Delta\right\vert }, \tag{61}
\end{equation}
and all the components explode at $t_{s}$: the Hubble parameter and the densities $\rho_{\mathrm{DM}}$, $\rho_{f}$, $\rho_{\mathrm{dressed}}$ and $Q$. This is a Big Rip singularity. The $Q$-function becomes
\begin{equation}
Q\left( t\right) =-\frac{1}{\left\vert \alpha_{h}\right\vert ^{2}\left( 1+\alpha_{f}^{2}\right) }\left\vert h\right\vert ^{3}, \tag{62}
\end{equation}
and so there is energy transfer from $\rho_{h}$ to $\rho_{f}$.
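The phantom branch can be checked numerically. Below is a minimal sketch, with arbitrary parameter values chosen only to make $\Delta<0$, confirming that the solution $h(t)/h(t_{0})=[1+\Delta(t-t_{0})]^{-1}$ indeed diverges at the Big Rip time $t_{s}=t_{0}+1/\left\vert \Delta\right\vert $:

```python
import math

# Arbitrary illustrative values (not fitted to data); s_h = -1, so h(t0) < 0.
t0, h0 = 0.0, -1.0
alpha_h, alpha_f = 0.5, 2.0

omega_f = -alpha_f**2 / (3 * (1 + alpha_f**2))
# For omega_DM = 0, 1 + omega_f equals (1 + 2 alpha_f^2/3)/(1 + alpha_f^2),
# so the Delta < 0 condition is |alpha_h| < (sqrt(3)/2)(1 + omega_f).
assert math.isclose(1 + omega_f, (1 + 2 * alpha_f**2 / 3) / (1 + alpha_f**2))

Delta = (abs(alpha_h) - math.sqrt(3) / 2 * (1 + omega_f)) * abs(h0) / abs(alpha_h)
assert Delta < 0                      # phantom branch
t_s = t0 + 1 / abs(Delta)             # Big Rip time

def h(t):
    # h(t)/h(t0) = [1 + Delta (t - t0)]^(-1)
    return h0 / (1 + Delta * (t - t0))

# h (and with it rho_f = h^2/alpha_h^2, H, Q, ...) blows up as t -> t_s:
assert abs(h(t_s - 1e-6)) > 1e5 * abs(h0)
```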
\section{Thermodynamics}
We inspect two thermodynamic aspects in the presence of torsion: adiabaticity and the dark matter temperature. We start with the Gibbs relation
\begin{equation}
T\mathrm{d}S=\mathrm{d}\left( \frac{\rho_{\mathrm{DM}}}{n}\right) +p_{\mathrm{DM}}\mathrm{d}\left( \frac{1}{n}\right) , \tag{63}
\end{equation}
implying
\begin{equation}
nT\frac{\mathrm{d}S}{\mathrm{d}t}=-\left( \rho_{\mathrm{DM}}+p_{\mathrm{DM}}\right) \frac{\dot{n}}{n}+\dot{\rho}_{\mathrm{DM}}, \tag{64}
\end{equation}
where $T$ is the temperature, $S$ the entropy and $n$ the particle number density. Using the integrability condition ($S$ is a function of state) we have $\partial^{2}S/\partial T\partial n=\partial^{2}S/\partial n\partial T$ and therefore
\begin{equation}
n\frac{\partial T}{\partial n}+\left( p_{\mathrm{DM}}+\rho_{\mathrm{DM}}\right) \frac{\partial T}{\partial\rho_{\mathrm{DM}}}=T\frac{\partial p_{\mathrm{DM}}}{\partial\rho_{\mathrm{DM}}}. \tag{65}
\end{equation}
Since $\rho_{f}=\left( 1+\alpha_{f}^{2}\right) \rho_{\mathrm{DM}}$ and $\rho_{f}$ satisfies Eq.~(\ref{Eq_Q1}), we have that the bare dark matter density obeys the \textquotedblleft conservation\textquotedblright\ law
\begin{equation}
\dot{\rho}_{\mathrm{DM}}+3H\left( 1+\omega_{f}\right) \rho_{\mathrm{DM}}=-\frac{Q}{1+\alpha_{f}^{2}}, \tag{66}
\end{equation}
and under the hypothesis $\dot{n}+3Hn=0$ (conservation of the number of dark matter particles) we have
\begin{equation}
nT\frac{\mathrm{d}S}{\mathrm{d}t}=3H\left( p_{\mathrm{DM}}-\omega_{f}\rho_{\mathrm{DM}}\right) -\frac{Q}{1+\alpha_{f}^{2}}. \tag{67}
\end{equation}
In the case of cold dark matter, $\omega_{\mathrm{DM}}=0$, this implies that
\begin{equation}
nT\frac{\mathrm{d}S}{\mathrm{d}t}=\frac{1}{1+\alpha_{f}^{2}}\left( H\alpha_{f}^{2}\rho_{\mathrm{DM}}-Q\right) , \tag{68}
\end{equation}
and there is no adiabaticity. Given that $Q<0$, we have $\mathrm{d}S/\mathrm{d}t>0$ and the second law of thermodynamics is guaranteed.
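The sign argument behind Eq.~(68) can be illustrated with arbitrary numbers (an illustrative aid only; any $H>0$, $\rho_{\mathrm{DM}}>0$ on the $\mathrm{s}_{h}=-1$ branch works): since $1+3\omega_{f}>0$ for cold dark matter, $Q<0$ and the entropy production is positive.

```python
# Illustrative sign check of Eq. (68); parameter values are arbitrary.
H, rho_dm, alpha_f, alpha_h = 0.7, 1.3, 2.0, 0.5

rho_f = (1 + alpha_f**2) * rho_dm
h = -alpha_h * rho_f ** 0.5                      # s_h = -1 branch, h < 0
omega_f = -alpha_f**2 / (3 * (1 + alpha_f**2))
Q = (1 + 3 * omega_f) * h * rho_f                # Eq. (46)
assert 1 + 3 * omega_f > 0 and Q < 0

# n T dS/dt = (H alpha_f^2 rho_DM - Q) / (1 + alpha_f^2), Eq. (68)
entropy_rate = (H * alpha_f**2 * rho_dm - Q) / (1 + alpha_f**2)
assert entropy_rate > 0                          # dS/dt > 0: second law holds
```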
As we know, $\mathrm{\Lambda CDM}$~\cite{Ref-PLANCK} has been quite successful in describing the current state of cosmic evolution, even though it is not free of problems. As a consequence, the idea of dark energy emerged as a more physical alternative to $\Lambda$; moreover, a dark matter-dark energy interaction is not ruled out by observations~\cite{Ref-B.Wang}. In this interaction framework, the non-adiabaticity is manifest~\cite{Ref-Victor}. So, we conjecture that torsion is a cause of non-adiabaticity. Using the integrability condition, the temperature can be obtained from
\begin{equation}
\frac{\dot{T}}{T}=-3H\omega_{\mathrm{DM}}\left( 1+\frac{\left( 1+\omega_{\mathrm{DM}}\right) \left( 1+\alpha_{f}^{2}\right) }{1+\omega_{\mathrm{DM}}+2\alpha_{f}^{2}/3+Q/3H\rho_{\mathrm{DM}}}\right) ^{-1}, \tag{69}
\end{equation}
and it is clear that $\omega_{\mathrm{DM}}=0\Longrightarrow T=\mathrm{const}$, consistent with the \textquotedblleft orthodoxy\textquotedblright\ which tells us that the dark matter temperature is constant during the cosmic evolution. Thus, torsion does not affect the dark matter temperature.

\section{Final remarks}
\label{Sec_TheEnd}
We have found a phantom scheme originating from torsion \textquotedblleft coupled\textquotedblright\ to dark matter. As a consequence, we would not need dark energy (phantom dark energy described by $\omega_{ph}<-1$) in order to explain such late evolution. This is an alternative that seeks to explain the phantom scheme, not ruled out by the current observational information. Another interesting fact that we have found is an interaction scheme between the torsional components $h$ and $f$. The significance of this interaction is not clear to us yet. If this interaction is real, is there any way to detect any observational consequence of it?
If torsion effects cannot be detected with current observations, perhaps they can be in the future, if the polarization of gravitational waves is measured; from this observational fact we could also obtain indications of $\left\vert h\left( t\right) \right\vert /H\left( t\right) $. This is an interesting conjecture to explore. From the thermodynamic point of view, the cosmic evolution turns out to be non-adiabatic ($\mathrm{\Lambda CDM}$ is an adiabatic scheme) and, since $h<0$, the second law of thermodynamics is guaranteed. It is also interesting that the dark matter temperature, even in the presence of torsion, remains constant through the cosmic evolution. Finally, and as we have already said, future observations could shed some light on the role, if any, of torsion in the cosmic evolution.

\begin{acknowledgement}
FI acknowledges financial support from the Chilean government through FONDECYT grant 1180681.
\end{acknowledgement}
\section{Introduction}
Rigid and affine registration is crucial in a variety of medical imaging studies and has been a topic of active research for decades. In a comprehensive image registration framework, the target image pair is often pre-aligned based on a rigid or affine transformation before using deformable (non-rigid) registration, eliminating the possible linear and large spatial misalignment between the target image pair. Solid structures such as bones can be aligned well with rigid and affine registration \cite{maintz1998survey,pluim2003mutual}. In conventional image registration approaches, inaccurate pre-alignment of the image pair may impair the registration accuracy or impede the convergence of the optimization algorithm, resulting in sub-optimal solutions \cite{zhou2014novel}. The success of recent learning-based deformable image registration approaches \cite{de2019deep,balakrishnan2018unsupervised,dalca2018unsupervised,heinrich2019closing,hering2021cnn,mok2020fast,mok2020large,mok2021conditional,hoopes2021hypermorph} has largely been fueled by accurate affine initialization using conventional image registration methods. While the conventional approaches excel in registration performance, the registration time depends on the degree of misalignment between the input images and can be time-consuming with high-resolution 3D image volumes. To facilitate real-time automated image registration, a few studies \cite{zhao2019unsupervised,shen2019networks,hu2018label,huang2021coarse} have proposed to learn joint affine and non-parametric registration with convolutional neural networks (CNNs). However, the standalone performance of the affine subnetwork compared to conventional affine registration algorithms is less explored.
Moreover, considering that an affine transformation is global and generally targets possibly large displacements, we argue that CNNs are not the ideal architecture to encode the orientation and absolute position of the image scans in Cartesian space, or the affine parameters, due to the inductive biases embedded in the architectural structure of CNNs. In this paper, we analyze and expose the generic inability and limited generalizability of CNN-based affine registration methods in cases with large initial misalignment and on image pairs unseen during training. Motivated by the recent success of vision transformer models \cite{vaswani2017attention,dosovitskiy2020image,wang2021pyramid,d2021convit,wu2021cvt}, we depart from the existing CNN-based approaches and propose a coarse-to-fine vision transformer (C2FViT) dedicated to 3D medical affine registration. To the best of our knowledge, this is the first learning-based affine registration approach that considers the non-local dependencies between input images when learning the global affine registration for 3D medical image registration. The main contributions of this work are as follows:
\begin{itemize}
\item we quantitatively investigate and analyze the registration performance, robustness and generalizability of existing learning-based and conventional affine registration methods in 3D brain registration;
\item we present a novel learning-based affine registration algorithm, namely C2FViT, which leverages convolutional vision transformers with a multi-resolution strategy. C2FViT outperforms recent CNN-based affine registration approaches while demonstrating superior robustness and generalizability across datasets;
\item the proposed learning paradigm and objective functions can be adapted to a variety of parametric registration approaches with minimal effort.
\end{itemize}
We evaluate our method on two tasks: template-matching normalization to MNI152 space \cite{grabner2006symmetric,evans2012brain,fischl2012freesurfer} and 3D brain atlas registration in native space. The results demonstrate that our method not only achieves superior registration performance over existing CNN-based methods, but the trained model also generalizes well to unseen datasets beyond the training dataset, reaching the registration performance of conventional affine registration methods.

\section{Related Work}
\subsection{Learning-based Affine Registration Methods}
Conventional approaches often formulate the affine registration problem as an iterative optimization problem, which optimizes the affine parameters directly using adaptive gradient descent \cite{klein2009elastix,avants2009advanced} or convex optimization \cite{heinrich2015multi}. While conventional approaches excel in registration accuracy, the registration time is subject to the complexity and resolution of the input image pairs. Recently, many learning-based approaches have been proposed for fast affine registration. These approaches significantly accelerate registration by formulating the affine registration problem as a learning problem using CNNs, circumventing the costly iterative optimization of conventional approaches. Existing CNN-based affine registration approaches can be divided into two categories, concatenation-based \cite{zhao2019unsupervised,hu2018label,huang2021coarse,miao2016cnn} and Siamese network approaches \cite{de2019deep,chen2021learning,shao2021weakly}, as shown in figure \ref{fig:compare}. Zhao et al. \cite{zhao2019unsupervised} propose a concatenation-based affine subnetwork that concatenates the fixed and moving images as input, and exploits single-stream CNNs to extract features based on the local misalignment of the input.
Considering that affine registration is global, their method cannot handle inputs with large initial misalignment, as the affine subnetwork lacks global connectivity and only focuses on the overlapping region between the two image spaces. In contrast to the concatenation-based method, de Vos et al. \cite{de2019deep} propose an unsupervised affine registration method using a Siamese CNN architecture for the fixed and moving images. A global average pooling \cite{lin2013network} is applied at the end of each pipeline in order to extract one feature per feature map, forcing the networks to encode orientations and affine transformations globally. Although their network focuses on the global high-level geometrical features of the separated inputs, their method completely ignores the local features of the initial misalignment between the input image pair. Moreover, a recent study \cite{liu2018intriguing} demonstrates that a pure CNN encoder fails spectacularly in a seemingly trivial coordinate transform problem, implying that a pure CNN encoder may not be an ideal architecture to encode the orientations and absolute positions of the image scans in Cartesian space or the affine parameters. Shen et al. \cite{shen2019networks} also report that CNN-based affine registration methods do not perform well in practice, even for deep CNNs with large receptive fields. It is worth noting that most of the existing CNN-based affine registration methods \cite{de2019deep,zhao2019unsupervised,hu2018label,huang2021coarse,shao2021weakly,chen2021learning} jointly evaluate the affine and deformable registration performance or completely ignore the standalone performance of the affine subnetwork compared to conventional affine registration algorithms.
As inaccurate affine pre-alignment of the image pair may impair the registration accuracy or impede the convergence of the deformable registration algorithm \cite{shen2019networks,zhou2014novel}, a comprehensive evaluation of CNN-based affine registration methods should by no means be ignored. \subsection{Vision Transformer} CNN architectures generally have limitations in modelling explicit long-range dependencies due to the intrinsic inductive biases, \ie, weight sharing and locality, embedded into their architectural structure. Recently, Dosovitskiy et al. \cite{dosovitskiy2020image} proposed a pioneering work, Vision Transformer (ViT), for image classification and proved that a pure transformer \cite{vaswani2017attention} architecture can attain state-of-the-art performance. Compared to CNN-based approaches, ViT offers less image-specific inductive bias and has tremendous potential when trained on large-scale datasets. Wang et al. \cite{wang2021pyramid} develop a pyramid architectural design for a pure transformer model to imitate the multi-scale strategy in CNNs, achieving promising results in various computer vision tasks. Subsequent studies \cite{wang2021pvtv2,li2021localvit,guo2021cmt,dai2021coatnet,d2021convit,wu2021cvt,chu2021twins,chen2021visformer} further extend ViT to pyramid architectural designs and introduce convolutions to ViT. These studies demonstrate that introducing a moderate convolutional inductive bias to ViT improves the overall performance, especially when training with small datasets. Apart from pure ViT methods, Zhang et al. \cite{zhang2021learning} and Chen et al. \cite{chen2021vit} combine a CNN encoder-decoder with a transformer for deformable registration. While CNNs have achieved remarkable success in deformable medical image registration, we argue that CNNs are not an ideal architecture for modelling and learning affine registration.
In contrast to deformable image registration, affine registration is often used to mitigate and remove large linear misalignment, which is a global operation and contradicts the inductive bias embedded in the architectural structure of CNNs. Building on the insights of ViT and its variants \cite{dosovitskiy2020image,wang2021pyramid,wu2021cvt,d2021convit}, we depart from the CNN architecture and propose a pure transformer-based method dedicated to 3D medical affine registration. \begin{figure*}[t] \centering \begin{center} \includegraphics[width=1.0\linewidth]{figure3_pdf_enlarge.pdf} \end{center} \caption{Overview of the proposed Coarse-to-Fine Vision Transformer (C2FViT). The entire model is divided into three stages, solving the affine registration in a coarse-to-fine manner.} \label{fig:overview} \end{figure*} \section{Method} Let $F$, $M$ be fixed and moving volumes defined over an $n$-D mutual spatial domain $\Omega \subseteq \mathbb{R}^n$. In this paper, we focus on 3D affine medical image registration, \ie, $n = 3$ and $\Omega \subseteq \mathbb{R}^3$. For simplicity, we further assume that $F$ and $M$ are single-channel, grayscale images. Our goal is to learn the optimal affine matrix that aligns $F$ and $M$. Specifically, we parametrize the affine registration problem as a function $f_\theta(F, M) = \mathcal{A}$ using a coarse-to-fine vision transformer (C2FViT), where $\theta$ is a set of learning parameters and $\mathcal{A}$ represents the predicted affine transformation matrix. \subsection{Coarse-to-fine Vision Transformer (C2FViT)} The overall pipeline of our method is depicted in figure \ref{fig:overview}. Our method is divided into $L$ stages that solve the affine registration in a coarse-to-fine manner with an image pyramid.
All stages share an identical architecture consisting of a \emph{convolutional patch embedding} layer and $N_i$ transformer encoder blocks, where $N_i$ denotes the number of transformer blocks in stage $i$. Each transformer encoder block consists of an alternating multi-head self-attention module and a \emph{convolutional feed-forward layer}, as depicted in figure \ref{fig:compare}. We use $L = 3$ and $N_i = 4$ for each stage $i$ throughout this paper. Specifically, we first create the input pyramid by downsampling the input $F$ and $M$ with trilinear interpolation to obtain $F_i \in \{ F_1, F_2, \ldots ,F_L \}$ (and $M_i \in \{ M_1, M_2, \ldots ,M_L \}$), where $F_i$ represents the downsampled $F$ with a scale factor of $0.5^{L-i}$ and $F_L = F$. We then concatenate $F_i$ and $M_i$, and the concatenated input is fed to the convolutional patch embedding layer. Different from prior transformer-based architectures \cite{dosovitskiy2020image,wang2021pyramid,wu2021cvt,d2021convit}, we prune all the layer normalization operations, as we did not observe noticeable effects on the image registration performance in our experiments. Next, a stack of $N_i$ transformer encoder blocks takes as input the image patch embedding map and outputs the feature embedding of the input. C2FViT solves the affine registration problem in a coarse-to-fine manner, and the intermediate input moving image $M_i$ is transformed via \emph{progressive spatial transformation}. Additionally, for stage $i > 1$, a residual connection from the output embeddings (tokens) of the previous stage $i-1$ is added to the patch embeddings of the current stage $i$. Finally, the estimated affine matrix $\mathcal{A}_L$ of the final stage is adopted as the output of our model $f_\theta$.
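To make the pipeline concrete, the input pyramid construction and the stage-wise concatenation described above can be sketched in PyTorch as follows (an illustrative sketch with a toy $64^3$ resolution; \texttt{build\_pyramid} is a hypothetical helper, not the released implementation):

```python
import torch
import torch.nn.functional as nnf

def build_pyramid(vol, L=3):
    """Downsample a 5-D volume (B, C, H, W, D) with trilinear interpolation;
    level i (1-indexed) has scale factor 0.5**(L - i), and level L is the input."""
    return [
        vol if i == L else
        nnf.interpolate(vol, scale_factor=0.5 ** (L - i), mode="trilinear",
                        align_corners=False, recompute_scale_factor=False)
        for i in range(1, L + 1)
    ]

fixed = torch.rand(1, 1, 64, 64, 64)   # toy resolution for illustration
moving = torch.rand(1, 1, 64, 64, 64)
F_pyr, M_pyr = build_pyramid(fixed), build_pyramid(moving)

# Coarse-to-fine loop: each stage sees the concatenated pair at its scale.
for F_i, M_i in zip(F_pyr, M_pyr):
    pair = torch.cat([F_i, M_i], dim=1)  # (B, 2, H_i, W_i, D_i)
```

Each `pair` would then pass through that stage's patch embedding and transformer blocks.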
\vspace{-12pt} \subsubsection{Locality of C2FViT} While the ViT model \cite{dosovitskiy2020image} excels in modelling long-range dependencies within a sequence of non-overlapping image patches due to the self-attention mechanism, the vision transformer model lacks locality mechanisms to model the relationship between an input patch and its neighbours. Therefore, we follow \cite{li2021localvit,wu2021cvt,wang2021pvtv2} to add locality to the transformers in C2FViT. Specifically, we improve the transformer in two aspects: the patch embedding and the feed-forward layer. As shown in figure \ref{fig:overview}, we depart from the linear patch embedding approach \cite{dosovitskiy2020image} and adopt convolutional patch embedding \cite{wu2021cvt,wang2021pvtv2} instead. The goal of the convolutional patch embedding layer is to convert the input images into a sequence of overlapping patch embeddings. Formally, given a concatenated input $I \in \mathbb{R}^{H \times W \times D \times C}$, where $H$, $W$ and $D$ denote the spatial dimensions of $I$ and $C$ is the number of channels, the convolutional patch embedding layer utilizes a 3D convolution layer to compute the patch embedding map $\mathbf{Z} \in \mathbb{R}^{H_i \times W_i \times D_i \times d}$ of $I$. Specifically, the kernel size, stride, amount of zero-padding and number of feature maps of the 3D convolution layer are denoted as $k^3$, $s$, $p$ and $d$, respectively. The patch embedding map $\mathbf{Z}$ is then flattened into a sequence of patch embeddings (tokens) $\{\hat{\mathbf{Z}}_i \in \mathbb{R}^d |i=1, \ldots, N \}$, where $N = H_i W_i D_i$ and $d$ is the embedding dimension. The patch embeddings can be aggregated into a matrix $\hat{\mathbf{Z}} \in \mathbb{R}^{N \times d}$.
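A convolutional patch embedding layer of this kind can be sketched in PyTorch as follows (illustrative; the class name and the example values $k=7$, $s=4$, chosen so that a toy $64^3$ input yields $16^3 = 4096$ tokens, are assumptions consistent with the settings stated in the text):

```python
import torch
import torch.nn as nn

class ConvPatchEmbedding(nn.Module):
    """3-D convolutional patch embedding: turns a concatenated fixed/moving
    pair into a sequence of overlapping patch embeddings (tokens)."""
    def __init__(self, in_ch=2, d=256, k=7, s=4):
        super().__init__()
        p = k // 2  # zero-padding p = floor(k / 2)
        self.proj = nn.Conv3d(in_ch, d, kernel_size=k, stride=s, padding=p)

    def forward(self, x):                    # x: (B, C, H, W, D)
        z = self.proj(x)                     # (B, d, H_i, W_i, D_i)
        return z.flatten(2).transpose(1, 2)  # (B, N, d) token matrix

emb = ConvPatchEmbedding(in_ch=2, d=256, k=7, s=4)
tokens = emb(torch.rand(1, 2, 64, 64, 64))   # 16**3 = 4096 tokens of dim 256
```

Note that $k = 2s - 1$ here, so adjacent sliding windows overlap.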
We restrict the number of patches $N$ to $4096$ and the embedding dimension $d$ to $256$ for all convolutional patch embedding layers in C2FViT by varying the stride $s$ of the convolution layer, \ie, $s = (\frac{H}{16},\frac{W}{16},\frac{D}{16})$. Moreover, we enforce overlap between the sliding windows of the convolution operation by setting $k$ to $2s-1$, and pad the feature with zeros ($p= \lfloor \frac{k}{2} \rfloor$). In contrast to the linear patch embedding in ViT, the convolutional patch embedding in C2FViT helps model local spatial context and features across the fixed and moving images. It also provides flexibility to adjust the number and feature dimensions of the patch embeddings. On the other hand, the feed-forward layer in ViT consists of an MLP block with two hidden layers. In the transformer encoder, the feed-forward layer is the only component with locality and translation equivariance. Since the feed-forward layer in ViT is applied to the patch embedding map in a patch-wise manner, it lacks a local mechanism to model the relationship between adjacent patch embeddings. As such, we add a $3 \times 3 \times 3$ depth-wise convolution layer between the two hidden layers of the MLP block in the feed-forward layer of C2FViT \cite{wang2021pvtv2,li2021localvit}. The depth-wise convolution further introduces locality into the transformer encoder of C2FViT. \vspace{-12pt} \subsubsection{Global Connectivity of C2FViT} Transformers excel in modelling long-range dependencies within a sequence of embeddings owing to their self-attention mechanism. In contrast to existing CNN-based affine registration approaches, the misalignment and the global relationship between the fixed and moving images can be captured and modelled by the similarity between the projected query-key pairs in the transformer encoders of C2FViT, yielding the attention score for each patch embedding.
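The depth-wise convolutional feed-forward layer described above can be sketched as follows (illustrative PyTorch; the GELU activation and the expansion ratio of 4 are assumptions, not values taken from the paper):

```python
import torch
import torch.nn as nn

class ConvFeedForward(nn.Module):
    """MLP block with a 3x3x3 depth-wise conv between its two hidden
    layers, adding locality across neighbouring patch embeddings."""
    def __init__(self, d=256, expansion=4, grid=16):
        super().__init__()
        self.grid = grid              # tokens per spatial axis (16**3 = 4096)
        h = d * expansion             # hidden width (assumed expansion ratio)
        self.fc1 = nn.Linear(d, h)
        self.dw = nn.Conv3d(h, h, kernel_size=3, padding=1, groups=h)
        self.fc2 = nn.Linear(h, d)
        self.act = nn.GELU()

    def forward(self, z):             # z: (B, N, d) with N = grid**3
        b, n, _ = z.shape
        x = self.act(self.fc1(z))     # patch-wise MLP
        # reshape tokens back to a 3-D grid for the depth-wise conv
        x = x.transpose(1, 2).reshape(b, -1, self.grid, self.grid, self.grid)
        x = self.dw(x).flatten(2).transpose(1, 2)
        return self.fc2(self.act(x))

ffn = ConvFeedForward(d=256, grid=16)
out = ffn(torch.rand(1, 4096, 256))
```

Setting `groups` equal to the channel count makes the convolution depth-wise, so each channel is filtered independently over its $3^3$ spatial neighbourhood.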
Specifically, the query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$ are linear projections of the patch embeddings (tokens), \ie, $\mathbf{Q}=\hat{\mathbf{Z}}\mathbf{W}^Q$, $\mathbf{K}=\hat{\mathbf{Z}}\mathbf{W}^K$ and $\mathbf{V}=\hat{\mathbf{Z}}\mathbf{W}^V$. We further extend the self-attention module to a multi-head self-attention (MHA) module \cite{vaswani2017attention}. Given $h$ attention heads, the linear projection matrices $\mathbf{W}^Q_j$, $\mathbf{W}^K_j$ and $\mathbf{W}^V_j$ for each attention head $j$ are of the same size, \ie, $\mathbf{W}^Q_j$, $\mathbf{W}^K_j$, $\mathbf{W}^V_j \in \mathbb{R}^{d \times d_h}$ and $d_h = \frac{d}{h}$. Following the self-attention mechanism \cite{vaswani2017attention,dosovitskiy2020image} in the original transformer, our attention operation for attention head $j$ is computed as: \begin{equation}\label{eq:attention} {\rm Attention}(\mathbf{Q}_j, \mathbf{K}_j, \mathbf{V}_j) = {\rm Softmax}(\frac{\mathbf{Q}_j \mathbf{K}_j^\mathsf{T}}{\sqrt{d_h}})\mathbf{V}_j \end{equation} \noindent where $d_h$ is the embedding dimension of each attention head. Finally, the attended embeddings of all attention heads are concatenated and linearly projected by a matrix $\mathbf{W}^O \in \mathbb{R}^{d \times d}$. In this study, we employ $h=2$ attention heads and an embedding dimension of $d=256$ for all the transformer encoders. \subsubsection{Progressive Spatial Transformation} We adopt the multiresolution strategy in our architectural design. Specifically, a classification head, implemented as two successive multilayer perceptron (MLP) layers with the hyperbolic tangent ({\rm Tanh}) activation function, is appended at the end of each stage in C2FViT. The classification head takes the patch embeddings averaged over all patches as input and outputs a set of affine transformation parameters.
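A minimal sketch of such a classification head is given below (illustrative PyTorch; the hidden width is an assumption, while the 12 outputs correspond to the translation, rotation, scaling and shearing parameters introduced in the following subsection):

```python
import torch
import torch.nn as nn

class AffineHead(nn.Module):
    """Two successive MLP layers with Tanh, applied to the mean patch
    embedding; emits 12 bounded geometric parameters (t, r, s, h)."""
    def __init__(self, d=256, hidden=256, n_params=12):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d, hidden), nn.Tanh(),
            nn.Linear(hidden, n_params), nn.Tanh(),  # outputs in (-1, 1)
        )

    def forward(self, tokens):               # tokens: (B, N, d)
        return self.mlp(tokens.mean(dim=1))  # average over all patches

head = AffineHead()
params = head(torch.rand(2, 4096, 256))
```

The final Tanh bounds each output in $(-1, 1)$, which can then be rescaled to the parameter ranges stated later (e.g. rotations to $[-\pi, \pi]$).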
In the intermediate stage $i$, the derived affine matrix is used to progressively transform the moving image $M_{i+1}$ with a spatial transformer \cite{jaderberg2015spatial}. The warped moving image $M_{i+1}$ is then concatenated with the fixed image $F_{i+1}$ and taken as input for stage $i+1$. With the proposed progressive spatial transformation, the linear misalignment of the input images can easily be eliminated at low resolution, and the transformers at higher levels can focus on the remaining complex misalignment between the input image pair, reducing the complexity of the problem at the higher stages. \subsection{Decoupled Affine Transformation} While directly estimating the affine matrix is feasible \cite{shao2021weakly,zhao2019unsupervised,hu2018label}, this transformation model cannot generalize to other parametric registration methods, as the affine matrix cannot be decomposed into a set of linear geometric transformation matrices, \ie, translation, rotation, scaling and shearing. In the transformation model of C2FViT, we go a step further and utilize C2FViT to predict a set of geometric transformation parameters instead of directly estimating the affine matrix. Formally, the affine registration problem is reduced to $f_\theta(F, M) = [\bm{t}, \bm{r}, \bm{s}, \bm{h}]$, where $\bm{t}, \bm{r}, \bm{s}, \bm{h} \in \mathbb{R}^3$ represent the translation, rotation, scaling and shearing parameters. The resulting affine matrix $\mathcal{A}$ can then be derived via matrix multiplication as $\mathcal{A} = \mathcal{T} \cdot \mathcal{R} \cdot \mathcal{S} \cdot \mathcal{H}$, where $\mathcal{T}$, $\mathcal{R}$, $\mathcal{S}$ and $\mathcal{H}$ denote the translation, rotation, scaling and shearing transformation matrices derived from the corresponding geometric transformation parameters ($\bm{t}$, $\bm{r}$, $\bm{s}$ and $\bm{h}$), respectively.
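The composition $\mathcal{A} = \mathcal{T} \cdot \mathcal{R} \cdot \mathcal{S} \cdot \mathcal{H}$ can be written out explicitly; below is a minimal NumPy sketch using homogeneous $4 \times 4$ matrices (the rotation-axis ordering and the shear convention are assumptions, as such conventions are implementation details not fixed by the text):

```python
import numpy as np

def rot(axis, a):
    """Homogeneous 4x4 rotation by angle a about one coordinate axis."""
    c, s = np.cos(a), np.sin(a)
    R = np.eye(4)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

def affine_matrix(t, r, s, h):
    """A = T . R . S . H from translation, rotation, scaling and shearing
    parameters (each a length-3 vector)."""
    T = np.eye(4); T[:3, 3] = t
    R = rot(0, r[0]) @ rot(1, r[1]) @ rot(2, r[2])  # assumed x-y-z order
    S = np.diag([s[0], s[1], s[2], 1.0])
    H = np.eye(4)
    H[0, 1], H[0, 2], H[1, 2] = h                   # assumed upper-triangular shear
    return T @ R @ S @ H

A = affine_matrix(t=[5, 0, 0], r=[0, 0, 0], s=[1, 1, 1], h=[0, 0, 0])
```

Rigid registration falls out by fixing $s = (1,1,1)$ and $h = (0,0,0)$, which drops $\mathcal{S}$ and $\mathcal{H}$ to the identity.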
Our proposed transformation model can easily be transferred to other parametric registration settings by pruning or modifying the undesired geometric transformation matrices. For instance, C2FViT can be applied to rigid registration by removing the scaling and shearing matrices. Furthermore, our transformation model supports geometric constraints, reducing the search space of the model during optimization. In this work, the output geometric transformation parameters are constrained as follows: the rotation and shearing parameters are constrained between $-\pi$ and $+\pi$, the translation parameters are constrained between -50\% and +50\% of the maximum spatial resolution, and the scaling parameters are constrained between 0.5 and 1.5. In this paper, we use the center of mass of the input instead of the geometric center as the origin for rotation and shearing. The center of mass $c_{I}$ of an image $I$ is defined as $c_{I} = \frac{\sum_{p \in \Omega} pI(p)}{\sum_{p \in \Omega} I(p)}$. If the background intensity of the image scan is non-zero, the origin of the rotation can instead be set to the geometric center of the image. \subsection{Unsupervised and Semi-supervised Learning}\label{sec:trans_model} In contrast to conventional affine registration methods, we parametrize the affine registration problem as a learning problem. Specifically, we formulate the function $f_\theta(F, M) = \mathcal{A}_f$, where $f_\theta$ and $\mathcal{A}_f$ represent the C2FViT model and the output affine transformation matrix, respectively.
Mathematically, our goal is to minimize the following equation: \begin{equation}\label{eq:training} \theta^* = \argmin_\theta \Big[ \mathbb{E}_{(F,M) \in D} \; \mathcal{L} \big( F,M(\phi(\mathcal{A}_f)) \big) \Big] , \end{equation} \noindent where $\theta$ denotes the learning parameters in C2FViT, the fixed and moving images are randomly sampled from the training dataset $D$, and the loss function $\mathcal{L}$ measures the dissimilarity between the fixed image and the affine-transformed moving image $M(\phi(\mathcal{A}_f))$. In our unsupervised learning setting, we use the negative NCC similarity measure with the similarity pyramid \cite{mok2020large} $\mathcal{L}_{sim}$ to quantify the distance between $F$ and $M(\phi(\mathcal{A}_f))$ such that $\mathcal{L}=\mathcal{L}_{sim}$, where $\mathcal{L}_{sim}$ is defined as: \begin{equation}\label{eq:unsupervised} \mathcal{L}_{sim}(F,M(\phi)) = \sum_{i \in [1 .. L]} -\frac{1}{2^{(L-i)}} {\rm NCC}_w(F_i, M_i(\phi)), \end{equation} \noindent where $L$ denotes the number of image pyramid levels, ${\rm NCC}_w$ represents the local normalized cross-correlation with window size $w^3$, and $(F_i, M_i)$ denotes the images in the image pyramid, \ie, $F_1$ is the image with the lowest resolution. In addition, our method is also capable of semi-supervised learning if anatomical segmentation maps of the fixed and moving images are available in the training dataset. Given the anatomical segmentation maps of the fixed image $S_F$ and the warped moving image $S_M(\phi)$, the semi-supervised C2FViT can be formulated by changing the loss function $\mathcal{L}$ in eq. \ref{eq:training} to $\mathcal{L}_{sim} + \lambda \mathcal{L}_{seg}$, where $\mathcal{L}_{seg}$ is defined as follows: \begin{equation}\label{eq:semi_sup} \mathcal{L}_{seg}(S_F,S_M(\phi)) = \frac{1}{K} \sum_{i \in [1 .. K]} \Big( 1 - \frac{2|S_{F}^i \cap S_{M}^i(\phi)|}{|S_{F}^i| + |S_{M}^i(\phi)|} \Big) \end{equation} \noindent where $K$ denotes the number of anatomical structures.
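The multi-structure Dice term of eq. \ref{eq:semi_sup} can be written out directly; a minimal NumPy sketch over integer label maps (illustrative only, since a trainable implementation would operate on differentiable soft segmentation maps):

```python
import numpy as np

def dice_loss(seg_fixed, seg_warped, labels):
    """L_seg = (1/K) * sum_i (1 - 2|S_F^i n S_M^i| / (|S_F^i| + |S_M^i|))
    over the K anatomical structures listed in `labels`."""
    terms = []
    for lab in labels:
        f = seg_fixed == lab
        m = seg_warped == lab
        inter = np.logical_and(f, m).sum()
        denom = f.sum() + m.sum()
        terms.append(1.0 - 2.0 * inter / max(denom, 1))  # guard empty structures
    return float(np.mean(terms))

a = np.zeros((8, 8, 8), dtype=int)
a[:4] = 1                                  # toy single-structure label map
loss_perfect = dice_loss(a, a, labels=[1])  # identical maps give zero loss
```

The loss is 0 for perfectly overlapping structures and approaches 1 as the overlap vanishes.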
For the semi-supervised C2FViT, we utilize all available anatomical segmentations in our experiments. In this paper, we employ $L=3$ image pyramid levels and $\lambda=0.5$. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{figure3_qual_compress.jpg} \end{center} \caption{Example coronal MR slices from the atlases (fixed images), moving images, and resulting warped images for ConvNet-Affine, VTN-Affine and our method without center of mass initialization.} \label{fig:example_result} \end{figure} \section{Experiments} \subsection{Data and Pre-processing} We evaluated our method on brain template-matching normalization and atlas-based registration using 414 T1-weighted brain MRI scans from the OASIS dataset \cite{marcus2007open} and 40 brain MRI scans from the LPBA dataset \cite{shattuck2008construction}. For the OASIS dataset, we resampled and padded all MRI scans to $256 \times 256 \times 256$ with the same resolution ($1mm \times 1mm \times 1mm$), followed by standard preprocessing steps, including motion correction, skull stripping and subcortical structure segmentation, for each MRI scan using FreeSurfer \cite{fischl2012freesurfer}. For the LPBA dataset, the MRI scans are skull-stripped, and manual delineations of the subcortical structures are provided. All brain MRI scans in our experiments are in native space, except the MNI152 brain template. We split the OASIS dataset into 255, 10 and 149 volumes for the training, validation, and test sets, respectively. For the LPBA dataset, we included all 40 scans as the test set.
\begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ccccccccccccccc} \toprule[1.5pt] \multirow{2}{*}{Method} & \multirow{2}{*}{\#Param} & \multicolumn{4}{c}{Template-Matching Normalization (MNI152)} & \multicolumn{4}{c}{Atlas-Based Registration (OASIS)} & \multicolumn{4}{c}{Atlas-Based Registration (OASIS$_{train}$ $\Rightarrow$ LPBA$_{test}$)}\\ \cmidrule(lr){3-6}\cmidrule(lr){7-10}\cmidrule(lr){11-14} & & \rule{1pt}{0ex} DSC$_4$ $\uparrow$ & DSC30$_4$ $\uparrow$ & HD95$_4$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$ & \rule{1pt}{0ex} DSC$_{23}$ $\uparrow$ & DSC30$_{23}$ $\uparrow$ & HD95$_{23}$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$ & \rule{1pt}{0ex} DSC$_3$ $\uparrow$ & DSC30$_3$ $\uparrow$ & HD95$_3$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$ \\ \midrule[1pt] Initial \hspace{0.1cm} & - & 0.14 $\pm$ 0.12 & 0.02 $\pm$ 0.02 & 29.26 $\pm$ 11.33 & - & 0.18 $\pm$ 0.14 & 0.06 $\pm$ 0.02 & 15.53 $\pm$ 6.77 & - & 0.33 $\pm$ 0.06 & 0.26 $\pm$ 0.03 & 12.43 $\pm$ 4.65 & - \\ \midrule ConvNet-Affine \cite{de2019deep} \hspace{0.1cm} & 14.7 M & 0.65 $\pm$ 0.08 & 0.56 $\pm$ 0.06 & 6.14 $\pm$ 1.33 & 0.12 $\pm$ 0.09 s & 0.57 $\pm$ 0.07 & 0.48 $\pm$ 0.05 & 4.10 $\pm$ 1.01 & 0.09 $\pm$ 0.06 s & 0.36 $\pm$ 0.07 & 0.28 $\pm$ 0.03 & 11.58 $\pm$ 4.99 & 0.11 $\pm$ 0.08 s \\ VTN-Affine \cite{zhao2019unsupervised} \hspace{0.1cm} & 14.0 M & 0.67 $\pm$ 0.06 & 0.60 $\pm$ 0.05 & 5.80 $\pm$ 1.01 & \textbf{2e-3} $\pm$ 4e-4 s & 0.57 $\pm$ 0.08 & 0.48 $\pm$ 0.06 & 4.18 $\pm$ 1.08 & \textbf{3e-3} $\pm$ 8e-4 s & 0.31 $\pm$ 0.06 & 0.24 $\pm$ 0.03 & 14.99 $\pm$ 5.34 & \textbf{2e-3} $\pm$ 6e-4 s \\ C2FViT (ours) \hspace{0.1cm} & 15.2 M & \textbf{0.71} $\pm$ 0.06 & \textbf{0.64} $\pm$ 0.04 & \textbf{5.17} $\pm$ 0.81 & 0.09 $\pm$ 0.03 s & \textbf{0.64} $\pm$ 0.06 & \textbf{0.57} $\pm$ 0.05 & \textbf{3.33} $\pm$ 0.77 & 0.08 $\pm$ 0.01 s & \textbf{0.47} $\pm$ 0.04 & \textbf{0.42} $\pm$ 0.02 & \textbf{6.55} $\pm$ 1.60 & 0.14 $\pm$ 0.06 s \\ 
\bottomrule[1.5pt] \end{tabular} } \caption{Quantitative results of template-matching normalization and atlas-based registration \emph{without center of mass initialization}. The subscript of each metric indicates the number of anatomical structures involved. $\uparrow$: higher is better, and $\downarrow$: lower is better. Initial: initial results in native space without registration.} \label{tab:main_result} \end{table*} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{ccccccccccccccc} \toprule[1.5pt] \multirow{2}{*}{Method} & \multirow{2}{*}{\#Param} & \multicolumn{4}{c}{Template-Matching Normalization (MNI152)} & \multicolumn{4}{c}{Atlas-Based Registration (OASIS)} & \multicolumn{4}{c}{Atlas-Based Registration (OASIS$_{train}$ $\Rightarrow$ LPBA$_{test}$)}\\ \cmidrule(lr){3-6}\cmidrule(lr){7-10}\cmidrule(lr){11-14} & & \rule{1pt}{0ex} DSC$_4$ $\uparrow$ & DSC30$_4$ $\uparrow$ & HD95$_4$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$ & \rule{1pt}{0ex} DSC$_{23}$ $\uparrow$ & DSC30$_{23}$ $\uparrow$ & HD95$_{23}$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$ & \rule{1pt}{0ex} DSC$_3$ $\uparrow$ & DSC30$_3$ $\uparrow$ & HD95$_3$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$ \\ \midrule[1pt] Initial (CoM) \hspace{0.1cm} & - & 0.49 $\pm$ 0.11 & 0.35 $\pm$ 0.06 & 11.03 $\pm$ 3.48 & - & 0.45 $\pm$ 0.12 & 0.29 $\pm$ 0.06 & 6.97 $\pm$ 2.89 & - & 0.45 $\pm$ 0.04 & 0.41 $\pm$ 0.01 & 6.87 $\pm$ 1.69 & - \\ \midrule Elastix \cite{klein2009elastix} \hspace{0.1cm} & - & 0.73 $\pm$ 0.07 & 0.64 $\pm$ 0.06 & 5.01 $\pm$ 1.44 & 6.6 $\pm$ 0.2 s & 0.63 $\pm$ 0.09 & 0.52 $\pm$ 0.08 & 3.89 $\pm$ 1.72 & 6.3 $\pm$ 0.2 s & \textbf{0.55} $\pm$ 0.02 & \textbf{0.53} $\pm$ 0.02 & 4.11 $\pm$ 1.01 & 6.4 $\pm$ 0.2 s \\ ANTs \cite{avants2009advanced} \hspace{0.1cm} & - & 0.74 $\pm$ 0.06 & 0.67 $\pm$ 0.05 & 4.65 $\pm$ 0.57 & 38.2 $\pm$ 3.2 s & 0.67 $\pm$ 0.08 & 0.58 $\pm$ 0.08 & 3.27 $\pm$ 1.56 & 37.7 $\pm$ 2.5 s & 0.54 $\pm$ 0.03 & 0.50 $\pm$ 0.02 & 4.53 
$\pm$ 1.38 & 46.6 $\pm$ 15.3 s \\ \midrule ConvNet-Affine \cite{de2019deep} \hspace{0.1cm} & 14.7 M & 0.70 $\pm$ 0.06 & 0.63 $\pm$ 0.05 & 5.28 $\pm$ 0.68 & 0.12 $\pm$ 0.08 s & 0.62 $\pm$ 0.06 & 0.55 $\pm$ 0.05 & 3.43 $\pm$ 0.91 & 0.10 $\pm$ 0.07 s & 0.45 $\pm$ 0.04 & 0.41 $\pm$ 0.01 & 7.46 $\pm$ 1.87 & 0.11 $\pm$ 0.08 s \\ VTN-Affine \cite{zhao2019unsupervised} \hspace{0.1cm} & 14.0 M & 0.71 $\pm$ 0.06 & 0.64 $\pm$ 0.05 & 5.11 $\pm$ 0.74 & 3e-3 $\pm$ 9e-4 s & 0.66 $\pm$ 0.06 & 0.59 $\pm$ 0.06 & 3.02 $\pm$ 0.81 & \textbf{2e-3} $\pm$ 7e-4 s & 0.43 $\pm$ 0.04 & 0.39 $\pm$ 0.02 & 8.02 $\pm$ 2.23 & \textbf{2e-3} $\pm$ 6e-4 s \\ C2FViT (ours) \hspace{0.1cm} & 15.2 M & 0.72 $\pm$ 0.06 & 0.65 $\pm$ 0.05 & 4.99 $\pm$ 0.75 & 0.12 $\pm$ 0.04 s & 0.66 $\pm$ 0.05 & 0.61 $\pm$ 0.04 & 2.96 $\pm$ 0.54 & 0.09 $\pm$ 0.02 s & 0.54 $\pm$ 0.03 & 0.51 $\pm$ 0.04 & \textbf{4.06} $\pm$ 1.12 & 0.12 $\pm$ 0.04 s \\ \midrule ConvNet-Affine-semi \cite{de2019deep} \hspace{0.1cm} & 14.7 M & 0.73 $\pm$ 0.06 & 0.66 $\pm$ 0.04 & 4.94 $\pm$ 0.76 & 0.12 $\pm$ 0.09 s & 0.63 $\pm$ 0.06 & 0.56 $\pm$ 0.06 & 3.46 $\pm$ 0.96 & 0.10 $\pm$ 0.07s & 0.43 $\pm$ 0.03 & 0.40 $\pm$ 0.02 & 6.90 $\pm$ 1.52 & 0.12 $\pm$ 0.08 s \\ VTN-Affine-semi \cite{zhao2019unsupervised} \hspace{0.1cm} & 14.0 M & 0.75 $\pm$ 0.05 & \textbf{0.70} $\pm$ 0.04 & 4.65 $\pm$ 0.66 & \textbf{2e-3} $\pm$ 6e-4 s & 0.68 $\pm$ 0.05 & 0.62 $\pm$ 0.04 & 2.94 $\pm$ 0.64 & \textbf{2e-3} $\pm$ 8e-4 s & 0.44 $\pm$ 0.04 & 0.40 $\pm$ 0.02 & 7.27 $\pm$ 1.96 & \textbf{2e-3} $\pm$ 1e-3 s \\ C2FViT-semi (ours) \hspace{0.1cm} & 15.2 M & \textbf{0.76} $\pm$ 0.05 & \textbf{0.70} $\pm$ 0.04 & \textbf{4.60} $\pm$ 0.69 & 0.13 $\pm$ 0.05 s & \textbf{0.69} $\pm$ 0.04 & \textbf{0.64} $\pm$ 0.04 & \textbf{2.81} $\pm$ 0.55 & 0.08 $\pm$ 0.02 s & 0.51 $\pm$ 0.03 & 0.47 $\pm$ 0.04 & 4.58 $\pm$ 1.71 & 0.13 $\pm$ 0.05 s \\ \bottomrule[1.5pt] \end{tabular} } \caption{Quantitative results on template-matching normalization, OASIS and LPBA dataset \emph{with center of mass 
initialization}. The subscript of each metric indicates the number of anatomical structures involved. $\uparrow$: higher is better, and $\downarrow$: lower is better. Initial (CoM): initial results with the center of mass initialization. To our knowledge, ANTs and Elastix do not have a GPU implementation.} \label{tab:main_result_COM} \end{table*} We evaluated our method on two applications of brain registration: brain template-matching normalization to MNI152 space and atlas-based registration in native space. Brain template-matching normalization is a standard application in analyzing inter-subject images and a necessary pre-processing step in most deformable image registration methods. For the task of brain template-matching normalization, we affinely register all test scans in the OASIS dataset to an MNI152 (6$^\text{th}$ generation) brain template \cite{grabner2006symmetric,evans2012brain,fischl2012freesurfer}, which was derived from 152 structural images averaged together after non-linear registration into the common MNI152 coordinate system. We train the learning-based methods with the training dataset of OASIS and the MNI152 template, employing the MNI152 template as the fixed image and MRI scans from the training dataset as moving images. For the atlas-based registration task, we randomly select 3 and 2 scans from the test sets of the OASIS and LPBA datasets, respectively, as atlases. Then, we align the remaining MRI scans in the test set to the selected atlases within the same dataset. Note that in the atlas-based registration task, we train the learning-based methods with pairwise brain registration, which randomly samples two image scans as fixed and moving images, using only the training set of the OASIS dataset, \ie, the selected atlases and the MRI scans from the LPBA dataset were not involved in the training.
Conventionally, affine registration methods often initialize the input images with center of mass (CoM) initialization by default \cite{mccormick2014itk}, which initializes the translation parameters using the CoM of the input images. Equivalently, the CoM initialization for learning-based methods can be achieved by translating the CoM of the moving image to the CoM of the fixed image. We evaluated our method without and with the CoM initialization, and the results are listed in table \ref{tab:main_result} and table \ref{tab:main_result_COM}, respectively. \subsection{Measurement} To quantify the registration performance of an affine registration algorithm, we register each subject to an atlas or the MNI152 template, propagate the subcortical structure segmentation map using the resulting affine transformation matrix, and measure the volume overlap using the Dice similarity coefficient (DSC) and the 30\% lowest DSC of all cases (DSC30). We also measure the 95th percentile of the Hausdorff distance (HD95) of the segmentation maps to represent the reliability of the registration algorithms. In the brain template-matching normalization task, 4 subcortical structures, \ie, caudate, cerebellum, putamen and thalamus, are included in the evaluation. In the atlas-based registration with the OASIS dataset, 23 subcortical structures are included, as shown in the boxplot in figure \ref{fig:box_plot}. For the atlas-based registration with the LPBA dataset, we utilize all manual segmentations of the brain scans, including cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM), for evaluation. \subsection{Baseline Methods} We compare our method with two state-of-the-art conventional affine registration methods (ANTs \cite{avants2009advanced} and Elastix \cite{klein2009elastix}) and two learning-based affine registration approaches (ConvNet-Affine \cite{de2019deep} and VTN-Affine \cite{zhao2019unsupervised}).
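The CoM initialization described above reduces to a pure translation of the moving image; a minimal NumPy sketch of the idea (illustrative, not the cited ITK implementation):

```python
import numpy as np

def center_of_mass(vol):
    """Intensity-weighted centroid c_I = sum_p p*I(p) / sum_p I(p)."""
    coords = np.indices(vol.shape).reshape(3, -1)  # (3, n_voxels) voxel indices
    w = vol.reshape(-1)
    return coords @ w / w.sum()

def com_init_translation(fixed, moving):
    """Translation that moves the moving image's CoM onto the fixed one's."""
    return center_of_mass(fixed) - center_of_mass(moving)

f = np.zeros((16, 16, 16)); f[8:10, 8:10, 8:10] = 1.0
m = np.zeros((16, 16, 16)); m[2:4, 8:10, 8:10] = 1.0
t = com_init_translation(f, m)   # shifts along the first axis only
```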
Specifically, we use the ANTs affine registration implementation in the publicly available ANTs software package \cite{avants2011reproducible}, and the Elastix affine registration algorithm in the SimpleElastix toolbox \cite{marstal2016simpleelastix}. Both methods use a 3-level multi-resolution optimization strategy with adaptive gradient descent optimization and mutual information as the similarity measure. For ConvNet-Affine and VTN-Affine, we follow their papers to implement their affine subnetworks. The initial number of feature channels for both methods is set to 16, and we follow the rules in their papers to define the growth of the network depth and the hidden dimension of each convolution layer. By default, all learning-based methods are trained in an unsupervised manner with the similarity pyramid as described in eq. \ref{eq:unsupervised}. We also extend the unsupervised learning-based methods to semi-supervised variants using the same semi-supervised objective function as our method, denoted as C2FViT-semi, ConvNet-Affine-semi and VTN-Affine-semi. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{boxplot_Dice_cvpr2022.png} \end{center} \caption{Boxplots illustrating Dice scores of each anatomical structure for C2FViT, VTN and ANTs in the atlas-based registration with the OASIS dataset. The left and right hemispheres of the brain are combined into one structure for visualization. The brain stem (BS), thalamus (Th), cerebellum cortex (CblmC), lateral ventricle (LV), cerebellum white matter (WM), putamen (Pu), caudate (Ca), pallidum (Pa), hippocampus (Hi), 3rd ventricle (3V), 4th ventricle (4V), amygdala (Am), and cerebral cortex (CeblC) are included. Methods with the (CoM) postfix are trained and tested on MRI scans with the center of mass initialization. } \label{fig:box_plot} \end{figure*} \subsection{Implementation} The learning-based methods, \ie, C2FViT, ConvNet-Affine and VTN-Affine, are developed and trained using PyTorch.
All the methods are trained or executed on a standalone workstation equipped with an Nvidia TITAN RTX GPU and an Intel Core i7-7700 CPU. The learning-based approaches are trained with half-resolution image scans, obtained by downsampling the image scans with trilinear interpolation. We then apply the resulting affine transformation to the full-resolution image scans for evaluation. We adopt the Adam optimizer \cite{kingma2014adam} with a fixed learning rate of $1e^{-4}$ and a batch size of 1 for all learning-based approaches. \subsection{Results} \subsubsection{Registration accuracy and Robustness} Table \ref{tab:main_result} shows the results of template-matching normalization and atlas-based registration of the learning-based methods \emph{without spatial initialization}. Figure \ref{fig:example_result} illustrates the qualitative results of all tasks without spatial initialization. The low initial Dice scores over all subjects suggest a large misalignment within each test case. Our proposed method is significantly better than ConvNet-Affine and VTN-Affine in terms of DSC, DSC30 and HD95 over all three tasks, suggesting that our method is robust and accurate in affine registration with large initial misalignment. We visualize the distribution of Dice scores for each subcortical structure in the boxplot in figure \ref{fig:box_plot}. Compared to VTN-Affine, the C2FViT model achieves consistently better performance across all structures. Table \ref{tab:main_result_COM} shows the results of the tasks with CoM initialization. This simple but effective initialization boosts the initial Dice scores from 0.14, 0.18 and 0.33 to 0.49, 0.45 and 0.45, respectively, implying that the initialization eliminates most of the misalignment due to translation. All three learning-based methods improve significantly on affine alignment with CoM initialization.
In the unsupervised setting, our method achieves Dice measures comparable to those of the conventional methods (ANTs and Elastix), and slightly better than those of ConvNet-Affine and VTN-Affine. It is worth noting that VTN-Affine gains a significant improvement in the registration performance of template-matching and atlas-based registration (OASIS) under CoM initialization. Nevertheless, the validity of the initial registration should be questioned when the two images are acquired with different imaging modalities; hence, the registration performance without spatial initialization should also be considered when evaluating a learning-based affine registration algorithm. With our proposed semi-supervised setting, our method C2FViT-semi achieves the best overall registration performance in the template-matching normalization and the atlas-based registration task on the OASIS dataset. \vspace{-12pt} \subsubsection{Generalizability Analysis} As shown in the results of the LPBA dataset in tables \ref{tab:main_result} and \ref{tab:main_result_COM}, ConvNet-Affine and VTN-Affine, using models trained on the OASIS dataset, fail on the test set of LPBA: compared to the initial results without registration and with spatial initialization, VTN-Affine loses 5\% and 2\% in DSC, while ConvNet-Affine gains only 3\% and 0\% in DSC, respectively. The results imply that their models cannot generalize well to an unseen dataset in practice, regardless of spatial initialization. By contrast, our C2FViT model achieves a registration performance comparable to the conventional affine registration approaches ANTs and Elastix on the LPBA dataset, reaching an average Dice score of 0.54 and an HD95 of 4.06, as shown in table \ref{tab:main_result_COM}.
While the semi-supervised setting improves the dataset-specific performance of learning-based models in template-matching normalization and atlas-based registration with the OASIS dataset, the semi-supervised models are inferior to their unsupervised counterparts on the LPBA dataset, indicating that anatomical knowledge injected into the model via semi-supervision may not generalize well to unseen data beyond the training dataset. \begin{table}[h] \begin{center} \scalebox{1.0}{ \resizebox{0.45\textwidth}{!}{ \begin{tabular}{l|lll|l} \toprule Methods & DSC$_{23}$ & HD95$_{23}$ & $\textnormal{T}_{test}$ & \#Param \\ \hline Vanilla C2FViT-s1 & 0.61 & 3.53 & 0.05 $\pm$ 0.04 s & 5.0 M \\ Vanilla C2FViT-s2 & 0.62 & 3.57 & 0.06 $\pm$ 0.05 s & 10.0 M \\ \hline Vanilla C2FViT-s3 & 0.62 & 3.46 & 0.07 $\pm$ 0.02 s & 15.2 M \\ \ \ \ \ \small{+Progressive Spatial Transformation} & 0.64 \color{ForestGreen}\small \textbf{(+0.02)} & 3.33 \color{ForestGreen}\small \textbf{(-0.13)} & 0.08 $\pm$ 0.02 s & 15.2 M \\ \ \ \ \ \small{+Center of Mass Initialization} & 0.66 \color{ForestGreen}\small \textbf{(+0.02)} & 2.96 \color{ForestGreen}\small \textbf{(-0.37)} & 0.09 $\pm$ 0.02 s & 15.2 M \\ \ \ \ \ \small{+Semi-supervision} & 0.69 \color{ForestGreen}\small \textbf{(+0.03)} & 2.81 \color{ForestGreen}\small \textbf{(-0.15)} & 0.08 $\pm$ 0.02 s & 15.2 M \\ \bottomrule \end{tabular} }} \end{center} \caption{Influence of the number of stages, progressive spatial transformation, center of mass initialization and semi-supervised learning on the C2FViT model. The C2FViT with postfix -s$\{ n \}$ denotes the C2FViT model with $n$ stages. } \label{tab:ablation} \end{table} \subsubsection{Runtime Analysis} The average runtimes (denoted as $\textnormal{T}_{test}$) of all methods in the inference phase are reported in tables \ref{tab:main_result} and \ref{tab:main_result_COM}. We report the average registration time for each task.
C2FViT, ConvNet-Affine and VTN-Affine are faster than ANTs and Elastix by orders of magnitude, thanks to GPU acceleration and the effective learning formulation. Moreover, ANTs runtimes vary widely, as its convergence depends on the degree of initial misalignment of the task. On the other hand, Elastix runtimes are stable at around 6.6 seconds per alignment task because of the early stopping strategy used during the affine alignment. \begin{table}[h] \centering \footnotesize \resizebox{0.45\textwidth}{!}{ \begin{tabular}{l|cccc} \toprule & DSC$_{23}$ $\uparrow$ & DSC30$_{23}$ $\uparrow$ & HD95$_{23}$ $\downarrow$ & $\textnormal{T}_{test}$ $\downarrow$\\ \hline C2FViT-direct & 0.63 $\pm$ 0.06 & 0.55 $\pm$ 0.04 & 3.43 $\pm$ 0.73 & 0.02 $\pm$ 4e-3 s \\ C2FViT-decouple & 0.64 $\pm$ 0.06 & 0.57 $\pm$ 0.05 & 3.33 $\pm$ 0.77 & 0.08 $\pm$ 0.01 s \\ \bottomrule \end{tabular} } \caption{Influence of the proposed decoupled affine transformation model compared to the direct affine matrix estimation model.} \label{tab:transformation_model_result} \end{table} \subsubsection{Ablation study} Table \ref{tab:ablation} shows the ablation study results of C2FViT in the OASIS atlas-based registration task. The results suggest that the proposed progressive spatial transformation, CoM initialization and semi-supervised learning consistently improve the registration performance of C2FViT without adding extra learning parameters or significant computational burden to the model. Table \ref{tab:transformation_model_result} presents the results of C2FViT using two different transformation models in the OASIS atlas-based registration task. The proposed decoupled affine transformation model is slightly better than directly learning the affine matrix in terms of registration performance, at the cost of registration runtime.
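The decoupled model predicts translation, rotation, scaling and shearing parameters separately and composes the corresponding geometric matrices, instead of regressing the twelve affine parameters directly. A schematic NumPy version of such a composition in 3D (the exact ordering and parametrization used in the paper may differ):

```python
import numpy as np

def rot(axis, angle):
    """3D rotation matrix about one coordinate axis."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3)
    R[i, i] = c; R[j, j] = c
    R[i, j] = -s; R[j, i] = s
    return R

def compose_affine(translation, angles, scales, shears):
    """Build a 3x4 affine matrix from decoupled geometric parameters:
    linear part A = R @ Scale @ Shear, plus a translation column."""
    R = rot(0, angles[0]) @ rot(1, angles[1]) @ rot(2, angles[2])
    S = np.diag(scales)
    Sh = np.eye(3)
    Sh[0, 1], Sh[0, 2], Sh[1, 2] = shears
    A = R @ S @ Sh
    return np.concatenate([A, np.asarray(translation, float).reshape(3, 1)], axis=1)
```

A practical appeal of this factorization is that sub-transformations can be pruned, e.g. dropping the shear factors yields a similarity transform, which matches the adaptability claim discussed in the text.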
Moreover, the decoupled affine transformation model can easily be adapted to other parametric registration methods by pruning or modifying the geometric transformation matrices. \section{Conclusion} We have proposed a Coarse-to-Fine Vision Transformer dedicated to 3D affine medical image registration. Unlike prior CNN-based affine registration methods, our method leverages the global connectivity of the self-attention operator and moderates the locality of the convolutional feed-forward layer to encode the global orientations, spatial positions and long-term dependencies of the image pair into a set of geometric transformation parameters. Comprehensive experiments demonstrate that our method not only achieves superior registration performance over existing CNN-based methods on data with large initial misalignment and is robust to an unseen dataset, but also, with semi-supervision, outperforms conventional methods in terms of dataset-specific performance while preserving the runtime advantage of learning-based methods. Nevertheless, there is still a gap between unsupervised learning-based approaches and conventional approaches. We believe that expanding the training dataset and introducing task-specific data augmentation techniques would likely lead to performance improvements. \newpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} The QCD phase transition manifests itself in two phenomena, deconfinement and chiral symmetry restoration. The conventional order parameter for (de)confinement is the Polyakov loop, the straight loop in the time direction. After a suitable renormalization, it is related to the free energy of a static quark via $\langle \mbox{tr}\, P\rangle\sim e^{-\beta F}$ ($\beta=1/k_B T$). The Polyakov loop is small in the confined phase (and exactly vanishes in the quenched case because of center symmetry) and increases above the critical temperature. The chiral condensate, on the other hand, is an order parameter for chiral symmetry breaking in the massless limit, as it is not invariant under chiral transformations. In the chirally broken phase, the chiral condensate is finite, while it decays above the restoration temperature. Lattice simulations at physical quark masses have revealed the QCD phase transition to be a crossover with pseudo-critical temperatures of $157\pm 4~$MeV for the chiral susceptibility and $170\pm 5~$MeV for the Polyakov loop in \cite{paper:WuppertalPapers} (see also \cite{paper:hotQCD}). The dual condensate \cite{paper:QuenchedDualCondensate} connects Pol\-ya\-kov loop and chiral condensate as two different mass limits, thus it also relates confinement and chiral symmetry breaking. It is therefore particularly interesting to see what one can learn from the dual condensate about the physical crossover. We here improve previous results \cite{paper:PreliminaryResultUnquenched} and study the dual condensate on the $N_f=2+1$ staggered dynamical configurations of \cite{paper:WuppertalPapers}. \section{Dual Condensate} The physical boundary condition for a fermion field at finite temperature is anti-periodic: $\psi(t+\beta,\vec{x}) = -\psi(t,\vec{x})$. With `quark condensate' we refer to the expectation value $\Sigma(m)=\frac{1}{V} \langle Tr[(m+D_-)^{-1}] \rangle$ with this boundary condition. 
We here also consider general boundary conditions \cite{paper:DualCondensate} \begin{equation} \psi(t+\beta,\vec{x}) = e^{i\varphi} \psi(t,\vec{x})\,, \label{eqn:bc} \end{equation} giving the general quark condensate \begin{eqnarray} \Sigma(m,\varphi) = \frac{1}{V} \langle \mbox{tr}[(m+D_\varphi)^{-1}] \rangle = \frac{1}{V} \sum_{\lambda_\varphi} \frac{1}{m \pm i \lambda_\varphi}\,, \label{eqn:Cond} \end{eqnarray} where $i\lambda_\varphi$ are the eigenvalues of the massless Dirac operator with these boundary conditions (the physical boundary condition $\varphi=\pi$ is among them). \begin{figure}[h] \includegraphics[width=0.75\linewidth]{Try.pdf} \caption{Examples of closed loops on the lattice (with time running upwards). The red links get $e^{i\varphi}$ factors from the implementation of general boundary conditions, Eqn.~(\protect\ref{eqn:implemBCs}). The green lines have winding number one.} \label{fig:LoopsInDPhi} \end{figure} The dual condensate is defined as the first Fourier component of the general quark condensate with respect to the boundary phase $\varphi$: \begin{eqnarray} \tilde{\Sigma}_1(m)=\! \int_0^{2\pi} \frac{d\varphi}{2\pi}\, e^{-i\varphi}\,\Sigma(\varphi) =\!\int_0^{2\pi} \frac{d\varphi}{2\pi V} \sum_{\lambda_\varphi} \frac{e^{-i\varphi}}{m \pm i \lambda_\varphi}\,. \label{eqn:DualCond} \end{eqnarray} The interpretation of this quantity is simplest in a lattice context. One can implement the boundary conditions (\ref{eqn:bc}) by multiplying the temporal links in one time slice, say the last, by a factor $e^{i\varphi}$ \begin{equation} U_0(t=N_t a) \Longrightarrow e^{i\varphi}U_0(t=N_t a). \label{eqn:implemBCs} \end{equation} The general quark condensate $\Sigma(m,\varphi)$ is gauge invariant and as such is composed of the contributions from all kinds of closed loops. These loops receive different powers of $e^{i\varphi}$ factors, see Fig.~\ref{fig:LoopsInDPhi}.
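On the level of the gauge configuration, Eqn.~(\ref{eqn:implemBCs}) amounts to multiplying one time slice of temporal links by a phase. A minimal NumPy sketch, where the link-array layout (one SU(3) matrix per site) is our own illustrative convention:

```python
import numpy as np

def apply_boundary_phase(U0, phi):
    """Multiply the temporal links on the last time slice by exp(i*phi),
    implementing the fermionic boundary condition of Eqn. (implemBCs).
    U0 has shape (Nt, Nx, Ny, Nz, 3, 3): one SU(3) matrix per site."""
    U = U0.copy()
    U[-1] = np.exp(1j * phi) * U[-1]
    return U
```

Only loops winding through the boundary pick up this phase, which is what makes the Fourier transform over $\varphi$ a winding-number filter.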
Now the dual condensate, as the first Fourier component of $\Sigma(m,\varphi)$ (see (\ref{eqn:DualCond})), picks out the contributions from all the loops with one $e^{i\varphi}$ factor. These are loops winding once in the temporal direction, hence the dual condensate can be viewed as a `dressed Polyakov loop'. \begin{figure}[t] \begin{minipage}{0.8\linewidth} \hskip0.2cm\includegraphics[width=0.78\linewidth]{Accumulated_QuarkCondensate_AntiPeriodicBC_ErrBar.pdf} \vskip-0.1cm\hspace{2.5cm}$\lambda$[MeV]\vskip0.1cm \includegraphics[width=0.81\linewidth]{DualCondensate_Accum_37000_0500_1000_M01_ErrorBar.pdf} \vskip-0.1cm\hspace{2.5cm}$\lambda$[MeV] \end{minipage} \caption{Accumulated contributions of eigenvalues to the quark condensate $\Sigma(m,\pi)$ (top) and the dual condensate $\tilde{\Sigma}(m)$ (bottom), both in GeV$^3$, at $T=172~$MeV and $m=100~$MeV.} \label{fig:convergence} \end{figure} In a similar way dual observables can be constructed for arbitrary gauge invariant objects (cf.~\cite{paper:DualCondensateConvergence}). The conventional infinitely thin Polyakov loop is included in the set of loops that wind once and dominates in the limit of large probe mass $m$ (which can be seen through an expansion in $1/m$). In this limit, however, more UV eigenvalues contribute to the sum in (\ref{eqn:DualCond}) \cite{paper:QuenchedDualCondensate}. We here consider all quantities unrenormalized (in \cite{paper:ProceedingsQuenchedDualCondensate} we demonstrated that the (quenched) dressed Polyakov loop has only a mild dependence on the lattice spacing). \section{Numerical Results} We use dynamical improved staggered fermion configurations from \cite{paper:WuppertalPapers} on lattices of size $8\times24^3$, for temperatures ranging from $78~$MeV to $890~$MeV and lattice spacings from $0.282~$fm to $0.028~$fm. We compute the 500 to 1000 lowest eigenvalues of $D$ for 16 or 8 different boundary conditions $\varphi\in[0,2\pi]$ with ARPACK.
We currently have completed the spectrum calculations for 20 to 35 configurations at temperatures between $100~$MeV and $200~$MeV. \begin{figure}[t] \centering \begin{minipage}{0.8\linewidth} \hskip0.13cm\includegraphics[width=0.782\linewidth]{Spectrum_Density_Spectrum_34000_latdat_090701_174953_UP_0500_1000_BoundaryCondition_PrdAPd.pdf} \vskip-0.1cm\hspace{2.5cm}$\lambda$[MeV]\vskip0.1cm \includegraphics[width=0.8\linewidth]{Spectrum_Density_Spectrum_46600_latdat_090609_015324_UP_1000_1500_BoundaryCondition_PrdAPd.pdf} \vskip-0.1cm\hspace{2.5cm}$\lambda$[MeV] \end{minipage} \caption{Distribution of the lowest eigenvalues for the confined phase ($T=78~$MeV, top) and the deconfined phase ($T=892~$MeV, bottom). We compare histograms for $\varphi=0$ (red dashed) and $\varphi=\pi$ (blue).} \label{fig:DiracOperatorSpectrum} \end{figure} The first problem we investigate is the convergence of the sums (\ref{eqn:Cond}) and (\ref{eqn:DualCond}) when truncated to the number of available eigenvalues (in physical units). As the contribution of a $\pm i \lambda$ pair to the condensates is $2m/(\lambda^2+m^2)$, it is clear that this contribution decays for $\lambda\gg m$ and only the lowest part of the spectrum contributes. For the dual condensate there is an additional effect because it only probes the difference in the response of the spectra to changing boundary conditions. A strong response is manifest only in the IR spectrum \cite{paper:preQuenchedDualCondensate,paper:DualCondensateConvergence}, as can be seen in Fig.~\ref{fig:DiracOperatorSpectrum}, and for dual condensates thus only the IR contributes. Fig.~\ref{fig:convergence} illustrates this effect: when using the available spectrum, the physical chiral condensate from (\ref{eqn:Cond}) has not converged, while the dual condensate has due to the additional Fourier transform in (\ref{eqn:DualCond}).
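The truncated sums can be sketched directly: each $\pm i\lambda$ pair contributes $2m/(\lambda^2+m^2)$ to (\ref{eqn:Cond}), and the dual condensate (\ref{eqn:DualCond}) is obtained by a discrete Fourier average over the sampled boundary angles. A hedged NumPy illustration (our own discretization of the Fourier integral, with uniformly sampled angles):

```python
import numpy as np

def condensate(eigvals, m, V):
    """Truncated quark condensate from the lowest eigenvalues:
    each +/- i*lambda pair contributes 2m/(lambda^2 + m^2)."""
    return np.sum(2.0 * m / (eigvals**2 + m**2)) / V

def dual_condensate(spectra, phis, m, V):
    """First Fourier component of Sigma(m, phi) over the boundary angle,
    approximated by an average over uniformly sampled angles in [0, 2*pi)."""
    sig = np.array([condensate(ev, m, V) for ev in spectra])
    w = np.exp(-1j * np.asarray(phis))
    return np.sum(w * sig) / len(phis)
```

A spectrum that does not react to $\varphi$ gives a vanishing dual condensate, which is the winding-number filter at work.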
Fig.~\ref{fig:GeneralQuarkCondensateAndBC} shows the (unrenormalized) general quark condensate as a function of the boundary angle $\varphi$ at different temperatures. It is flat at low temperature and depends strongly on $\varphi$ for high temperatures. Similar results were found also in non-lattice approaches \cite{paper:OtherPapersofSimilarResults}. \begin{figure}[t] \begin{minipage}{0.95\linewidth} \centering \includegraphics[width=0.8\linewidth]{GeneralQuarkCondensate_140MeV_To_350MeV_AxesLabel.pdf} \centering \includegraphics[width=0.49\linewidth]{Intg_Spectrum_34000_latdat_090701_174953_UP_0500_1000_M01.pdf} \includegraphics[width=0.48\linewidth]{QuarkCondPhi_44800_latdat_090609_014913_UP_1000_1500_M001.pdf}\\ \vskip-0.1cm\hspace{0.5cm}$\varphi$ \hspace{3.4cm}$\varphi$ \end{minipage} \caption{The general quark condensate $\Sigma(m,\varphi)$ in GeV$^3$ as a function of temperature and boundary angle with $m=1~$MeV (top). We remark that for large $T$ and $\varphi \sim \pi$ we expect further corrections until full convergence.
The lower panels zoom into the confined phase (left, $T=78~$MeV, $m=100~$MeV) and the deconfined phase (right, $T=740~$MeV, $m=10~$MeV), respectively (each for a single configuration).} \label{fig:GeneralQuarkCondensateAndBC} \end{figure} \begin{figure}[t] \centering \begin{minipage}[b]{0.75\linewidth} \includegraphics[width=0.9\linewidth]{BareDualCondensate_60MeV_to_250MEV.pdf} \vskip-0.1cm\hspace{2.5cm}$T$[MeV]\vskip0.1cm \includegraphics[width=0.9\linewidth]{BarePolyakovLoop_60MeV_to_250MEV.pdf} \vskip-0.1cm\hspace{2.5cm}$T$[MeV] \end{minipage} \caption{The dual condensate $\tilde{\Sigma}(m)$ in GeV$^3$ at $m=60~$MeV (top) and the Polyakov loop (bottom) as a function of temperature.} \label{fig:DualCondensate} \end{figure} \begin{figure}[h] \centering \begin{minipage}[b]{0.75\linewidth} \includegraphics[width=0.92\linewidth]{TLogBareDualCondensate_60MeV_to_250MEV.pdf} \vskip-0.1cm\hspace{2.5cm}$T$[MeV]\vskip0.1cm \hskip0.13cm\includegraphics[width=0.9\linewidth]{TLogBarePolyakovLoop_60MeV_to_250MEV.pdf} \vskip-0.1cm\hspace{2.5cm}$T$[MeV] \end{minipage} \caption{The `free energy' $-\log \tilde{\Sigma}/\beta$ from the dual condensate at $m=60~$MeV (top) and from the Polyakov loop ($-\log |\langle\mbox{tr}~P\rangle|/\beta$, bottom) vs.\ temperature.} \label{fig:BarePolyakovLoop} \end{figure} Correspondingly, the dual condensate is small at low temperature and larger at high temperatures. It should serve as an order parameter for deconfinement, as the Polyakov loop does. In the quenched case this statement can be made exact because of the same behavior under center transformations \cite{paper:QuenchedDualCondensate}. Here both quantities have at least the same qualitative behavior. In Fig.~\ref{fig:DualCondensate} we show our results for the absolute value of the unrenormalized dual condensate as a function of temperature, compared to the conventional Polyakov loop.
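The conventional (thin) Polyakov loop used in this comparison is the spatially averaged trace of the ordered product of temporal links. Schematically in NumPy, with links stored as one SU(3) matrix per site (an illustrative layout, not the production code):

```python
import numpy as np

def polyakov_loop(U0):
    """Volume-averaged traced Polyakov loop: ordered product of the temporal
    links along the time direction at each spatial site, traced and averaged.
    U0 has shape (Nt, Nx, Ny, Nz, 3, 3)."""
    Nt = U0.shape[0]
    P = U0[0]
    for t in range(1, Nt):
        P = P @ U0[t]  # batched matrix product over all spatial sites
    return np.trace(P, axis1=-2, axis2=-1).mean() / 3.0  # normalized by N_c
```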
We also plot the negative logarithms of both divided by the inverse temperature (Fig.~\ref{fig:BarePolyakovLoop}). For the Polyakov loop the latter has the interpretation of the free energy of an infinitely heavy quark. In analogy to that we might view the same quantity from the dressed Polyakov loops with mass parameter $m$ as the free energy of a test quark with finite mass $m$. All of these quantities show an order parameter behavior in the temperature range of $100$ to $200~$MeV. In the future we want to identify the critical temperatures (through inflection points and susceptibilities) and study their mass dependence. We thank Szabolcs Borsanyi for useful correspondence. F.B.~and B.Z.~are supported by DFG (BR 2872/4-2).
\section{Introduction} \emph{Robust principal component analysis} aims to find a low rank subspace that best approximates a data matrix $M$ which has corrupted entries. It is defined as the problem of decomposing a given matrix $M$ into the sum of a low rank matrix $L$, whose column subspace gives the principal components, and a sparse matrix $S$, which corresponds to the matrix of outliers. The standard method via \emph{convex optimization} has significantly worse computation time than the \emph{singular value decomposition} (SVD) \citep{NIPS2009_3704, NIPS2010_4005,Candes:2011,5707106,5934412,NIPS2011_4434}. Recent work on efficient algorithms for robust PCA has notably reduced the running time \citep{Rodriguez2013, NIPS2014_5430, Chen, NIPS2016_6445, pmlr-v70-cherapanamjeri17a}. However, in some cases, it is of utmost importance to \emph{instantaneously} produce robust low rank approximations of a given matrix. In particular, in finance we need robust low rank estimates of covariance matrices, instantaneously and for long time series of multiple assets. For instance, this is the case for high-frequency trading \citep{ait2010high,ait2017,ait2019}. Moreover, it is important to have \emph{one} procedure applicable to different data that provides such estimates and is insensitive to small noise perturbations, which is not the case in classical approaches. Our contribution lies precisely in this area: we introduce an instantaneous algorithm for robust PCA for symmetric positive semidefinite matrices. Specifically, we provide a simple deep learning based algorithm which ensures continuity with respect to the input matrices, such that small perturbations lead to small changes in the output, while this is not the case for classical methods. Moreover, once the deep neural network is trained, only an evaluation of it is needed to decompose any new matrix.
Therefore the computation time is negligible, which is an undeniable advantage over the classical algorithms. To support our claim, theoretical guarantees are provided for the expressiveness of our neural network architecture, the convergence to an optimal solution of the learning problem and the convergence of the optimization scheme. \section{Related Work} Let $||M||_{\ell_1} = \sum_{i,j}{|M_{i,j}|}$ denote the $\ell^1$-norm of the matrix $M$. For a given $\lambda>0$, the RPCA is formulated as \begin{equation*} \min_{L,S} \hbox{rank} (L)+ \lambda ||S||_{\ell_1} \quad \hbox{s.t. }~ M = L + S\,. \end{equation*} Although it is $\mathcal{NP}$-hard, approximated minimization problems can be solved in polynomial time. The most popular method to solve RPCA is via \textit{convex relaxation} \citep{NIPS2009_3704, NIPS2010_4005,Candes:2011,5707106,5934412,NIPS2011_4434}. It consists of a nuclear-norm-regularized matrix approximation which needs a time-consuming full Singular Value Decomposition (SVD) in each iteration. Let $||M||_{*} = \sum_{i}{\sigma_i(M)}$ denote the nuclear norm of $M$, i.e. the sum of the singular values of $M$. Then for a given $\lambda>0$, the problem can be formulated as \begin{equation} \label{convex-relaxation-formulation} \min_{L,S} ||L||_{*}+ \lambda ||S||_{\ell_1} \quad \hbox{s.t. }~ M = L + S. \end{equation} The Principal Component Pursuit (PCP) \cite{Candes:2011} is considered the state-of-the-art technique and solves \eqref{convex-relaxation-formulation} by an \emph{alternating directions algorithm}, which is a special case of a more general class of augmented Lagrange multiplier (ALM) algorithms known as \emph{alternating directions methods}. The Inexact ALM (IALM) \cite{NIPS2011_4434} is a computationally improved version of the ALM algorithm that reduces the number of SVDs needed.
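The alternating-directions idea behind PCP reduces to two proximal steps per iteration, singular value thresholding for $L$ and elementwise soft thresholding for $S$, plus a multiplier update. The sketch below is a barebones version with a fixed penalty $\mu$ and no stopping criterion, so it is not the exact IALM schedule (the $\lambda$ and $\mu$ defaults are common heuristics, not the cited papers' exact choices):

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft thresholding (the l1 proximal operator)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (the nuclear-norm proximal operator):
    one full SVD per call, which is the costly step discussed in the text."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=200):
    """Barebones principal component pursuit via alternating directions."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```

The full SVD inside `svt` at every iteration is precisely why these methods cannot produce decompositions instantaneously for large matrices.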
As the previous algorithms need time-consuming SVDs in each iteration, several non-convex algorithms have been proposed to solve \eqref{convex-relaxation-formulation} for a more efficient decomposition of high-dimensional matrices \citep{Rodriguez2013, NIPS2014_5430, Chen, NIPS2016_6445, pmlr-v70-cherapanamjeri17a}. In particular, the Fast Principal Component Pursuit (FPCP) \citep{Rodriguez2013} is an alternating minimization algorithm for solving a variation of \eqref{convex-relaxation-formulation}. By incorporating the constraint into the objective, removing the costly nuclear norm term, and imposing a rank constraint on $L$, the problem becomes \begin{equation*} \min_{L,S} \tfrac{1}{2}||M-L-S||^2_{F}+ \lambda ||S||_{\ell_1} \quad \hbox{s.t.} \quad \hbox{rank}(L)=r \,. \end{equation*} The authors apply an alternating minimization to solve this problem using a partial SVD. The RPCA via Gradient Descent (RPCA-GD) \cite{NIPS2016_6445} solves \eqref{convex-relaxation-formulation} via a gradient descent method. Low rank Cholesky factorizations are used to solve semidefinite programs \citep{Burer01anonlinear, journee2008low, JourneBach2010, pmlr-v49-bandeira16, de2014global, Boumal2016, li2019non, Rong2016}. We are not only interested in a low rank approximation, but in a \emph{robust} low rank approximation. In that sense, we estimate the low rank approximation of a matrix which can be corrupted by outliers. Therefore, we use the $\ell_1$ norm instead of the Frobenius norm as is done in those works. The work most closely related to ours is \citep{baes2019lowrank}, where the minimization problem \begin{equation}\label{eq_target_problem} \min_{U \in \mathbb{R}^{n \times k}} \lVert M - U U^\top \rVert_{\ell_1} \end{equation} is considered. A neural network parametrization $U_\theta(M)$ of the matrix $U$ is optimized with gradient descent to find an approximate solution for any fixed $M$. Here, for every new input $M$ the optimization has to be repeated.
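For a single fixed $M$, problem \eqref{eq_target_problem} can be attacked directly with a subgradient method on $U$. The sketch below is a generic illustration of this per-matrix optimization (decaying step sizes, best-iterate tracking), not the cited implementation:

```python
import numpy as np

def l1_objective(M, U):
    """Objective of the per-matrix problem: ||M - U U^T||_1."""
    return np.abs(M - U @ U.T).sum()

def l1_lowrank_fit(M, k, lr=0.05, n_iter=500, seed=0):
    """Subgradient descent on U -> ||M - U U^T||_1 with decaying steps,
    keeping the best iterate seen so far."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((M.shape[0], k))
    best_U, best_val = U.copy(), l1_objective(M, U)
    for t in range(n_iter):
        R = M - U @ U.T
        G = -2.0 * np.sign(R) @ U  # subgradient; R is symmetric for symmetric M
        U = U - lr / np.sqrt(t + 1.0) * G
        val = l1_objective(M, U)
        if val < best_val:
            best_U, best_val = U.copy(), val
    return best_U, best_val
```

The point of the contrast made in the text is that this inner loop must be rerun for every new input matrix, whereas a trained network replaces it by a single forward pass.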
In contrast, we train a neural network on a synthetic training dataset and the learnt parameters can be reused for any unseen matrix $M'$. Other related problems are \emph{matrix factorization} \citep{NIPS2000_1861, 4685898, Trigeorgis:2014, Kuang2012}, \emph{matrix completion} \citep{Xue:2017, 2018arXiv181201478N, Sedhain:2015}, \emph{sparse coding} \citep{gregor2010learning, ablin2019learning}, \emph{robust subspace tracking} \citep{he2011online, narayanamurthy2018nearly} and \emph{anomaly detection} \citep{chalapathy2017robust}. \cite{solomon2019deep} suggested a deep robust PCA algorithm tailored to clutter suppression in ultrasound, which still depends on applying SVDs in each layer of their convolutional recurrent neural network. A key component of our approach is the universal approximation capability of the deep neural model implementing Denise. This result is not covered by any of the available universal approximation theorems, including those for standard feedforward neural networks \cite{hornik1989multilayer, barron1992neural, KidgerLyons2020} and those concerning non-euclidean geometries \cite{kratsios2020noneuclidean}. In contrast, our universal approximation result guarantees that we can generically approximate any function encoding both the geometric and algebraic structure of the low-rank plus sparse decomposition problem. \section{Denise}\label{sec:main} We present \textit{{Denise}}\footnote{The name \textit{Denise} comes from \textbf{De}ep and \textbf{Se}midefi\textbf{ni}te.}, an algorithm that solves the robust PCA for positive semidefinite matrices, using a deep neural network. The main idea is the following: according to the Cholesky decomposition, a positive semidefinite symmetric matrix $L \in \mathbb{R}^{n\times n}$ can be decomposed into $L=UU^{T}$. If $U$ has $n$ rows and $r$ columns, then the matrix $L$ will be of rank $r$ or less. 
In order to obtain the desired decomposition $M = L + S$, we therefore reduce the problem to finding a matrix $U\in\mathbb{R}^{n\times r}$ such that $S := M-UU^{T}$ is a sparse matrix, i.e. a matrix that contains many zero entries. In particular, we define the matrix $U = U_{\theta}(M) \in \mathbb{R}^{n \times r}$ as the output of a neural network. The natural training objective of the neural network is then to achieve sparsity of $S_{\theta}(M) := M - U_{\theta}(M)U_{\theta}(M)^\top$. A good and widely used approximation of this objective is to minimize the $\ell_1$-norm of $S_{\theta}(M)$ as in \eqref{eq_target_problem}. To achieve this, the neural network can be trained in a supervised or an unsupervised way, as explained below, depending on the available training dataset. Once Denise is trained, we only need to evaluate it in order to find the low rank plus sparse decomposition \begin{equation*} M = \underbrace{U_{\theta}(M)U_{\theta}(M)^{T}}_{L} + \underbrace{M-U_{\theta}(M)U_{\theta}(M)^{T}}_{S} \end{equation*} of any new positive semidefinite matrix $M$. Therefore, Denise considerably outperforms all existing algorithms in terms of speed, as they need to solve an optimization problem for each new matrix. By construction, Denise guarantees that the estimated matrix $L=U_\theta(M) U_\theta(M)^T$ is positive semidefinite, even if the input matrix $M$ is not. For example, when computing covariance matrices of asynchronous data, they may not be positive semidefinite. This occurs in financial data when the quotations used for the covariance estimator matrices are not taken at the same frequency \citep{Higham2016, BARNDORFFNIELSEN2011149,Corsi2012,peluso2014bayesian}.
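Once trained, the decomposition is a single forward pass. The sketch below wires the pieces together with a placeholder `net` standing in for the trained network, any map from the half-vectorized input to $n\cdot r$ outputs fits this interface; the actual architecture used by Denise is not reproduced here:

```python
import numpy as np

def tril_vec(M):
    """Stack the lower-triangular entries of a symmetric matrix into a
    vector of length n(n+1)/2 (the network input)."""
    i, j = np.tril_indices(M.shape[0])
    return M[i, j]

def denise_decompose(M, net, r):
    """One forward pass: U = net(h(M)) reshaped to n x r, then
    L = U U^T (positive semidefinite by construction) and S = M - L."""
    n = M.shape[0]
    U = net(tril_vec(M)).reshape(n, r)
    L = U @ U.T
    return L, M - L
```

Note that $L = UU^{T}$ is positive semidefinite whatever the network outputs, which is the structural guarantee emphasized in the text.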
\subsection{Supervised Learning} If a training set is available where for each matrix $M$ an optimal decomposition into $L+S$ is known, then the network can be trained directly to output the correct low rank matrix, by minimizing the supervised loss function \begin{equation}\label{equ:supervised loss function} \begin{split} \Phi_s(\theta) :&= \mathbb{E} \left[||L-U_{\theta}(M)U_{\theta}(M)^{T}||_{\ell_1} \right] \\ & = \mathbb{E} \left[||S-S_{\theta}(M) ||_{\ell_1} \right] \,. \end{split} \end{equation} A synthetic dataset of positive semidefinite matrices with known decomposition can be created by simulating Cholesky factors and sparse matrices (Section~\ref{sec:numerical results}). \subsection{Unsupervised Learning} In standard applications, only the matrix $M$ is known but no optimal decomposition. In this case, the neural network can be trained by minimizing the unsupervised loss function \begin{equation} \label{eq:loss} \begin{split} \Phi_u(\theta) :&= \mathbb{E} \left[||M-U_{\theta}(M)U_{\theta}(M)^{T}||_{\ell_1} \right]\\ & = \mathbb{E} \left[||S_{\theta}(M)||_{\ell_1} \right] \,. \end{split} \end{equation} \subsection{Combining Supervised Learning and Unsupervised Finetuning} Often the amount of available training data of a real world dataset is limited. Therefore, we consider the following training procedure. First, Denise is trained with the supervised loss function on a large synthetic dataset, where the decomposition is known (Section \ref{Training}). Then the trained network can be finetuned with the unsupervised loss function on a real world training dataset of matrices, where the optimal decomposition is unknown. This way, Denise can incorporate the peculiarities of the real world dataset.
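The synthetic training pairs and the supervised loss \eqref{equ:supervised loss function} can be sketched as follows; the sizes and sparsity level are illustrative placeholders, not the generation parameters used in the experiments:

```python
import numpy as np

def make_sample(n, k, sparsity=0.95, rng=None):
    """Synthetic training pair: L = U U^T with a random n x k factor,
    plus a sparse symmetric S, so that M = L + S is known exactly."""
    if rng is None:
        rng = np.random.default_rng(0)
    U = rng.standard_normal((n, k))
    L = U @ U.T
    S = rng.standard_normal((n, n)) * (rng.random((n, n)) > sparsity)
    S = np.triu(S) + np.triu(S, 1).T  # symmetrize the sparse part
    return L + S, L, S

def supervised_loss(L_true, L_pred):
    """Monte Carlo summand of the supervised loss: ||L - L_theta(M)||_1."""
    return np.abs(L_true - L_pred).sum()
```

Because the pair $(L, S)$ is known by construction, the supervised loss can be evaluated exactly on such samples, while the unsupervised loss only needs $M$ itself.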
\section{Theoretical Guarantees for Denise} We provide theoretical guarantees that, on every compact subset of symmetric positive semidefinite matrices, the function performing the optimal low-rank plus sparse decomposition can be approximated arbitrarily well by the neural network architecture of Denise. Furthermore, we show that the optimization procedure by which Denise is trained converges to a stationary point of the robust PCA problem. \subsection{Notation} Let $\mathbb{S}_{n}$ be the set of $n$-by-$n$ symmetric matrices, $P_n \subset \mathbb{S}_{n}$ the subset of positive semi-definite matrices and $P_{k,n} \subset P_n$ the subset of matrices with rank at most $k \leq n$. We consider a matrix $M=\left[M_{i,j}\right]_{i,j}\in P_n$, e.g., a covariance matrix. The matrix $M$ is to be decomposed as a sum of a matrix $L=\left[L_{i,j}\right]_{i,j}\in P_{k,n}$ of rank at most $k$ and a sparse matrix $S=\left[S_{i,j}\right]_{i,j}\in P_n$. By the Cholesky decomposition \cite[Thm 10.9 b]{higham2002accuracy}, we know that the matrix $L$ can be represented as $L=UU^{T}$, where $U=\left[U_{i,j}\right]_{i,j}\in\mathbb{R}^{n\times k}$; thus $M=UU^{T}+S$. Let $f_\theta : \mathbb{R}^{n(n+1)/2} \to \mathbb{R}^{nk}$ be a feedforward neural network with parameters $\theta$. As the matrix $M$ is symmetric, the dimension of the input can be reduced from $n^2$ to $n(n+1)/2$ by taking the lower triangular part of $M$, which we then convert to a vector. We combine these two transformations in the operator $h$ \begin{equation*} h :\mathbb{S}_{n} \to \mathbb{R}^{n(n+1)/2}, \quad M \mapsto (M_{1,1}, M_{2,1}, M_{2,2},\dots,M_{n,1}, \dots , M_{n,n})^{T}\,.
\end{equation*} Similarly, every vector $X$ of dimension $nk$ can be represented as an $n$-by-$k$ matrix with the operator $g$ defined as \begin{equation*} g :\mathbb{R}^{nk} \to \mathbb{R}^{n \times k}, \quad X \mapsto \begin{pmatrix} X_1 & \dots & X_{k} \\ \vdots & & \vdots \\ X_{(n-1)k + 1} & \dots & X_{(n-1)k + k} \end{pmatrix}. \end{equation*} Using $h$ and $g$, the matrix $U$ can be expressed as the output of the neural network $ U_{\theta} (M) = g(f_{\theta}\left(h(M)\right))$ and the low rank matrix can be expressed as $L_{\theta} (M) = \rho(f_{\theta}\left(h(M)\right))$ for \begin{equation*} \begin{aligned} \rho: \rrflex{k n} &\rightarrow P_{k,n}, \quad X \mapsto g(X) g(X)^{T} . \end{aligned} \end{equation*} We assume to have a set $\mathcal{Z} \subset \mathbb{S}_{n} \times P_{k,n}$ of training sample matrices $(M,L)$, which is equipped with a probability measure $\P$, i.e. the distribution of the training samples. In the supervised case, we assume that $L$ is an optimal low rank matrix for $M$, while in the unsupervised case, where $L$ is not used, it can simply be set to $0$. For a given training sample $(M,L)$, the supervised and unsupervised loss functions $\varphi_s, \varphi_u : \Omega \times \mathcal{Z} \to \mathbb{R}$ are defined as \begin{equation} \label{eq:supervised_loss} \varphi_s(\theta, M, L) = \left\Vert L- \rho\left(f_{\theta}\left(h(M)\right)\!\right)\right\Vert_{\ell_1} \end{equation} and \begin{equation} \label{eq:unsupervised_loss} \varphi_u(\theta, M, L) = \left\Vert M- \rho\left(f_{\theta}\left(h(M)\right)\!\right)\right\Vert_{\ell_1}\,. \end{equation} Then, the overall loss functions as defined in \eqref{equ:supervised loss function} and \eqref{eq:loss} can be expressed, for $\varphi \in \{ \varphi_s, \varphi_u \}$, as \begin{equation*} \Phi(\theta) = \mathbb{E}_{(M,L) \sim \P}\left[\varphi(\theta, M, L)\right].
\end{equation*} Moreover, the Monte Carlo approximations of these loss functions are given by \begin{equation*} \hat \Phi^N(\theta) = \frac{1}{N} \sum_{i=1}^N \varphi(\theta, M_i, L_i), \end{equation*} where $(M_i, L_i)$ are i.i.d. samples of $\P$. Denise can be trained using Stochastic Gradient Descent (SGD). A schematic version of these supervised and unsupervised training schemes is given in the pseudo-Algorithm~\ref{eq:gradient-method}. \begin{algorithm}[H] \begin{algorithmic} \caption{Training of Denise} \label{eq:gradient-method} \STATE Fix $\theta_0 \in \Omega, N \in \mathbb{N}$ \FOR{$j\geq 0$} \STATE Sample i.i.d. matrices $(M_1, L_1), \dotsc, (M_N, L_N) \sim \P$ \STATE Compute the gradient $G_j := \tfrac{1}{N} \sum_{i=1}^N \nabla_{\theta}{\varphi}(\theta_{j}, M_i, L_i)$ \STATE Determine a step-size $h_{j}>0$ \STATE Set $\theta_{j+1}=\theta_{j}-h_{j} G_j$ \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Solution Operator to the Learning Problem} Our first result guarantees that there is a (non-linear) solution operator to~\eqref{eq_target_problem}. Thus, there is an optimal low rank plus sparse decomposition for Denise to learn. \begin{theorem}\label{theorem_univ_approx} Fix a Borel probability measure ${\mathbb{P}}$ on $P_{n}$ and $0<\varepsilon\leq 1$. Then: \begin{enumerate}[(i)] \item For every $M \in P_n$, the set of optimizers, $\underset{U \in \rrflex{n \times k}}{\operatorname{argmin}}\, \|M - U U^T\|_{\ell_1}$, is non-empty and every $U \in \underset{U \in \rrflex{n \times k}}{\operatorname{argmin}}\, \|M - U U^T\|_{\ell_1}$ satisfies \[L := U U^T \in \underset{L \in P_{k,n}}{\operatorname{argmin}}\, \|M - L\|_{\ell_1},\] \item There exists a Borel-measurable function $f:P_n \rightarrow \rrflex{n \times k}$ satisfying \begin{equation*} f(M) \in \underset{U \in \rrflex{n \times k}}{\operatorname{argmin}}\, \|M - U U^T\|_{\ell_1} .
\end{equation*} \item There exists a compact $K_{\varepsilon}\subseteq P_{n}$ such that ${\mathbb{P}}(K_{\varepsilon})\geq 1-\varepsilon$ and $f$ is continuous on $K_{\varepsilon}$. \end{enumerate} \end{theorem} Theorem~\ref{theorem_univ_approx} (iii) guarantees that the map \begin{equation} f^{\star}:K_{\varepsilon} \ni M \mapsto f(M)f(M)^{\top} \in P_{k,n} \label{eq_target_function_optimal_decomposer} , \end{equation} is continuous and can be written as the square of a continuous function $f$ from $K_{\varepsilon}$ to $\rrflex{n\times k}$. \subsection{Novel Universal Approximation Theorem} We introduce a structured subset of $\rrflex{n\times n}$-valued functions encapsulating the relevant structural properties of the solution map in~\eqref{eq_target_function_optimal_decomposer}. We fix a compact $X\subset P_n$. Denise's ability to optimally solve~\eqref{eq_target_problem} is contingent on its ability to uniformly approximate any function in $ \sqrt{C}(X,P_{k,n}) := \left\{ f \in C(X,P_{k,n}) \left| \, \exists \tilde{f} \in C(X,\rrflex{n \times k}) :\, f = \tilde{f}\tilde{f}^{\top} \right. \right\} . $ Unlike $C(X,\rrflex{n\times k})$, functions in $\sqrt{C}(X,P_{k,n})$ always output meaningful candidate solutions to~\eqref{eq_target_problem} since their outputs are necessarily low-rank, symmetric, and positive semi-definite matrices. Due to this non-Euclidean structure, the next result is not covered by the standard approximation theorems of \cite{hornik1989multilayer} and \cite{KidgerLyons2020}. Similarly, every function in $\sqrt{C}(X,P_{k,n})$ encodes the algebraic property~\eqref{eq_target_function_optimal_decomposer}; namely, it admits a pointwise Cholesky decomposition which is a continuous $\rrflex{n\times k}$-valued function. Thus, $\sqrt{C}(X,P_{k,n})$ encapsulates more algebraic structure than $C(X,P_{k,n})$ does.
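To make the reshaping maps concrete, the following NumPy sketch implements $h$, $g$ and $\rho$ and composes them with a placeholder network $f_{\theta}$. The half-vectorization used for $h$ and the linear stand-in for $f_{\theta}$ are our assumptions for illustration, not the trained model.

```python
import numpy as np

def h(M):
    # assumed half-vectorization: upper-triangular entries of the
    # symmetric matrix M, giving a vector of length n(n+1)/2
    return M[np.triu_indices(M.shape[0])]

def g(x, n, k):
    # row-major reshape of a vector of length n*k into an n-by-k matrix,
    # matching the definition of g in the text
    return x.reshape(n, k)

def rho(x, n, k):
    # rho(x) = g(x) g(x)^T: symmetric, positive semi-definite, rank <= k
    U = g(x, n, k)
    return U @ U.T

def denise_low_rank(M, f_theta, k):
    # L_theta(M) = rho(f_theta(h(M))): the low-rank part produced by Denise;
    # f_theta stands in for the trained feed-forward network
    n = M.shape[0]
    return rho(f_theta(h(M)), n, k)
```

By construction, the output of \texttt{denise\_low\_rank} is always symmetric, positive semi-definite and of rank at most $k$, regardless of the map plugged in for $f_{\theta}$.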
This algebraic structure puts approximation in $\sqrt{C}(X,P_{k,n})$ outside the scope of the purely geometric approximation theorems of \cite{kratsios2020noneuclidean}. Our next result concerns the universal approximation capabilities in $\sqrt{C}(X,P_{k,n})$ of the set of all deep neural models $\hat{f}:P_n\to P_{k,n}$ with representation $\hat{f}=\rho \circ f_{\theta} \circ h$, where $f_{\theta}:\rrflex{\frac{n(n+1)}{2}}\rightarrow \rrflex{kn}$ is a deep feedforward network with activation function $\sigma$. Denote the set of all such models by $\NN[\rho, h]$. The width of $\hat{f}\in \NN[\rho, h]$ is defined as the width of $f_{\theta}$. The activation function $\sigma$ defining $f_{\theta}$ is required to satisfy the following condition of~\cite{KidgerLyons2020}. \begin{assumption}\label{cond_KL} The activation function $\sigma\in C({\mathbb{R}})$ is non-affine and differentiable at at least one point, with non-zero derivative at that point. \end{assumption} \begin{theorem}\label{theorem_UAT_AlgebroGeo} Let $X\subset P_n$ be compact and let $\sigma\in C({\mathbb{R}})$ satisfy Condition~\ref{cond_KL}. For each $\varepsilon>0$ and each $f\in \sqrt{C}(X,P_{k,n})$, there is an $\hat{f}\in \NN[\rho, h]$ of width at most $\frac{n(n+2k+1)+4}{2}$ such that: \begin{equation}\label{eq_lem_uni_condition} \max_{M \in X}\, \left\| f\left(M\right) - \hat{f}(M) \right\|_{\ell_1} <\varepsilon . \end{equation} \end{theorem} Theorems~\ref{theorem_univ_approx} and~\ref{theorem_UAT_AlgebroGeo} imply that $\NN[\rho, h]$ can approximate $f^{\star}$ with arbitrarily high probability. \begin{corollary}\label{cor_learnability} Fix a Borel probability measure ${\mathbb{P}}$ on $P_{n}$, $0<\varepsilon\leq 1$, and $\sigma$ satisfying Condition~\ref{cond_KL}.
Then, there exists some $\hat{f}\in \NN[\rho, h]$ of width at most $\frac{n(n+2k+1)+4}{2}$ such that \begin{equation} \max_{M \in K_{\varepsilon}}\, \left\| f^{\star}\left(M\right) - \hat{f}(M) \right\|_{\ell_1} <\varepsilon, \label{eq_lem_uni_condition_target} \end{equation} where $K_\varepsilon$ was defined in Theorem \ref{theorem_univ_approx}. \end{corollary} \subsection{Convergence of Denise to a Solution Operator of the Learning Problem} In this section we show that Denise converges in mean to an optimal solution of the learning problem \eqref{eq_target_problem} when trained with the supervised loss function \eqref{equ:supervised loss function}, under the following assumptions. \begin{assumption}\label{assumption_measurability} We assume that there exists a compact subset $X \subset P_n$ of matrices $M$ and a continuous function $f : X \to \mathbb{R}^{n \times k}$ satisfying \begin{equation*} f(M) \in \underset{U \in \rrflex{n \times k}}{\operatorname{argmin}}\, \|M - U U^T\|_{\ell_1} \end{equation*} for all $M \in X$. Moreover, we assume that for $f^\star(M) := f(M)f(M)^\top$, the training set is given by \begin{equation*} \mathcal{Z} := \{ (M, L) \, | \, M \in X, L = f^\star(M) \} \end{equation*} and that we consider a probability measure $\P$ such that $\P (\mathcal{Z})=1$. \end{assumption} By Theorem \ref{theorem_univ_approx}, we know that such a set $X$ exists. For any $D \in \mathbb{N}$ let $\mathcal{N}_{\rho, h}^{\sigma, D} \subset \NN[\rho, h]$ be the set of neural networks of depth at most $D$ and let $\Theta_D$ be the set of all admissible weights for such neural networks. \begin{theorem}\label{thm:convergence to optimal decomp} If for every fixed depth $D$, the weights $\theta_D$ of $\hat f_{\theta_D} \in \mathcal{N}_{\rho, h}^{\sigma, D}$ are chosen such that $\Phi_s(\theta_D)$ is minimal, then $ \| \hat f_{\theta_D} - f^\star \|_{\ell_1}$ converges to $0$ in mean ($L^1$-norm) as $D$ tends to infinity.
\end{theorem} In the following, we assume that the depth $D$ of the neural network is fixed and study the convergence of the Monte Carlo approximation with respect to the number of samples $N$. Moreover, we show that both types of convergence can be combined. To do so, we define $\tilde\Theta_D := \{ \theta \in \Theta_D \; \vert \; \lvert \theta \rvert_2 \leq D \}$, which is a compact subspace of $\Theta_D$. It is straightforward to see that $\Theta_D$ in Theorem \ref{thm:convergence to optimal decomp} can be replaced by $\tilde\Theta_D$. Indeed, if the neural network weights needed for an $\varepsilon$-approximation have too large a norm, one can increase $D$ until it is sufficiently large. \begin{theorem} \label{thm:MC convergence} For every $D \in \mathbb{N}$, $\P$-a.s. \begin{equation*} \hat\Phi^N_s \xrightarrow{N \to \infty} \Phi_s \quad \text{uniformly on } \tilde\Theta_D. \end{equation*} Let the depth $D$ of the neural network be fixed. If for every fixed size $N$ of the training set, the weights $\theta_{D,N} \in \tilde \Theta_D$ are chosen such that $\hat \Phi^N_s(\theta_{D,N})$ is minimal, then \begin{equation*} \Phi_s(\theta_{D,N}) \xrightarrow{N \to \infty} \Phi_s(\theta_{D}). \end{equation*} In particular, one can define an increasing sequence $(N_D)_{D \in \mathbb{N}}$ in $\mathbb{N}$ such that $ \| \hat f_{\theta_{D,N_D}} - f^\star \|_{\ell_1}$ converges to $0$ in mean ($L^1$-norm) as $D$ tends to infinity. \end{theorem} \subsection{Convergence of the Optimization Scheme} We provide the convergence rate of the training of Denise when using a $\mathcal{C}^2$-approximation of the unsupervised loss function \eqref{eq:unsupervised_loss}. The proof is similar for the supervised loss function \eqref{eq:supervised_loss}. \begin{theorem}\label{cor:convergence of SGD} Let $\varphi$ be replaced by a $\mathcal{C}^2$-approximation.
Let $M \sim \P$ be a random variable following the distribution of the training samples and assume that $r:= \lVert M \rVert$ is a random variable in $L^2(\P)$, i.e. $\mathbb{E}[r^2] < \infty$. Here $\lVert \cdot \rVert$ denotes the Frobenius norm. Furthermore, assume that there exists $0<B_{\Omega} < \infty$ such that $\sup_{j \geq 0} \lVert \theta_j\rVert_{\infty} < B_{\Omega}$. Here $(\theta_j)_{j \geq 0}$ is the sequence of parameters (in $\mathbb{R}^d$) defined by Algorithm~\ref{eq:gradient-method}, where we choose the adaptive step-sizes $h_j$ as \begin{equation*} h_j := \left( 4L_{\nabla \Phi}^2 + \sum_{i=1}^{j-1} \lVert G_i \rVert^2 + \varepsilon\right)^{-\frac{1}{2}}\,. \end{equation*} Here, $L_{\nabla \Phi}$ is the Lipschitz constant of the function $\theta \mapsto \nabla_{\theta}\Phi(\theta)$, which exists and is finite. Then there exists a constant $C$ depending on the neural network architecture and on $\Phi(\theta_0) - \Phi^*$, where $\Phi^*:=\min_{ \lVert\theta \rVert \leq B_{\Omega}} \Phi(\theta)$, such that for every $n \in \mathbb{N}$ \begin{equation*} \mathbb{E}\left[\min_{1\leq j \leq n} \| \nabla \Phi (\theta_j) \| \right] \leq \frac{C}{\sqrt{n}}. \end{equation*} In particular, for every tolerance level $\varepsilon>0$ we have \begin{equation*} n\geq \left( \tfrac{C}{\varepsilon} \right)^{2} \; \Longrightarrow \; \mathbb{E}\left[\min_{1\leq j \leq n} \| \nabla \Phi (\theta_j) \| \right] \leq \varepsilon. \end{equation*} \end{theorem} \section{Numerical Results}\label{sec:numerical results} In this section we present numerical results for Denise. We first train Denise with the supervised loss function on a synthetic training dataset and evaluate it on a synthetic test dataset. We also evaluate Denise on a synthetic test dataset which is generated from a different distribution. Finally, we test Denise on a real-world dataset before and after fine-tuning with the unsupervised loss function.
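For illustration, the adaptive step-size rule of Theorem~\ref{cor:convergence of SGD} can be sketched in a few lines. This is a sketch under the assumption that the Lipschitz constant $L_{\nabla \Phi}$ is a known input; in practice it would have to be estimated.

```python
import numpy as np

def adaptive_steps(grads, lipschitz, eps=1e-8):
    # h_j = (4 L^2 + sum_{i<j} ||G_i||^2 + eps)^(-1/2):
    # the step at iteration j only uses gradients observed strictly before j
    accum = 4.0 * lipschitz ** 2 + eps
    steps = []
    for G in grads:
        steps.append(accum ** -0.5)
        accum += float(np.sum(np.asarray(G) ** 2))
    return steps
```

Because each step only depends on previously observed gradients, the rule can be applied online during training, with the step sizes shrinking as gradient norms accumulate.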
The source code is available at \url{https://github.com/DeepRPCA/Denise}\,. \subsection{Supervised Training} \label{Training} We create a synthetic dataset in order to train Denise using the supervised loss function \eqref{equ:supervised loss function}. In particular, we construct a collection of $n$-by-$n$ positive semidefinite matrices $M$ that can be written as \begin{equation} M = L_0+S_0 \end{equation} for a known matrix $L_0$ of rank $k_0\leq n$ and a known matrix $S_0$ of given sparsity $s_0$. By sparsity we mean the number of zero-valued elements divided by the total number of elements. For example, a sparsity of $0.95$ means that $95\%$ of the elements of the matrix are zeros. To construct a low rank matrix $L_0$, we first sample $nk_0$ independent standard normal random variables that we arrange into an $n$-by-$k_0$ matrix $U$. Then $L_0$ is defined as $UU^T$. To construct a symmetric positive semidefinite sparse matrix $S_0$ we first sample a random pair $(i,j)$ with $1\leq i<j\leq n$ from a uniform distribution. We then construct an $n$-by-$n$ matrix $\tilde{S}_0$ that has only four non-zero coefficients: the off-diagonal elements $(i,j)$ and $(j,i)$ are set to a number $b$ drawn uniformly at random from $[-1,1]$; the diagonal elements $(i,i)$ and $(j,j)$ are set to a number $a$ drawn uniformly at random from $[|b|,1]$. An example of a $3\times 3$ matrix with $(i,j) = (1,2)$, $b=-0.2$ and $a=0.3$ is the following: \begin{equation*} \tilde{S}_0 = \left(\begin{matrix} 0.3 & -0.2 & 0 \\ -0.2 & 0.3 & 0 \\ 0 & 0 & 0 \end{matrix}\right)\,. \end{equation*} This way, the matrix $\tilde{S}_0$ is positive semidefinite. The matrix $S_0$ is obtained by summing different realizations $\tilde{S}_0^{(l)}$, each corresponding to a different pair $(i,j)$, until the desired sparsity $s_0 = 0.95$ is reached. With this method, we create a synthetic dataset consisting of 10 million matrices for the training set. Other possibilities to generate the training set exist.
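The sampling procedure just described can be sketched in NumPy as follows (function names are ours; the sketch mirrors the construction of $L_0$ and $S_0$ in the text):

```python
import numpy as np

def sample_low_rank(n, k0, rng):
    # L0 = U U^T with U an n-by-k0 matrix of i.i.d. standard normals
    U = rng.standard_normal((n, k0))
    return U @ U.T

def sample_sparse_psd(n, s0, rng):
    # sum PSD blocks tilde_S0 (two diagonal and two off-diagonal entries)
    # until the sparsity (fraction of zero entries) drops to the target s0
    S = np.zeros((n, n))
    while np.mean(S == 0) > s0:
        i, j = sorted(rng.choice(n, size=2, replace=False))
        b = rng.uniform(-1.0, 1.0)
        a = rng.uniform(abs(b), 1.0)  # a >= |b| keeps the block PSD
        S[i, i] += a
        S[j, j] += a
        S[i, j] += b
        S[j, i] += b
    return S

def sample_matrix(n=20, k0=3, s0=0.95, rng=None):
    # one training sample M = L0 + S0 with known low-rank and sparse parts
    if rng is None:
        rng = np.random.default_rng()
    L0 = sample_low_rank(n, k0, rng)
    S0 = sample_sparse_psd(n, s0, rng)
    return L0 + S0, L0, S0
```

Since each block $\tilde{S}_0$ is positive semidefinite and sums of positive semidefinite matrices remain positive semidefinite, every sampled $M = L_0 + S_0$ is a valid input for Denise.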
Other distributions or different levels of sparsity could, for example, be used. Diversifying the training set can lead to better performance. \subsubsection{Evaluation} We create a synthetic dataset consisting of 10 thousand matrices for the testing set, using the same method presented in Section~\ref{Training}. The synthetic dataset introduced in Section~\ref{Training} is composed of randomly generated low rank plus sparse matrices of a certain rank and sparsity. Therefore, a network which performs well on this random test set should also perform well on real-world datasets with the same rank and similar sparsity. We compare Denise against PCP \cite{Candes:2011}, IALM \cite{NIPS2011_4434}, FPCP \citep{Rodriguez2013} and RPCA-GD \cite{NIPS2016_6445}. All algorithms are implemented as part of the LRS MATLAB library \citep{lrslibrary2015, Bouwmans:2016:HRL:2994445}. To implement Denise, we used the machine learning framework TensorFlow \citep{tensorflow2015whitepaper} with Keras APIs \citep{chollet2015keras}. We trained our model using 16 Google Cloud TPU-v2 hardware accelerators. Training took around 8 hours (90 epochs), at which point loss improvements were negligible. Evaluation of all the algorithms was done on the same computer\footnote{A machine with $2\times$Intel Xeon CPU E5-2697 v2 (12 Cores) 2.70GHz and 256 GiB of RAM.}. The code to generate the synthetic dataset is made deterministic by setting a fixed random seed. We compare the rank of the low rank matrix $L$ and the sparsity of the sparse matrix $S$. We determine the \emph{approximated rank} $r(L)$ as the number of eigenvalues of the low-rank matrix $L$ that are larger than $\varepsilon = 0.01$. Similarly, we determine the \emph{approximated sparsity} $s(S)$ as the proportion of the entries of the sparse matrix $S$ which are smaller than $\varepsilon = 0.01$ in absolute value.
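Both diagnostics reduce to a few lines given an eigendecomposition; a minimal sketch of the two quantities defined above (threshold $\varepsilon = 0.01$):

```python
import numpy as np

def approx_rank(L, eps=0.01):
    # number of eigenvalues of the symmetric matrix L exceeding eps
    return int(np.sum(np.linalg.eigvalsh(L) > eps))

def approx_sparsity(S, eps=0.01):
    # proportion of entries of S smaller than eps in absolute value
    return float(np.mean(np.abs(S) < eps))
```

For instance, for the $3\times 3$ example matrix $\tilde{S}_0$ shown earlier, \texttt{approx\_sparsity} returns $5/9$, since five of its nine entries are zero.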
Moreover, we compare the relative error between the computed low rank matrix $L$ and the initial low rank matrix $L_0$, given by $\text{rel.error}(L,L_0) =||L-L_0||_F /||L_0||_F$. Similarly, we compare the relative error between the computed sparse matrix $S$ and the initial sparse matrix $S_0$, given by $\text{rel.error}(S,S_0) =||S-S_0||_F /||S_0||_F$. We have tested several neural network architectures, and settled on a simple feed-forward neural network of four layers, with a total of $32\times n(n+1)/2$ parameters. Moreover, we have tested various sizes, sparsity levels and ranks. All results were similar, hence we only present those using size $n=20$, sparsity $s_0=0.95$ and initial rank $k_0=3$. To enable a fair comparison between the algorithms, we first ensure that the obtained low-rank matrices $L$ all have the same rank. While in FPCP, RPCA-GD and Denise the required rank is set directly, in PCP and IALM the resulting rank depends on the parameter $\lambda$. Therefore, we empirically determined $\lambda$ in order to reach the same rank. In particular, with $\lambda = 0.56/\sqrt{n}$ for the synthetic dataset and $\lambda = 0.64 /\sqrt{n}$ for the real dataset, we approximately obtain a rank of $3$ for the matrices $L$. Overall, Denise obtains results comparable to the state-of-the-art algorithms, while significantly outperforming them in terms of speed (Table \ref{table:synthetic normal}). This is due to the fact that only one forward pass through the neural network of Denise is needed during evaluation to compute the decomposition. In contrast to this very fast operation, the state-of-the-art algorithms need to run an iterative optimization procedure for each new matrix. \input{tables/comparison_normal0} \subsubsection{Evaluation on differently simulated data} We additionally create a synthetic testing set consisting of 10 thousand matrices, using the same method presented in Section~\ref{Training} but with a different distribution.
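The relative errors used above amount to a one-line Frobenius-norm computation; a minimal sketch:

```python
import numpy as np

def rel_error(A, A0):
    # ||A - A0||_F / ||A0||_F, used for both the (L, L0) and (S, S0) pairs
    return np.linalg.norm(A - A0) / np.linalg.norm(A0)
```

The same function applies to both the low-rank comparison $(L, L_0)$ and the sparse comparison $(S, S_0)$.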
Specifically, for this test set the low rank matrices are generated using Student's $t$-distribution (with $5$ degrees of freedom) instead of the standard normal distribution. Also in this example, Denise achieves similar results, while evaluating almost instantaneously (Table \ref{table:synthetic t-dist}). \input{tables/comparison_student2} \subsection{Application on S\&P500 Stocks Portfolio} We consider a real-world dataset of about 1'000 $20$-by-$20$ correlation matrices of daily stock returns (on closing prices), for consecutive trading days, shifted every 5 days, between 1989 and 2019. The considered stocks belong to the S\&P500 and have been sorted by the GICS sectors\footnote{According to the global industry classification standard: energy, materials, industrials, real estate, consumer discretionary, consumer staples, health care, financials, information technology, communication services, utilities.}. The first $77\%$ of the data is used as training set and the remaining $23\%$ as test set. Denise is evaluated on the test set once before and once after fine-tuning on the training set (Table \ref{realtable}). The fine-tuning considerably improves the performance of Denise. Upon inspection we find that Denise offers comparable performance to the fastest state-of-the-art robust PCA algorithm, namely FPCP, while executing 30$\times$ faster. The synthetic test dataset is composed of $10'000$ matrices, while here the test dataset contains only around $200$ matrices. This explains why the computation time of Denise is higher here, as the effort needed to launch the computations is the same no matter whether $10'000$ or $200$ matrices are evaluated. If the test set is repeated such that it again has $10'000$ samples, Denise achieves the same speed as on the synthetic dataset ($0.05$ ms). In particular, Denise has the advantage of becoming faster per sample when applied to more samples.
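As an illustration of how such a dataset can be assembled, the following sketch builds a sequence of correlation matrices from an array of daily returns. The window length is our assumption; the text only specifies $20$-by-$20$ matrices shifted every 5 trading days.

```python
import numpy as np

def rolling_correlation_matrices(returns, window=60, shift=5):
    # returns: array of shape (T, n_stocks) of daily returns;
    # yields one n-by-n correlation matrix per window, shifted by `shift` days
    T = returns.shape[0]
    mats = [np.corrcoef(returns[s:s + window].T)
            for s in range(0, T - window + 1, shift)]
    return np.stack(mats)
```

With 20 stocks (one column per stock), each window yields one $20\times 20$ correlation matrix of the kind used above.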
\input{tables/N20_market.tex} \begin{center} \begin{figure}[hp] \centering \begin{subfigure}[c]{0.70\textwidth} \begin{minipage}[c]{0.32\linewidth}% \centering \includegraphics[width=1\linewidth]{plots//PCP_M.png} \vspace{-6mm} \caption*{{$M$}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//IALM_M.png} \vspace{-6mm} \caption*{{$M$}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//FPCP_M.png} \vspace{-6mm} \caption*{{$M$}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//RPCA-GD_M.png} \vspace{-6mm} \caption*{{$M$}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//Denise_M.png} \vspace{-6mm} \caption*{{$M$}} \vspace{2mm} % \end{minipage}% \vspace{0.5cm} \begin{minipage}[c]{0.32\linewidth}% \centering \includegraphics[width=1\linewidth]{plots//PCP_L.png} \vspace{-6mm} \caption*{$L$ (PCP)} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//IALM_L.png} \vspace{-6mm} \caption*{$L$ (IALM)} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//FPCP_L.png} \vspace{-6mm} \caption*{$L$ (FPCP)} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//RPCA-GD_L.png} \vspace{-6mm} \caption*{$L$ (RPCA-GD)} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//Denise_L.png} \vspace{-6mm} \caption*{$L$ (Denise (FT))} \vspace{2mm} % \end{minipage}% \vspace{-0.5cm} \begin{minipage}[c]{0.32\linewidth}% \centering \includegraphics[width=1\linewidth]{plots//PCP_S.png} \vspace{-6mm} \caption*{$S$ (PCP)} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//IALM_S.png} \vspace{-6mm} \caption*{{$S$ (IALM)}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//FPCP_S.png} \vspace{-6mm} \caption*{{$S$ (FPCP)}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//RPCA-GD_S.png} \vspace{-6mm} \caption*{{$S$ (RPCA-GD)}} \vspace{2mm} \includegraphics[width=1\linewidth]{plots//Denise_S.png} \vspace{-6mm} \caption*{$S$ (Denise (FT))} \vspace{2mm} % \end{minipage} \end{subfigure} \caption{Decomposition into a low-rank plus a sparse matrix of the
correlation matrix of a portfolio of 20 stocks among the S\&P500 stocks. The forced rank is set to $k=3$. We have $||M - L||_F/||M||_F$ at $0.15$ for PCP, $0.15$ for IALM, $0.11$ for FPCP, $0.22$ for RPCA-GD and $0.15$ for Denise. The reconstruction metric $||M - L - S||_F/||M||_F$ is $0$ for all algorithms. The computation times in milliseconds are: $103.24$ for PCP, $28.66$ for IALM, $15.20$ for FPCP, $58.17$ for RPCA-GD and $0.62$ for Denise. } \label{sp500} \end{figure} \end{center} \section{Proofs} \subsection{Proof of Low Rank Recovery via Universal Approximation} Let $\left(P_n, \mathrm{dist}(A,B) :=\|A-B \|_{\ell_1} \right)$ be the metric space of $n\times n$ symmetric positive semi-definite matrices with real coefficients. Let $C(X,P_{k,n})$ be the set of continuous functions from $X$ to $P_{k,n}$, given any (non-empty) subset $X\subset P_n$. Analogously to~\cite{leshno1993multilayer}, the set $C(X,P_{k,n})$ is made into a topological space by equipping it with the topology of uniform convergence on compacts, also called the topology of compact convergence, which is generated by the sub-basic open sets of the form \begingroup\makeatletter\def\f@size{8}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}} \begin{equation*} B_{K}(f, \varepsilon):= \left\{ g \in C(X,P_{k,n})\left| \sup_{x \in K}\|f(x)-g(x)\|_{\ell_1} \right.< \varepsilon \right\}\,, \end{equation*} \endgroup where $\varepsilon > 0$, $K \subset X$ is compact and $f \in C(X,P_{k,n})$. In this topology, a sequence $\{f_j\}_{j \in {\mathbb{N}}}$ in $C(X,P_{k,n})$ converges to a function $f \in C(X,P_{k,n})$ if for every non-empty compact subset $K\subseteq X$ and every $\varepsilon>0$ there exists some $N\in {\mathbb{N}}$ for which $$ \sup_{x \in K}\|f_j(x)-f(x)\|_{\ell_1} < \varepsilon \qquad \text{for all } j \geq N . $$ This topological space is metrizable. The topology on $\sqrt{C}(X,P_{k,n})$ is the subspace topology induced by inclusion in $C(X,P_{k,n})$ (see \citep[Chapter 18]{MunkesTop2}).
\begin{proof}[{Proof of Theorem~\ref{theorem_univ_approx}}] For every $M \in P_n$, the map from $\rrflex{n \times k}$ to ${\mathbb{R}}$ defined by $U \to \|M-U U^T\|_{\ell_1}$ is continuous, bounded-below by $0$, and for each $\lambda>0$ the set \begin{equation} \left\{ U \in \rrflex{n \times k}:\, \|M-U U^T\|_{\ell_1}\leq \lambda \right\} , \label{eq_coercivity} \end{equation} is compact in $\rrflex{n \times k}$. Thus, the map $U \to \|M- U U^T\|_{\ell_1}$ is coercive in the sense of \citep[Definition 2.1]{focardi2012gamma}. Hence, by \citep[Theorem 2. ]{focardi2012gamma}, the set $$\underset{U \in \rrflex{n \times k}}{\argmin}\, \|M - U U^T\|_{\ell_1}$$ is non-empty. Furthermore, by the Cholesky decomposition \cite[Theorem 10.9]{higham2002accuracy}, for every $L \in P_{k,n}$ there exists some $U \in \rrflex{n \times k}$ such that $L = U U^{T}$. Since, conversely, for every $U\in \rrflex{n \times k}$ the matrix $UU^{T} \in P_{k,n}$ we obtain (i). Any given $M \in P_n$ is positive semidefinite and therefore $e_1^{T}Me_1 \geq 0$, where $e_1\in {\rrflex{n}}$ has entry $1$ in its first component and all other entries equal to $0$. Therefore, $M_{1,1}= e_1^{T}Me_1 \geq 0$ and in particular, $\sqrt{M_{1,1}} \in {\mathbb{R}}$. Therefore, the matrix $\tilde{U}$ defined by $\tilde{U}_{i,j}=\sqrt{M_{1,1}}I_{i=j=1}$, where $I_{i=j=1}=1$ if $1=i=j$ and $0$ otherwise, is in $\rrflex{n\times 1}\subseteq \rrflex{n \times k}$. Moreover, $\tilde{U}$ satisfies $\|\tilde{U} \tilde U^T\|_{\ell_1}\leq \|M\|_{\ell_1}$. Thus, by the triangle inequality, the set $$ D_{M} := \left\{ U \in \rrflex{n \times k}:\, \|M - U U^T\|_{\ell_1} \leq 2\|M\|_{\ell_1} \right\}, $$ is non-empty. Furthermore, by~\eqref{eq_coercivity} it is compact. In summary, \begin{equation} \emptyset\neq \underset{U \in D_{M} }{\argmin} \, \|M - U U^T\|_{\ell_1} = \underset{U \in \rrflex{n \times k}}{\argmin} \|M - U U^T\|_{\ell_1} \label{eq_redux_compact} . 
\end{equation} Hence $f(M)$, described by condition (ii), is equivalently characterized by \begin{equation} f(M) \in \underset{U \in D_{M} }{\argmin}\, \|M - U U^T\|_{\ell_1} , \quad \text{for all } M \in P_n \label{eq_redux_compacts_II_reformulation_of_i} . \end{equation} The advantage of~\eqref{eq_redux_compacts_II_reformulation_of_i} over condition (ii) is that the set $ D_{M} , $ is compact, whereas $\rrflex{n \times k}$ is non-compact. For any set $Z$ denote its power-set by $\mathcal{P}(Z)$. Define the function $\phi$ by $$ \begin{aligned} \phi:P_{n} & \rightarrow \mathcal{P}(\rrflex{n \times k}), \\ M & {\mapsto} D_{M}. \end{aligned} $$ Next, we show that $\phi$ is a weakly measurable correspondence in the sense of \citep[Definition 18.1]{guide2006infinite}. This amounts to showing that for every open subset $\mathcal{U}\subseteq \rrflex{n \times k}$ the set $\tilde{\mathcal{U}}:= \left\{ M \in P_n : \, % \phi(M) \cap \mathcal{U} \neq \emptyset % \right\}$ is a Borel subset of $P_n$. To this end, define the function $$ \begin{aligned} G:P_n \times \rrflex{n \times k} &\rightarrow {\mathbb{R}}, \\ \left( M,U \right) &\mapsto 2 \|M\|_{\ell_1} - \|M - U U^T\|_{\ell_1}, \end{aligned} $$ and let $p$ be the canonical projection $P_{n}\times \rrflex{n \times k} \to P_n$ taking $(M,U)$ to $M$. Observe that, for any non-empty open $\mathcal{U}\subseteq \rrflex{n \times k}$ we have that $$ \tilde{\mathcal{U}}= p\left[ G^{-1}\left[[0,\infty)\right] \cap (P_n \times \mathcal{U}) \right] . $$ Since $G$ is continuous and $[0,\infty)$ is closed in ${\mathbb{R}}$ then $G^{-1}[[0,\infty)]$ is closed. Since both $\rrflex{n \times k}$ and $P_n$ are metric sub-spaces of $\rrflex{n^2}$ then they are locally-compact, Hausdorff spaces, with second-countable topology. 
Thus \citep[Proposition 7.1.5]{measuretheoryCohn} implies that the open set $P_n\times \mathcal{U}= \bigcup_{j \in {\mathbb{N}}} K_j$, where $\{K_j\}_{j \in {\mathbb{N}}}$ is a collection of compact subsets of $P_n\times \rrflex{n \times k}$. Since $P_n$ and $\rrflex{n \times k}$ are $\sigma$-compact, i.e. countable unions of compact subsets, $P_n\times \rrflex{n \times k}$ is also $\sigma$-compact by \citep[Page 126]{WillardGeneralTopology}. Let $\{C_i\}_{i \in {\mathbb{N}}}$ be a compact cover of $P_n\times \rrflex{n \times k}$. Since $P_n\times \rrflex{n \times k}$ is Hausdorff (as both $P_n$ and $\rrflex{n \times k}$ are), each $C_i \cap G^{-1}[[0,\infty)]$ is compact and therefore $ \left\{ K_j \cap \left(C_i \cap G^{-1}[[0,\infty)]\right) \right\}_{j,i \in {\mathbb{N}}} $ is a countable cover of $G^{-1}[[0,\infty)]\cap (P_n\times \mathcal{U})$ by compact sets. Finally, since $p$ is continuous, and continuous functions map compacts to compacts, \[ \begin{aligned} \tilde{\mathcal{U}}=& p\left[ G^{-1}\left[[0,\infty)\right] \cap (P_n \times \mathcal{U}) \right]\\ = & p\left[ \bigcup_{i,j \in {\mathbb{N}}} \left(C_i \cap G^{-1}[[0,\infty)]\right) \cap K_j \right] \\ = & \bigcup_{i,j \in {\mathbb{N}}} p\left[ \left(C_i \cap G^{-1}[[0,\infty)]\right) \cap K_j \right]; \end{aligned} \] hence $\tilde{\mathcal{U}}$ is an $F_{\sigma}$ subset of $P_n$ and therefore Borel. In particular, for each open subset $\mathcal{U}\subseteq \rrflex{n \times k}$, the corresponding set $\tilde{\mathcal{U}}$ is Borel. Therefore, $\phi$ is a weakly-measurable correspondence taking non-empty and compact values in $\mathcal{P}\left(\rrflex{n \times k}\right)$. Define the continuous function $$ \begin{aligned} F: P_n \times \rrflex{n \times k} &\rightarrow [0,\infty), \\ (M,U) &\mapsto \|M - U U^T\|_{\ell_1}.
\end{aligned} $$ The conditions of the Measurable Maximum Theorem \citep[Theorem 18.19]{guide2006infinite} are met and therefore there exists a Borel measurable function $f$ from $P_n$ to $\rrflex{n \times k}$ satisfying \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}} $$ f(M) \in \underset{U \in D_{M}}{\argmin}\, \|M - U U^T\|_{\ell_1} = \underset{U \in \rrflex{n \times k}}{\argmin} \, \|M - U U^T\|_{\ell_1} , $$ \endgroup for every $M \in P_n$. This proves (ii). Fix a Borel probability measure ${\mathbb{P}}$ on $P_n$. Since $P_n$ is separable and metrizable, \citep[Theorem 13.6]{klenke2013probability} implies that ${\mathbb{P}}$ must be a Radon measure. Moreover, since $\rrflex{n \times k}$ and $P_n$ are locally-compact and second-countable topological spaces, the conditions for Lusin's theorem (see \citep[Exercise 13.1.3]{klenke2013probability} for example) are met. Therefore, for every $0<\varepsilon\leq 1$ there exists a compact subset $K_{\varepsilon}\subseteq P_n$ satisfying ${\mathbb{P}}\left( K_{\varepsilon}\right) \geq 1- \varepsilon$ and for which $f$ is continuous on $K_{\varepsilon}$. That is, $f|_{K_{\varepsilon}} \in C(K_{\varepsilon},\rrflex{n \times k})$. Moreover, since $\rho$ is continuous, $$ f(\cdot) f(\cdot)^{T} |_{K_{\varepsilon}} = \rho\circ f|_{K_{\varepsilon}} \in \sqrt{C}(K_{\varepsilon},P_{k,n}). $$ This gives (iii). \end{proof} \begin{proof}[{Proof of Theorem~\ref{theorem_UAT_AlgebroGeo}}] Let $\NN[2^{-1}n(n+1),kn][\sigma,\text{narrow}]$ denote the collection of deep feed-forward networks in $\NN[2^{-1}n(n+1),kn]$ of width at most $\frac{n(n+2k+1)+4}{2}$.
Note that the approximation condition~\eqref{eq_lem_uni_condition} holding for all $\varepsilon>0$ and all $f\in \sqrt{C}(X,P_{k,n})$ is equivalent to the topological condition that $\{\rho\circ \hat{f}\circ \operatorname{vect}:\hat{f}\in \NN[2^{-1}n(n+1),kn][\sigma,\text{narrow}]\}$ is dense in $\sqrt{C}(X,P_{k,n})$ for the topology of uniform convergence on compacts. We establish the latter. Fix a $\sigma \in C({\mathbb{R}})$ satisfying Condition~\ref{cond_KL}. By \cite{KidgerLyons2020}, $\NN[2^{-1}n(n+1),kn][\sigma,\text{narrow}]$ is dense in $C(\rrflex{n(n+1)/2},\rrflex{kn})$ in the topology of uniform convergence on compacts. Let $\phi := h \circ \iota_2\circ \iota_1$, where $\iota_1: X \to P_n$, $\iota_2:P_n\to \mathbb{S}_{n}$ are the inclusion maps. Since $h$, $\iota_2$, and $\iota_1$ are all continuous and injective, so is $\phi$. Observe that $g$ is a continuous bijection with continuous inverse. Thus, \citep[Proposition 3.7]{kratsios2020noneuclidean} implies that $\NN[2^{-1}n(n+1),kn][\sigma,\text{narrow}]$ is dense in $C(\phi (X),\rrflex{kn})$ if and only if $\NN[g,\phi][\sigma,\text{narrow}]\triangleq \{g\circ \hat{f}\circ \phi:\hat{f}\in \NN[2^{-1}n(n+1),kn][\sigma,\text{narrow}]\}$ is dense in $C(X,\rrflex{n\times k})$. Let $R:\mathbb{R}^{n\times k}\ni U\mapsto UU^{\top} \in P_{k,n}$. Consider the map $R_{\star}$ sending any $f \in C(X,\rrflex{n \times k})$ to the map $R\circ f \in \sqrt{C}(X,P_{k,n})$. By \citep[Theorem 46.8]{munkres2018elements} the topology of uniform convergence on compacts on $C(X,\rrflex{n \times k})$ and $C(X,P_{k,n})$ are equal to their respective compact-open topologies (see \cite[page 285]{MunkesTop2} for the definition) and by \citep[Theorem 46.11]{MunkesTop2} function composition is continuous for the compact-open topology; whence, $R_{\star}$ is continuous. Moreover, by definition, its image is $\sqrt{C}(X,P_{k,n})$ and therefore $R_{\star}$ is a continuous surjection as a map from $C(X,\rrflex{n \times k})$ to $\sqrt{C}(X,P_{k,n})$.
Since continuous maps send dense subsets of their domain to dense subsets of their image, $R_{\star}\left[\NN[g,\phi][\sigma,\text{narrow}]\right] \triangleq \{R\circ g\circ \hat{f}\circ \phi:\hat{f}\in \NN[2^{-1}n(n+1),kn][\sigma,\text{narrow}]\}\subset \NN[\rho,\phi][\sigma] $ is dense in $\sqrt{C}(X,P_{k,n})$. As density is transitive, $\NN[\rho,\phi][\sigma]$ is dense in $\sqrt{C}(X,P_{k,n})$. \end{proof} \begin{proof}[{Proof of Corollary~\ref{cor_learnability}}] By Theorem~\ref{theorem_univ_approx} and~\eqref{eq_target_function_optimal_decomposer} the map $f^{\star}:P_n\rightarrow P_{k,n}$ is continuous on $K_{\varepsilon}$. Since $K_{\varepsilon}$ is compact, Theorem~\ref{theorem_UAT_AlgebroGeo} implies that there exists some $\hat{f}\in \NN[\rho, h]$ of width at most $\frac{n(n+2k+1)+4}{2}$ satisfying: $\max_{M\in K_{\varepsilon}} \|f^{\star}(M)-\hat{f}(M)\|_{\ell_1}<\varepsilon$. \end{proof} \subsection{Proof of Convergence of Supervised Denise to a Solution Operator of the Learning Problem} \begin{proof} By our assumption on $X$ it follows from Corollary~\ref{cor_learnability} that for any $\varepsilon > 0$ there exists some $D$ and weights $\tilde \theta_D$ such that $\hat f_{\tilde \theta_D} \in \mathcal{N}_{\rho, h}^{\sigma, D}$ and \begin{equation*} \max_{M \in X}\, \left\| f^{\star}\left(M\right) - \hat f_{\tilde \theta_D}(M) \right\|_{\ell_1} <\varepsilon . \end{equation*} Since expectations are taken with respect to $\P$, which is supported on $\mathcal{Z}$, and since the weights $\theta_D$ are chosen to optimize the loss function, we have $\Phi(\theta_D) \leq \Phi(\tilde \theta_D)$ and hence \begin{align*} \Phi(\theta_D) & = \mathbb{E}_{(M,L) \sim \P} \left[ \| \hat f_{ \theta_D}(M) - f^\star(M) \|_{\ell_1} \right] \\ & \leq \mathbb{E}_{(M,L) \sim \P} \left[ \| \hat f_{\tilde \theta_D}(M) - f^\star(M) \|_{\ell_1} \right] \\ & \leq \varepsilon .
\end{align*} Hence, we can conclude that for any fixed $\varepsilon>0$, there exists a $D_1>0$ such that for all $D>D_1$, we get \begin{equation*} \mathbb{E}_{(M,L) \sim \P} \left[ \|\hat f_{\theta_D}(M) - f^\star(M) \|_{\ell_1} \right] \leq \varepsilon\,. \end{equation*} In other words, we have that \begin{equation*} \mathbb{E}_{(M,L) \sim \P} \left[ \|\hat f_{\theta_D}(M) - f^\star(M) \|_{\ell_1} \right]\underset{D \to \infty}{\longrightarrow} 0 \, , \end{equation*} which concludes the proof. \end{proof} \subsection{Proof of Convergence of Monte Carlo approximation} The following Monte Carlo convergence analysis is based on \cite[Section 4.3]{lapeyre2019neural}. In contrast to their analysis, we do not need the additional assumptions that were essential in \cite[Section 4.3]{lapeyre2019neural}, i.e., that all minimizing neural network weights generate the same neural network output. \subsubsection{Convergence of Optimization Problems} Consider a sequence of real-valued functions $(f_n)_n$ defined on a compact set $K \subset \mathbb{R}^d$. Define $v_n := \inf_{x\in K} f_n(x)$ and let $(x_n)_n$ be a sequence of minimizers, i.e., $f_n(x_n) = \inf_{x\in K} f_n(x)$. From \cite[Theorem A1 and discussion thereafter]{rubinstein1993discrete} we have the following lemma. \begin{lemma} \label{lemma:distanceconverges} Assume that the sequence $(f_n)_n$ converges uniformly on $K$ to a continuous function $f$. Let $v^{*} = \inf_{x\in K}f(x)$ and $\mathcal{S}^* = \{ x\in K: f(x) =v^* \}$. Then $v_n \to v^*$ and $d(x_n, \mathcal{S}^*) \to 0$ a.s. \end{lemma} The following lemma is a consequence of \cite[Corollary 7.10]{ledoux1991m} and \cite[Lemma A1]{rubinstein1993discrete}. \begin{lemma} \label{lemma:convergencelocallyuniform} Let $(\xi_i)_{i \geq 1}$ be a sequence of i.i.d.\ random variables with values in a separable Banach space $\mathcal{S}$ and $h:\mathbb{R}^d\times \mathcal{S}\to \mathbb{R}$ be a measurable function.
Assume that a.s.\ the function $\theta\in \mathbb{R}^d \mapsto h(\theta, \xi_1)$ is continuous and that for all $C>0$, $\mathbb{E}(\sup_{|\theta|_2 \leq C}|h(\theta, \xi_1)|)< + \infty$. Then, a.s., $\theta \in \mathbb{R}^d \mapsto \frac{1}{N}\sum_{i=1}^{N}h(\theta, \xi_i)$ converges locally uniformly to the continuous function $\theta\in \mathbb{R}^d \mapsto \mathbb{E}(h(\theta, \xi_1))$, i.e., \begin{equation*} \lim_{N\to\infty} \sup_{|\theta|_2\leq C} \left|\frac{1}{N}\sum_{i=1}^{N}h(\theta, \xi_i) - \mathbb{E}(h(\theta, \xi_1))\right| = 0 \quad \text{a.s.} \end{equation*} \end{lemma} \subsubsection{Strong law of large numbers} Let $(M_j, L_j)_{j \geq 1}$ be i.i.d.\ random variables taking values in $\mathcal{Z} = X \times f^\star(X) \subset \mathbb{R}^{n \times n} \times \mathbb{R}^{n \times n} =: \mathcal{S}$. We first remark that $\mathcal{S}$ is a separable Banach space. Moreover, since $f^\star(X)$ is compact as the continuous image of the compact set $X$, it is bounded. Hence, there exists a bounded continuous function $\iota: \mathbb{R}^{n \times n} \to \mathbb{R}^{n \times n}$ such that $\iota \vert_{f^\star(X)}$ is the identity. Then we define \begin{equation*} h(\theta, (M_j, L_j)) := \| \iota(L_j) - \hat{f}_{\theta}(M_j) \|_{\ell_1}, \end{equation*} where $\hat{f}_\theta \in \mathcal{N}_{\rho, h}^{\sigma, D}$ is a neural network of depth $D$ with weights $\theta$. \begin{lemma} \label{lem:properties for MC conv thm} The following properties are satisfied. \begin{itemize} \item[$(\mathcal{P}_1)$] There exists $\kappa>0$ such that for all $Z = (M,L) \in \mathcal{Z}$ and $\theta \in \tilde\Theta_D$ we have $\| \hat f_{\theta}(M)\|_{\ell_1} \leq \kappa$. \item[$(\mathcal{P}_2)$] Almost surely, the random function $\theta\in\tilde\Theta_D \mapsto \hat f_{\theta}$ is uniformly continuous.
\end{itemize} \end{lemma} \begin{proof} Since the neural networks use sigmoid activation functions, which have bounded outputs, every network output is bounded in terms of the norm of the network weights, which is assumed to be bounded, independently of the norm of the input; this proves $(\mathcal{P}_1)$. Since the activation functions are continuous, the neural networks are also continuous with respect to their weights $\theta$, which implies that $\theta\in\tilde\Theta_D \mapsto \hat f_{\theta}$ is continuous for any fixed input. Since $\tilde{\Theta}_D$ is compact, this automatically yields uniform continuity almost surely, which finishes the proof of $(\mathcal{P}_2)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:MC convergence}.] We apply Lemma \ref{lemma:convergencelocallyuniform} to the sequence of i.i.d.\ random functions $h(\theta, (M_j, L_j))$. With $(\mathcal{P}_1)$ of Lemma \ref{lem:properties for MC conv thm} and since $\iota$ is bounded, we know that \begin{equation*} \vert h(\theta, (M_j, L_j)) \vert \leq \| \iota(L_j) \|_{\ell_1} + \| \hat{f}_{\theta}(M_j) \|_{\ell_1} \end{equation*} is bounded for $\theta \in \tilde\Theta_D$. Hence, there exists some $B > 0$ such that \begin{equation} \label{equ:dominating bound loss function} \mathbb{E}_{ (M_j, L_j) \sim \P}\left[\sup_{\theta \in \tilde\Theta_D} \vert h(\theta, (M_j, L_j) ) \vert\right] < B < \infty. \end{equation} By $(\mathcal{P}_2)$ of Lemma \ref{lem:properties for MC conv thm}, the function $\theta \mapsto h(\theta, (M_j, L_j))$ is continuous.
Therefore, we can apply Lemma \ref{lemma:convergencelocallyuniform}, yielding that almost surely, for $N \to \infty$, the function \begin{equation*}\label{equ:unif conv 1} \theta \mapsto \frac{1}{N} \sum_{j=1}^{N} h(\theta, (M_j, L_j) ) = \hat\Phi_s^N(\theta) \end{equation*} converges uniformly on $\tilde\Theta_D$ to \begin{equation*}\label{equ:unif conv 2} \theta \mapsto \mathbb{E}_{\P }[h(\theta, (M_1, L_1))] = \Phi_s(\theta), \end{equation*} where we used that $\iota$ is the identity on $f^\star(X)$. Let $\Theta^{\min}_D \subset \tilde\Theta_D$ be the subset of weights that minimize $\Phi_s$. We deduce from Lemma \ref{lemma:distanceconverges} that $d(\theta_{D,N},\Theta^{\min}_D)\to 0$ a.s. when $N\to \infty$. Then there exists a sequence $(\hat\theta_{D,N})_{N \in \mathbb{N}}$ in $\Theta_D^{\min}$ such that $\lvert \theta_{D,N} - \hat\theta_{D,N} \rvert_2 \to 0$ a.s. for $N \to \infty$. The uniform continuity of the random functions $\theta \mapsto \hat f_{\theta}$ on $\tilde{\Theta}_D$ implies that $\lvert \hat f_{\theta_{D,N}}- \hat f_{\hat\theta_{D,N}} \rvert_2 \to 0$ a.s. when $N \to \infty$. By continuity of $\iota$ and the $\ell_1$-norm, this yields $\lvert h(\theta_{D,N}, (M_1, L_1) ) - h(\hat\theta_{D,N}, (M_1, L_1) ) \rvert \to 0$ a.s. as $N \to \infty$. With \eqref{equ:dominating bound loss function} we can apply dominated convergence, which yields \begin{align*} \lim_{N \to \infty} \mathbb{E}_{\P} & \left[ \lvert h(\theta_{D,N}, (M_1, L_1) ) - h(\hat\theta_{D,N}, (M_1, L_1) ) \rvert \right] = 0.
\end{align*} Since for every integrable random variable $Z$ we have $0 \leq \lvert \mathbb{E}[Z] \rvert \leq \mathbb{E}[\lvert Z \rvert]$ and since $\hat\theta_{D,N}\in \Theta_D^{\min}$, we can deduce \begin{eqnarray} \label{equ: MC convergence} \lim_{N \to \infty} \Phi_s(\theta_{D,N}) &=& \lim_{N \to \infty} \mathbb{E}_{\P}\left[ h(\theta_{D,N}, (M_1, L_1) ) \right] \nonumber \\ &=& \lim_{N \to \infty} \mathbb{E}_{\P}\left[ h(\hat\theta_{D,N}, (M_1, L_1) ) \right]\nonumber \\ &=& \Phi_s(\theta_D). \end{eqnarray} We define $N_0 := 0$ and for every $D \in \mathbb{N}$ \begin{equation*} N_D := \min\big\{ N \in \mathbb{N} \; \vert \; N > N_{D-1}, \lvert \Phi_s(\theta_{D,N}) - \Phi_s(\theta_{D}) \rvert \leq \tfrac{1}{D} \big\}, \end{equation*} which is possible due to \eqref{equ: MC convergence}. Then Theorem \ref{thm:convergence to optimal decomp} implies that \begin{equation*} \mathbb{E}_{\P} \left[ \|\hat f_{\theta_{D,N_D}}(M) - f^\star(M) \|_{\ell_1} \right] = \Phi_s(\theta_{D,N_D}) \leq \tfrac{1}{D} + \Phi_s(\theta_D) \xrightarrow{D\to \infty} 0, \end{equation*} which concludes the proof. \end{proof} \subsection{Proof of Convergence of the Optimization Scheme} Since the function $\varphi$ is not $\mathcal{C}^2$, we approximate it by \begin{align*} \tilde\varphi: \Omega \times \mathbb{R}^{n\times n} &\to \mathbb{R}\\ (\theta, M) &\mapsto \sum_{i,j=1}^{n}\!\mu\left( \left[ M - \rho\left( f_\theta (h(M))\right) \right]_{i,j} \right), \end{align*} where $\mu:\mathbb{R}\to[0,\infty)$ is a smooth approximation of the absolute value function with its derivative uniformly bounded by $1$ and its second derivative bounded by $\mu''_{\max}$. Let us define $X = (X_1, \dots, X_{nk}) := f_{\theta}\left(h(M)\right)$. We are interested in the derivatives of the loss function with respect to the parameters $\theta$.
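The argument only uses qualitative properties of $\mu$ ($\mu\ge 0$ smooth, $|\mu'|\le 1$, $|\mu''|\le \mu''_{\max}$); the text does not fix a particular $\mu$, so the choice $\mu_{\epsilon}(x)=\sqrt{x^{2}+\epsilon^{2}}-\epsilon$ in the NumPy sketch below is an illustrative assumption, as is the placeholder argument `U`, which stands in for the reshaped network output $g(f_\theta(h(M)))$:

```python
import numpy as np

EPS = 1e-3  # smoothing level; an illustrative choice, not fixed by the text

def mu(x):
    """Smooth surrogate for |x|: non-negative, |mu'| <= 1, mu'' bounded by 1/EPS."""
    return np.sqrt(x ** 2 + EPS ** 2) - EPS

def smoothed_loss(M, U):
    """The smoothed objective sum_{i,j} mu([U U^T - M]_{i,j}), with U standing
    in for the reshaped network output (a plain argument, not a real network)."""
    residual = U @ U.T - M
    return mu(residual).sum()

rng = np.random.default_rng(1)
n, k = 4, 2
U = rng.standard_normal((n, k))
M = U @ U.T                          # an input admitting an exact rank-k factor
assert smoothed_loss(M, U) < 1e-8    # the smoothed loss vanishes at the factor
```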
For this purpose, we define for a fixed $M$ the function \if\twocol1 \begin{eqnarray*} \phi: \mathbb{R}^{nk} &\to & \mathbb{R}\\ X &\mapsto & \left\Vert M- g\left(X\!\right)\,g\left(X\!\right)^{T}\right\Vert_{\ell_1}, \end{eqnarray*} \else \begin{equation*} \phi: \mathbb{R}^{nk} \to \mathbb{R}, \quad X \mapsto \left\Vert M-g\left(X\!\right)\,g\left(X\!\right)^{T}\right\Vert_{\ell_1}, \end{equation*} \fi and its $\mathcal{C}^2$ approximation \begin{equation} \label{eq:tilde phi} \tilde\phi(X) =\sum_{i,j=1}^{n}\!\mu\left(\left[g \left(X\!\right)\!g \left(X\!\right)^{T}\! -M\right]_{i,j}\!\right). \end{equation} We make use of the following lemma, for which we define \[ \omega_{i,j}:=\left[ g\left(X\right)g\left(X\right)^{T} -M \right]_{i,j}, \quad 1\leq i,j \leq n. \] \begin{lemma}\label{lem:der-d} Let $\tilde{\phi}$ be the function defined in \eqref{eq:tilde phi}. Then, for every $\nu:=(\alpha-1)k + \beta$ and $\eta:=(\gamma-1)k + \delta$ with $1\le\alpha, \gamma \le n$ and $1\le\beta, \delta\le k$, we have that \begin{equation*} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} = 2\sum_{j=1}^{n}\mu'(\omega_{\alpha,j})X_{(j-1)k+\beta} \end{equation*} and \if\twocol1 \begin{multline*} \frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X_{\eta} \partial X_{\nu}} = 2 \mu'\left(\omega_{\alpha,\gamma}\right) 1_{\{\beta = \delta \}} \\ + 2 \mu''(\omega_{\alpha,\gamma}) X_{(\gamma-1)k+\beta} X_{ (\gamma -1)k + \delta } \\ + 2 \sum_{j=1}^{n}\mu''(\omega_{\alpha,j}) X_{(j-1)k+\beta} X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}} . \end{multline*} \else \begin{equation*} \begin{split} \frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X_{\eta} \partial X_{\nu}} & = 2 \mu'\left(\omega_{\alpha,\gamma}\right) 1_{\{\beta = \delta \}} + 2 \mu''(\omega_{\alpha,\gamma}) X_{(\gamma-1)k+\beta} X_{ (\gamma -1)k + \delta }\\ & \quad + 2 \sum_{j=1}^{n}\mu''(\omega_{\alpha,j}) X_{(j-1)k+\beta} X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}} .
\end{split} \end{equation*} \fi \end{lemma} \begin{proof} First, notice that \if\twocol1 \begin{multline*} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} = \sum_{i=1}^{n}\sum_{j=1}^{n}\mu'(\omega_{i,j})\left[\tfrac{\partial g\left(X\right)}{\partial X_{\nu}}\left(g\left(X\right)\right)^{T} \right.\\ + \left.g\left(X\right)\left(\tfrac{\partial\big(g\left(X\right)^{T}\big)}{\partial X_{\nu}}\right)\right]_{i,j}. \end{multline*} \else \begin{equation*} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} = \sum_{i=1}^{n}\sum_{j=1}^{n}\mu'(\omega_{i,j})\left[\tfrac{\partial g\left(X\right)}{\partial X_{\nu}}\left(g\left(X\right)\right)^{T} + g\left(X\right)\left(\tfrac{\partial\big(g\left(X\right)^{T}\big)}{\partial X_{\nu}}\right)\right]_{i,j}. \end{equation*} \fi Moreover, using that $\nu:=(\alpha-1)k + \beta$, ensures that \[ \frac{\partial g\left(X\right)}{\partial X_{\nu}}=\left[\nabla_{X}g(X)\right]_{\nu}=\begin{pmatrix}0 & \cdots & 0\\ \vdots & 1_{(\alpha,\beta)} & \vdots\\ 0 & \cdots & 0 \end{pmatrix} \in \mathbb{R}^{n\times k}. \] Therefore, using the definition of the function $g$, we have \if\twocol1 \begin{multline*} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} =\sum_{i,j=1}^{n}\mu'(\omega_{i,j})\times\\ \left(\left[\begin{pmatrix}0 & \cdots & 0\\ \vdots & 1_{(\alpha,\beta)} & \vdots\\ 0 & \cdots & 0 \end{pmatrix}\!\begin{pmatrix}X_{_{1}} & \!\cdots \!& X_{(n-1)k+1}\\ \vdots & \! \! & \vdots\\ X_{k} & \!\cdots\! & X_{nk} \end{pmatrix} \right.\right. \\ + \left. \left. \begin{pmatrix}X_{_{1}} & \cdots & X_{k}\\ \vdots & & \vdots\\ X_{(n-1)k+1} & \cdots & X_{nk} \end{pmatrix}\begin{pmatrix}0 & \cdots & 0\\ \vdots & 1_{(\beta,\alpha)} & \vdots\\ 0 & \cdots & 0 \end{pmatrix}\right]_{i,j}\right)\\ =\sum_{i,j=1}^{n}\mu'(\omega_{i,j})\left[\underbrace{\begin{pmatrix}0 & \cdots & 0\\ X_{\beta} & \cdots & X_{(n-1)k+\beta}\\ 0 & \cdots & 0 \end{pmatrix}}_{\textrm{line}\;\alpha} \right. \\ + \left. 
\underbrace{\begin{pmatrix}0 & \cdots & X_{\beta} & \cdots & 0\\ \vdots & & \vdots & & \vdots\\ 0 & \cdots & X_{(n-1)k+\beta} & \cdots & 0 \end{pmatrix}}_{\textrm{column}\;\alpha}\right]_{i,j}. \end{multline*} \else \begin{equation*} \begin{split} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} &=\sum_{i,j=1}^{n}\mu'(\omega_{i,j})\times \left(\left[\begin{pmatrix}0 & \cdots & 0\\ \vdots & 1_{(\alpha,\beta)} & \vdots\\ 0 & \cdots & 0 \end{pmatrix}\!\begin{pmatrix}X_{_{1}} & \!\cdots \!& X_{(n-1)k+1}\\ \vdots & \! \! & \vdots\\ X_{k} & \!\cdots\! & X_{nk} \end{pmatrix} \right.\right. \\ &\left. \left. \quad\quad\quad\quad\quad\quad\quad\quad\quad + \begin{pmatrix}X_{_{1}} & \cdots & X_{k}\\ \vdots & & \vdots\\ X_{(n-1)k+1} & \cdots & X_{nk} \end{pmatrix}\begin{pmatrix}0 & \cdots & 0\\ \vdots & 1_{(\beta,\alpha)} & \vdots\\ 0 & \cdots & 0 \end{pmatrix}\right]_{i,j}\right)\\ &=\sum_{i,j=1}^{n}\mu'(\omega_{i,j})\left[\underbrace{\begin{pmatrix}0 & \cdots & 0\\ X_{\beta} & \cdots & X_{(n-1)k+\beta}\\ 0 & \cdots & 0 \end{pmatrix}}_{\textrm{line}\;\alpha} + \underbrace{\begin{pmatrix}0 & \cdots & X_{\beta} & \cdots & 0\\ \vdots & & \vdots & & \vdots\\ 0 & \cdots & X_{(n-1)k+\beta} & \cdots & 0 \end{pmatrix}}_{\textrm{column}\;\alpha}\right]_{i,j}. 
\end{split} \end{equation*} \fi Using that $\omega_{i,j}=\omega_{j,i}$ for every $1\le i,j\le n$, we indeed obtain \if\twocol1 \begin{align*} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} &=2\mu'(\omega_{\alpha,\alpha})X_{(\alpha-1)k+\beta} \\ &+\sum_{\substack{j=1\\ j\neq\alpha } }^{n}\mu'(\omega_{\alpha,j})X_{(j-1)k+\beta}\\ & +\sum_{\substack{i=1\\i\neq\alpha} }^{n}\mu'(\omega_{i,\alpha})X_{(i-1)k+\beta}\\ &=2\sum_{j=1}^{n}\mu'(\omega_{\alpha,j})X_{(j-1)k+\beta}, \end{align*} \else \begin{equation*} \begin{split} \frac{\partial\tilde{\phi}\left(X\right)}{\partial X_{\nu}} &=2\mu'(\omega_{\alpha,\alpha})X_{(\alpha-1)k+\beta} +\sum_{\substack{j=1\\ j\neq\alpha } }^{n}\mu'(\omega_{\alpha,j})X_{(j-1)k+\beta} +\sum_{\substack{i=1\\i\neq\alpha} }^{n}\mu'(\omega_{i,\alpha})X_{(i-1)k+\beta}\\ &=2\sum_{j=1}^{n}\mu'(\omega_{\alpha,j})X_{(j-1)k+\beta}, \end{split} \end{equation*} \fi which proves the first part. For the second part, we use this formula and obtain \if\twocol1 \begin{align*} &\frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X_{\eta}\partial X_{\nu}} = 2\sum_{j=1}^{n} \Big[ \mu'(\omega_{\alpha,j}) \frac{\partial X_{(j-1)k+\beta}}{\partial X_{\eta}} \\ & \qquad \qquad \qquad \quad + \mu''(\omega_{\alpha,j}) \frac{\partial \omega_{\alpha,j}}{\partial X_{\eta}} X_{(j-1)k+\beta} \Big] \\ &=2\sum_{j=1}^{n} \Big[ \mu'(\omega_{\alpha,j})1_{\{j = \gamma, \beta = \delta \}} \\ & \qquad \qquad + \mu''(\omega_{\alpha,j}) X_{(j-1)k+\beta} (X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}} \\ & \qquad \qquad + X_{ (\gamma -1)k + \delta } 1_{\{ j = \gamma \}}) \Big] \\ & = 2 \mu'\left(\omega_{\alpha,\gamma}\right) 1_{\{\beta = \delta \}} \\ & \quad + 2 \mu''(\omega_{\alpha,\gamma}) X_{(\gamma-1)k+\beta} X_{ (\gamma -1)k + \delta } \\ & \quad + 2 \sum_{j=1}^{n}\mu''(\omega_{\alpha,j}) X_{(j-1)k+\beta} X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}}.
\end{align*} \else \begin{equation*} \begin{split} &\frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X_{\eta}\partial X_{\nu}} \\ &= 2\sum_{j=1}^{n} \Big[ \mu'(\omega_{\alpha,j}) \frac{\partial X_{(j-1)k+\beta}}{\partial X_{\eta}} + \mu''(\omega_{\alpha,j}) \frac{\partial \omega_{\alpha,j}}{\partial X_{\eta}} X_{(j-1)k+\beta} \Big] \\ &=2\sum_{j=1}^{n} \Big[ \mu'(\omega_{\alpha,j})1_{\{j = \gamma, \beta = \delta \}} + \mu''(\omega_{\alpha,j}) X_{(j-1)k+\beta} (X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}} + X_{ (\gamma -1)k + \delta } 1_{\{ j = \gamma \}}) \Big] \\ & = 2 \mu'\left(\omega_{\alpha,\gamma}\right) 1_{\{\beta = \delta \}} + 2 \mu''(\omega_{\alpha,\gamma}) X_{(\gamma-1)k+\beta} X_{ (\gamma -1)k + \delta } \\ & \quad + 2 \sum_{j=1}^{n}\mu''(\omega_{\alpha,j}) X_{(j-1)k+\beta} X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}}. \end{split} \end{equation*} \fi \end{proof} Now the result follows from \cite[Theorem 4]{Li_Orabona_2019}. \begin{proof}[Proof of Theorem \ref{cor:convergence of SGD}] With Lemma \ref{lem:der-d} we can compute the norm of the derivatives of $\tilde{\phi}$. Using Cauchy-Schwarz's inequality, we obtain \begin{align*} \left\Vert\frac{\partial\tilde{\phi}\left(X\right)}{\partial X} \right\Vert^2 &= \sum_{\alpha = 1}^n \sum_{\beta=1}^{k}\left( 2\sum_{j=1}^{n}\mu'(\omega_{\alpha,j})X_{(j-1)k+\beta}\right)^2\\ &\leq 4 n \sum_{\alpha = 1}^n \sum_{\beta=1}^{k}\sum_{j=1}^{n}\left( \mu'(\omega_{\alpha,j})X_{(j-1)k+\beta}\right)^2\\ &\leq 4 n \sum_{\alpha = 1}^n \sum_{\beta=1}^{k}\sum_{j=1}^{n}\left( X_{(j-1)k+\beta}\right)^2 \leq 4 n^2 \left\Vert X \right\Vert^2 . 
\end{align*} Similarly, using Cauchy-Schwarz's inequality twice, we have \if\twocol1 \begingroup\makeatletter\def\f@size{8.5}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}} \begin{align*} &\left\Vert\frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X^2} \right\Vert^2 = \sum_{\nu, \eta =1}^{\ell_{m+1}} \left( \frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X_{\eta}\partial X_{\nu}} \right)^2 \\ & \leq 12 \sum_{\alpha, \gamma = 1}^n \sum_{\beta, \delta=1}^{k} \Big[ 1_{\{\beta = \delta \}}^2 + ( \mu_{\max}'' \lVert X \rVert X_{ (\gamma -1)k + \delta })^2 \\ & \quad + (\mu_{\max}'' \sum_{j=1}^{n} X_{(j-1)k+\beta} X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}} )^2 \Big] \\ & \leq 12 \Big[ n^2 k + n k (\mu_{\max}'')^2 \lVert X \rVert^4 \\ & \quad + (\mu_{\max}'')^2 \sum_{\alpha = 1}^n \sum_{\beta, \delta=1}^{k} \big( \sum_{j=1}^{n} X_{(j-1)k+\beta}^2 \big) \big( \sum_{j=1}^{n} X_{ (j - 1)k + \delta }^2 \big) \Big] \\ & \leq 12 \big[ n^2 k + n k (\mu_{\max}'')^2 \lVert X \rVert^4 + n (\mu_{\max}'')^2 \lVert X \rVert^4 \big]. \end{align*} \endgroup \else \begin{equation*} \begin{split} \left\Vert\frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X^2} \right\Vert^2 &= \sum_{\nu, \eta =1}^{\ell_{m+1}} \left( \frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X_{\eta}\partial X_{\nu}} \right)^2 \\ & \leq 12 \sum_{\alpha, \gamma = 1}^n \sum_{\beta, \delta=1}^{k} \Big[ 1_{\{\beta = \delta \}}^2 + ( \mu_{\max}'' \lVert X \rVert X_{ (\gamma -1)k + \delta })^2 \\ & \quad + (\mu_{\max}'' \sum_{j=1}^{n} X_{(j-1)k+\beta} X_{ (j - 1)k + \delta } 1_{\{ \alpha = \gamma \}} )^2 \Big] \\ & \leq 12 \Big[ n^2 k + n k (\mu_{\max}'')^2 \lVert X \rVert^4 + (\mu_{\max}'')^2 \sum_{\alpha = 1}^n \sum_{\beta, \delta=1}^{k} \big( \sum_{j=1}^{n} X_{(j-1)k+\beta}^2 \big) \big( \sum_{j=1}^{n} X_{ (j - 1)k + \delta }^2 \big) \Big] \\ & \leq 12 \big[ n^2 k + n k (\mu_{\max}'')^2 \lVert X \rVert^4 + n (\mu_{\max}'')^2 \lVert X \rVert^4 \big] .
\end{split} \end{equation*} \fi We define $\Omega = \lbrace \theta \in \mathbb{R}^d \; | \; \lVert \theta \rVert_{\infty} < B_{\Omega} \rbrace$. Using the assumption that $\sup_{j \geq 0} \lVert \theta_{j}\rVert_{\infty} < B_{\Omega}$, as well as $\lvert \sigma \rvert \leq 1$, we can bound $X$ by $\lVert X \rVert \leq B_{\Omega} (\sqrt{l_m} +1) =: B_X$. Therefore, we have that $\left\Vert\frac{\partial\tilde{\phi}\left(X\right)}{\partial X} \right\Vert \leq \tilde{\phi}_{\max}'$ and $\left\Vert\frac{\partial^2\tilde{\phi}\left(X\right)}{\partial X^2} \right\Vert \leq \tilde{\phi}_{\max}''$ with $\tilde{\phi}_{\max}' = 2 n B_X$ and $\tilde{\phi}_{\max}''= \sqrt{12 n \big[ n k + (\mu_{\max}'')^2 B_X^4 (k+1) \big] }$. Hence, $\Phi$ and $\nabla \Phi$ are Lipschitz continuous on $\Omega$ with constants $L_{\Phi}$ and $L_{\nabla \Phi}$ \citep{Lipschitz2020}. We verify the assumptions of Theorem 4 in \cite{Li_Orabona_2019}, which in turn yields our result. We set $f := \Phi$ and remark first that their results still hold when restricting $\Phi$ and $\nabla_{\theta} \Phi$ to be Lipschitz only on the subset $\Omega$. Indeed, by the assumption $\sup_{j \geq 0} \lVert \theta_{j}\rVert_{\infty} < B_{\Omega}$ we know that $\theta$ stays within $\Omega$ for the entire training process. In the remainder of the proof we show that all needed assumptions \textbf{H1}, \textbf{H3} and \textbf{H4'} (as defined in \cite{Li_Orabona_2019}) are satisfied. \textbf{H1}, the Lipschitz continuity of $\nabla \Phi$, holds as outlined above. Let $Z_1, \dotsc, Z_N \sim \P$ be independent and identically distributed random variables with the distribution of the training set. By the stochastic gradient method outlined in Algorithm~\ref{eq:gradient-method}, in each step the approximation of the gradient $\nabla \Phi(\theta_{j})$ is given by the random variable \begin{equation*} G_j := G(\theta_{j}; Z_1, \dotsc, Z_N) := \tfrac{1}{N} \sum_{i=1}^N \nabla_{\theta} \tilde\varphi(\theta_{j}, Z_i).
\end{equation*} The proofs in \citep{Lipschitz2020} imply that \begin{equation*}\label{eq:gradient-Phi-equality} \nabla \Phi(\theta) = \nabla_{\theta} \mathbb{E}[ \tilde\varphi(\theta, Z) ] = \mathbb{E}[ \nabla_{\theta} \tilde\varphi(\theta, Z)], \end{equation*} hence we have $\mathbb{E}[ G_j ] = \nabla \Phi(\theta_j)$, yielding \textbf{H3}.\\ In the proof of Theorem 4 of \cite{Li_Orabona_2019}, assumption \textbf{H4'} is only used for the proof of their Lemma 8. In particular, it is only used to show \begin{equation}\label{eq:proof-corollary-SGD-H4-condition} \mathbb{E}\left[\max_{1 \leq i \leq T} \lVert \nabla \Phi(\theta_i) - G_i \rVert^2 \right] \leq \sigma^2 (1 + \log(T)), \end{equation} for a constant $\sigma \in \mathbb{R}$. Instead of showing \textbf{H4'}, we directly show that \eqref{eq:proof-corollary-SGD-H4-condition} is satisfied. We have \if\twocol1 \begin{equation*} \begin{split} \mathbb{E}&\left[ \max_{1 \leq i \leq T} \lVert \nabla \Phi(\theta_i) - G_i \rVert^2 \right] \\ & \leq \mathbb{E}\left[ \max_{1 \leq i \leq T} \left( 2 \lVert \nabla \Phi(\theta_i) \rVert^2 + 2 \lVert G_i \rVert^2 \right) \right] \\ & \leq 2 L_{\nabla \Phi}^2 + 2 \mathbb{E}\left[ \max_{1 \leq j \leq T} \tfrac{1}{N} \sum_{i=1}^N \lVert \nabla_{\theta} \tilde \varphi(\theta_{j}, Z_i) \rVert^2 \right] \\ & \leq 2 L_{\nabla \Phi}^2 + 2 \mathbb{E}[ L_{\nabla \tilde\varphi}^2] =: \sigma^2, \end{split} \end{equation*} \else \begin{equation*} \begin{split} \mathbb{E}\left[ \max_{1 \leq i \leq T} \lVert \nabla \Phi(\theta_i) - G_i \rVert^2 \right] & \leq \mathbb{E}\left[ \max_{1 \leq i \leq T} \left( 2 \lVert \nabla \Phi(\theta_i) \rVert^2 + 2 \lVert G_i \rVert^2 \right) \right] \\ & \leq 2 L_{\nabla \Phi}^2 + 2 \mathbb{E}\left[ \max_{1 \leq j \leq T} \tfrac{1}{N} \sum_{i=1}^N \lVert \nabla_{\theta} \tilde\varphi(\theta_{j}, Z_i) \rVert^2 \right] \\ & \leq 2 L_{\nabla \Phi}^2 + 2 \mathbb{E}[ L_{\nabla \tilde\varphi}^2] =: \sigma^2, \end{split} \end{equation*} \fi where in the second inequality
we used Cauchy--Schwarz and in the last step we used that the Lipschitz constant of the gradient of $\tilde\varphi$ satisfies $\mathbb{E}[ L_{\nabla \tilde\varphi}^2] < \infty$, since $r \in L^2$ (compare with \cite{Lipschitz2020}). In particular, this implies that \eqref{eq:proof-corollary-SGD-H4-condition} is satisfied. For completeness, we also remark that \textbf{H2} holds as well, since $\Phi$ is Lipschitz. Applying Theorem 4 of \cite{Li_Orabona_2019} concludes the proof. \end{proof} \section{Conclusion} We provide a simple deep-learning-based algorithm to decompose positive semi-definite matrices into low-rank plus sparse matrices. Once the deep neural network is trained, only a single evaluation is needed to decompose any new unseen matrix. The computation time is therefore negligible, which is a clear advantage over classical algorithms. To support our claims, we provided theoretical guarantees for the recovery of the optimal decomposition. To the best of our knowledge, this is the first time that neural networks are used to learn the low-rank plus sparse decomposition of arbitrary unseen matrices. The obtained results are very promising. We believe that this subject merits further investigation for all online applications where the decomposition must be instantaneous and stable with respect to the inputs. \newpage \acks{We thank Hartmut Maennel, Maximilian Nitzschner, Thorsten Schmidt and Martin Stefanik for valuable remarks and helpful discussions. Moreover, the authors would like to acknowledge support for this project from the Swiss National Science Foundation (SNF grant 179114).} \vskip 0.2in
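As a numerical complement to the conclusion, the evaluate-only decomposition can be sketched as a single forward pass: $L = UU^{\top}$ with $U = g(\hat f_{\theta}(h(M)))$ and $S = M - L$. In the NumPy sketch below, a fixed random linear map stands in for the trained network (all names and sizes are illustrative assumptions):

```python
import numpy as np

def half_vectorize(M):
    """h: stack the lower-triangular entries of a symmetric n-by-n matrix
    into a vector of length n(n+1)/2."""
    return M[np.tril_indices(M.shape[0])]

def decompose(M, f_theta, n, k):
    """Evaluate-only decomposition: U = g(f_theta(h(M))) reshaped to n-by-k,
    L = U U^T (low rank), S = M - L (the remainder).
    f_theta is a stand-in for the trained network, NOT a trained model."""
    U = np.asarray(f_theta(half_vectorize(M))).reshape(n, k)
    L = U @ U.T
    return L, M - L

# Toy stand-in "network": a fixed random linear map (illustrative only).
rng = np.random.default_rng(2)
n, k = 4, 1
W = 0.1 * rng.standard_normal((n * k, n * (n + 1) // 2))
L, S = decompose(np.eye(n), lambda v: W @ v, n, k)

assert np.allclose(L + S, np.eye(n))  # the two parts always sum to the input
assert np.linalg.matrix_rank(L) <= k  # L is low rank by construction
```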
\section{Introduction} After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file. Please follow the steps and style guidelines outlined below for submitting your author response. Note that the author rebuttal is optional and, following similar guidelines to previous CVPR conferences, it is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers. It is NOT intended to add new contributions (theorems, algorithms, experiments) that were not included in the original submission. You may optionally add a figure, graph or proof to your rebuttal to better illustrate your answer to the reviewers' comments. Per a passed 2018 PAMI-TC motion, reviewers should not request additional experiments for the rebuttal, or penalize authors for lack of additional experiments. This includes any experiments that involve running code, e.g., to create tables or figures with new results. \textbf{Authors should not include new experimental results in the rebuttal}, and reviewers should discount any such results when making their final recommendation. Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers. The rebuttal must adhere to the same blind-submission as the original submission and must comply with this rebuttal-formatted template. \subsection{Response length} Author responses must be no longer than 1 page in length including any references and figures. Overlength responses will simply not be reviewed. This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. 
\section{Formatting your Response} {\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.} All text must be in a two-column format. The total allowable width of the text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The top margin should begin 1.0 inch (2.54 cm) from the top edge of the page. The bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page. Please number all of your sections and any displayed equations. It is important for readers to be able to refer to any particular equation. Wherever Times is specified, Times Roman may also be used. Main text should be in 10-point Times, single-spaced. Section headings should be in 10 or 12 point Times. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm). Figure and table captions should be 9-point Roman type as in Figure~\ref{fig:onecol}. List and number all bibliographical references in 9-point Times, single-spaced, at the end of your response. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors14}. Where appropriate, include the name(s) of editors of referenced books. \begin{figure}[t] \begin{center} \fbox{\rule{0pt}{1in} \rule{0.9\linewidth}{0pt}} \end{center} \caption{Example of caption. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the response. 
Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Many readers (and reviewers), even of an electronic copy, will choose to print your response in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage[dvips]{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.eps} \end{verbatim} } \section{Conclusion} In this study, a pipeline for weakly- and semi-supervised object detection (WSSOD) is proposed. WSSOD utilizes a small portion of full annotations with bounding box information and a large quantity of weak annotations with only image-level multi-labels. It is able to achieve a better balance between the labeling labor and the detector's performance. WSSOD is composed of two stages, with the first stage training an agent detector and the second stage training a target detector using the pseudo labels generated by the agent. Starting from a unified EM framework, the current pipeline is also carefully examined and improved. The weakly-supervised loss and label attention module are adopted during Stage1 training, to achieve a better agent model and thus a more accurate posterior estimate of the hidden bounding boxes in weakly-supervised data. The traditional pseudo-label generation process is then shown to involve a chain of poorly-founded assumptions at Stage2. A random pseudo-label sampling (RPS) module is then proposed to bypass this chain of assumptions and directly sample a group of pseudo-labels according to their probability. RPS is more theoretically grounded and proves to be effective in both semi-supervised and WSSOD settings.
With the above improvements, the proposed WSSOD is able to achieve $78.9\%$ AP on the Pascal-VOC12 test set, only $0.9\%$ AP behind the fully-annotated baseline. The efficacy of WSSOD is also verified on the MS-COCO dataset. \section{Related Work} \label{sec:related} \noindent \textbf{Object Detection} is a fundamental task in the field of computer vision, which has made great progress in recent years. Contemporary detectors fall into two paradigms: two-stage detectors \cite{girshick2014rich,girshick2015fast,ren2015faster,lin2017feature,he2017mask,cai18cascadercnn}, which first generate region proposals that are then classified and refined in the second stage; and single-stage detectors \cite{redmon2017yolo9000,liu2016ssd,lin2017focal,tian2019fcos}, which remove the proposal-generation step and directly make predictions on top of predefined anchor boxes. Though both paradigms are thoroughly studied in the setting where sufficient fully-labeled data are given, the weakly-supervised and semi-supervised settings are less explored for object detection and their performance is still not satisfactory. In this paper, we explore a balance between detection performance and labeling cost by utilizing weak image-level labels for better semi-supervised learning. \noindent \textbf{Weakly Supervised Object Detection (WSOD)} is a challenging problem that aims to eliminate the need for tight instance-level annotations, requiring only image-level labels. Many studies \cite{gokberk2014multi, bilen2016weakly,song2014weakly} formulated WSOD as a Multiple Instance Learning (MIL) \cite{dietterich1997solving} problem and adopt an alternating pipeline that iteratively trains the detector and infers the instance labels.
To alleviate the non-convexity problem of the MIL strategy, various initialization \cite{song2014weakly,deselaers2010localizing} and regularization methods \cite{diba2017weakly,song2014weakly,song2014learning} were proposed to prevent it from getting stuck at local minima. Recently, \cite{bilen2016weakly} proposed a two-stream network, WSDDN, to simultaneously perform region selection and classification. The region-level scores from the two streams are element-wise multiplied and transformed into image-level scores by summing over all regions. Subsequent works further pushed the performance by incorporating context information \cite{kantorov2016contextlocnet} and multi-stage refinement \cite{tang2017multiple}. However, both rely on the traditional Selective Search \cite{uijlings2013selective}, which is outdated. We instead leverage the objectness scores predicted by the region proposal network and incorporate the idea into the latest detectors. \noindent \textbf{Semi-Supervised Learning (SSL)} studies the scenario where annotated data is sparse. Two representative methods for SSL in image classification are consistency regularization and pseudo labeling. Consistency regularization \cite{xie2020self,sohn2020simple,devries2017improved,rasmus2015semi} regularizes the network predictions on an image and its augmented version. In pseudo labeling \cite{lee2013pseudo,xie2020self,pham2019semi,bachman2014learning}, a teacher model is first trained on the labeled data and then used to make predictions on unlabeled data, which serve as pseudo labels. The success of SSL in image classification has inspired recent research in object detection. CSD \cite{jeong2019consistency} explores consistency regularization by enforcing the detector to make consistent predictions on an image and its horizontally flipped version.
Proposal learning \cite{tang2020proposal} adds noise to the proposal features instead of the raw image for noise-robust proposal-feature predictions. Noisy Student \cite{xie2020self} adopts the student-teacher pseudo-labeling pipeline and ensembles an extra classifier to eliminate the noise introduced by the box-mining phase. Recently, \cite{sohn2020simple} proposed an SSL framework, STAC, for object detection. In this paper, we follow this framework and adopt several improvements to the teacher-model training and pseudo-label generation phases to improve the overall performance. \section{Introduction} \label{sec:intro} With recent advances in deep learning, object detectors such as Faster-RCNN\cite{ren2015faster}, RetinaNet\cite{lin2017focal} and FCOS\cite{tian2019fcos} have made a great impact. Their success is partially attributed to large-scale datasets. However, building such a dataset with accurate bounding-box annotations is time consuming and laborious. The reliance of object detection on detailed annotations poses a great challenge for industrial applications, where only scarce annotations may be available. There have been attempts to alleviate this problem, e.g., weakly-supervised object detection (WSOD) and semi-supervised object detection (SSOD). WSOD~\cite{bilen2016weakly, tang2018pcl, lin2020object, shen2019category, tang2017multiple} avoids the high cost of labeling bounding boxes with weakly-annotated data: only image-level labels (i.e., the categories of objects in an image) are utilized to train detectors. Most of these methods are based on multi-instance learning (MIL)\cite{dietterich1997solving}. SSOD\cite{jeong2020interpolation, tang2019learning, chen2020temporal, sohn2020simple, jeong2019consistency} makes use of a few fully-annotated data (i.e., with both category labels and bounding-box coordinates) as well as a larger amount of unlabeled data.
Training based on consistency regularization\cite{jeong2020interpolation, tang2019learning, jeong2019consistency} or pseudo labels\cite{wang2018weakly,sohn2020simple} are the two popular frameworks in SSOD. However, the performance in both settings still falls far behind the fully-supervised counterpart. The absence of box-wise annotations in WSOD is a bottleneck for predicting accurate spatial bounding boxes; thus the state-of-the-art WSOD method only achieves 10.2\% mAP on the MS-COCO dataset \cite{shen2019category}. As for SSOD algorithms, the utilization of unlabeled data is still unsatisfactory, since the noise caused by incorrect gradients may accumulate and hurt convergence. Therefore, there still exists a large performance gap between semi-supervised and fully-supervised learning. \begin{figure}[!t] \subfigure[Fully-supervised]{ \begin{minipage}{0.44\linewidth} \includegraphics[width=1.6in]{figs/fsod.png} \end{minipage} } \subfigure[Weakly-supervised]{ \begin{minipage}{0.44\linewidth} \includegraphics[width=1.6in]{figs/wsod.png} \end{minipage} } \subfigure[Semi-supervised]{ \begin{minipage}{0.44\linewidth} \includegraphics[width=1.6in]{figs/ssod.png} \end{minipage} } \subfigure[Weakly- and Semi-supervised]{ \begin{minipage}{0.44\linewidth} \includegraphics[width=1.6in]{figs/wssod.png} \end{minipage} } \centering \caption{Different settings of object detection.} \label{fig: different_setting_detection} \end{figure} Although remarkable achievements have been made in WSOD and SSOD, the non-negligible performance gap still hinders them from real applications. Considering the limitations of those explorations, a natural question arises: \textit{Is there a better trade-off between annotation cost and performance?} In this paper, we propose to train detectors in a weakly- and semi-supervised manner. Under this setting, the detector is trained with a few fully-annotated data as well as abundant weakly-annotated data.
The idea stems from the imbalanced labor in annotating images for classification and detection tasks. For instance, it only takes a few seconds to annotate an image with category labels, but it can take minutes to annotate all bounding boxes, especially in crowded scenes. Therefore, it is possible to obtain a large amount of weakly-labeled data at a relatively small expense. In this study, we jointly adopt a small number of fully-annotated samples and a large number of weakly-annotated images to train an object detector, forming a Weakly- and Semi-Supervised Object Detection (WSSOD) pipeline. The fully-annotated samples serve as important anchors, while the weakly-annotated samples help increase the generalization ability of the model. In this way, our method shrinks the gap with fully-annotated baselines and achieves high detection performance while maintaining an affordable annotation cost. The proposed WSSOD pipeline is rooted in an Expectation-Maximization (EM) view over the hidden variables, which are the unknown bounding-box annotations in weakly-supervised data. Specifically, the designed pipeline is a single-cycle EM with two stages. In the first stage, an agent detector is trained on both fully- and weakly-labeled data, to yield a better initialization and a more accurate estimate of the posterior probability of the hidden variables. The trained model is later used to predict bounding boxes and class labels on the weakly-labeled data, which are called pseudo labels. In the second stage, we use fully-annotated data as well as pseudo-labeled data to train a target detector, so as to estimate and maximize the joint probability of both fully- and weakly-annotated data. This pipeline shares some similarity with FixMatch~\cite{sohn2020fixmatch} in semi-supervised classification. Some previous works~\cite{chen2020temporal, sohn2020simple} on object detection also utilized a similar pseudo-label-based method with multiple stages.
However, WSSOD differs from them in the following aspects. First, the performance of the agent detector has the most direct impact on the quality of the pseudo labels, and thus determines the power of the target detector. Unlike \cite{chen2020temporal, sohn2020simple}, which only consider fully-annotated samples in Stage 1, we also utilize the weakly-annotated data to boost the performance of the agent detector. Second, within the EM framework, we are able to explain the motivations and drawbacks of various designs in the original pipeline. We also propose a label attention module to achieve a better estimation of the posterior probability of the hidden bounding boxes in weakly-annotated data. Moreover, an adaptive pseudo-label generation module is proposed to relax the assumptions made when conducting EM, which addresses the ever-present problem of determining the threshold while generating pseudo labels. In contrast, previous studies like~\cite{sohn2020simple} manually tuned the threshold (from $0.9$ to $0.5$) according to the performance of the agent detector, which is resource-consuming and unreliable. In summary, the main contributions of our work are as follows. \begin{enumerate} \item We propose the WSSOD framework to train detectors in a weakly- and semi-supervised manner, which takes both annotation cost and performance into account. \item The current pipeline is designed and analyzed under the EM framework. In order to improve the posterior probability of the unknown bounding boxes in the weakly-annotated data, a novel attention module and loss function are employed in WSSOD. \item We list the major assumptions made in common pipelines under the EM framework, and one major assumption is relaxed by the proposed adaptive pseudo-label generation module, which also reduces the number of key hyper-parameters and improves the quality of pseudo labeling. \item Results on the challenging MS-COCO 2017 dataset demonstrate the effectiveness of our method.
Specifically, we achieve 36.1\% mAP using Faster-RCNN with only 30\% fully-labeled data, which performs comparably with the fully-supervised setting. \end{enumerate} \section{Methodology} \label{sec:methodology} \subsection{Notation} Let $\bI \in \mathbb{R}^{H\times W\times 3}$ denote an image, $\bc \in \{0, 1\}^C$ the weakly-annotated multi-labels of the image, and $C$ the total number of foreground categories. In the following theoretical derivation, $C=1$ is assumed for clarity, without loss of generality, and therefore $\bc$ degrades into a scalar $c$. $\bt = (\tilde{\bt}_1, \tilde{\bt}_2, ...)$ is a full annotation of an image, where each $\tilde{\bt}=(c, \vect{x})$ is a tuple representing a foreground instance $c=1$ with coordinates $\vect{x}$. We denote by $\mathcal{S}$ the index set of the fully-annotated images and by $\mathcal{W}$ the index set of the weakly-annotated images. $\btheta$ denotes the parameters of a neural network which takes $\bI$ as input and outputs box predictions. There are also $N$ proposals (or anchors) in an image to discretize the infinitely many sliding windows in object detection, and each proposal is denoted as $\ba$. We also slightly abuse the symbol $\bt = (\tilde{\bt}_1, \tilde{\bt}_2, ..., \tilde{\bt}_N)$ to denote the model-generated bounding boxes for each proposal. In this case $\tilde{{\bt}}=(c=0, \varnothing)$ is allowed and denotes a background prediction. \subsection{Background: Expectation-Maximization} The objective of weakly- and semi-supervised object detection is to maximize the joint probability of supervised and weakly-supervised data, \begin{equation} \label{eq:total_p} P = \prod_{i \in \cS} P(\bt_i|\bI_i; \btheta)\prod_{i\in \cW } P(c_i |\bI_i; \btheta).
\end{equation} The optimization of $P(\bt_i|\bI_i; \btheta)$ is a well-studied problem, and its log-probability can be decomposed into \begin{equation} \label{eq:log_s} \begin{split} \log{P(\bt|\bI; \btheta)} = &\sum_{j=1}^N {\log P(\tilde{\bt}_{s(j)}|\bI, \ba_j)} \\ = &\sum_{j \in \mathcal{A}_P}{\log P(\vect{x}_{s(j)}|\bI, \ba_j)} \\ & + \sum_{j \in \mathcal{A}}{\log P(c_{s(j)}|\bI, \ba_j)}, \end{split} \end{equation} where $\mathcal{A}$ and $\mathcal{A}_P$ denote the sets of all proposals and of positive proposals, respectively. For common supervised object detection pipelines, the loss design can be viewed as the maximization of the above log-likelihood. Assuming the coordinates of each bounding box are independent and follow a Laplacian distribution, and the classification labels follow a Bernoulli distribution, we obtain the familiar combination of (smooth) L1 loss and cross-entropy loss for each term in Eq. \ref{eq:log_s}, \begin{equation} \begin{split} \log P(\tilde{\bt}_{s(j)}|\bI, \ba_j) &= \log P(c|\bI, \ba_j) + \log P(\vect{x}_{s(j)} |\bI, \ba_j) \\ &= - \left( \lambda_1 CE(c, \vect{p}) + \lambda_2 L1(\vect{x}, \vect{x}_s) \right). \end{split} \end{equation} As for the weakly-supervised probability, since $P(c=1|\bt) = 1$ if, for each image-level label, there is at least one instance belonging to that category, $P(c=1 |\bI; \btheta)$ can be simplified as \begin{equation} \begin{split} P(c=1 |\bI; \btheta) &= \sum_{\bt}P(c=1, \bt |\bI; \btheta) \\ &= \sum_{\bt}P(\bt | \bI; c; \btheta)P(c=1|\bt) \\ &= \sum_{\bt \in \mathcal{B}}P(\bt | \bI; c; \btheta), \end{split} \end{equation} where $\mathcal{B}$ is the set of proposal assignments that satisfy $P(c=1|\bt) = 1$. Note that each $\bt=(\tilde{\bt}_1, \tilde{\bt}_2, ..., \tilde{\bt}_N)$ actually represents one possible combination of the outputs from all proposals, and there are in total $2^N-1$ elements in $\mathcal{B}$.
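For concreteness, the per-proposal negative log-likelihood above (cross-entropy for the Bernoulli class label, L1 for the Laplacian box coordinates) can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the smooth-L1 variant, the weights $\lambda_1, \lambda_2$, and all function names are illustrative, not the paper's implementation.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 over box coordinates (quadratic below beta)."""
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d**2 / beta, d - 0.5 * beta).sum()

def binary_ce(p, c, eps=1e-7):
    """Cross-entropy for a Bernoulli label c against probability p."""
    p = np.clip(p, eps, 1 - eps)
    return -(c * np.log(p) + (1 - c) * np.log(1 - p))

def proposal_nll(cls_prob, c, box_pred, box_target, lam1=1.0, lam2=1.0):
    """-log P(t|I,a): classification term for every proposal,
    regression term only for positives (c == 1)."""
    loss = lam1 * binary_ce(cls_prob, c)
    if c == 1:
        loss += lam2 * smooth_l1(box_pred, box_target)
    return loss
```

A background proposal ($c=0$) contributes only the classification term, matching the split into $\mathcal{A}$ and $\mathcal{A}_P$ above.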
For example, $((c=1, \vect{x})_1, (c=0, \varnothing)_2, ..., (c=0, \varnothing)_N)$ and $((c=0, \varnothing)_1, (c=1, \vect{x})_2, ..., (c=0, \varnothing)_N)$ are both valid elements of $\mathcal{B}$. The above equation involves the probability of the hidden variables $\bt$, and EM is a common method to optimize it. The EM algorithm over both the supervised and weakly-supervised data is the estimation and maximization of the following quantity \begin{equation} \label{eq:Q1} Q = \sum_{i \in \mathcal{S}}\log{P(\bt_i|\bI_i; \btheta)} + \sum_{i\in\mathcal{W}}{Q(c_i, \bI_i)}, \end{equation} where \begin{equation} \label{eq:Q2} Q(c, \bI) = \sum_{\bt \in \mathcal{B}}P(\bt | \bI; c; \btheta')\log P(\bt | \bI; \btheta), \end{equation} with $P(\bt | \bI; \btheta)$ expressed as in Eq. \ref{eq:log_s}, and $\btheta'$ the model parameters of the last EM iteration. \subsection{Pipeline} \label{subsec:pipeline} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{figs/framework.png} \caption{The pipeline design for WSSOD. The cyan curved arrow indicates the flow of fully-annotated data while the red one indicates weakly-annotated data. } \label{fig:pipeline} \end{figure*} Based on the EM framework, we design the pipeline shown in Fig. \ref{fig:pipeline}. It is a two-stage iterative process, but in practice only a single cycle is conducted, constrained by the common training budget for fair comparison. Stage 1 trains an agent detector; it corresponds to the initialization of $\btheta'$ in Eq. \ref{eq:Q2} and tries to yield an accurate $P(\bt | \bI; c; \btheta')$. The instantiation of $P(\bt | \bI; c; \btheta')$ is realized by pseudo-label generation. That is, the predicted bounding boxes of weakly-annotated images are sampled from the agent detector and fed into Stage 2, along with the fully-annotated images, as the target detector's training data. Stage 2 performs both the E-step and the M-step, in the sense that the target detector first evaluates the loss (\emph{i.e.} $-Q$ in Eq.
\ref{eq:Q1}) induced by the pseudo labels and then optimizes it through back-propagation. For the training on the weakly-annotated set in Stage 2, we borrow the idea from semi-supervised learning \cite{sohn2020fixmatch, sohn2020simple} of applying strong augmentation $T$ only to weakly-labeled data and multiplying their loss by a scalar $\lambda_u$. That is to say, Eq. \ref{eq:Q1} now becomes \begin{equation} \label{eq:Q3} Q = \sum_{i \in \mathcal{S}}\log{P(\bt_i|\bI_i; \btheta)} + \lambda_u \sum_{i\in\mathcal{W}}{Q(c_i, T(\bI_i))}. \end{equation} The augmentation $T$ can be treated as a perturbation and $\lambda_u$ is a scale factor; neither changes the optimization objective, so we keep the original notation for convenience. Therefore, under the EM framework, our target becomes how to effectively evaluate $Q$, as well as how to obtain a good agent model and thus a more accurate estimation of $P(\bt | \bI; c; \btheta')$. \subsection{Stage 1: towards an accurate $P(\bt | \bI; c; \btheta')$} The major target of Stage 1 is to obtain a good initialization of the agent detector, $\btheta'$, which is then used to yield a good $P(\bt | \bI; c; \btheta')$ as pseudo labels for the target detector in the second stage. The training process involves both $\cS$ and $\cW$ to yield a better $\btheta'$. The optimization for the labeled set $\cS$ is identical to Eq. \ref{eq:log_s}, and in order to make use of the weakly-annotated set $\cW$, we borrow the idea of WSDDN \cite{bilen2016weakly} and propose a WSL loss. \textbf{Weakly-supervised loss (WSL)}. The image-label probability for weakly-supervised images can be formulated as \begin{equation} \label{eq:wsl} \begin{split} P(c=1 |\bI; \btheta') & \sim \max_j{P(c_j=1 | \bI; \ba_j; \btheta')} \\ & \sim \mathrm{Softmax}_j{P(c_j=1 | \bI; \ba_j; \btheta')}, \end{split} \end{equation} where $j$ is the index of proposal $\ba_j$.
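One plausible single-class reading of the softened max in Eq. \ref{eq:wsl} can be sketched as follows: the hard max over per-proposal foreground probabilities is replaced by a softmax-weighted aggregation, and the image-level cross-entropy follows. This is a hedged NumPy sketch; the exact softening used by the paper may differ, and the names are illustrative.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def image_label_prob(fg_logits):
    """fg_logits: (K,) per-proposal foreground logits. The softmax
    over proposals (Softmax_j in Eq. 8) weights each proposal's
    foreground probability, softening the hard max."""
    w = softmax(fg_logits)                # weight over proposals
    p = 1.0 / (1.0 + np.exp(-fg_logits))  # per-proposal P(c_j = 1)
    return float((w * p).sum())

def wsl_loss(fg_logits, c, eps=1e-7):
    """Cross-entropy between the image-level probability and label c."""
    s = min(max(image_label_prob(fg_logits), eps), 1 - eps)
    return -(c * np.log(s) + (1 - c) * np.log(1 - s))
```

When one proposal dominates, the weighted sum approaches that proposal's probability, recovering the max it replaces.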
Eq. \ref{eq:wsl} implies that maximizing the probability of an image label is converted into maximizing the upper bound of the label probability among all proposals. The hard max is then softened using $\mathrm{Softmax}$ over the classification scores of all proposals. Despite the multiple assumptions involved in the derivation of Eq. \ref{eq:wsl}, as will later be unveiled in Sec. \ref{sec:stage2}, it still serves as an effective tool to leverage $\cW$ and therefore leads to a better initialization of $\btheta'$. For object detectors with a region proposal network (RPN), $P(c=1 |\bI; \btheta')$ is divided into the product of a prior (RPN score) and its posterior (bbox score), \begin{equation} \label{eq:wsl2} \begin{split} P(c^k=1 |\bI; \btheta') \sim & \sum_j [\mathrm{Softmax}_j{P_1(c_j=1 | \bI; \ba_j; \btheta')} \\ & \cdot \mathrm{Softmax}_k{P_2(c_j^k=1 | \bI; \ba_j; \btheta')}], \end{split} \end{equation} where $P_1$ and $P_2$ denote the scores of the RPN and the bbox head, respectively, $j$ is the index of proposals and $k$ is the index of classes. The cross-entropy between $P(c=1 |\bI; \btheta')$ and the given label in $\cW$ then yields the weakly-supervised loss (WSL) adopted in this study. In implementation, only $K$ (\emph{e.g.} $K=512$) proposals are sampled in the RPN stage to calculate the $\mathrm{Softmax}$ w.r.t. their objectness scores, to reduce the influence of the overwhelming number of simple background proposals. This implementation is different from that used in WSDDN \cite{bilen2016weakly}, which relies on the traditional selective search \cite{uijlings2013selective} and uses an extra stream to predict the detection score. \textbf{Label Attention}. The target of the agent model in the EM framework is to provide an accurate estimation of $P(\bt | \bI; c; \btheta')$, instead of the $P(\bt | \bI; \btheta')$ used in purely semi-supervised settings.
This offers us a great advantage when designing the agent model, as it is able to explicitly leverage the image labels of $\cW$ in both the \textbf{train} and \textbf{test} phases, since the test phase is also conducted on the training dataset to generate pseudo labels. This is different from the target detector in Stage 2, which should be designed to be unaware of any image annotations during the test phase. This difference inspires us to design an explicit label attention module, as displayed in Fig. \ref{fig:pipeline}. The input image label is directly encoded into a one-hot vector $\bc$, followed by an FC layer to generate the attention map to be fused with the feature map of the agent detector, $$\vect{F}(\bI) \leftarrow \textrm{Sigmoid}\left( \textrm{FC}(\bc) \right) \odot \vect{F}(\bI), $$ where $\vect{F}$ denotes an intermediate feature map to be multiplied with the attention map derived from the image label over the feature-map channels. \subsection{Stage 2: towards an effective estimation of $Q$} \label{sec:stage2} Estimating $Q(c=1, \bI)$ in Eq. \ref{eq:Q2} is a highly challenging task, as it requires a summation over all possible proposal assignments, with computational complexity $\sim O(2^N-1)$ for each foreground class: as long as there is at least one foreground proposal, the others can be arbitrarily assigned as foreground or background. The complexity of the evaluation alone is overwhelming, not to mention the maximization, so numerous assumptions have been adopted in the literature to simplify it. For instance, \\ \textbf{1)}. A trivial reduction method is to utilize the spatial-smoothness prior to reduce the effective $N$ to $N' (N' \ll N)$, as generated bounding boxes with relatively large IoU (typically larger than 0.5) are considered to belong to the same class; \\ \textbf{2)}.
Some works in weakly-supervised learning assume there is only one instance for each label in the image \cite{bilen2016weakly}, which reduces the complexity to $\mathcal{O}(N')$ for each foreground class; \\ \textbf{3)}. The summation in Eq. \ref{eq:Q2} can be approximated using only its maximum term, \begin{equation} \label{eq:max_q} Q(c=1, \bI) \approx \max_{\bt \in \mathcal{B}}P(\bt | \bI; c; \btheta')\log P(\bt | \bI; \btheta), \end{equation} yet one still needs to find the proposal-assignment protocol that maximizes this equation; \\ \textbf{4)}. The model output $\bt$ that maximizes $P(\bt | \bI; c; \btheta')$ is assumed to also maximize $P(\bt | \bI; c; \btheta')\log P(\bt | \bI; \btheta)$ in Eq. \ref{eq:max_q} and is then used to calculate $Q(c=1, \bI)$. \\ \textbf{5)}. Preset a hard score threshold $p_t$, and assume that $P(\bt | \bI; c; \btheta')=\prod_{m \in \mathcal{A}_P} P(\tilde{\bt}_{s(m)} | \ba_m)\prod_{m \in \mathcal{A}_N}{P(c=0 |\ba_m)}$ is maximized when $\mathcal{A}_P$ and $\mathcal{A}_N$ are the sets of positive and negative proposals separated by the predefined confidence threshold $p_t$ (\emph{e.g.} 0.9). After these five steps of simplification, the original task with complexity $\sim O(2^N-1)$ reduces to one of $\sim O(1)$. This flow of assumptions and simplifications is commonly seen in state-of-the-art semi- or weakly-supervised learning. For example, only one $\bt$ is chosen out of all $2^N$ possible proposal assignments, and a hard confidence threshold ($\emph{e.g.}$, $0.9$ in \cite{sohn2020simple} and $0.6$ in \cite{nguyen2019semi}) is chosen to differentiate foreground from background pseudo labels. \textbf{Random pseudo-label sampling (RPS)}. Many of the above assumptions made when reducing the dimensionality of $Q(c=1, \bI)$ in Eq. \ref{eq:Q2} are expedient choices that are far from being valid.
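For concreteness, the conventional hard-threshold rule of Assumption (5) can be sketched as follows: predictions above a fixed confidence $p_t$ become foreground pseudo labels and everything else is treated as background. This is a minimal illustrative sketch of the deterministic baseline, not any particular paper's implementation.

```python
import numpy as np

def threshold_pseudo_labels(boxes, scores, p_t=0.9):
    """boxes: (N, 4) predicted boxes; scores: (N,) confidences.
    Keep only predictions whose score reaches the hard threshold."""
    keep = scores >= p_t
    return boxes[keep], scores[keep]

# Illustrative predictions: two confident boxes, one uncertain one.
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [30, 30, 40, 40.0]])
scores = np.array([0.95, 0.6, 0.92])
kept_boxes, kept_scores = threshold_pseudo_labels(boxes, scores, p_t=0.9)
```

The rule is deterministic: the 0.6-score box can never become a pseudo label, regardless of how often it is seen, which is the confirmation-bias problem discussed below.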
In this study, we propose a random pseudo-label sampling (RPS) strategy, which directly bypasses the chain of assumptions and proves able to bring significant performance improvements: \begin{equation} \label{eq:sample} \begin{split} Q(c, \bI) &= \sum_{\bt \in \mathcal{B}}P(\bt | \bI; c; \btheta')\log P(\bt | \bI; \btheta) \\ &=\mathbb{E}_{\bt \sim P(\bt | \bI; c; \btheta')}\left[ \log P(\bt | \bI; \btheta)\right] \\ &\approx \frac{1}{B'} \sum_{\bt \in \mathcal{B}'} \log P(\bt | \bI; \btheta), \end{split} \end{equation} where $\mathcal{B}'$ is the set of $\bt$ sampled according to $\bt \sim P(\bt | \bI; c; \btheta')$ and $B'$ is the size of the sampled set $\mathcal{B}'$. Eq. \ref{eq:sample} transforms the summation over $2^N-1$ terms into one over $B'$ terms, and in this study we simply choose $B'=1$, meaning only one $\bt$ (\textbf{not} $\tilde{\bt}$) is sampled at each iteration. Since negative proposals are generally simple predictions and account for the vast majority of predictions\cite{pang2019libra, lin2017focal}, the set of $\bt$ containing at least one positive proposal is actually confined to a small subspace of $\mathcal{B}$. Therefore, reducing $B'$ to a small number, or even $B'=1$, does not introduce much variance into the estimation. The sampling process proceeds as in Algorithm \ref{alg:rps}. It involves two sampling steps. When the pseudo boxes are filtered using NMS, the conventional output only contains the most likely representative at each location. Here we use a \textit{nms\_group} operator to output a set of index groups. In each group, all indices that would originally be filtered by NMS are kept in descending order w.r.t. their scores. This guarantees that pseudo boxes with relatively large overlap can also be sampled, which relaxes Assumption (1). The first boxes of all NMS groups together form the conventionally NMS-filtered pseudo boxes.
Therefore, Step 1 in L-\ref{alg:step1} samples at different locations and Step 2 in L-\ref{alg:step2} samples inside an NMS group. In this regard, every possible combination of pseudo boxes from the proposals is possible in theory, relaxing Assumptions (1--5) directly. \begin{algorithm}[t] \caption{Random pseudo-label sampling (RPS)} \label{alg:rps} \begin{algorithmic}[1] \footnotesize \STATE {\bfseries Input:} Unlabeled image $\bI$, image-level labels $\bc$, agent model $\btheta'$ \STATE {\bfseries Output:} Sampled pseudo-label sets $\vect{X}'$, $\vect{S}'$ \STATE Inference on $\bI$ with $\btheta'$ yields the predicted bboxes $\vect{X}$ and classification scores $\vect{S}$ \FOR{$k = 1$ \TO $C$} \STATE \textbf{if} $\bc_k = 0$ \textbf{then} \STATE \ \ \ \ \ continue \COMMENT{Skip BG category} \STATE $\bG \leftarrow $ \textit{nms\_group}$(\vect{X}, \bS)$ \STATE $\vect{X}' \leftarrow \emptyset$, $\bS' \leftarrow \emptyset$ \COMMENT{Initialize the pseudo-label sets $\vect{X}'$ and $\bS'$} \FOR{$g$ \textbf{in} $\bG$} \STATE $\bS_g \leftarrow \{s_i \in \bS: i \in g\}$ \STATE \COMMENT{Step 1: Keep/drop group by its maximum score} \STATE \textbf{if} $p \sim \mathcal{U}(0,1) > \max(\bS_g)$ \textbf{then} \label{alg:step1} \STATE \ \ \ \ \ continue \STATE \COMMENT{Step 2: Draw a sample from the group by its score} \STATE $\bS_g \leftarrow \bS_g / sum(\bS_g)$ \COMMENT{normalize to sum to 1} \label{alg:step2} \STATE $i \leftarrow$ \textit{random\_choice}$(g, p=\bS_g)$ \STATE $\vect{X}' \leftarrow \vect{X}' \cup \vect{X}_i$, $\bS' \leftarrow \bS' \cup \bS_i$ \ENDFOR \ENDFOR \STATE \RETURN $\vect{X}'$, $\bS'$ \STATE \R{\textit{nms\_group($\vect{X}$, $\bS$)}} returns a group set $\bG$ where each element $g \in \bG$ is an index group in descending order of scores and represents the indices of the bboxes that would be filtered using NMS. \STATE \R{\textit{random\_choice($g$, $p$)}} returns an index $i$ drawn from the index pool $g$, where $p$ gives the probability of each index being picked.
\end{algorithmic} \end{algorithm} Note that even though there is only one proposal-assignment strategy after both Assumption (5) and Eq. \ref{eq:sample}, their theoretical foundations differ significantly. Assumption (5) uses an empirical score threshold to differentiate the true pseudo labels from the background, which is deterministic for each image at every step. This involves meticulous adjustment to find the best threshold value. Moreover, choosing a hard threshold brings unavoidable confirmation bias \cite{arazo2020pseudo}: less confident foreground instances will never be chosen, while over-confident false positives will be given too much credit, which biases the target detector in Stage 2. In RPS, by contrast, the sampling method enlarges the space of pseudo labels. All instances at different locations with different confidence scores, as well as pseudo boxes inside one NMS group (the group that contains all boxes filtered during NMS), can be sampled according to their confidence scores. This has the combined effect of soft pseudo labels and pseudo-label voting. Moreover, the sampling process reduces the reliance of the target detector on the agent, reducing possible confirmation bias and increasing the generalization ability of the target detector.
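The two sampling steps of Algorithm \ref{alg:rps} can be sketched as follows. This is a hedged NumPy sketch: the \textit{nms\_group} grouping, the IoU helper, and the per-class handling are illustrative implementations and may differ from the paper's exact code.

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    lt = np.maximum(a[:2], b[:2])
    rb = np.minimum(a[2:], b[2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[0] * wh[1]
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms_group(boxes, scores, thr=0.5):
    """Index groups in descending score order: each group holds the
    indices that NMS would collapse into one surviving box."""
    order = np.argsort(-scores)
    groups, used = [], np.zeros(len(boxes), dtype=bool)
    for i in order:
        if used[i]:
            continue
        g, used[i] = [i], True
        for j in order:
            if not used[j] and iou(boxes[i], boxes[j]) >= thr:
                g.append(j)
                used[j] = True
        groups.append(g)
    return groups

def rps(boxes, scores, thr=0.5, rng=None):
    """Step 1: keep/drop each group with probability max(S_g).
    Step 2: draw one index inside a kept group by its score."""
    rng = rng or np.random.default_rng()
    picked = []
    for g in nms_group(boxes, scores, thr):
        s_g = scores[g]
        if rng.uniform() > s_g.max():        # Step 1
            continue
        picked.append(rng.choice(g, p=s_g / s_g.sum()))  # Step 2
    return picked
```

Because Step 2 samples proportionally to the scores, overlapping boxes that plain NMS would discard still have a chance of becoming pseudo labels.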
\section{Experiments} \label{sec:experiments} \begin{table} \centering \begin{tabular}{l| c } \hline \textbf{Supervised} & mAP(\%) \\ \hline Faster R-CNN (VOC07) & 73.15 \\ \hline Faster R-CNN (VOC07, 2x) & 72.13 \\ \hline Faster R-CNN (VOC07+12, 2x) & 79.79 \\ \hline SSD512 (VOC07)\cite{jeong2020interpolation} & 73.30 \\ \hline \hline \textbf{Semi-supervised} & mAP(\%) \\ \hline SSD512+CSD\cite{jeong2019consistency} & 75.80 \\ \hline SSD512+CSD+ISD\cite{jeong2020interpolation} & 76.77 \\ \hline STAC \cite{sohn2020simple} & 76.77 \\ \hline WSSOD (ours) & 78.00 \\ \hline \hline \textbf{ Weakly- and Semi-supervised} & mAP(\%) \\ \hline WSSOD (ours)& 78.90 \\ \hline \end{tabular} \caption{ Comparison of mAPs on the PASCAL VOC dataset, where VOC07 is the fully-labeled set and VOC12 is the weakly-labeled set. The numbers of iterations for the Supervised and Supervised (2x) settings on Faster R-CNN are 90k and 180k, respectively. The results of SSD512 are directly borrowed from \cite{jeong2020interpolation, jeong2019consistency}. For STAC, we re-implemented and re-trained their method, since their paper reports performance under COCO-style evaluation. } \label{table: wssod on voc} \end{table} \begin{table*}[!t] \centering \begin{tabular}{l|| c |c| c |c |c } \hline Methods & 1\% COCO & 5\% COCO & 10\% COCO & 20\% COCO & 30\% COCO \\ \hline Supervised & 10.6 & 19.2 & 24.1 & 29.4 & 33.6 \\ \hline Supervised (2x) & 9.0 & 18.0 & 23.1 & 27.6 & 31.8 \\ \hline \hline WSSOD-Target (ours) & 18.4 & 27.4 & 31.3 & 35.0 & 36.1 \\ \hline \end{tabular} \caption{Comparison of mAPs for different ratios of fully-labeled data, where WSSOD-Target refers to the target detector trained in Stage 2. Results are evaluated on COCO test-dev.} \label{table: wssod on coco} \end{table*} To demonstrate the effectiveness of WSSOD, we conduct experiments on the MS-COCO\cite{lin2014microsoft} and PASCAL VOC\cite{everingham2010pascal} datasets, which are the most popular datasets in object detection.
MS-COCO consists of 115,000 trainval images of 80 categories, while VOC07 and VOC12 contain 5,011 and 11,540 images belonging to 20 classes, respectively. Our experiments are conducted under two settings. For MS-COCO, similar to STAC\cite{sohn2020simple}, we randomly sample 1, 5, 10, 20 and 30\% of the data as the fully-annotated set and take the rest as the weakly-labeled set. The corresponding results are evaluated on the MS-COCO test-dev set. As for PASCAL VOC, we utilize VOC07 as the fully-annotated set and VOC12 as the weakly-annotated set; the evaluation results are reported on the VOC07 test set. \subsection{Implementation Details} Though WSSOD does not restrict the agent and target detectors to be of the same type, we choose the typical Faster R-CNN\cite{ren2015faster} for both of them for simplicity. Our code is implemented in PyTorch. For the MS-COCO experiments, we basically follow the \textbf{quick} training schedule proposed in STAC\cite{sohn2020simple}, which trains the agent detector for 90k iterations (1x) and the target detector for 180k iterations (2x)\footnote{STAC trained Stage 1 and Stage 2 for 180k iterations each, but with batch sizes of 8 and 16, respectively. For simplicity, we double the batch size but halve the number of iterations for Stage 1.}. As for PASCAL VOC, we train the agent detector and target detector for 120k and 240k iterations, respectively, with batch size 16. In our setting, each mini-batch contains equal numbers of fully-annotated and weakly-annotated samples. As for the strong augmentation $T$ in Stage 2, we simply use the same strategy as in STAC\cite{sohn2020simple}: it is extended from RandAugment\cite{cubuk2019autoaugment} and contains transformations on color, global geometry, box-level geometry, and cutout\cite{devries2017improved}. For the loss weight of the weakly-annotated data in Stage 2, we take $\lambda_u=2$, since it has been shown to be optimal in \cite{sohn2020simple}.
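The mini-batch composition described above (equal numbers of fully- and weakly-annotated samples per batch) can be sketched as follows. The index ranges, batch size, and function names are hypothetical, purely for illustration.

```python
import random

def mixed_batches(full_idx, weak_idx, batch_size=16, seed=0):
    """Yield batches containing batch_size//2 fully-annotated and
    batch_size//2 weakly-annotated sample indices."""
    rng = random.Random(seed)
    half = batch_size // 2
    f, w = list(full_idx), list(weak_idx)
    rng.shuffle(f)
    rng.shuffle(w)
    for i in range(min(len(f), len(w)) // half):
        yield f[i * half:(i + 1) * half] + w[i * half:(i + 1) * half]

# Hypothetical index pools: 100 fully- and 100 weakly-annotated images.
batches = list(mixed_batches(range(100), range(1000, 1100), batch_size=16))
```

Each epoch thus presents the detector with a balanced mixture, so the supervised and weakly-supervised terms of the objective are optimized jointly.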
\subsection{Results} \label{results} Though semi-supervised and weakly-supervised object detection have been widely studied, few works address the weakly- and semi-supervised setting. Consequently, we mainly compare WSSOD with semi-supervised methods and fully-supervised settings. The experiments on PASCAL VOC are carried out by choosing the VOC07 trainval set as the fully-labeled set and the VOC12 trainval set as the weakly-labeled set. We report the evaluation results on the VOC07 test set in Table \ref{table: wssod on voc}. This table is composed of three parts. Firstly, we show the results under the fully-supervised setting. For Faster R-CNN, we train for both 90k and 180k (2x) iterations to make a fair comparison with WSSOD, since WSSOD trains for 180k iterations in Stage 2. However, the longer training schedule only degrades performance under the fully-supervised setting, which may be attributed to overfitting on the small-scale dataset. We also report the performance of SSD512 by directly borrowing the results from \cite{jeong2019consistency, jeong2020interpolation}, which is better than that of Faster R-CNN. Secondly, we exhibit the results under the semi-supervised setting. For a fair comparison, we omit the category labels and abandon the Global Classification Loss and Label Attention in Stage 1, which yields an agent model of 73.15\% mAP. With the proposed RPS module, we obtain a target model of 78.0\% mAP based on Faster R-CNN, which outperforms \cite{jeong2019consistency, jeong2020interpolation}, even though they are trained on a stronger detector. We also beat the pseudo-label-based method STAC\cite{sohn2020simple}, which demonstrates the effectiveness of the proposed RPS. Thirdly, we demonstrate the results of WSSOD under the weakly- and semi-supervised setting. With the aid of the Global Classification Loss and Label Attention, we boost the performance of the agent model from 73.15\% to 74.59\%, and further obtain a target model of 78.90\% mAP.
It is worth mentioning that the performance of Faster R-CNN under the fully-supervised setting (trained with VOC07 + VOC12) is 79.79\% mAP. This means we achieve performance comparable to the fully-supervised setting with only about one third (VOC07) of the fully-labeled data. The experiments on COCO are carried out by randomly taking part (i.e., 1, 5, 10, 20 and 30\%) of its trainval set as the fully-labeled set and using the rest as the weakly-labeled set. For a fair comparison, we also train models under the fully-supervised setting, using only the available fully-annotated data. The evaluation results of Stage1 and Stage2 are reported in Table \ref{table: wssod on coco}. For fairness, we design the Supervised setting as a counterpart to the agent detector and Supervised (2x) as a counterpart to the target detector. The Supervised setting trains the detector for 90k iterations with a batch size of 16; Supervised (2x) increases the number of iterations to 180k. As the results show, the 2x training schedule leads to poorer performance due to overfitting. With the aid of weakly-annotated data and label attention, the agent detector exceeds all supervised settings. For example, it reaches 28.9\% mAP under the 10\% protocol, which is 4.8\% higher than the supervised setting with an equal number of training iterations. However, a stronger agent detector is only half the goal; a stronger target detector is the ultimate aim. For Stage2, our method reaches 36.1\% mAP under the 30\% protocol, which almost matches the performance under the completely fully-supervised (i.e., 100\% COCO) setting, 37.6\% mAP. \subsection{Ablation Study} \subsubsection{Stage1: Global Classification Loss and Label Attention} \begin{table} \centering \begin{tabular}{c | c | c } \hline Global Loss & Label Atten.
& mAP(\%) \\ \hline & & 24.1 \\ \checkmark & & 26.9 \\ & \checkmark & 26.2 \\ \checkmark & \checkmark & 28.9 \\ \hline \end{tabular} \caption{Effects of Global Classification Loss and Label Attention on agent model performance under the 10\% COCO protocol.} \label{table: abalation on agent} \end{table} \begin{table}[!t] \centering \begin{tabular}{l | c | c | c} \hline \multicolumn{2}{c|}{Methods} & 1\% COCO & VOC \\ \hline \multirow{2}{*}{Semi Supervised} & $\tau=0.9$ & 15.2\% & 76.77\% \\ & RPS & 15.6\% & 78.00\% \\ \hline \multirow{2}{*}{Weakly- and Semi-} & $\tau=0.9$ & 18.2\% & 77.52\% \\ & RPS & 18.4\% & 78.90\%\\ \hline \end{tabular} \caption{Given the same agent model, the performance of the target model with two different pseudo-label generation methods on the COCO and VOC datasets.} \label{table: thr} \end{table} In Sec. \ref{sec:methodology}, the Global Classification Loss and the Label Attention Module were presented to obtain a stronger agent model. Here we separately evaluate their influence on the agent model under the 10\% COCO protocol, as shown in Table \ref{table: abalation on agent}. Using the Global Classification Loss alone yields an improvement of 2.8\% mAP on the agent detector, which demonstrates the effectiveness of leveraging weakly-labeled data in Stage1 training. Employing the Label Attention Module alone also brings a 2.1\% mAP improvement, which can be attributed to the module extracting the contextual information behind category labels. This insight is close to \cite{hu2018relation}, which considers the spatial relations and categories of proposals in object detection. Since the Global Classification Loss and the Label Attention Module are motivated by different intuitions, they can complement each other: by jointly using both, we finally achieve an improvement of 4.8\% mAP on the agent model.
\subsubsection{Stage2: Random Pseudo-Label Sampler} In this part, we conduct experiments under both the semi-supervised and the weakly- and semi-supervised settings to demonstrate the effectiveness of the proposed Random Pseudo-Label Sampler. Since the way pseudo labels are generated only influences Stage2, we take the same agent detector but use two different methods to generate pseudo labels. The most widely used method for choosing pseudo labels is hard thresholding \cite{sohn2020simple,sohn2020fixmatch, chen2020temporal}, as described by Simplification 5. In more detail, a fixed threshold $\tau$ is given first and used to drop predictions whose probability is lower than $\tau$. Previous work \cite{sohn2020simple} has studied the optimal threshold $\tau$ in detail. To avoid repeating this effort, we directly borrow the conclusion of \cite{sohn2020simple} and use the optimal $\tau=0.9$ on both VOC and COCO for comparison. Experiments are conducted on both the PASCAL VOC and MS-COCO datasets, as shown in Table \ref{table: thr}. Clearly, using the RPS module yields a better target model, which means pseudo labels of higher quality are generated. In addition, our RPS module is parameter-free and requires no tuning for different cases.
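For concreteness, the hard-thresholding baseline described above amounts to the following filter (a hypothetical sketch; the prediction format with a `score` field is our own assumption, not code from STAC or our implementation):

```python
def hard_threshold_pseudo_labels(predictions, tau=0.9):
    """Baseline pseudo-label selection: keep only the predicted boxes
    whose confidence is at least the fixed threshold tau (tau = 0.9,
    the optimum reported by STAC)."""
    return [p for p in predictions if p["score"] >= tau]

preds = [{"box": (0, 0, 10, 10), "score": 0.95},
         {"box": (5, 5, 20, 20), "score": 0.60}]
kept = hard_threshold_pseudo_labels(preds)  # only the 0.95 box survives
```

The proposed RPS replaces this fixed cut-off with random sampling of candidate boxes, which is why it has no threshold hyperparameter to tune.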
\section{Introduction and results} A \emph{triangulation} of a topological space $M$ is a simplicial complex $K$ together with a homeomorphism $M\approx |K|$ between $M$ and the geometric realization of $K$. If $M$ is a closed $d$-dimensional manifold then we are particularly interested in \emph{combinatorial} triangulations, where we require that the link (see below) of every simplex in $K$ is homeomorphic to a sphere. A manifold admitting a combinatorial triangulation is called a \emph{PL-manifold}. Given a PL-manifold $M$, what is the minimal number of vertices in a combinatorial trian\-gu\-lation of $M$? This is a difficult question, because there are no standard constructions for triangulations with few vertices of a given manifold, nor are there sufficiently general methods to prove that some specific triangulation is in fact minimal. Apart from classical results on minimal triangulations of spheres and closed surfaces, and a special family of minimal triangulations for certain sphere bundles over a circle (so-called Cs\'asz\'ar tori - see \cite{K}), there exists only a handful of examples for which the minimal triangulations are known. An exhaustive survey of the results and the existing literature on this problem can be found in \cite{L}. See also the recent article \cite{KN}, which discusses the more general question of the number of faces in triangulations of manifolds and polytopes. Generally speaking, one may expect that the minimal number of vertices in a triangulation of a space increases with its complexity. Most results that can be found in the literature use the dimension, connectivity or Betti numbers of $M$ to express lower bounds for the number of vertices in a triangulation of $M$. In this paper we exploit the fundamental group and the Lusternik-Schnirelmann category to improve several estimates of the minimal number of vertices in triangulations of manifolds. In the rest of this section we state our main results.
In Section 2 we introduce and explain prerequisites on triangulations, the Lusternik-Schnirelmann category and the covering type. Finally, in Section 3 we give the proofs of the theorems presented below. Let us begin with a slight improvement of a theorem first proved by Brehm and K\"uhnel \cite{BK}. Our approach is based on the notion of covering type \cite{GMP} and is much simpler than the original one. Recall that Poincar\'e duality together with the positive answer to the Poincar\'e conjecture imply that every simply-connected closed $d$-manifold whose homology is trivial in dimensions less than or equal to $d/2$ is homeomorphic to the $d$-sphere. For the remaining cases the minimal number of vertices in a triangulation can be estimated as follows. \begin{theorem} \label{thm:simply connected} Let $M$ be a simply-connected $d$-dimensional closed PL-manifold, and let $i$ be the minimal index for which $\widetilde H_i(M)\ne 0$. \begin{itemize}[leftmargin=7.5mm] \item[(a)] If $i=\frac{d}{2}$ then every combinatorial triangulation of $M$ has at least $\frac{3d}{2}+k+2$ vertices, where $k$ is the minimal integer for which ${i+k \choose i+1}\ge\mathop{\rm rank}\nolimits H_i(M)$. Moreover, $k$ can be equal to 1 only if $d\in\{2,4,8,16\}$. \item[(b)] If $i<\frac{d}{2}$ then every combinatorial triangulation of $M$ has at least $2d-i+4$ vertices. \end{itemize} In particular, every combinatorial triangulation of a closed, simply-connected $d$-manifold with at most $\frac{3d}{2}+2$ vertices represents the $d$-dimensional sphere. \end{theorem} The main contribution of this paper is the following theorem and its corollaries. In particular, we obtain considerable improvements of the estimates by Brehm-K\"uhnel \cite{BK} and Bagchi-Datta \cite{BD} of the number of vertices in PL-triangulations of homology spheres. By \cite[Corollary 2]{BK} every PL-triangulation of a non-simply-connected $d$-manifold ($d\ge 3$) has at least $2d+3$ vertices.
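As an aside, the bound in part (a) of Theorem \ref{thm:simply connected} is easy to evaluate numerically. The following sketch (Python used purely as a calculator; the function name is ours) computes the minimal $k$ and the resulting lower bound; for instance, $d=4$ with $\mathop{\rm rank}\nolimits H_2(M)=1$ gives the bound $9$, matching K\"uhnel's well-known $9$-vertex minimal triangulation of $\mathbb{CP}^2$.

```python
from math import comb

def min_vertices_half_dim(d, rank_hi):
    """Theorem 1(a): for a simply-connected closed d-manifold whose first
    nontrivial homology sits in the middle dimension i = d/2, every
    combinatorial triangulation has at least 3d/2 + k + 2 vertices,
    where k is the least integer with C(i+k, i+1) >= rank H_i(M)."""
    assert d % 2 == 0, "case (a) requires i = d/2"
    i = d // 2
    k = 1
    while comb(i + k, i + 1) < rank_hi:
        k += 1
    return 3 * d // 2 + k + 2

print(min_vertices_half_dim(4, 1))  # prints 9
```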
That the value cannot be improved in general is shown by K\"uhnel, who constructed a family of $S^{d-1}$-bundles over the circle $S^1$ that admit PL-triangulations with $2d+3$ vertices. However, if the fundamental group of $M$ is not free, then we obtain a better estimate: \begin{theorem} \label{thm:non free pi1} If $M$ is a $d$-dimensional ($d\ge 3$) closed manifold whose fundamental group is not free, then every combinatorial triangulation of $M$ has at least $3d+1$ vertices. \end{theorem} It is worth noting that closed 3-manifolds whose fundamental group is free are quite special, being either the 3-sphere or connected sums of tori $S^2\times S^1$ and twisted tori $S^2\underline{\times}\, S^1$. All the other closed 3-manifolds (in particular, all hyperbolic manifolds) satisfy the assumptions of the above theorem. An important family of examples whose fundamental group is not free are the \emph{homology spheres}, i.e., manifolds whose reduced homology groups vanish except in the top dimension, where the homology group is ${\mathbb{Z}}$. A simply-connected homology sphere is homeomorphic to a sphere by the positive answer to the Poincar\'e conjecture, but for every $d\ge 3$ there exist $d$-dimensional homology spheres that are not homeomorphic to $S^d$. As the fundamental group of a homology sphere must be a perfect group, it cannot be free (unless it is trivial), therefore Theorem \ref{thm:non free pi1} implies the following improvement of the estimate in \cite[Corollary 4]{BK}. \begin{corollary} \label{cor:homology sphere} Every $d$-dimensional homology sphere that admits a combinatorial trian\-gu\-la\-tion with at most $3d$ vertices is a PL-sphere. \end{corollary} Bagchi and Datta \cite{BD} obtained an estimate of the minimal number of vertices in PL-triangulations of ${\mathbb{Z}}_2$-homology spheres, i.e., manifolds whose ${\mathbb{Z}}_2$-homology is isomorphic to that of a sphere (non-trivial examples are 3-dimensional odd lens spaces).
Their results are improved (except in dimensions 3 and 4) by the following: \begin{corollary} \label{cor:Z2 homology sphere} Every $d$-dimensional ${\mathbb{Z}}_p$-homology sphere that admits a combinatorial trian\-gu\-la\-tion with at most $3d$ vertices is a PL-sphere. \end{corollary} We conclude with a useful recognition criterion for combinatorial triangulations. \begin{theorem} \label{thm:small tgl} Let $K$ be a triangulation of a $d$-dimensional manifold. If for every $k\ge 3$ the link of each simplex of codimension $k+1$ in $K$ has at most $3k$ vertices, then the triangulation $K$ is combinatorial. \end{theorem} \section{Preliminaries} In this section we recollect concepts and results that are needed in the proofs of the above theorems. \subsection{Simplicial complexes and PL-triangulations} Here we describe two special constructions and refer the reader to the article of J. Bryant \cite{Bryant} for the definitions of triangulations, skeleta, open and closed stars, links, joins, combinatorial triangulations and other standard concepts of PL-topology. Given a triangulation $M\approx |K|$ we identify the set of vertices of the triangulation with the $0$-skeleton $K^0$ of the simplicial complex $K$. For a subset $V\subseteq K^0$, let $K(V)$ denote the full subcomplex of $K$ spanned by $V$, i.e. the maximal subcomplex of $K$ whose 0-skeleton is $V$. It is easy to check that for every vertex $v\in K^0$ the subcomplex $K(V\cup\{v\})$ can be obtained as the union of $K(V)$ and the join of $v$ with the part of the link of $v$ contained in $K(V)$, which can be expressed by the following formula: \begin{equation} K(V\cup\{v\})=K(V)\cup v*(\mathop{\rm lk}\nolimits(v)\cap K(V)) \end{equation} Furthermore, let us denote by $N(V)\subseteq |K|$ the union of open stars (with respect to $K$) of vertices in $V$. Clearly, the geometric realization $|K(V)|$ is a subspace of $N(V)$.
\begin{lemma} \label{lem:KN} $N(V)=|K|-|K(K^0-V)|$, therefore $N(V)$ is an open neighbourhood of $|K(V)|$ in $|K|$. Moreover, $|K(V)|$ is a deformation retract of $N(V)$. \end{lemma} \begin{proof} The first statement is obvious. In order to obtain a deformation retraction recall that every point $x\in|K|$ can be written uniquely in terms of barycentric coordinates $$x=\sum_{v\in K^0} \lambda_v(x)\cdot v.$$ By definition, for every $x\in N(V)$ there is at least one $v\in V$ for which $\lambda_v(x)>0$, so we may define the retraction $$r\colon N(V)\to |K(V)| \ \ \text{as} \ \ r(x):=\frac{1}{\sum_{v\in V}\lambda_v(x)}\sum_{v\in V} \lambda_v(x)\cdot v.$$ Clearly, $r$ is homotopic to the identity of $N(V)$ through a straight-line homotopy. \end{proof} In particular, if $K^0$ is partitioned into two disjoint subsets $V,V'$ then $N(V)$ and $N(V')$ form an open cover of $|K|$ and $$N(V)\cap N(V')=N(V)-|K(V)|=N(V')-|K(V')|.$$ \begin{lemma} \label{lem:complement homology} Let $K$ be a combinatorial triangulation of a closed $d$-dimensional manifold. If $V\subseteq K^0$ spans a $d$-dimensional simplex in $K$ then $N(V)-|K(V)|$ is homotopy equivalent to a $(d-1)$-dimensional sphere and for $i<d$ $$H_i(K(K^0-V))\cong H_i(K)\ \ \text{and}\ \ H^i(K(K^0-V))\cong H^i(K)$$ (integer (co)homology unless $|K|$ is non-orientable and $i=d-1$, in which case one should use ${\mathbb{Z}}_2$-coefficients). \end{lemma} \begin{proof} The first claim follows easily by excision of the interior of the simplex $K(V)$. To prove the second statement for homology groups let $V'=K^0-V$ and consider the following portion of the Mayer-Vietoris sequence $$H_i(N(V)\cap N(V'))\to H_i(N(V))\oplus H_i(N(V'))\to H_i(K)\to H_{i-1}(N(V)\cap N(V'))$$ Observe that $H_i(N(V))=0$, that $H_i(N(V)\cap N(V'))=H_i(S^{d-1})=0$ for $i<d-1$, and that $H_d(|K|)\to H_{d-1}(N(V)\cap N(V'))$ is surjective (with ${\mathbb{Z}}_2$-coefficients if $|K|$ is non-orientable). \ By exactness of the above sequence $$H_i(K)\cong H_i(N(V'))\cong H_i(K(V'))$$ for $i<d$.
The proof for cohomology groups is similar. \end{proof} \subsection{Lusternik-Schnirelmann category} A subset $A\subseteq X$ of a topological space $X$ is said to be \emph{categorical} if the inclusion map $A\hookrightarrow X$ is null-homotopic (i.e., if there exists a homotopy between the inclusion and the constant map). The minimal cardinality of an open categorical cover of $X$ is denoted $\mathop{\rm cat}\nolimits(X)$ and is called the \emph{Lusternik-Schnirelmann category} of $X$. For example, the category of a space is 1 if, and only if, it is contractible, and the category of a (non-contractible) suspension is 2, because every suspension has a natural cover by two contractible cones. See \cite{CLOT} for a comprehensive survey of the results and the vast literature about Lusternik-Schnirelmann category and related topics. (Keep in mind when comparing the results that the survey \cite{CLOT} and the article \cite{DKR} use the normalized value of $\mathop{\rm cat}\nolimits(X)$, which is one less than in our definition, so that contractible spaces have category 0 and non-contractible suspensions have category 1). Lusternik-Schnirelmann category is tightly related to other homotopy invariants; for example, a well-known result states that if $\mathop{\rm cat}\nolimits(X)\le 2$ then the fundamental group of $X$ is free (see \cite[p.44]{CLOT}). We will base our results on a similar but much deeper theorem proved by Dranishnikov, Katz and Rudyak \cite[Corollary 1.2]{DKR}: if $M$ is a closed $d$-dimensional manifold ($d\ge 3$) and if $\mathop{\rm cat}\nolimits(M)\le 3$ then the fundamental group of $M$ is free. Their proof is based on the notion of category weight, which we briefly recall. Roughly speaking, a non-zero class $u\in \widetilde H^*(M)$ (here we omit the coefficients for cohomology from the notation) has \emph{category weight} at least $k$ if the restriction of $u$ to any union of $k$ categorical subsets of $M$ is trivial.
The precise definition is slightly more technical - see \cite[Section 2.7]{CLOT} or \cite[Section 3]{DKR}. Clearly, if we can find classes $u,v\in \widetilde H^*(M)$ of weight $k$ and $l$ respectively, and such that $0\neq u\cdot v\in\widetilde H^*(M)$, then $\mathop{\rm cat}\nolimits(M)>k+l$. We can summarize the main result of \cite[Section 4]{DKR} as follows: \begin{theorem} \label{thm:DKR} Let $M$ be a closed $d$-dimensional ($d\ge 3$) manifold whose fundamental group is not free. Then there exist suitable systems of coefficients on $M$ and cohomology classes $u\in H^2(M)$ of weight 2 and $v\in H^{d-2}(M)$ of weight 1, such that $0\neq u\cdot v\in H^d(M)$. As a consequence, $\mathop{\rm cat}\nolimits(M)\ge 4$. \end{theorem} \subsection{Homotopy triangulations and covering type} Let us denote by $\Delta(X)$ the minimal number of vertices in a triangulation of a compact polyhedron $X$. Clearly, $\Delta(X)$ is a topological invariant of compact polyhedra, but it is in general very far from being a homotopy invariant. As an easy example let $X_1:=S^1\vee S^1\vee S^1$, the one-point union of three circles, let $X_2$ be the graph with two vertices and four parallel edges between them, and let $X_3:=\Delta_3^{(1)}$, the 1-skeleton of the tetrahedron. All three spaces have the same homotopy type, and yet easy geometric reasoning shows that $\Delta(X_1)=7$, $\Delta(X_2)=5$, $\Delta(X_3)=4$. To obtain a homotopy invariant notion recall that a \emph{homotopy triangulation} of $X$ is a simplicial complex $K$ together with a homotopy equivalence $X\simeq |K|$. Then the minimal number of vertices among all possible homotopy triangulations of $X$ is not only a homotopy invariant of $X$ but it also provides a link to the concept of covering type that was recently introduced by M.~Karoubi and C.~Weibel \cite{KW}. Recall that a cover ${\mathcal{U}}$ of a space $X$ is said to be \emph{good} if all finite non-empty intersections of elements of ${\mathcal{U}}$ are contractible.
Standard examples are covers by convex sets, covers of polyhedra by open stars of vertices and covers of Riemannian manifolds by geodesic balls. One of the main facts about good covers is the Nerve Theorem (see \cite[Corollary 4.G3]{Hatcher}): if ${\mathcal{U}}$ is a good open cover of a paracompact space $X$, then $X\simeq |N({\mathcal{U}})|$, where $|N({\mathcal{U}})|$ is the geometric realization of the nerve of ${\mathcal{U}}$. Karoubi and Weibel defined the \emph{covering type} of $X$ as the minimum cardinality of a good open cover of a space that is homotopy equivalent to $X$. If $X$ admits a homotopy triangulation $X\simeq |K|$, where the simplicial complex $K$ has $n$ vertices, then the open stars of the vertices form a good cover for $|K|$, therefore $\mathop{\rm ct}\nolimits(X)\le n$. Conversely, if there exists a homotopy equivalence $X\simeq Y$ where $Y$ has a good open cover ${\mathcal{U}}$ with $n$ elements, then $X\simeq Y\simeq |N({\mathcal{U}})|$ is a homotopy triangulation of $X$ with $n$ vertices. Thus we have proved the following result (cf. \cite[Theorem 1.2]{GMP}): \begin{proposition} If $X$ has the homotopy type of a compact polyhedron, then $\mathop{\rm ct}\nolimits(X)$ equals the minimal number of vertices in a homotopy triangulation of $X$. \end{proposition} For every compact polyhedron $X$ there is the obvious relation $\Delta(X)\ge\mathop{\rm ct}\nolimits(X)$ and we have seen previously that $\Delta(X)$ can in fact be much bigger than $\mathop{\rm ct}\nolimits(X)$. However, if $M$ is a closed triangulable manifold then there is some evidence that $\Delta(M)$ and $\mathop{\rm ct}\nolimits(M)$ are close and often equal. Notably, Borghini and Minian \cite{BM} showed that for closed surfaces $\Delta(M)$ and $\mathop{\rm ct}\nolimits(M)$ coincide, with the sole exception of the orientable surface of genus 2, where the two quantities differ by one.
There are several useful estimates of $\mathop{\rm ct}\nolimits(X)$ based on other homotopy invariants of $X$. For example, let $\mathop{\rm hdim}\nolimits(X)$ denote the homotopy dimension of $X$, i.e. the minimal dimension of a homotopy triangulation of $X$. Then we have the following estimate (cf. \cite[Proposition 3.1]{KW}): \begin{proposition} \label{prop:top homology} Let $k=\mathop{\rm hdim}\nolimits(X)$. If $\mathop{\rm ct}\nolimits(X)=k+2$, then $X\simeq S^k$, otherwise $\mathop{\rm ct}\nolimits(X)\ge k+3$. \end{proposition} \begin{proof} If $\mathop{\rm ct}\nolimits(X)\le n$, then by the Nerve Theorem $X$ has a homotopy triangulation by a subcomplex of $\Delta_{n-1}$. However, $|\Delta_{n-1}|$ itself is contractible, the only subcomplex of $\Delta_{n-1}$ whose homotopy dimension equals $n-2$ is its $(n-2)$-skeleton $|\Delta_{n-1}^{(n-2)}|\approx S^{n-2}$, and all other subcomplexes have homotopy dimension at most $n-3$. \end{proof} Govc, Marzantowicz and Pave\v{s}i\'{c} \cite{GMP} applied techniques from Lusternik-Schnirelmann category to obtain further estimates of the covering type of a space and proved the following results: \begin{theorem}(\cite[Theorem 4.1]{GMP}) \label{thm:wedge of spheres} The covering type of an $r$-fold wedge of spheres of dimension $i$ equals the minimal integer $n$ for which ${n-1\choose i+1}\ge r$. \end{theorem} and \begin{theorem}(\cite[Corollary 2.4]{GMP}) Let $M$ be a $d$-dimensional closed manifold. Then every triangulation of $M$ has at least $$ 1+d+ \frac{1}{2}\mathop{\rm cat}\nolimits(M)(\mathop{\rm cat}\nolimits(M)-1)$$ vertices. \end{theorem} \section{Proofs} In this section we provide the proofs for the results stated in Section 1. \begin{proof} ({\bf of Theorem \ref{thm:simply connected}}) Observe that $M$ is by assumption simply-connected and hence orientable, which implies that Poincar\'e duality holds with arbitrary coefficients. Let $K$ be a combinatorial triangulation of $M$.
Since $M$ is $d$-dimensional, there exists a $(d+1)$-element subset $V\subset K^0$ spanning a simplex. Lemma \ref{lem:complement homology}, together with Seifert-van Kampen theorem imply that $K(K^0-V)$ is simply connected and that $H_i(K(K^0-V))=H_i(K)$ for $i<d$. Under the assumption (a), if $d=2i$ and if $H_i(M)$ is the first non-trivial homology group of $M$, then the homology of $K(K^0-V)$ is free and concentrated in dimension $i$. It follows that $K(K^0-V)$ is homotopy equivalent to a wedge of $i$-dimensional spheres. By Theorem \ref{thm:wedge of spheres} the covering type of a wedge of $r$ spheres of dimension $i$ is equal to $i+k+1$ where $k$ is the minimal integer satisfying ${i+k \choose i+1}\ge r$. We conclude that $K^0$ has at least $(d+1)+(\frac{d}{2}+k+1)=3\frac{d}{2}+k+2$ elements. Moreover, if $k=1$ then clearly $\mathop{\rm rank}\nolimits H_i(M)=1$, therefore $M$ admits a CW-decomposition with three cells in dimensions $0,i$ and $d$, respectively. Then the $i$-dimensional skeleton is the sphere $S^i$ and the $d$-dimensional cell is attached to $S^i$ by a map with Hopf invariant 1. By the celebrated theorem of Adams, this is possible only if $i\in\{1,2,4,8\}$. Under the assumption (b) $H_i(M)\ne 0$. If $H_i(M)\cong {\mathbb{Z}}$, then by the Universal Coefficients Theorem $H^i(M)\cong{\mathbb{Z}}$ and by Poincar\'e duality $H^{d-i}(M)\cong{\mathbb{Z}}$. On the other hand if $H_i(M)\not\cong {\mathbb{Z}}$, then by Poincar\'e duality $H^{d-i}(M)\not\cong 0$ or ${\mathbb{Z}}$. Lemma \ref{lem:complement homology} yields $H^k(K(K^0-V))\cong H^k(M)$ for $k<d$, which in both cases implies that $\mathop{\rm hdim}\nolimits(K(K^0-V))\ge d-i$, and that the cohomology of $K(K^0-V)$ is not that of a sphere. By Proposition \ref{prop:top homology} the covering type of $K(K^0-V)$ is at least $d-i+3$. We conclude that $K^0$ has at least $(d+1)+(d-i+3)=2d-i+4$ elements. 
\end{proof} \begin{proof} ({\bf of Theorem \ref{thm:non free pi1}}) Let $K$ be a combinatorial triangulation of $M$. Since $M$ is $d$-dimensional, its triangulation must contain at least one $d$-simplex, and so there exist vertices $v_1,\ldots,v_{d+1}\in K^0$ that span a $d$-dimensional simplex in $K$. Let us enumerate the remaining vertices so that $K^0=\{v_1,\ldots,v_{d+1},\ldots,v_n\}$. By adding one vertex at a time we obtain a sequence of subcomplexes $$\Delta_d=K_{d+1}<\ldots<K_k<K_n=K,$$ where $K_k=K(v_1,\ldots,v_k)\le K$. Since $\pi_1(M)$ is non-trivial, there exists a minimal $l$, such that $\pi_1(|K(v_1,\ldots,v_l)|)$ is non-trivial. By expressing $K_l$ as in formula (1) $$K_l=K_{l-1}\cup v_l*(\mathop{\rm lk}\nolimits(v_l)\cap K_{l-1}),$$ we see that $K_l$ is a union of two simply-connected subcomplexes. By Seifert-van Kampen theorem its fundamental group can be non-trivial only if (the geometric realization of) the intersection $L:=\mathop{\rm lk}\nolimits(v_l)\cap K(v_1,\ldots,v_{l-1})$ has at least two components. Let us denote $L':=\mathop{\rm lk}\nolimits(v_l)\cap K(v_{l+1},\ldots,v_n)$. Then $L$ and $L'$ are full subcomplexes of $\mathop{\rm lk}\nolimits(v_l)$ and their vertices determine a partition of the vertices of $\mathop{\rm lk}\nolimits(v_l)$. By lemma \ref{lem:KN} $|L'|$ is a deformation retract of $|\mathop{\rm lk}\nolimits(v_l)|-|L|$. Since $|\mathop{\rm lk}\nolimits(v_l)|\approx S^{d-1}$, we can apply Alexander duality \cite[Theorem 3.44]{Hatcher} and obtain that $H^{d-2}(|L'|)\cong \widetilde H_0(|L|)\ne 0.$ By Proposition \ref{prop:top homology} there exist $d-1$ vertices, which we may label as $v_{l+1},\ldots,v_{l+d-1}$, that span a simplex in $L'$. Since these vertices are contained in $\mathop{\rm lk}\nolimits(v_l)$, they can be joined to $v_l$ in $K$, therefore vertices $v_{l},\ldots,v_{l+d-1}$ span a simplex in $K$. Let us denote $A:=\{v_1,\ldots, v_{d+1}\}$ and $B:=\{v_l,\ldots,v_{l+d-1}\}$. 
$A$ and $B$ are disjoint and together contain $2d+1$ vertices of $K^0$. To conclude the proof, we must show that $K^0-A-B$ contains at least $d$ vertices. Since $\pi_1(M)$ is not free, Theorem \ref{thm:DKR} states that there exist cohomology classes $u\in H^2(M)$ of weight 2 and $v\in H^{d-2}(M)$ of weight 1, such that $u\cdot v\neq 0$. Both $K(A)$ and $K(B)$ are contractible, therefore $N(A\cup B)=N(A)\cup N(B)$ is a union of two categorical sets. It follows that $u|_{N(A\cup B)}=0$, and so the restriction of $v$ to $N(K^0-A-B)$ cannot be trivial, as it would contradict $u\cdot v\neq 0$. Therefore $H^{d-2}(N(K^0-A-B))\neq 0$ and Proposition \ref{prop:top homology} implies that $K^0-A-B$ must contain at least $d$ vertices, as claimed. \end{proof} \begin{proof} ({\bf of Corollaries \ref{cor:homology sphere} and \ref{cor:Z2 homology sphere}}) Every 1- or 2-dimensional homology sphere is a PL-sphere, so we may assume $d\ge 3$. If there is a PL-triangulation of $M$ with fewer than $3d+1$ vertices, then $\pi_1(M)$ is free by Theorem \ref{thm:non free pi1}. Therefore, the assumption $H_1(M)=0$ or $H_1(M;{\mathbb{Z}}_p)=0$ implies that $M$ is simply-connected, and so it is homeomorphic to $S^d$ by the positive answer to the Poincar\'e conjecture. \end{proof} \begin{proof} ({\bf of Theorem \ref{thm:small tgl}}) Let $\sigma$ be a simplex in $K$ of codimension $k+1$ and let $x\in |K|$ be a point lying in the interior of $\sigma$.
Then we may use excision and the homology sequence of the pair to relate the homology of $\mathop{\rm lk}\nolimits(\sigma)$ to the local homology of $|K|$ at $x$: $${\mathbb{Z}}\cong H_d(|K|,|K|-x)\cong H_d(\sigma*\mathop{\rm lk}\nolimits(\sigma),\partial\sigma*\mathop{\rm lk}\nolimits(\sigma))\cong $$ $$\cong H_{d-1}(\partial\sigma*\mathop{\rm lk}\nolimits(\sigma))=H_{d-1}(\Sigma^{d-k-1}\mathop{\rm lk}\nolimits(\sigma))\cong \widetilde H_k(\mathop{\rm lk}\nolimits(\sigma)).$$ It follows that $\mathop{\rm lk}\nolimits(\sigma)$ is a $k$-dimensional homology sphere. If the codimension of $\sigma$ is at most $3$, then $\dim\mathop{\rm lk}\nolimits(\sigma)\le 2$, therefore $\mathop{\rm lk}\nolimits(\sigma)$ is a combinatorial triangulation of a sphere. We will use this as the base for induction. Let $\sigma$ be a simplex of codimension $k+1$, and assume that for each $v\in\mathop{\rm lk}\nolimits(\sigma)$ the link $\mathop{\rm lk}\nolimits(v,\mathop{\rm lk}\nolimits(\sigma))=\mathop{\rm lk}\nolimits(\{v\}\cup\sigma)$ is combinatorially equivalent to $S^{k-1}$. It follows that $\mathop{\rm lk}\nolimits(\sigma)$ is a combinatorial triangulation of a $k$-dimensional homology sphere. By assumption $\mathop{\rm lk}\nolimits(\sigma)$ has at most $3k$ vertices, so Corollary \ref{cor:homology sphere} implies that $\mathop{\rm lk}\nolimits(\sigma)$ is a combinatorial triangulation of $S^k$. We conclude that the links of all simplices in $K$ are homeomorphic to spheres of suitable dimensions, hence the triangulation $K$ is combinatorial. \end{proof}
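The proof of Theorem \ref{thm:simply connected}(a) above rests on the binomial bound of Theorem \ref{thm:wedge of spheres}, which is easy to evaluate. As a quick numerical check (Python used as a calculator; the function name is ours), the covering type of an $r$-fold wedge of $i$-spheres is the minimal $n$ with ${n-1\choose i+1}\ge r$; for example, a wedge of three circles has covering type $4$, realized by the 1-skeleton of the tetrahedron from Section 2.

```python
from math import comb

def covering_type_wedge(i, r):
    """Covering type of an r-fold wedge of i-dimensional spheres
    ([GMP, Theorem 4.1]): the minimal n with C(n-1, i+1) >= r."""
    n = i + 2  # a single i-sphere already needs i + 2 vertices
    while comb(n - 1, i + 1) < r:
        n += 1
    return n

print(covering_type_wedge(1, 3))  # prints 4
```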
\section{Introduction and results} In the problem of sampling with derivatives one tries to recover or approximate a function by sampling the function and a number of its derivatives. In analogy to Hermite interpolation this procedure is sometimes called Hermite sampling. For a well-defined problem one must fix a suitable signal model, which in engineering is usually a space of bandlimited functions (the Paley-Wiener space in mathematical terminology). In recent years the more general model of shift-invariant spaces has received considerable attention as a viable substitute for bandlimited functions. See~\cite{AG00} for an early survey. Hermite sampling can be seen as a purely mathematical problem in approximation theory, but it is also informed by practical considerations. Whereas a sample $f(\lambda )$ at a sampling point $\lambda $ gives its pointwise value, the derivative $f'(\lambda )$ measures the trend of $f$ at $\lambda $, and higher derivatives yield information about the local approximation by Taylor polynomials. In addition, by taking several measurements at each point, one may hope to use fewer sampling points. Based on our experience, we will analyze Hermite sampling in shift-invariant spaces that are generated by certain totally positive functions. We will call a function $g: \Rst \to \bC$ \emph{\tp\ of Gaussian type} if its Fourier transform factors as \begin{equation}\label{eq:tpaa} \hat g(\xi)= \prod_{j=1}^n (1+2\pi i\delta_j\xi)^{-1} \, e^{-c \xi ^2},\qquad \delta_1,\ldots,\delta_n\in\bR, c >0, n\in \bN \cup \{0\} \, . \end{equation} We study the problem of sampling with multiplicities in the shift-invariant space \begin{align*} \sisp^p(g) = \big\{ f\in L^p(\Rst) : f = \sum _{k\in \bZ } c_k g(\cdot - k), \, c\in \ell ^p(\bZ )\big\}, \end{align*} generated by a totally positive function of Gaussian type, where $1\le p\le \infty$.
To describe the sampling process, we fix a sampling set $\Lambda \subseteq \Rst$ and a multiplicity function $\ml: \Lambda \to \bN$, and call $(\Lambda,\ml)$ a set with multiplicity. The number $m_\Lambda (\lambda )$ indicates how many derivatives are sampled at $\lambda \in \Lambda $. We then say that $(\Lambda, \ml)$ is a sampling set for $\sisp^p(g)$ with $1\leq p <\infty$, if there exist constants $A,B>0$ such that \begin{equation}\label{eq:lpstable} A \norm{f}_p^p\leq\sum_{\lambda \in \Lambda} \sum_{j=0}^{\ml(\lambda)-1} \abs{f^{(j)}(\lambda)}^p \leq B \norm{f}_p^p, \qquad f \in \sisp^p(g) \, . \end{equation} If $p=\infty$, a sampling set is defined by the inequalities \begin{align} \label{eq:h7} A \norm{f}_\infty \leq \sup_{\lambda \in \Lambda} \max_{0\le j\le\ml(\lambda)-1} \abs{f^{(j)}(\lambda)} \leq B \norm{f}_\infty, \qquad f \in \sisp^\infty(g) \, . \end{align} From a theoretical point of view the sampling inequality \eqref{eq:lpstable} completely solves the (Hermite) sampling problem. We note that a sampling inequality always leads to a general reconstruction algorithm based on frame theory~\cite{DS52}. In addition, for localized generators the frame algorithm converges even in the correct $L^p$-norm~\cite{CG04}. Thus \eqref{eq:lpstable} is also a first step towards the numerical treatment of the sampling problem. Our objective is the characterization of sampling sets satisfying the sampling inequality~\eqref{eq:lpstable} and to obtain sharp conditions on the sampling set. In Beurling's tradition of complex analysis we will characterize sampling sets in terms of a weighted version of Beurling's lower density \begin{align} \label{eq_bdens} D^{-}(\Lambda,\ml) := \liminf_{r \rightarrow \infty} \inf_{x \in \Rst} \frac{1}{2r} \sum_{\lambda \in \Lambda \cap [x-r,x+r]} \ml(\lambda). \end{align} Within this setting we can already formulate our main result. \begin{tm} \label{th_samp_der_tp} Let $g$ be a totally positive function of Gaussian type. 
Let $\Lambda \subseteq \Rst$ be a separated set and let $m_\Lambda:\Lambda \to \bN$ be a multiplicity function such that $\sup_{\lambda \in \Lambda} \ml(\lambda) <\infty$. (i) If $D^{-}(\Lambda, m_\Lambda) >1$, then $(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^p(g)$ for every $1\le p\le \infty$. (ii) Conversely, if $(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^2(g)$, then $D^{-}(\Lambda, m_\Lambda) \geq 1$. \end{tm} Theorem \ref{th_samp_der_tp} extends one of the results in \cite{grrost17} to sampling with multiplicities. We also have an analogous density result for the shift-invariant space generated by the hyperbolic secant. \begin{tm} \label{th_samp_der_sec} Let $\psi(x)=\sech(a x)=\frac{2}{e^{a x}+e^{-a x}}$ be the hyperbolic secant. Let $\Lambda \subseteq \Rst$ be a separated set and $m_\Lambda $ be a multiplicity function such that $\sup_{\lambda \in \Lambda} \ml(\lambda) <\infty$. (i) If $D^{-}(\Lambda, m_\Lambda) >1$, then $(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^p(\psi)$ for every $1\le p\le \infty$. (ii) Conversely, if $(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^2(\psi)$, then $D^{-}(\Lambda, m_\Lambda) \geq 1$. \end{tm} For comparison, we state the corresponding sampling result for the Paley-Wiener space $$ \mathrm{PW}^2 = \{ f\in \ltwo : \supp \, \hat f \subseteq [-1/2,1/2]\} \, . $$ The statement is analogous to Theorems \ref{th_samp_der_tp} and \ref{th_samp_der_sec} and is considered folklore among complex analysts. \begin{tm} \label{th_samp_der_pw} Let $\Lambda \subseteq \Rst$ be a separated set and let $m_\Lambda$ be a multiplicity function such that $\sup_{\lambda \in \Lambda} \ml(\lambda) <\infty$. (i) If $D^{-}(\Lambda, m_\Lambda) >1$, then $(\Lambda, m_\Lambda)$ is a sampling set for $\mathrm{PW}^2$. (ii) Conversely, if $(\Lambda, m_\Lambda)$ is a sampling set for $\mathrm{PW}^2$, then $D^{-}(\Lambda, m_\Lambda) \geq 1$.
\end{tm} Although folklore, Theorem \ref{th_samp_der_pw} does not seem to have been formulated explicitly in the literature. A very interesting result involving divided differences of samples was proved for the Bernstein space $\mathrm{PW}^\infty $ by Lyubarskii and Ortega-Cerd\`a~\cite{LOC14}. For the Fock space a result similar to Theorem \ref{th_samp_der_pw} was derived early on by Brekke and Seip \cite{bese93}. Theorems~\ref{th_samp_der_tp} and~\ref{th_samp_der_sec} also have several consequences for Gabor systems. Specifically, we characterize semi-regular sets $\Lambda \times \beta\bZ$ that generate a multiwindow Gabor frame with respect to the first $n$ Hermite functions or with respect to a specific finite set of totally positive functions. See Section~6 for the precise formulations. In the literature most sampling results for shift-invariant spaces work with the assumption that the sampling set $\Lambda $ is ``dense enough''. However, when the sufficient density is made explicit, it is usually very far from the known necessary density, even in dimension $1$. In fact, until~\cite{grrost17} all authors use the covering density or maximum gap between samples, and the density then depends on some modulus of continuity of the generator. See~\cite{AF98} for one of the first nonuniform sampling theorems in shift-invariant spaces, \cite{Raz95} for nonuniform sampling with derivatives for bandlimited functions, and~\cite{AGH17,SR16,selvan17} for more recent examples of sufficient conditions for Hermite sampling in terms of the covering density. In the light of \cite{grrost17} the sharp results for sampling with derivatives are perhaps not surprising, but they go far beyond the current state of the art. Our main point is to show the usefulness and power of the established methods, which consist of the combination of Beurling's techniques, spectral invariance, complex analysis, and the comparison of zero sets in different shift-invariant spaces.
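The weighted density \eqref{eq_bdens} that governs Theorems \ref{th_samp_der_tp}--\ref{th_samp_der_pw} is straightforward to approximate for concrete sets. The following sketch is illustrative only (the truncation, window radius, and grid of centers are our own choices, and the limit $r\to\infty$ is replaced by one finite $r$):

```python
# Illustration only: approximate the weighted lower Beurling density
# D^-(Lambda, m_Lambda) of \eqref{eq_bdens} by taking the infimum over a
# grid of window centers, for truncated model sets.

def lower_density(pts, r, centers):
    """min over x in centers of (1/2r) * (sum of multiplicities in [x-r, x+r])."""
    return min(sum(m for lam, m in pts if x - r <= lam <= x + r) / (2 * r)
               for x in centers)

# (1/2)Z with multiplicity 1: density 2 > 1, hence a sampling set by Theorem 1.1
lam1 = [(k / 2, 1) for k in range(-200, 201)]
# 2Z with two derivatives sampled at every point: density exactly 1 (critical)
lam2 = [(2 * k, 2) for k in range(-100, 101)]

centers = [j / 2 for j in range(-100, 101)]   # keep windows inside the truncation
d1 = lower_density(lam1, r=25, centers=centers)
d2 = lower_density(lam2, r=25, centers=centers)
assert abs(d1 - 2.02) < 1e-9      # finite-r overshoot of the limiting value 2
assert abs(d2 - 1.0) < 1e-9
```

The second example shows why counting with multiplicities matters: sampling two derivatives on the coarse lattice $2\bZ$ yields the same weighted density as ordinary sampling on $\bZ$.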
We believe that these methods hold considerable potential in many other situations. To arrive at sharp results, we combine several techniques. Roughly, we proceed in three steps: (i) We use Beurling's method of weak limits and show that the sampling inequality \eqref{eq:h7} for $p=\infty $ is equivalent to the fact that every weak limit of integer translates of $\Lambda $ is a uniqueness set for $V^\infty (g)$. In this way we obtain a general characterization of sampling sets \emph{without inequalities} (Theorem~\ref{th_wl_tp}). (ii) To switch between sampling inequalities for $p=\infty $ and $p<\infty $, we use the theory of localized frames and Sj\"ostrand's version of Wiener's Lemma for convolution-dominated matrices~\cite{sj95}. These two steps are part of a general mathematical formalism that can be applied to many different situations. In particular, they work for shift-invariant spaces with almost arbitrary generators. (iii) The concrete understanding then rests on the analysis of uniqueness sets for a particular shift-invariant space $\sisp ^p(g)$, or in other words, we need to analyze the zero sets of arbitrary functions in $\sisp ^p(g)$. For instance, for the classical Paley-Wiener space this is the relation between the density of the zero set of an entire function and its growth. This is precisely the aspect where we develop new arguments. Firstly, we observe that every function in $V^p(\phi )$ for a Gaussian generator $\phi $ possesses an extension to an entire function, and secondly, we can relate the real zeros of some $f\in \sisp ^p(\phi )$ to the complex zeros of its analytic extension. A similar, but technically more involved strategy works for the hyperbolic secant $\psi (x) = 2(e^{ax} + e^{-ax})\inv $. In a final step we relate the zero sets of functions in \emph{different} shift-invariant spaces to each other.
In this way we develop a direct line of arguments and avoid the detour in~\cite{grrost17} via the characterization of Gaussian Gabor frames. The paper is organized as follows: Section~2 introduces the necessary definitions for sampling in vector-valued shift-invariant spaces. These provide a convenient language to formulate the problem of sampling with derivatives. Section~3 then contains the main structural characterization of sampling with derivatives and the necessary density condition (Proposition~\ref{prop_nec}). Section~4 is devoted to the investigation of the density of zero sets in shift-invariant spaces. This is the part that contains the main arguments and new proof ideas. The proofs of Theorems \ref{th_samp_der_tp}, \ref{th_samp_der_sec}, and \ref{th_samp_der_pw} are then in Section \ref{sec_proofs}. In Section~6 we draw some consequences of the sampling theorems with derivatives for multi-window Gabor frames. Finally, Section~7 contains some of the postponed proofs of the structural results in Sections~2 and~3. As these are essentially known, we explain only the necessary modifications. \section{Vector-valued shift-invariant spaces and sampling} \subsection{Vector-valued shift-invariant spaces} The treatment of sampling with derivatives requires us to formulate several standard concepts for vector-valued functions. In this section, we collect the precise definitions. For the proper formulation of sampling results we make use of the Wiener amalgam space $W_0=W_0(\Rst)$, which consists of continuous functions $g$ such that \begin{align*} \|g\|_{W} := \sum_{k\in\bZ} \max_{x\in [k,k+1]} |g(x)| <\infty. \end{align*} Let $G=(G^1, \ldots, G^N) \in (W_0(\Rst))^N$.
We consider the vector-valued shift-invariant space \begin{align} \sisp^p(G) := \left\{ \sum_{k \in \Zst} c_k G(\cdot-k): c \in \ell^p(\Zst) \right\} \end{align} as a subspace of $(L^p(\Rst))^N$ with norm \begin{align*} \norm{(F^1, \ldots, F^N)}_p := \left(\sum_{j=1}^N \norm{F^j}^p_p\right)^{1/p}, \qquad 1 \leq p < \infty, \end{align*} and $\norm{(F^1, \ldots, F^N)}_\infty = \max_{j=1,\ldots,N} \norm{F^j}_\infty$. We always assume that $G$ has stable integer shifts, i.e. \begin{align} \label{eq_vec_riesz} \bignorm{\sum_{k \in \Zst} c_k G(\cdot-k)}_p \asymp \norm{c}_p. \end{align} \subsection{Sampling and weak limits} We consider tuples of sets $\vec{\Lambda}=(\Lambda^1, \ldots, \Lambda^N)$ with $\Lambda^j \subseteq \Rst$. We say that $\vec{\Lambda}$ is a \emph{sampling set for} $\sisp^p(G)$, $1\leq p \leq \infty $, if \begin{align} \label{eq_vec_samp} \norm{F}_p \asymp \left(\sum_{j=1}^N \norm{F^j|\Lambda^j}^p_p\right)^{1/p} = \Big( \sum _{j=1}^N \sum _{\lambda \in \Lambda ^j} |F^j(\lambda )|^p\Big)^{1/p} , \qquad \text{ for all } F \in \sisp^p(G). \end{align} For $p=\infty$ the condition reads as $\norm{F}_\infty \asymp \max_{j=1,\ldots,N} \norm{F^j|\Lambda ^j}_\infty$. We say that $\vec{\Lambda}$ is a \emph{uniqueness set for} $\sisp^p(G)$ if whenever $F \in \sisp^p(G)$ is such that $F^j \equiv 0$ on $\Lambda^j$, for all $j=1, \ldots, N$, then $F \equiv 0$. Clearly, sampling sets are also uniqueness sets. \medskip We first recall Beurling's notion of a weak limit of a sequence of sets. A sequence $\{\Lambda_n: n \geq 1\}$ of subsets of $\bR$ is said to \emph{converge weakly} to a set $\Lambda \subseteq \bR$, denoted $\Lambda_n \weakconv \Lambda$, if for every open bounded interval $(a,b)$ and every $\varepsilon >0$, there exists $n_0 \in \bN$ such that for all $n \geq n_0$ \begin{align*} \Lambda_n \cap (a,b) \subseteq \Lambda + (-\varepsilon,\varepsilon) \mbox { and } \Lambda \cap (a,b) \subseteq \Lambda_n + (-\varepsilon,\varepsilon).
\end{align*} We let $\WZ(\Lambda)$ denote the class of all sets $\Gamma$ that can be obtained as weak limits of integer translates of $\Lambda$, i.e., $\Gamma \in \WZ(\Lambda)$ if there exists a sequence $\{k_n: n \geq 1\} \subseteq \mathbb{Z}$ such that $\Lambda + k_n \weakconv \Gamma$. We extend this notion to tuples of sets as follows. Given two $N$-tuples of sets $\vec\Lambda=(\Lambda^1,\ldots,\Lambda^N)$ and $\vec\Gamma=(\Gamma^1, \ldots, \Gamma^N)$, we say that $\vec\Gamma \in \WZ(\vec\Lambda)$ if there exists a sequence $\{k_n: n \geq 1\} \subseteq \mathbb{Z}$ such that $\Lambda^j + k_n \weakconv \Gamma^j$ for all $1 \leq j \leq N$. (Note that the limits involve \emph{the same sequence} $\{k_n: n \geq 1\}$ for all $j$.) The following is a vector-valued extension of \cite[Theorem 3.1]{grrost17}. \begin{tm} \label{th_wl_vec} Let $G=(G^1, \ldots, G^N) \in (W_0(\Rst))^N$ have stable integer shifts and let $\vec{\Lambda}=(\Lambda^1, \ldots, \Lambda^N)$ be a tuple of separated sets. Then the following are equivalent. \begin{itemize} \item[(a)] $\vec\Lambda$ is a sampling set for $\sisp^p(G)$ for some $p \in [1,\infty]$. \item[(b)] $\vec\Lambda$ is a sampling set for $\sisp^p(G)$ for all $p \in [1,\infty]$. \item[(c)] Every weak limit $\vec\Gamma \in \WZ(\vec\Lambda)$ is a sampling set for $\sisp^\infty(G)$. \item[(d)] Every weak limit $\vec\Gamma \in \WZ(\vec\Lambda)$ is a set of uniqueness for $\sisp^\infty(G)$. \end{itemize} \end{tm} The proof is similar to the scalar-valued version; a sketch of the proof is given in Section \ref{sec_post}. \section{Sampling with multiplicities} \subsection{Sets with multiplicities and derivatives} For $N \in \Nst$ we let $W^N_0=W^N_0(\Rst)$ be the class of functions $g$ having derivatives up to order $N-1$ in $W_0(\Rst)$. For a set with multiplicity $(\Lambda,\ml)$, we define its height as $\sup_\lambda m_\Lambda(\lambda)$. 
When sampling in shift-invariant spaces with generators in $W^N_0(\Rst)$ we assume that the sampling sets have height $\leq N$. The lower density of $(\Lambda,\ml)$ is defined by \eqref{eq_bdens}. \subsection{Sampling with derivatives} We now describe how the problem of sampling with multiplicities can be reformulated in terms of sampling of vector-valued functions. Let a generator $g \in W^N_0(\Rst)$ with stable integer shifts be given. We define $G \in \left(W_0(\Rst)\right)^N$ by choosing as components the derivatives of $g$, so \[G=(g,g^{(1)}, \ldots, g^{(N-1)}).\] There is an obvious one-to-one correspondence between $f=\sum_k c_k g(\cdot-k)\in \sisp^p(g)$ and $F=(f,f^{(1)}, \ldots, f^{(N-1)})\in \sisp^p(G)$. In addition, since $g$ has stable integer shifts, we have the norm equivalence $$ \|f\|_p \asymp \|c\|_p .$$ Furthermore, since $g^{(j)}\in W_0(\Rst)$ for $1\le j\le N-1$, there is a constant $ B>0$ such that $$ \|f^{(j)}\|_p \le B\|c\|_p\quad \mbox{for}\quad 1\le j\le N-1,$$ and this implies $$ \|f\|_p \asymp \|c\|_p \asymp \|F\|_p.$$ This shows that $G$ has stable integer shifts in the sense of \eqref{eq_vec_riesz}. Second, given a set with multiplicity $(\Lambda,\ml)$ and height at most $N <\infty$, we consider the tuple of sets $\vec\Lambda=(\Lambda^1, \ldots, \Lambda^N)$ given by \begin{align*} \Lambda^k := \left\{\lambda \in \Lambda: \ml(\lambda) \geq k \right\}. \end{align*} Note that $\Lambda^1=\Lambda$. The connection between vector-valued sampling and sampling with derivatives is stated in the following lemma, which is a direct consequence of our notation. \begin{lemma} A set with multiplicity $(\Lambda,\ml)$ and height at most $N<\infty$ is a sampling set for $\sisp^p(g)$ in the sense of \eqref{eq:lpstable}, if and only if $\vec\Lambda=(\Lambda^1,\ldots,\Lambda^N)$ is a sampling set for $\sisp^p(G)$, with $G=(g,g^{(1)}, \ldots, g^{(N-1)})$.
\end{lemma} Finally, we interpret a weak limit $\vec\Gamma \in \WZ(\vec\Lambda)$ as a set with multiplicity by setting $\Gamma := \Gamma^1$ and \begin{align*} \mg(\gamma) := \max\{j \in \Nst: \gamma \in \Gamma^j\}, \qquad \gamma \in \Gamma. \end{align*} In order to keep our notations consistent, we also write $(\Gamma,\mg)\in \WZ(\Lambda,\ml)$ for the current situation. For \emph{separated sets} $\Lambda$, i.e., $\inf\{\abs{\lambda-\lambda'}: \lambda, \lambda' \in \Lambda, \lambda \not= \lambda'\}>0$, we have the following alternative description of weak convergence. \begin{prop} \label{prop_wstar} Let $(\Lambda, \ml)$ be a separated set with multiplicity and finite height $N$, let $(\Gamma, \mg)$ be a set with multiplicity, and $\{k_n: n \geq 1\} \subseteq \Zst$. Then $\Lambda^j-k_n \weakconv \Gamma^j$, as $n \longrightarrow \infty$ for all $j=1,\ldots,N$ if and only if \begin{align*} \sum_{\lambda \in \Lambda} \ml(\lambda) \delta_{\lambda-k_n} \longrightarrow \sum_{\gamma \in \Gamma} \mg(\gamma) \delta_\gamma, \mbox{ as }n \longrightarrow \infty, \end{align*} in the $\sigma(C^*_c,C_c)$ topology (where $C_c$ denotes the class of continuous functions with compact support). \end{prop} A proof of Proposition \ref{prop_wstar} is given in Section \ref{sec_post}. As a consequence, we obtain the following lemma; see, e.g. \cite[Lemma 7.1]{grrost17} for a proof without multiplicities. \begin{lemma} \label{lemma_sep} Let $(\Lambda, \ml)$ be a separated set with multiplicity and finite height, and let $(\Gamma, \mg) \in \WZ(\Lambda, \ml)$. Then $D^{-}(\Gamma, \mg) \geq D^{-}(\Lambda, \ml)$. \end{lemma} \subsection{Characterization of sampling with derivatives} Theorem \ref{th_wl_vec} can be recast in terms of sampling with derivatives. \begin{tm} \label{th_wl_tp} Let $g \in W^N_0(\Rst)$ have stable integer shifts and let $(\Lambda, m_\Lambda)$ be a separated set with multiplicity and height at most $N < \infty$. Then the following are equivalent. 
\begin{itemize} \item[(a)] $(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^p(g)$ for some $p \in [1,\infty]$. \item[(b)] $(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^p(g)$ for all $p \in [1,\infty]$. \item[(c)] Every weak limit $(\Gamma, \mg) \in \WZ(\Lambda, m_\Lambda)$ is a sampling set for $\sisp^\infty(g)$. \item[(d)] Every weak limit $(\Gamma, \mg) \in \WZ(\Lambda, m_\Lambda)$ is a set of uniqueness for $\sisp^\infty(g)$. \end{itemize} \end{tm} For bandlimited functions, only some of the implications in Theorem \ref{th_wl_tp} are valid. These are formulated in terms of the Bernstein space $\mathrm{PW}^\infty$ of continuous bounded functions which are Fourier transforms of distributions supported on $[-1/2,1/2]$. \begin{tm} \label{th_wl_pw} Let $(\Lambda, m_\Lambda)$ be a separated set with multiplicity and finite height. Then the following are equivalent. \begin{itemize} \item[(a)] $(\Lambda, m_\Lambda)$ is a sampling set for $\mathrm{PW}^\infty$. \item[(c)] Every weak limit $(\Gamma, \mg) \in \WZ(\Lambda, m_\Lambda)$ is a sampling set for $\mathrm{PW}^\infty$. \item[(d)] Every weak limit $(\Gamma, \mg) \in \WZ(\Lambda, m_\Lambda)$ is a set of uniqueness for $\mathrm{PW}^\infty$. \end{itemize} \end{tm} As a replacement for the $L^2$ part of Theorem \ref{th_wl_tp}, we have the following result. \begin{prop} \label{prop_pw} Let $(\Lambda, m_\Lambda)$ be a separated set with multiplicity and finite height, and assume that $(\Lambda, m_\Lambda)$ is a sampling set for $\mathrm{PW}^\infty$. Then, for every $\alpha \in (0,1)$, $(\alpha \Lambda, m_\Lambda)$ is a sampling set for $\mathrm{PW}^2$. \end{prop} Theorem \ref{th_wl_pw} and Proposition \ref{prop_pw} are due to Beurling \cite{be66, be89} (without multiplicities); see also \cite[Theorem 2.1]{olul12}. A slight modification of the arguments yields the case with multiplicities.
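The bookkeeping behind this section, passing from a set with multiplicity $(\Lambda, m_\Lambda)$ to its layers $\Lambda^k=\{\lambda: m_\Lambda(\lambda)\ge k\}$ and recovering the multiplicity of a weak limit via $\mg(\gamma)=\max\{j: \gamma\in\Gamma^j\}$, can be made concrete. The following sketch (illustrative only, with toy data of our own) performs the round trip:

```python
# Illustration only: the correspondence between a set with multiplicity
# (Lambda, m_Lambda) of height N and the tuple of layers
# Lambda^k = {lam : m_Lambda(lam) >= k}, k = 1,...,N, together with the
# reconstruction m(lam) = max{k : lam in Lambda^k} used for weak limits.

def layers(mult, N):
    """Lambda^k for k = 1..N from a dict lam -> m_Lambda(lam)."""
    return [{lam for lam, m in mult.items() if m >= k} for k in range(1, N + 1)]

def from_layers(lams):
    """Recover the multiplicity function from the tuple (Lambda^1, ..., Lambda^N)."""
    return {lam: max(k + 1 for k, L in enumerate(lams) if lam in L)
            for lam in lams[0]}

m = {0.0: 3, 0.7: 1, 1.5: 2, 3.0: 1}        # a toy set with multiplicity, height 3
L = layers(m, N=3)
assert L[0] == {0.0, 0.7, 1.5, 3.0}          # Lambda^1 = Lambda
assert L[1] == {0.0, 1.5} and L[2] == {0.0}
assert from_layers(L) == m                   # the round trip recovers m_Lambda
# total number of samples = sum of multiplicities = sum of layer sizes
assert sum(m.values()) == sum(len(Lk) for Lk in L)
```

The last assertion is the elementary fact that the sampling sums in \eqref{eq:lpstable} and \eqref{eq_vec_samp} run over exactly the same data.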
\goodbreak \subsection{Necessary density conditions} \begin{prop} \label{prop_nec} Let $g \in W^N_0(\Rst)$ have stable integer shifts and let $(\Lambda, m_\Lambda)$ be a separated set with multiplicity and height at most $N < \infty$. If $(\Lambda, m_\Lambda)$ is a sampling set for $V^2(g)$, then $D^{-}(\Lambda, m_\Lambda) \geq 1$. A similar statement holds for the Paley-Wiener space $\mathrm{PW}^2$. \end{prop} Proposition \ref{prop_nec} follows from standard results on density of frames, see e.g. \cite{bchl06, fghkr}. See Section \ref{sec_post} for a sketch of a proof. \section{Density of zero sets in shift-invariant spaces} We derive sharp upper bounds for the density of real zeros of functions in shift-invariant spaces with special generators. First, we use methods of complex analysis when the generator is a Gaussian (Section 4.1) or a hyperbolic secant (Section 4.2). The results and arguments are similar for both cases, but the case of the hyperbolic secant requires considerably more work and the analysis of meromorphic functions. In Sections 4.3 and 4.4 we then analyze the zero sets in shift-invariant spaces generated by a totally positive function of Gaussian type by means of a comparison theorem. \subsection{The Gaussian} We now consider Gaussian functions $\phi_a(x) := e^{-a x^2}$ with $a>0$. \begin{lemma} \label{l1} Every $f=\sum_k c_k \phi_a(\cdot-k)\in \sisp ^\infty (\phi_a )$ possesses an extension to an entire function satisfying the growth estimate \begin{align} \label{eq_growth} |f(x+iy)| \leq C \|c\|_\infty e^{a y^2} \qquad x,y \in \bR \, . \end{align} \end{lemma} \begin{proof} Using $$ e^{-a (x+iy-k)^2} = e^{a y^2} e^{-2a i xy} e^{2a i ky} e^{-a (x-k)^2} \, , $$ we obtain that \begin{equation} \label{eq:1} f(x+iy) = e^{a y^2} e^{-2a i xy} \sum _{k\in \bZ } c_k e^{2a i ky} e^{-a (x-k)^2} \, . 
\end{equation} Consequently $$ |f(x+iy)| \leq e^{a y^2} \|c\|_\infty \sum_{k\in \bZ } e^{-a (x-k)^2} \, , $$ and we may take $C= \sup _{0\leq x\leq 1} \sum_{k\in \bZ } e^{-a (x-k)^2}$. Clearly, $x+iy \mapsto f(x+iy)$ is an entire function. \end{proof} Our key observation relates the real zeros of $f\in \sisp^\infty(\phi_a)$ to the zeros of its analytic extension. \begin{lemma} \label{l2} Let $f\in \sisp ^\infty (\phi _a)$ and $\lambda \in \bR $ be a zero of $f$ with multiplicity $m$. Then for every $l\in \frac{\pi}{a}\bZ $, $\lambda + il$ is a zero of the analytic extension of $f$ with the same multiplicity $m$. In particular, if $f^{(j)}(\lambda ) = 0$ for $j= 0, \dots , m-1$, then $f^{(j)}(\lambda +il) = 0$ for $j= 0, \dots , m-1$ and all $l\in \frac{\pi}{a}\bZ $. \end{lemma} \begin{proof} By \eqref{eq:1} we obtain that \begin{align*} f(\lambda +il) = e^{a l^2} e^{-2a i \lambda l} \sum _{k\in \bZ } c_k e^{2a i kl} e^{-a (\lambda -k)^2} = e^{a l^2} e^{-2a i \lambda l} f(\lambda) = 0, \end{align*} because $e^{2a i kl}=1$ for all $l\in \frac{\pi}{a}\bZ$. For higher multiplicities we argue as follows. Note first that $\frac{d^j}{dx ^j}(e^{-a x^2}) = p_j(x) e^{-a x^2}$ for a polynomial of degree $j$ satisfying the recurrence relation $p_{j+1} (x) = -2a x p_j(x) + p_j ' (x)$. It follows that the set $\{ p_j: j=0, \dots , m-1\}$ is a basis for the polynomials of degree smaller than $m$. Now assume that $f\in \sisp ^\infty (\phi_a )$ and $f^{(j)}(\lambda ) = 0$ for $j=0, \dots , m-1$. Then $$ \sum _{k\in \bZ } c_k p_j(\lambda-k) e^{-a (\lambda -k)^2} = 0 \qquad \text{ for } j= 0, \dots , m-1 \, . $$ This implies that for every polynomial $q$ of degree $<m$ \begin{equation} \label{eq:3} \sum _{k\in \bZ } c_k q(\lambda-k) e^{-a (\lambda -k)^2} = 0 \, . 
\end{equation} We now proceed as in~\eqref{eq:1} and find that, for $j=0,\ldots,m-1$, \begin{align*} f^{(j)}(\lambda +il) &= \sum _{k\in \bZ } c_k p_j(\lambda-k+il ) e^{-a (\lambda -k+il )^2} \\ &= e^{a l^2} e^{-2a i \lambda l} \sum _{k\in \bZ } c_k p_j(\lambda-k+il ) e^{2a i kl } e^{-a (\lambda -k)^2} \, . \end{align*} Note that $e^{2a i kl}=1$ for all $l\in \frac{\pi}{a}\bZ$. We insert the Taylor expansion of $p_j$ at $\lambda -k$, i.e., \[p_j(\lambda -k + il) = \sum _{r=0}^j p_j^{(r)}(\lambda -k) \frac{(il)^r}{r!},\] and we obtain that $$ f^{(j)}(\lambda +il) = e^{a l^2} e^{-2a i \lambda l} \sum _{r=0}^j \frac{(il)^r}{r!} \sum _{k\in \bZ } c_k p_j^{(r)}(\lambda -k) e^{-a (\lambda -k)^2} \, . $$ Since each $p_j^{(r)}$ is a polynomial of degree $< m$, \eqref{eq:3} implies that $f^{(j)}(\lambda +il) = 0$ for all $l\in \frac{\pi}{a}\bZ $ and $j=0, \dots , m-1$. This shows that the multiplicity of $\lambda + i l$ is at least that of $\lambda$. Reversing the roles of $\lambda$ and $\lambda + i l$ we see that the multiplicities are actually equal. \end{proof} We recall Jensen's formula, which relates the number of zeros $n(r)$ in a disk $B(0,r)$ to the growth of an entire function by the identity \begin{align} \label{jens} \int_0^R \frac{n(r)}{r} dr = \frac{1}{2\pi} \int_0^{2\pi} \log\abs{f(R e^{i\theta})} d\theta - \log\abs{f(0)} \, . \end{align} This is our main tool (from complex analysis) to prove the following result about the density of real zeros of functions in a shift-invariant space. \begin{tm} \label{tm1} Let $\phi_a(x) = e^{-a x^2}$ with $a>0$. Let $f\in \sisp ^\infty (\phi_a) \setminus \{0\}$ and $N_f$ its set of real zeros, with multiplicities $\mult_f(x)$, $x \in N_f$. Then $D^-(N_f, \mult_f) \leq 1$. \end{tm} \begin{proof} Note that $N_f = \{\lambda \in \bR: f(\lambda ) =0\} $ is the set of \emph{real} zeros of $f$. 
By Lemma \ref{l2}, the set of \emph{complex} zeros of (the analytic extension of) $f$ contains the set $N_f + i\frac{\pi}{a}\bZ \subseteq \bC $, and, moreover, multiplicities are preserved. To prove the theorem, we argue indirectly and assume that $D^-(N_f,\mult_f) >1$. Then there exist $\nu >1$ and $R_0$ such that $$ \sum_{\lambda\in N_{f} \cap [x,x+r]} \mult_f(\lambda) \geq \nu r \qquad \text{ for all } x\in \bR, \, r \geq R_0 \, . $$ Let $n(r)$ be the number of complex zeros of $f$ inside the open disk $B(0,r)\subseteq \bC $ counted with multiplicities. Let us assume for the moment that $f(0)\not=0$. The right-hand side of Jensen's formula~\eqref{jens} can be estimated, by means of the growth estimate~\eqref{eq_growth}, as \begin{equation} \label{eq:5} \frac{1}{2\pi} \int_0^{2\pi} \log\abs{{f}(R e^{i\theta})} \, d\theta - \log\abs{{f}(0)} \leq A+\frac{1}{2\pi} \int_0^{2\pi} a R^2 \sin ^2 \theta \, d\theta = A+\frac{a R^2}{2} \, , \end{equation} where $A:= -\log{\abs{f(0)}}+ \log(\|c\|_\infty C)$. To estimate the left-hand side of \eqref{jens}, we choose $R\in \bN $ and $R\geq R_0$ and partition $[-R^2, R^2) = \bigcup _{k=-R}^{R-1} [kR, (k+1)R)$. On each interval there are at least $\nu R$ real zeros of ${f}$ counted with multiplicity. By symmetry it is enough to consider intervals $[kR,(k+1)R)$ with $0\le k\le R-1$. By Lemma \ref{l2}, for each real zero $\lambda \in [kR, (k+1)R)$, with a certain multiplicity $m$, there are $2 \lfloor \frac{a}{\pi}\sqrt{(R^2)^2 - \lambda ^2} \rfloor +1 \geq \frac{2a}{\pi}\sqrt{ R^4 - (k+1)^2R^2}-1$ complex zeros $\lambda +il, l \in \tfrac{\pi}{a} \bZ$ in the disk $B(0,R^2)$, each with multiplicity $m$. By counting with multiplicities, there are at least $$ \nu R \left( \frac{2a}{\pi}\sqrt{ R^4 - (k+1)^2R^2}-1\right)$$ complex zeros in $B(0,R^2)$ with real part in $[kR, (k+1)R)$ where $0 \le k\le R-1$.
By summing over (positive and negative) $k$, we obtain the following lower bound for the number of complex zeros of ${f}$ in $B(0,R^2)$: $$ n(R^2) \geq 2 \nu R \sum _{k=0}^{R-1}\left( \frac{2a}{\pi}\sqrt{ R^4 - (k+1)^2R^2}-1\right) = \frac{4 \nu aR^4}{\pi}\sum _{k=0}^{R-1} \frac{1}{R}\sqrt{1 - \frac{(k+1)^2}{ R^2}} - 2\nu R^2\, . $$ The last sum is a Riemann sum of the integral $\int _{0}^1 \sqrt{1-x^2} \, dx = \pi /4$. Let $\epsilon >0$ and $R_1\ge R_0$ satisfy $\beta:=\nu (1 - \epsilon-\frac{2}{aR_1^2} ) > 1 $. Then, for some $R_2\ge R_1$ and all $R \geq R_2$, we conclude that $$ n(R^2) \geq a \nu R^4 \left(1 - \epsilon -\frac{2}{aR^2}\right)\ge a\beta R^4, $$ or, equivalently, $$ n(r) \geq a\beta r^2 \mbox{ for } r \geq R_2^2. $$ Therefore, the left-hand side of \eqref{jens} can be estimated as $$ \int _{0} ^R \frac{n(r)}{r} \, dr \geq \int _{R_2^2} ^R \frac{n(r)}{r} \, dr \geq a\beta \left(\frac{R^2}{2} - \frac{R_2^4}{2} \right) \, . $$ Since $\beta > 1 $, this estimate is incompatible with the growth of ${f}$ as encoded in \eqref{eq:5}. Therefore $D^-(N_f,\mult_f) > 1$ is impossible. This concludes the proof for $f$ such that $f(0) \not= 0$. If $f(0)=0$, we let $n \geq 1$ be the vanishing order of $f$ at $0$ and apply the previous argument to $\tilde{f}(z):=z^{-n}f(z)$. Alternatively, one can verify directly that if $f \not\equiv 0$, then there exists $k \in \bZ$ such that $f(k) \not=0$, and consider $\tilde{f}(x)=f(x+k)$ with $\tilde f \in V^\infty (\phi_a )$. \end{proof} \subsection{The hyperbolic secant} Let $\psi _a(x)=\sech(a x)=\frac{2}{e^{a x}+e^{-a x}}$. Our goal is to study the shift-invariant space generated by $\psi _a$. While in \cite{grrost17} we studied $\sisp^2(\psi _a)$ by exploiting a connection to Gabor analysis, and a certain representation of the Zak transform of $\psi _a$ due to Janssen and Strohmer \cite{jast02}, here we consider meromorphic extensions of the functions in $\sisp^\infty(\psi _a)$. We introduce the following notation.
For real $x$ we denote the roundoff error to the nearest integer as $\round{x}:= x-l$, where $l\in\bZ$ and $\round{x}\in[-1/2,1/2)$. \begin{lemma}\label{l3} Every $f=\sum_{k\in\bZ} c_k \psi _a(\cdot-k) \in V^\infty(\psi _a)$ has an extension to a meromorphic function on $\bC$ with poles in $$ P_f \subseteq P := \bZ+\frac{i\pi}{a}\left(\frac{1}{2}+\bZ\right).$$ Moreover, every pole of $f$ is simple and $f$ satisfies the growth estimate \begin{equation}\label{eq:boundf} \begin{array}{rcl} \abs{f(x+iy)} &\le& C \|c\|_\infty \abs{\psi _a(\round{x}+iy)} \\[5pt] &\le& C \|c\|_\infty \min\{\abs{a\round{x}}^{-1}, \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}^{-1}\}. \end{array} \end{equation} \end{lemma} \begin{proof} The meromorphic function $\psi _a(z)=\sech(a z)$ has simple poles on the imaginary axis at $\frac{i\pi}{a}\left(\frac{1}{2}+\bZ\right)$. The identity \begin{eqnarray*} \abs{\cosh a(x-k+iy)} &=& (\sinh^2 a(x-k) + \cos^2 a y)^{1/2} \end{eqnarray*} shows that $\abs{\psi _a (x-k+iy)} \lesssim e^{-a\abs{x-k}}$, if $\abs{x-k} \geq 1$ and $y$ is arbitrary. We consider the covering of $\bC$ given by \begin{align*} U_{s,t} := \left\{x+iy \in \bC: \abs{x-s}< 3/4, \abs{y-\frac{\pi}{a}(t+1/2)} < \frac{3\pi}{4a} \right\}, \qquad s,t \in \bZ. \end{align*} On $U_{s,t}$, the partial sums \begin{align*} f_N(x+iy)=\sum_{k: \abs{k-s} \leq N } c_k \psi _a(x-k+iy) \end{align*} have at most a simple pole at $s + \tfrac{i\pi}{a}(\tfrac{1}{2} + t)$ and are otherwise analytic. Since, for $x+iy \in U_{s,t}$, \begin{align*} \sum_{k: \abs{k-s} > N } \abs{c_k} \abs{\psi _a(x-k+iy)} \lesssim \norm{c}_\infty \sum_{k: \abs{k-s} > N } e^{-a\abs{x-k}} \lesssim \norm{c}_\infty e^{-aN} \, , \end{align*} the partial sums $f_N$ converge uniformly on $U_{s,t} \setminus \left\{s + \tfrac{i\pi}{a}(\tfrac{1}{2} + t)\right\}$ to an analytic extension of $f$.
More precisely, \begin{align*} \sup_{z \in U_{s,t} \setminus \left\{s + \tfrac{i\pi}{a}(1/2 + t)\right\}} \abs{f(z)-f_N(z)} \longrightarrow 0, \qquad \mbox{as } N \longrightarrow \infty. \end{align*} (Note that this is stronger than the usual uniform convergence on compact sets.) This fact implies that $f$ has at most a simple pole at $z=s + \frac{i \pi}{a}(1/2 + t)$. Hence, $f$ is meromorphic on $\bC $ with at most simple poles in $\bZ+\frac{i\pi}{a}\left(\frac{1}{2}+\bZ\right)$. For the growth estimate \eqref{eq:boundf} we let $x+iy \in \mathbb{C} \setminus P_f$ and write $x=l+\round{x}$ with $l\in\bZ$ and $\round{x}\in [-1/2,1/2)$. Then we have $$ \abs{f(x+iy)} \le \abs{\psi _a (\round{x}+iy)}\left(\abs{c_l}+ \sum_{k\ne l} \Big| \frac{c_k\psi _a (x-k+iy)}{\psi _a (\round{x}+iy)} \Big| \right).$$ For all $k\ne l$ we observe that $\abs{x-k} \geq \abs{l-k} - \abs{\round{x}}\ge \frac{1}{2} \geq \abs{\round{x}}$. Therefore, we have $\sinh^2 a (x-k)\ge \sinh^2 a \round{x}$. Since the rational function $r(y)=\tfrac{c+y}{d+y}$ with $d\ge c\ge 0$, $d>0$ is increasing for $y>0$, we obtain that, for all $k\ne l$, $$ \frac{\abs{\psi _a (x-k+iy)}^2}{\abs{\psi _a (\round{x}+iy)}^2} = \frac{\sinh^2 a \round{x}+\cos^2 a y}{\sinh^2 a(x-k)+\cos^2 a y}\le \frac{\sinh^2 a \round{x}+1}{\sinh^2 a(x-k)+1} =\frac{\cosh^2 a\round{x}}{\cosh^2 a(x-k)},$$ and furthermore $$\frac{\cosh a\round{x}}{\cosh a(x-k)}= \frac{e^{a\abs{\round{x}}}(1+e^{-2a\abs{\round{x}}})}{e^{ a\abs{x-k}}(1+e^{-2a\abs{x-k}})} \le 2e^a e^{-a\abs{k-l}}.$$ Therefore, we have $$ \abs{c_l}+ \sum_{k\ne l} \left|\frac{c_k\psi _a (x-k+iy)}{\psi _a (\round{x}+iy)}\right| \le \|c\|_\infty \left(1+ 2e^a\sum_{k\ne l} e^{-a\abs{k-l}} \right) \le C\|c\|_\infty.$$ This proves the first inequality in \eqref{eq:boundf}. 
For the second inequality note that $$ \abs{\sinh a x}\ge \abs{a x}\quad\mbox{for all}\quad x\in\bR$$ and, by periodicity and elementary trigonometric identities, $$ \abs{\cos a y}= \abs{\sin \pi \round{\tfrac{ay}{\pi}-\tfrac{1}{2}}} \ge 2\abs{\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}\quad\mbox{for all}\quad y\in \bR.$$ Hence, we obtain $$\abs{\psi _a (\round{x}+iy)}= \left( \sinh^2 a\round{x} +\cos^2 a y\right)^{-1/2} \le \min\{\abs{ a \round{x}}^{-1},\abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}^{-1}\},$$ which gives the second inequality in \eqref{eq:boundf}. \end{proof} The following result is an analogue of Lemma \ref{l2} for $\sisp^\infty(\psi _a)$. \begin{lemma} \label{l4} Let $f\in \sisp ^\infty (\psi _a )$ and $\lambda \in \bR $ be a zero of $f$ with multiplicity $m$. Then for every $l\in \tfrac{\pi}{a}\bZ $, $\lambda + il$ is a zero of the meromorphic extension of $f$ with the same multiplicity $m$. \end{lemma} \begin{proof} For every $x\in\bR$ and $l=\tfrac{\pi t}{a}\in\tfrac{\pi}{a}\bZ$ we have $$ \cosh a(x+il)=\cosh a x\cos a l +i \sinh a x\sin a l= (-1)^t \cosh a x.$$ Therefore, every $f=\sum_{k\in\bZ} c_k \psi _a(\cdot-k) \in \sisp^\infty(\psi _a)$ satisfies $$ f(x+il)=\sum_k c_k (-1)^t \psi _a (x-k)=(-1)^t f(x).$$ This implies that the Taylor expansions of $f$ around $z_0= x\in\bR$ and around $z_l= x+il$ have exactly the same coefficients, up to a factor $(-1)^t$. In particular, $f^{(j)}(\lambda)=0$ holds for some $\lambda\in\bR$ and $j\ge 0$ if and only if $f^{(j)}(\lambda+il)=0$ for all $l\in\tfrac{\pi}{a}\bZ$. \end{proof} Let $n(r)$ denote the difference of the number of zeros and the number of poles of $f$ in the closed disk $\overline{B(0,r)}$, counted with multiplicities. Jensen's formula for meromorphic functions $f$ with $f(0)\not\in \{0,\infty\}$ says that \begin{align} \label{jens2} \int_0^r \frac{n(t)}{t} dt = \frac{1}{2\pi} \int_0^{2\pi} \log\abs{f(r e^{i\theta})} d\theta - \log\abs{f(0)}, \end{align} see e.g. \cite[pages 4--6]{hayman59}. 
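Formula \eqref{jens2} is easy to confirm numerically for a concrete rational function; the following sketch (a sanity check of our own, not part of the proof) uses $f(z)=(z^2+1)/(z-2)$, which has zeros at $\pm i$, a simple pole at $2$, and $f(0)=-1/2$, so that $n(t)=0$ for $t<1$, $n(t)=2$ for $1\le t<2$, and $n(t)=1$ for $t\ge 2$.

```python
import cmath
import math

# Sanity check of Jensen's formula \eqref{jens2} for the meromorphic function
# f(z) = (z^2 + 1)/(z - 2).  For r > 2 the left-hand side is
# int_0^r n(t)/t dt = 2*log 2 + log(r/2) = log 2 + log r.

def f(z):
    return (z * z + 1) / (z - 2)

r = 5.0
lhs = 2 * math.log(2) + math.log(r / 2)

# Midpoint rule for (1/2pi) * int_0^{2pi} log|f(r e^{i theta})| d theta;
# the integrand is smooth and periodic, so the rule converges very fast.
M = 20000
mean = sum(math.log(abs(f(r * cmath.exp(1j * 2 * math.pi * (t + 0.5) / M))))
           for t in range(M)) / M
rhs = mean - math.log(abs(f(0)))
assert abs(lhs - rhs) < 1e-3
```

Here both sides equal $\log 10 \approx 2.3026$: the circular mean of $\log\abs{f}$ contributes $2\log r$ from the zeros minus $\log r$ from the pole.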
The special case $f(0)=0$ or $\infty$ is treated as follows: if $f$ has a zero or pole at $0$, choose $m\in\bZ$ such that $\lim_{z\to 0} f(z)/z^m=C_m\ne 0$. Then \begin{align} \label{jens3} \int_0^r \frac{n(t)}{t} dt = \frac{1}{2\pi} \int_0^{2\pi} \log\abs{f(r e^{i\theta})} d\theta - \log\abs{C_m} -m\log r. \end{align} After this excursion to meromorphic functions we can now prove an analogue of Theorem~\ref{tm1}. \begin{tm} \label{th_zeros_sec} Let $f\in \sisp ^\infty (\psi _a )\setminus \{0\}$ and $N_f$ its set of real zeros with multiplicities $\mult_f$. Then $D^-(N_f,\mult_f) \leq 1$. \end{tm} The main part of the proof is an estimate of the integral in Jensen's formula. \begin{lemma}\label{prop:boundint} For every $f\in V^\infty(\psi _a)$ we have \begin{equation}\label{eq:boundint} \sup_{r>1} \frac{1}{2\pi} \int_0^{2\pi} \log\abs{f(re^{i\theta})}\,d\theta <\infty. \end{equation} \end{lemma} \begin{proof} We divide the integral into four pieces corresponding to $$ \theta\in I_j=\left[-\frac{\pi}{4},\frac{\pi}{4}\right]+\frac{j\pi}{2},\qquad j=0,1,2,3.$$ For $\theta \in I_0\cup I_2$, we let $$ r e^{i\theta} = \pm\sqrt{r^2-y^2}+iy\quad\mbox{where}\quad y\in \left[-\frac{r}{\sqrt{2}},\frac{r}{\sqrt{2}}\right].$$ By \eqref{eq:boundf}, we have $$ \log \abs{f(re^{i\theta})} \le \log (C\|c\|_\infty) - \log \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}$$ and (using $d\theta = \pm dy/\sqrt{r^2-y^2}$) $$\frac{1}{2\pi} \int_{I_0\cup I_2} \log\abs{f(r e^{i\theta})}\,d\theta \le \frac{1}{2}\log (C\|c\|_\infty) - \frac{1}{\pi} \int_{-\frac{r}{\sqrt{2}}}^{\frac{r}{\sqrt{2}}} \log \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}\,\frac{dy}{\sqrt{r^2-y^2}}.$$ Note that $\log \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}\le 0$ and $\sqrt{r^2-y^2}\ge r/\sqrt{2}$ for all $y\in \left[-\tfrac{r}{\sqrt{2}},\tfrac{r}{\sqrt{2}}\right]$. 
Therefore, $$ - \frac{1}{\pi} \int_{-\frac{r}{\sqrt{2}}}^{\frac{r}{\sqrt{2}}} \log \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}\,\frac{dy}{\sqrt{r^2-y^2}} \le \frac{\sqrt{2}}{\pi r} \int_{-\frac{r}{\sqrt{2}}}^{\frac{r}{\sqrt{2}}} \Big| \log \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}\Big| \,dy .$$ For the last integral, we use the substitution $u=\tfrac{ay}{\pi}$ and observe that the resulting integrand is even and periodic with period $1$. This gives for all $c<d$ \begin{eqnarray*} \int_c^d \abs{ \log \abs{2\round{\tfrac{ay}{\pi}-\tfrac{1}{2}}}}\,dy &=& \frac {\pi}{a} \int_{\tfrac{ac}{\pi}}^{\tfrac{ad}{\pi}} \abs{ \log \abs{2\round{u-\tfrac{1}{2}}}}\,du \\[5pt] &\le& \frac {2\pi}{a} \left(\frac{ad}{\pi}-\frac{ac}{\pi}+1\right) \int_0^{1/2} \abs{ \log (2u)}\,du =d-c+\frac{\pi}{a}, \end{eqnarray*} and finally $$\frac{1}{2\pi} \int_{I_0\cup I_2} \log|f(re^{i\theta})|\,d\theta \le \frac{1}{2}\log (C\|c\|_\infty) + \frac{\sqrt{2}}{\pi r}\left(\sqrt{2} r+\frac{\pi}{a}\right). $$ In the same way, for $\theta \in I_1\cup I_3$ we let $$ r e^{i\theta} = x \pm i\sqrt{r^2-x^2}\quad\mbox{where}\quad x\in \left[-\frac{r}{\sqrt{2}},\frac{r}{\sqrt{2}}\right],$$ and obtain from \eqref{eq:boundf} $$\frac{1}{2\pi} \int_{I_1\cup I_3} \log\abs{f(r e^{i\theta})}\,d\theta \le \frac{1}{2}\log (C\|c\|_\infty) - \frac{1}{\pi} \int_{-\frac{r}{\sqrt{2}}}^{\frac{r}{\sqrt{2}}} \log \abs{a\round{x}}\,\frac{dx}{\sqrt{r^2-x^2}}.$$ The same techniques as before give $$ - \frac{1}{\pi} \int_{-\frac{r}{\sqrt{2}}}^{\frac{r}{\sqrt{2}}} \log \abs{a\round{x}}\,\frac{dx}{\sqrt{r^2-x^2}} \le \frac{\sqrt{2}}{\pi r} \int_{-\frac{r}{\sqrt{2}}}^{\frac{r}{\sqrt{2}}} \Big| \log \abs{a\round{x}} \Big| \,dx $$ and, for every $d>c$, \begin{eqnarray*} \int_c^d \abs{ \log \abs{a\round{x}}}\,dx &\le& (d-c)\abs{\log (a/2)}+ \int_c^d \abs{ \log \abs{2\round{x}}}\,dx \\ &\le& (d-c)\abs{\log (a/2)}+ 2(d-c+1)\int_0^{1/2} \abs{ \log (2u)}\,du \\ &\le & \left(d-c+1\right) \left(1+\abs{\log (a/2)}\right) .
\end{eqnarray*} Hence, we obtain $$\frac{1}{2\pi} \int_{I_1\cup I_3} \log|f(re^{i\theta})|\,d\theta \le \frac{1}{2}\log (C\|c\|_\infty) + \frac{\sqrt{2}}{\pi r}\left(\sqrt{2} r+1\right)\left(1+\abs{\log (a/2)}\right). $$ Combining both integrals, we get for $r\ge 1$ $$ \frac{1}{2\pi} \int_{0}^{2\pi} \log|f(re^{i\theta})|\,d\theta \le \log (C\|c\|_\infty ) + \frac{\sqrt{2}}{\pi r}\left(2\sqrt{2} r+1+\frac{\pi}{a}+ (\sqrt{2} r+1)\abs{\log (a/2)}\right) , $$ which is bounded for $r\geq 1$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{th_zeros_sec}] Assume $D^-(N_f,\mult_f)>1$. Let $n_z(r)$ denote the number of zeros of $f$ in $\overline{B(0,r)}$ and $n_p(r)$ the number of poles in that disk (both counted with multiplicities). The same counting argument as in the proof of Theorem \ref{tm1}, this time invoking Lemma \ref{l4}, gives $$ n_z(r) \ge a\beta r^2$$ for some $\beta>1$ and for all $r\ge R_1$. On the other hand, by Lemma \ref{l3}, the poles of $f$ are simple and contained in the shifted lattice $\mathcal{L}:=\bZ+\tfrac{i\pi}{a}(\tfrac{1}{2}+\bZ)$. To find an upper bound for $n_p(r)$, we place rectangles $Q_x=x+[-\tfrac{1}{2},\tfrac{1}{2}]\times [-\tfrac{\pi}{2a},\tfrac{\pi}{2a}]$ of area $|Q_x|=\tfrac{\pi}{a}$ and diagonal $\sqrt{1 + \pi ^2 / a^2}$ around each pole and observe that $$n_p(r)= \frac{a}{\pi} \left| \bigcup_{x\in \mathcal{L}\cap\overline{ B(0,r)}} Q_x\right| \le \frac{a}{\pi}~ \left|B\left(0,r+\tfrac{1}{2} \sqrt{1 + \pi ^2 / a^2} \right)\right| = a\left(r+\tfrac{1}{2} \sqrt{1 + \pi ^2 / a^2} \right)^2.$$ As a consequence, $n(r)=n_z(r)-n_p(r)\ge (\beta-1)a r^2-cr$, for some constant $c>0$, and $$ \int_{R_1}^{R} \frac{n(r)}{r}\,dr \ge \frac{(\beta-1)a (R^2-R_1^2)}{2}-c(R-R_1).$$ Due to Lemma \ref{prop:boundint}, for $R \gg 1$ this contradicts Jensen's formula \eqref{jens3}.
\end{proof} \subsection{Transference of zero sets} The following lemma modifies \cite[Lemma 5.1]{grrost17} to include multiplicities and allows us to compare the density of zero sets in different shift-invariant spaces. \begin{lemma}\label{Rolle} Let $f\in C^{\infty}(\bR)$ be real-valued and $\mult_f:N_f\to\bN$ be the multiplicity function of its zeros. For $a\in \bR $ let $g=\left(aI+\frac{d}{dx}\right)f$. Then \begin{align} \label{c1} D^-(N_{g},\mult_g)\ge D^-(N_f,\mult_f). \end{align} For $f \in C^{N-1}(\bR)$ the same statement holds, replacing $\mult_f$ and $\mult_g$ by the multiplicity functions of the zeros of height at most $N$ and $N-1$, respectively. \end{lemma} \begin{proof} Let $f\in C^{\infty}(\bR)$. Note that $aI + \tfrac{d}{dx} = e^{-ax} \tfrac{d}{dx} e^{ax}$. We define $h\in C^{\infty}(\bR)$ by $h(x)= e^{ ax}f(x)$ and note that $N_h = N_f$ with equal multiplicities $m_h=\mult_f$. Furthermore, since $$ g(x)= \left(aI+\frac{d}{dx}\right)f(x)= e^{- ax}h'(x) , $$ we conclude that $N_g=N_{h'}$, again with equal multiplicities $\mult_g=m_{h'}$. It remains to show that $D^-(N_{h'},m_{h'})\ge D^-(N_h,m_h)$. Let $x_0\in\bR$, $R>0$, and $F \subseteq N_{h}\cap[x_0-R,x_0+R]$ a finite subset. All zeros $x$ of $h$ with multiplicity $m_h(x)>1$ are zeros of $h'$ with multiplicity $m_{h'}(x)=m_h(x)-1$. Since the zeros of $h$ and $h'$ are interlaced by Rolle's theorem, we obtain additional zeros $\tilde F\subset [x_0-R,x_0+R]\setminus F$ of $h'$ with cardinality $\# \tilde F = \#F-1$. Combining both types of zeros of $h'$ gives $$ \sum_{x\in F\cup \tilde F} m_{h'}(x) \ge \left(\sum_{x\in F} m_{h}(x)\right)-1. $$ Since this holds for every finite subset $F \subseteq N_{h}\cap[x_0-R,x_0+R]$, it follows that $D^-(N_{h'},m_{h'})\geq D^-(N_h,m_h)$, and $$ D^-(N_g,\mult_g) = D^-(N_{h'},m_{h'})\ge D^-(N_h,m_h) = D^-(N_f,\mult_f) \, , $$ as claimed. Finally, for $f \in C^{N-1}(\bR)$ we have $g \in C^{N-2}(\bR)$, and the same argument applies to the multiplicity functions of zeros of height $N$ and $N-1$.
\end{proof} Although generically one would expect equality in \eqref{c1}, the density of the zero set may actually jump. Let $f(x) = \sum_{k\in \Zst} e^{-\pi (x-k)^2} \in V^\infty (\phi _\pi )$, and $h=f' \in V^\infty (\phi '_\pi )$. Then $f$ is a non-constant, strictly positive, periodic, real-valued function, and we have $N_f = \emptyset$ and $D^-(N_f) = 0$. Since $f$ assumes two extremal values in $[0,1)$, we have $D^-(N_h)=2$, in fact, $N_h = \tfrac{1}{2} \Zst $. This example explains why the methods of this paper cannot be applied directly to sampling in shift-invariant spaces generated by Hermite functions. Indeed, Theorem~\ref{th_samp_der_tp} does not have a direct analog for $\sisp (h_n)$ with the $n$-th Hermite function $h_n, n>0$. \subsection{Totally positive functions of Gaussian type} We next study shift-invariant spaces generated by a totally positive function of Gaussian type and their density of zeros. \begin{tm} \label{th_zeros_tp} Let $g$ be a totally positive function of Gaussian type. Let $f\in \sisp ^\infty (g)\setminus\{0\}$ be real-valued and $(N_f, \mult_f)$ its set of real zeros counted with multiplicities. Then $D^-(N_f, \mult_f) \leq 1$. In particular, if $D^-(\Lambda )>1$, then $\Lambda $ is a uniqueness set for $\sisp ^\infty (g)$. \end{tm} \begin{proof} The proof is an adaptation of the argument in \cite{grrost17} using multiplicities. Recall that $g$ is real-valued and has stable integer shifts. Let $c\in\ell^\infty(\bZ)$ and assume that $f=\ctilg \in \sisp^\infty(g)$ vanishes on $N_f\subset \bR$ with $D^-(N_f,\mult_f ) >1$. We want to show that $f \equiv 0$. Note that $f\in C^\infty(\bR)$. Since $g$ is real-valued, we may assume without loss of generality that $f$ is also real-valued (by replacing $c_k$ by $\Re(c_k)$ or $\Im(c_k)$ if necessary).
Using \eqref{eq:tpaa}, write \begin{equation}\label{eq:ch1a} \hat g(\xi)= \prod_{j=1}^n (1+2\pi i\delta_j\xi)^{-1} \, \hat\phi(\xi),\qquad \delta_1,\ldots,\delta_n\in\bR\setminus\{0\}, \ c >0, \end{equation} where $\hat{\phi}(\xi)=e^{-c\xi^2}$. In other words, $\phi = \prod_{j=1}^n \left(I+\delta_j \tfrac{d}{dx}\right) g $ is a Gaussian. Since $\phi$, $g$, and their derivatives decay exponentially, we may interchange summation and differentiation in $f$, and obtain that $$ h = \prod_{j=1}^n \left(I+\delta_j \frac{d}{dx}\right) f \in \sisp^\infty(\phi). $$ Repeated use of Lemma \ref{Rolle} implies that $D^-(N_h,\mult_h) \geq D^-(N_{f},\mult_f)>1$. Hence, by Theorem \ref{tm1}, $h=\sum_k c_k \phi(\cdot-k) \equiv 0$. Therefore, $c_k \equiv 0$ and $f \equiv 0$, as claimed. \end{proof} \subsection{Bandlimited functions} For a simple comparison of the results in Theorems~\ref{tm1}, \ref{th_zeros_sec}, and \ref{th_zeros_tp}, we mention the following result for bandlimited functions. \begin{tm} \label{th_uniqueness_pw} Let $f \in \mathrm{PW}^\infty \setminus \{0\}$. Then $D^-(N_{f},\mult_f) \leq 1$. \end{tm} \begin{proof} The result follows from the Paley-Wiener characterization of bandlimited functions as restrictions of entire functions of exponential type, and Jensen's formula. Beurling's proof \cite{be66,be89} applies almost verbatim. \end{proof} \section{Proof of the sampling theorems} \label{sec_proofs} The proofs of our main theorems are now short and follow from the combination of the characterization of sampling sets without inequalities (Theorem~\ref{th_wl_tp}) and the new insights about the density of zero sets in shift-invariant spaces (Section~4). \begin{proof}[Proof of Theorem \ref{th_samp_der_tp}] The necessity of the density conditions is stated in Proposition \ref{prop_nec}. For the sufficiency, we apply the characterization of Theorem \ref{th_wl_tp}. Suppose that $D^{-}(\Lambda, \ml)>1$, and let $(\Gamma, \mg) \in \WZ(\Lambda, \ml)$.
By Lemma \ref{lemma_sep}, $D^{-}(\Gamma, \mg)>1$. Hence, by Theorem \ref{th_zeros_tp}, $(\Gamma, \mg)$ is a uniqueness set for $\sisp ^\infty (g)$. Therefore, the criterion in Theorem \ref{th_wl_tp} is satisfied, and we conclude that $(\Lambda, \ml)$ is a sampling set for $\sisp ^2 (g)$. \end{proof} \begin{proof}[Proof of Theorem \ref{th_samp_der_sec}] The proof is the same as for Theorem \ref{th_samp_der_tp}; this time we resort to Theorem \ref{th_zeros_sec} (instead of Theorem~\ref{th_zeros_tp}). \end{proof} \begin{proof}[Proof of Theorem \ref{th_samp_der_pw}] The first part of the proof (treating $\mathrm{PW}^\infty $) is similar to that of Theorem \ref{th_samp_der_tp}. If $(\Lambda, \ml)$ is a separated set with finite height and density $D^{-}(\Lambda, \ml) > 1$, then Theorem \ref{th_wl_pw} (combined with Lemma~\ref{lemma_sep} and Theorem~\ref{th_uniqueness_pw}) implies that $(\Lambda, \ml)$ is a sampling set for $\mathrm{PW}^\infty$. As a second step, we use Proposition \ref{prop_pw} to extend the conclusion to $\mathrm{PW}^2$. More precisely, if $D^{-}(\Lambda, \ml) > 1$, we select $\alpha <1$ such that $\alpha D^{-}(\Lambda, \ml) = D^{-}(\alpha^{-1} \Lambda, \ml) > 1$. We conclude that $(\alpha^{-1} \Lambda, \ml)$ is a sampling set for $\mathrm{PW}^\infty$, and therefore, by Proposition \ref{prop_pw}, $(\Lambda, \ml)$ is a sampling set for $\mathrm{PW}^2$. \end{proof} \section{Consequences for Gabor frames} The Hermite-sampling results of Theorems~\ref{th_samp_der_tp} and~\ref{th_samp_der_sec} can be applied in order to obtain sharp density results for multi-window Gabor frames. This extends our previous work in \cite{grrost17} and was, in fact, one of our original motivations for the present work. We obtain new families of multi-window Gabor frames with optimal conditions for semi-regular sets of time-frequency shifts.
\subsection{Multi-window Gabor frames} Let $\pi(x,w)g(t)= g(t-x) e^{2\pi i w t}$ denote the time-frequency shift of $g$ by $(x,w) \in \Rst \times \Rst $. For given windows $g^1, \ldots, g^N \in L^2(\Rst)$ and sets $\Delta^1, \ldots, \Delta^N \subseteq \Rst^2$ the associated multi-window Gabor system is \begin{align} \mathcal{G}(g^1, \ldots, g^N, \Delta^1, \ldots, \Delta^N) =\sett{ \pi(x,w)g^j \, : \,(x,w) \in \Delta^j, j=1,\ldots,N}\, . \end{align} It will be convenient to use the notation $G=(g^1,\ldots,g^N)$, $\vec\Delta=(\Delta^1,\ldots,\Delta^N)$ and $\mathcal{G}(G, \vec\Delta)$. When all the sets $\Delta^j$ are equal, we just write $\mathcal{G}(G, \Delta)$. \subsection{Connection between sampling and Gabor frames} For semi-regular sets $\vec\Delta$, the Gabor frame property can be related to a sampling problem as follows. \begin{tm} \label{tm_gab_con} Assume that $G=(g^1, \ldots, g^N) \in (W_0(\Rst))^N$ has stable integer shifts and that the sets $\Lambda^1, \ldots, \Lambda^N \subseteq \Rst$ are separated. Let $\vec\Delta=(\Delta^1,\ldots,\Delta^N)$ be given by $\Delta^j := (-\Lambda^j)\times\Zst$, and set $\vec\Lambda=(\Lambda^1,\ldots,\Lambda^N)$. Then $\mathcal{G}(G,\vec\Delta)$ is a frame for $L^2(\Rst)$ if and only if $\vec\Lambda+(x,\ldots,x)$ is a sampling set for $\sisp^2(G)$ for all $x \in \Rst$. \end{tm} Theorem \ref{tm_gab_con} is a vector-valued extension of \cite[Theorem 2.3]{grrost17} (equivalence of conditions (a) and (b)), and we therefore omit its proof. \subsection{Characterization of multi-window Gabor frames with totally positive windows} \begin{tm} \label{th_gab_1} Let $g$ be a totally positive function of Gaussian type or the hyperbolic secant and let $\Lambda \subseteq \Rst$ be a separated set. Let $\{ p_1, \dots, p_N\} $ be a basis of the space of polynomials of degree less than $N$. Set $g^j = p_j\big( \tfrac{d}{dx}\big) g$ and $G=(g^1,\ldots,g^N)$. Then $\mathcal{G}(G, (-\Lambda) \times \Zst)$ is a frame for $L^2(\Rst)$ if and only if $D^-(\Lambda) > 1/N$.
\end{tm} \begin{proof} The necessity of the condition $D^-(\Lambda) > 1/N$ for multi-window Gabor frames over a rectangular lattice is contained in~\cite[Thm.~12.2.11]{zzgab}. It also follows from general results, see, e.g., \cite{grorro15}. For the sufficiency, assume that $D^-(\Lambda) > 1/N$, and let $\vec\Lambda := (\Lambda,\ldots,\Lambda)$. By Theorem \ref{tm_gab_con}, it suffices to show that for all $x\in \Rst$, $\vec\Lambda+\vec x$ is a sampling set for the vector-valued shift-invariant space $V^2(G)$, where $\vec x := (x,\ldots,x)$. To verify this condition, we apply Theorem \ref{th_wl_tp}. Let $\vec\Gamma \in \WZ (\vec\Lambda+\vec x)$. This set is necessarily of the form $\vec\Gamma=(\Gamma, \ldots, \Gamma)$, for some $\Gamma \in \WZ (\Lambda+x)$, and, by Lemma \ref{lemma_sep}, $D^{-}(\Gamma) > 1/N$. Assume that $F \in \sisp^\infty(G)$ vanishes on $\vec\Gamma$. We need to show that $F \equiv 0$. Explicitly, $F$ is given by an expansion $F = \sum_{k \in \Zst} c_k G(\cdot-k)$ with $c\in \ell ^\infty (\bZ )$. We now relate the sampling problem for vector-valued functions to a sampling problem with derivatives. To do this, we set $P=(p_1, \dots , p_N)$ and $Q= (1, x, \dots , x^{N-1})$. By assumption on $P$, there is an invertible $N\times N$-matrix $B$, such that $BP=Q$, i.e., $x^{j-1} = \sum _{k=1}^N b_{jk} p_k(x)$ for $j=1, \dots, N$ and thus \begin{equation} \label{eq:h9} \sum _{k=1}^N b_{jk} g^k = \sum _{k=1}^N b_{jk} p_k\big( \tfrac{d}{dx}\big) g = g^{(j-1)} \, . \end{equation} Consequently, after taking linear combinations of translates we obtain $$ (BF)_j(x) = \sum _{l\in \bZ } c_l (BG)_j(x-l) = \sum _{l\in \bZ } c_l g^{(j-1)}(x-l) = f^{(j-1)}(x) \, , $$ where $f=\sum_l c_l g(\cdot-l)\in \sisp^\infty(g)$ is the first component of $BF$. If $F$ vanishes on $\vec\Gamma $, then also $f^{(j-1)} $ vanishes on $\Gamma $ for $j=1, \dots , N$. Hence, $f$ vanishes on $\Gamma$ with multiplicity $N$ and $D^-(N_f, m_f) \geq N D^-(\Gamma ) >1$.
By Theorem \ref{th_zeros_tp} or \ref{th_zeros_sec}, this implies that $f \equiv 0$. Hence, $c_k \equiv 0$ and $F \equiv 0$, as desired. \end{proof} We single out two special cases of Theorem~\ref{th_gab_1}. \begin{cor} Let $g$ be a totally positive function of Gaussian type or the hyperbolic secant and let $\Lambda \subseteq \Rst$ be a separated set. Let $a_1, a_2, \dots , a_{N-1} \in \bR $, $g^1=g$, and set \begin{align} g^j := \prod_{k=1}^{j-1} \left(a_{k} I+ \tfrac{d}{dx}\right) g, \qquad j=2,\ldots,N , \end{align} and $G=(g^1,\ldots,g^N)$. Then $\mathcal{G}(G, \Lambda \times \Zst)$ is a frame for $L^2(\Rst)$ if and only if $D^-(\Lambda) > 1/N$. \end{cor} For the second corollary we use the basis of Hermite functions $\{h_k: k \geq 0\}$ which is defined by $$ h_k(x)= \gamma_k e^{\pi x^2} ~\frac{d^{k}}{dx^{k}} e^{-2\pi x^2} = (-1)^k\gamma_k e^{-\pi x^2} ~H_k(x), $$ with the Hermite polynomials $H_k$ of degree $k$ and some normalizing constant $\gamma_k>0$. \begin{cor} \label{coro_hermite} Let $\Lambda \subseteq \Rst$ be a separated set and $b>0$. Then $\mathcal{G}(h_0, \ldots, h_{N-1}, \Lambda \times b \Zst)$ is a frame for $L^2(\Rst)$ if and only if $D^{-}(\Lambda) > b/N$. \end{cor} \begin{proof} We use the fact that $\mathcal{G}(h_0, \ldots, h_{N-1}, \Lambda \times b \Zst)$ is a frame if and only if \[\mathcal{G}( h_0(b\inv \cdot), \ldots, h_{N-1}(b\inv \cdot), b\Lambda \times \Zst)\] is. Because the Hermite polynomials $H_k, k=0, \dots , N-1$, form a basis for the polynomials of degree $<N$, the span of $h_k$, $0\le k\le N-1$, is the same as the span of all derivatives $\tfrac{d^{j}}{dx^{j}}e^{-\pi x^2}$, $0\le j\le N-1$. The result is a consequence of Theorem \ref{th_gab_1}. \end{proof} Corollary \ref{coro_hermite} actually follows from a sampling result of Brekke and Seip in Fock space~\cite{bese93}. It can also be reformulated for spaces of polyanalytic functions. For this connection see \cite{ab10}.
\section{Postponed proofs} \label{sec_post} \subsection{Proof of Proposition \ref{prop_wstar}} For sets without multiplicities, i.e., $m_\Lambda \equiv 1$, the proposition is classical. Let $(\Lambda, \ml)$ be a separated set with multiplicity and finite height, let $(\Gamma, \mg)$ be a set with multiplicity, and $\{k_n: n \geq 1\} \subseteq \Zst$. Recall that $\Lambda ^j = \{ \lambda \in \Lambda : \ml(\lambda ) \geq j\}$. Suppose first that $\Lambda^j-k_n \weakconv \Gamma^j$, as $n \longrightarrow \infty$ for all $j=1,\ldots,N$. Then, by the case without multiplicity, $\sum_{\lambda \in \Lambda^j} \delta_{\lambda-k_n} \longrightarrow \sum_{\gamma \in \Gamma^j} \delta_\gamma$, in the $\sigma(C^*_c,C_c)$ topology. Since $(\Lambda, \ml)$ has finite height, the claim follows by summing over $j$. Conversely, assume that $\mu_n:= \sum_{\lambda \in \Lambda} \ml(\lambda) \delta_{\lambda-k_n} \longrightarrow \mu:= \sum_{\gamma \in \Gamma} \mg(\gamma) \delta_\gamma$, in the $\sigma(C^*_c,C_c)$ topology. As discussed in \cite[Lemmas 4.3, 4.4]{grorro15}, it follows that \[ \Lambda^1-k_n=\Lambda-k_n = \supp(\mu_n) \weakconv \supp(\mu) = \Gamma = \Gamma^1. \] (Here it is crucial that the multiplicities $\ml(\lambda)$, $\mg(\gamma)$ are integers.) It remains to show that $\Lambda^j-k_n \weakconv \Gamma^j$ for $j>1$. Since $\Lambda^1-k_n \weakconv \Gamma^1$, the case without multiplicity implies that $\sum_{\lambda \in \Lambda} \delta_{\lambda-k_n} \longrightarrow \sum_{\gamma \in \Gamma} \delta_\gamma$. Therefore, \begin{equation} \label{eq_abcd} \sum_{\lambda \in \Lambda} (\ml(\lambda)-1) \delta_{\lambda-k_n} \longrightarrow \sum_{\gamma \in \Gamma} (\mg(\gamma)-1) \delta_\gamma. \end{equation} Since $(\Lambda,\ml)$ has finite height, we can proceed by induction.
Indeed, we consider the sets $\Lambda_0:=\Lambda^2$ and $\Gamma_0:=\Gamma^2$, with multiplicities $\mult_{\Lambda_0} := \ml-1$ and $\mult_{\Gamma_0}:=\mg-1$, and note that $\Lambda^j_0=\Lambda^{j+1}$ and $\Gamma^j_0=\Gamma^{j+1}$. $\qed$ \subsection{Sketch of a proof of Theorem \ref{th_wl_vec}} Let $I:= \{(\lambda,j) \in \Rst^2: \lambda \in \Lambda^j, j=1,\ldots,N\}$ and consider the matrix $A \in \bC^{I \times \Zst}$, given by \begin{align*} A_{(\lambda,j), k} := G^j(\lambda-k). \end{align*} Then $\vec\Lambda$ is a sampling set for $\sisp(G)$ if and only if $A:\ell^p(\Zst) \to \ell^p(I)$ is bounded below. The fact that this property is independent of $p$ in the range $p \in [1,+\infty]$ follows from (a slight extension of) Sj\"ostrand's Wiener-type lemma \cite{sj95}. The formulation in \cite[Proposition A.1]{grrost17} is applicable directly. Specifically, \cite[Proposition A.1]{grrost17} concerns a matrix indexed by two \emph{relatively separated} subsets of the Euclidean space (where a relatively separated set is just a finite union of separated sets). In our case, $I$ is a relatively separated subset of $\Rst^2$, while $\Zst$ can be embedded into $\Rst^2$ as $\Zst\times\{0\}$. This accounts for the equivalences $(a) \Leftrightarrow (b)$. The other implications follow, with very minor modifications, as in the proof of \cite[Theorem 3.1]{grrost17}. See also \cite[Section 4]{grorro15} for some relevant technical tools. $\qed$ \subsection{Sketch of a proof of Proposition \ref{prop_nec}} The proposition follows from the theory of density of frames. The Paley-Wiener case is explicitly treated in \cite{grra96} following the technique of Ramanathan and Steger \cite{rast95}. For shift-invariant spaces with generators $g \in W^N_0(\Rst)$, we can use the abstract density results for frames from \cite{bchl06} as follows. Suppose that $(\Lambda,\ml)$ is a sampling set for $\sisp^2(g)$.
By assumption, the Bessel map, $\ell^2(\Zst) \ni c \to \sum_k c_k g(\cdot-k) \in \sisp^2(g)$, is an isomorphism. The sampling inequality \eqref{eq:lpstable} with $p=2$ means that the set $\mathcal{F}$ formed by the sequences \begin{align*} \varphi_{\lambda,j} := \left(g^{(j)}(\lambda-k)\right)_{k \in \Zst}, \qquad \lambda \in \Lambda, j=0, \ldots, \ml(\lambda)-1, \end{align*} is a frame for $\ell^2(\Zst)$. We consider the index set $I:=\{(\lambda,j) \in \Rst^2: \lambda \in \Lambda, j=0, \ldots, \ml(\lambda)-1\}$ and a map $\alpha:I \to \Zst$ such that $\alpha(\lambda,j)=l$, with $\abs{l-\lambda} \leq 1/2$. Next, we let $\Phi(x) := \sum_{j=0}^{N-1} \max_{y: \abs{y-x} \leq 1} \abs{g^{(j)}(y)}$. Since $g \in W_0^N(\Rst)$, it follows that $\Phi \in W_0(\Rst)$, and we have the estimate \begin{align*} \abs{\varphi_{\lambda,j}(k)} = \abs{g^{(j)}(\lambda-k)} \leq \Phi(\alpha(\lambda,j)-k), \end{align*} which, in the terminology of \cite{bchl06}, means that $\mathcal{F}$ is $\ell^1$-localized with respect to the canonical basis of $\ell^2(\Zst)$. The comparison theorem \cite[Thm.~3]{bchl06} yields the estimate $D^{-}(I,\alpha) \geq 1$ in terms of the density \begin{align*} D^{-}(I,\alpha)=\liminf_{n \longrightarrow \infty} \inf_{k \in \Zst} \frac{\# \alpha^{-1}([k-n,k+n])}{\# [k-n,k+n]} \, . \end{align*} Clearly $D^-(I,\alpha )$ coincides with $D^{-}(\Lambda, \ml)$. Alternative arguments can be given by checking the general conditions in \cite{fghkr} or \cite{ro11}. $\qed$
\section{Introduction} Thermonuclear fusion holds great promise as a power source for Human Civilization. Energy production is one of the primary driving forces for industrialization and economic progress of Humankind \cite{ShubovPM}. In this work we discuss possibilities, advances and technical challenges in the field of sustainable nuclear fusion power. Thermonuclear energy has the potential to become an almost unlimited power source on Earth. Further in the future, thermonuclear energy can provide an almost unlimited power source during the colonization of the Solar System. The resources of thermonuclear fuel on Earth are truly vast. Earth's oceans contain 52 trillion tons of deuterium \cite[p.10]{Fus1}. Even though tritium does not exist in nature, it is generated from the $^6$Li isotope. Natural lithium contains 6.6\% of the isotope $^6$Li \cite[p.1-15]{crc}. World lithium reserves are 14 million tons in ores \cite[p.99]{minerals2019} and 180 billion tons in seawater \cite[p.14-14]{crc}. Lithium contained in Earth's oceans can be used to generate 5.9 billion tons of tritium. World energy reserves of fossil fuel are equal to 0.93 trillion tons oil equivalent, while possible resources are 12.6 trillion tons oil equivalent \cite{WorldEnergy}. Thermonuclear reactors using deuterium -- tritium fusion can generate the energy equivalent of $7.5\cdot 10^4$ trillion tons of oil from tritium obtained from oceanic lithium. Reactors using pure deuterium fusion can generate the energy equivalent of $3.3\cdot 10^8$ trillion tons of oil from 52 trillion tons of oceanic deuterium. Within the Solar System, thermonuclear fuel is even more abundant. Each ton of Jupiter's atmosphere contains about 100 $g$ of deuterium and 11 $g$ of $^3$He \cite{Jupiter_He3}. Thus, Jupiter contains $18 \cdot 10^{18}\ tons$ of $^3$He and $170 \cdot 10^{18}\ tons$ of deuterium.
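The oil-equivalent figure for oceanic tritium quoted above can be cross-checked with a rough Python computation. The 17.6 $MeV$ yield per deuterium -- tritium reaction is taken from the reaction equation below; the conversion 1 ton of oil equivalent $= 41.868$ $GJ$ and the physical constants are standard assumed values, not figures from the cited sources.

```python
# Rough cross-check of the tritium oil-equivalent figure (illustrative only).
MEV_J = 1.602176634e-13       # joules per MeV
AMU_KG = 1.66053906660e-27    # kilograms per atomic mass unit
TOE_J = 41.868e9              # joules per ton of oil equivalent (assumed)

tritium_tons = 5.9e9                              # tritium from oceanic lithium
energy_per_kg = 17.6 * MEV_J / (3.016 * AMU_KG)   # D-T yield per kg of tritium
total_joules = tritium_tons * 1000.0 * energy_per_kg
trillion_toe = total_joules / TOE_J / 1e12
# trillion_toe comes out near 8e4, the same order as the 7.5e4 quoted above.
```

The small discrepancy from the quoted $7.5\cdot 10^4$ is expected, since the quoted figure presumably accounts for details of the fuel cycle that this back-of-envelope estimate ignores.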
As we discuss in this work, building a reactor using deuterium -- $^3$He fusion may be difficult, yet these difficulties should be overcome by the time the Outer Solar System is being colonized. The primary fusion reaction of interest in this century is deuterium -- tritium fusion \be \label{1.01.01} \text{$^2$H+$^3$H$\ \to ^4$He+n+17.6}\ MeV. \ee Even though tritium does not exist in nature, it can be produced from the isotope $^6$Li via the reaction \be \label{1.01.02} \text{$^6$Li+n$\ \to ^4$He+$^3$H}. \ee Energetic neutrons produced by fusion knock neutrons out of other nuclei before being slowed down and consumed by $^6$Li. On average, one nuclear fusion event, which consumes one tritium atom, produces 1.2 tritium atoms \cite[p.67]{Triton1}. A fusion reaction which does not produce neutrons is called aneutronic. The only aneutronic fusion reaction which can be run in a Tokamak or Stellarator is deuterium -- $^3$He fusion \be \label{1.01.03} \text{$^2$H+$^3$He$\ \to ^4$He+p+18.5}\ MeV \ee Other aneutronic fusion reactions exist, but they are impractical to run in Tokamaks and Stellarators. Deuterium -- $^3$He reactors would require much more advanced technology than deuterium -- tritium reactors. Moreover, resources of $^3$He on Earth are minuscule, thus $^3$He would have to be mined in space \cite{EarthHe3}. Deuterium -- $^3$He reactors should appear long after deuterium -- tritium reactors have reached maturity. Fusion reactions can occur only in a plasma at very high temperatures of 100 -- 200 million $^o$K \cite{Tok2}. The temperature of plasma is measured in kiloelectronvolts ($keV$): \be \label{1.01.04} 1\ keV=1.16 \cdot 10^7\ ^oK. \ee Plasma at this temperature can be confined either by inertia or by a magnetic field. In this work we consider only magnetic confinement. In Tokamak and Stellarator reactors, plasma confined within a magnetic field has the shape of a torus. The original reactors built in the 1950s were faced with a plasma leakage problem.
Due to the unevenness of the toroidal field, charged particles would drift out of confinement. In Tokamak reactors, this problem is solved by inducing a toroidal electric current through the plasma. This current generates a poloidal magnetic field. The overall magnetic field lines take the shape of a twisted torus. The term Tokamak originated in the USSR as a Russian acronym: Toroidalnaya Kamera s Magnitnymi Katushkami (toroidal chamber with magnetic coils) \cite[p.2]{Tok2}. In Stellarator reactors, the problem of plasma drifting is solved by introducing a poloidal magnetic field by ``careful profiling of the magnetic field topology through complex non-planar toroidal field coils'' \cite[p.32]{Tokamak}. Stellarator magnetic fields are very complex, with up to 50 features which have to be accounted for in design \cite[p. 31]{Fus2}. A Tokamak reactor is illustrated in Figure \ref{1.0F01} below. \begin{center} \includegraphics[width=12cm,height=12cm]{Tokamak} \captionof{figure}{Tokamak cross-section \label{1.0F01}} \end{center} The first Tokamak was built in 1952 at the Kurchatov Institute, Moscow, Russia \cite[p. 20]{Fus3}. Confinement of deuterium-tritium plasma at a temperature of 5 $keV$ was achieved at the Princeton Large Torus Tokamak in 1978 \cite[p. 78]{Hist02}. This was the first Tokamak to produce fusion energy. Even though Tokamaks and Stellarators which produce fusion energy have been built, the energy produced in these reactors is much lower than the energy consumed by external plasma heating. Tokamak or Stellarator plasma rapidly loses energy by four processes -- neutron radiation, Bremsstrahlung radiation, synchrotron radiation, and heat conduction to the walls. Bremsstrahlung radiation is caused by collisions of energetic electrons with nuclei. It consists of x-rays. Synchrotron radiation is emitted by electrons gyrating in the magnetic field. It consists of visible and infrared photons. Conduction heat loss is caused by interaction of the plasma with the walls.
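The temperature conversion of Eq.~\eqref{1.01.04} and a typical plasma pressure can be illustrated numerically. The density and temperature in the sketch below are round illustrative values, not parameters of any specific reactor.

```python
# Illustrative conversion between keV and kelvin, plus a rough plasma
# pressure estimate (assumed round numbers, not from the text).
E_CHARGE = 1.602176634e-19   # elementary charge, coulombs
K_B = 1.380649e-23           # Boltzmann constant, J/K

def kev_to_kelvin(t_kev):
    """Temperature conversion of Eq. (1.01.04): 1 keV ~ 1.16e7 K."""
    return t_kev * 1000.0 * E_CHARGE / K_B

one_kev = kev_to_kelvin(1.0)              # ~1.16e7 K

# Ideal-gas pressure of a quasi-neutral plasma (electrons plus ions),
# for an illustrative density of 1e20 m^-3 per species at 15 keV:
n = 1.0e20
p_pascal = 2.0 * n * K_B * kev_to_kelvin(15.0)
p_bar = p_pascal / 1.0e5                  # comes out to a few bar
```

A pressure of a few bar sustained for seconds is the scale that the Lawson pressure criterion discussed below demands.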
Plasma energy losses by neutron, Bremsstrahlung, and synchrotron radiation are proportional to the energy generated by fusion. As we show in Subsection {\rr 4.1}, in a typical deuterium-tritium reactor, 80\% of fusion energy escapes the plasma with neutrons, 6.7\% with Bremsstrahlung radiation and 1.3\% with synchrotron radiation. As we show in Subsection {\rr 4.2}, conductive energy loss is either independent of or weakly dependent on the fusion power. The Big ITER reactor, designed in great detail but never built, has a conductive power loss of 182 $MW$ \cite[p. 7]{BigITER}. The lowest conductive power loss for any designed Tokamak is about 40 $MW$ \cite[p. 10]{STCensus01}. In order for the confined plasma to generate more energy than it loses, the plasma must have sufficient pressure and \textbf{energy confinement time} \cite[p.2]{Tok2}. Energy confinement time $\tau_{_E}$ is the ratio of the plasma internal heat energy to the rate of energy loss by the plasma \cite[p.18]{Tokamak}. Notice that energy confinement time is not related to the time during which the plasma itself is confined \cite[p.4]{Lawson1}. Any working fusion power plant must have stable plasma. The product of plasma pressure and energy confinement time is called the Lawson pressure criterion and denoted $C_{_{LP}}$. As we show in Subsection {\rr 4.1}, in order for a deuterium -- tritium fusion reactor to operate, it must have $C_{_{LP}} \ge 16\ bar \cdot s$. A deuterium -- $^3$He fusion reactor must have $C_{_{LP}} \ge 430\ bar \cdot s$ \cite[p.81]{AF01}. As we demonstrate in Subsection {\rr 4.2}, the Lawson pressure criterion of a reactor is proportional to its reactor criterion defined as \be \label{1.01.05} \mathcal{R}_{_C}=S_{_F}^{3} B_{_T}^{4} R^{3}, \ee where $R$ is the major radius of the plasma torus, $B_{_T}$ is the toroidal magnetic field on the plasma axis, and $S_{_F}$ is the shape factor defined in Subsection {\rr 4.2}.
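The reactor criterion of Eq.~\eqref{1.01.05} is straightforward to evaluate. The sketch below (illustrative only) highlights its strong fourth-power dependence on the toroidal field $B_{_T}$ compared with its cubic dependence on the major radius $R$; the baseline parameters are ITER-like values for illustration.

```python
def reactor_criterion(s_f, b_t, r):
    """R_C = S_F^3 * B_T^4 * R^3, in Tesla^4 * m^3, per Eq. (1.01.05)."""
    return s_f**3 * b_t**4 * r**3

# ITER-like shape factor, toroidal field, and major radius:
base = reactor_criterion(4.2, 5.3, 6.2)

# Doubling the toroidal field raises R_C sixteen-fold,
# while doubling the major radius raises it only eight-fold:
ratio_field = reactor_criterion(4.2, 2 * 5.3, 6.2) / base
ratio_radius = reactor_criterion(4.2, 5.3, 2 * 6.2) / base
```

This scaling is why higher-field magnets, discussed below, are such an attractive path to compact reactors.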
In order for a thermonuclear reactor to sustain nuclear fusion, it must have a sufficiently high reactor criterion. The reactor criterion threshold depends on reactor shape, toroidal current, and especially on the fuel used. For any reactor with a shape similar to the International Thermonuclear Experimental Reactor (ITER), the thresholds are the following: \be \label{1.01.06} \begin{split} \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{H} \Big)=3.2 \cdot 10^7\ Tesla^4\ m^3, \qquad \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{He} \Big)=1.4 \cdot 10^{10}\ Tesla^4\ m^3. \end{split} \ee Two proposed deuterium -- $^3$He spherical Tokamaks discussed in Subsection {\rr 4.6} have reactor criteria of about $5 \cdot 10^{10}\ Tesla^4\ m^3$. ITER obtains a sufficient reactor criterion by having a major plasma radius $R=6.2\ m$, toroidal magnetic field $B_{_T}=5.3\ Tesla$, and shape factor $S_{_F}=4.2$ \cite[p.2]{EConf03}. Increasing the toroidal magnetic field presents significant technical challenges, which are likely to be solved within coming decades. Compact fusion reactors may be developed in the future. They will achieve a sufficient reactor criterion by increasing the toroidal magnetic field \cite{HField1}. Achieving even $B_{_T}=5.3\ Tesla$ is difficult -- as we show in Table \ref{1.0T12}, the maximum field experienced by the superconducting coils is 2 to 3 times higher than $B_{_T}$. The development of high temperature superconductor tape should significantly increase reactor magnetic fields \cite{HTSTape03,SCMS01,HTSTape04}. In Section 2, we briefly introduce the physics of Tokamak and Stellarator plasmas. In Subsection 2.1, we derive expressions for plasma pressure, magnetic field pressure, and $\beta$ -- the ratio of the two pressures. In Subsection 2.2, we derive the expression for electrical resistivity of the plasma, and show that it is a good conductor. In Subsection 2.3, we derive the nuclear reaction rate within plasma.
In Section 3, we describe and calculate radiative energy losses from plasma. In Subsection 3.1, we describe the four types of plasma energy loss -- neutron radiation, Bremsstrahlung radiation, synchrotron radiation, and conduction to walls. In Subsection 3.2, we derive the energy loss due to Bremsstrahlung radiation. In Subsection 3.3, we derive the energy loss due to synchrotron radiation. Energy losses from both Bremsstrahlung and synchrotron radiation are significant only for deuterium -- $^3$He reactors. In Section 4, we discuss transport energy loss. In Subsection 4.1, we introduce the concept of \textbf{fusion power gain} $Q$, which is the ratio of thermal energy produced by fusion to the heating power which must be supplied to the plasma in order to sustain fusion. We also introduce the \textbf{Lawson pressure criterion} -- the product of plasma pressure and energy confinement time. In Subsections 4.2 and 4.3, we analyze scaling laws for energy confinement time and conductive energy loss by plasma. In Subsections 4.3 to 4.5, we define the reactor criterion and calculate it for several reactors. In Section 5, we discuss the ways in which the reactor criterion and the Lawson pressure criterion can be improved. In Subsection 5.1, we introduce the Greenwald density limit and show that it does place a flexible upper bound on the size and power of reactors. In Subsection 5.2, we discuss the simple approach of increasing a Tokamak or Stellarator major radius. In Subsection 5.3, we discuss advantages and disadvantages of using spherical Tokamaks. In Subsection 5.4, we discuss technology which can increase the magnetic field which confines plasma. In Subsection 5.5, we discuss advantages and disadvantages of running a reactor in an unsafe regime. In Section 6, we discuss prospects for reactor development. In Subsection 6.1, we discuss a possible timeline for nuclear fusion reactors.
In Subsection 6.2, we describe Spheromak2100 -- a concept of a future deuterium -- $^3$He reactor. \section{Tokamak and Stellarator plasma} In this section we introduce the physics of plasma present in Tokamaks and Stellarators. This plasma is very hot and completely ionized. The physics of this plasma is different from that of "low temperature" or partially ionized plasmas. Plasma physics consists of many dynamic and electric phenomena which have been subject to extensive studies \cite{Tokamak}. In this section, we only consider the aspects of plasma physics most relevant to the operation of Tokamaks and Stellarators. \subsection{Plasma pressure and $\beta$} Plasma pressure is an important parameter of Tokamak or Stellarator plasma. As we demonstrate below, the power of a thermonuclear reactor is proportional to the square of plasma pressure. The magnetic field required to contain plasma is proportional to the square root of plasma pressure. Plasma pressure is calculated in the same way as the pressure of any monatomic gas. The particle density of plasma is the total number of ions and electrons per unit volume. It is $(1+\overline{Z})\ n$, where $\overline{Z}$ is the average ion charge and $n$ is the number density of ions; by quasineutrality, the electron number density is $\overline{Z}\ n$. Thus, by the ideal gas law, the plasma pressure is \cite[p.45]{Fus1}: \be \label{1.02.01} P_{_g}=(1+\overline{Z})\ n\ R_{_{g}}\ T, \ee where $R_{_{g}}$ is the gas constant per particle (the molar gas constant divided by Avogadro's number), expressed with temperature in $keV$: \be \label{1.02.02} R_{_{g}}=\frac{8.314}{N_{_A}}\ \frac{J}{^oK}=1.6015 \cdot 10^{-21} \frac{bar \cdot m^3 }{keV}. \ee The notation $R$, generally used for the gas constant, is reserved for the Tokamak's or Stellarator's major radius. Substituting (\ref{1.02.02}) into (\ref{1.02.01}), we obtain the pressure of plasma: \be \label{1.02.03} P_{_{g}}=1.6015 \cdot 10^{-21} \frac{bar \cdot m^3 }{keV}\ (1+\overline{Z})\ n\ T. \ee Subscript $g$ denotes gas. Magnetic field pressure is \be \label{1.02.04} P_{_m}=\frac{B^2}{2 \mu_{_0}}=3.98\ bar \left(\frac{B}{1\ Tesla}\right)^2.
\ee The quotient of plasma pressure and magnetic field pressure is \cite[p.34]{Fus1}: \be \label{1.02.05} \beta^*=\frac{P_{_g}}{P_{_m}}. \ee Given that both plasma pressure and magnetic field are variable over space, we define $\beta$ as the volume average given by \cite[p.17]{Tk02} \be \label{1.02.06} \beta=\frac{2 \mu_{_0}}{B_{_T}^2}\ \sqrt{\frac{1}{V}\int_V P_{_g}^2\ dV}, \ee where $V$ is the plasma volume and $B_{_T}$ is the toroidal magnetic field within a Tokamak or Stellarator. For most Tokamaks, $\beta$ is usually close to 0.01, or 1\% \cite[p.30]{Fus3}. The International Thermonuclear Experimental Reactor (ITER) will have $\beta=2.4$\% \cite[p.26]{Tk03}. Spherical Tokamaks have values of $\beta$ up to 40\% \cite[p.333]{Tokamak}. Spherical Tokamaks should be more compact than conventional ones \cite{ST04}. \subsection{Electrical resistivity of plasma} In Tokamaks, toroidal currents of 15 $MA$ and more would have to flow through the plasma \cite[p.226]{FusBk1}. In order for the current to be stable, the plasma should be a strong electric conductor. The specific electric resistivity of plasma is \be \label{1.02.07} \eta=\frac{\sqrt{m_{_e}}\ \overline{Z}\ e^2\ \ln \Lambda}{91.5 \epsilon_{_{0}}^2} \big(k T_{_e} \big)^{-3/2}, \ee where $T_{_e}$ is the electron temperature, and $\ln \Lambda \approx 20$ is the Coulomb logarithm. For hydrogen isotope plasma \cite[p.19]{FusBk2}, \be \label{1.02.08} \eta=3.3 \cdot 10^{-8}\ \left( \frac{T}{1\ keV} \right)^{-3/2}\ \big( \Omega \cdot m \big). \ee The specific resistivity of copper at 20$^o$C is $1.8 \cdot 10^{-8}\ \big( \Omega \cdot m \big)$. In a typical fusion reactor which uses the deuterium -- tritium reaction, the temperature is 10 $keV$ and the plasma resistivity is $1.0 \cdot 10^{-9}\ \Omega \cdot m$, which is 18 times lower than copper resistivity.
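For orientation, a short numerical sketch (in Python; the density $n=10^{20}\ m^{-3}$ and temperature $T=10\ keV$ are illustrative values, not data for any particular machine) evaluates the pressure, $\beta$, and resistivity formulas above:

```python
# Illustrative check of Eqs. (1.02.03)-(1.02.08) for a deuterium-tritium
# plasma; n = 1e20 m^-3 and T = 10 keV are example values, not machine data.

R_G = 1.6015e-21     # bar m^3 / keV, per-particle gas constant from (1.02.02)

def plasma_pressure(Z_bar, n, T_keV):
    """Plasma pressure in bar, Eq. (1.02.03)."""
    return R_G * (1 + Z_bar) * n * T_keV

def magnetic_pressure(B_tesla):
    """Magnetic field pressure in bar, Eq. (1.02.04)."""
    return 3.98 * B_tesla**2

def resistivity(T_keV):
    """Specific resistivity of hydrogen-isotope plasma, Ohm m, Eq. (1.02.08)."""
    return 3.3e-8 * T_keV**-1.5

P_g = plasma_pressure(1.0, 1e20, 10.0)     # ~3.2 bar
P_m = magnetic_pressure(5.3)               # ~112 bar at an ITER-like 5.3 T
beta = P_g / P_m                           # ~0.029, i.e. a few percent
eta = resistivity(10.0)                    # ~1.0e-9 Ohm m, ~18x below copper
```

The resulting $\beta$ of a few percent is consistent with the values quoted above for conventional Tokamaks.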
In a typical fusion reactor which uses the deuterium -- $^3$He reaction, the temperature is 45 $keV$, the average nuclear charge is $\overline{Z}=1.5$, and the plasma resistivity is $1.6 \cdot 10^{-10}\ \Omega \cdot m$, which is 110 times lower than copper resistivity. \subsection{Nuclear reaction rate within plasma} A two-component plasma consists of two types of nuclei. The reaction rate in a two-component plasma is \be \label{1.02.09} r=n_{_{1}} n_{_{2}}\sigma_{_{v}}(T), \ee where $n_{_{1}}$ and $n_{_{2}}$ are number densities of reactant nuclei and $\sigma_{_{v}}(T)$ is the temperature-dependent reaction rate parameter measured in $m^3/s$. In a reactor plasma, \be \label{1.02.10} \begin{split} n_{_{1}}&=x_{_1}\ n\\ n_{_{2}}&=x_{_2}\ n, \end{split} \ee where $x_{_1}$ is the proportion of deuterium in plasma, $x_{_2}$ is the proportion of either tritium or $^3$He in plasma, and $1-x_{_1}-x_{_2}$ is the proportion of reaction products and impurities in plasma. Denote \be \label{1.02.11} x_{_r}=4\ x_{_1}\ x_{_2}. \ee The maximum value of $x_{_r}$ is 1. This maximum is reached if the two reactants are present in equal proportion, and no fuel has been consumed yet. The reaction rate is \be \label{1.02.12} r=\frac{x_{_r}}{4}\ n^2\ \sigma_{_{v}}(T), \ee where $n$ is the number density of all nuclei \cite[p.38]{Fus1}. Substituting (\ref{1.02.01}) into (\ref{1.02.12}), we obtain the following reaction rate \be \label{1.02.13} r= \frac{x_{_r}\ \sigma_{_{v}}(T)}{4\ (1+\overline{Z})\ R_{_{g}}\ T}\ P_{_g}\ n= \frac{x_{_r}\ \sigma_{_{v}}(T)}{4\ (1+\overline{Z})^2\ R_{_{g}}^2\ T^2}\ P_{_g}^2. \ee From (\ref{1.02.13}) above, we deduce the fuel consumption rate \be \label{1.02.14} \frac{1}{n} \frac{dn}{dt}=- \frac{x_{_r}\ \sigma_{_{v}}(T)}{4\ (1+\overline{Z})\ R_{_{g}}\ T}\ P_{_g}=- \frac{x_{_r}}{2}\ \sigma_{_{B}}(T)\ P_{_g}, \ee where \be \label{1.02.15} \sigma_{_{B}}(T)= \frac{\sigma_{_{v}}(T)}{2\ (1+\overline{Z})\ R_{_{g}}\ T} \ee is the temperature-dependent burning rate constant.
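To give the burning rate constant a scale, the sketch below derives $\sigma_{_{B}}$ at $T=10\ keV$ from (\ref{1.02.15}) using the deuterium -- tritium value of $\sigma_{_{v}}$ tabulated in the next subsection, and then applies (\ref{1.02.14}); the plasma pressure $P_{_g}=3\ bar$ and $x_{_r}=1$ are illustrative values:

```python
# Deriving sigma_B at T = 10 keV from its definition (1.02.15), then using
# (1.02.14) to estimate how fast deuterium-tritium fuel burns. The pressure
# P_g = 3 bar and x_r = 1 (fresh 50/50 fuel) are example values.

R_G = 1.6015e-21      # bar m^3 / keV, per-particle gas constant
SIGMA_V = 1.09e-22    # m^3/s, D-T reaction rate parameter at 10 keV
Z_BAR = 1.0
T = 10.0              # keV

sigma_B = SIGMA_V / (2 * (1 + Z_BAR) * R_G * T)   # ~1.7e-3 bar^-1 s^-1

burn_rate = 0.5 * 1.0 * sigma_B * 3.0   # |dn/dt|/n from (1.02.14), per second
# ~2.6e-3 s^-1: roughly a quarter of a percent of the fuel burns per second.
```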
The fusion power per unit volume of two-component plasma is given by \be \label{1.02.16} \mathcal{P}= \frac{x_{_r}}{4}\ n^2\ \sigma_{_{v}}(T)E_{_r}= \frac{x_{_r}\ \sigma_{_{v}}(T) E_{_r} }{4\ (1+\overline{Z})^2\ R_{_{g}}^2\ T^2}\ P_{_g}^2= \frac{x_{_r}}{4}\ \sigma_{_{P}}(T)\ P_{_g}^2, \ee where $E_{_r}$ is the energy released by a single nuclear reaction. The \textbf{specific power constant} is \be \label{1.02.17} \sigma_{_{P}}(T)= \frac{\sigma_{_{v}}(T) E_{_r} }{(1+\overline{Z})^2\ R_{_{g}}^2\ T^2}. \ee The reaction \be \label{1.02.18} ^2H+^3H \to ^4He+n \ee has $E_{_r}=2.82 \cdot 10^{-12}\ J$ \cite[p.40]{Fus1}. Substituting (\ref{1.02.04}) and (\ref{1.02.06}) into (\ref{1.02.16}) we obtain \be \label{1.02.19} \mathcal{P}=15.8\ bar^2 \ \frac{x_{_r}}{4}\ \sigma_{_{P}}(T)\ \left(\frac{B_{_T}}{1\ Tesla}\right)^4 \beta^2 \ee for deuterium -- tritium plasma. The values of $\sigma_{_{v}}(T)$ for deuterium -- tritium plasma are tabulated below \cite[p.46]{Fus1}. The value of $\sigma_{_{B}}(T)$ is obtained by (\ref{1.02.15}). The value of $\sigma_{_{P}}(T)$ is obtained by (\ref{1.02.17}). The units of $\sigma_{_{P}}(T)$ are $bar^{-2} W m^{-3}$, which can be rewritten as \be \label{1.02.20} \begin{split} &1\ bar^{-2} W m^{-3}=1\ bar^{-2}\ s^{-1}\ \big( J\ m^{-3}\big)=1\ bar^{-2}\ s^{-1}\ Pa\\ =&1\ bar^{-2}\ s^{-1}\ \big(10^{-5}\ bar\big)=10^{-5}\ bar^{-1}\ s^{-1}.
\end{split} \ee \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & & \\ \hline T &$keV$ & 5 & 8 & 10 & 15 & 20 & 25 & 30 & 35 & 40 \\ \hline $\sigma_{_{v}}(T)$ & $10^{-22}\ m^3 s^{-1}$ & 0.13 & 0.59 & 1.09 & 2.65 & 4.24 & 5.6 & 6.7 & 7.5 & 8.0\\ \hline $\sigma_{_{B}}(T)$ & $10^{-3}\ bar^{-1} s^{-1}$ & 0.41 & 1.15 & 1.7 &2.76 &3.31 &3.5 &3.49 &3.35 &3.12\\ \hline $\sigma_{_{P}}(T)$ & $bar^{-1} s^{-1}$ & 1.43 & 2.53 &3.00 &3.24 &2.91 &2.46 &2.05 &1.68 & 1.37 \\ \hline \end{tabular} \captionof{table}{Reaction rate parameter for deuterium-tritium fusion} \label{1.0T01} \end{center} Similar parameters for the deuterium -- $^3$He reaction are tabulated below. Cross-section data \cite[p.10]{RRates0} is used to calculate the last two rows. \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & & & \\ \hline T &$keV$ & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 & 120 & 140\\ \hline $\sigma_{_{v}}(T)$ & $10^{-23}\ m^3 s^{-1}$ & 0.13 & 0.31 & 0.54 & 0.79 & 1.04 & 1.27 & 1.48& 1.67 & 1.97& 2.19 \\ \hline $\sigma_{_{B}}(T)$ & $10^{-5}\ bar^{-1} s^{-1}$ & 0.54 & 0.97 & 1.35 & 1.64 & 1.86 & 1.98 & 2.05 & 2.09 & 2.05 & 1.95\\ \hline $\sigma_{_{P}}(T)$ & $10^{-2}\ bar^{-1} s^{-1}$ & 2.54 & 3.41 & 3.80 & 3.86 & 3.73 & 3.49 & 3.21 & 2.94 & 2.41 & 1.97 \\ \hline \end{tabular} \captionof{table}{Reaction rate parameter for deuterium-$^3$He fusion} \label{1.0T02} \end{center} Peak specific power constants for several fusion reactions are tabulated below. The data in Column 2 is given in \cite{RRates}; it is measured in units of $10^{-27}\ m^3\ s^{-1}\ keV^{-2}$.
\begin{center} \begin{tabular}{|l|r|c|c|c|l|l|l|l|l|l|l|l|l|} \hline Reaction & Maximal & $\overline{Z}$ & $E_{_r}$ & Maximal $\sigma_{_{P}}$ & Temperature \\ & $\sigma_{_{v}}/T^2$ & & $10^{-12}\ J$ & $bar^{-1} s^{-1}$ & $keV$ \\ \hline $^2$H+$^3$H $ \to ^4$He+n & 1,240 & 1.0 & 2.82 & 3.4 & 13.6 \\ $^2$H+$^3$He $ \to ^4$He+p & 22.4 & 1.5 & 2.93 & $4.1 \cdot 10^{-2}$ & 58 \\ p+$^6$Li $ \to ^4$He+$^3$He & 1.5 & 2.0 & 0.64 & $9.2 \cdot 10^{-4}$ & 66 \\ p+$^{11}$B $ \to 3 ^4$He & 3.0 & 3.0 & 1.39 & $4.0 \cdot 10^{-3}$ & 123 \\ \hline \end{tabular} \captionof{table}{Specific power constant} \label{1.0T03} \end{center} As we see from the table above, the deuterium -- $^3$He reaction is the only aneutronic fusion reaction worth considering in the foreseeable future. \subsection{Calculation of $x_{_r}$} In this subsection, we calculate $x_{_r}$ for deuterium -- tritium and deuterium -- $^3$He plasmas. Suppose we start out with a plasma with equal concentrations of deuterium and the second component. The proportion of fuel which has burned is $x_{_b}$. Obviously, $x_{_b}$ changes over the course of a Tokamak discharge. The proportion of impurity nuclei is $x_{_i}$. The impurity proportion should be under 2\%, and it can be assumed to be static. First, we calculate $x_{_r}$ for deuterium -- tritium reactor plasma. Prior to fusion, the number density of deuterium and tritium nuclei is \be \label{1.02.21} n_{_1}=n_{_2}=\frac{n \big(1-x_{_i}\big)}{2}. \ee The number density of deuterium and tritium nuclei after $x_{_b}$ of fuel has burned is \be \label{1.02.22} n_{_1}=n_{_2}=\frac{n \big(1-x_{_i}\big)\big(1-x_{_b} \big)}{2}. \ee Deuterium -- tritium fusion transforms two fuel nuclei into one $^4$He nucleus. Thus, the number density of products is \be \label{1.02.23} n_{_{\text{Products}}}=\frac{n \big(1-x_{_i}\big) x_{_b}}{2}. \ee The number density of impurities is \be \label{1.02.24} n_{_i}=n x_{_i}.
\ee Adding (\ref{1.02.22}), (\ref{1.02.23}), and (\ref{1.02.24}) we obtain the overall number density of particles after $x_{_b}$ of fuel has been consumed: \be \label{1.02.25} n_{_{\text{tot}}}=n_{_1}+n_{_2}+n_{_{\text{Products}}}+n_{_i}= n\left[1-\frac{x_{_b}\big(1-x_{_i} \big)}{2} \right]. \ee Dividing (\ref{1.02.22}) by (\ref{1.02.25}) we obtain \be \label{1.02.26} x_{_1}=x_{_2}=\frac{ \big(1-x_{_i}\big)\big(1-x_{_b} \big)} {2-x_{_b}\big(1-x_{_i} \big)}. \ee Given that $x_{_i} \ll 1$, we simplify (\ref{1.02.26}): \be \label{1.02.27} \begin{split} x_{_1}=x_{_2}&=\frac{ \big(1-x_{_b} \big)\big(1-x_{_i}\big)} {2-x_{_b}\big(1-x_{_i} \big)}= \frac{ \big(1-x_{_b} \big)\big(1-x_{_i}\big)} {\big(2-x_{_b}\big)\left[1-\frac{x_{_i}\ x_{_b}}{2-x_{_b}}\right]} \approx \frac{1-x_{_b}}{2-x_{_b}}\big(1-x_{_i} \big) \left[1-\frac{x_{_i}\ x_{_b}}{2-x_{_b}}\right]\\ & \approx \frac{1-x_{_b}}{2-x_{_b}} \left[1-x_{_i}\left(1+ \frac{x_{_b}}{2-x_{_b}}\right) \right]= \frac{1-x_{_b}}{2-x_{_b}} \left[1-\frac{2 x_{_i}}{2-x_{_b}} \right]. \end{split} \ee Substituting (\ref{1.02.27}) into (\ref{1.02.11}), we obtain $x_{_r}$ for deuterium -- tritium reactor plasma: \be \label{1.02.28} x_{_r}=4 \left[\frac{1-x_{_b}}{2-x_{_b}}\right]^2 \left[1-\frac{2 x_{_i}}{2-x_{_b}} \right]^2 \approx \left[\frac{2-2\ x_{_b}}{2-x_{_b}}\right]^2 \left[1-\frac{4 x_{_i}}{2-x_{_b}} \right]. \ee Second, we calculate $x_{_r}$ for deuterium -- $^3$He reactor plasma. Prior to fusion, the number density of deuterium and $^3$He nuclei is also given by (\ref{1.02.21}). The number density of deuterium and $^3$He nuclei after $x_{_b}$ of fuel has burned is still given by (\ref{1.02.22}). Deuterium -- $^3$He fusion transforms two fuel nuclei into two product nuclei. The total number density of nuclei remains unchanged in the process of deuterium -- $^3$He fusion. The number density of particles after $x_{_b}$ of fuel has been consumed is $n$ -- the same as the original number density.
Dividing (\ref{1.02.22}) by $n$ we obtain \be \label{1.02.29} x_{_1}=x_{_2}=\frac{ \big(1-x_{_i}\big)\big(1-x_{_b} \big)}{2}. \ee Substituting (\ref{1.02.29}) into (\ref{1.02.11}), we obtain $x_{_r}$ for deuterium -- $^3$He reactor plasma: \be \label{1.02.30} x_{_r}=\big(1-x_{_b}\big)^2\big(1-x_{_i}\big)^2 \approx \big(1-x_{_b}\big)^2\ \big(1-2x_{_i}\big). \ee In the calculations presented above, we have ignored other nuclear reactions taking place within the plasma. Deuterium -- deuterium fusion will have a relatively minor effect on plasma composition. Other fusion reactions would have a negligible effect. \section{Radiative energy loss} \subsection{Reactor energy balance} Reactors in which deuterium-tritium fusion took place have existed since 1978. This has been achieved by confining deuterium-tritium plasma at fusion temperature within a Tokamak or Stellarator torus \cite[p. 78]{Hist02}. The principal problem for past and present Tokamaks and Stellarators is the fact that the energy produced by fusion has been much lower than the energy lost by plasma. Thus, in order to sustain fusion, thermal energy has to be supplied to the plasma by an external source. In thermonuclear reactors, the plasma has to be heated with beams of energetic neutral particles or electromagnetic waves of radio or microwave frequencies \cite[p. 64-70]{FusBk1}. In order for a fusion power station to operate, heat produced by fusion must considerably exceed heat supplied to the plasma. Plasma in which heat generation by fusion exceeds energy loss does not need an external heat source to sustain fusion. Such plasma is called \textbf{ignited}. There are four processes by which heat escapes the plasma. The first is neutron radiation. Deuterium -- tritium fusion given by $^2$H+$^3$H $\to$ $^4$He+n releases 80\% of its energy with the neutron. Deuterium -- deuterium fusion takes place in deuterium -- $^3$He reactors. About half of this fusion is given by $^2$H+$^2$H $\to$ $^3$He+n.
This reaction releases 75\% of its energy with the neutron. Overall, 80\% of the energy generated by deuterium-tritium fusion and 5\% of the energy generated by deuterium -- $^3$He fusion is released as neutron radiation \cite[p.24]{Tokamak}. The second form of energy loss is Bremsstrahlung radiation, resulting from collisions of charged particles. As we show in Subsection {\rr 3.2} below, Bremsstrahlung energy loss decreases with increasing reactor operating temperature. The third form of energy loss is synchrotron radiation. This radiation is caused by charged particles moving in a magnetic field. This form of energy loss is significant only for deuterium -- $^3$He reactors. Most synchrotron radiation is absorbed by the plasma, with only 1.5\% to 4.5\% reaching the reactor wall \cite[p.70]{BeegITER}. Theoretical calculations of the total power loss by synchrotron radiation are vague, with different theories giving results which differ by up to a factor of 2 \cite[p.70]{BeegITER}. As we show in Subsection {\rr 3.3}, energy loss by synchrotron radiation increases with increasing reactor operating temperature. Thus, Bremsstrahlung loss grows as the operating temperature is lowered, while synchrotron loss grows as it is raised. As we show in Table \ref{1.0T13} of Subsection {\rr 6.2}, the ideal operating temperature for deuterium -- $^3$He reactors is 60 $keV$ to 70 $keV$. The fourth form of energy loss is conduction or transport. Plasma confined in the magnetic field still fills up the volume of the reactor torus -- thus the plasma touches the reactor walls. The behavior of plasma is not completely understood, thus many experimental and theoretical models exist for plasma energy loss via conduction \cite[p.138]{FusBk3}. Conduction energy loss is very important for both deuterium -- tritium and deuterium -- $^3$He reactors.
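The neutron energy fractions quoted above follow from momentum conservation in the two-body exit channel: the lighter product carries the larger share of the released energy. A minimal check (in Python; integer mass numbers are used and the reactants' kinetic energy is neglected, which is adequate at this precision):

```python
# The 80% / 75% neutron energy fractions follow from momentum conservation
# in a two-body exit channel: equal and opposite momenta mean the lighter
# product (the neutron) carries the partner's share of the mass ratio.

def neutron_energy_fraction(m_neutron, m_partner):
    """Fraction of the reaction energy carried by the neutron."""
    return m_partner / (m_neutron + m_partner)

f_dt = neutron_energy_fraction(1, 4)   # D + T  -> 4He + n : 0.80
f_dd = neutron_energy_fraction(1, 3)   # D + D  -> 3He + n : 0.75
```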
Energy losses in the form of neutron, Bremsstrahlung, and synchrotron radiation are proportional to the energy generated by nuclear fusion. Neutronic energy loss is the easiest to calculate. Bremsstrahlung energy loss is relatively easy to calculate. Calculation of synchrotron energy loss is complicated, and different theories produce results differing by up to a factor of 2 \cite[p. 70]{BeegITER}. In this section, we calculate plasma energy loss via three types of radiation -- neutron, Bremsstrahlung, and synchrotron. Energy loss in the form of conduction is weakly related or unrelated to the energy generated by nuclear fusion. This mechanism of energy loss is not completely understood, thus there are different physical and empirical models. This form of energy loss is discussed in Section {\rr 4}. \subsection{Bremsstrahlung radiation} Fully ionized plasma produces electromagnetic radiation via Bremsstrahlung -- radiation resulting from collisions of charged particles. The total power per unit volume is \cite[p.162]{bslr}: \be \label{1.03.01} \begin{split} \mathcal{P}_{_B}&=1.4 \cdot 10^{-40} \frac{W}{m^3} \cdot \left( \frac{T}{1 ^oK}\right)^{1/2} \left( \frac{n_e}{m^{-3}}\right)^2 \left( \frac{\sum a_i Z_i^2}{\sum a_i Z_i} \right)\overline{g}_B,\\ \end{split} \ee where $\overline{g}_B$ ``is a frequency average of the speed averaged Gaunt factor, which is in the range 1.1 to 1.5. Choosing a value of 1.2 will give an accuracy to within about 20\%" \cite[p.162]{bslr}. Substituting (\ref{1.02.03}) into (\ref{1.03.01}) we obtain \be \label{1.03.02} \begin{split} \mathcal{P}_{_B}&=1.9 \cdot 10^5 \frac{W}{m^3} \cdot \left( \frac{T}{1\ keV}\right)^{-3/2} \left( \frac{P_{_g}}{1\ bar}\right)^2 \left( \frac{\overline{Z}\ \overline{Z^2}}{\big(1+\overline{Z}\big)^2} \right)\overline{g}_B,\\ \end{split} \ee where an overline denotes an average value. Dividing (\ref{1.03.02}) by (\ref{1.02.16}), we obtain the fraction of fusion power radiated away via Bremsstrahlung radiation.
This fraction is called \textbf{Bremsstrahlung power fraction}: \be \label{1.03.03} \begin{split} f_{_B}&= \frac{\mathcal{P}_{_B}}{\mathcal{P}}= \frac{7.6 \cdot 10^5 \frac{W}{m^3\cdot bar^2}}{x_{_r}\ \sigma_{_{P}}(T)} \left( \frac{T}{1\ keV}\right)^{-3/2} \left( \frac{\overline{Z}\ \overline{Z^2}}{\big(1+\overline{Z}\big)^2} \right)\overline{g}_B\\ &=\frac{7.6\ bar^{-1} s^{-1}}{x_{_r}\ \sigma_{_{P}}(T)} \left( \frac{T}{1\ keV}\right)^{-3/2} \left( \frac{\overline{Z}\ \overline{Z^2}}{\big(1+\overline{Z}\big)^2} \right)\overline{g}_B. \end{split} \ee As we see from (\ref{1.03.03}) above, the Bremsstrahlung power fraction does not depend on the plasma pressure. We take $\overline{g}_B=1.2$. For deuterium-tritium fusion with 20\% of fuel burned, $\overline{Z}=1.11$, and\\ $\overline{Z^2}=1.33$. Substituting these values into (\ref{1.03.03}), we obtain \be \label{1.03.04} f_{_B}= \frac{3.0 \ bar^{-1} s^{-1}}{x_{_r}\ \sigma_{_{P}}(T)} \left( \frac{T}{1\ keV}\right)^{-3/2}. \ee Using the data presented in Table \ref{1.0T01}, we calculate the Bremsstrahlung power fraction of deuterium -- tritium plasma as a function of temperature. This fraction is tabulated in Table \ref{1.0T04} below: \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & & \\ \hline T &$keV$ & 5 & 8 & 10 & 15 & 20 & 25 & 30 & 35 & 40 \\ \hline $\sigma_{_{P}}(T)$ & $bar^{-1} s^{-1}$ & 1.43 & 2.53 &3.00 &3.24 &2.91 &2.46 &2.05 &1.68 & 1.37 \\ \hline $x_{_r}\ f_{_B}$ & &0.19 & 0.052 & 0.032 & 0.016 & 0.012 & 0.010 & 0.009 & 0.009 & 0.009\\ \hline \end{tabular} \captionof{table}{Bremsstrahlung power fraction of deuterium -- tritium plasma} \label{1.0T04} \end{center} Taking $\overline{g}_B=1.2$, $\overline{Z}=1.5$, and $\overline{Z^2}=2.5$ for deuterium -- $^3$He fusion reactor we obtain the following \be \label{1.03.05} f_{_B}\Big(\ ^2\text{H}-^3\text{He} \Big)= \frac{5.5 \ bar^{-1} s^{-1}}{x_{_r}\ \sigma_{_{P}}(T)} \left( \frac{T}{1\ keV}\right)^{-3/2}. 
\ee Using the data presented in Table \ref{1.0T02}, we calculate the Bremsstrahlung power fraction of deuterium -- $^3$He plasma as a function of temperature. This fraction is tabulated in Table \ref{1.0T05} below: \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & & & \\ \hline T &$keV$ & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 & 120 & 140\\ \hline $\sigma_{_{P}}(T)$ & $10^{-2}\ bar^{-1} s^{-1}$ & 2.54 & 3.41 & 3.80 & 3.86 & 3.73 & 3.49 & 3.21 & 2.94 & 2.41 & 1.97 \\ \hline $x_{_r}\ f_{_B}$ & & 1.32 & 0.64 & 0.41 & 0.31 & 0.25 & 0.22 & 0.2 & 0.19 & 0.17 & 0.17 \\ \hline \end{tabular} \captionof{table}{Bremsstrahlung power fraction of deuterium -- $^3$He plasma} \label{1.0T05} \end{center} In all fusion reactions other than deuterium -- tritium and deuterium -- $^3$He, plasmas radiate more energy than fusion produces \cite[p.15]{AirNR}. \subsection{Synchrotron radiation} Overall, the field of synchrotron power loss within Tokamaks and Stellarators has not been studied extensively. This phenomenon is not important for deuterium -- tritium reactors. Deuterium -- $^3$He reactors, where synchrotron radiation is significant, are still a project for the far future. Thus, there has been little practical need to study synchrotron power loss. In the discussion below, by \textbf{synchrotron radiation power} we mean the power of synchrotron radiation actually escaping plasma -- not the totality of emitted synchrotron power. In most deuterium -- $^3$He reactors, only 1.5\% to 4.5\% of plasma synchrotron power actually escapes. In the Big ITER deuterium-tritium reactor, about 6\% to 8\% of synchrotron power escapes. The rest is absorbed by plasma \cite[p.70]{BeegITER}. Most plasma analysis in this section is done in zero dimensions. In the \textbf{zero-dimensional analysis}, the plasma is examined as a homogeneous medium, and all parameters are calculated per unit plasma volume \cite[p.4]{R3B4v01}.
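The Bremsstrahlung power fraction is an example of a purely zero-dimensional quantity: equations (\ref{1.03.04}) and (\ref{1.03.05}) involve no reactor dimensions at all. A quick numerical check of two tabulated entries (a sketch using the coefficients derived above):

```python
# Reproducing two entries of the Bremsstrahlung tables from the coefficient
# forms (1.03.04) and (1.03.05): x_r * f_B = coeff / sigma_P * T^(-3/2).
# No reactor geometry enters -- the fractions are zero-dimensional.

def xr_f_B(coeff, sigma_P, T_keV):
    """x_r * f_B for a given fuel coefficient, sigma_P in bar^-1 s^-1."""
    return coeff / sigma_P * T_keV**-1.5

fB_dt = xr_f_B(3.0, 3.00, 10)      # D-T at 10 keV:   ~0.032 (Table value)
fB_he = xr_f_B(5.5, 0.0380, 50)    # D-3He at 50 keV: ~0.41  (Table value)
```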
Most of the zero-dimensional analysis does not take reactor dimensions into consideration. In the case of synchrotron radiation, however, we must consider reactor major and minor radii. An exact calculation of synchrotron radiation power per unit volume is beyond the scope of this work. An approximate expression for synchrotron radiation power per unit volume is given below \cite[p.4]{R3B4v01}. \be \label{1.03.06} \mathcal{P}_{_{\text{Synchrotron}}} \propto \sqrt{1-w_{_r}}\ B_{_T}^{2.5}\ T^{2.5}\ \sqrt{\frac{A\ n_e}{R}} \ \left[1+\sqrt{\frac{420\ keV}{A^2\ T}} \right], \ee where $B_{_T}$ is the toroidal magnetic field within a Tokamak or Stellarator reactor, $A=R/a$ is the ratio of the major and minor radii of the reactor torus, and $w_{_r}$ is the wall reflectivity of synchrotron radiation \cite[p.1783]{ANST01}. Substituting (\ref{1.02.01}) into (\ref{1.03.06}), we obtain \be \label{1.03.07} \begin{split} \mathcal{P}_{_{\text{Synchrotron}}} &\propto \sqrt{1-w_{_r}}\ B_{_T}^{2.5}\ T^{2.5}\ \sqrt{n_e} \ h(A,R,T) \propto \sqrt{1-w_{_r}}\ B_{_T}^{2.5}\ T^{2.5}\ \sqrt{\frac{P_{_g}}{T}} \ h(A,R,T) \\ & \propto \sqrt{1-w_{_r}}\ B_{_T}^{2.5}\ T^{2.5}\ \sqrt{\frac{B_{_T}^2\ \beta}{T}} \ h(A,R,T) \propto \sqrt{1-w_{_r}}\ B_{_T}^{3.5}\ T^{2}\ \beta^{0.5} \ h(A,R,T), \end{split} \ee where \be \label{1.03.08} \begin{split} h(A,R,T)=\sqrt{\frac{A}{R}}\ \left[1+\sqrt{\frac{420\ keV}{A^2\ T}}\right]. \end{split} \ee Recall the expression (\ref{1.02.19}) for fusion power per volume: \be \label{1.03.09} \mathcal{P} \propto x_{_r}\ \sigma_{_{P}}(T)\ B_{_T}^4 \beta^2. \ee Dividing (\ref{1.03.07}) by (\ref{1.03.09}), we obtain the fraction of fusion power radiated away via synchrotron radiation.
This fraction is called \textbf{synchrotron power fraction}: \be \label{1.03.10} f_{_{\text{Synchrotron}}}= \frac{\mathcal{P}_{_{\text{Synchrotron}}}}{\mathcal{P}} \propto \frac{1}{x_{_r}}\ \sqrt{1-w_{_r}}\ B_{_T}^{-0.5}\ \beta^{-1.5}\ A^{0.5}\ R^{-0.5} \left[\frac{T^{2}}{\sigma_{_{P}}(T)}\ \left(1+\sqrt{\frac{420\ keV}{A^2\ T}}\right)\right]. \ee First, we calculate synchrotron radiation loss for deuterium -- $^3$He reactors. In order to visualize the implications of (\ref{1.03.10}), we simplify that equation: \be \label{1.03.11} f_{_{\text{Synchrotron}}}^{\text{He}}\propto \frac{1}{x_{_r}}\ \sqrt{1-w_{_r}} \ \left(\frac{B_{_T}}{1\ Tesla}\right)^{-0.5} \ \left(\frac{a}{1\ m}\right)^{-0.5} \ \beta^{-1.5} \ \mathcal{T}_{_{\text{He}}}(T,A), \ee where \be \label{1.03.12} \mathcal{T}_{_{\text{He}}}(T,A)= \frac{ \left[\frac{T^{2}}{\sigma_{_{P}}(T)}\ \left(1+\sqrt{\frac{420\ keV}{A^2\ T}}\right)\right]} {\left. \left[\frac{T^{2}}{\sigma_{_{P}}(T)}\ \left(1+\sqrt{\frac{420\ keV}{A^2\ T}}\right)\right] \right|_{A=1.4}^{T=50\ keV} }. \ee The function $\mathcal{T}_{_{\text{He}}}(T,A)$ is tabulated in rows 5-7 of Table \ref{1.0T06} below. Row 4 is the fraction of fusion power radiated away as Bremsstrahlung radiation. This row is useful as the optimal choice of temperature optimizes between Bremsstrahlung and Synchrotron losses. \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & \\ \hline T &$keV$ & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 \\ \hline $\sigma_{_{P}}(T)$ & $10^{-2}\ bar^{-1} s^{-1}$ & 2.54 & 3.41 & 3.80 & 3.86 & 3.73 & 3.49 & 3.21 & 2.94 \\ \hline $x_{_r}\ f_{_B}$ & & 1.32 & 0.64 & 0.41 & 0.31 & 0.25 & 0.22 & 0.2 & 0.19 \\ \hline $\mathcal{T}_{_{\text{He}}}(T,1.4)$ & & 0.64 & 0.77 & 1. 
& 1.33 & 1.79 & 2.39 & 3.17 & 4.15 \\ $\mathcal{T}_{_{\text{He}}}(T,2)$ & & 0.5 & 0.61 & 0.8 & 1.07 & 1.45 & 1.95 & 2.60 & 3.41 \\ $\mathcal{T}_{_{\text{He}}}(T,3)$ & & 0.39 & 0.48& 0.64& 0.87 & 1.18 & 1.60 & 2.15 & 2.84 \\ \hline \end{tabular} \captionof{table}{Function $\mathcal{T}_{_{\text{He}}}(T,A)$} \label{1.0T06} \end{center} Second, we calculate synchrotron radiation loss for deuterium-tritium reactors. Synchrotron radiation loss is insignificant in some deuterium -- tritium reactors. As we can see from data in Tables \ref{1.0T01} and \ref{1.0T02}, the temperature term for deuterium -- tritium reactor is at least 680 times lower than the temperature term for deuterium -- $^3$He reactor. Most works dealing with deuterium -- tritium fusion do not even mention synchrotron radiation. Nevertheless, synchrotron radiation loss can be significant for deuterium -- tritium reactors with low $\beta$ and high operating temperature. Moreover, given that 80\% of energy produced by deuterium -- tritium fusion is carried off by neutrons, even a small loss by synchrotron radiation can affect energy balance considerably. Simplifying (\ref{1.03.10}), we obtain \be \label{1.03.13} f_{_{\text{Synchrotron}}}^{\text{T}}\propto \frac{1}{x_{_r}}\ \sqrt{1-w_{_r}} \ \left(\frac{B_{_T}}{1\ Tesla}\right)^{-0.5} \ \left(\frac{a}{1\ m}\right)^{-0.5} \ \beta^{-1.5} \ \mathcal{T}_{_{\text{T}}}(T,A), \ee where \be \label{1.03.14} \mathcal{T}_{_{\text{T}}}(T,A)= \frac{ \left[\frac{T^{2}}{\sigma_{_{P}}(T)}\ \left(1+\sqrt{\frac{420\ keV}{A^2\ T}}\right)\right]} {\left. \left[\frac{T^{2}}{\sigma_{_{P}}(T)}\ \left(1+\sqrt{\frac{420\ keV}{A^2\ T}}\right)\right] \right|_{A=3}^{T=15\ keV} }. \ee The function $\mathcal{T}_{_{\text{T}}}(T,A)$ is tabulated in rows 5-7 of Table \ref{1.0T07} below. 
\begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & & \\ \hline T &$keV$ & 5 & 8 & 10 & 15 & 20 & 25 & 30 & 35 & 40 \\ \hline $\sigma_{_{P}}(T)$ & $bar^{-1} s^{-1}$ & 1.43 & 2.53 &3.00 &3.24 &2.91 &2.46 &2.05 &1.68 & 1.37 \\ \hline $x_{_r}\ f_{_B}$ & &0.19 & 0.052 & 0.032 & 0.016 & 0.012 & 0.010 & 0.009 & 0.009 & 0.009\\ \hline $\mathcal{T}_{_{\text{T}}}(T,1.4)$ & & 0.69 & 0.81 & 0.98 & 1.73 & 3.05 & 5.19 & 8.41 & 13.2 & 20.1\\ $\mathcal{T}_{_{\text{T}}}(T,2)$ & & 0.51 & 0.61 & 0.74 & 1.32 & 2.35 & 4.03 & 6.57 & 10.4 & 15.9\\ $\mathcal{T}_{_{\text{T}}}(T,3)$ & & 0.37 & 0.45 & 0.55 & 1.00 & 1.81 & 3.13 & 5.15 & 8.17 & 12.6\\ \hline \end{tabular} \captionof{table}{Function $\mathcal{T}_{_{\text{T}}}(T,A)$} \label{1.0T07} \end{center} Maximum synchrotron power fractions for several proposed reactors are tabulated in Table \ref{1.0T08} below \cite[p.70]{BeegITER}. The same source contains lower estimates for synchrotron power. As we have mentioned, estimation of the total synchrotron power loss for Tokamaks and Stellarators is still an unsolved problem. \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline Reactor & Type & $R$ & $A$ & $B_{_T}$ & T & $\beta$ & $w_{_r}$ & $x_{_r}\ f_{_{\text{Synchrotron}}}$ \\ & & $m$ & & $Tesla$ & $keV$ & & & \\ \hline Big ITER & D-T & 8.14 & 2.91 & 5.7 & 12 & 0.030 & 0.7 & 0.013 \\ ARIES III & D-$^3$He & 7.5 & 3.00 & 7.6 & 53 & 0.13 & 0.85 & 0.47 \\ JOHNER 91 & D-$^3$He & 15 & 3.00 & 8.0 & 40 & 0.063 & 0.85 & 0.41 \\ \hline \end{tabular} \captionof{table}{Synchrotron power fractions of several proposed reactors} \label{1.0T08} \end{center} Based on Eq.
(\ref{1.03.11}) and data presented in Table \ref{1.0T08}, we derive the following proportionality coefficient: \be \label{1.03.15} \begin{split} f_{_{\text{Synchrotron}}}^{\text{He}} &\lessapprox \frac{0.35}{x_{_r}}\ \sqrt{1-w_{_r}} \ \left(\frac{B_{_T}}{1\ Tesla}\right)^{-0.5} \ \left(\frac{a}{1\ m}\right)^{-0.5} \ \beta^{-1.5} \ \mathcal{T}_{_{\text{He}}}(T,A)\\ f_{_{\text{Synchrotron}}}^{\text{T}} &\approx \frac{6.7 \cdot 10^{-4}}{x_{_r}}\ \sqrt{1-w_{_r}} \ \left(\frac{B_{_T}}{1\ Tesla}\right)^{-0.5} \ \left(\frac{a}{1\ m}\right)^{-0.5} \ \beta^{-1.5} \ \mathcal{T}_{_{\text{T}}}(T,A). \end{split} \ee As we see from Eq. (\ref{1.03.15}) above, the proportionality constant for deuterium -- $^3$He fusion exceeds the proportionality constant for deuterium -- tritium fusion by a factor of 520. In Table \ref{1.0T09} below, we present synchrotron power fraction for Big ITER reactor as a function of plasma temperature. In row 6, $f_{_{\gamma}}$ is the total loss due to photon radiation, which is the sum of rows 4 and 5. \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & & \\ \hline T &$keV$ & 5 & 8 & 10 & 15 & 20 & 25 & 30 & 35 & 40 \\ \hline $\sigma_{_{P}}(T)$ & $bar^{-1} s^{-1}$ & 1.43 & 2.53 &3.00 &3.24 &2.91 &2.46 &2.05 &1.68 & 1.37 \\ \hline $x_{_r}\ f_{_B}$ & &0.19 & 0.052 & 0.032 & 0.016 & 0.012 & 0.010 & 0.009 & 0.009 & 0.009\\ \hline $x_{_r}\ f_{_{\text{Synchrotron}}}$ & & 0.005 & 0.006 & 0.007 & 0.013 & 0.024 & 0.041 & 0.068 & 0.108 & 0.167\\ \hline $x_{_r}\ f_{_{\gamma}}$ & &0.20 & 0.058 & 0.039 & 0.029 & 0.036 & 0.051 & 0.077 & 0.117 & 0.176 \\ \hline \end{tabular} \captionof{table}{Radiation losses for Big ITER} \label{1.0T09} \end{center} As we see from the table above, synchrotron radiation loss does hinder reactor operation at high temperatures. 
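As a cross-check, rows 5--7 of Table \ref{1.0T07} can be reproduced directly from Eq. (\ref{1.03.14}). The sketch below (Python; the $\sigma_{_P}(T)$ values are read off Table \ref{1.0T07}) evaluates the temperature term and its normalization at $A=3$, $T=15\ keV$:

```python
# Evaluate the temperature term T_T(T, A) of Eq. (1.03.14).
# sigma_P(T) in bar^-1 s^-1, read off Table 1.0T07.
sigma_P = {5: 1.43, 8: 2.53, 10: 3.00, 15: 3.24, 20: 2.91,
           25: 2.46, 30: 2.05, 35: 1.68, 40: 1.37}

def bracket(T, A):
    """Bracketed expression: (T^2 / sigma_P(T)) * (1 + sqrt(420 keV / (A^2 T)))."""
    return (T**2 / sigma_P[T]) * (1.0 + (420.0 / (A**2 * T))**0.5)

def T_T(T, A):
    """Temperature term, normalized so that T_T(15, 3) = 1 as in Eq. (1.03.14)."""
    return bracket(T, A) / bracket(15, 3)

print(round(T_T(5, 3), 2))    # Table 1.0T07 gives 0.37
print(round(T_T(15, 2), 2))   # Table 1.0T07 gives 1.32
print(round(T_T(40, 1.4), 2)) # Table 1.0T07 gives 20.1
```

All tabulated entries are recovered to within the displayed rounding.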
\section{Transport energy loss} \subsection{Fusion power gain $Q$} \textbf{Fusion power gain} denoted by $Q$ is defined as the ratio of thermal energy produced by fusion to the heating power supplied to the plasma \cite[p.14]{Tokamak}. A nuclear power station must have fusion power gain of at least 3. In order to calculate $Q$ we must calculate the power generated by nuclear fusion as well as external power needed to heat the plasma. External plasma heating power per unit volume is \be \label{1.04.01} \mathcal{P}_{_{\text{heat}}}=\mathcal{P}_{_{\text{loss}}}-\mathcal{P}_{_{\text{fh}}}. \ee In Eq. (\ref{1.04.01}) above, $\mathcal{P}_{_{\text{loss}}}$ is the rate of plasma energy loss per unit volume. $\mathcal{P}_{_{\text{fh}}}$ is the fusion power which is converted into plasma thermal energy. Notice, that if the expression in Eq. (\ref{1.04.01}) is negative, then it means that plasma temperature is rising. In order to understand how plasma loses energy by conduction, we must understand plasma behavior within the reactor. Magnetic confinement can not be perfect, thus the plasma fills up the volume of a Tokamak or Stellarator torus and physically interacts with the walls \cite[p.81]{FusBk1}. Thermal energy is transported within plasma by conduction, convection, and turbulent flow \cite[p.81]{FusBk1}. The outer layer of plasma conducts thermal energy to the walls. Even though the plasma temperature is hundreds of millions $^o$K, the plasma neither melts nor vaporizes the container walls. Some solid surfaces can be in contact with very hot, yet rarefied gases without overheating or sustaining damage. A common experiment demonstrating this effect is boiling water in a paper cup. The cup does not get charred as the heat from the fire is immediately conducted into water. Another experiment from a different branch of science is the descent of a space capsule into the atmosphere. 
Even though such a capsule may encounter a stream of gas at a temperature of several thousand degrees Kelvin, the rarefied air does not damage the capsule. Energy loss of plasma per unit volume by conduction is \cite[p.17]{Tokamak} \be \label{1.04.02} \mathcal{P}_{_{\text{loss}}}=\frac{\mathcal{W}}{\tau_{_E}}=\frac{1.5\ P_{_g}}{\tau_{_E}}, \ee where $\tau_{_E}$ is called \textbf{energy confinement time}, and $\mathcal{W}$ is kinetic energy density of plasma. For a fully ionized plasma, $\mathcal{W}=1.5\ P_{_g}$. The fusion power per unit volume used to heat plasma is derived from (\ref{1.02.16}): \be \label{1.04.03} \mathcal{P}_{_{\text{fh}}}= f_{_{\text{heat}}}\ \mathcal{P}= \frac{x_{_r}}{4}\ f_{_{\text{heat}}}\ \sigma_{_{P}}(T)\ P_{_g}^2, \ee where $f_{_{\text{heat}}}$ is the fraction of fusion power which is not lost to neutron, Bremsstrahlung and synchrotron radiation. It is given by \be \label{1.04.04} f_{_{\text{heat}}}=1-f_{_{\text{n}}}-f_{_{\gamma}}, \ee where $f_{_{\text{n}}}$ is the neutronicity or the fraction of fusion power carried away by neutrons, and $f_{_{\gamma}}$ is the fraction of fusion power radiated away via synchrotron and Bremsstrahlung radiation. The values of $f_{_{\text{heat}}}$ depend on plasma temperature and composition. For deuterium -- tritium fusion, neutronicity is 80\%. The value of $f_{_{\gamma}}$ is dependent on reactor dimensions, $\beta$, and operating temperature. For Big ITER reactor, which would operate at $T=12\ keV$, $f_{_B}=0.067$, $f_{_{\text{Synchrotron}}}=0.013$, and $f_{_{\gamma}}=0.079$ \cite{BigITER,BeegITER}. Based on the values of neutronicity and synchrotron and Bremsstrahlung power fractions presented above, we conclude that for Big ITER reactor \be \label{1.04.05} f_{_{\text{heat}}}=0.12. \ee As we show in Subsection {\rr 6.2} below, deuterium-$^3$He reactors may have higher $f_{_{\text{heat}}} \le 0.45$.
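The Big ITER fractions quoted above are easy to verify numerically. The sketch below (Python) recomputes $f_{_{\gamma}}$ and $f_{_{\text{heat}}}$, and previews the Lawson pressure ignition criterion of Eq. (\ref{1.04.08}) defined below; the value $\sigma_{_P}(12\ keV) \approx 3.1\ bar^{-1}\ s^{-1}$ is our interpolation of Table \ref{1.0T07}, not a number from the text:

```python
# Consistency check of the Big ITER power fractions quoted above.
f_n = 0.80             # neutronicity of deuterium-tritium fusion
f_B = 0.067            # Bremsstrahlung power fraction
f_synchrotron = 0.013  # synchrotron power fraction

f_gamma = f_B + f_synchrotron  # total photon loss; quoted as 0.079 after rounding
f_heat = 1.0 - f_n - f_gamma   # Eq. (1.04.04); quoted as 0.12

# Lawson pressure ignition criterion, Eq. (1.04.08), in bar*s.
# sigma_P(12 keV) ~ 3.1 /bar/s interpolated from Table 1.0T07 (assumption).
sigma_P_12 = 3.1
c_lpi = 6.0 / (f_heat * sigma_P_12)

print(round(f_heat, 2), round(c_lpi, 1))  # 0.12 16.1
```

The result reproduces the $C_{_{LPI}} \approx 16\ bar \cdot s$ quoted below for Big ITER.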
Combining (\ref{1.02.16}) and (\ref{1.04.03}), we obtain the fusion power gain: \be \label{1.04.06} \begin{split} Q&=\frac{\mathcal{P}}{\mathcal{P}_{_{\text{heat}}}} =\frac{\mathcal{P}}{\mathcal{P}_{_{\text{loss}}}-\mathcal{P}_{_{\text{fh}}}} =\frac{\sigma_{_{P}}(T)\ P_{_g}^2}{\frac{1.5\ P_{_g}} {\tau_{_E}}-\frac{1}{4}\ f_{_{\text{heat}}}\ \sigma_{_{P}}(T)\ P_{_g}^2} = \frac{P_{_g}\ \tau_{_E}\ f_{_{\text{heat}}}^{-1}} {1.5\ f_{_{\text{heat}}}^{-1}\ \big(\sigma_{_{P}}(T)\big)^{-1} -\frac{1}{4}\ P_{_g}\ \tau_{_E}}. \end{split} \ee In order to understand the physical meaning of Eq. (\ref{1.04.06}), we introduce the following concepts. The \textbf{Lawson pressure criterion} is \be \label{1.04.07} C_{_{LP}}=P_{_{g}} \ \tau_{_E}. \ee The \textbf{Lawson pressure ignition criterion} is \be \label{1.04.08} C_{_{LPI}}=\frac{6}{f_{_{\text{heat}}}\ \sigma_{_{P}}(T)}. \ee Notice, that Lawson pressure ignition criterion increases as more fuel gets consumed. As we have shown in Subsection {\rr 2.4}, as more fuel is consumed, $x_{_r}$ decreases. As we have shown in Section {\rr 3}, decreasing $x_{_r}$ causes the fraction of fusion power radiated away via synchrotron and Bremsstrahlung radiation to increase. This causes $f_{_{\text{heat}}}$ to decrease and $C_{_{LPI}}$ to increase. Substituting (\ref{1.04.05}) into (\ref{1.04.08}) above, we find that for a deuterium -- tritium reactor such as Big ITER $C_{_{LPI}}=16\ bar \cdot s$. Given that $C_{_{LPI}}$ varies over the course of fuel burn cycle, $16\ bar \cdot s$ is the time average. Substituting (\ref{1.04.07}) and (\ref{1.04.08}) into (\ref{1.04.06}), we obtain \be \label{1.04.09} \left\{ \begin{split} Q&=\frac{C_{_{LP}}\ f_{_{\text{heat}}}^{-1}} {C_{_{LPI}}-C_{_{LP}}} \qquad \text{if} \qquad C_{_{LPI}} > C_{_{LP}} \\ Q&=\infty \hskip2.3cm \text{if} \qquad C_{_{LPI}} \le C_{_{LP}} \end{split} \right. 
\ee If the reactor Lawson pressure criterion is the same or higher than the Lawson Pressure Ignition Criterion, then the fusion is ignited -- the reaction continues without an external heat source. Most works on nuclear fusion define the Lawson criterion based on particle density \cite[p.10]{FusBk1}: \be \label{1.04.10} C_{_{Ld}}=n\ T\ \tau_{_E}, \ee where $n$ is the ion number density. The Lawson Criterion for working deuterium -- tritium reactors has to be $C_{_{Ld}} \ge 3 \cdot 10^{21}\ keV\ s\ m^{-3}$ \cite[p.3]{TSurvey03}. Using (\ref{1.02.01}) and (\ref{1.04.10}), we express Lawson pressure criterion in terms of Lawson density criterion: \be \label{1.04.11} C_{_{LP}}=P_{_{g}} \ \tau_{_E}=(1+\overline{Z})\ R_{_{g}}\ n\ T\ \tau_{_E}=(1+\overline{Z})\ R_{_{g}}\ C_{_{Ld}}. \ee \subsection{Confinement time and Lawson criterion scaling} Plasma held by magnetic fields inside a Tokamak or Stellarator is a fluid governed by very complex physics. Thus, energy confinement time is dependent on many factors, including plasma heating mechanism. Plasma heated only by electric current is defined to be in \textbf{ohmic mode} \cite[p.55]{FusBk1}. Ohmic heating alone can not raise plasma temperature beyond about 2 $keV$ \cite[p.64]{FusBk1}. As we have mentioned in Subsection {\rr 3.1}, plasma can be heated by neutral particle beams or microwaves. This mode of heating produces plasma in \textbf{L-mode confinement} \cite[p.55]{FusBk1}. In L-mode, L denotes ``low''. It has lower energy confinement time than ohmic mode \cite[p.20]{T5}. Ohmic and L-mode confinement have been known since the 1960s. Then a new mode was discovered \cite[p.55]{FusBk1}: \begin{quote} It was found that when sufficient power was applied to an L-mode discharge that the discharge made an abrupt transition in which the edge transport was apparently reduced, leading to edge pedestals in the temperature and density. The effect of this was to produce roughly a doubling of the confinement time.
\end{quote} The new mode of confinement is called \textbf{H-mode}. Other modes with confinement times longer than H-mode are possible. These modes are unstable and thus non-usable \cite[p.58]{T5}. All proposed reactors would use H-mode confinement. In the rest of the work, we consider only this mode. A general model for confinement time is given by \cite[p.58]{T5} \be \label{1.04.12} \tau_{_{E}} \propto I^{\alpha_{_I}}\ B_{_T}^{\alpha_{_B}}\ P^{\alpha_{_P}}\ n^{\alpha_{_n}}\ R^{\alpha_{_R}}\ \overline{M}^{\alpha_{_M}}\ A^{\alpha_{_A}}\ \kappa^{\alpha_{_\kappa}}, \ee where $P$ is the plasma heating power, $\overline{M}$ is the mean plasma ion mass in $amu$, $A=a/R$ is the aspect ratio. From (\ref{1.02.03}), we obtain \be \label{1.04.13} n \propto \frac{P_{_{g}}} {(1+\overline{Z})\ T}. \ee At this point, we express the plasma gas pressure in terms of magnetic field. Using (\ref{1.02.04}) and (\ref{1.02.05}), we obtain \be \label{1.04.14} P_{_g} \propto B_{_T}^2 \beta. \ee Substituting (\ref{1.04.14}) into (\ref{1.04.13}), we get \be \label{1.04.15} n \propto \frac{B_{_T}^2 \beta} {(1+\overline{Z})\ T}. \ee The \textbf{safety factor} is defined as \cite[p. 268]{Tokamak} \be \label{1.04.16} q=\frac{S_{_F}}{A}\ \left(\frac{I}{10^6\ A}\right)^{-1} \left(\frac{R}{1\ m}\right) \left(\frac{B_{_T}}{1\ Tesla}\right), \ \ \text{hence} \ \ I \propto \big(S_{_F}\ A\big) \frac{R\ B_{_T}}{A^2\ q}= \hat{S}_{_F}\ \frac{R\ B_{_T}}{A^2\ q}. \ee In Eq. (\ref{1.04.16}) above, $S_{_F}$ is the \textbf{shape factor}. It is approximated by \cite[p. 268]{Tokamak}: \be \label{1.04.17} S_{_F} \approx \frac{2.5}{A}\ \Big(1+\kappa^2 \left(1+2\ \delta^2-1.2\ \delta^3 \right) \Big)\ \frac{1.17-0.65/A}{\Big(1-1/A^2 \Big)^2}, \ee where $\delta$ is triangularity of plasma. The function $f(A)$ is \be \label{1.04.18} f(A)=\frac{1.17-0.65/A}{\Big(1-1/A^2 \Big)^2}. \ee Another estimate for $f(A)$ is \cite[p.1226]{ST30} \be \label{1.04.19} f_{_1}(A)=1.17 \sqrt{\frac{A}{A-1}}. 
\ee Projects for several spherical Tokamaks listed in \cite[p. 14]{ST19} agree with Eq. (\ref{1.04.17}) to within a maximal error of 23\% and an average error of 13\%. We also introduce scaled shape factor \be \label{1.04.20} \hat{S}_{_F}=S_{_F}\ A. \ee Substituting (\ref{1.04.15}) and (\ref{1.04.16}) into (\ref{1.04.12}), we obtain \be \label{1.04.21} \tau_{_{E}} \propto \hat{S}_{_F}^{\alpha_{_I}}\ P^{\alpha_{_P}}\ B_{_T}^{\big[\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}\big]}\ R^{^{\big[\alpha_{_R}+\alpha_{_I} \big]}}\ A^{^{\big[\alpha_{_A}-2\alpha_{_I} \big]}}\ \beta^{^{\alpha_{_n}}}\ q^{-\alpha_{_I}}\ T^{-\alpha_{_n}}\ \kappa^{\alpha_{_\kappa}}\ \overline{M}^{\alpha_{_M}}\ (\overline{Z}+1)^{-\alpha_{_n}}\ \ee For a steady-state reactor, conductive power loss is \cite[p.82]{AF01}: \be \label{1.04.22} P=\frac{W}{\tau_{_E}}, \ee where $W$ is the total thermal energy of plasma. It is given by \be \label{1.04.23} W=1.5\ P_{_g} V_{_g}, \ee where $V_{_g}$ is the plasma volume given by \be \label{1.04.24} V_{_g}=2\ \pi^2\ R\ a^2\ \kappa \propto R^3\ A^{-2}\ \kappa \ee Substituting (\ref{1.04.14}), (\ref{1.04.23}), and (\ref{1.04.24}) into (\ref{1.04.22}), we obtain \be \label{1.04.25} P \propto \tau_{_E}^{-1}\ \Big( B_{_T}^2\ \beta \Big)\ \Big( R^3\ A^{-2}\ \kappa \Big)= \tau_{_E}^{-1}\ B_{_T}^2\ R^3\ A^{-2}\ \beta\ \kappa. \ee Substituting (\ref{1.04.25}) into (\ref{1.04.21}), we obtain \be \label{1.04.26} \begin{split} \tau_{_E} \propto& \tau_{_E}^{-\alpha_{_P}}\ \hat{S}_{_F}^{\alpha_{_I}}\ B_{_T}^{\big[2\alpha_{_P}+\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}\big]}\ R^{^{\big[3\alpha_{_P}+\alpha_{_R}+\alpha_{_I} \big]}}\ A^{^{\big[-2\alpha_{_P}+\alpha_{_A}-2\alpha_{_I} \big]}}\\ &\beta^{^{\alpha_{_P}+\alpha_{_n}}}\ q^{-\alpha_{_I}}\ T^{-\alpha_{_n}}\ \kappa^{^{\big[\alpha_{_\kappa}+\alpha_{_P}\big]}}\ \overline{M}^{\alpha_{_M}}\ (\overline{Z}+1)^{-\alpha_{_n}}. 
\end{split} \ee Solving (\ref{1.04.26}), we obtain \be \label{1.04.27} \begin{split} \tau_{_E} \propto& \hat{S}_{_F}^{\big[\frac{\alpha_{_I}}{1+\alpha_{_P}}\big]}\ B_{_T}^{\big[\frac{2\alpha_{_P}+\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}} {1+\alpha_{_P}}\big]}\ R^{^{\big[\frac{3\alpha_{_P}+\alpha_{_R}+\alpha_{_I}} {1+\alpha_{_P}} \big]}}\ A^{^{\big[\frac{-2\alpha_{_P}+\alpha_{_A}-2\alpha_{_I}} {1+\alpha_{_P}} \big]}}\ \beta^{^{\big[\frac{\alpha_{_P}+\alpha_{_n}}{1+\alpha_{_P}}\big]}}\ \\ & q^{^{\big[\frac{-\alpha_{_I}}{1+\alpha_{_P}}\big]}}\ T^{^{\big[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\big]}}\ \kappa^{^{\big[\frac{\alpha_{_\kappa}+\alpha_{_P}}{1+\alpha_{_P}}\big]}}\ \overline{M}^{^{\big[\frac{\alpha_{_M}}{1+\alpha_{_P}}\big]}}\ (\overline{Z}+1)^{^{\big[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\big]}}. \end{split} \ee Given the complexity of (\ref{1.04.27}), we express it in a logarithmic form \be \label{1.04.28} \begin{split} \ln \tau_{_E} &\propto \left[\frac{\alpha_{_I}}{1+\alpha_{_P}}\right] \ln \hat{S}_{_F}+ \left[\frac{2\alpha_{_P}+\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}} {1+\alpha_{_P}}\right] \ln B_{_T}+ \left[\frac{3\alpha_{_P}+\alpha_{_R}+\alpha_{_I}} {1+\alpha_{_P}} \right] \ln R\\ &+\left[\frac{-2\alpha_{_P}+\alpha_{_A}-2\alpha_{_I}} {1+\alpha_{_P}} \right] \ln A+ \left[\frac{\alpha_{_P}+\alpha_{_n}}{1+\alpha_{_P}}\right] \ln \beta+ \left[\frac{-\alpha_{_I}}{1+\alpha_{_P}}\right] \ln q\\ &+\left[\frac{\alpha_{_\kappa}+\alpha_{_P}}{1+\alpha_{_P}}\right] \ln \kappa+ \left[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln T+ \left[\frac{\alpha_{_M}}{1+\alpha_{_P}} \right] \ln \overline{M}+ \left[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln (\overline{Z}+1). 
\end{split} \ee Substituting (\ref{1.04.11}) and (\ref{1.04.14}) into (\ref{1.04.28}) for Lawson pressure criterion, we obtain \be \label{1.04.29} \begin{split} \ln C_{_{LP}} &\propto \left[\frac{\alpha_{_I}}{1+\alpha_{_P}}\right] \ln \hat{S}_{_F}+ \left[2+\frac{2\alpha_{_P}+\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}} {1+\alpha_{_P}}\right] \ln B_{_T}+ \left[\frac{3\alpha_{_P}+\alpha_{_R}+\alpha_{_I}} {1+\alpha_{_P}} \right] \ln R\\ &+\left[\frac{-2\alpha_{_P}+\alpha_{_A}-2\alpha_{_I}} {1+\alpha_{_P}} \right] \ln A+ \left[\frac{1+2\alpha_{_P}+\alpha_{_n}}{1+\alpha_{_P}}\right] \ln \beta+ \left[\frac{-\alpha_{_I}}{1+\alpha_{_P}}\right] \ln q\\ &+\left[\frac{\alpha_{_\kappa}+\alpha_{_P}}{1+\alpha_{_P}}\right] \ln \kappa+ \left[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln T+ \left[\frac{\alpha_{_M}}{1+\alpha_{_P}} \right] \ln \overline{M}+ \left[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln (\overline{Z}+1). \end{split} \ee Substituting (\ref{1.04.28}) into (\ref{1.04.25}), we obtain the heating power \be \label{1.04.30} \begin{split} \ln P &\propto \left[\frac{-\alpha_{_I}}{1+\alpha_{_P}}\right] \ln \hat{S}_{_F}+ \left[2-\frac{2\alpha_{_P}+\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}} {1+\alpha_{_P}}\right] \ln B_{_T}+ \left[3-\frac{3\alpha_{_P}+\alpha_{_R}+\alpha_{_I}} {1+\alpha_{_P}} \right] \ln R\\ &+\left[-2+\frac{2\alpha_{_P}-\alpha_{_A}+2\alpha_{_I}} {1+\alpha_{_P}} \right] \ln A+ \left[1-\frac{\alpha_{_P}+\alpha_{_n}}{1+\alpha_{_P}}\right] \ln \beta+ \left[\frac{\alpha_{_I}}{1+\alpha_{_P}}\right] \ln q\\ &+\left[1-\frac{\alpha_{_\kappa}+\alpha_{_P}}{1+\alpha_{_P}}\right] \ln \kappa+ \left[\frac{\alpha_{_n}}{1+\alpha_{_P}}\right] \ln T+ \left[\frac{-\alpha_{_M}}{1+\alpha_{_P}} \right] \ln \overline{M}+ \left[\frac{\alpha_{_n}}{1+\alpha_{_P}}\right] \ln (\overline{Z}+1). 
\end{split} \ee Simplifying (\ref{1.04.30}), we obtain \be \label{1.04.31} \begin{split} \ln P &\propto \left[\frac{-\alpha_{_I}}{1+\alpha_{_P}}\right] \ln \hat{S}_{_F}+ \left[\frac{2-\alpha_{_B}-2 \alpha_{_n} - \alpha_{_I}} {1+\alpha_{_P}}\right] \ln B_{_T}+ \left[\frac{3-\alpha_{_R}-\alpha_{_I}} {1+\alpha_{_P}} \right] \ln R\\ &+\left[\frac{-2-\alpha_{_A}+2\alpha_{_I}} {1+\alpha_{_P}} \right] \ln A+ \left[\frac{1-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln \beta+ \left[\frac{\alpha_{_I}}{1+\alpha_{_P}}\right] \ln q\\ &+\left[\frac{1-\alpha_{_\kappa}}{1+\alpha_{_P}}\right] \ln \kappa+ \left[\frac{\alpha_{_n}}{1+\alpha_{_P}}\right] \ln T+ \left[\frac{-\alpha_{_M}}{1+\alpha_{_P}} \right] \ln \overline{M}+ \left[\frac{\alpha_{_n}}{1+\alpha_{_P}}\right] \ln (\overline{Z}+1). \end{split} \ee Below, we express conductive power loss and Lawson pressure criterion in terms of normalized $\beta$, which is defined as \cite[p.9]{FusBk3} \be \label{1.04.32} \beta_{_N}=\frac{100\ \beta}{A}\ \left(\frac{I}{10^6\ A}\right)^{-1} \left(\frac{R}{1\ m}\right) \left(\frac{B_{_T}}{1\ Tesla}\right).\\ \ee Substituting (\ref{1.04.16}) into (\ref{1.04.32}) we obtain \be \label{1.04.33} \begin{split} \beta_{_N}=\frac{100\ q\ \beta}{S_{_F}}=\frac{100\ q\ \beta\ A}{\hat{S}_{_F}}, \qquad \text{hence} \qquad \beta \propto \frac{\beta_{_N}\ S_{_F}}{q\ A}. \end{split} \ee Both $\beta$ and $\beta_{_N}$ are dimensionless parameters. 
Substituting (\ref{1.04.33}) into (\ref{1.04.29}), we obtain \be \label{1.04.34} \begin{split} \ln C_{_{LP}} &\propto \left[\frac{1+2\alpha_{_P}+\alpha_{_n}+\alpha_{_I}}{1+\alpha_{_P}}\right] \ln \hat{S}_{_F}+ \left[2+\frac{2\alpha_{_P}+\alpha_{_B}+2 \alpha_{_n} + \alpha_{_I}} {1+\alpha_{_P}}\right] \ln B_{_T}\\ &+ \left[\frac{3\alpha_{_P}+\alpha_{_R}+\alpha_{_I}} {1+\alpha_{_P}} \right] \ln R +\left[\frac{-1-4\alpha_{_P}+\alpha_{_A}-2\alpha_{_I}-\alpha_{_n}} {1+\alpha_{_P}} \right] \ln A\\ &+\left[\frac{1+2\alpha_{_P}+\alpha_{_n}}{1+\alpha_{_P}}\right] \ln \beta_{_N}+ \left[\frac{-1-2\alpha_{_P}-\alpha_{_n}-\alpha_{_I}}{1+\alpha_{_P}}\right] \ln q\\ &+\left[\frac{\alpha_{_\kappa}+\alpha_{_P}}{1+\alpha_{_P}}\right] \ln \kappa+ \left[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln T+ \left[\frac{\alpha_{_M}}{1+\alpha_{_P}} \right] \ln \overline{M}+ \left[\frac{-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln (\overline{Z}+1). \end{split} \ee Substituting (\ref{1.04.33}) into (\ref{1.04.31}), we obtain \be \label{1.04.35} \begin{split} \ln P &\propto \left[\frac{1-\alpha_{_n}-\alpha_{_I}}{1+\alpha_{_P}}\right] \ln \hat{S}_{_F}+ \left[\frac{2-\alpha_{_B}-2 \alpha_{_n} - \alpha_{_I}} {1+\alpha_{_P}}\right] \ln B_{_T}+ \left[\frac{3-\alpha_{_R}-\alpha_{_I}} {1+\alpha_{_P}} \right] \ln R\\ &+\left[\frac{-3+\alpha_{_n}-\alpha_{_A}+2\alpha_{_I}} {1+\alpha_{_P}} \right] \ln A+ \left[\frac{1-\alpha_{_n}}{1+\alpha_{_P}}\right] \ln \beta_{_N}+ \left[\frac{-1+\alpha_{_n}+\alpha_{_I}}{1+\alpha_{_P}}\right] \ln q\\ &+\left[\frac{1-\alpha_{_\kappa}}{1+\alpha_{_P}}\right] \ln \kappa+ \left[\frac{\alpha_{_n}}{1+\alpha_{_P}}\right] \ln T+ \left[\frac{-\alpha_{_M}}{1+\alpha_{_P}} \right] \ln \overline{M}+ \left[\frac{\alpha_{_n}}{1+\alpha_{_P}}\right] \ln (\overline{Z}+1). \end{split} \ee One model for Tokamak energy confinement time in good agreement with experimental data is called ``IPB98(y,2)".
It is given by \cite[p.212]{FusBk1} \be \label{1.04.36} \tau_{_{ET}} \propto I^{0.93}\ B_{_T}^{0.15}\ P^{-0.69}\ n^{0.41}\ R^{1.97}\ \overline{M}^{0.19}\ A^{-0.58}\ \kappa^{0.78}. \ee Substituting (\ref{1.04.36}) into (\ref{1.04.30}) and (\ref{1.04.29}), we obtain scaling laws for power and Lawson pressure criterion based on ``IPB98(y,2)": \be \label{1.04.37} \begin{split} P_{_{\text{Tokamak}}} \propto &\hat{S}_{_F}^{-1.10} B_{_T}^{0.32} R^{0.32} A^{-0.48} \beta_{_N}^{1.90} q^{1.10} \kappa^{0.71} T^{1.32} \overline{M}^{-0.61} (\overline{Z}+1)^{1.32}\\ C_{_{LP}} \propto &\hat{S}_{_F}^{3.10} B_{_T}^{3.68} R^{2.68} A^{-3.52} \beta_{_N}^{0.10} q^{-3.1} \kappa^{0.29} T^{-1.32} \overline{M}^{0.61} (\overline{Z}+1)^{-1.32}. \end{split} \ee Notice, that $P_{_{\text{Tokamak}}}$ is the conductive power flux to Tokamak walls. This quantity is not directly related to the fusion power within Tokamak. A more recent scaling model is \cite[p. 131]{Tokamak} \be \label{1.04.38} \tau_{_{ET}} \propto I^{0.86}\ B_{_T}^{0.21}\ P^{-0.65}\ n^{0.41}\ R^{1.99}\ \overline{M}^{0.08}\ A^{-0.68} \kappa^{0.84}. \ee Substituting (\ref{1.04.38}) into (\ref{1.04.30}) and (\ref{1.04.29}), we obtain scaling laws for power and Lawson pressure criterion based on the later model \be \label{1.04.39} \begin{split} P_{_{\text{Tokamak}}} \propto &\hat{S}_{_F}^{-0.71} B_{_T}^{0.43} R^{0.43} A^{-0.60} \beta_{_N}^{1.74} q^{0.71} \kappa^{0.46} T^{1.11} \overline{M}^{-0.23} (\overline{Z}+1)^{1.11}\\ C_{_{LP}} \propto &\hat{S}_{_F}^{2.71} B_{_T}^{3.57} R^{2.57} A^{-3.4} \beta_{_N}^{0.26} q^{-2.71} \kappa^{0.54} T^{-1.11} \overline{M}^{0.23} (\overline{Z}+1)^{-1.11}. \end{split} \ee A theoretical model is \cite[p. A1]{ST13} \be \label{1.04.40} \tau_{_{ET}} \propto I^{1.00}\ B_{_T}^{0.00}\ P^{-0.50}\ n^{0.50}\ R^{2.00}\ \overline{M}^{0}\ A^{-0.50}\ \kappa^{0.75}. 
\ee Substituting (\ref{1.04.40}) into (\ref{1.04.30}) and (\ref{1.04.29}), we obtain scaling laws for power and Lawson pressure criterion based on the later model \be \label{1.04.41} \begin{split} P_{_{\text{Tokamak}}} \propto &\hat{S}_{_F}^{-1} B_{_T}^{ 0} R^{ 0} A^{ 0} \beta_{_N}^{ 1} q^{ 1} \kappa^{0.5} T^{ 1} \overline{M}^{ 0} (\overline{Z}+1)^{ 1}\ \ \ \ \ =\hat{S}_{_F}^{-1} \beta_{_N}\ q\ \kappa^{0.5}\ T\ (\overline{Z}+1) \\ C_{_{LP}} \propto &\hat{S}_{_F}^{3} B_{_T}^{ 4} R^{ 3} A^{ -4} \beta_{_N}^{ 1} q^{ -3} \kappa^{0.5} T^{ -1} \overline{M}^{ 0} (\overline{Z}+1)^{ -1} =\hat{S}_{_F}^{3} B_{_T}^{ 4} R^{ 3} A^{ -4} \beta_{_N}\ q^{ -3} \kappa^{0.5} T^{ -1} (\overline{Z}+1)^{ -1}. \end{split} \ee One model for Stellarator energy confinement time in good agreement with experimental data is called ``ISS04v3". It is given by \cite{StelConf} \be \label{1.04.42} \begin{split} \tau_{_{ES}} \propto B_{_T}^{0.85}\ P^{-0.61}\ n^{0.55} R^{2.97}\ \iota^{0.41}\ A^{-2.33}, \end{split} \ee where $\iota$ is the \textbf{rotational transform} of the field lines, and $q=1/\iota$ is the safety factor \cite[p.2]{Tok2}. Substituting (\ref{1.04.16}) into (\ref{1.04.42}), we obtain ``ISS04v3" in terms of the model (\ref{1.04.12}): \be \label{1.04.43} \tau_{_{ES}} \propto I^{0.41}\ B_{_T}^{0.44}\ P^{-0.61}\ n^{0.55}\ R^{2.56}\ \overline{M}^{0.00}\ A^{-1.51}. \ee Substituting (\ref{1.04.43}) into (\ref{1.04.30}) and (\ref{1.04.29}), we obtain scaling laws for power and Lawson pressure criterion based on ``ISS04v3": \be \label{1.04.44} \begin{split} P_{_{\text{Stellarator}}} \propto &B_{_T}^{0.13} R^{0.08} A^{0.85} \beta^{1.15} q^{1.05} T^{1.41} (\overline{Z}+1)^{1.41}\\ C_{_{LP}} \propto &B_{_T}^{3.87} R^{2.92} A^{-2.85} \beta^{0.85} q^{-1.05} T^{-1.41} (\overline{Z}+1)^{-1.41}. \end{split} \ee The term $\kappa$ is ignored, since $\kappa=1$ for Stellarator reactors. The shape factor is also irrelevant to Stellarators.
Moreover, the result is expressed in terms of $\beta$, as normalized $\beta$ is irrelevant to Stellarators. Notice, that $P_{_{\text{Stellarator}}}$ is the conductive power flux to Stellarator walls. This quantity is not directly related to the fusion power within the Stellarator. Other sources make predictions of Tokamak and Stellarator fusion power based on the \textbf{zero-dimensional model} -- analysis of fusion power balance per unit volume \cite{R3B4v01}. The total fusion power of a Tokamak is proportional to \cite[p.3]{R3B4v02}: \be \label{1.04.45} P_{_{_{\text{fusion}}}} \propto \frac{\beta_{_N}^2\ B_{_T}^4\ R^3}{q^2\ A^4}. \ee The Lawson pressure criterion of a Tokamak is proportional to \cite[p.3]{R3B4v02}: \be \label{1.04.46} C_{_{LP}} \propto \frac{H^{3.23}\ \beta_{_N}^{0.1}\ R^{2.7}\ B_{_T}^{3.7}} {q^{3.1}\ A^{3.53}}, \ee where $H$ is the ratio of the actual and predicted confinement times. Notice, that the result from Eq. (\ref{1.04.46}) matches the result from Eq. (\ref{1.04.37}) very well. The author of \cite{R3B4v02} does not consider $\kappa$. \subsection{Conductive power loss for fusion reactors} Below, we summarize conductive power loss from (\ref{1.04.37}), (\ref{1.04.39}), (\ref{1.04.41}), and (\ref{1.04.44}): \be \label{1.04.47} \begin{split} (a) \qquad P_{_{\text{Tokamak}}} \propto & \Big\{ B_{_T}^{0.32} R^{0.32} \Big\} \Big[ \hat{S}_{_F}^{-1.10} \kappa^{0.71} A^{-0.48} \Big] \Big(\beta_{_N}^{1.90} q^{1.10} T^{1.32} \Big)\\ (b) \qquad P_{_{\text{Tokamak}}} \propto & \Big\{B_{_T}^{0.43} R^{0.43} \Big\} \Big[ \hat{S}_{_F}^{-0.71} \kappa^{0.46} A^{-0.60} \Big] \Big( \beta_{_N}^{1.74} q^{0.71} T^{1.11} \Big) \\ (c) \qquad P_{_{\text{Tokamak}}} \propto & \Big[ \hat{S}_{_F}^{-1} \kappa^{0.5}\Big] \Big( \beta_{_N}\ q\ T\Big) \\ (d) \qquad P_{_{\text{Stellarator}}} \propto & \Big\{B_{_T}^{0.13} R^{0.08}\Big\} \Big[ A^{0.85}\Big] \Big(\beta^{1.15} q^{1.05} T^{1.41}\Big). \end{split} \ee Terms related to reactor size and magnetic field are in curly brackets.
These terms are most important for fusion power. Terms related to reactor shape are in square brackets. Terms related to reactor operation are in round brackets. First, we discuss dependence of conductive power loss on reactor fusion power. Neutron, Bremsstrahlung, and synchrotron radiation losses are proportional to fusion power. Dependence of conductive power loss on fusion power is much weaker and much more complicated. From (\ref{1.04.45}) it follows that fusion reactor power is proportional to $B_{_T}^4\ R^3$. The model ``IPB98(y,2)" predicts conductive power loss in Tokamaks being proportional to $B_{_T}^{0.32} R^{0.32}$ as given in (\ref{1.04.47} (a)). This is approximately proportional to $P_{_{\text{fusion}}}^{0.09}$. A more recent model given in (\ref{1.04.38}) predicts conductive power loss in Tokamaks being proportional to $B_{_T}^{0.43} R^{0.43}$ as given in (\ref{1.04.47} (b)). This is approximately proportional to $P_{_{\text{fusion}}}^{0.13}$. A theoretical model given in (\ref{1.04.40}) predicts conductive power loss in Tokamaks being independent of $B_{_T}$ and $R$ as given in (\ref{1.04.47} (c)). The model ``ISS04v3" predicts conductive power loss in Stellarators being proportional to $B_{_T}^{0.13} R^{0.08}$ as given in (\ref{1.04.47} (d)). This is approximately proportional to $P_{_{\text{fusion}}}^{0.03}$. Theoretical results from Covaliu's version of Tang's model predict conductive power loss in Tokamaks and Stellarators being independent of $B_{_T}$ and $R$ \cite[p. 17]{T1}. Other sources predict steeper dependence of conductive power loss on fusion power. Costley predicts a dependence proportional to $P_{_{\text{fusion}}}^{0.25}$ \cite{R3B4v03}.
Below, we tabulate conduction heat loss at steady-state operation for several built and proposed reactors \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline Reactor & Type & Status & $P_{_{\text{Fusion}}}$ & $P_{_{\text{Conduction}}}$ & Source \\ & & & $MW$ & $MW$ & \\ \hline TFTR & D -- T & Built & 9.5 & 20 & \cite[p. 1384]{EConf05} \\ JET & D -- T & Built & 15.7 & 18.6 & \cite[p. 1384]{EConf05} \\ IGNITOR & D -- T & Proposed & 50 & 20.5 & \cite[p. 5]{EConf06} \\ IGNITOR & D -- T & Proposed & 75.1 & 39.7 & \cite[p. 1384]{EConf05} \\ IGNITOR & D -- T & Proposed & 95 & 19.4 & \cite[p. 5]{EConf06} \\ FIRE & D -- T & Proposed & 149 & 33.3 & \cite[p. 1384]{EConf05} \\ IGNITOR & D -- T & Proposed & 155 & 38.5 & \cite[p. 5]{EConf06} \\ ITER & D -- T & Proposed & 404 & 94.3 & \cite[p. 1384]{EConf05} \\ Big ITER & D -- T & Proposed & 1,500 & 182 & \cite[p. 7]{BigITER} \\ \hline \end{tabular} \captionof{table}{Conduction heat loss at steady-state operation for several built and proposed reactors} \label{1.0T10} \end{center} Notice that in Table \ref{1.0T10} above, more than one version of IGNITOR reactor is mentioned. Even in one version, different power levels are likely to appear at different times. Based on Table \ref{1.0T10} above, conductive power loss is proportional to $P_{_{\text{fusion}}}^{0.43}$. Overall, dependence of conductive power loss on fusion power is weak -- ranging from $P_{_{\text{fusion}}}^0$ to $P_{_{\text{fusion}}}^{0.43}$. Accurate data will be obtained only when large Tokamaks and fusion power plants are built. More powerful reactors lose much lower fraction of fusion power by conduction than the less powerful ones. This sets a lower power limit for thermonuclear reactors. For a deuterium -- tritium reactor, the minimal thermal power is about 200 $MW$ \cite{SmallT}. Minimum power for deuterium -- $^3$He fusion reactor is 20 times higher or 4 $GW$ \cite[p.44]{Spheromak04}. 
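The $P_{_{\text{fusion}}}^{0.43}$ estimate quoted above can be reproduced from Table \ref{1.0T10} with an ordinary least-squares fit in log space. The sketch below (Python) gives an exponent near 0.45; the exact value depends on rounding and on which reactor versions are included in the fit:

```python
import math

# (P_fusion, P_conduction) in MW, rows of Table 1.0T10
data = [(9.5, 20), (15.7, 18.6), (50, 20.5), (75.1, 39.7), (95, 19.4),
        (149, 33.3), (155, 38.5), (404, 94.3), (1500, 182)]

xs = [math.log(p) for p, _ in data]
ys = [math.log(c) for _, c in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n

# Slope of the least-squares line ln(P_cond) = slope * ln(P_fus) + const
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))  # close to the 0.43 exponent quoted in the text
```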
Tokamaks of lower power should be possible and have been designed, but they are likely to encounter technical problems. Second, we discuss dependence of conductive power loss on reactor shape. Eq. (\ref{1.04.47}) parts (a), (b), and (c) present three predicted types of dependence on square bracketed parts: \be \label{1.04.48} \begin{split} \hat{S}_{_F}^{-1.10} \kappa^{0.71} A^{-0.48}&= S_{_F}^{-1.10} \kappa^{0.71} A^{-1.58}, \\ \hat{S}_{_F}^{-0.71} \kappa^{0.46} A^{-0.60}&= S_{_F}^{-0.71} \kappa^{0.46} A^{-1.31} , \\ \hat{S}_{_F}^{-1} \kappa^{0.5}&=S_{_F}^{-1} \kappa^{0.5} A^{-1}. \end{split} \ee The shape factor approximated by (\ref{1.04.17}) drastically grows for lower values of $A$, thus small aspect ratio Tokamaks have lower conductive power loss. Optimal elongation is $2 \le \kappa \le 3.5$ \cite[p.14]{ST19}. Third, we discuss dependence of conductive power loss on reactor operating conditions. Eq. (\ref{1.04.47}) presents four predicted types of dependence on round bracketed parts: \be \label{1.04.49} \beta_{_N}^{1.90} q^{1.10} T^{1.32}, \qquad \beta_{_N}^{1.74} q^{0.71} T^{1.11}, \qquad \beta_{_N}\ q\ T, \qquad \beta^{1.15} q^{1.05} T^{1.41}. \ee Conductive power loss increases with normalized $\beta$, safety factor, and operating temperature. Spherical Tokamaks which consume more power than they produce can be very small. FNS-ST Tokamak has major radius $R=0.5\ m$, minor radius $a=0.3\ m$, and toroidal field $B_{_T}=1.5\ Tesla$. It has heating power of 15 $MW$ and fusion power of 500 $kW$. ST-CTF reactor has major radius $R=0.81\ m$, minor radius $a=0.52\ m$, and toroidal field $B_{_T}=2.6\ Tesla$. It has heating power of 44 $MW$ and fusion power of 35 $MW$ \cite[p.10]{STCensus01}. Aforementioned reactors would be useful as neutron sources or as parts of hybrid fission-fusion reactors.
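The rapid growth of the shape factor at low aspect ratio can be seen by evaluating approximation (\ref{1.04.17}) directly. In the sketch below (Python), the shaping parameters $\kappa$ and $\delta$ of the two example machines are illustrative assumptions, not values from the text:

```python
# Shape factor approximation, Eq. (1.04.17).
def S_F(A, kappa, delta):
    shaping = 1.0 + kappa**2 * (1.0 + 2.0 * delta**2 - 1.2 * delta**3)
    f_A = (1.17 - 0.65 / A) / (1.0 - 1.0 / A**2)**2
    return (2.5 / A) * shaping * f_A

# Illustrative parameters (assumed): conventional vs spherical Tokamak.
conventional = S_F(A=3.0, kappa=1.7, delta=0.33)
spherical = S_F(A=1.5, kappa=2.5, delta=0.4)
print(round(conventional, 1), round(spherical, 1))
```

Lowering $A$ from 3 to 1.5 (with the stronger shaping typical of spherical machines) raises $S_{_F}$ severalfold, which is why small aspect ratio Tokamaks score so well in (\ref{1.04.48}).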
\subsection{Lawson pressure criterion} Below, we summarize Lawson pressure criterion from (\ref{1.04.37}), (\ref{1.04.39}), (\ref{1.04.41}), and (\ref{1.04.44}): \be \label{1.04.50} \begin{split} (a) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{3.68} R^{2.68} \Big\} \Big[ \hat{S}_{_F}^{3.10} A^{-3.52} \kappa^{0.29} \Big] \Big( \beta_{_N}^{0.10} q^{-3.1} T^{-1.32} \Big) \\ (b) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{3.57} R^{2.57} \Big\} \Big[ \hat{S}_{_F}^{2.71} A^{-3.4} \kappa^{0.54} \Big] \Big( \beta_{_N}^{0.26} q^{-2.71} T^{-1.11} \Big)\\ (c) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{ 4} R^{ 3} \Big\} \Big[ \hat{S}_{_F}^{3} A^{ -4} \kappa^{0.5} \Big] \Big( \beta_{_N}\ q^{ -3} T^{ -1} \Big)\\ (d) \qquad C_{_{LPS}} \propto & \Big\{ B_{_T}^{3.87} R^{2.92} \Big\} \Big[ A^{-2.85} \Big] \Big( \beta^{0.85} q^{-1.05} T^{-1.41} \Big). \end{split} \ee Rewriting (\ref{1.04.50}) in terms of $S_{_F}$, we have \be \label{1.04.51} \begin{split} (a) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{3.68} R^{2.68} \Big\} \Big[ S_{_F}^{3.10} A^{-0.42} \kappa^{0.29} \Big] \Big( \beta_{_N}^{0.10} q^{-3.1} T^{-1.32} \Big) \\ (b) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{3.57} R^{2.57} \Big\} \Big[ S_{_F}^{2.71} A^{-0.69} \kappa^{0.54} \Big] \Big( \beta_{_N}^{0.26} q^{-2.71} T^{-1.11} \Big)\\ (c) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{ 4} R^{ 3} \Big\} \Big[ S_{_F}^{3} A^{ -1} \kappa^{0.5} \Big] \Big( \beta_{_N}\ q^{ -3} T^{ -1} \Big)\\ (d) \qquad C_{_{LPS}} \propto & \Big\{ B_{_T}^{3.87} R^{2.92} \Big\} \Big[ A^{-2.85} \Big] \Big( \beta^{0.85} q^{-1.05} T^{-1.41} \Big). \end{split} \ee The notations are similar to those used in the last subsection. Terms related to reactor size and magnetic field are in curly brackets. These terms are most important for fusion power. Terms related to reactor shape are in square brackets. Terms related to reactor operation are in round brackets. Lawson pressure criterion is almost linearly proportional to reactor fusion power. 
From (\ref{1.04.45}) it follows that fusion reactor power is proportional to $B_{_T}^4\ R^3$. The expressions in curly brackets in Eq. (\ref{1.04.51}) are nearly identical. According to Eq. (\ref{1.04.51}) parts (a), (b), and (c), the Lawson pressure criterion of Tokamaks is almost proportional to $S_{_F}^{3}$. As we see from approximation (\ref{1.04.17}) for the shape factor, reactors with low aspect ratio and high elongation have the highest shape factors. As we see from the expressions in round brackets in (\ref{1.04.51}), running a reactor at low safety factor and lower temperature increases the Lawson pressure criterion. Nevertheless, both approaches create technical problems. Define the \textbf{reactor criterion} as \be \label{1.04.52} \mathcal{R}_{_C}=S_{_F}^{3} B_{_T}^{4} R^{3}. \ee The reactor criterion has units of $Tesla^4\ m^3$. As we see from (\ref{1.04.51}), the Lawson pressure criterion for Tokamaks and Stellarators is proportional to the reactor criterion to the power of 0.9 to 0.97. The Tokamak fusion power is proportional to \cite[p.1]{SFact01}: \be \label{1.04.53} P_{_{_{\text{fusion}}}} \propto S_{_F}^2 B_{_T}^{4} R^{3} \beta_{_N}^2 q^{-2}= \frac{\beta_{_N}^2}{S_{_F} q^2}\ \mathcal{R}_{_C}. \ee \subsection{The reactor criterion for reactors similar to ITER} Below we estimate $\mathcal{R}_{_C}$ required for fusion reactors similar to the International Thermonuclear Experimental Reactor (ITER) and using different types of fuel. First, we calculate $\mathcal{R}_{_C}$ required for a deuterium -- tritium fusion reactor. ITER has toroidal magnetic field $B_{_T}=5.3\ Tesla$, major radius $R=6.2\ m$, and shape factor $S_{_F}=4.2$ \cite[p.243]{FusBk2}. Hence, it has the reactor criterion \be \label{1.04.54} \mathcal{R}_{_C}\big(\text{ITER} \big)=1.4 \cdot 10^7\ Tesla^4\ m^3. \ee Notice that the ITER reactor does not achieve ignition. A working fusion power plant would likely have an even higher $\mathcal{R}_{_C}$, although this is not a strict requirement.
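The arithmetic behind (\ref{1.04.54}) can be spot-checked with a short Python snippet (illustrative only; the inputs are the ITER parameters quoted above):

```python
# Numerical spot-check of Eq. (1.04.54): R_C = S_F^3 * B_T^4 * R^3
# for ITER (S_F = 4.2, B_T = 5.3 Tesla, R = 6.2 m).
S_F, B_T, R = 4.2, 5.3, 6.2
R_C = S_F**3 * B_T**4 * R**3
print(f"R_C(ITER) = {R_C:.3e} Tesla^4 m^3")  # close to 1.4e7
assert abs(R_C - 1.4e7) / 1.4e7 < 0.01
```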
Ignition should be achieved for $R=7.5\ m$ \cite[p.14]{Lawson1}. Big ITER has major radius $R=8.14\ m$, toroidal magnetic field $B_{_T}=5.68\ Tesla$, and shape factor $S_{_F}=3.9$. This reactor has fusion power of 1.5 $GW$ \cite[p.7]{BigITER}. Big ITER has \be \label{1.04.55} \mathcal{R}_{_C}\big(\text{Big ITER} \big)=3.3 \cdot 10^7\ Tesla^4\ m^3. \ee Below we calculate $\mathcal{R}_{_C}$ needed for a reactor similar to ITER which uses the deuterium -- $^3$He fusion reaction. Assuming that the values of $F_{_{\text{Reactor}}}$ and $F_{_{\text{RO}}}$ are equal for similar reactors regardless of the fuel, we derive the following from (\ref{1.04.37}), (\ref{1.04.39}), (\ref{1.04.41}), and (\ref{1.04.44}): \be \label{1.04.56} \begin{split} \mathcal{R}_{_C}& \propto C_{_{LPT}}^{1.1}\ T^{1.44}\ \overline{M}^{-0.67}\ (\overline{Z}+1)^{1.45} \qquad \text{for Tokamaks}, \\ \mathcal{R}_{_C}& \propto C_{_{LPS}}^{1.1}\ T^{1.45}\ (\overline{Z}+1)^{1.45} \hskip2.2cm \text{for Stellarators}. \end{split} \ee If $\mathcal{R}_{_{C1}}$ is the reactor criterion for a Tokamak using fuel 1, then a similar Tokamak using fuel 2 would need the following reactor criterion derived from (\ref{1.04.56}): \be \label{1.04.57} \begin{split} \mathcal{R}_{_{C2}}&=\mathcal{R}_{_{C1}}\ \frac{ \left[C_{_{LPT}}^{1.1}\ T^{1.44}\ \overline{M}^{-0.67}\ (\overline{Z}+1)^{1.45} \right]_{\text{Fuel 2}}} {\left[C_{_{LPT}}^{1.1}\ T^{1.44}\ \overline{M}^{-0.67}\ (\overline{Z}+1)^{1.45} \right]_{\text{Fuel 1}}}. \end{split} \ee A deuterium -- $^3$He reactor would need a Lawson criterion 27 times higher than a deuterium -- tritium reactor \cite[p.81]{AF01}. As we see from Table \ref{1.0T03}, a deuterium -- $^3$He reactor would have operating plasma temperature 4.3 times higher than a deuterium -- tritium reactor. Plasma consisting of deuterium and $^3$He would have an average nuclear charge of $Z=1.5$. Plasma consisting of deuterium and tritium would have an average nuclear charge of $Z=1$.
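The Big ITER value in (\ref{1.04.55}) follows from the same definition (\ref{1.04.52}); a quick Python spot-check (illustrative only; inputs are the Big ITER parameters quoted above):

```python
# Numerical spot-check of Eq. (1.04.55): R_C = S_F^3 * B_T^4 * R^3
# for Big ITER (S_F = 3.9, B_T = 5.68 Tesla, R = 8.14 m).
S_F, B_T, R = 3.9, 5.68, 8.14
R_C = S_F**3 * B_T**4 * R**3
print(f"R_C(Big ITER) = {R_C:.3e} Tesla^4 m^3")  # close to 3.3e7
assert abs(R_C - 3.3e7) / 3.3e7 < 0.02
```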
Both plasmas will have the same average nuclear mass. Substituting the data from the last paragraph into (\ref{1.04.57}), we conclude that the reactor criterion for a deuterium -- $^3$He reactor should be about 424 times higher than the reactor criterion for a similar deuterium -- tritium reactor. The reactor criterion for a deuterium -- $^3$He reactor with proportions similar to Big ITER is: \be \label{1.04.58} \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{He} \Big)=1.4 \cdot 10^{10}\ Tesla^4\ m^3. \ee A deuterium -- $^3$He Tokamak has a very high reactor criterion. Thus, aneutronic fusion has almost no prospect within the next 50 years or perhaps for the rest of the Century. Nevertheless, aneutronic fusion reactors can play a major role in Solar System Colonization and Deep Space propulsion in the next Century. Apollo is a design of a deuterium -- $^3$He Tokamak. This Tokamak has a toroidal magnetic field of $B_{_T}=10.9\ Tesla$, major radius $R=7.9\ m$, and shape factor $S_{_F}=5.1$. The Apollo Tokamak has fusion power of 2.1 $GW$ \cite{AN03}. The reactor criterion of the Apollo Tokamak is \be \label{1.04.59} \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{He},\ \text{Apollo} \Big)=9.2 \cdot 10^8\ Tesla^4\ m^3. \ee This criterion is 15 times lower than the criterion predicted by (\ref{1.04.58}). This model may be too optimistic. Moreover, as we argue in Subsection {\rr 5.3}, a non-spherical Tokamak for deuterium -- $^3$He fusion may be impossible. \subsection{Reactor criteria for spherical deuterium -- $^3$He Tokamaks} A spherical deuterium -- $^3$He Tokamak has been designed in great detail. This Tokamak has toroidal magnetic field of $B_{_T}=2.7\ Tesla$, major radius $R=8\ m$, unreasonably high shape factor $S_{_F}=124$, and plasma volume of 17,900 $m^3$ \cite{AF01,ANST01}. The reactor criterion of this Tokamak is \be \label{1.04.60} \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{He},\ \text{spherical} \Big)=5.1 \cdot 10^{10}\ Tesla^4\ m^3. \ee This criterion is 3.6 times higher than the criterion predicted by (\ref{1.04.58}). Another proposed deuterium -- $^3$He Tokamak is the GA Project 4437 Tokamak. This Tokamak has toroidal magnetic field of $B_{_T}=2.7\ Tesla$ and major radius $R=9.45\ m$. The Tokamak has aspect ratio $A=1.4$, plasma elongation $\kappa=2.5$, and triangularity $\delta=0.8$ \cite[p.45]{Spheromak04}. Using (\ref{1.04.17}), we estimate the shape factor to be $S_{_F}=60$. The reactor criterion of the GA Project 4437 Tokamak is \be \label{1.04.61} \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{He},\ \text{GA 4437} \Big)=9.7 \cdot 10^9\ Tesla^4\ m^3. \ee In Section {\rr 6.2}, we also mention a Tokamak of our design -- Spheromak2100. This Tokamak has a toroidal magnetic field of $B_{_T}=4.1\ Tesla$ and major radius $R=9.45\ m$. This Tokamak would have the same shape and dimensions as the GA Project 4437 Tokamak, thus $S_{_F}=60$. The reactor criterion of this Tokamak is \be \label{1.04.62} \mathcal{R}_{_C}\Big(\ ^2\text{H}-^3\text{He},\ \text{Spheromak2100} \Big)=5.2 \cdot 10^{10}\ Tesla^4\ m^3. \ee \section{Ways of improving Lawson pressure criterion} Improving the Lawson pressure criterion of Tokamak and Stellarator reactors is the main objective in achieving commercial deuterium -- tritium fusion power generation. It is vital for building deuterium -- $^3$He Tokamaks or Stellarators, as these reactors require a 27 times higher Lawson criterion.
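The deuterium -- $^3$He estimates above can be reproduced with a short Python snippet (illustrative only; every input is a value quoted in the text, and the fuel-scaling factor follows Eq. (\ref{1.04.57}) with equal average nuclear masses):

```python
# Spot-checks of the D-3He reactor-criterion estimates.
def reactor_criterion(S_F, B_T, R):
    """R_C = S_F^3 * B_T^4 * R^3, Eq. (1.04.52); units: Tesla^4 m^3."""
    return S_F**3 * B_T**4 * R**3

# Fuel-scaling factor of Eq. (1.04.57): Lawson-criterion ratio 27,
# temperature ratio 4.3, equal average masses, (Z_bar + 1) ratio 2.5/2.
factor = 27**1.1 * 4.3**1.44 * (2.5 / 2.0)**1.45
assert abs(factor - 424) / 424 < 0.01        # "about 424 times higher"

# Eq. (1.04.58): 424 x Big ITER's 3.3e7 Tesla^4 m^3
assert abs(factor * 3.3e7 - 1.4e10) / 1.4e10 < 0.01

# Eq. (1.04.59): Apollo (S_F = 5.1, B_T = 10.9 T, R = 7.9 m)
assert abs(reactor_criterion(5.1, 10.9, 7.9) - 9.2e8) / 9.2e8 < 0.01
# Eq. (1.04.60): spherical design (S_F = 124, B_T = 2.7 T, R = 8 m)
assert abs(reactor_criterion(124, 2.7, 8.0) - 5.1e10) / 5.1e10 < 0.02
# Eq. (1.04.61): GA Project 4437 (S_F = 60, B_T = 2.7 T, R = 9.45 m)
assert abs(reactor_criterion(60, 2.7, 9.45) - 9.7e9) / 9.7e9 < 0.01
# Eq. (1.04.62): Spheromak2100 (S_F = 60, B_T = 4.1 T, R = 9.45 m)
assert abs(reactor_criterion(60, 4.1, 9.45) - 5.2e10) / 5.2e10 < 0.01
print("all reactor-criterion spot-checks pass")
```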
Recall the Lawson pressure criteria (\ref{1.04.51}) for Tokamaks and Stellarators: \be \label{1.05.01} \begin{split} (a) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{3.68} R^{2.68} \Big\} \Big[ S_{_F}^{3.10} A^{-0.42} \kappa^{0.29} \Big] \Big( \beta_{_N}^{0.10} q^{-3.1} T^{-1.32} \Big) \\ (b) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{3.57} R^{2.57} \Big\} \Big[ S_{_F}^{2.71} A^{-0.69} \kappa^{0.54} \Big] \Big( \beta_{_N}^{0.26} q^{-2.71} T^{-1.11} \Big)\\ (c) \qquad C_{_{LPT}} \propto & \Big\{ B_{_T}^{ 4} R^{ 3} \Big\} \Big[ S_{_F}^{3} A^{ -1} \kappa^{0.5} \Big] \Big( \beta_{_N}\ q^{ -3} T^{ -1} \Big)\\ (d) \qquad C_{_{LPS}} \propto & \Big\{ B_{_T}^{3.87} R^{2.92} \Big\} \Big[ A^{-2.85} \Big] \Big( \beta^{0.85} q^{-1.05} T^{-1.41} \Big). \end{split} \ee There are four approaches to improving the Lawson pressure criterion of a fusion reactor. First, we can increase the reactor torus major radius $R$. Second, we can increase the reactor shape factor $S_{_F}$. Third, we can increase the toroidal magnetic field $B_{_T}$. Fourth, we can run the reactor at higher $\beta$ and lower safety factor $q$. \subsection{Greenwald density limit} The Lawson pressure criterion of Tokamaks and Stellarators can be improved by increasing $R$, $B_{_T}$, and $\beta$. These improvements run up against a limit which we discuss in this Subsection. Based on decades of observation, it has been determined that the density of nuclei in plasma cannot exceed the \textbf{Greenwald density limit} given by \cite[p.52]{FusBk1} \be \label{1.05.02} \frac{n_{_G}}{10^{20}\ m^{-3}}= \frac{1}{\pi}\ \left(\frac{I}{10^6\ A}\right) \left(\frac{a}{1\ m} \right)^{-2}= \frac{A^2}{\pi}\ \left(\frac{I}{10^6\ A}\right) \left(\frac{R}{1\ m} \right)^{-2}, \ee where, in the current ratio $I/(10^6\ A)$, the symbol $A$ denotes Amperes, while the standalone $A^2$ is the squared aspect ratio.
Substituting (\ref{1.02.03}) into (\ref{1.05.02}), we obtain the \textbf{Greenwald pressure limit} \be \label{1.05.03} P_{_G}=0.0510\ bar\ A^2\ \big(1+\overline{Z}\big)\ \left(\frac{T}{1\ keV}\right) \ \left(\frac{I}{10^6\ A}\right) \left(\frac{R}{1\ m} \right)^{-2}. \ee Using (\ref{1.02.04}) and the definition of $\beta$, we obtain the plasma pressure: \be \label{1.05.04} P_{_g}=3.98\ bar\ \left(\frac{B_{_T}}{1\ Tesla}\right)^2 \beta. \ee The \textbf{Greenwald ratio} is the ratio of plasma pressure to the Greenwald pressure limit. It is given by \be \label{1.05.05} R_{_G}=\frac{P_{_g}}{P_{_G}}= \frac{78.0\ \beta}{\big(1+\overline{Z}\big)\ A^2} \left(\frac{B_{_T}}{1\ Tesla}\right)^2\ \left(\frac{R}{1\ m}\right)^2\ \ \left(\frac{I}{10^6\ A}\right)^{-1}\ \left(\frac{T}{1\ keV}\right)^{-1}. \ee Normalized $\beta$ is defined as \cite[p.9]{FusBk3}: \be \label{1.05.06} \beta_{_N}=\frac{100\ \beta}{A}\ \left(\frac{I}{10^6\ A}\right)^{-1} \left(\frac{R}{1\ m}\right) \left(\frac{B_{_T}}{1\ Tesla}\right). \ee From Eq. (\ref{1.05.06}), we obtain \be \label{1.05.07} \beta=\frac{A\ \beta_{_N}}{100}\ \left(\frac{I}{10^6\ A}\right) \left(\frac{R}{1\ m}\right)^{-1} \left(\frac{B_{_T}}{1\ Tesla}\right)^{-1}. \ee Substituting (\ref{1.05.07}) into (\ref{1.05.05}), we obtain \be \label{1.05.08} R_{_G}= \frac{0.78\ \beta_{_N}}{\big(1+\overline{Z}\big)\ A} \left(\frac{B_{_T}}{1\ Tesla}\right)\ \left(\frac{R}{1\ m}\right)\ \left(\frac{T}{1\ keV}\right)^{-1}. \ee In a working reactor, the Greenwald ratio should be below 0.7 and must be below 0.8 \cite[p.57]{FusBk1}. Greenwald ratios for several proposed reactors are presented in Table \ref{1.0T11} below.
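The substitution of (\ref{1.05.07}) into (\ref{1.05.05}) can be verified numerically with a short Python snippet (illustrative only; the sample inputs are ITER-like values from the text, with the temperature an arbitrary test input rather than a design point):

```python
# Consistency check of Eq. (1.05.08): evaluate R_G = P_g / P_G directly
# from Eqs. (1.05.03), (1.05.04), (1.05.07), then compare with the
# closed form.  I in MA, T in keV, B in Tesla, R in m.
Z, A, B, R, I, T, beta_N = 1.0, 3.1, 5.3, 6.2, 15.0, 8.0, 1.77

beta = A * beta_N * I / (100.0 * R * B)          # Eq. (1.05.07)
P_g = 3.98 * B**2 * beta                         # Eq. (1.05.04), bar
P_G = 0.0510 * A**2 * (1 + Z) * T * I / R**2     # Eq. (1.05.03), bar
R_G_direct = P_g / P_G

R_G_closed = 0.78 * beta_N / ((1 + Z) * A) * B * R / T   # Eq. (1.05.08)

# Small residual comes only from the rounded constants 78.0 -> 0.78.
assert abs(R_G_direct - R_G_closed) / R_G_closed < 0.005
print(f"R_G = {R_G_closed:.3f}")
```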
\begin{center} \begin{tabular}{|l|l|l|l|l|} \hline Reactor & Reaction & Thermal & $R_{_G}$ & Source \\ & & Power, $GW$ & & \\ \hline ITER & D-T & 0.4 & 0.84 & \cite[p.243]{FusBk2} \\ Big ITER & D-T & 1.5 & 0.75 & \cite[p.7]{BigITER} \\ STPP & D-T & 3.1 & 0.63 & \cite[p.16]{FusBk2} \\ \hline & D-$^3$He & & 0.39 & \cite{AF01} \\ GA Project 4437 & D-$^3$He & 11.0 & 0.48 & \cite[p.45]{Spheromak04} \\ Apollo & D-$^3$He & 2.14 & 0.55 & \cite[p.2]{AN03} \\ \hline \end{tabular} \captionof{table}{Greenwald ratios for proposed reactors} \label{1.0T11} \end{center} As we see from (\ref{1.05.08}) above, the Greenwald limit presents an effective limit to increasing the reactor's major radius and toroidal magnetic field. Recalling (\ref{1.04.45}), the reactor's fusion power is proportional to $R^3 B_{_T}^4$. Thus, the Greenwald limit presents an effective limit to reactor power. The power of deuterium -- tritium reactors is likely to have an upper limit of about 5 $GW$ to 10 $GW$. The power of deuterium -- $^3$He reactors is likely to have an upper limit of about 50 $GW$ to 100 $GW$. Accurate determination of these limits remains an open problem. From (\ref{1.05.08}) above, it follows that a reactor operating at thermal power close to the upper limit should operate at high temperature. For a deuterium-tritium reactor, a good operating temperature should be about 25 $keV$. As we see from Table \ref{1.0T09}, at higher temperatures, energy losses due to synchrotron radiation from plasma become too high. \subsection{Increasing reactor torus major radius $R$} Historically, the major radius $R$ of Tokamak and Stellarator toruses has grown over time. In 1958, the Soviet T1 Tokamak had $R=0.67\ m$. In 1962, the Soviet T3 Tokamak had $R=1.0\ m$. In 1975, the Soviet T10 Tokamak had $R=1.5\ m$ \cite{TokUSSR}. The Joint European Torus Tokamak, built in 1984, had $R=2.96\ m$. The International Thermonuclear Experimental Reactor (ITER) will have $R=6.2\ m$ \cite[p. 19]{TProgress01}.
The original (1998) design for ITER had $R=8.14\ m$ \cite[p. 14]{Hist01}. This design was abandoned due to high cost. \subsection{Increasing reactor shape factor $S_{_F}$} This is accomplished by reducing the reactor torus aspect ratio $A$. According to Eq. (\ref{1.05.01}), the Lawson pressure criterion of a Tokamak is proportional to $S_{_F}^3$. As we see from (\ref{1.04.17}), reducing the aspect ratio greatly increases the shape factor. Also, plasma elongation should be high. Generally, spherical Tokamaks have $2.4 \le \kappa \le 3.5$. Spherical Tokamaks should have plasma triangularity of at least $\delta=0.5$ \cite[p.14]{ST19}. For Stellarators, the aspect ratio does not play as important a role as it does for Tokamaks. A \textbf{Spherical Tokamak} is "a tokamak with the central region reduced to the minimum size possible" \cite[p. 59]{FusBk1}. The torus looks like a sphere with a pencil-shaped region cut from the middle. A center-post carrying very high electric current makes up the pencil-shaped region. "The National Spherical Torus Experiment has achieved $\beta=0.3-0.4$ and has consistently achieved energy confinement times 2–3 times larger than predicted by conventional Tokamak correlations" \cite[p. 59]{FusBk1}. Theoretically, $\beta=0.5$ should be sustainable in spherical Tokamaks \cite[p. 376]{Freidberg}. Spherical Tokamaks have aspect ratios between 1.4 and 1.8 \cite[p. 8]{STK01}. This is about half of ITER's ratio of 3.1. One advantage of using spherical Tokamaks is that some designs do not use superconductive toroidal field coil magnets \cite[p. 17]{ST19}. If such Tokamaks are possible, their cost would be significantly reduced. For spherical Tokamaks, shape factors as high as $S_{_F}=41$ have been achieved \cite[p. 268]{Tokamak}. The National Spherical Torus Experiment (NSTX) is a Tokamak with major radius $R=0.85\ m$, minor radius $a=0.6\ m$, aspect ratio $A=1.42$, and elongation $\kappa=2.2$. In different runs, NSTX has achieved values of $30 \le S_{_F} \le 38$.
This reactor has sustained $\beta$ of 0.22 to 0.25. NSTX discharges have $\beta_{_N}$ between 5 and 6 \cite{SFact02}. A spherical Tokamak reactor is illustrated in Figure \ref{1.0F02} below. This Tokamak has $A=1.6$, $\kappa=2.3$, and $\delta \approx 0.9$. According to Eq. (\ref{1.04.17}), this Tokamak should have $S_{_F}=33$. \begin{center} \includegraphics[width=12cm,height=12cm]{Spheromak} \captionof{figure}{Spherical Tokamak cross-section \label{1.0F02}} \end{center} Spherical Tokamaks do present very significant engineering difficulties. The center-post is exposed to very high loads of heat and neutron radiation. The ARIES spherical Tokamak concept envisions heat flux of 6.4 $MW/m^2$ to the middle section of the center-post. About 80\% of this heat is carried by neutron radiation \cite[p.31]{STK01}. Another problem for spherical Tokamaks is the very high magnetic field at the superconducting wires running within the center-post. As we see from Table \ref{1.0T12} below, the ratio of the magnetic field at the wires on the inner circle of the torus to $B_{_T}$ increases with decreasing $A$.
\begin{center} \begin{tabular}{|c|r|r|l|l|l|l|} \hline Reactor & Status & $B_{_T}$ & $B_{_{\text{Max}}}$ &Field & A & Source \\ & & $Tesla$ & $Tesla$ & Ratio & & \\ \hline Aries-ST& Project & 2.14 & 7.6 & 3.55 & 1.6 & \cite[p.246]{FusBk2} \\ Vector & Project & 4.7 & 19.6 & 4.17 & 2.0 & \cite[p.246]{FusBk2} \\ SlimCS & Project & 6.0 & 16.4 & 2.73 & 2.6 & \cite[p.246]{FusBk2} \\ JT-60SA & Completed & 2.68 & 6.5 & 2.43 & 2.7 & \cite[p.21]{SCMS01} \\ ITER & Under constr. & 5.3 & 13.5 & 2.55 & 3.1 & \cite[p.117]{FusBk1} \\ TS & Completed & 4.5 & 9.3 & 2.07 & 3.2 & \cite[p.18]{SCMS01} \\ SABR & Project & 5.7 & 13.5 & 2.37 & 3.4 & \cite[p.231]{FusBk1} \\ KSTAR & Completed & 3.5 & 7.2 & 2.06 & 3.6 & \cite[p.20]{SCMS01} \\ Aries-RS& Project & 8 & 16 & 2.00 & 4.0 & \cite[p.227]{FusBk1} \\ EAST & Completed & 3.5 & 5.8 & 1.66 & 4.3 & \cite[p.19]{SCMS01} \\ Aries-I & Project & 11 & 19 & 1.73 & 4.5 & \cite[p.227]{FusBk1} \\ SST-1 & Completed & 3.0 & 5.1 & 1.70 & 5.5 & \cite[p.20]{SCMS01} \\ \hline \end{tabular} \captionof{table}{Tokamak magnetic field} \label{1.0T12} \end{center} It is possible that spherical Tokamak reactors would present new and unexpected problems. This happens to every technology on its way to maturity. For instance, Thompson's and Blackman's 1946 patent envisioned a deuterium-deuterium fusion reactor with major radius $R=1.3\ m$, minor radius $a=0.3\ m$, and operating temperature of 500 $keV$ \cite[p.3]{Spheromak02}. Their energy confinement time of 65 $s$ was an overestimate by a factor of at least $10^5$. Given that they had no experimental data, their mistake is understandable. It is possible that our understanding of fully working Tokamaks, which have not been built yet, is equally flawed. Recalling (\ref{1.03.15}), for a deuterium -- $^3$He reactor, the synchrotron power fraction is proportional to $\beta^{-1.5}$. As we see from Table \ref{1.0T13}, the synchrotron power fraction is substantial even for $\beta=0.3$.
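The Field Ratio column of Table \ref{1.0T12} can be spot-checked against the $B_{_{\text{Max}}}$ and $B_{_T}$ columns with a short Python snippet (illustrative only; all numbers are copied from the table):

```python
# Spot-check of Table 1.0T12: the quoted "Field Ratio" should equal
# B_Max / B_T for every reactor listed.
rows = {  # name: (B_T [Tesla], B_Max [Tesla], quoted ratio)
    "Aries-ST": (2.14, 7.6, 3.55),
    "Vector":   (4.7, 19.6, 4.17),
    "SlimCS":   (6.0, 16.4, 2.73),
    "JT-60SA":  (2.68, 6.5, 2.43),
    "ITER":     (5.3, 13.5, 2.55),
    "TS":       (4.5, 9.3, 2.07),
    "SABR":     (5.7, 13.5, 2.37),
    "KSTAR":    (3.5, 7.2, 2.06),
    "Aries-RS": (8.0, 16.0, 2.00),
    "EAST":     (3.5, 5.8, 1.66),
    "Aries-I":  (11.0, 19.0, 1.73),
    "SST-1":    (3.0, 5.1, 1.70),
}
for name, (b_t, b_max, ratio) in rows.items():
    assert abs(b_max / b_t - ratio) < 0.01, name
print("field ratios consistent")
```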
In order to operate, a deuterium -- $^3$He reactor must have $\beta \ge 0.3$. Such high values of $\beta$ are possible only for spherical Tokamaks. Thus, spherical Tokamaks are required for deuterium -- $^3$He fusion. \subsection{Increasing toroidal magnetic field $B_{_T}$} According to Eq. (\ref{1.05.01}), the Lawson pressure criterion of a Tokamak is proportional to about $B^{3.7}$, and the Lawson pressure criterion of a Stellarator is proportional to $B^{3.87}$. Moreover, fusion reactor power is proportional to $B^4$. Thus, increasing the magnetic field is very important. In any working fusion reactor, the magnetic fields will be produced by superconducting electromagnets. In order for an electric conductor to be superconducting, it must be kept below its critical temperature $T_{_{\text{critical}}}$. It also has to be kept below the temperature-dependent critical magnetic field $B_{_{\text{critical}}}(T)$. Generally, $B_{_{\text{critical}}}(T)$ is a strongly decreasing function of temperature, thus a superconductor which has to sustain a strong magnetic field must be kept well below its critical temperature. ITER uses Nb$_3$Sn superconducting wires at a temperature of 4.5 $^o$K. The gross weight of all coils is 9,677 $tons$ \cite[p.23]{SCMS01}. For a reactor similar to ITER, the magnetic field at the wires on the inner circle of the torus is $2.55\ B_{_T}$. For a reactor with aspect ratio $A=2$, the magnetic field at the wires on the inner circle of the torus is about $3.5\ B_{_T}$ to $4.2\ B_{_T}$ \cite[p.246]{FusBk2}. Thus, Tokamaks and Stellarators using high magnetic fields must sustain very high magnetic fields on the superconducting wires. There are two types of superconductors. The first type is called low temperature superconductor (LTS). These superconductors are almost always metallic. Critical magnetic fields for all LTS are at most 23.5 $Tesla$.
These superconductors are already used in magnetic resonance imaging (MRI) and large hadron collider (LHC) magnets \cite[p.1]{HTSTape04}. The second type of superconductors is high temperature superconductors. These superconductors have much higher critical temperatures and critical magnetic fields. These superconductors are ceramic. They can be made somewhat flexible as a very thin foil attached to a tape. REBCO tape is 4 $mm$ wide and 0.12 $mm$ thick. The superconducting layer is only 2 $\mu m$ thick \cite[p.2]{HTSTape04}. High temperature superconductor tape is indispensable for all applications with fields over 25 $Tesla$ and for all applications with operating temperature of 10 $^o$K or higher \cite[p.5]{HTSTape03}. "High temperature superconductor tape (HTST) can carry high current densities even at field strengths of 30 $Tesla$" \cite[p.1]{HTSTape04}. High temperature superconductor tape would definitely enable Tokamaks to have a toroidal field of $B_{_T}=12\ Tesla$. Superconducting tapes can carry high currents in a 40 $Tesla$ field \cite[p.6]{HField1}. Thus, toroidal fields of $B_{_T}=16\ Tesla$ may be possible in a few decades. A blueprint for a 100 $Tesla$ superconducting magnet has been produced \cite{100T}. A new class of superconducting materials has recently been discovered. These materials become superconducting at room temperature at pressures of several million atmospheres. Some carbon-doped sulfur hydrides exhibit superconductivity at 15 $^o$C at a pressure of 2.7 million atmospheres \cite{RTSC}. Carbon nanotubes have tensile strength of 1.5 million atmospheres, thus super-pressurized superconducting wires may be possible \cite{CNT1}. These wires may have a very high critical magnetic field. \subsection{Decreasing safety factor $q$ and increasing $\beta_{_N}$} According to Eq.
(\ref{1.05.01}), the Lawson pressure criterion of a Tokamak is proportional to about $q^{-3}$, and the Lawson pressure criterion of a Stellarator is proportional to $q^{-1.05}\cdot \beta^{0.85}$. According to Eq. (\ref{1.04.53}), fusion reactor power is proportional to $\beta_{_N}^2\ q^{-2}$. According to Eq. (\ref{1.03.15}), the fraction of fusion power lost to synchrotron radiation is proportional to $\beta^{-1.5}$. Since $\beta \propto \beta_{_N}$, synchrotron radiation loss is proportional to $\beta_{_N}^{-1.5}$. Thus, decreasing the safety factor $q$ and increasing $\beta_{_N}$ are important goals. ITER has $\beta_{_N}=1.77$ \cite[p.243]{FusBk2}. If $\beta_{_N}$ exceeds the \textbf{Troyon limit}, then the plasma becomes unstable \cite[p.933]{Tokamak}. According to Troyon himself, the limit is $\beta_{_N} \le 2.8$ \cite{Troyon}. According to theoretical calculations, $\beta_{_N} \le 3.5$ \cite[p.332]{Tokamak}. The START spherical Tokamak experiment achieved stability for $\beta_{_N}=6$ and $\beta=0.4$. Most proposed Tokamaks have safety factors between 3 and 4. ITER has safety factor $q=3$ \cite[p.243]{FusBk2}. Some researchers believe that Tokamaks and Stellarators can operate with safety factors $q \le 2$ \cite{SF03}. Operating at a low safety factor and high $\beta_{_N}$ would greatly enhance reactor performance. Nevertheless, operating in unsafe regimes is likely to cause many reactor shutdowns and accidents. A fusion reactor accident would not consist of a thermonuclear explosion, but it may break a reactor worth billions. Moreover, such an accident may release a large amount of radioactive material. \section{Prospects for reactor development} \subsection{Likely timeline} In our opinion, deuterium -- tritium fusion power will play a considerable role in Global energy production during the second half of this century.
Deuterium -- $^3$He fusion power is unlikely to play any role on Earth, but it is likely to play a major role during Solar System colonization during the next century. We are not optimistic about the potential for thermonuclear power in the short time frame. Over the previous four decades, fusion power research has received a very low budget. Between 1960 and 1974, the annual US budget for magnetic confinement fusion was about \$200 million \cite[p.79]{Hist02}. Between 1975 and 1982, magnetic confinement fusion research in the USA received annual funding of about \$1.0 billion. In later years, annual funding for magnetic confinement fusion was continuously decreasing until about 1997 \cite{NFBudget1}. Between 2000 and 2012, the average annual magnetic confinement fusion funding in the USA was \$300 million to \$400 million \cite{NFBudget2}. Overall fusion funding based on \cite[p. 7]{LCostF1} is presented in Figure \ref{1.0F03} below: \begin{center} \includegraphics[width=16cm]{FFunding} \captionof{figure}{Federal Fusion Funding \label{1.0F03}} \end{center} A 1976 plan for the development of magnetic confinement fusion drew several scenarios for funding of magnetic confinement fusion. The maximum effective effort would have required about \$9 billion per year for 11 years. The medium effort plan would have required \$4.5 billion per year for 14 years. The low effort plan would have required \$3 billion per year for 26 years. Funding of \$1.2 billion per year was predicted to bring no result \cite[p.12]{FPlan}. In reality, average annual funding for magnetic confinement fusion between 1978 and 2015 has been \$460 million per year. All amounts mentioned in this subsection are in year 2020 dollars. The Lawson pressure criterion of the newest reactors experienced phenomenal growth between 1955 and 1997.
The top Lawson pressure criterion achieved by fusion reactors grew from $3 \cdot 10^{-6}\ bar \cdot s$ in 1960 to $3 \cdot 10^{-4}\ bar \cdot s$ in 1970, to $2 \cdot 10^{-2}\ bar \cdot s$ in 1980, and to $4\ bar \cdot s$ in 1997 \cite[p. 106]{Fus1}. Then development of new larger reactors ran up against the lack of funding. As of 2021, the records obtained in the 1990s stand. According to a 1997 source, ITER was supposed to be built and running no later than 2010 \cite[p.15]{BeegITER}. A 2011 source puts the ITER starting date at 2026 \cite[p.109]{Fus1}. A 2018 source puts that date at 2035 \cite[p.65]{Fus3}. Given that most previous predictions have turned out to be too optimistic, the starting date of ITER is unknown. Under the best-case scenario, the first deuterium -- tritium thermonuclear power plants will appear in the 2050s \cite[p.65]{Fus3}. In his previous work, the author has forecast that photovoltaic solar power would grow to become the World's leading energy source \cite[p.36755]{ShubovPM}. This may also slow down development of fusion power. Nevertheless, deuterium -- tritium fusion power is likely to play a significant role in the second half of this century. We expect many technological improvements, such as less expensive and more durable high temperature superconductor tape, to make fusion energy cost-effective by that time. Introduction of deuterium -- $^3$He fusion power will take place only after deuterium -- tritium fusion technology has reached maturity. As we have mentioned in Section {\rr 2.5}, deuterium -- $^3$He reactors must have a reactor criterion $\mathcal{R}_{_C}$ at least 424 times higher than similar deuterium -- tritium reactors. It is likely that proliferation of deuterium -- $^3$He reactors will take place only in the next century. Another major obstacle for deuterium -- $^3$He fusion is the low availability of $^3$He. Resources of $^3$He are almost non-existent on Earth.
The USA has a stockpile of about 30 $kg$ of $^3$He -- a fraction of the amount needed to run a fusion reactor \cite{EarthHe3}. Moderate deposits of $^3$He are available on the Moon. Overall reserves of $^3$He in lunar regolith are estimated at 2.5 million tons \cite{MoonHe3}. The energy content of $^3$He is about $2 \cdot 10^7$ times higher than that of coal \cite[p. 33]{MoonHe3_03}. Hence, lunar $^3$He has the energy equivalent of 50 trillion tons of coal. Obtaining $^3$He would be difficult due to its low concentration in lunar regolith. Helium concentration in lunar rock is 30-40 parts per million (ppm) in terms of mass. About 0.03\% of that helium is $^3$He \cite[p. 29]{MoonHe3_02}. The highest concentration of $^3$He anywhere in lunar soil is 44 parts per billion (ppb) by mass \cite{MoonHe3}. Hence, the amount of lunar soil needed to be mined for a given amount of energy is at least the same as the amount of coal needed to be mined on Earth for the same energy. A lunar plant would heat lunar regolith to 700 $^o$C, sort the volatiles, and export them to Earth \cite[p. 22]{MoonHe3_02}. This would be possible only during advanced stages of Solar System colonization. The main source of $^3$He in the Solar System is the outer planets -- Jupiter, Saturn, Uranus, and Neptune. The atmospheres of these planets have $^3$He concentration of 3 ppm to 20 ppm. Proposals for atmospheric mining in the outer solar system (AMOSS) are already being presented \cite{JupiterMine}. This resource should become available during advanced stages of Solar System colonization. \subsection{Spheromak2100} Spheromak2100 is our concept of a deuterium -- $^3$He Tokamak to be used in Solar System exploration and colonization. As we discuss in Subsection {\rr 6.1}, deuterium -- $^3$He Tokamaks are unlikely to appear within this century, hence the name reflects the probable year an early deuterium -- $^3$He Tokamak can be built.
Even though it is unlikely that engineers of that time will find any use for an archaic design, this design is likely to represent a lower bound on performance. Spheromak2100 represents a slight modification of GA Project 4437, developed by another team in 1996 \cite[p. 45]{Spheromak04}. The reactor is shaped like a giant egg. It has a height of 35.8 $m$ and diameter of 32.4 $m$. A column 5.4 $m$ in diameter runs from the top to the bottom of the "egg" \cite[p. 45]{Spheromak04}. The outer 1 $m$ of the "egg" would consist of a shell. The shell and inner column would contain superconducting wires carrying current to provide the magnetic field inside the "egg". The total current flowing through the central column is 194 $MA$. The shell and inner column would also contain a neutron shield and a heat rejection system composed of multiple tubes carrying cooling fluid. Plasma would form a torus within the hollow space of Spheromak2100. Similar to the GA Project 4437 Tokamak, it has major radius $R=9.45\ m$, minor radius $a=6.75\ m$, aspect ratio $A=1.4$, and elongation $\kappa=2.5$. The GA Project 4437 Tokamak has toroidal magnetic field $B_{_T}=2.7\ Tesla$, $\beta=0.61$, and operating temperature of 100 $keV$. Its fusion power is 11 $GW$ \cite[p.45]{Spheromak04}. So far, $\beta=0.4$ is the record demonstrated for any Tokamak \cite[p.29]{Spheromak02}. This record was set in 1998, and as of 2021 there has been no indication of this record being surpassed. Thus, the design value of $\beta=0.61$ is unreasonably optimistic. Spheromak2100 has $\beta=0.4$, toroidal magnetic field $B_{_T}=4.1\ Tesla$, and operating temperature of 70 $keV$. At this point, we can calculate the fusion power of Spheromak2100. In (\ref{1.04.45}), we demonstrate that the power of a Tokamak is proportional to $\beta^2\ B_{_T}^4$. Tokamak power is also proportional to $\sigma_{_{P}}(T)$ tabulated in Table \ref{1.0T02}.
Hence, the fusion power of Spheromak2100 is \be \label{1.06.01} P_{_{\text{Spheromak2100}}} = P_{_{\text{GA Project 4437 Tokamak}}}\ \left[ \frac{\sigma_{_{P}}(T)\ \beta^2\ B_{_T}^4\Big|_{\text{Spheromak2100}}} {\sigma_{_{P}}(T)\ \beta^2\ B_{_T}^4\Big|_{\text{GA Project 4437 Tokamak}}} \right]=31\ GW. \ee An increase in Tokamak power corresponds to an increase in heat flux at the wall. The GA Project 4437 Tokamak has been designed with maximum heat flux at the wall of 10 $MW/m^2$. We make an optimistic assumption that by 2100, a wall loading of 30 $MW/m^2$ will be sustainable. The wall reflectivity of the Spheromak2100 fusion chamber is $w_{_r}=0.8$. The values of $f_{_{\text{Synchrotron}}}$ for Spheromak2100, calculated by (\ref{1.03.15}), are tabulated in Table \ref{1.0T13} below. The last row is $f_{_{\gamma}}$ -- the fraction of fusion power radiated away via synchrotron and Bremsstrahlung radiation. \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline Quantity & Unit & & & & & & & & \\ \hline T &$keV$ & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 \\ \hline $\sigma_{_{P}}(T)$ & $10^{-2}\ bar^{-1} s^{-1}$ & 2.54 & 3.41 & 3.80 & 3.86 & 3.73 & 3.49 & 3.21 & 2.94 \\ \hline $x_{_r}\ f_{_B}$ & & 1.32 & 0.64 & 0.41 & 0.31 & 0.25 & 0.22 & 0.2 & 0.19 \\ \hline $x_{_r}\ f_{_{\text{Synchrotron}}}$ & & 0.08 & 0.09 & 0.12 & 0.16 & 0.21 & 0.28 & 0.37 & 0.49 \\ \hline $x_{_r}\ f_{_{\gamma}}$ & & 1.40 & 0.73 & 0.53 & 0.47 & 0.46 & 0.50 & 0.57 & 0.68 \\ \hline \end{tabular} \captionof{table}{Radiation losses for Spheromak2100} \label{1.0T13} \end{center} As we see from Table \ref{1.0T13} above, the optimal operating temperature of a deuterium -- $^3$He reactor is between 60 $keV$ and 70 $keV$. Now we calculate the Greenwald ratio for Spheromak2100. From Table \ref{1.0T11}, we see that the GA Project 4437 Tokamak has $R_{_G}=0.48$. These two reactors have the same major radius $R$ and safety factor $q$.
From (\ref{1.05.07}), it follows that $R_{_G}$ of the two reactors would be proportional to $\beta\ B_{_T}\ T^{-1}$. Hence, Spheromak2100 has $R_{_G}=0.65$. Now we calculate $f_{_{\text{heat}}}$ for Spheromak2100. The fraction of fusion power which is used to heat the plasma exclusive of the power lost to Bremsstrahlung and synchrotron radiation is given in (\ref{1.04.04}). For deuterium -- $^3$He fusion, neutronicity is 5\% \cite[p.24]{Tokamak}. As we see from Table \ref{1.0T13}, for deuterium -- $^3$He fusion $f_{_{\gamma}}=0.46\ x_{_r}^{-1}$. Based on the values of neutronicity and synchrotron and Bremsstrahlung power fractions presented above, we conclude that \be \label{1.06.02} f_{_{\text{heat}}}=0.95- 0.46\ x_{_r}^{-1} \ee for deuterium -- $^3$He plasma. Based on Eq. (\ref{1.04.08}) and data from Table 3 and Eq. (\ref{1.06.02}), we find that for Spheromak2100, the Lawson pressure criterion is \be \label{1.06.03} C_{_{LPI}}=\frac{6}{f_{_{\text{heat}}}\ \sigma_{_{P}}(T)} =\frac{6}{0.041\ bar^{-1}\ s^{-1} \big(0.95- 0.46\ x_{_r}^{-1} \big)} =\frac{299\ bar \cdot s}{1.94-0.94\ x_{_r}^{-1}}. \ee Another source gives a value of $C_{_{LPI}} = 430\ bar \cdot s$ for the deuterium -- $^3$He Lawson pressure ignition criterion \cite[p.81]{AF01}. As we have shown in Section 1.5, $f_{_{\text{heat}}}$ for a deuterium -- $^3$He plasma strongly depends on the reactor. Thus, the more optimistic value obtained here does not contradict that source. In Table \ref{1.0T14} below, we summarize the properties of Spheromak2100. The values of Bremsstrahlung and synchrotron radiation power are based on data in Table \ref{1.0T13}. Normalized beta is calculated by (\ref{1.04.32}).
\begin{center} \begin{tabular}{|l||l|} \hline \textbf{Reactor dimensions} & \textbf{Reactor performance} \\ Shape: egg-like & Fusion power: 31 $GW$ \\ Height: 35.8 $m$ & Bremsstrahlung power: $\big(7.8\ x_{_r}^{-1}\big)\ GW$ \\ Diameter: 32.4 $m$ & Synchrotron power: $\big(6.5\ x_{_r}^{-1}\big)\ GW$ \\ Shell thickness: 1 $m$ & Synchrotron wall reflectivity: $w_{_r}=0.8$ \\ Inner column radius: 2.7 $m$ & Greenwald ratio: $R_{_G}=0.65$ \\ Plasma volume: 15,600 $m^3$ & Toroidal current: $I=202\ MA$ \\ Inner wall area: 3,200 $m^2$ & Beta: $\beta=0.4$ \\ \cline{1-1} \textbf{Plasma torus properties} & Normalized beta: $\beta_{_N}=5.6$ \\ Major radius: $R=9.45\ m$ & \\ Aspect ratio: $A=1.4$ & \\ Plasma elongation: $\kappa=2.5$ & \\ Plasma triangularity: $\delta=0.8$ & \\ Shape factor: $S_{_F}=60$ & \\ Toroidal field: $B_{_T}=4.1\ Tesla$ & \\ Plasma temperature: $T=70\ keV$ & \\ \hline \end{tabular} \captionof{table}{Properties of Spheromak2100} \label{1.0T14} \end{center} \section{Conclusion} \subsection{Summary of energy balance in fusion reactors} In this subsection, we describe the energy balance of two typical Tokamaks -- a deuterium -- tritium reactor similar to Big ITER and a deuterium -- $^3$He Tokamak similar to Spheromak2100. Energy balance in these reactors is representative of the energy balance in all possible Tokamaks and Stellarators. The deuterium -- tritium reactor would run at a temperature of 20 $keV$. From Table \ref{1.0T09} we find that operating the reactor at 15 $keV$ minimizes radiation loss. Nevertheless, operating at a higher temperature increases plasma conductivity and reduces constraints imposed by the Greenwald density limit. The deuterium -- $^3$He reactor would operate at 70 $keV$. In Table \ref{1.0T15} below, we present the energy balance of the two aforementioned reactors. The values of $x_{_r}\ f_{_{\gamma}}$ on Row 5 are obtained from Table \ref{1.0T09} and Table \ref{1.0T13}.
In Row 6, $P_{_{\text{Conductive}}}$ is the conductive heat loss. The magnitude of $P_{_{\text{Conductive}}}$ is weakly dependent on, or independent of, reactor power. \begin{center} \begin{tabular}{|l|l|l|} \hline & Big ITER & Spheromak2100 \\ \hline Reactor type & D -- T & D -- $^3$He \\ \hline Operating temperature & 20 $keV$ & 70 $keV$ \\ \hline Neutronicity $f_{_n}$ & 0.8 & 0.05 \\ \hline $x_{_r}\ f_{_{\gamma}}$ & 0.036 & 0.46 \\ \hline $P_{_{\text{Conductive}}}$ & 20 $MW$ to 300 $MW$ & 400 $MW$ to 6 $GW$ \\ \hline \end{tabular} \captionof{table}{Energy balance for two Tokamaks} \label{1.0T15} \end{center} Recall the expressions for $x_{_r}$: \be \label{1.07.01} x_{_r} \approx \left\{ \begin{split} &\left[\frac{2-2\ x_{_b}}{2-x_{_b}}\right]^2 \left[1-\frac{4 x_{_i}}{2-x_{_b}} \right] \qquad \text{for D -- T fusion} \\ &\big(1-x_{_b}\big)^2\ \big(1-2x_{_i}\big) \hskip1.98cm \text{for D -- $^3$He fusion}. \end{split} \right. \ee In Eq. (\ref{1.07.01}) above, $x_{_b}$ is the proportion of the fuel burned and $x_{_i}$ is the proportion of impurities within the plasma. The fraction of fusion power which is used to heat the plasma, exclusive of the power lost to Bremsstrahlung and synchrotron radiation, is plotted below: \begin{center} \includegraphics[width=16cm,height=12cm]{Fheat} \captionof{figure}{The fraction of fusion power used to heat the plasma \label{1.0F04}} \end{center} As we see from Figure \ref{1.0F04} above, the proportion of the fuel burned should not exceed 20\% for deuterium -- $^3$He fusion and 50\% for deuterium -- tritium fusion. \subsection{Remaining problems} The first open problem is building even one Tokamak or Stellarator with a fusion power of several hundred MW to several GW. ITER should be the first such reactor -- it would have a fusion power of 500 $MW$. Its construction cost is \$25 Billion \cite[p. 1]{LCostF1}. We do not know when ITER will be running. A 1997 source puts the ITER starting date at 2010 \cite[p.15]{BeegITER}.
A 2018 source puts that date at 2035 \cite[p.65]{Fus3}. The second open problem is calculating the conductive power loss within fusion reactors. In Subsection 4.2, we have mentioned several theoretical and experimental models for the energy confinement time. Each of these models has a corresponding model for conductive power loss. Combining (\ref{1.04.37}), (\ref{1.04.39}), and (\ref{1.04.41}) we obtain the following set of models for conductive power loss: \be \label{1.07.02} P_{_{\text{Conductive}}} \propto \left\{ \begin{split} &B_{_T}^{0.32} R^{0.32} A^{1.42} \beta^{1.9} q^{3} \kappa^{4.29} T^{1.32} \overline{M}^{-0.61} (\overline{Z}+1)^{1.32} \ \ \ \ \text{for ``IPB98(y,2)'' model} \\ &B_{_T}^{0.43} R^{0.43} A^{1.14} \beta^{1.74} q^{2.46} \kappa^4 T^{1.11} \overline{M}^{-0.23} (\overline{Z}+1)^{1.11} \ \ \text{for model from \cite{Tokamak}} \\ &A\ \beta\ q^2\ \kappa^{3.5}\ T\ (\overline{Z}+1) \hskip5.1cm \text{for model from \cite{ST13}} \end{split} \right. \ee More accurate power coefficients will be obtained only when Tokamaks with fusion power in the hundreds of $MW$ to several $GW$ are built. The third open problem is developing a detailed understanding of, and tools for, valid prediction of synchrotron radiation power loss. As we have mentioned in Subsection 3.1, different theories predict different rates of plasma energy loss by synchrotron radiation. These results vary by up to a factor of 2 \cite[p.70]{BeegITER}. The fourth open problem is the study of the behavior of spherical Tokamaks. What is the highest value of $\beta$ under which a spherical Tokamak can operate? What is the most accurate model for conductive power loss in spherical Tokamaks? In order to answer these questions, large spherical Tokamaks must be built.
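The spread among the conductive-loss scalings in (\ref{1.07.02}) can be illustrated numerically. The sketch below (Python; all parameter values are placeholders chosen only for illustration, not design data) evaluates the change in $P_{_{\text{Conductive}}}$ that each model predicts when the elongation $\kappa$ is raised from 1.7 to 2.5 with everything else held fixed:

```python
# Exponents of the three conductive-loss scalings in Eq. (1.07.02).
# Overall prefactors are unknown, so only ratios between two parameter
# sets are meaningful.
MODELS = {
    "IPB98(y,2)": {"B": 0.32, "R": 0.32, "A": 1.42, "beta": 1.9, "q": 3.0,
                   "kappa": 4.29, "T": 1.32, "M": -0.61, "Z1": 1.32},
    "Tokamak":    {"B": 0.43, "R": 0.43, "A": 1.14, "beta": 1.74, "q": 2.46,
                   "kappa": 4.0, "T": 1.11, "M": -0.23, "Z1": 1.11},
    "ST":         {"B": 0.0, "R": 0.0, "A": 1.0, "beta": 1.0, "q": 2.0,
                   "kappa": 3.5, "T": 1.0, "M": 0.0, "Z1": 1.0},
}

def loss_ratio(model, params_new, params_old):
    """Ratio P_cond(new) / P_cond(old) under one scaling model."""
    ratio = 1.0
    for var, exponent in MODELS[model].items():
        ratio *= (params_new[var] / params_old[var]) ** exponent
    return ratio

# Placeholder base parameters (B in Tesla, T in keV; q, M, Z1 are guesses).
base = {"B": 4.1, "R": 9.45, "A": 1.4, "beta": 0.4, "q": 2.0,
        "kappa": 1.7, "T": 70.0, "M": 2.5, "Z1": 2.0}
elongated = dict(base, kappa=2.5)

for name in MODELS:
    r = loss_ratio(name, elongated, base)
    print(f"{name:10s}: kappa 1.7 -> 2.5 multiplies P_cond by {r:.2f}")
```

Because the $\kappa$ exponents range from 3.5 to 4.29, the same change in elongation multiplies the predicted conductive loss by anywhere from roughly 3.9 to 5.2, which is exactly why large machines are needed to discriminate among the models.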
\section*{Experimental details} Expanded polystyrene spheres were deposited into a Hele-Shaw cell, as shown in Fig. \ref{fig:Setup} (the size distribution of the granular material and the dimensions of the cell were similar to those in \cite{Diaz-Melian2020}). A square-faced intruder of $6.8$~cm side, $5.2$~cm width and $0.237$~kg mass was released from the surface of the granular bed by means of an electromagnetic device that minimized initial spurious vibrations and torques on the intruder. Initially, the intruder was gently touching the left vertical wall of the cell, as illustrated in Fig. \ref{fig:Setup}. Videos of the penetration process were taken through one of the large faces of the cell using a digital camera. Three colored dots, located at the center and near the edges of one of the square faces of the intruder, served as reference points for image analysis \cite{reyes2021yupi}, so the motion and rotation angles could be quantified. \section*{Results and discussion} A square intruder released near a vertical wall shows an interesting behavior during its penetration. Fig. \ref{fig:Snapshots} shows a series of snapshots of a typical experiment. The first stage of the motion ($<50\,$ms) is a vertical plunge corresponding to an almost free fall. This is followed by a tilting phase (between $50\,$ms and $125\,$ms) with little penetration. After the intruder rotates a certain angle, it ``slides'' into the granular bed (between $125\,$ms and $400\,$ms), resulting in a very large lateral displacement (1.2 times the intruder size). The ``slope'' over which the intruder moves is formed by the loaded force chains that form between the bottom of the intruder and the vertical wall. The motion ends with an inverse tilting (starting around $300\,$ms) that partially compensates the initial tilting, in such a way that the intruder ends with little inclination.
This effect is the result of the intruder colliding with a more solid layer of the granular bed, which exerts a net torque ``correcting'' its rotation. Fig. \ref{fig:Trajectories} shows the trajectories of the intruder released at different initial separations ($x_0$) from the wall, averaged over $10$ repetitions of the experiment. The trajectories followed by the intruders corresponding to $x_0 = 0$ and $1\,$cm are strikingly similar, suggesting the formation of the same ``slope'' in both cases. The effect of the slope greatly diminishes for higher values of $x_0$, as should be expected with the increase in the distance from the wall, resulting in less stressed force chains (``loose slope''). \begin{figure}[!h] \includegraphics[width=260px]{trajectories.pdf} \caption{(a) Trajectories averaged over 10 repetitions for $x_0=$ 0, 1, 2, 3, 4 and $5\,$cm. (b) Same trajectories, where the horizontal axis shows the net displacement ($\Delta x = x - x_0$).} \label{fig:Trajectories} \end{figure} Fig. \ref{fig:deltax} illustrates how much greater the lateral repulsion for $x_0 = 0$ and $1\,$cm is, compared to all other values of $x_0$. The intruders released closer to the wall slide through a more rigid slope of force chains, resulting in a very large horizontal motion. This phenomenon is not observed in cylindrical intruders \cite{Diaz-Melian2020}, for which the maximum value of $\Delta x$ is much smaller (0.65 times the intruder size for a cylinder, and 1.2 times the intruder size for a square cuboid). \begin{figure}[!h] \includegraphics[width=255px]{x_exp.pdf} \caption{Net horizontal repulsion. (a) Time evolution of $\Delta x = x - x_0$ of the intruder after being released for the $x_0$ values shown in Fig. \ref{fig:Trajectories}. (b) Maximum lateral displacement (error bars are the corresponding standard deviations). The color scale of both (a) and (b) is consistent with the one shown in Fig. \ref{fig:Trajectories}.} \label{fig:deltax} \end{figure} Fig.
\ref{fig:deltaz} shows the vertical penetration of the intruder released from different initial positions $x_0$. Here we distinguish a slightly higher penetration for $x_0=1\,$cm than for $x_0=0\,$cm, caused by the stronger force chains closer to the wall, which dissipate a higher portion of the intruder's potential energy. The maximum penetration decreases for $x_0=2\,$ and $3\,$cm, due to a smaller rotation of the intruder (for $x_0 = 0$ and $1\,$cm the geometry of the rotated intruder favors the penetration). Surprisingly, the maximum value of $\Delta z$ increases at $x_0=4\,$cm, reaching a stable value (approximately the same for higher values of $x_0$). This could be caused by the reduced effect of the vertical wall on the stress of the force chains formed under the intruder (note that this effect was masked by the rotation that favored penetration for lower values of $x_0$). \begin{figure}[!h] \includegraphics[width=255px]{z_exp.pdf} \caption{Vertical penetration. (a) Time evolution of $\Delta z = z - z_0$ of the intruder after being released for the $x_0$ values shown in Fig. \ref{fig:Trajectories}. (b) Maximum penetration depth.} \label{fig:deltaz} \end{figure} Fig. \ref{fig:Rotation} shows the time evolution of the rotated angle $\theta$ of the intruder released from increasing values of $x_0$. The three stages of the motion (tilting, sliding and inverse tilting) can be clearly observed. As shown by the large values of the error bars for $x_0\le2\,$cm, the rotation of the intruder seems quite sensitive to the fluctuations in the configuration of the granular bed. \begin{figure}[!h] \includegraphics[width=255px]{theta_exp.pdf} \caption{Rotation. (a) Time evolution of $\Delta \theta = \theta - \theta_0$ of the intruder after being released.
(b) Maximum and final values of $\theta$.} \label{fig:Rotation} \end{figure} We are performing Discrete Element Simulations using LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) \cite{plimpton2007lammps} in order to reproduce the experimental results, visualize the force chains, and understand the role of dissipation in the dynamics. The results will be published elsewhere. \section*{Conclusions} We have shown that a square cuboid intruder released near a vertical wall into a granular bed moves in three phases. Firstly, it rotates around its symmetry axis, moving away from the wall (tilting phase). Secondly, it ``slides downhill'' on top of a virtual ``slope'' into the granular bed (sliding phase). Thirdly, the motion ends with an opposite rotation that partially compensates the rotated angle back to a value closer to zero (reverse tilting phase). However, as the cube is initially released at increasing distances from the wall, both the tilting and the sliding become smaller. \section*{Acknowledgment} We acknowledge the University of Havana's institutional project ``Granular media: creating tools for the prevention of catastrophes''. The Institute ``Pedro Kourí'' is thanked for allowing us to use their computing cluster. E. Altshuler found inspiration in the late M. \'Alvarez-Ponte. \bibliographystyle{unsrt}
\section*{Introduction} The production of weak boson pairs is an important topic to study at hadron colliders because these processes can be used to test the standard model (SM) as well as probe beyond it \cite{FIRSTWZ}. Diboson production is important for the following reasons. \begin{itemize} \item{} The $W^{\pm}\gamma$, $W^{\pm}Z$, and $W^+ W^-$ processes can be used to test the trilinear $WW\gamma$ and $WWZ$ couplings. These couplings are completely fixed by the ${\rm SU(2)} \otimes {\rm U(1)}$ gauge structure of the SM, thus measurements of these couplings provide stringent tests of the SM. Remarkable progress has recently been made in measuring these couplings at the Fermilab Tevatron collider \cite{UCLA}. \item{} The electroweak symmetry breaking (EWSB) mechanism can be probed by studying weak boson pair production. The EWSB mechanism is unknown, but it is believed that either there exists a scalar particle with mass $m < 1$~TeV or else the longitudinal components of the $W$ and $Z$ bosons become strongly interacting for parton center-of-mass energies larger than about 1~TeV \cite{EWSB}. For example, the observation of resonance production of $ZZ$, $W^+ W^-$, or $\gamma \gamma$ would be a signal for the standard model Higgs boson, whereas enhanced production of longitudinally polarized $W$ and $Z$ pairs would be evidence for a strongly interacting EWSB scenario. \item{} Diboson production is a potential background to new physics. New heavy particles, such as $H^0$, $H^{\pm}$, $\rho_{\rm TC}^{}$, $\eta_{\rm TC}^{}$, $W^\prime$, $Z^\prime$, $\tilde q$, and $\tilde g$ can decay into weak boson pairs. \end{itemize} In order to test and probe the SM with hadronic diboson production, it is necessary to have precise calculations of SM diboson production, which means the cross sections must be calculated to next-to-leading order (NLO). The NLO cross section is, in general, less sensitive to the choices of the arbitrary factorization and renormalization scales.
The results described here are based on complete ${\cal O}(\alpha_s)$ calculations of the processes $p\,p\hskip-7pt\hbox{$^{^{(\!-\!)}}$} \rightarrow V_1 V_2 + X$ where $V_i = W, Z, \gamma$ \cite{NLOVV}. The calculations also include the leptonic decays of the $W$ and $Z$ bosons \cite{NEWJO,BHO}. This is an important feature to include since the $W$ and $Z$ bosons are observed experimentally via their leptonic decay products. It is therefore important to include the experimental cuts on the decay leptons when comparing a theoretical calculation to the experimental data. The calculations have been done using a combination of analytic and Monte Carlo integration techniques. Among the advantages of this formalism are: \begin{itemize} \item{} It is easy to impose cuts in the calculation. \item{} It is possible to calculate any number of observables simultaneously by simply histogramming the quantity of interest. \item{} It is possible to calculate not only the NLO inclusive cross section, but also the 0-jet and 1-jet exclusive cross sections. \end{itemize} Details of the formalism can be found in the original references \cite{NLOVV,NEWJO,BHO}. \section*{The $Z\gamma$ and $W\gamma$ Processes} The first processes to be considered are the $Z\gamma$ and $W\gamma$ processes. The total LO and NLO cross sections for these processes are plotted as functions of the center of mass energy in Fig.~1. The difference between the NLO and LO curves is the ${\cal O}(\alpha_s)$ correction. In the $Z\gamma$ process, the ${\cal O}(\alpha_s)$ corrections range from 10\% to 30\% over the domain of $\sqrt{s}$. This is what one naively expects since $\alpha_s$ is of order 0.10. In the $W\gamma$ process, on the other hand, the corrections range from 20\% at small $\sqrt{s}$ to a surprising 300\% at large $\sqrt{s}$. 
In order to understand the large ${\cal O}(\alpha_s)$ corrections in the $W\gamma$ process, it is instructive to compare the behavior of the $2 \to 2$ and $2 \to 3$ processes for $Z\gamma$ and $W\gamma$ production. Figure~2(a) compares the $2\to 2$ cross sections. Normally, hadronic $W$ production is about twice as large as hadronic $Z$ production because the $W$-to-quark coupling is about twice as big as the $Z$-to-quark coupling. However, for the $W\gamma$ and $Z\gamma$ processes, exactly the opposite behavior is seen; the $W\gamma$ cross section is only half as big as the $Z\gamma$ cross section. The $W\gamma$ cross section is smaller because it is suppressed by a radiation amplitude zero (RAZ) \cite{RAZ}. Delicate cancellations in the $W^{\pm}\gamma$ amplitude cause it to vanish at $\cos\theta^* = \pm {1\over 3}$ where $\theta^*$ is the parton center-of-mass scattering angle. The $2 \to 3$ cross sections for $W\gamma$ and $Z\gamma$ are compared in Fig.~2(b). Here a jet is defined as a final state quark or gluon with transverse momentum $p_T^{} > 50$~GeV and pseudorapidity $|\eta| < 3$. The cross sections have been decomposed into contributions from $qg$ and $q\bar q$ initial states ($qg$ also includes $\bar q g$). The $q g \to W\gamma + 1$~jet cross section is about twice as big as the $qg \to Z\gamma + 1$~jet cross section, as naively expected. (The $q g \to W\gamma q$ subprocess does not have a RAZ.) The $q \bar q \to W\gamma + 1$~jet and $q \bar q \to Z\gamma + 1$~jet cross sections, on the other hand, are nearly equal, indicating that the former is still suppressed relative to the latter. (The $q \bar q \to W\gamma g$ subprocess has a RAZ in the limit $E_g \to 0$.) In summary, the $2 \to 2$ $W\gamma$ cross section is suppressed relative to the $2 \to 2$ $Z\gamma$ cross section by a RAZ, while the $2 \to 3$ $W\gamma$ cross section is larger than the $2\to 3$ $Z\gamma$ cross section due to the larger $W$-to-quark coupling.
The net result of these two behaviors is that the ${\cal O} (\alpha_s)$ corrections are much larger for $W\gamma$ production than for $Z\gamma$ production. Figure~3 again shows the total $Z\gamma$ and $W\gamma$ cross sections versus $\sqrt{s}$, but now the NLO cross sections have been decomposed into the Born cross sections and ${\cal O} (\alpha_s)$ corrections from $q \bar q$ and $qg$ initial states. This decomposition shows that the ${\cal O} (\alpha_s)$ $q\bar q$ corrections tend to be proportional to the Born cross section, whereas the ${\cal O} (\alpha_s)$ $qg$ corrections increase rapidly with $\sqrt{s}$. The ${\cal O} (\alpha_s)$ $qg$ corrections increase with $\sqrt{s}$ because the gluon density increases with $\sqrt{s}$. Figure~4 shows the $p_T^{}(\gamma)$ spectra for $Z\gamma$ and $W^+ \gamma$ production at the Large Hadron Collider (LHC) center of mass energy ($\sqrt{s} = 14$~TeV). The figure shows that the NLO corrections increase with $p_T^{}(\gamma)$. This behavior is common to all the diboson processes; the NLO corrections increase with the $p_T^{}$ of the boson. The rapidity distribution of the photon in the diboson rest frame is shown in Fig.~5 for the Tevatron center of mass energy ($\sqrt{s} = 1.8$~TeV). For the $Z\gamma$ process, the distribution exhibits the usual bell shape; however, for the $W\gamma$ process, the distribution has a pronounced dip in the central rapidity region. This dip is due to the RAZ in the $W\gamma$ process. At the Tevatron energy, the NLO corrections slightly fill the dip, but do not obscure it. Figure~6 shows the photon rapidity distribution at the LHC energy. The NLO corrections are now very large in the $W\gamma$ process and they completely fill the dip in the central rapidity region. It may still be possible, however, to observe the dip in the $W\gamma + 1$~jet exclusive cross section \cite{BHO}.
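The location of the radiation amplitude zero quoted above follows from the classical radiation-interference condition that $Q_i/(p_i \cdot k)$ be equal for all charged external particles. For massless, back-to-back initial quarks this reduces to a one-line computation (a sketch; $\theta^*$ is taken here to be measured with respect to the first quark):

```python
from fractions import Fraction

# Radiation amplitude zero in q1 qbar2 -> W gamma.  The classical condition
# is that Q_i / (p_i . k) be equal for every charged leg.  For massless,
# back-to-back quarks, p1.k and p2.k are proportional to (1 - c) and (1 + c),
# with c = cos(theta*) measured from quark 1, so Q1/(1-c) = Q2/(1+c) gives:

def raz_cos_theta(q1, q2):
    """cos(theta*) at which the q1 qbar2 -> W gamma amplitude vanishes."""
    return (q2 - q1) / (q1 + q2)

# u (Q = +2/3) and anti-d (Q = +1/3) annihilating to W+ gamma:
c = raz_cos_theta(Fraction(2, 3), Fraction(1, 3))
print(c)  # prints -1/3: the cos(theta*) = +-1/3 zero quoted in the text

# Consistency check: at this angle the W leg (charge Q1 + Q2, with
# p_W . k proportional to 2) satisfies the same condition.
assert Fraction(2, 3) / (1 - c) == Fraction(1, 3) / (1 + c) == (Fraction(2, 3) + Fraction(1, 3)) / 2
```

The mirror-image initial state ($d\bar u \to W^-\gamma$) gives $+1/3$ by the same formula, which is the origin of the $\pm$ sign.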
Figure~7 compares the $p_T^{}(\gamma)$ spectra for the $Z\gamma$ and $W\gamma$ processes at the Tevatron energy. This comparison shows that at high $p_T^{}(\gamma)$, the $W\gamma$ distribution falls more rapidly than the $Z\gamma$ distribution. This behavior is also due to the RAZ in the $W\gamma$ process. \section*{The $ZZ$, $W^+W^-$, and $WZ$ Processes} Attention now turns to the $ZZ$, $W^+ W^-$, and $WZ$ processes. The transverse momentum distributions for these processes are shown in Fig.~8. The figure shows that the NLO corrections increase with the $p_T^{}$ of the weak boson and are quite large at high values of $p_T^{}$. Also note that the NLO corrections increase in the order $ZZ$, $W^+W^-$, $WZ$. This behavior will be discussed later. Figure~9 again shows the $p_T^{}$ spectra of the weak bosons, but now the 0-jet and 1-jet exclusive components of the NLO inclusive cross section are also shown. (The 0-jet and 1-jet exclusive cross sections sum to the NLO inclusive cross section.) This decomposition shows that the bulk of the large corrections at high $p_T^{}$ are due to events containing a hard jet in the final state. The jet definition used here is $p_T^{}(jet) > 50$~GeV and $|\eta(jet)| < 3$. The large enhancements to the cross section at high $p_T^{}$ can be traced to collinear splittings in diagrams such as $q g \to Z q$ followed by $q \to q W$; the $Z$ and the quark are produced with high $p_T^{}$ and the quark subsequently radiates a nearly collinear $W$. In the collinear limit, the $q g \to WZq$ subprocess can be approximated by \cite{FRIXWZ} \begin{equation} d\sigma(qg \rightarrow WZq) \approx d\sigma(qg \rightarrow Zq) \, {g^2 \over 16 \pi^2} \, \log^2 \left( {p_T^2(Z) \over M_W^2} \right) \>. \end{equation} Figure~10 compares this collinear approximation to the full NLO calculation and shows that the approximation describes well the shape of the $p_T^{}$ distribution at high $p_T^{}$. 
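The logarithm in Eq.~(1) grows slowly, but its square becomes sizable at LHC transverse momenta. A small numerical sketch (Python) is given below; the electroweak inputs ($\alpha \approx 1/128$, $\sin^2\theta_W \approx 0.23$, $M_W = 80.4$~GeV, $g^2 = 4\pi\alpha/\sin^2\theta_W$) are assumptions made here for illustration, not values taken from the text:

```python
import math

# Size of the collinear enhancement factor in Eq. (1):
#   (g^2 / 16 pi^2) * log^2( p_T^2(Z) / M_W^2 ).
# The numerical inputs below are illustrative assumptions.

ALPHA = 1.0 / 128.0
SIN2_THETA_W = 0.23
M_W = 80.4  # GeV
G2 = 4.0 * math.pi * ALPHA / SIN2_THETA_W  # SU(2) coupling squared

def w_radiation_factor(pt_z):
    """Relative enhancement from collinear W radiation off the recoiling quark."""
    return G2 / (16.0 * math.pi ** 2) * math.log(pt_z ** 2 / M_W ** 2) ** 2

for pt in (200.0, 500.0, 1000.0):
    print(f"p_T(Z) = {pt:6.0f} GeV  ->  enhancement factor = {w_radiation_factor(pt):.3f}")
```

With these inputs the factor grows from below 1\% at $p_T(Z) = 200$~GeV to several percent at 1~TeV, which, relative to the steeply falling Born $WZ$ spectrum, is enough to dominate the high-$p_T^{}$ tail.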
The scale dependence of the total $WZ$ cross section is illustrated in Fig.~11. A common scale $Q$ has been used for both the renormalization scale $\mu$ and the factorization scale $M$. The Born and NLO inclusive cross sections are shown along with the 0-jet and 1-jet components of the NLO inclusive cross section. The 1-jet cross section is a LO quantity and thus has considerable scale dependence. The 0-jet cross section, on the other hand, is a NLO quantity and exhibits little scale dependence. The decomposition shows that the scale dependence of the NLO inclusive cross section is dominated by the scale dependence of the 1-jet component. Figure~12 compares the $p_T^{}$ spectra of the weak bosons for the $ZZ$, $W^+W^-$, and $WZ$ processes. The $ZZ$ and $W^+W^-$ distributions have the same shape at high $p_T^{}$ and are parallel to one another, whereas the $WZ$ distribution falls more rapidly. A similar behavior was observed earlier in Fig.~7 where the $Z\gamma$ and $W\gamma$ processes were compared. In the present case, the $WZ$ $p_T^{}$ spectrum falls faster than the $ZZ$ and $W^+ W^-$ spectra because of an approximate amplitude zero \cite{UJH} in the $WZ$ process. \subsection*{Approximate Amplitude Zero} The $q_1 \bar q_2 \to WZ$ subprocess is very similar to the $q_1 \bar q_2 \to W\gamma$ subprocess; in fact, they are described by the same set of Feynman diagrams, with $Z$ and $\gamma$ interchanged. Recall that the RAZ in the $W\gamma$ process gave rise to a large ${\cal O}(\alpha_s)$ correction. A difference between the two processes is that whereas the $W^{\pm}\gamma$ process has an exact amplitude zero at $\cos\theta^* = \pm {1\over 3}$, the $W^{\pm}Z$ process has only an approximate amplitude zero at $\cos\theta^* = \pm 0.1$. Basically, what happens in the $WZ$ case is that the dominant helicity amplitudes have an exact zero, while the other helicity amplitudes remain finite but small.
The approximate amplitude zero in the $WZ$ process causes the NLO corrections to be larger than they were in either the $ZZ$ or $W^+ W^-$ processes. The approximate amplitude zero suppresses the $WZ$ Born cross section and thus makes the NLO corrections appear large. A more in-depth discussion of approximate amplitude zeros can be found in the talk by T.~Han \cite{HAN}. \section*{Summary} The QCD radiative corrections to weak boson pair production at hadron colliders have been reviewed. The ${\cal O}(\alpha_s)$ cross sections for the diboson combinations $Z\gamma$, $W\gamma$, $ZZ$, $W^+ W^-$, and $WZ$ have been discussed and compared. Some general features of the ${\cal O}(\alpha_s)$ cross sections are summarized here. \begin{itemize} \item{} The NLO corrections increase with the center-of-mass energy. This is due to the opening of the $q g \to V_1 V_2 q$ subprocess at ${\cal O} (\alpha_s)$ in conjunction with the gluon density which increases with the center-of-mass energy. \item{} The NLO corrections are largest at high $p_T^{}(V)$. This is due to collinear splittings in the $q g \to V_1 V_2 q$ subprocesses which give rise to an enhancement factor $\log^2(p_T^2(V_1)/M_2^2)$. \item{} The bulk of the large corrections at high $p_T^{}(V)$ come from events which contain a hard jet in the final state. \item{} $p_T^{}$ distributions are most affected by the NLO corrections. These distributions tend to be enhanced at large values of $p_T^{}$. \item{} Invariant mass and angular distributions undergo relatively little change in shape at NLO; instead, these distributions tend to be scaled up uniformly. \item{} The NLO corrections to $W\gamma$ production are large due to a radiation amplitude zero. \item{} The NLO corrections to $WZ$ production are large due to an approximate amplitude zero. \item{} The NLO corrections are modest at the Tevatron center of mass energy but are significant at the LHC energy.
\end{itemize} \begin{figure} \psfig{file=fig1.ps,width=6.0in,clip= } \caption{Total cross section as a function of the center-of-mass energy for (a) $p p \to Z\gamma + X$ and (b) $pp \to W^+ \gamma + X$. The LO and NLO cross sections are shown.} \end{figure} \begin{figure} \psfig{file=fig2.ps,width=6.0in,clip= } \caption{(a) The $2 \to 2$ Born cross sections for $pp \to Z\gamma$ and $p p \to W^+ \gamma$. (b) The $2 \to 3$ cross sections for $Z\gamma$ and $W^+ \gamma$ production. The cross sections have been decomposed into contributions from $q \bar q$ and $qg$ initial states.} \end{figure} \begin{figure} \psfig{file=fig3.ps,width=6.0in,clip= } \caption{Same as Fig.~1, but now the NLO cross section has been decomposed into the Born cross section and the order $\alpha_s$ corrections from $q\bar q$ and $qg$ initial states.} \end{figure} \begin{figure} \psfig{file=fig4.ps,width=6.0in,clip= } \caption{Photon transverse momentum distributions at the LHC energy for (a) $p p \to Z \gamma + X \to e^- e^+ \gamma + X$ and (b) $p p \to W^+ \gamma + X \to e^+ \nu_e \gamma + X$.} \end{figure} \begin{figure} \psfig{file=fig5.ps,width=6.0in,clip= } \caption{Photon rapidity distributions in the diboson rest frame at the Tevatron energy for (a) $Z\gamma$ production and (b) $W^+ \gamma$ production.} \end{figure} \begin{figure} \psfig{file=fig6.ps,width=6.0in,clip= } \caption{Same as Fig.~5, but for the LHC energy.} \end{figure} \begin{figure} \psfig{file=fig7.ps,width=6.0in,clip= } \caption{Photon transverse momentum distributions for $Z\gamma$ and $W\gamma$ production at the Tevatron energy. Parts (a) and (b) are the LO and NLO cross sections, respectively.} \end{figure} \begin{figure} \psfig{file=fig8.ps,width=7.0in,clip= } \caption{Weak boson transverse momentum distributions for (a) $ZZ$, (b) $W^+W^-$, and (c) $W^+ Z$ production at the LHC energy.} \end{figure} \begin{figure} \psfig{file=fig9.ps,width=7.0in,clip= } \caption{Same as Fig.~8,
but now the 0-jet and 1-jet exclusive components of the NLO inclusive cross section are also shown.} \end{figure} \begin{figure} \centerline{\psfig{file=fig10.ps,height=4.00in,clip= }} \caption{The $p_T(Z)$ distribution for $pp \to W^+ Z + X$ at the LHC energy. The full NLO cross section is compared to the cross section obtained from the collinear approximation given in Eq.~(1).} \end{figure} \begin{figure} \psfig{file=fig11.ps,width=6.0in,clip= } \caption{Total cross section for $W^+Z$ production as a function of the scale $Q$ for (a) the Tevatron energy and (b) the LHC energy. The Born, NLO inclusive, 0-jet exclusive, and 1-jet exclusive cross sections are shown.} \end{figure} \begin{figure} \psfig{file=fig12.ps,width=6.0in,clip= } \caption{The weak boson transverse momentum distributions at LO for $ZZ$, $W^+ W^-$, and $W^- Z$ production. Parts (a) and (b) are for the Tevatron and LHC energies, respectively.} \end{figure}
\section{Introduction} \label{sec:introduction} The potential speedup of quantum algorithms is demonstrated by Shor's factoring algorithm, which is exponentially faster than any known classical algorithm \cite{Shor1994}. Several other quantum algorithms, which are more efficient than their classical counterparts, were introduced~\cite{Deutsch1992,Grover1996,Grover1997,Jozsa1997}. Factorization is of special interest due to its role in current methods of cryptography. Although the origin of the speed-up offered by quantum algorithms is not fully understood, there are indications that quantum entanglement plays a crucial role \cite{Jozsa2003,Vidal2003}. In particular, it was shown that quantum algorithms that do not create entanglement can be simulated efficiently on a classical computer \cite{Aharonov1996}. It is therefore of interest to quantify the entanglement produced by quantum algorithms and examine its correlation with their efficiency. This requires the development of entanglement measures for the quantum states of multiple qubits that appear in quantum algorithms. Recently, the Groverian measure of entanglement was introduced and used for the evaluation of entanglement in certain pure quantum states of multiple qubits \cite{Biham2002}. Using computer simulations of the evolution of quantum states during the operation of a quantum algorithm, one can obtain the time evolution of the entanglement. Such an analysis was performed for Grover's search algorithm with various initial states and different choices of the marked states \cite{Shimoni2004}. It was shown that Grover's iterations generate highly entangled states in intermediate stages of the quantum search process, even if the initial state and the target state are product states. In this paper we analyze the quantum states that are created during the operation of Shor's factoring algorithm. The entanglement in these states is evaluated using the Groverian measure.
It is found that the entanglement is generated during the pre-processing stage. When the quantum Fourier transform (QFT) is applied to the resulting states, their entanglement remains unchanged. This feature is unique to periodic quantum states, such as those that result from the pre-processing stage of Shor's algorithm. When other states, such as product states or random states, are fed into the QFT, their entanglement does change. Another interesting feature is that the entanglement is found to be correlated with the speedup achieved by the quantum factoring algorithm compared to classical algorithms. This means that the cases where no entanglement is created are those in which classical factoring is efficient. The paper is organized as follows. In Sec. \ref{sec:algorithm} we briefly review Shor's factoring algorithm, the QFT algorithm, and the quantum circuit used to perform it. In Sec. \ref{sec:groverian} we describe the Groverian entanglement measure and the numerical method by which it is calculated. In Sec. \ref{sec:ent} we use the Groverian measure to evaluate the entanglement created by Shor's algorithm. The results are discussed in Sec. \ref{sec:discussion} and summarized in Sec. \ref{sec:summary}. \section{Shor's Factoring Algorithm} \label{sec:algorithm} Shor's algorithm factorizes a given non-prime integer $N$, namely, it finds integers $p_1$ and $p_2$ such that their product $p_1 p_2 = N$. The algorithm consists of three parts: (a) Pre-processing stage, in which the quantum register is prepared using classical algorithms and quantum parallelism; (b) Quantum Fourier transform, which is applied on the output state of the previous stage; (c) Measurement of the register and post-processing using classical algorithms. \subsection{Pre-processing} Given an integer $N$ to be factorized, choose any integer $y<N$, and find the integer $q=2^L$ that satisfies \begin{equation} N^2 < q \leq 2 N^2.
\label{eq:<q<} \end{equation} \noindent Prepare a register of $L$ qubits (later referred to as the main register) in the equal superposition state \begin{equation} | \eta \rangle = \frac{1}{\sqrt{q}} \sum_{a=0}^{q-1} | a \rangle. \end{equation} \noindent Next, use quantum operations to calculate $y^a \ {\rm mod}\ N$ for all the indices, $a=0,\dots,q-1$, of the basis states above, and store the results in an auxiliary register, giving rise to the joint state \begin{equation} \frac{1}{\sqrt{q}} \sum_{a=0}^{q-1} | a \rangle |y^a \ {\rm mod}\ N \rangle. \end{equation} \noindent This essentially completes the pre-processing stage. However, in order to present the next stage of the algorithm more clearly, it is helpful to measure the auxiliary register in the computational basis. Suppose that the result of the measurement is a state $| z \rangle$, where $z = y^l \ ({\rm mod}\ N)$ and $l$ is the smallest positive integer that gives the value $z$. The order of $y$ modulo $N$ is defined as the smallest integer $r$ that satisfies $y^r = 1 \ ({\rm mod}\ N)$. The equality \begin{equation} y^{jr+l} = y^l \ \ ({\rm mod}\ N) \label{eq:repetition} \end{equation} \noindent is thus satisfied for any integer $j$. From Eq.~(\ref{eq:repetition}) it follows that the measurement will select from the main register all values of $a=l,l+r,l+2r,\ldots,l+Ar$, where $A$ is the largest integer that satisfies $l+Ar \leq q-1$. The state of the register after the measurement is therefore \begin{equation} | \phi_l \rangle = \frac{1}{\sqrt{A+1}} \sum_{j=0}^A |jr+l\rangle. \label{eq:phi_l} \end{equation} \subsection{Quantum Fourier Transform} \begin{figure} \includegraphics[width=8.5cm]{fig1} \caption{The circuit of the quantum Fourier transform (QFT) performed on a $4$-qubit register. The operator $A$ is the Hadamard gate. The operators $B_{1}$, $B_{2}$ and $B_{3}$ are the controlled-phase gates $B_{k,m}$, where $m-k=1$, 2 and 3, respectively.
} \label{fig:1} \end{figure} The quantum Fourier transform is given by \begin{equation} \sum_{a=0}^{q-1} f(a) |a\rangle \mapsto \sum_{c=0}^{q-1} \tilde{f}(c)|c\rangle, \label{eq:QFT1} \end{equation} \noindent where \begin{equation} \tilde{f}(c)=\frac{1}{\sqrt{q}} \sum_{a=0}^{q-1} \exp\left(\frac{2\pi iac}{q}\right) f(a). \end{equation} \noindent The quantum circuit of the QFT is shown in Fig. \ref{fig:1}. To obtain the transformation in Eq. (\ref{eq:QFT1}), the $L$ qubits of register $| a \rangle$ in the input (and throughout the quantum circuit) are indexed by $k=1,\dots,L$, from bottom to top. The output of the circuit is stored in register $| c \rangle$, whose qubits are indexed from top to bottom. We define the operator $A_k$ to be the Hadamard gate applied to qubit $k$, and the operator $B_{k,m}$ (where $m>k$) to be a controlled-phase operator, which applies a phase of $\theta_{k,m}=\pi/2^{m-k}$ only if both qubits $k$ and $m$ are $1$. We also define \begin{equation} F_k = A_k B_{k,k+1} B_{k,k+2} \dots B_{k,L}, \label{eq:F_j} \end{equation} \noindent for $k=1,\dots,L$, where we follow the standard notation for quantum operators, namely, those on the right-hand side operate first. With these definitions the sequence of quantum operations that performs the QFT is given by \begin{equation} {\rm QFT} = F_1 F_2 \dots F_L. \end{equation} \noindent The number of one-qubit and two-qubit gates required in the quantum circuit which performs the QFT is polynomial in the size of the register. In the simple case in which $r$ divides $q$ exactly, namely $A+1=q/r$, one obtains \begin{equation} {\rm QFT} |\phi_l\rangle= \frac{1}{\sqrt{r}}\sum_{j=0}^{r-1} \exp\left(\frac{2\pi ilj}{r}\right) \left|j\frac{q}{r}\right\rangle, \label{eq:QFT} \end{equation} \noindent where $|\phi_l \rangle$ is defined in Eq. (\ref{eq:phi_l}). The resulting state is a superposition of all basis states with indices that are integer multiples of $q/r$.
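For small $N$, the pre-processing state $|\phi_l\rangle$ and its QFT can be checked by direct classical simulation. The following sketch is illustrative only: the order $r$ is found by brute force and the Fourier sum is evaluated in $O(q^2)$ time, neither of which reflects the efficient quantum procedure. It reproduces the concentration of probability on multiples of $q/r$:

```python
import cmath

def order(y, N):
    # multiplicative order of y modulo N (assumes gcd(y, N) = 1)
    r, x = 1, y % N
    while x != 1:
        x = x * y % N
        r += 1
    return r

def shor_state_after_qft(N, y, l=0):
    # choose q = 2^L with N^2 < q <= 2N^2
    q = 1
    while q <= N * N:
        q *= 2
    r = order(y, N)
    # |phi_l>: equal superposition over a = l, l+r, l+2r, ... below q
    support = range(l, q, r)
    amp = 1.0 / len(support) ** 0.5
    # QFT amplitudes: (1/sqrt(q)) * sum_a f(a) exp(2*pi*i*a*c/q)
    probs = [abs(sum(amp * cmath.exp(2j * cmath.pi * a * c / q)
                     for a in support)) ** 2 / q
             for c in range(q)]
    return q, r, probs

q, r, probs = shor_state_after_qft(15, 7)   # here r = 4 and q = 256
# since r divides q in this example, all weight sits on multiples of q/r = 64
peaks = [c for c, p in enumerate(probs) if p > 1e-9]
```

Since $r$ divides $q$ here, the output distribution is exactly uniform over the $r$ multiples of $q/r$, as in Eq.~(\ref{eq:QFT}).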
If $r$ is not a divisor of $q$, namely, $q/r$ is not an integer, Eq.~(\ref{eq:QFT}) should be modified such that the large-amplitude states are those which correspond to integers adjacent to $j q/r$, $j=0,1,\dots,r-1$. Our choice of $q$ in Eq.~(\ref{eq:<q<}) ensures that with high probability the measurement will yield only states whose indices are the nearest integers to $j q/r$. \subsection{Measurement and Post-Processing} The third part of the algorithm starts with a measurement of the register. It yields an integer approximation, $c$, of one of the values $j q/r$, $j=0,1,\dots,r-1$. Thus, $cr$ is approximately an integer multiple of $q$. Here, again, our choice of $q$ in Eq.~(\ref{eq:<q<}) ensures that in most cases there exists another integer $c'$ which satisfies $|rc-c'q|\leq r/2$. As a result \begin{equation} \left|\frac{c}{q}-\frac{c'}{r}\right|\leq\frac{1}{2q}. \label{eq:approx} \end{equation} \noindent Using a continued fraction expansion of $c/q$ it is possible to efficiently find $c'$ and $r$. There is only one such approximation which satisfies Eq.~(\ref{eq:approx}) for $r<N$. Thus, the correct value of $r$ is obtained. If $r$ is even, we can define $x=y^{r/2}$, which satisfies \begin{equation} x^2-1=(x-1)(x+1) = 0 \ ({\rm mod}\ N). \label{eq:x^2-1} \end{equation} \noindent From Eq.~(\ref{eq:x^2-1}) we obtain that $x+1 \ ({\rm mod}\ N)$ and $x-1 \ ({\rm mod}\ N)$ are candidates for having a common divisor with $N$. Using Euclid's greatest common divisor (GCD) algorithm, this common divisor is found and the factoring process is completed. \section{The Groverian Measure of Entanglement} \label{sec:groverian} \subsection{Formal Definition} Consider a quantum algorithm, given by the unitary operator $U$, applied to the equal superposition state $| \eta \rangle$. For a certain class of quantum algorithms, the final, or target, state \begin{equation} | t \rangle = U | \eta \rangle, \label{eq:m=Ae} \end{equation} \noindent is a computational basis state.
This state stores the correct result of the calculation, which can be extracted by measurement. Not all quantum algorithms can be expressed in this form, because the final state, before the measurement is done, may be a superposition state. However, in the case of Grover's search algorithm with a single marked state, this description applies \cite{Biham2002}. Consider the case in which such an algorithm, $U$, is applied to an arbitrary pure state $| \psi \rangle$. The probability of success is defined as the probability that the measurement will still give the state $| t \rangle$. This probability is given by $P_s = |\langle t | \psi \rangle |^2$. The success probability can be used to evaluate the entanglement of the state $| \psi \rangle$. To this end, before the algorithm $U$ is applied, one applies a local unitary operator, $U_k$, on each qubit $k=1,2,\dots,L$. These operators are chosen such that the success probability of the algorithm will be maximized. The maximal success probability is \begin{equation} P_{\max}=\max_{U_1,\dots,U_L} \left|\langle t|UU_1\otimes\dots\otimes U_L| \psi \rangle\right|^2. \end{equation} \noindent Using Eq.~(\ref{eq:m=Ae}) the success probability $P_{\max}$ can be expressed by \begin{equation} P_{\max}=\max_{U_1,\dots,U_L} \left|\langle\eta |U_1\otimes\dots\otimes U_L| \psi\rangle\right|^2. \end{equation} \noindent This can be re-written as \begin{equation} P_{\max}=\max_{|e_1\rangle,\dots,|e_L\rangle} \left|\langle e_1 \otimes \dots \otimes e_L| \psi \rangle \right|^2, \label{eq:Pmax} \end{equation} \noindent where the $|e_k\rangle$'s are single-qubit states. Eq.~(\ref{eq:Pmax}) means that for a given initial state $| \psi \rangle$, the maximal success probability of such an algorithm, $U$, is equal to the maximal overlap of $| \psi \rangle$ with any product state. The Groverian measure of entanglement $G(\psi)$ is defined by \begin{equation} G(\psi) = \sqrt{1-P_{\max}}.
\end{equation} \noindent For the case of pure states, for which $G(\psi)$ is defined, it is closely related to an entanglement measure introduced in Refs. \cite{Vedral1997,Vedral1997a,Vedral1998} and was shown to be an entanglement monotone. The latter measure is defined for both pure and mixed states. It can be interpreted as the distance between the given state and the nearest separable state and expressed in terms of the fidelity of the two states. Based on these results, it was shown \cite{Biham2002} that $G(\psi)$ satisfies: (a) $G(\psi) \geq 0$, with equality only when $|\psi\rangle$ is a product state; (b) $G(\psi)$ cannot be increased using local operations and classical communication (LOCC). Therefore, $G(\psi)$ is an entanglement monotone for pure states. A related result was obtained in Ref. \cite{Miyake2001}, where it was shown that the evolution of the quantum state during the iteration of Grover's algorithm corresponds to the shortest path in Hilbert space using a suitable metric. \subsection{Numerical Evaluation} Consider a pure quantum state of $L$ qubits \begin{equation} |\psi\rangle=\sum_{j=0}^{2^L-1}a_j|j\rangle. \end{equation} \noindent In order to find $G(\psi)$ we form a convenient representation of the tensor product states used in Eq. (\ref{eq:Pmax}). The state of each qubit in the product state is given by \begin{equation} |e_k\rangle = e^{i\delta_k}\left[\cos\theta_k|0\rangle+ e^{i\gamma_k}\sin\theta_k|1\rangle\right]. \label{eq:e_j} \end{equation} \noindent Let us denote \begin{equation} b_j^{(k)}=\left\{ \begin{array}{ll} \cos\theta_k & {\rm if} \, j_k=0 \\ e^{i\gamma_k}\sin\theta_k & {\rm if} \, j_k=1, \end{array} \right. \end{equation} \noindent where $j_k$, $k=1,\dots,L$ is the $k$'th most significant bit in the binary representation of $j$. 
The overlap between $| \psi \rangle$ and the product state $| e_1 \otimes \dots \otimes e_L \rangle$ is given by $f(\psi,\theta_1,\dots,\theta_L,\gamma_1,\dots,\gamma_L) = \langle e_1 \otimes\dots\otimes e_L | \psi \rangle$. It can then be written as \begin{equation} f(\psi,\theta_1,\dots,\theta_L,\gamma_1,\dots,\gamma_L) = \sum_{j=0}^{2^L-1} b_j^{(1)} b_j^{(2)}\dots b_j^{(L)} a_j. \label{eq:foverlap} \end{equation} \noindent The phases $\delta_k$ only introduce a global phase which can be ignored. The Groverian entanglement measure for the state $| \psi \rangle$ is given by \begin{equation} P_{\rm max} = \max_{\theta_1,\dots,\theta_L,\gamma_1,\dots,\gamma_L} \left|f(\psi,\theta_1,\dots,\theta_L,\gamma_1,\dots,\gamma_L) \right|^2, \end{equation} \noindent namely, the dimension of the parameter space in which the maximization is performed is $2L$. However, the number of terms summed up in the calculation of $f$ increases exponentially with the number of qubits. Therefore, to make the calculation of $G(\psi)$ feasible one should minimize the number of evaluations of $f$. The commonly used steepest descent algorithm requires a large number of evaluations of $f$ and is thus computationally inefficient. Here we accelerate the calculation by performing the maximization analytically and separately for a single pair of $\theta_k$ and $\gamma_k$. During each maximization step, all the other parameters are held fixed. In the maximization we have a function of the form \begin{equation} f = c_k \cos \theta_k + d_k e^{i \gamma_k} \sin \theta_k, \end{equation} \noindent where $c_k = |c_k| e^{i \alpha_k}$ and $d_k = |d_k| e^{i \beta_k}$ depend on the other $2L-2$ parameters. The maximization of $|f|^2$ vs.
$\theta_k$ and $\gamma_k$ leads to \begin{equation} |f|^2 \rightarrow |c_k|^2 + |d_k|^2 \end{equation} \noindent where \begin{equation} \cos \theta_k \rightarrow \frac{|c_k|}{\sqrt{|c_k|^2 + |d_k|^2}} \end{equation} \noindent and \begin{equation} \gamma_k \rightarrow \alpha_k - \beta_k. \end{equation} \noindent Using this method, the number of evaluations of $f$ is significantly reduced. To find the global maximum, $P_{\rm max}$, and then $G(\psi)$, we perform several rounds of maximization over all the $2L$ parameters. Trying different initial conditions, we find that the convergence to the global maximum is fast and no other local maxima are detected. \section{Entanglement During Shor's Algorithm} \label{sec:ent} Shor's factoring algorithm includes a pre-processing stage followed by the QFT. Here we analyze the quantum states generated in each of these stages and evaluate their entanglement using the Groverian measure. \subsection{Entanglement Generated by the QFT Procedure} Here we evaluate the time evolution of the Groverian entanglement during the QFT process, shown in Fig. \ref{fig:1}. The Groverian measure is evaluated after each operation of the $B_{k,m}$ operator. The $A_k$ operators are local and do not change the entanglement. We first perform this analysis for general quantum states and then focus on the specific quantum states that appear in the factoring algorithm. \subsubsection{QFT Applied on General Quantum States} To examine the effect of the QFT on the Groverian entanglement we construct an ensemble of random product states as well as random states of $L$ qubits. The state of each qubit in the random product states is described by Eq. (\ref{eq:e_j}) where $0 \le \theta_k < \pi$ and $0 \le \gamma_k < 2 \pi$ are chosen randomly. The random states are drawn from an isotropic distribution in the $2^L$-dimensional Hilbert space \cite{Shimoni2004}. These states turn out to be highly entangled.
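The single-pair maximization scheme of the previous section can be sketched in a few lines (a minimal, unoptimized illustration; the function and parameter names are ours, and qubit $k=0$ here corresponds to the most significant bit):

```python
import cmath
import math
import random

def groverian(a, L, rounds=60, seed=0):
    """Coordinate-ascent estimate of G(psi) = sqrt(1 - P_max) for a pure
    state given by the 2**L amplitudes a; qubit k = 0 is the most
    significant bit of the index j."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, math.pi) for _ in range(L)]
    gamma = [rng.uniform(0, 2 * math.pi) for _ in range(L)]
    pmax = 0.0
    for _ in range(rounds):
        for k in range(L):
            # write f = c*cos(theta_k) + d*exp(i*gamma_k)*sin(theta_k),
            # with c, d collecting the terms where bit k is 0 or 1
            c = d = 0j
            for j, aj in enumerate(a):
                w = aj
                for m in range(L):
                    if m == k:
                        continue
                    bit = (j >> (L - 1 - m)) & 1
                    w *= math.cos(theta[m]) if bit == 0 else \
                         cmath.exp(1j * gamma[m]) * math.sin(theta[m])
                if (j >> (L - 1 - k)) & 1:
                    d += w
                else:
                    c += w
            # analytic single-pair maximum: |f|^2 -> |c|^2 + |d|^2
            pmax = abs(c) ** 2 + abs(d) ** 2
            theta[k] = math.atan2(abs(d), abs(c))
            gamma[k] = cmath.phase(c) - cmath.phase(d)
    return math.sqrt(max(0.0, 1.0 - pmax))
```

For a product state such as $|00\rangle$ this returns essentially zero, while for a Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$ it converges to $\sqrt{1-1/2}$.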
\begin{figure} \includegraphics[width=8.5cm]{fig2} \caption{The Groverian measure of entanglement for states created during the operation of the QFT on three randomly chosen tensor product states (dashed, dotted and dashed-dotted) as well as on a single random state (solid line). All the states are of nine qubits.} \label{fig:2} \end{figure} In Fig. \ref{fig:2} we present the time evolution of the Groverian measure during the processing of QFT on three random product states as well as on a random state of nine qubits. For the random product states one observes that during most time steps the entanglement remains unchanged. Most of the variation takes place at specific times, common to all the different states. Clearly, the entanglement is generated by the controlled phase operators $B_{k,m}$. The large variations in $G(\psi)$ are found to take place when $|m-k|$ is small, namely when $B_{k,m}$ is applied on pairs of adjacent qubits. The Groverian measure during the operation of QFT on a highly entangled random state is also shown in Fig. \ref{fig:2}. It exhibits only small variations with no obvious regularity. \subsubsection{QFT Within Shor's Factoring Algorithm} \begin{figure} \includegraphics[width=8.5cm]{fig3} \caption{The Groverian measure of entanglement for states created during the QFT stage of Shor's factoring algorithm. The solid line shows the factorization of $N=91$ using $y=41$. The dotted line (with zero entanglement) shows the factorization of $N=33$ using $y=23$. The dashed line shows the factorization of $N=33$ using $y=4$.} \label{fig:3} \end{figure} In Fig. \ref{fig:3} we present the time evolution of the Groverian measure during QFT, when it is applied on states obtained from the pre-processing stage of Shor's factoring algorithm. The different lines correspond to the factorization process of different numbers. 
Surprisingly, for all numbers that we have tested, the entanglement was essentially unchanged throughout the process, as implied by the horizontal lines. This is in contrast to the behavior observed when the QFT is applied to general quantum states. A special property of the states generated by the pre-processing is that they are periodic. This motivated us to examine the time evolution of the Groverian measure during the QFT of general periodic states. The state $\sum_m |l+mr\rangle$ (up to a normalization factor) is a periodic state of $L$ qubits, with period $r$ and shift $l$. The summation is over all integers $m$ such that $0 \leq l+mr \leq q-1$, where $q=2^L$. It was found that the Groverian measure essentially does not change during the QFT process of such states, and that the changes which do occur vanish exponentially with the number of qubits. The value of the Groverian measure for these states depends almost solely on the odd part of the period $r$. More precisely, for a periodic state with period $r=2^M d$ (where $d$ is odd), we obtain $P_{\max} \simeq 1/d$. This is easy to explain for states with a period $r=2^M$, which are known to be tensor product states. For these states $d=1$, thus the correct result of $P_{\max}=1$ is obtained. For general periodic states we do not have an analytical derivation of the expression for $P_{\max}$. \subsection{Entanglement in the Pre-processing Stage} Having found that the QFT stage of Shor's algorithm does not alter the entanglement of states created by the pre-processing stage, it is clear that all the entanglement is produced during pre-processing. We have evaluated the entanglement generated during the factoring process of all the integers in the range $3 \le N \le 200$. To factorize an integer, $N$, one has to choose another integer $1 < y < N-1$. In our analysis, we examined all possible choices within this range, and for each of them we applied the pre-processing stage as described in Sec. \ref{sec:algorithm}.
At the end of the pre-processing stage we evaluated the Groverian measure of the resulting state of the main register, following a measurement of the auxiliary register. In Fig. \ref{fig:4} we present the Groverian measure for the states obtained after pre-processing vs. $N$ for $3 \le N \le 200$. Each dot represents the Groverian measure after pre-processing for the integer $N$ and for a specific choice of $1 < y < N-1$. The solid line represents the function $\sqrt{1-1/(2N)}$. We observe that all the dots are below this line, which is analogous to the general upper bound of the Groverian measure, namely that for any state $|\psi\rangle$ of $L$ qubits $G(\psi) \leq \sqrt{1-1/2^L}$. \begin{figure} \includegraphics[width=8.5cm]{fig4} \caption{The Groverian measure of entanglement for the states created by the pre-processing stage of Shor's algorithm. Each dot corresponds to a single choice of $2<N\leq200$ and $1<y<N-1$.} \label{fig:4} \end{figure} Additionally, there are many values of $N$ and choices of $y$ for which the Groverian measure is $G=0$, namely the factoring process does not involve any entanglement. For these particular choices, it should thus be possible to perform the factoring of $N$ efficiently using a classical algorithm \cite{Aharonov1996}. We find that for some of the pairs of $N$ and $y$ which produce no entanglement, GCD$(N,y) \ne 1$, thus a divisor of $N$ can be easily found classically. The rest of these pairs are found to satisfy $y^{2^n} = 1 \ {\rm mod}\ N$ for some integer $n$, which means that GCD$(y^{2^{n-1}} + 1 ,N)$ or GCD$(y^{2^{n-1}} - 1 ,N)$ are divisors of $N$, which can be easily found by classical algorithms. We thus find that in cases in which no entanglement is produced by the quantum algorithm, it offers no speedup compared to classical algorithms. This is consistent with the assumption that the entanglement generated by a quantum algorithm is correlated with the speedup it provides.
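This classical shortcut is easy to make concrete for small $N$ (an illustrative helper with names of our choosing; the order $r$ is found here by brute force, which is of course not efficient in general):

```python
from math import gcd

def order(y, N):
    # multiplicative order of y modulo N (assumes gcd(y, N) = 1)
    r, x = 1, y % N
    while x != 1:
        x = x * y % N
        r += 1
    return r

def easy_factor(N, y):
    """Return a nontrivial factor of N for the zero-entanglement (N, y)
    pairs described above, and None otherwise."""
    g = gcd(N, y)
    if g != 1:
        return g                       # GCD(N, y) != 1: a divisor of N directly
    r = order(y, N)
    if r > 1 and r & (r - 1) == 0:     # r is a power of two: y^(2^n) = 1 mod N
        x = pow(y, r // 2, N)          # candidate x with x^2 = 1 mod N
        for cand in (gcd(x - 1, N), gcd(x + 1, N)):
            if 1 < cand < N:
                return cand
    return None                        # an entangling pair: no shortcut here

# N = 33 with y = 23 produces no entanglement (Fig. 3): r = 2, a power of two,
# while N = 33 with y = 4 is an entangling case: r = 5, whose odd part is 5
```

For example, `easy_factor(33, 23)` recovers a nontrivial divisor of $33$, while `easy_factor(33, 4)` returns `None`, matching the entangling case shown in Fig. \ref{fig:3}.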
\section{Discussion} \label{sec:discussion} It is found that the states prepared by the pre-processing stage of Shor's algorithm, like all periodic states, exhibit the property that their Groverian entanglement does not change throughout the QFT stage. One may take the view that the Groverian entanglement somehow represents the amount of quantum information present in a quantum state. This is rather like the von Neumann entropy. Taking this view, our result may seem natural because the information needed to perform the factoring is already present after the pre-processing stage. The QFT only rearranges the information such that it can be extracted by measurement. It is found that the Groverian measure of the states generated by Shor's algorithm is lower than that of random states, which are almost maximally entangled, with $G(\psi) \simeq \sqrt{1- 1/q}$ \cite{Biham2003,Shimoni2004}. Yet, the maximal entanglement created by the algorithm exhibits the same functional behavior, where $q$ is replaced by $2N$. Considering the fact that Shor's algorithm is exponentially faster than its known classical counterparts, it is expected to use all the entanglement available. Thus, our result provides further indication that classical algorithms are unlikely to perform factoring in polynomial time. Unlike Shor's algorithm, Grover's search algorithm is only polynomially more efficient than its classical counterparts \cite{Grover1996,Grover1997}. Grover's algorithm also creates entanglement, which is bounded by a constant lower than unity \cite{Biham2003}. A different approach to the analysis of the entanglement generated by Shor's factoring algorithm was presented in Ref. \cite{Kendon2005}, where the bi-partite entanglement between the main register and the auxiliary register was evaluated during both the pre-processing and QFT stages, using the negativity \cite{Peres1996,Karol1998} as an entanglement measure.
It was found that the entanglement is primarily generated during the pre-processing stage, in agreement with our results. \section{Summary} \label{sec:summary} The quantum states created during the operation of Shor's factoring algorithm were analyzed, and the entanglement in these states was evaluated using the Groverian measure. It was found that the entanglement is generated during the pre-processing stage and remains unchanged during the QFT stage. It was shown that the latter feature is unique to periodic states, such as those obtained from the pre-processing stage, while the QFT does affect the entanglement of general quantum states. Another interesting feature is that the entanglement is found to be correlated with the speedup achieved by the quantum algorithm compared to classical algorithms. This means that the cases where no entanglement is created are those in which classical factoring is efficient.
\section{Introduction} \label{sec:introduction} \input{sections/introduction} \section{Background} \label{sec:background} \input{sections/background} \section{Multiscale PHATE} \label{sec:algorithm} \input{sections/algorithm} \section{Results} \label{sec:results} \subsection{Example visualization} \label{subsec:simple} \input{sections/simple} \subsection{Comparison to other visualization methods} \label{subsec:comparison} \input{sections/comparison} \subsection{Continual learning} \label{subsec:continual-learning} \input{sections/continual_learning} \subsection{Generalization} \label{subsec:generalization} \input{sections/generalization} \section{Conclusion} \label{sec:conclusion} \input{sections/conclusion} \bibliographystyle{unsrtnat} \subsection{Dimensionality reduction for visualization} Diffusion maps (DMs)~\citep{coifman2006diffusion} is an important nonlinear dimensionality reduction method that has been used to extract complex relationships between high-dimensional data~\citep{He2009,Farbman:2010,Talmon2012,Mishne2013,coifman:diffusion_changing_data,Mishne2016,Banisch2017}. PHATE~\citep{moon:PHATE} aims to optimize diffusion maps for data visualization. We briefly review the two approaches. Given a high-dimensional dataset $\{x_i\}$, DMs operate on a pairwise similarity matrix $ \matr W$ (e.g., computed via a Gaussian kernel $\matr W(x_i,x_j)=\exp\{-\|x_i-x_j\|^2/\epsilon\}$), and return an embedding of the data in a low-dimensional Euclidean space. To compute this embedding, the rows of $ \matr W$ are normalized by $ \matr P= \matr D^{-1} \matr W$, where $\matr D_{ii} = \sum_j \matr W_{ij}$. The resulting matrix $ \matr P$ can be interpreted as the transition matrix of a Markov chain over the dataset, and powers of the matrix, $ \matr P^t$, represent running the Markov chain forward $t$ steps.
The matrix $ \matr P$ thus has a complete sequence of bi-orthogonal left and right eigenvectors $ \phi_i$, $ \psi_i$, respectively, and a corresponding sequence of eigenvalues $1=\lambda_0\geq|\lambda_1|\geq|\lambda_2|\geq\ldots$. Due to the fast decay of the spectrum $\{\lambda_l\}$, we can obtain a low-dimensional representation of the data using only the top $\ell$ eigenvectors. The diffusion map, defined as $ \Psi_t(x)=(\lambda _{1}^{t} \psi _{1}(x),\lambda _{2}^{t} \psi _{2}(x),\ldots ,\lambda _{\ell}^{t} \psi _{\ell}(x))$, embeds the data points into a Euclidean space $\mathbb{R}^\ell$ where the Euclidean distance approximates the diffusion distance: $$\matr D^2_t(x_i, x_j)=\sum_{x_k}{\frac{(p_t(x_i,x_k)-p_t(x_j,x_k))^2}{\phi_0(x_k)}} \approx \Vert \Psi_t(x_i) - \Psi_t(x_j)\Vert^2_2.$$ Note that $ \psi_0$ is neglected because it is a constant vector. To enable successful data visualization, a method must reduce the dimensionality to two or three dimensions; diffusion maps, however, reduces only to the intrinsic dimensionality of the data, which may be much higher. Thus, to calculate a 2D or 3D representation of the data, PHATE applies MDS~\citep{cox:MDS} to the \textit{informational distance} between rows $i$ and $j$ of the diffusion kernel $ \matr P^t$, defined as $$\matr \Phi_t(i,j) = \Vert \log \matr P^t(i) - \log \matr P^t(j) \Vert_2,$$ where $t$ is selected automatically as the knee point of the von Neumann entropy of the diffusion operator. For further details, see~\citet{moon:PHATE}. \subsection{Related work} We consider the evolving state of a neural network's hidden units as a dynamical system which can be represented as a \textit{multislice graph} on which we construct a pairwise affinity kernel. Such a kernel considers both similarities between hidden units in the same epoch or time-slice (denoted \textit{intraslice} similarities) and similarities of a hidden unit to itself across different time-slices (denoted \textit{interslice} similarities).
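Concretely, such a multislice kernel can be assembled as a block matrix, with intraslice affinities on the block diagonal and unit-to-itself interslice affinities on the diagonals of the off-diagonal blocks. The following is a minimal fixed-bandwidth Gaussian sketch (the function name and bandwidth parameters are illustrative; the kernel actually used in M-PHATE is defined in Section~\ref{sec:algorithm}):

```python
import numpy as np

def multislice_kernel(T, eps_intra=1.0, eps_inter=1.0):
    """Assemble a multislice affinity kernel from T, an array of shape
    (n_slices, n_units, n_features): Gaussian intraslice kernels on the
    block diagonal, and affinities of each unit to itself across slices
    on the diagonals of the off-diagonal blocks."""
    s, n, _ = T.shape
    K = np.zeros((s * n, s * n))
    for tau in range(s):
        for ups in range(s):
            # squared distances between units in slices tau and ups
            sq = ((T[tau][:, None, :] - T[ups][None, :, :]) ** 2).sum(-1)
            block = np.s_[tau * n:(tau + 1) * n, ups * n:(ups + 1) * n]
            if tau == ups:
                K[block] = np.exp(-sq / eps_intra)  # intraslice block
            else:
                # interslice: only unit-i-to-itself connections survive
                K[block] = np.diag(np.exp(-np.diag(sq) / eps_inter))
    return K

rng = np.random.RandomState(0)
T = rng.randn(3, 4, 2)        # 3 slices of 4 hidden units in 2 features
K = multislice_kernel(T)      # symmetric (3*4) x (3*4) affinity matrix
```

Unlike the fixed interslice couplings of \citet{mucha:multiscale_community}, the off-diagonal entries here depend on how much each unit has changed between slices.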
The concept of constructing a graph for data changing over time is motivated by prior work both in harmonic analysis~\citep{coifman:diffusion_changing_data,lindenbaum2015multiview,lederman2018learning,marshall2018time,Banisch2017} and network science~\citep{mucha:multiscale_community}. For example, \citet{coifman:diffusion_changing_data} suggest an algorithm for jointly analyzing DMs built over data points that are changing over time by aligning the separately constructed DMs, while \citet{mucha:multiscale_community} suggest an algorithm for community detection in multislice networks by connecting each node in one network slice to itself in other slices, with identical \emph{fixed weights} for all interslice connections. In both cases, such techniques are designed to detect changes in intraslice dynamics over time, yet interslice dynamics are not incorporated into the model. \section{Multislice graph construction} \label{sec:algorithm_apx} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/multisliceGraph.pdf} \caption{Example schematic of the multislice graph used in M-PHATE. The intra- and interslice kernels represent the similarities between the graph nodes at different time-points, providing PHATE with a time-aware distance with which to visualize the data.} \label{fig:graph_schematic} \end{figure} In Section~\ref{sec:algorithm}, we describe a multislice affinity kernel $K$ built from an \textit{intraslice} kernel, which connects hidden units in the same epoch, and an \textit{interslice} kernel, which connects each hidden unit to itself at different epochs. We further clarify the intuition behind such an affinity kernel in two schematics. Figure~\ref{fig:graph_schematic} displays a graph of 10 hidden units in a dynamically changing graph structure over the course of four time slices. Each hidden unit's local neighborhood within its own time slice (its intraslice affinities) changes as the system evolves, with connectivity shown as black lines.
Additionally, each hidden unit is connected to itself across different epochs, with the strength of these interslice connections (shown as dotted lines) also dependent on similarities (rather than simply a fixed-weight connection). Figure~\ref{fig:kernel_schematic} displays the top left corner of an example of a multislice affinity kernel. The full multislice kernel ($\matr K((\tau, i), (\upsilon, j))$, left) is composed of the intraslice kernels placed down the block diagonal ($\matr K_\text{intraslice}^{(1)}(i,j), \ldots, \matr K_\text{intraslice}^{(\tau)}(i,j)$, middle) and the interslice kernels forming the diagonals of each off-diagonal block ($K_\text{interslice}^{(1)}(\tau,\upsilon), \ldots, K_\text{interslice}^{(i)}(\tau,\upsilon)$, right). \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/kernel.pdf} \caption{Example schematic of the multislice kernel used in M-PHATE. This kernel is a sum of intraslice and interslice affinities.} \label{fig:kernel_schematic} \end{figure} \section{Selection of representative subset \texorpdfstring{$Y$}{\textit{Y}}} \label{sec:train-data} In Section~\ref{sec:algorithm}, we state that the representative subset $Y$ is taken from points not used in training. However, there is no reason why this should be the case. To demonstrate that M-PHATE can be used successfully without accessing data external to the training set, we show in Figure~\ref{fig:generalization-train-data} a repetition of the generalization experiment, using only training data to build the visualization. Using the same quantification of variance and memorization as in Section~\ref{subsec:generalization}, we obtain an equally strong correlation (Spearman's $\rho = -0.95$, Table~\ref{tab:generalization-train-data}).
Further, we note that the visualizations are qualitatively very similar to those obtained using validation data, indicating that M-PHATE can be used to understand the generalization performance of a network without having access to an external validation set. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/generalization_train_data.png} \caption{Visualization of a 3-layer MLP trained to classify MNIST with different regularizations or manipulations applied to affect generalization performance, where the visualization is built using only training data.} \label{fig:generalization-train-data} \end{figure} \section{Parameters for visualization methods comparison} \label{sec:comparison_apx} In Section~\ref{subsec:comparison}, we compare M-PHATE to Diffusion Maps, t-SNE and Isomap in both a standard and multiscale context. Since t-SNE and Isomap require distance matrices, not affinity matrices, we convert the multislice kernel to geodesic distances by computing shortest paths over the graph with the distance $D = -\log K'$. For standard application of Isomap and t-SNE, we use the default parameters in \texttt{sklearn}~\citep{sklearn}. Since diffusion maps can be applied to any symmetric non-negative affinity kernel and does not have a reference implementation, we apply diffusion maps to the adaptive bandwidth kernel built in PHATE. \section{Continual Learning} \label{sec:continual-learning_apx} \subsection*{Continual Learning Schemes} \begin{table} \caption{Summed variance per epoch of the PHATE visualization is associated with the difference between a network that is memorizing and a network that is generalizing, where the visualization is built using only training data.
Memorization error refers to the difference between train loss and validation loss.} \label{tab:generalization-train-data} \centering \small \begin{tabular}{lrrrrrrrr} \toprule & & \multicolumn{2}{c}{Kernel} & & \multicolumn{2}{c}{Activity} & \multicolumn{2}{c}{Random} \\ \cmidrule(lr){3-4} \cmidrule(lr){6-7} \cmidrule(lr){8-9} {} & Dropout & L1 & L2 & Vanilla & L1 & L2 & Labels & Pixels \\ \midrule Memorization & -0.09 & 0.02 & 0.04 & 0.05 & 0.10 & 0.12 & 0.13 & 0.53 \\ Variance & 59 & 77 & 35 & 28 & 0.66 & 0.34 & 0.37 & 0.03 \\ \bottomrule \end{tabular} \end{table} \citet{hsu:continual-learning-baselines} describe three schemes of continual learning commonly used in the literature. Incremental \textit{task} learning describes the process of learning shared hidden units for separated output layers for each task; the output units for task $i$ are therefore protected from gradient signals during the training of task $j \ne i$. This is akin to the standard model of transfer learning, in which all but the final layer of a network are copied for a new task, with a fresh output layer attached for the new task. Incremental \textit{domain} learning describes the process of learning an entirely shared network which learns to perform all tasks separately, but with the same units; in this case the output units for task $i$ are the same units that are used in task $j$ and must learn to correctly classify training examples from separate tasks as though they were the same class. Incremental \textit{class} learning describes the process of learning an entirely shared network which learns to perform all tasks at once, with no knowledge of which task is currently being performed. The network contains separate output units for each task, but must select which output units to use, in contrast to incremental task learning in which the task is specified. 
This is by far the most difficult setting, since in training any one task, the optimal solution is to never predict the output classes of any other task; this strongly encourages catastrophic forgetting. Figure~\ref{fig:continual_learning_schema} demonstrates these three architectures on Split MNIST. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/task_switch_schema.png} \caption{Architectures for incremental learning scenarios. Reproduced with permission from~\citet{hsu:continual-learning-baselines}.} \label{fig:continual_learning_schema} \end{figure} \subsection*{Network Parameters} The networks in Section~\ref{subsec:continual-learning} are trained as follows. Input data is scaled from 0 to 1. All networks consist of an MLP with 2 layers of 400 units with ReLU activation, and a softmax classification output layer. All networks are trained with a batch size of 128, split into batches of 64 new samples and 64 rehearsal samples in the case of Naive Rehearsal. For the Adam optimizer, we use a learning rate of $10^{-5}$. For the Adagrad optimizer, we use a learning rate of $10^{-4}$. For Naive Rehearsal, we use the Adam optimizer. All networks are built and trained in Keras using a Tensorflow backend. \subsection*{Results} Figure~\ref{fig:continual_learning} shows the visualizations of the continual learning networks for a subset of 100 hidden units from each layer of the MLP with 2 layers of 400 units. Figures~\ref{fig:continual_learning_layer1} and~\ref{fig:continual_learning_layer2} show the full embedding of layers 1 and 2 respectively. In all cases, the visualizations are computed on all hidden units and subsampled for plotting purposes only. We note the striking difference between layer 1 and layer 2 in all visualizations. In every case, there is less ``structural collapse'' (see Section~\ref{subsec:generalization}) in layer 2 than in layer 1.
Also, the vertical patterning in layer 2 is perfectly associated with time-slice; that is, in each task (composed of 16 time-slices), the majority of change in hidden representations in layer 2 occurs within the first two or three time slices. On the other hand, layer 1 continues to change throughout the task. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{figs/task_switch_layer1.png} \caption{Visualization of layer 1 of a 2 layer MLP trained on Split MNIST for five-task continual learning of binary classification. Accuracy is reported on a test set consisting of an even number of samples from all tasks.} \label{fig:continual_learning_layer1} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{figs/task_switch_layer2.png} \caption{Visualization of layer 2 of a 2 layer MLP trained on Split MNIST for five-task continual learning of binary classification. Accuracy is reported on a test set consisting of an even number of samples from all tasks.} \label{fig:continual_learning_layer2} \end{figure} \section{Generalization} \label{sec:generalization_apx} \subsection*{Network Parameters} The networks in Section~\ref{subsec:generalization} are trained as follows. Input data is scaled from 0 to 1. All networks consist of an MLP with 3 layers of 128 units with Leaky ReLU activation with $\alpha=0.1$, and a softmax classification output layer. All networks are trained with a batch size of 256 with the Adam optimizer and a learning rate of $10^{-5}$. All regularizations are applied with a weight of $10^{-4}$. Dropout is applied with $p=0.5$. For the scrambled network, we randomly permute the output labels of the training data, leaving the validation data intact. All networks are built and trained in Keras~\citep{chollet:keras} using a Tensorflow~\citep{abadi:tensorflow} backend. \section{M-PHATE parameters} \label{sec:parameters_apx} All multislice graphs are built with $k=2$, $\alpha=5$ and $\kappa=25$.
We apply PHATE to the multislice affinity matrix with PHATE parameters $\gamma=0$ and $n\_landmark=3000$, and use the value of $t$ automatically selected by the PHATE algorithm. \section{Computing infrastructure} \label{sec:compute} All computation was done on a single 36-core workstation running Arch Linux with an NVIDIA TITAN X graphics card and 1TB of RAM. \subsection{Preliminaries} Let $F$ be a neural network with a total of $m$ hidden units applied to $d$-dimensional input data. Let $F_i : \mathbb{R}^d \to \mathbb{R}$ be the activation of the $i$th hidden unit of $F$, and $F^{(\tau)}$ be the representation of the network after being trained for $\tau \in \{1, \ldots, n\}$ epochs on training data $X$ sampled from a dataset $\mathcal{X}$. A natural feature space for the hidden units of $F$ is the activations of the units with respect to the input data. Let $Y \subset \mathcal{X}$ be a representative sample of $p \ll |X|$ points. (In this paper, we use points not used in training; however, this is not necessary. Further discussion of this is given in Section~\ref{sec:train-data}.) Let $Y_k$ be the $k$th sample in $Y$. We use the hidden unit activations $F(Y)$ to compute a shared feature space of dimension $p$ for the hidden units. We can then calculate similarities between units from all layers. Note that one may instead consider the hidden units' learned parameters (e.g. weight matrices and bias terms); however, these are not suitable for our purposes as they are not necessarily the same shape between hidden layers, and additionally the parameters may contain information not relevant to the data (for example, in dimensions of $\mathcal{X}$ containing no relevant information). We denote the \textit{time trace} $\matr T$ of the network as an $n \times m \times p$ tensor containing the activations at each epoch $\tau$ of each hidden unit $F_i$ with respect to each sample $Y_k \in Y$.
We note that in practice, the major driver of variation in $\matr T$ is the bias term contributing a fixed value to the activation of each hidden unit. Further, we note that the absolute values of the differences in activation of a hidden unit are not strictly meaningful, since any differences in activation can simply be magnified by a larger kernel weight in the following layer. Therefore, to calculate more meaningful similarities, we first $z$-score the activations of each hidden unit at each epoch $\tau$: $$\matr T(\tau, i, k) = \frac{F_i^{(\tau)}(Y_k) - \frac{1}{p} \sum_\ell F_i^{(\tau)}(Y_\ell)}{\sqrt{\Var_\ell F_i^{(\tau)}(Y_\ell)}}.$$ \subsection{Multislice Kernel} The time trace gives us a natural substrate from which to construct a visualization of the network's evolution. We construct a kernel over $\matr T$ utilizing our prior knowledge of the temporal aspect of $\matr T$ to capture its dynamics. Let $\matr K$ be an $nm \times nm$ kernel matrix between all hidden units at all epochs (the $(\tau m + j)$th row or column of $\matr K$ refers to the $j$th unit at epoch $\tau$). We henceforth refer to the $(\tau m+j)$th row of $\matr K$ as $\matr K((\tau,j),:)$ and the $(\tau m+j)$th column of $\matr K$ as $\matr K(:,(\tau,j))$.
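For concreteness, the per-unit, per-epoch $z$-scoring of the time trace can be sketched in NumPy as follows; the guard against zero-variance (dead) units is our addition and not part of the paper.

```python
import numpy as np

def zscore_time_trace(T):
    """Z-score T[tau, i, :], the activations of hidden unit i at epoch tau
    over the p representative samples, so that each (epoch, unit) row has
    mean 0 and (population) standard deviation 1."""
    mean = T.mean(axis=2, keepdims=True)
    std = T.std(axis=2, keepdims=True)
    return (T - mean) / np.where(std > 0, std, 1.0)  # guard for dead units
```

After this normalization, distances between rows of $\matr T$ compare activation patterns rather than activation magnitudes.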
To capture both the evolution of a hidden unit throughout training as well as its community structure with respect to other hidden units, we construct a multislice kernel matrix which reflects both affinities between hidden units $i$ and $j$ in the same epoch $\tau$, or intraslice affinities $$\matr K^{(\tau)}_\text{intraslice}(i,j) = \exp \left( -{\Vert \matr T(\tau,i) - \matr T(\tau,j) \Vert^\alpha_2/\sigma_{(\tau,i)}^\alpha} \right)$$ as well as affinities between a hidden unit $i$ and itself at different epochs, or interslice affinities $$\matr K^{(i)}_\text{interslice}(\tau,\upsilon) = \exp \left( -{\Vert \matr T(\tau,i) - \matr T(\upsilon,i) \Vert^2_2/\epsilon^2} \right)$$ where $\sigma_{(\tau,i)}$ is the intraslice bandwidth for unit $i$ at epoch $\tau$, $\epsilon$ is the fixed interslice bandwidth, and $\alpha$ is the adaptive bandwidth decay parameter. In order to maintain connectivity while increasing robustness to parameter selection for the intraslice affinities $\matr K^{(\tau)}_\text{intraslice}$, we use an adaptive-bandwidth Gaussian kernel (termed the \textit{alpha-decay kernel}~\citep{moon:PHATE}), with bandwidth $\sigma_{(\tau,i)}$ set to be the distance of unit $i$ at epoch $\tau$ to its $k$th nearest neighbor across units at that epoch: $\sigma_{(\tau,i)} = d_k(\matr T(\tau, i), \matr T(\tau, :)),$ where $d_k(x,X)$ denotes the $L_2$ distance from $x$ to its $k$th nearest neighbor in $X$. Note that the use of the adaptive bandwidth means that the kernel is not symmetric and will require symmetrization.
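A minimal NumPy sketch of the intraslice alpha-decay kernel for a single epoch slice follows; pairwise distances are computed in pure NumPy, and handling of duplicate units (whose adaptive bandwidth would be zero) is omitted.

```python
import numpy as np

def intraslice_kernel(T_tau, k=2, alpha=5):
    """Alpha-decay kernel between the m units of one epoch slice.

    T_tau : (m, p) z-scored activations of the m units at epoch tau.
    Returns the (asymmetric) m x m affinity matrix; row i uses the adaptive
    bandwidth sigma_(tau, i), the distance from unit i to its k-th nearest
    neighbour within the slice.
    """
    diff = T_tau[:, None, :] - T_tau[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))      # pairwise L2 distances
    sigma = np.sort(D, axis=1)[:, k]      # index 0 is the zero self-distance
    return np.exp(-(D / sigma[:, None]) ** alpha)
```

Because each row uses its own bandwidth, the matrix is not symmetric; symmetrization is deferred until the full multislice kernel has been assembled.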
In order to allow the kernel to represent changing dynamics of units over the course of learning, we use a fixed-bandwidth Gaussian kernel in the interslice affinities $\matr K^{(i)}_\text{interslice}$, where $\epsilon$ is the average across all epochs and all units of the distance of unit $i$ at epoch $\tau$ to its $\kappa$th nearest neighbor among the set consisting of the same unit $i$ at all other epochs: $\epsilon = \frac{1}{nm} \sum_{\tau=1}^n \sum_{i=1}^{m} d_\kappa(\matr T(\tau, i), \matr T(:, i)).$ Finally, the multislice kernel matrix contains one row and column for each unit at each epoch, such that the intraslice affinities form a block diagonal matrix and the interslice affinities form off-diagonal blocks composed of diagonal matrices (see Figures~\ref{fig:graph_schematic} and~\ref{fig:kernel_schematic} for a diagram): $$\matr K((\tau,i),(\upsilon,j)) = \begin{cases} \matr K^{(\tau)}_\text{intraslice}(i,j), &\text{ if }\tau = \upsilon;\\ \matr K^{(i)}_\text{interslice}(\tau,\upsilon), &\text{ if $i = j$};\\ 0, &\text{ otherwise.} \end{cases}$$ We symmetrize this kernel as $\matr K' = \frac{1}{2}(\matr K + \matr K^T)$, and row normalize it to obtain $\matr P=\matr D^{-1}\matr K'$, where $\matr D$ is the diagonal matrix of row sums of $\matr K'$. $\matr P$ represents a random walk over all units across all epochs, where propagating from $(\tau,i)$ to $(\upsilon,j)$ is conditional on the transition probabilities between epochs $\tau$ and $\upsilon$. PHATE~\citep{moon:PHATE} is applied to $\matr P$ to visualize the time trace $\matr T$ in two or three dimensions.
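Putting the pieces together, the assembly of the full multislice kernel, its symmetrization, and the row-normalized random-walk matrix can be sketched as follows. This is a pure-NumPy illustration: the clamp of $\kappa$ to $n-1$ is our addition so the sketch runs on small toy inputs, and the paper's defaults are $k=2$, $\alpha=5$, $\kappa=25$.

```python
import numpy as np

def multislice_random_walk(T, k=2, alpha=5, kappa=25):
    """Build P = D^{-1} K' from the z-scored time trace T of shape
    (n_epochs, m_units, p_samples)."""
    n, m, _ = T.shape
    K = np.zeros((n * m, n * m))

    # Intraslice blocks (block diagonal): alpha-decay kernel per epoch.
    for tau in range(n):
        diff = T[tau][:, None, :] - T[tau][None, :, :]
        D = np.sqrt((diff ** 2).sum(-1))
        sigma = np.sort(D, axis=1)[:, k]          # distance to k-th neighbour
        K[tau * m:(tau + 1) * m, tau * m:(tau + 1) * m] = \
            np.exp(-(D / sigma[:, None]) ** alpha)

    # Interslice affinities: fixed bandwidth eps shared by all units/epochs.
    kappa = min(kappa, n - 1)                     # clamp for tiny examples
    D_unit = []
    for i in range(m):
        A = T[:, i, :]                            # unit i over epochs, (n, p)
        diff = A[:, None, :] - A[None, :, :]
        D_unit.append(np.sqrt((diff ** 2).sum(-1)))
    eps = np.mean([np.sort(D, axis=1)[:, kappa] for D in D_unit])

    # Fill the diagonals of the off-diagonal blocks.
    for i in range(m):
        Ki = np.exp(-(D_unit[i] / eps) ** 2)
        for tau in range(n):
            for ups in range(n):
                if tau != ups:
                    K[tau * m + i, ups * m + i] = Ki[tau, ups]

    K_sym = (K + K.T) / 2                         # K' = (K + K^T) / 2
    return K_sym / K_sym.sum(axis=1, keepdims=True)   # P = D^{-1} K'
```

Each row of the returned matrix sums to one, so it defines the random walk over all units across all epochs to which PHATE is applied.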
\section{Introduction} There has been a rapidly growing interest in goal reasoning in recent years: planning mechanisms for {agents} that {are} capable of explicitly reasoning about their goals and changing them whenever it becomes necessary \cite{aha2018goal,munoz2018adaptive}. The potential applications of goal reasoning span several research fields; to name a few: controlling underwater unmanned vehicles \cite{wilson2018goal}, playing digital games \cite{dannenhauer2015goal}, and air combat simulations \cite{floyd2017goal}. {One of the promising recent frameworks for goal-based planning and reasoning} is hierarchical {deep Q-networks} (hDQN) \cite{kulkarni2016hierarchical}, {which} consists of two layers: a meta-layer that plans strategically and an action-layer that plans local navigation. The meta-layer receives a state as its input and outputs a goal, a condition that can be evaluated in a given state. The action-layer receives a state and a goal as its input. Then, it selects and executes actions until the agent reaches a state where the goal is achieved. Both layers use a deep neural network similar to that of DQN with some important differences: the meta-layer selects goals in order to maximize external rewards from the environment, while the action-layer selects actions to maximize designer-defined intrinsic rewards (e.g., $1$ for reaching the goal state and $0$ otherwise). {In this work, we consider the problem of exploring an environment by a robot for classification purposes. Contrary to the standard assumptions made in the literature, we assume that the robot can only partially observe the environment, where each observation depends on the actions taken by the robot. The first and second layers of our proposed architecture {are similar to those} of hDQN, while the third layer performs a classification task and evaluates the reward in a differentiable manner.} {Our approach has other differences from hDQN.
First,} note that hDQN requires the action-layer to reach a state achieving the goal. However, we find that this assumption is too restrictive, {unnecessary, and potentially unrealizable due to partial observability} for our purposes. Instead, our method relaxes this requirement by allowing a robot to move a few steps towards the goal, but not necessarily reaching it. This flexibility is needed because our intrinsic objective is to explore the environment. Therefore, the goal planner {should only dictate} a desired general direction of exploration rather than imposing a hard constraint to reach a specific {position}. In this sense, our goals play a similar role to tasks in hierarchical task network planning \cite{erol1996hierarchical}, where the tasks are processes inferred from the agent's execution (e.g., ``explore in this direction'') rather than goals, which need to be validated in a particular state (e.g., ``reach {coordinate} $(3,5)$''). {Second, the nature of our problem motivates a single unified reward for the meta-layer and action-layer rather than separate rewards. As already mentioned, this reward is the output of the classification layer. Lastly, the partial observability of our problem motivates the derivation and use of policy-gradient approaches for learning the model parameters. As illustrated in \cite{mousavi2019multi}, such generalized policy gradient algorithms allow co-design of the goal generator, action planner, and classifier modules. } Our methodology incorporates goal reasoning capabilities with deep reinforcement learning procedures for robot navigation by introducing intermediate goals, instead of requiring the robot to take a sequence of actions. In this way, our architecture provides transparency in terms of what the robot is trying to accomplish and, thereby, provides an explanation for its own course of action.
{The statement of the classification problem is the same as in \cite{mousavi2019multi}, but our approach has some important differences. In \cite{mousavi2019multi}, we employ multiple agents with a recurrent network architecture, but the robots do not enjoy goal reasoning capabilities. } \noindent{\emph{Related Literature:}} We cast the classification problem as a planning and perception mechanism with a three-layer architecture that is realized through a feedback loop. We are particularly interested in planning for perception. A related line of research is active perception: how to design a control architecture that learns to complete a task and, at the same time, to focus its attention on collecting the necessary observations from the environment (see \cite{whitehead1990active,aloimonos2013active} and references therein). The coupling between action and perception has also been inspired by human body functionalities \cite{ballard1992hand}. Visual attention is another related line of work. It is based on the idea that for a given task, in general, only a subset of the environment may contain the necessary information, motivating the design of an attention mechanism \cite{tsotsos2011computational,balcarras2016attentional}. These ideas have motivated saliency-based techniques in computer vision and machine learning, where the non-relevant parts of the data are purposely ignored \cite{mesquita2016object, bruce2015computational,bruno2019image,potapova2017survey,schauerte2016bottom}. \vspace{0.2cm} \noindent\emph{Notations:} The $i$th element of a vector $\pi$ is denoted by $\pi[i]$, where indexing may start from $0$. For an integer $T >0$, $[T]$ denotes the sequence of labels $[0,1,\dots,T-1]$. For two images $y_1 \in \mathbb{R}^{c_1 \times n \times n}$ and $y_2 \in \mathbb{R}^{c_2 \times n \times n}$ that have the same dimensions but different numbers of channels, their concatenation is denoted by $\mathrm{concat}([y_1,y_2]) \in \mathbb{R}^{(c_1+c_2) \times n \times n}$.
The categorical distribution over the elements of a probability matrix (or vector) $\pi$, whose elements add up to $1$, is denoted by $\mathrm{categorical}(\pi )$. For two probability vectors, $\pi_1,\pi_2 \in \mathbb{R}^D$, the cross-entropy between the corresponding categorical distributions is denoted by $\mathrm{CrossEntropy} (\pi_1,\pi_2)$. \section{Problem Statement} Let us consider an agent (robot) that is capable of moving in some pre-specified directions (such as up, down, right, and left) in order to explore an image (e.g., map of a region) during a sequence of $E > 0$ episodes, where the duration of each episode is $T > 0$ steps in time. For integers $c, n >0$, we represent an instance of an image by a $c \times n \times n$ {array}. Suppose that at the beginning of episode $e \in [E]$ a goal $g(e)$ is assigned to the robot and at every time step $t \in [T]$ (within that episode), the robot moves towards $g(e)$ to discover a portion of image $x \in \mathbb{R}^{ c \times n \times n}$ based on its current pose $p(e,t) \in \mathbb{R}^2$. The robot takes an action to update its position. Based on its past history, the agent has uncovered portions of $x$ up to time $t$; the resulting partially discovered image is denoted by $y(e,t) \in \mathbb{R}^{c \times n\times n}$. The undiscovered portions of $x$ in $y(e,t)$ are set to $0$. Fig. \ref{fig:b} illustrates this scenario through an example, where the discovered image $y(e,t)$, the robot's position, and its goal are demonstrated at different episodes and times. \begin{figure}[t] \begin{center} \includegraphics[width=2.0cm]{images/images/g3/snp0.eps} \includegraphics[width=2.0cm]{images/images/g3/snp5.eps} \includegraphics[width=2.0cm]{images/images/g3/snp10.eps} \includegraphics[width=1.2cm]{images/dotss.pdf} \end{center} $~~~~~~y(0,0)~~~~~$ $~~~~~y(1,0)~~$ $~~~~~~~~y(2,0)~~$ \caption{Snapshots of the proposed problem at the beginning of three episodes.
The blue and green squares point to the current {position} of the agent and the goal of each episode. During each episode, the agent has moved towards the goal. } \label{fig:b} \end{figure} The {\it problem} is to design a layered architecture that generates meaningful goals and plans navigation towards assigned goals, with the objective of performing image classification. \section{A Multi-layered Architecture} We propose an architecture where a robot collects local observations from an image, generates intermediate goals based on what it has observed, takes local actions to move towards these goals, and, finally, makes a prediction based on the discovered information by the end of the last episode to classify the underlying image. This architecture consists of three layers, where each receives a different set of information as its input. These inputs are defined using some auxiliary internal variables. For given $e \in [E]$ and $t\in [T]$, we define an auxiliary image $l(e,t) \in \mathbb{R}^{n \times n}$ whose pixels are set to $1$ everywhere except over an $m \times m$ patch of pixels with $0$ values, {where $m$ denotes the width and height of the partial observation by the agent}. This variable solely depends on the robot position $p(e,t)$. Similarly, we define an auxiliary image $h(e,t)\in \mathbb{R}^{n \times n}$ where the value of a pixel is set to $0$ if the robot has visited that pixel before, otherwise to $1$. This variable keeps track of the history of the agent. \begin{figure*} \begin{center} \includegraphics[width=14.1cm]{images/goalBased1.pdf} \includegraphics[width=15.5cm]{images/goalBased2.pdf} \includegraphics[width=15.5cm]{images/goalBased3.pdf} \end{center} \caption{A schematic diagram of the 3-layered deep learning architecture for goal generator, action planner, and classifier. The dots correspond to repeating the preceding modules $r$ times.
In the planners, the number of channels in the convolutional filters is fixed and equal to $d$ in the consecutive layers. For the classification module, the number of output channels from the convolutions is doubled each time. Thus, in each case we will have {different numbers} of intermediate channels $q_g$, $q_a$, and $q_c$ (the components are not drawn). } \label{fig:schematic} \end{figure*} \subsection{Goal Planner} We consider a fully-convolutional architecture of ResNet style \cite{he2016deep} for the planner, where the skip connections are modified to have concatenation form instead of summation (similar to the densely connected architecture \cite{huang2017densely}). The top portion of Fig. \ref{fig:schematic} illustrates our architecture. At the beginning of episode $e$, information input $u_g(e)\in {\mathbb{R}^{(c+3) \times n \times n }}$ is formed by concatenating the previous goal image $g_l(e-1)$ with three inputs: \vspace{0.1cm} \noindent {\it (i)} Discovered image up to the end of the previous episode, which is defined by \begin{align} y(e-1):=y(e-1,T-1) \in \mathbb{R}^{c\times n \times n}. \end{align} We recap that $y(e,t)$ contains the discovered portions of the underlying image at episode $e$ and time $t$. \noindent{\it(ii)} An image that encapsulates the position of the robot in the environment by the end of the previous episode, which is defined by \begin{align} l(e-1):=l(e-1,T-1) \in \mathbb{R}^{n \times n}. \end{align} \noindent{\it(iii)} An image that encapsulates the history of all visited {positions} up to that episode, which is defined by \begin{align} h(e-1):=h(e-1,T-1) \in \mathbb{R}^{n \times n}. \end{align} We feed the following input to the planner \begin{align*} u_g(e):= \mathrm{concat} (\,\left [\,y(e-1),l(e-1),h(e-1),g_l(e-1)\,\right ]\, ), \end{align*} where $g_l(e-1)$ is derived from the previous goal $g(e-1)$ according to a procedure that is explained at the end of this subsection. Then, we utilize a convolutional architecture that outputs a single-channel $n \times n$ image.
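The channel-wise assembly of the planner input can be sketched as follows; the `(channels, n, n)` array layout is our assumption, matching the notation above.

```python
import numpy as np

def goal_planner_input(y_prev, l_prev, h_prev, g_l_prev):
    """Stack the discovered image and the three single-channel masks
    into u_g(e) of shape (c + 3, n, n).

    y_prev   : (c, n, n) discovered image at the end of episode e-1
    l_prev   : (n, n)    position mask (0 on the agent's patch, 1 elsewhere)
    h_prev   : (n, n)    history mask (0 on visited pixels, 1 elsewhere)
    g_l_prev : (n, n)    previous-goal mask
    """
    masks = [m[None, :, :] for m in (l_prev, h_prev, g_l_prev)]
    return np.concatenate([y_prev] + masks, axis=0)
```

The action-planner input $u_a(e,t)$ has exactly the same structure, built from the per-step images $y(e,t)$, $l(e,t)$, $h(e,t)$, and $g_l(e)$.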
By applying $\mathrm{softmax}$ on this image, we arrive at {an} $n \times n$ probability matrix that can be characterized by a nonlinear map \begin{align} \pi_g(e) =f_1\big(u_g(e);\theta_1\big), \end{align} where $\theta_1$ is a trainable parameter. We define a categorical probability distribution over the pixels using $\pi_g(e)$, which allows us to sample the goal $g(e) \in \mathbb{R}^2$ from this distribution \begin{align} g(e)\,\sim \, \mathrm{categorical}\big(\pi_g(e)\big). \end{align} As a feedback signal for this layer and the action-layer in the next episode, an auxiliary variable $g_l(e) \in \mathbb{R}^{n\times n}$ is created, which is an image whose pixel values are set to $0$ only at the {$m \times m$ patch corresponding to the goal $g(e)$} and to $1$ elsewhere (similar to $l(e)$). \subsection{Action Planner for Local Navigation} During each episode, the robot takes $T$ actions towards an assigned goal. It is assumed that each action taken by the robot moves it at most a fixed number of pixels to the left, right, up, or down. Given the goal of the episode, one can verify that there is always at most one horizontal action (either left or right) and one vertical action (either up or down) that counts as moving towards the goal. Therefore, given the current {position} $p(e,t)$ and goal $g(e)$, the problem of planning local actions can be formulated as finding a probability vector $\pi_a(e,t) \in \mathbb{R}^2$ that allows the robot to choose between vertical and horizontal actions and move towards the goal. In situations where only one of these actions takes the robot closer to the goal, we do not use this distribution. More precisely, the robot's action protocol is given by $$ a(e,t)= \left \{ \begin{array}{l l} \hspace{-0.1cm} \text{vertical action} & \text{if } p(e,t)[0]= g(e)[0]\\ \hspace{-0.1cm} \text{horizontal action} & \text{else if } p(e,t)[1]= g(e)[1] \\ \hspace{-0.1cm} \text{sample from dist.} & \text{otherwise} \\ \end{array} \right ..
$$ To evaluate the probability vector $\pi_a$, we consider a similar fully-convolutional architecture for choosing the local actions; we refer to the middle portion of Fig. \ref{fig:schematic}. The input to this architecture lives in $\mathbb{R}^{(c+3)\times n \times n}$ and is defined by \begin{align} u_a(e,t)=\mathrm{concat} \big (\left [\,y(e,t),\,l(e,t),\,h(e,t),\,g_l(e)\,\right ]\big ). \end{align} The convolutional mapping results in an image with $2$ channels. Then, we apply global average-pooling to this output, followed by $\mathrm{softmax}$ normalization, to get a vector $\pi_a (e,t) \in \mathbb{R}^2$. By composing all these maps, we obtain the following characterization \begin{align} \pi_a(e,t)=f_2\big(u_a(e,t);\,\theta_2\big), \end{align} where $\theta_2$ is a trainable parameter. We construct a categorical distribution which enables the robot to select between vertical and horizontal actions via random sampling, i.e., \begin{align} a(e,t)\,\sim \, \mathrm{categorical}\big(\pi_a(e,t) \big). \end{align} \subsection{Image Classifier} A similar convolutional architecture is considered for the classification module; we refer to the bottom portion of Fig. \ref{fig:schematic}. Classification is conducted at the end of the last episode, i.e., {at episode $E-1$ and time step $T-1$}. Let us denote the last explored image by $y_f:=y(E-1,T-1) \in \mathbb{R}^{c \times n \times n}.$ This will be the input to the classifier, i.e., \begin{align} u_c = y_f. \end{align} The output of the convolutional layer has $D$ channels, which is globally average-pooled before applying $\mathrm{softmax}$ to get the prediction vector $\pi_c \in \mathbb{R}^D$. Similar to the other two layers, the corresponding nonlinear map can be represented by \begin{align} \pi_c=f_3\big(u_c;\,\theta_3\big), \end{align} where $\theta_3$ is a trainable parameter.
The reward is defined as \begin{align}\label{eq:r} r=-\mathrm{CrossEntropy} \Big (\,\pi_c\,,\,\pi_c^l \,\Big ), \end{align} in which {$\pi_c^l \in \mathbb{R}^D$ is the label probability vector. This vector is equal to the unit coordinate vector in the $j$th direction, where $j \in [D]$ is the label.} \section{Reinforcement Learning Algorithm} We build upon our ideas from \cite{mousavi2019multi} and develop a learning algorithm to train the various layers in our architecture. The key step is to find an unbiased estimator for the gradient of the expected reward when the reward of the reinforcement learning explicitly depends on the parameters of the neural network. Let us put all trainable parameters in one vector and represent it by $ \Theta:=\left [\theta_1^T,\theta_2^T,\theta_3^T \right ]^T $. The set of all trajectories is denoted by $\mathcal{T}$ and the reward corresponding to a given trajectory $\tau \in \mathcal{T}$ by $r^\tau$. The objective is to maximize the expected reward, i.e., $$ \underset{\Theta}{\mathrm{maximize}}{}~J(\Theta),$$ where $ J(\Theta) =\mathbb{E} \{r^\tau\} ={\sum_{\tau \in \mathcal{T}} \pi^\tau \, r^\tau }$ and $\pi^\tau$ is the probability of choosing the corresponding goals and actions given the value of the current parameter $\Theta$. The gradient of $J$ with respect to $\Theta$ can be written as \begin{align} \nabla J\,=\,\sum_{\tau \in \mathcal{T}} \left( r^\tau \nabla \pi^\tau + \pi^\tau \nabla r^\tau \right). \end{align} The REINFORCE algorithm \cite{sutton2000policy} helps us rewrite the first term using the identity $ \nabla \pi^\tau=\pi^\tau \nabla (\log \pi^\tau). $ Then, one can verify that \begin{align}\label{eq:nabla} \nabla J &\,=\,\sum_{\tau \in \mathcal{T}} \left( \pi^\tau \nabla (\log \pi^\tau)r^\tau + \pi^\tau \nabla r^\tau \right) \\ & \notag \, =\, \mathbb{E}\{\nabla (\log \pi^\tau)r^\tau+\nabla r^\tau\}.
\end{align} Suppose that $N$ independent trajectories are created, i.e., $N$ rollouts\footnote{A rollout is executing a fixed policy given an identical initial setting with a random seed. Different rollouts are required when the outcome of the game is uncertain (i.e., stochastic)\cite{sutton2018reinforcement}.}, where $\pi^{(k)}$ and $r^{(k)}$ denote the probability of the $k$th trajectory and the resulting reward, respectively, for $k=1,\dots,N$. Let us define $\hat J$ to be \begin{align}\label{eq:hatJ} \hat J\,:=\,\dfrac{1}{N} \sum_{k=1}^{N} \Big ( \log \pi^{(k)}\, r_d^{(k)}+r^{(k)} \Big ), \end{align} where the value of the quantity $r_d^{(k)}$ equals $r^{(k)}$, but it has been detached from the gradients. {This means that a machine learning {algorithm} should treat $r_d^{(k)}$ as a non-differentiable scalar during training\footnote{The reason for this treatment is the idea behind the chain rule: in $(fg)'=f'g+g'f$, each factor on the right-hand side is kept constant while the other one varies.} Then, we inspect that \begin{align} \mathbb{E} \left \{\nabla \hat J \right \}\, =\, \nabla J, \end{align} i.e., $\nabla \hat J$ is an unbiased estimator of $\nabla J$ given by \eqref{eq:nabla}. This justifies the use of the approximation $ \nabla J \, \approx \, \nabla \hat J. $ \begin{remark} The first term inside the summation in \eqref{eq:hatJ} is identical to the quantity that is derived in the policy gradient method with a reward that is independent of the parameters, i.e., the REINFORCE algorithm \cite{sutton2000policy}. The second term indicates that the reward directly depends on $\Theta$. For example, if all goals and actions have equal probability of being selected, then it suffices to consider {only} the second term inside the summation in \eqref{eq:hatJ}.
\end{remark} \subsection{Hierarchical Training} \label{subsec:hi} \begin{table}[] \resizebox{8.5cm}{!}{% \begin{tabular}{|l|c|c|c|} \hline Layer & being trained & trained \& fixed & i.i.d. \\ \hline\hline Classifier & \checkmark & \checkmark & $\times$ \\ \hline Goal Planner & \checkmark & \checkmark & \checkmark \\ \hline Action Planner & \checkmark & \checkmark & \checkmark \\ \hline \end{tabular}% } \caption{Different possibilities for training of different layers. } \label{table:modes} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[width=2.1cm]{images/images/f3/snp0.eps} \includegraphics[width=2.1cm]{images/images/f3/snp5.eps} \includegraphics[width=2.1cm]{images/images/f3/snp10.eps} \includegraphics[width=2.1cm]{images/images/f3/snp15.eps} \includegraphics[width=2.1cm]{images/images/f3/snp19.eps} \includegraphics[width=2.1cm]{images/images/f3/truelabel0.eps} \includegraphics[width=2.5cm]{images/images/f3/label.eps} \includegraphics[width=2.1cm]{images/images/f1/snp0.eps} \includegraphics[width=2.1cm]{images/images/f1/snp5.eps} \includegraphics[width=2.1cm]{images/images/f1/snp10.eps} \includegraphics[width=2.1cm]{images/images/f1/snp15.eps} \includegraphics[width=2.1cm]{images/images/f1/snp19.eps} \includegraphics[width=2.1cm]{images/images/f1/truelabel0.eps} \includegraphics[width=2.5cm]{images/images/f1/label.eps} \includegraphics[width=2.1cm]{images/images/f4/snp0.eps} \includegraphics[width=2.1cm]{images/images/f4/snp5.eps} \includegraphics[width=2.1cm]{images/images/f4/snp10.eps} \includegraphics[width=2.1cm]{images/images/f4/snp15.eps} \includegraphics[width=2.1cm]{images/images/f4/snp19.eps} \includegraphics[width=2.1cm]{images/images/f4/truelabel0.eps} \includegraphics[width=2.5cm]{images/images/f4/label.eps} \includegraphics[width=2.1cm]{images/images/g1/snp0.eps} \includegraphics[width=2.1cm]{images/images/g1/snp5.eps} \includegraphics[width=2.1cm]{images/images/g1/snp10.eps} \includegraphics[width=2.1cm]{images/images/g1/snp15.eps} 
\includegraphics[width=2.1cm]{images/images/g1/snp19.eps} \includegraphics[width=2.1cm]{images/images/g1/truelabel1.eps} \includegraphics[width=2.5cm]{images/images/g1/label.eps} \includegraphics[width=2.1cm]{images/images/g2/snp0.eps} \includegraphics[width=2.1cm]{images/images/g2/snp5.eps} \includegraphics[width=2.1cm]{images/images/g2/snp10.eps} \includegraphics[width=2.1cm]{images/images/g2/snp15.eps} \includegraphics[width=2.1cm]{images/images/g2/snp19.eps} \includegraphics[width=2.1cm]{images/images/g2/truelabel1.eps} \includegraphics[width=2.5cm]{images/images/g2/label.eps} \includegraphics[width=2.1cm]{images/images/g3/snp0.eps} \includegraphics[width=2.1cm]{images/images/g3/snp5.eps} \includegraphics[width=2.1cm]{images/images/g3/snp10.eps} \includegraphics[width=2.1cm]{images/images/g3/snp15.eps} \includegraphics[width=2.1cm]{images/images/g3/snp19.eps} \includegraphics[width=2.1cm]{images/images/g3/truelabel1.eps} \includegraphics[width=2.5cm]{images/images/g3/label.eps} \end{center} \caption{We demonstrate six sample trajectories from two data points. The snapshots have been taken at the beginning of $4$ episodes as well as at the end of the last episode. The blue and green squares mark the current position and the goal position, respectively. The prediction corresponding to the final unmasked image is also illustrated in each case. } \label{fig:samples} \end{figure*} The proposed multi-layered architecture as well as this policy gradient algorithm allow us to conduct training of the three layers (i.e., goal planner, action planner, and classifier) with a wide range of flexibility. All three modules can be either in training mode or kept fixed after training. Moreover, for the goal and action planning layers, we have an extra level of flexibility \emph{before} training: we can consider i.i.d. (i.e., independent and identically distributed) planning of goals or actions.
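As a concrete sanity check (a toy two-action bandit of our own, not the paper's model), the following sketch verifies numerically that the estimator in \eqref{eq:hatJ} is unbiased, i.e., $\mathbb{E}\{\nabla \hat J\} = \nabla J$, provided the reward factor $r_d$ is held constant under differentiation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy two-action bandit: pi(a=1) = sigmoid(theta), and the reward
# r(a, theta) = a + theta**2 depends on theta both through the
# sampled action and directly.
theta = 0.5
p = sigmoid(theta)

# Exact objective J = E[r] = p*(1 + theta^2) + (1 - p)*theta^2,
# hence dJ/dtheta = p*(1 - p) + 2*theta.
grad_J = p * (1.0 - p) + 2.0 * theta

# Expectation of the estimator's gradient: for each action a,
#   grad[log pi(a) * r_d + r] = (d log pi(a)/d theta) * r(a) + dr/dtheta,
# where r_d is treated as a constant ("detached") when differentiating.
dlogpi = {1: 1.0 - p, 0: -p}
reward = {1: 1.0 + theta ** 2, 0: theta ** 2}
expected = sum(pi_a * (dlogpi[a] * reward[a] + 2.0 * theta)
               for a, pi_a in ((1, p), (0, 1.0 - p)))

assert abs(expected - grad_J) < 1e-12  # E[grad J_hat] = grad J
```

Because the expectation is computed exactly over both actions, the identity holds to machine precision here; with sampled rollouts one would only observe it on average.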
This mode of operation for the goal planner means that the goals are chosen from a uniform distribution over all pixels. This mode of operation for the action planner means that horizontal and vertical actions towards the goal are always taken with equal probability $1/2$. Once we switch to learning the parameters for either of these planners, we cannot switch back to the i.i.d. mode. In Table \ref{table:modes}, we have summarized these possibilities. In this paper, we consider a sequence of three different training modes:\\ \noindent (i) meta-layer and action-layer in i.i.d. mode, while the classifier is being trained, \noindent (ii) action-layer in i.i.d. mode, while the classifier and goal planner are being trained simultaneously, \noindent (iii) all layers being trained simultaneously. In every mode, the reward $r^\tau$ is equal to $r$ given by \eqref{eq:r}. In mode (i), all goals and actions are identically distributed. Thus, we can arbitrarily set $\log \pi^\tau=0$ (or any other constant). In mode (ii), only the goals are actively decided. Therefore, the probability term is given by $$ \log \pi^\tau=\sum_{e \in [E]} \log \pi_g(e), $$ while for mode (iii), we need to set $$ \log \pi^\tau=\sum_{e \in [E]} \log \pi_g(e)+\sum_{e \in [E]} \sum_{t \in [T]}\mathrm{\chi}(e,t)\log \pi_a(e,t), $$ where $\chi({e,t})=1$ if the action at instant $(e,t)$ was decided by the action distribution, and $\chi({e,t})=0$ otherwise. \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{images/conf.eps} \end{center} \caption{The confusion matrix of classification computed on the test dataset. The reported numbers are averaged over $20$ runs of data. } \label{fig:confusion} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.8cm]{images/progress2.pdf} \end{center} \caption{The testing data accuracy vs. data epoch using the hierarchical training sequence from scratch.
} \label{fig:progress} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7.0cm]{images/progressCurveTransfer.eps} \end{center} \caption{Testing data accuracy vs. data epoch with transfer learning. } \label{fig:progresstf} \end{figure} \section{Numerical Experiment} We test the method on the MNIST dataset of handwritten digits \cite{lecun1998gradient}. The dataset consists of $60,000$ training examples and $10,000$ test images, each of $28 \times 28$ pixels. \noindent{\bf General Setup:} The dataset was normalized between $-0.5$ and $0.5$. In all experiments, the agent starts at a random position inside the image. Each action moves the agent $2$ pixels per step. In these experiments, we did not use the test set for hyper-parameter tuning. We used Student's $t$-test for the confidence interval of stochastic accuracies with an $\alpha$-value of $5\%$. The number of rollouts per data point was $4$ in the experiments (unless stated otherwise). We used the Adam solver for the optimization with a mini-batch size of $60$ images. The model was built in PyTorch \cite{paszke2017automatic}. \noindent{\bf Sample Accuracy Results: } We conduct the training with patch size $m=6$ for $E=4$ episodes that each have a horizon of $T=5$. The training and testing accuracies for the trained model were $94.39 \pm 0.03\%$ and $94.61\pm 0.17\%$, respectively. This suggests an acceptable level of generalization of our trained model to the unseen test set, while the accuracy on the test set has a slightly higher variance. \noindent{\bf Sample Trajectories: } In Fig. \ref{fig:samples}, we demonstrate $3$ sample trajectories on each of $2$ test data points next to the resulting prediction probabilities. We have intentionally illustrated both high confidence and low confidence outcomes.
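For reference, intervals of this form can be computed from per-run accuracies as mean $\pm$ a Student's $t$-based half-width. The sketch below uses made-up run values (hypothetical, not the paper's raw data) and the standard tabulated two-sided $95\%$ critical value for $19$ degrees of freedom:

```python
import math
import statistics

# Hypothetical per-run test accuracies for 20 runs (made-up values,
# purely illustrative).
runs = [94.4, 94.6, 94.5, 94.7, 94.3, 94.6, 94.5, 94.4, 94.6, 94.5,
        94.7, 94.4, 94.5, 94.6, 94.3, 94.5, 94.6, 94.4, 94.5, 94.6]

mean = statistics.mean(runs)
sem = statistics.stdev(runs) / math.sqrt(len(runs))  # standard error of the mean
t_crit = 2.093      # two-sided 95% Student's t critical value for df = 19
half_width = t_crit * sem  # interval is reported as mean +/- half_width
```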
For instance, on the test point with label $4$, the second trajectory results in a wrong prediction, which is likely due to the fact that the agent has not uncovered the upper region of the $4$ within its limited temporal budget. As one observes, in most cases, the goals and actions are selected such that the agent can see the most informative parts of the image. \noindent{\bf Top Two Category Accuracy: } For the previously described model, we evaluate the top-$2$ class accuracy (i.e., whether the true label is among the top $2$ categories predicted by the model). The training and testing accuracies then increase to $98.27 \pm 0.02\%$ and $98.30\pm 0.04 \%$, respectively. \noindent{\bf Confusion Matrix:} For the trained model, we build the confusion matrix of the classification for the testing data. In Fig. \ref{fig:confusion}, we show this matrix. The reported accuracies are averaged over $20$ independent experiments. \noindent{\bf Performance of Classifier Module:} Let us consider the trained classifier module with the complete (i.e., unmasked) image as its input; i.e., $u_c=x$. Evaluating this isolated model, it turns out that the training and testing accuracies were $94.85\%$ and $94.92\%$, respectively. The top-two-category accuracies for the classifier module were $98.32\%$ and $98.52 \%$ on the training and test sets, respectively. This suggests that the planning layers (meta-layer and action-layer) are successfully revealing the most informative regions of the image. \noindent{\bf Accuracy Vs. Epoch:} In Fig. \ref{fig:progress}, we demonstrate the testing accuracy versus training epochs, based on the hierarchical training sequence that was described in Subsection \ref{subsec:hi}. The figure includes two random baselines in addition to the final model: the model in which the goals and actions are decided in an i.i.d. manner, and the model in which the goals are planned, but the actions are decided in an i.i.d. manner. Fig.
\ref{fig:progress} reveals that the errors in prediction have decreased by around $1/3$ after using the goal planner, and by almost another $1/3$ after incorporating the action planner. \noindent{\bf Transfer Learning:} In the previous experiment, all classification and planning layers were trained from scratch. However, transfer learning ideas suggest that we may accelerate training if we can pretrain some modules. To this end, first, we consider the ResNet-18 architecture and pretrain it on the dataset (with full images) for $15$ epochs. This resulted in more than $99\%$ testing accuracy on the full images. Then, we replace the classification architecture in our system with ResNet-18 and start training \emph{all layers} (i.e., planning and perception). The result of training is illustrated in Fig. \ref{fig:progresstf}, which shows that the maximum testing accuracy of $95.19\%$ was achieved in a considerably shorter period of training (by almost an order of magnitude). \section{Concluding Remarks} We introduced a three-layer architecture for active perception of an image that allows us to co-design the planning layers for goal generation and local navigation as well as the classification layer. The layered structure of the proposed mechanism and the unified definition of reward for all layers enable us to train the parameters of the deep neural networks using a policy gradient algorithm. We conclude with a number of final remarks. First, we did not use any overfitting prevention measures (dropouts, weight decay, etc.) in our models. However, even without the use of validation sets, we observe a very good level of generalization of the current model. This may be explained by the use of fully-convolutional layers and global average pooling before evaluating the probability vectors, as suggested by \cite{lin2013network}.
Second, variations of the current architecture with recurrent memory (e.g., LSTM cells as used in \cite{mousavi2019multi}) are straightforward to construct. This could be particularly useful when we extend our results to multi-robot scenarios. Third, the intrinsic partial observability of this problem motivates the use of policy gradient algorithms rather than Q-learning approaches \cite{kulkarni2016hierarchical}. It is an interesting line of research to develop Q-learning techniques that perform at the same level as the sampling based approaches for this class of problems. \bibliographystyle{IEEEtran}
\section{Introduction} \setcounter{equation}{0} Let $\mathfrak{g}$ denote a simply-laced affine Lie algebra with Cartan datum $(A, \{\alpha_i\}_{i \in I}, \{\alpha^\vee_i\}_{i\in I})$, where $A= (a_{ij})_{i,j \in I}$, $I = \{0, 1, \cdots , n\}$, is a symmetric affine Cartan matrix, and let $U_q(\mathfrak{g})$ denote the corresponding quantum affine algebra. Let $P= \mathbb{Z} \Lambda_0 \oplus \mathbb{Z} \Lambda_1\oplus \cdots \oplus \mathbb{Z} \Lambda_n \oplus \mathbb{Z}\delta$ and $P^\vee = \mathbb{Z} \alpha^\vee_0 \oplus \mathbb{Z} \alpha^\vee_1 \oplus \cdots \oplus \mathbb{Z} \alpha^\vee_n \oplus \mathbb{Z} d $ denote the affine weight lattice and the dual affine weight lattice, where $\delta$ and $d$ denote the simple imaginary root and the degree derivation, respectively. For a dominant weight $\lambda \in P^+ = \{\mu \in P \mid \mu (\alpha^\vee_i) \geq 0 \quad {\rm for \ \ all} \quad i \in I \}$ of level $l = \lambda ({\bf c})$ (${\bf c}$ is the canonical central element), let $(L(\lambda), B(\lambda))$ denote the crystal base \cite{Kas1, Kas2, Lu} for the integrable highest weight $U_q(\mathfrak{g})$-module $V(\lambda)$. To give an explicit realization of the crystal $B(\lambda)$, the notion of affine crystal and perfect crystal was introduced in \cite{KMN1}. In particular, it is shown in \cite{KMN1, KMN2} that the affine crystal $B(\lambda)$ for the level $l \in \mathbb{Z}_{\geq 1}$ integrable highest weight $U_q(\mathfrak{g})$-module $V(\lambda)$ can be realized as the semi-infinite tensor product $\cdots \otimes B^l \otimes B^l \otimes B^l$, where $B^l$ is a perfect crystal of level $l$. This is known as the path realization of the crystal $B(\lambda)$. Subsequently, it was noticed in \cite{KKM} that one needs a coherent family of perfect crystals $\{B^l\}_{l \geq 1}$ in order to give a path realization of the crystal $B(\infty)$ of $U_q^-(\mathfrak{g})$.
In particular, the crystal $B(\infty)$ can be realized as the semi-infinite tensor product $\cdots \otimes B^{\infty} \otimes B^{\infty} \otimes B^{\infty}$, where $B^{\infty}$ is the limit of the coherent family of perfect crystals $\{B^l\}_{l \geq 1}$. On the other hand, the geometric crystal \cite{BK, N} for the simply-laced affine Lie algebra $\mathfrak{g}$ is a quadruple $\mathcal{V}(\mathfrak{g})=(X, \{e_i\}_{i \in I}, \{\gamma_i\}_{i \in I}, \{\varepsilon_i\}_{i\in I})$, where $X$ is an ind-variety, $e_i:\mathbb{C}^\times\times X\longrightarrow X$ $((c,x)\mapsto e^c_i(x))$ are rational $\mathbb{C}^\times$-actions and $\gamma_i,\varepsilon_i:X\longrightarrow \mathbb{C}$ $(i\in I)$ are rational functions satisfying the following: \begin{enumerate} \item $\{1\}\times X\subset {\rm dom}(e_i) \; {\rm for} \; {\rm any} \; i\in I,$ \item $\gamma_j(e^c_i(x))=c^{a_{ij}}\gamma_j(x),$ \item $\begin{cases} \begin{array}{lll} &\hspace{-20pt} \quad e^{c_1}_{i}e^{c_2}_{j} =e^{c_2}_{j}e^{c_1}_{i}& {\rm if }\,\,a_{ij}=a_{ji}=0,\\ &\hspace{-20pt} \quad e^{c_1}_{i}e^{c_1c_2}_{j}e^{c_2}_{i} =e^{c_2}_{j}e^{c_1c_2}_{i}e^{c_1}_{j}& {\rm if }\,\,a_{ij}=a_{ji}=-1,\\ \end{array} \end{cases}$ \item $\varepsilon_i(e_i^c(x))=c^{-1}\varepsilon_i(x)$ and $\varepsilon_i(e_j^c(x))=\varepsilon_i(x) \qquad {\rm if }\, a_{i,j}=a_{j,i}=0.$ \end{enumerate} The geometric crystal $\mathcal{V}(\mathfrak{g})$ is said to be positive if it has a positive structure \cite{BK, KNO, N}. Roughly speaking, this means that each of the rational maps $e^c_i$, $\varepsilon_i$ and $\gamma_i$ is a ratio of polynomial functions with positive coefficients. A remarkable relation between positive geometric crystals and algebraic crystals is the ultra-discretization functor $\mathcal {UD}$ between them \cite{BK}.
Applying this functor, positive rational functions are transferred to piecewise linear functions by the simple correspondence: $$ x \times y \longmapsto x+y, \qquad \frac{x}{y} \longmapsto x - y, \qquad x + y \longmapsto {\rm max}\{x, y\}. $$ It was conjectured in \cite{KNO} that for each affine Lie algebra $\mathfrak{g}$ and each Dynkin index $k \in I \setminus \{0\}$, there exists a positive geometric crystal $\mathcal{V}(\mathfrak{g})=(X, \{e_i\}_{i \in I}, \{\gamma_i\}_{i \in I}, \{\varepsilon_i\}_{i\in I})$ whose ultra-discretization $\mathcal{UD}(\mathcal{V})$ is isomorphic to the limit $B^{\infty}$ of a coherent family of perfect crystals for the Langlands dual $\mathfrak{g}^L$. So far this conjecture has been proved for $(k= 1; \mathfrak{g} = A_n^{(1)}, B_n^{(1)}, C_n^{(1)}, D_n^{(1)}, A_{2n-1}^{(2)}, A_{2n}^{(2)}, D_{n+1}^{(2)})$ \cite{KNO}; ($k \geq 2; A_n^{(1)}$) \cite{MN1, MN2}; ($k = 1; G_2^{(1)}$) \cite{N2, N3}; $(k = 1; D_4^{(3)})$ \cite{IN, IMN}; ($k=5; D_5^{(1)}$) \cite{IMP}. In \cite{MP} we constructed a positive geometric crystal for the affine Lie algebra $D_6^{(1)}$ at the Dynkin spin node $k= 6$. In this paper, for $l \in \mathbb{Z}_{\geq 1}$, we show that the family of perfect crystals $\{B^{6, l}\}_{l\geq 1}$ for $D_6^{(1)}$ given in \cite{KMN2} is a coherent family of perfect crystals with limit $B^{6, \infty}$. Furthermore, we prove that the ultra-discretization of the positive geometric crystal $\mathcal{V} (D_6^{(1)})$ constructed in \cite{MP} is isomorphic as a crystal to $B^{6, \infty}$, proving the conjecture of \cite{KNO} in this case.
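The ultra-discretization correspondence above is mechanical enough to script. The following toy sketch (our own illustration, with a made-up rational function, not one of the paper's crystal maps) applies the three rules in the max-plus ("tropical") semiring:

```python
# Max-plus counterparts of the ultra-discretization rules:
#   x * y  ->  x + y,    x / y  ->  x - y,    x + y  ->  max(x, y)
def ud_mul(x, y):
    return x + y

def ud_div(x, y):
    return x - y

def ud_add(x, y):
    return max(x, y)

# Example: the positive rational function f(x, y) = (x*y + x) / y
# (made up for illustration) ultra-discretizes to max(X + Y, X) - Y.
def ud_f(X, Y):
    return ud_div(ud_add(ud_mul(X, Y), X), Y)

assert ud_f(2, 3) == 2   # max(2 + 3, 2) - 3
assert ud_f(5, -1) == 6  # max(5 - 1, 5) - (-1)
```

In particular, because every rule is monotone and piecewise linear, the image of any positive rational expression under these substitutions is a piecewise linear function, as stated in the text.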
\section{Perfect Crystals of type \bf{$D_6^{(1)}$}} From now on we assume $\mathfrak{g}$ to be the affine Lie algebra $D_6^{(1)}$ with index set $I = \{0,1,2,3,4,5,6\}$, Cartan matrix $A = (a_{ij})_{i,j \in I}$ where $a_{ii} = 2, a_{j,j + 1} = -1 = a_{j+1,j}, \; j = 1,2,3,4, a_{02} = a_{20} = a_{46} = a_{64} = -1, a_{ij} = 0$ otherwise, and Dynkin diagram: \begin{center} \begin{tikzpicture} \draw (-2,1)--(-1,0); \draw (-2,-1)--(-1,0); \draw (-1,0)--(.5,0); \draw (.5,0)--(2,0); \draw (2,0)--(3,1); \draw (2,0)--(3,-1); \draw [fill] (-2,1) circle [radius=0.1] node[left=.1pt] (a) {0}; \draw [fill] (-2,-1) circle [radius=0.1] node[left=.1pt] (b) {1}; \draw [fill] (-1,0) circle [radius=0.1] node[below=.3pt] (c) {2}; \draw [fill] (.5,0) circle [radius=0.1] node[below=.3pt] (d) {3}; \draw [fill] (2,0) circle [radius=0.1] node[below=.3pt] (e) {4}; \draw [fill] (3,1) circle [radius=0.1] node[right=.1pt] (f) {5}; \draw [fill] (3,-1) circle [radius=0.1] node[right=.1pt] (g) {6}; \end{tikzpicture} \end{center} Let $\{\alpha_0, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6\}, \ \{\check{\alpha_0}, \check{\alpha_1}, \check{\alpha_2}, \check{\alpha_3}, \check{\alpha_4}, \check{\alpha_5}, \check{\alpha_6}\}$ and $\{\Lambda_0, \Lambda_1, \Lambda_2, \Lambda_3, \Lambda_4, \Lambda_5, \Lambda_6\}$ denote the sets of simple roots, simple coroots and fundamental weights, respectively. Then ${\bf c} =\check{\alpha_0}+\check{\alpha_1}+2\check{\alpha_2}+2\check{\alpha_3}+2\check{\alpha_4}+\check{\alpha_5}+\check{\alpha_6}$ and $\delta = \alpha_0 +\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_6$ are the canonical central element and the null root, respectively. The sets $P_{cl} = \oplus_{j=0}^6 \mathbb{Z}\Lambda_j$ and $P = P_{cl}\oplus\mathbb{Z}\delta$ are called the classical weight lattice and the weight lattice, respectively.
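As a quick sanity check (ours, not part of the paper's arguments), one can assemble the Cartan matrix $A$ described above and verify that the stated coefficients of $\delta$ lie in its kernel, as required of the null root of an affine Cartan matrix:

```python
# Cartan matrix of D_6^(1) assembled from the description in the text
# (indices 0..6).
n = 7
A = [[0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2
for j in range(1, 5):                       # a_{j,j+1} = a_{j+1,j} = -1
    A[j][j + 1] = A[j + 1][j] = -1
A[0][2] = A[2][0] = -1
A[4][6] = A[6][4] = -1

# Coefficients of delta = alpha_0 + alpha_1 + 2 alpha_2 + 2 alpha_3
#                         + 2 alpha_4 + alpha_5 + alpha_6.
marks = [1, 1, 2, 2, 2, 1, 1]

# The null root satisfies A . marks = 0.
kernel_check = [sum(A[i][j] * marks[j] for j in range(n)) for i in range(n)]
assert kernel_check == [0] * n
```

Since $A$ is symmetric, the same coefficient vector also gives the canonical central element ${\bf c}$ in the coroots, matching the expression in the text.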
In this section we reformulate the $D_6^{(1)}$-perfect crystals $\{B^{6,l}\}_{l\in\mathbb{Z}_{\geq 1}}$ corresponding to the spin node $k=6$ given in \cite{KMN2} in coordinatized form and show that they form a coherent family of perfect crystals with limit $B^{6,\infty}$. For a positive integer $l$, we consider the sets $B^{6,l}$ and $B^{6,\infty}$ as follows. \begin{align*} B^{6,l} &= \left \{ b= (b_{ij})_{\scriptsize{\begin{array}{l}i \leq j \leq i+5,\\ 1 \leq i \leq 6\end{array}}} \middle| \begin{aligned} &b_{ij} \in \mathbb{Z}_{\geq 0},\ \sum_{j=i}^{i+5} b_{ij} = l,\ 1 \leq i \leq 6,\\ &\sum_{j=i}^{6-t} b_{ij} = \sum_{j=i+t}^{5+t} b_{i+t,j},\ 1 \leq i, t \leq 5,\\ & \sum_{j=i}^{t} b_{ij} \geq \sum_{j=i+1}^{t+1} b_{i+1,j},\ 1 \leq i \leq t \leq 5 \end{aligned} \right\}, \\ B^{6,\infty} &= \left \{ b= (b_{ij})_{\scriptsize{\begin{array}{l}i \leq j \leq i+5,\\ 1 \leq i \leq 6\end{array}}} \middle| \begin{aligned} &b_{ij} \in \mathbb{Z},\ \sum_{j=i}^{i+5} b_{ij} = 0,\ 1 \leq i \leq 6, \\ &\sum_{j=i}^{6-t} b_{ij} = \sum_{j=i+t}^{5+t} b_{i+t,j},\ 1 \leq i, t \leq 5 \end{aligned} \right\}. \end{align*} For $\mathcal{B} = B^{6,l} \, \text{or} \, B^{6,\infty}$ we define the maps $\tilde{e}_k, \tilde{f}_k : \mathcal{B} \longrightarrow \mathcal{B} \cup \{0\}$, $\varepsilon_k , \varphi_k : \mathcal{B} \longrightarrow \mathbb{Z}$, $0 \leq k \leq 6$ and $\text{wt} : \mathcal{B} \longrightarrow P_{cl}$, as follows.
First we define conditions $(E_j), \ 1 \leq j \leq 14$: \begin{align*} (E_1) & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{15}+b_{22} > -b_{24}-b_{25}+b_{33}, \\ (E_2) & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & 
\hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35} > -b_{24}-b_{25}+b_{33}, \\ (E_3) & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}-b_{23}+b_{33}+b_{34} > -b_{24}-b_{25}+b_{33}, \\ (E_4) & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & 
\hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{23}-b_{25}+b_{33}+b_{34} > -b_{24}-b_{25}+b_{33}, \\ (E_5) & \hspace{5pt}-b_{13}-b_{14}+b_{33} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{14}+b_{33} > -b_{24}-b_{25}+b_{33}, \\ (E_6) & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} \geq 
-b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{23}+b_{44}+b_{45} > -b_{24}-b_{25}+b_{33}, \\ (E_7) & \hspace{5pt}-b_{13}-b_{25}+b_{33} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{25}+b_{33} > -b_{24}-b_{25}+b_{33}, \\ (E_8) & \hspace{5pt}-b_{24}-b_{25}+b_{33} > b_{55}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} > -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} > -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} \geq 
-b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{24}-b_{25}+b_{33} \geq -b_{13}-b_{25}+b_{33}, \\ (E_9) & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} > b_{55}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}-b_{34}+b_{44}+b_{45} > -b_{24}-b_{25}+b_{33}, \\ (E_{10}) & \hspace{5pt}-b_{13}+b_{44} > b_{55}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}+b_{44} > -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{13}+b_{44} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{13}+b_{44} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{13}+b_{44} \geq -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{13}+b_{44} > -b_{24}-b_{25}+b_{33}, \\ 
(E_{11}) & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} > b_{55}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} > -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} > -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{24}-b_{34}+b_{44}+b_{45} \geq -b_{24}-b_{25}+b_{33}, \\ (E_{12}) & \hspace{5pt}-b_{24}+b_{44} > b_{55}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{24}+b_{44} > -b_{35}+b_{44}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{24}+b_{44} \geq -b_{24}-b_{25}+b_{33}, \\ (E_{13}) & \hspace{5pt}-b_{35}+b_{44} > b_{55}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq 
-b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}+b_{44}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{24}+b_{44}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}-b_{35}+b_{44} \geq -b_{24}-b_{25}+b_{33}, \\ (E_{14}) & \hspace{5pt}b_{55} \geq -b_{13}-b_{23}+b_{44}+b_{45}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}b_{55} \geq -b_{13}+b_{44}, \\ & \hspace{5pt}b_{55} \geq -b_{24}-b_{34}+b_{44}+b_{45}, \\ & \hspace{5pt}b_{55} \geq -b_{24}+b_{44}, \\ & \hspace{5pt}b_{55} \geq -b_{35}+b_{44}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{14}-b_{15}+b_{22}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{14}-b_{23}-b_{24}+b_{33}+b_{34}+b_{35}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{14}-b_{23}+b_{33}+b_{34}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{23}-b_{25}+b_{33}+b_{34}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{14}+b_{33}, \\ & \hspace{5pt}b_{55} \geq -b_{13}-b_{25}+b_{33}, \\ & \hspace{5pt}b_{55} \geq -b_{24}-b_{25}+b_{33}. \end{align*} Then we define conditions $(F_j) \ (1 \leq j \leq 14)$ by replacing $>$ (resp. $\geq$) with $\geq$ (resp. $>$) in $(E_j)$. Let $b=(b_{ij}) \in \mathcal{B}$. 
Then for $\tilde{e_k}(b) = (b'_{ij})$ where \begin{align*} k=0 &: \small \begin{cases} b'_{11} = b_{11} - 1,b'_{16} = b_{16} + 1, b'_{22} = b_{22} - 1, b'_{27} = b_{27} + 1, b'_{36} = b_{36} - 1, \\ b'_{38} = b_{38} + 1, b'_{47} = b_{47} - 1, b'_{49} = b_{49} + 1, b'_{59} = b_{59} - 1, b'_{5,10} = b_{5,10} + 1, \\ b'_{69} = b_{69} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_1) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{15} = b_{15} + 1, b'_{22} = b_{22} - 1, b'_{26} = b_{26} + 1, b'_{35} = b_{35} - 1, \\ b'_{38} = b_{38} + 1, b'_{46} = b_{46} - 1, b'_{49} = b_{49} + 1, b'_{58} = b_{58} - 1, b'_{5,10} = b_{5,10} + 1, \\ b'_{69} = b_{69} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_2) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{15} = b_{15} + 1, b'_{22} = b_{22} - 1, b'_{24} = b_{24} + 1, b'_{25} = b_{25} - 1, \\ b'_{26} = b_{26} + 1, b'_{34} = b_{34} - 1, b'_{38} = b_{38} + 1, b'_{46} = b_{46} - 1, b'_{47} = b_{47} + 1, \\ b'_{48} = b_{48} - 1, b'_{49} = b_{49} + 1, b'_{57} = b_{57} - 1, b'_{5,10} = b_{5,10} + 1, b'_{69} = b_{69} - 1, \\ b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_3) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{14} = b_{14} + 1, b'_{22} = b_{22} - 1, b'_{26} = b_{26} + 1, b'_{34} = b_{34} - 1, \\ b'_{37} = b_{37} + 1, b'_{46} = b_{46} - 1, b'_{49} = b_{49} + 1, b'_{57} = b_{57} - 1, b'_{5,10} = b_{5,10} + 1,\\ b'_{69} = b_{69} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_4) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{15} = b_{15} + 1, b'_{22} = b_{22} - 1, b'_{23} = b_{23} + 1, b'_{25} = b_{25} - 1, \\ b'_{26} = b_{26} + 1, b'_{33} = b_{33} - 1, b'_{38} = b_{38} + 1, b'_{46} = b_{46} - 1, b'_{47} = b_{47} + 1, \\ b'_{48} = b_{48} - 1, b'_{49} = b_{49} + 1, b'_{57} = b_{57} - 1, b'_{58} = b_{58} + 1, b'_{59} = b_{59} - 1, \\ b'_{5,10} = b_{5,10} + 1, b'_{68} = b_{68} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_5) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{14} = b_{14} + 1, b'_{22} = b_{22} - 1, b'_{25} = b_{25} + 1, b'_{34} = b_{34} - 1, \\ b'_{36} = b_{36} + 1,
b'_{45} = b_{45} - 1, b'_{49} = b_{49} + 1, b'_{56} = b_{56} - 1, b'_{5,10} = b_{5,10} + 1,\\ b'_{69} = b_{69} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_6) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{14} = b_{14} + 1, b'_{22} = b_{22} - 1, b'_{23} = b_{23} + 1, b'_{24} = b_{24} - 1, \\ b'_{26} = b_{26} + 1, b'_{33} = b_{33} - 1, b'_{37} = b_{37} + 1, b'_{46} = b_{46} - 1, b'_{48} = b_{48} + 1,\\ b'_{57} = b_{57} - 1, b'_{58} = b_{58} + 1, b'_{59} = b_{59} - 1, b'_{5,10} = b_{5,10} + 1, b'_{68} = b_{68} - 1,\\ b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_7) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{13} = b_{13} + 1, b'_{22} = b_{22} - 1, b'_{26} = b_{26} + 1, b'_{33} = b_{33} - 1, \\ b'_{37} = b_{37} + 1, b'_{46} = b_{46} - 1, b'_{48} = b_{48} + 1, b'_{57} = b_{57} - 1, b'_{5,10} = b_{5,10} + 1,\\ b'_{68} = b_{68} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_8) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{14} = b_{14} + 1, b'_{22} = b_{22} - 1, b'_{23} = b_{23} + 1, b'_{24} = b_{24} - 1, \\ b'_{25} = b_{25} + 1, b'_{33} = b_{33} - 1, b'_{36} = b_{36} + 1, b'_{45} = b_{45} - 1, b'_{49} = b_{49} + 1, \\ b'_{56} = b_{56} - 1, b'_{58} = b_{58} + 1, b'_{59} = b_{59} - 1, b'_{5,10} = b_{5,10} + 1, b'_{68} = b_{68} - 1, \\ b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_9) \vspace{1pt}\\ \end{cases}\\ k=0&: \begin{cases} b'_{11} = b_{11} - 1, b'_{14} = b_{14} + 1, b'_{22} = b_{22} - 1, b'_{23} = b_{23} + 1, b'_{24} = b_{24} - 1,\\ b'_{25} = b_{25} + 1, b'_{33} = b_{33} - 1, b'_{34} = b_{34} + 1, b'_{35} = b_{35} - 1, b'_{36} = b_{36} + 1,\\ b'_{44} = b_{44} - 1, b'_{49} = b_{49} + 1, b'_{56} = b_{56} - 1, b'_{57} = b_{57} + 1, b'_{59} = b_{59} - 1,\\ b'_{5,10} = b_{5,10} + 1, b'_{67} = b_{67} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_{10}) \vspace{1pt} \\ b'_{11} = b_{11} - 1, b'_{13} = b_{13} + 1, b'_{22} = b_{22} - 1, b'_{25} = b_{25} + 1, b'_{33} = b_{33} - 1, \\ b'_{36} = b_{36} + 1, b'_{45} = b_{45} - 1, b'_{48} = b_{48} + 1, b'_{56} = b_{56} - 1, b'_{5,10} = b_{5,10} + 1, \\
b'_{68} = b_{68} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_{11}) \vspace{1pt}\\ b'_{11} = b_{11} - 1, b'_{13} = b_{13} + 1, b'_{22} = b_{22} - 1, b'_{25} = b_{25} + 1, b'_{33} = b_{33} - 1,\\ b'_{34} = b_{34} + 1, b'_{35} = b_{35} - 1, b'_{36} = b_{36} + 1, b'_{44} = b_{44} - 1, b'_{48} = b_{48} + 1,\\ b'_{56} = b_{56} - 1, b'_{57} = b_{57} + 1, b'_{58} = b_{58} - 1, b'_{5,10} = b_{5,10} + 1, b'_{67} = b_{67} - 1,\\ b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_{12}) \vspace{1pt}\\ b'_{11} = b_{11} - 1, b'_{13} = b_{13} + 1, b'_{22} = b_{22} - 1, b'_{24} = b_{24} + 1, b'_{33} = b_{33} - 1,\\ b'_{36} = b_{36} + 1, b'_{44} = b_{44} - 1, b'_{47} = b_{47} + 1, b'_{56} = b_{56} - 1, b'_{5,10} = b_{5,10} + 1, \\ b'_{67} = b_{67} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_{13}) \vspace{1pt}\\ b'_{11} = b_{11} - 1,b'_{13} = b_{13} + 1, b'_{22} = b_{22} - 1, b'_{24} = b_{24} + 1, b'_{33} = b_{33} - 1, \\ b'_{35} = b_{35} + 1, b'_{44} = b_{44} - 1, b'_{46} = b_{46} + 1, b'_{56} = b_{56} - 1, b'_{5,10} = b_{5,10} + 1,\\ b'_{67} = b_{67} - 1, b'_{6,11} = b_{6,11} + 1 \ \text{if} \ (E_{14}) \end{cases} \\ k=1 &: b'_{11} =b_{11} +1, b'_{12} =b_{12} -1, b'_{6,10} =b_{6,10} +1, b'_{6,11} =b_{6,11} -1 \\ k=2 &: \begin{cases} b'_{12} =b_{12} +1, b'_{13} =b_{13} -1, b'_{59} =b_{59} +1, b'_{5,10} =b_{5,10} -1 \ \text{if} \ b_{12} \geq b_{23}\\ b'_{22} =b_{22} +1, b'_{23} =b_{23} -1, b'_{69} =b_{69} +1, b'_{6,10} =b_{6,10} -1 \ \text{if} \ b_{12} < b_{23} \end{cases} \\ k=3 &: \begin{cases} b'_{13} =b_{13} +1, b'_{14} =b_{14} -1, b'_{48} =b_{48} +1, b'_{49} =b_{49} -1 \\ \hspace{1cm} \text{if} \ b_{13} \geq b_{24}, b_{13}+b_{23} \geq b_{24}+b_{34} \\ b'_{23} =b_{23} +1, b'_{24} =b_{24} -1, b'_{58} =b_{58} +1, b'_{59} =b_{59} -1 \\ \hspace{1cm} \text{if} \ b_{13} < b_{24}, b_{23} \geq b_{34}\\ b'_{33} =b_{33} +1, b'_{34} =b_{34} -1, b'_{68} =b_{68} +1, b'_{69} =b_{69} -1 \\ \hspace{1cm} \text{if} \ b_{13}+b_{23} < b_{24}+b_{34}, b_{23} < b_{34} \end{cases} \\ k=4 &: \begin{cases}
b'_{14} =b_{14} +1, b'_{15} =b_{15} -1, b'_{37} =b_{37} +1, b'_{38} =b_{38} -1 \\ \hspace{1cm} \text{if} \ b_{14} \geq b_{25}, b_{14}+b_{24} \geq b_{25}+b_{35}, b_{14}+b_{24}+b_{34} \geq b_{25}+b_{35}+b_{45}\\ b'_{24} =b_{24} +1, b'_{25} =b_{25} -1, b'_{47} =b_{47} +1, b'_{48} =b_{48} -1 \\ \hspace{1cm} \text{if} \ b_{14} < b_{25}, b_{24} \geq b_{35}, b_{24}+b_{34} \geq b_{35}+b_{45} \end{cases} \\ k=4 &: \begin{cases} b'_{34} =b_{34} +1, b'_{35} =b_{35} -1, b'_{57} =b_{57} +1, b'_{58} =b_{58} -1 \\ \hspace{1cm} \text{if} \ b_{14}+b_{24} < b_{25}+b_{35}, b_{24} < b_{35}, b_{34} \geq b_{45}\\ b'_{44} =b_{44} +1, b'_{45} =b_{45} -1, b'_{67} =b_{67} +1, b'_{68} =b_{68} -1 \\ \hspace{1cm} \text{if} \ b_{14}+b_{24}+b_{34} < b_{25}+b_{35}+b_{45}, b_{24}+b_{34} < b_{35}+b_{45}, b_{34} < b_{45} \end{cases} \\ k=5 &: \begin{cases} b'_{25} =b_{25} +1, b'_{26} =b_{26} -1, b'_{36} =b_{36} +1, b'_{37} =b_{37} -1 \\ \hspace{1cm} \text{if} \ b_{25}+b_{44}+b_{45} \geq b_{33}+b_{34}\\ b'_{45} =b_{45} +1, b'_{46} =b_{46} -1, b'_{56} =b_{56} +1, b'_{57} =b_{57} -1 \\ \hspace{1cm} \text{if} \ b_{25}+b_{44}+b_{45} < b_{33}+b_{34} \end{cases} \\ k=6 &: \begin{cases} b'_{15} =b_{15} +1, b'_{16} =b_{16} -1, b'_{26} =b_{26} +1, b'_{27} =b_{27} -1 \\ \hspace{1cm} \text{if} \ b_{15}+b_{33}+b_{34}+b_{35} \geq b_{22}+b_{23}+b_{24}, \\ \hspace{1.5cm} b_{15}+b_{33}+b_{34}+2b_{35}+b_{55} \geq b_{22}+b_{23}+b_{24}+b_{44}\\ b'_{35} =b_{35} +1, b'_{36} =b_{36} -1, b'_{46} =b_{46} +1, b'_{47} =b_{47} -1 \\ \hspace{1cm} \text{if} \ b_{15}+b_{33}+b_{34}+b_{35} < b_{22}+b_{23}+b_{24}, b_{35}+b_{55} \geq b_{44}\\ b'_{55} =b_{55} +1, b'_{56} =b_{56} -1, b'_{66} =b_{66} +1, b'_{67} =b_{67} -1 \\ \hspace{1cm} \text{if} \ b_{15}+b_{33}+b_{34}+2b_{35}+b_{55} < b_{22}+b_{23}+b_{24}+b_{44}, \\ \hspace{1.5cm} b_{35}+b_{55} < b_{44} \end{cases} \end{align*} and $b'_{ij} = b_{ij}$ otherwise. 
Also $\tilde{f_k}(b) = (b'_{ij})$ where \begin{align*} k=0 &: \small \begin{cases} b'_{11} = b_{11} + 1,b'_{16} = b_{16} - 1, b'_{22} = b_{22} + 1, b'_{27} = b_{27} - 1, b'_{36} = b_{36} + 1, \\ b'_{38} = b_{38} - 1, b'_{47} = b_{47} + 1, b'_{49} = b_{49} - 1, b'_{59} = b_{59} + 1, b'_{5,10} = b_{5,10} - 1, \\ b'_{69} = b_{69} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_1) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{15} = b_{15} - 1, b'_{22} = b_{22} + 1, b'_{26} = b_{26} - 1, b'_{35} = b_{35} + 1, \\ b'_{38} = b_{38} - 1, b'_{46} = b_{46} + 1, b'_{49} = b_{49} - 1, b'_{58} = b_{58} + 1, b'_{5,10} = b_{5,10} - 1, \\ b'_{69} = b_{69} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_2) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{15} = b_{15} - 1, b'_{22} = b_{22} + 1, b'_{24} = b_{24} - 1, b'_{25} = b_{25} + 1, \\ b'_{26} = b_{26} - 1, b'_{34} = b_{34} + 1, b'_{38} = b_{38} - 1, b'_{46} = b_{46} + 1, b'_{47} = b_{47} - 1,\\ b'_{48} = b_{48} + 1, b'_{49} = b_{49} - 1, b'_{57} = b_{57} + 1, b'_{5,10} = b_{5,10} - 1, b'_{69} = b_{69} + 1, \\ b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_3) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{14} = b_{14} - 1, b'_{22} = b_{22} + 1, b'_{26} = b_{26} - 1, b'_{34} = b_{34} + 1, \\ b'_{37} = b_{37} - 1, b'_{46} = b_{46} + 1, b'_{49} = b_{49} - 1, b'_{57} = b_{57} + 1, b'_{5,10} = b_{5,10} - 1, \\ b'_{69} = b_{69} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_4) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{15} = b_{15} - 1, b'_{22} = b_{22} + 1, b'_{23} = b_{23} - 1, b'_{25} = b_{25} + 1,\\ b'_{26} = b_{26} - 1, b'_{33} = b_{33} + 1, b'_{38} = b_{38} - 1, b'_{46} = b_{46} + 1, b'_{47} = b_{47} - 1,\\ b'_{48} = b_{48} + 1, b'_{49} = b_{49} - 1, b'_{57} = b_{57} + 1, b'_{58} = b_{58} - 1, b'_{59} = b_{59} + 1, \\ b'_{5,10} = b_{5,10} - 1, b'_{68} = b_{68} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_5) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{14} = b_{14} - 1, b'_{22} = b_{22} + 1, b'_{25} = b_{25} - 1, b'_{34} = b_{34} + 1,\\ b'_{36} = b_{36} - 1,
b'_{45} = b_{45} + 1, b'_{49} = b_{49} - 1, b'_{56} = b_{56} + 1, b'_{5,10} = b_{5,10} - 1,\\ b'_{69} = b_{69} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_6) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{14} = b_{14} - 1, b'_{22} = b_{22} + 1, b'_{23} = b_{23} - 1, b'_{24} = b_{24} + 1,\\ b'_{26} = b_{26} - 1, b'_{33} = b_{33} + 1, b'_{37} = b_{37} - 1, b'_{46} = b_{46} + 1, b'_{48} = b_{48} - 1,\\ b'_{57} = b_{57} + 1, b'_{58} = b_{58} - 1, b'_{59} = b_{59} + 1, b'_{5,10} = b_{5,10} - 1, b'_{68} = b_{68} + 1,\\ b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_7) \vspace{1pt} \\ b'_{11} = b_{11} + 1,b'_{13} = b_{13} - 1, b'_{22} = b_{22} + 1, b'_{26} = b_{26} - 1, b'_{33} = b_{33} + 1,\\ b'_{37} = b_{37} - 1, b'_{46} = b_{46} + 1, b'_{48} = b_{48} - 1, b'_{57} = b_{57} + 1, b'_{5,10} = b_{5,10} - 1, \\ b'_{68} = b_{68} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_8) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{14} = b_{14} - 1, b'_{22} = b_{22} + 1, b'_{23} = b_{23} - 1, b'_{24} = b_{24} + 1,\\ b'_{25} = b_{25} - 1, b'_{33} = b_{33} + 1, b'_{36} = b_{36} - 1, b'_{45} = b_{45} + 1, b'_{49} = b_{49} - 1,\\ b'_{56} = b_{56} + 1, b'_{58} = b_{58} - 1, b'_{59} = b_{59} + 1, b'_{5,10} = b_{5,10} - 1, b'_{68} = b_{68} + 1,\\ b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_9) \vspace{1pt}\\ \end{cases}\\ k=0 &: \small \begin{cases} b'_{11} = b_{11} + 1, b'_{14} = b_{14} - 1, b'_{22} = b_{22} + 1, b'_{23} = b_{23} - 1, b'_{24} = b_{24} + 1,\\ b'_{25} = b_{25} - 1, b'_{33} = b_{33} + 1, b'_{34} = b_{34} - 1, b'_{35} = b_{35} + 1, b'_{36} = b_{36} - 1,\\ b'_{44} = b_{44} + 1, b'_{49} = b_{49} - 1, b'_{56} = b_{56} + 1, b'_{57} = b_{57} - 1, b'_{59} = b_{59} + 1, \\ b'_{5,10} = b_{5,10} - 1, b'_{67} = b_{67} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_{10}) \vspace{1pt}\\ b'_{11} = b_{11} + 1, b'_{13} = b_{13} - 1, b'_{22} = b_{22} + 1, b'_{25} = b_{25} - 1, b'_{33} = b_{33} + 1,\\ b'_{36} = b_{36} - 1, b'_{45} = b_{45} + 1, b'_{48} = b_{48} - 1, b'_{56} = b_{56} + 1, b'_{5,10} = b_{5,10} -
1,\\ b'_{68} = b_{68} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_{11}) \vspace{1pt}\\ b'_{11} = b_{11} + 1, b'_{13} = b_{13} - 1, b'_{22} = b_{22} + 1, b'_{25} = b_{25} - 1, b'_{33} = b_{33} + 1,\\ b'_{34} = b_{34} - 1, b'_{35} = b_{35} + 1, b'_{36} = b_{36} - 1, b'_{44} = b_{44} + 1, b'_{48} = b_{48} - 1,\\ b'_{56} = b_{56} + 1, b'_{57} = b_{57} - 1, b'_{58} = b_{58} + 1, b'_{5,10} = b_{5,10} - 1, b'_{67} = b_{67} + 1,\\ b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_{12}) \vspace{1pt}\\ b'_{11} = b_{11} + 1, b'_{13} = b_{13} - 1, b'_{22} = b_{22} + 1, b'_{24} = b_{24} - 1, b'_{33} = b_{33} + 1,\\ b'_{36} = b_{36} - 1, b'_{44} = b_{44} + 1, b'_{47} = b_{47} - 1, b'_{56} = b_{56} + 1, b'_{5,10} = b_{5,10} - 1,\\ b'_{67} = b_{67} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_{13}) \vspace{1pt}\\ b'_{11} = b_{11} + 1,b'_{13} = b_{13} - 1, b'_{22} = b_{22} + 1, b'_{24} = b_{24} - 1, b'_{33} = b_{33} + 1, \\ b'_{35} = b_{35} - 1, b'_{44} = b_{44} + 1, b'_{46} = b_{46} - 1, b'_{56} = b_{56} + 1, b'_{5,10} = b_{5,10} - 1,\\ b'_{67} = b_{67} + 1, b'_{6,11} = b_{6,11} - 1 \ \text{if} \ (F_{14}) \end{cases} \\ k=1 &: b'_{11} =b_{11} -1, b'_{12} =b_{12} +1, b'_{6,10} =b_{6,10} -1, b'_{6,11} =b_{6,11} +1 \\ k=2 &: \begin{cases} b'_{12} =b_{12} -1, b'_{13} =b_{13} +1, b'_{59} =b_{59} -1, b'_{5,10} =b_{5,10} +1 \ \text{if} \ b_{12} > b_{23}\\ b'_{22} =b_{22} -1, b'_{23} =b_{23} +1, b'_{69} =b_{69} -1, b'_{6,10} =b_{6,10} +1 \ \text{if} \ b_{12} \leq b_{23} \end{cases} \\ k=3 &: \begin{cases} b'_{13} =b_{13} -1, b'_{14} =b_{14} +1, b'_{48} =b_{48} -1, b'_{49} =b_{49} +1 \\ \hspace{1cm} \text{if} \ b_{13} > b_{24}, b_{13}+b_{23} > b_{24}+b_{34} \\ b'_{23} =b_{23} -1, b'_{24} =b_{24} +1, b'_{58} =b_{58} -1, b'_{59} =b_{59} +1 \\ \hspace{1cm} \text{if} \ b_{13} \leq b_{24}, b_{23} > b_{34}\\ b'_{33} =b_{33} -1, b'_{34} =b_{34} +1, b'_{68} =b_{68} -1, b'_{69} =b_{69} +1 \\ \hspace{1cm} \text{if} \ b_{13}+b_{23} \leq b_{24}+b_{34}, b_{23} \leq b_{34} \end{cases} \\ k=4 &:
\begin{cases} b'_{14} =b_{14} -1, b'_{15} =b_{15} +1, b'_{37} =b_{37} -1, b'_{38} =b_{38} +1 \\ \hspace{1cm} \text{if} \ b_{14} > b_{25}, b_{14}+b_{24} > b_{25}+b_{35}, b_{14}+b_{24}+b_{34} > b_{25}+b_{35}+b_{45}\\ b'_{24} =b_{24} -1, b'_{25} =b_{25} +1, b'_{47} =b_{47} -1, b'_{48} =b_{48} +1 \\ \hspace{1cm} \text{if} \ b_{14} \leq b_{25}, b_{24} > b_{35}, b_{24}+b_{34} > b_{35}+b_{45}\\ b'_{34} =b_{34} -1, b'_{35} =b_{35} +1, b'_{57} =b_{57} -1, b'_{58} =b_{58} +1 \\ \hspace{1cm} \text{if} \ b_{14}+b_{24} \leq b_{25}+b_{35}, b_{24} \leq b_{35}, b_{34} > b_{45}\\ b'_{44} =b_{44} -1, b'_{45} =b_{45} +1, b'_{67} =b_{67} -1, b'_{68} =b_{68} +1 \\ \hspace{1cm} \text{if} \ b_{14}+b_{24}+b_{34} \leq b_{25}+b_{35}+b_{45}, b_{24}+b_{34} \leq b_{35}+b_{45}, b_{34} \leq b_{45} \end{cases} \\ k=5 &: \begin{cases} b'_{25} =b_{25} -1, b'_{26} =b_{26} +1, b'_{36} =b_{36} -1, b'_{37} =b_{37} +1 \\ \hspace{1cm} \text{if} \ b_{25}+b_{44}+b_{45} > b_{33}+b_{34}\\ b'_{45} =b_{45} -1, b'_{46} =b_{46} +1, b'_{56} =b_{56} -1, b'_{57} =b_{57} +1 \\ \hspace{1cm} \text{if} \ b_{25}+b_{44}+b_{45} \leq b_{33}+b_{34} \end{cases} \\ k=6 &: \begin{cases} b'_{15} =b_{15} -1, b'_{16} =b_{16} +1, b'_{26} =b_{26} -1, b'_{27} =b_{27} +1 \\ \hspace{1cm} \text{if} \ b_{15}+b_{33}+b_{34}+b_{35} > b_{22}+b_{23}+b_{24}, \\ \hspace{1.5cm} b_{15}+b_{33}+b_{34}+2b_{35}+b_{55} > b_{22}+b_{23}+b_{24}+b_{44}\\ b'_{35} =b_{35} -1, b'_{36} =b_{36} +1, b'_{46} =b_{46} -1, b'_{47} =b_{47} +1 \\ \hspace{1cm} \text{if} \ b_{15}+b_{33}+b_{34}+b_{35} \leq b_{22}+b_{23}+b_{24}, b_{35}+b_{55} > b_{44}\\ b'_{55} =b_{55} -1, b'_{56} =b_{56} +1, b'_{66} =b_{66} -1, b'_{67} =b_{67} +1 \\ \hspace{1cm} \text{if} \ b_{15}+b_{33}+b_{34}+2b_{35}+b_{55} \leq b_{22}+b_{23}+b_{24}+b_{44}, b_{35}+b_{55} \leq b_{44} \end{cases} \end{align*} and $b'_{ij} = b_{ij}$ otherwise. For $b \in B^{6,l}$ if $\tilde{e}_k(b)$ or $\tilde{f}_k(b)$ does not belong to $B^{6,l}$, then we assume it to be $0$. 
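In computational terms, each $\tilde{e}_k$ and $\tilde{f}_k$ above is a $\pm 1$ shift on a few coordinates $b_{ij}$, with the shift pattern selected by inequalities among the $b_{ij}$. The following Python fragment is our own illustrative transcription of the simplest cases $k=1$ and $k=2$ only (the truncation to $0$ outside $B^{6,l}$ is not modelled); it is a sketch, not part of the construction.

```python
# Illustrative sketch (not from the paper): model b as a dict keyed by (i, j)
# and spell out the k = 1 and k = 2 shift rules listed above.

def e1(b):
    """\\tilde{e}_1: shifts b_{11}+1, b_{12}-1, b_{6,10}+1, b_{6,11}-1."""
    b = dict(b)
    for key, d in [((1, 1), 1), ((1, 2), -1), ((6, 10), 1), ((6, 11), -1)]:
        b[key] = b.get(key, 0) + d
    return b

def f1(b):
    """\\tilde{f}_1: the opposite shifts, so it undoes e1 on these coordinates."""
    b = dict(b)
    for key, d in [((1, 1), -1), ((1, 2), 1), ((6, 10), -1), ((6, 11), 1)]:
        b[key] = b.get(key, 0) + d
    return b

def e2(b):
    """\\tilde{e}_2: one of two shift patterns, chosen by comparing b_{12}, b_{23}."""
    b = dict(b)
    if b.get((1, 2), 0) >= b.get((2, 3), 0):
        shifts = [((1, 2), 1), ((1, 3), -1), ((5, 9), 1), ((5, 10), -1)]
    else:
        shifts = [((2, 2), 1), ((2, 3), -1), ((6, 9), 1), ((6, 10), -1)]
    for key, d in shifts:
        b[key] = b.get(key, 0) + d
    return b

sample = {(1, 1): 2, (1, 2): 1, (2, 3): 3, (6, 10): 0, (6, 11): 4}
assert e1(f1(sample)) == sample     # on Z-valued coordinates, e1 and f1 are inverse
assert f1(e1(sample)) == sample
print(e2(sample)[(2, 2)])           # b_12 < b_23, so the second case fires -> 1
```

The full operators $\tilde{e}_0, \tilde{f}_0$ are the same kind of data, just with fourteen cases $(E_j)$ resp. $(F_j)$ instead of two.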
The maps $\varepsilon_k(b), \ \varphi_k(b)$ and $\text{wt}_k(b)$ for $k=0,1,2,3,4,5,6$ are given as follows. We observe that $\text{wt}_k(b) = \varphi_k(b) - \varepsilon_k(b)$, $\varphi(b) = \sum_{k=0}^6\varphi_k(b)\Lambda_k$, $\varepsilon(b) = \sum_{k=0}^6\varepsilon_k(b)\Lambda_k$ and $\text{wt}(b) = \varphi(b) - \varepsilon(b)$. \begin{align*} \varepsilon_0(b) &= \begin{cases} l + \mathcal{A}_1 \, \ \text{if} \, \ b \in B^{6,l}, \\ \mathcal{A}_1 \, \ \text{if} \, \ b \in B^{6, \infty}, \end{cases}\\ \text{where} \ \mathcal{A}_1 &= \text{max} \{-b_{56}-b_{57}-b_{58}-b_{59}-b_{5,10}, -b_{13}-b_{23}-b_{46}-b_{47}-b_{48}-b_{49}, \\ & \hspace{15pt} -b_{13}-b_{34}-b_{46}-b_{47}-b_{48}-b_{49}, -b_{13}-b_{45}-b_{46}-b_{47}-b_{48}-b_{49}, \\ & \hspace{15pt} -b_{24}-b_{34}-b_{46}-b_{47}-b_{48}-b_{49}, -b_{24}-b_{45}-b_{46}-b_{47}-b_{48} -b_{49},\\ & \hspace{15pt} -b_{35}-b_{45}-b_{46}-b_{47}-b_{48}-b_{49}, -b_{13}-b_{14}-b_{15}-b_{23}-b_{24} -b_{25}\\ & \hspace{15pt} -b_{26}-b_{27}, -b_{13}-b_{14}-b_{23}-b_{24}-b_{36}-b_{37}-b_{38}, -b_{13}-b_{14} -b_{23} \\ & \hspace{15pt} -b_{35}-b_{36}-b_{37}-b_{38}, -b_{13}-b_{23}-b_{25}-b_{35}-b_{36}-b_{37}-b_{38}, -b_{13} \\ & \hspace{15pt} -b_{14}-b_{34}-b_{35}-b_{36}-b_{37}-b_{38}, -b_{13}-b_{25}-b_{34}-b_{35}-b_{36} -b_{37}\\ & \hspace{15pt} -b_{38}, -b_{24}-b_{25}-b_{34}-b_{35}-b_{36}-b_{37}-b_{38} \}.\\ \varepsilon_1(b) &= b_{12},\\ \varepsilon_2(b) &= \text{max} \{b_{13}, -b_{12}+b_{13}+b_{23}\},\\ \varepsilon_3(b) &= \text{max} \{b_{14}, -b_{13}+b_{14}+b_{24}, -b_{13}+b_{14}-b_{23}+b_{24}+b_{34}\},\\ \varepsilon_4(b) &= \text{max} \{b_{15}, -b_{14}+b_{15}+b_{25}, -b_{14}+b_{15}-b_{24}+b_{25}+b_{35}, \\ & \hspace{15pt} -b_{14}+b_{15}-b_{24}+b_{25}-b_{34}+b_{35}+b_{45}\}, \\ \varepsilon_5(b) &= \text{max} \{b_{11}+b_{12}+b_{13}+b_{14}-b_{22}-b_{23}-b_{24}-b_{25},\\ &\hspace{15pt} b_{11}+b_{12}+b_{13}+b_{14}-b_{22}-b_{23}-b_{24}-2b_{25}+b_{33}+b_{34}-b_{44}-b_{45}\},\\ \varepsilon_6(b) &= \begin{cases} l + \mathcal{A}_2 \ 
\text{if} \ b \in B^{6,l},\\ \mathcal{A}_2 \ \ \text{if} \ b \in B^{6,\infty}, \end{cases} \end{align*} \begin{align*} \text{where} \ \mathcal{A}_2 = &\text{max} \{-b_{11}-b_{12}-b_{13}-b_{14}-b_{15}, -b_{11}-b_{12}-b_{13}-b_{14}-2b_{15} \\ & +b_{22}+b_{23}+b_{24}-b_{33}-b_{34}-b_{35}, -b_{11}-b_{12}-b_{13}-b_{14}-2b_{15} \\ & +b_{22}+b_{23}+b_{24}-b_{33}-b_{34}-2b_{35}+b_{44}-b_{55}\}. \end{align*} \begin{align*} \varphi_0(b) &= \begin{cases} l+ \mathcal{A}_3, \ \text{if} \ b \in B^{6,l} , \\ \mathcal{A}_3, \ \text{if} \ b \in B^{6, \infty}, \end{cases} \\ \text{where} \ \mathcal{A}_3 &=\text{max}\{-b_{11}-b_{12}+b_{23}+b_{24}+b_{25}+b_{26}+b_{27}-b_{56}-b_{57}-b_{58}-b_{59}-b_{5,10},\\ &\hspace{15pt} -b_{11}-b_{12}-b_{13}+b_{24}+b_{25}+b_{26}+b_{27}-b_{46}-b_{47}-b_{48} -b_{49},\\ &\hspace{15pt} -b_{11}-b_{12}-b_{13}+b_{23}+b_{24}+b_{25}+b_{26}+b_{27}-b_{34}-b_{46}-b_{47} -b_{48}-b_{49},\\ &\hspace{15pt} -b_{11}-b_{12}-b_{13}+b_{23}+b_{24}+b_{25}+b_{26}+b_{27}-b_{45}-b_{46}-b_{47}-b_{48}-b_{49},\\ &\hspace{15pt} -b_{11}-b_{12}+b_{23}+b_{25}+b_{26}+b_{27}-b_{34}-b_{46}-b_{47}-b_{48}-b_{49}, \\ &\hspace{15pt} -b_{11}-b_{12}+b_{23}+b_{25}+b_{26}+b_{27}-b_{45}-b_{46}-b_{47}-b_{48}-b_{49},\\ &\hspace{15pt} -b_{11}-b_{12}+b_{23}+b_{24}+b_{25}+b_{26}+b_{27}-b_{35}-b_{45}-b_{46}-b_{47}-b_{48}-b_{49}, \\ &\hspace{15pt} -b_{11}-b_{12}-b_{13}-b_{14}-b_{15}, -b_{11}-b_{12}-b_{13}-b_{14}+b_{25}+b_{26}+b_{27}-b_{36}-b_{37} \\ &\hspace{15pt} -b_{38}, -b_{11}-b_{12}-b_{13}-b_{14}+b_{24}+b_{25}+b_{26} +b_{27}-b_{35}-b_{36}-b_{37}-b_{38},\\ &\hspace{15pt} -b_{11}-b_{12}-b_{13}+b_{24}+b_{26}+b_{27}-b_{35} -b_{36}-b_{37}-b_{38},-b_{11}-b_{12}-b_{13}\\ &\hspace{15pt} -b_{14}+b_{23}+b_{24}+b_{25}+b_{26}+b_{27}-b_{34}-b_{35}-b_{36}-b_{37}-b_{38},\\ &\hspace{15pt} -b_{11}-b_{12}-b_{13}+b_{23}+b_{24}+b_{26}+b_{27} -b_{34}-b_{35}-b_{36}-b_{37}-b_{38},\\ &\hspace{15pt} -b_{11}-b_{12}+b_{23}+b_{26}+b_{27}-b_{34}-b_{35} -b_{36}-b_{37}-b_{38} \}.
\\ \varphi_1(b) &= b_{11}-b_{22},\\ \varphi_2(b) &= \text{max} \{b_{22}-b_{33}, b_{12}+b_{22}-b_{23}-b_{33}\},\\ \varphi_3(b) &= \text{max} \{b_{33}-b_{44}, b_{23}+b_{33}-b_{34}-b_{44}, b_{13}+b_{23}-b_{24}+b_{33}-b_{34}-b_{44}\},\\ \varphi_4(b) &= \text{max} \{b_{44}-b_{55}, b_{34}+b_{44}-b_{45}-b_{55}, b_{24}+b_{34}-b_{35}+b_{44}-b_{45}-b_{55}, \\ &\hspace{15pt} b_{14}+b_{24}-b_{25}+b_{34}-b_{35}+b_{44}-b_{45}-b_{55}\},\\ \varphi_5(b) &= \text{max} \{b_{45}, b_{25}-b_{33}-b_{34}+b_{44}+2b_{45}\},\\ \varphi_6(b) &= \begin{cases} l+ \mathcal{A}_4, \ \text{if} \ b \in B^{6, l}, \\ \mathcal{A}_4, \ \text{if} \ b \in B^{6, \infty}, \end{cases} \end{align*} \begin{align*} \text{where} \ \mathcal{A}_4 = &\text{max} \{-b_{56}-b_{57}-b_{58}-b_{59}-b_{5,10}, b_{35}-b_{44}+b_{55}-b_{56}-b_{57}\\ & -b_{58}-b_{59}-b_{5,10}, b_{15}-b_{22}-b_{23}-b_{24}+b_{33}+b_{34}+2b_{35}-b_{44}\\ & +b_{55}-b_{56}-b_{57}-b_{58}-b_{59}-b_{5,10}\}. \end{align*} \begin{align*} \text{wt}_0(b) &= -b_{11}-b_{12} +b_{23}+b_{24}+b_{25}+b_{26}+b_{27},\\ \text{wt}_1(b) &= b_{11}-b_{12}-b_{22},\\ \text{wt}_2(b) &= b_{12} -b_{13}+b_{22}-b_{23}-b_{33},\\ \text{wt}_3(b) &= b_{13}-b_{14}+b_{23}-b_{24}+b_{33}-b_{34}-b_{44},\\ \text{wt}_4(b) &= b_{14}-b_{15} +b_{24}-b_{25}+b_{34}-b_{35}+b_{44}-b_{45} -b_{55},\\ \text{wt}_5(b) &= -b_{11}-b_{12} -b_{13}-b_{14}+b_{22}+b_{23}+b_{24}+2b_{25}-b_{33}-b_{34}+b_{44}+2b_{45},\\ \text{wt}_6(b) &= b_{11}+b_{12} +b_{13}+b_{14}+2b_{15}-b_{22}-b_{23}-b_{24}+b_{33}+b_{34}+2b_{35}-b_{44}+b_{55}\\ &\hspace{15pt} -b_{56}-b_{57}-b_{58}-b_{59}-b_{5,10}.
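For $1 \leq k \leq 5$ the maps $\varepsilon_k$, $\varphi_k$ and $\text{wt}_k$ are plain max-plus expressions in the $b_{ij}$, so the identity $\text{wt}_k(b) = \varphi_k(b) - \varepsilon_k(b)$ in this range can be checked mechanically on arbitrary integer arrays. The following Python check is our own and purely illustrative; it simply transcribes the formulas above.

```python
# Numerical check (our own sketch) that wt_k = phi_k - eps_k for k = 1,...,5,
# transcribing the max-formulas above on arbitrary integer data b = (b_{ij}).
import random

def checks(b):
    eps = [
        b[1, 2],
        max(b[1, 3], -b[1, 2] + b[1, 3] + b[2, 3]),
        max(b[1, 4], -b[1, 3] + b[1, 4] + b[2, 4],
            -b[1, 3] + b[1, 4] - b[2, 3] + b[2, 4] + b[3, 4]),
        max(b[1, 5], -b[1, 4] + b[1, 5] + b[2, 5],
            -b[1, 4] + b[1, 5] - b[2, 4] + b[2, 5] + b[3, 5],
            -b[1, 4] + b[1, 5] - b[2, 4] + b[2, 5] - b[3, 4] + b[3, 5] + b[4, 5]),
        max(b[1, 1] + b[1, 2] + b[1, 3] + b[1, 4]
            - b[2, 2] - b[2, 3] - b[2, 4] - b[2, 5],
            b[1, 1] + b[1, 2] + b[1, 3] + b[1, 4] - b[2, 2] - b[2, 3] - b[2, 4]
            - 2 * b[2, 5] + b[3, 3] + b[3, 4] - b[4, 4] - b[4, 5]),
    ]
    phi = [
        b[1, 1] - b[2, 2],
        max(b[2, 2] - b[3, 3], b[1, 2] + b[2, 2] - b[2, 3] - b[3, 3]),
        max(b[3, 3] - b[4, 4], b[2, 3] + b[3, 3] - b[3, 4] - b[4, 4],
            b[1, 3] + b[2, 3] - b[2, 4] + b[3, 3] - b[3, 4] - b[4, 4]),
        max(b[4, 4] - b[5, 5], b[3, 4] + b[4, 4] - b[4, 5] - b[5, 5],
            b[2, 4] + b[3, 4] - b[3, 5] + b[4, 4] - b[4, 5] - b[5, 5],
            b[1, 4] + b[2, 4] - b[2, 5] + b[3, 4] - b[3, 5]
            + b[4, 4] - b[4, 5] - b[5, 5]),
        max(b[4, 5], b[2, 5] - b[3, 3] - b[3, 4] + b[4, 4] + 2 * b[4, 5]),
    ]
    wt = [
        b[1, 1] - b[1, 2] - b[2, 2],
        b[1, 2] - b[1, 3] + b[2, 2] - b[2, 3] - b[3, 3],
        b[1, 3] - b[1, 4] + b[2, 3] - b[2, 4] + b[3, 3] - b[3, 4] - b[4, 4],
        b[1, 4] - b[1, 5] + b[2, 4] - b[2, 5] + b[3, 4] - b[3, 5]
        + b[4, 4] - b[4, 5] - b[5, 5],
        (-b[1, 1] - b[1, 2] - b[1, 3] - b[1, 4] + b[2, 2] + b[2, 3] + b[2, 4]
         + 2 * b[2, 5] - b[3, 3] - b[3, 4] + b[4, 4] + 2 * b[4, 5]),
    ]
    return all(w == p - e for w, p, e in zip(wt, phi, eps))

random.seed(0)
for _ in range(200):
    b = {(i, j): random.randint(-9, 9)
         for i in range(1, 7) for j in range(i, i + 6)}
    assert checks(b)
```

For $k = 0$ and $k = 6$ the same identity holds, but it involves the level $l$ (resp. the maxima $\mathcal{A}_1, \ldots, \mathcal{A}_4$) and is not reproduced here.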
\end{align*} \noindent Choose elements $b^0_{\bf{0}}, b^0_{\bf{1}}, b^0_{\bf{2}}, b^0_{\bf{3}}, b^0_{\bf{4}}, b^0_{\bf{5}}, b^0_{\bf{6}}$ where \begin{align*} (b^0_{\bf{0}})_{ij} &=1 &\text{if} \ (i,j) &= (1,6),(2,7),(3,8),(4,9),(5,10),(6,11), \\ (b^0_{\bf{1}})_{ij} &=1 &\text{if} \ (i,j) &= (1,1),(2,6),(3,7),(4,8),(5,9), (6,10), \\ (b^0_{\bf{2}})_{ij} &=1 &\text{if} \ (i,j) &= (1,1),(1,5),(2,2),(2,6),(3,6),(3,8),(4,7),(4,9),(5,8),(5,10),\\ &&& \qquad(6,9),(6,11), \\ (b^0_{\bf{3}})_{ij} &=1 &\text{if} \ (i,j) &= (1,1),(1,4),(2,2),(2,5),(3,3),(3,6),(4,6),(4,9),(5,7),(5,10),\\ &&& \qquad (6,8),(6,11), \\ (b^0_{\bf{4}})_{ij} &=1 &\text{if} \ (i,j) &= (1,1),(1,3),(2,2),(2,4),(3,3),(3,5),(4,4),(4,6),(5,6),(5,10),\\ &&& \qquad (6,7),(6,11), \\ (b^0_{\bf{5}})_{ij} &=1 &\text{if} \ (i,j) &= (1,2),(2,3),(3,4),(4,5),(5,6),(6,11), \\ (b^0_{\bf{6}})_{ij} &=1 &\text{if} \ (i,j) &= (1,1),(2,2),(3,3),(4,4),(5,5),(6,6), \end{align*} and $(b^0_{\bf{k}})_{ij} =0$ \ otherwise,\ for $0 \leq k \leq 6$. As shown in \cite{KMN2}, the crystal $B^{6,l}$ is a perfect crystal with the set of minimal elements: \begin{align*} (B^{6,l})_\text{min}& = \{b \in B^{6,l} \mid \langle {\bf c} , \varepsilon (b)\rangle = l\}\\ & =\left \{ \sum_{k=0}^6 a_k b^0_{\bf k} \mid a_k \in \mathbb{Z}_{\geq 0},\ a_0+a_1+2a_2+2a_3+2a_4+a_5+a_6=l \right\}. \end{align*} For $\lambda \in P_{cl}$, consider the crystal $T_{\lambda} = \{t_\lambda\}$ with \begin{align*} &\tilde{e_k} (t_\lambda) = \tilde{f_k} (t_\lambda) =0, &\varepsilon_k (t_\lambda) = \varphi_k (t_\lambda) =-\infty, \\ &\text{wt}(t_\lambda)=\lambda, \end{align*} for $k=0,1,2,3,4,5,6$. 
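As a quick sanity check on the description of $(B^{6,l})_\text{min}$ above: every element of $B^{6,l}$ has all six row sums $\sum_{j=i}^{i+5} b_{ij}$ equal to $l$, and each $b^0_{\bf k}$ has exactly one nonzero entry per row for $k = 0,1,5,6$ and exactly two for $k = 2,3,4$, so $\sum_k a_k b^0_{\bf k}$ has every row sum equal to $a_0+a_1+2a_2+2a_3+2a_4+a_5+a_6$. A short Python verification (our own sketch, not part of the proof):

```python
# Sketch (not from the paper): the supports of the elements b^0_k listed above,
# and a check that sum_k a_k * b^0_k has all six row sums equal to
# a_0 + a_1 + 2a_2 + 2a_3 + 2a_4 + a_5 + a_6.

SUPPORTS = {
    0: [(1, 6), (2, 7), (3, 8), (4, 9), (5, 10), (6, 11)],
    1: [(1, 1), (2, 6), (3, 7), (4, 8), (5, 9), (6, 10)],
    2: [(1, 1), (1, 5), (2, 2), (2, 6), (3, 6), (3, 8), (4, 7), (4, 9),
        (5, 8), (5, 10), (6, 9), (6, 11)],
    3: [(1, 1), (1, 4), (2, 2), (2, 5), (3, 3), (3, 6), (4, 6), (4, 9),
        (5, 7), (5, 10), (6, 8), (6, 11)],
    4: [(1, 1), (1, 3), (2, 2), (2, 4), (3, 3), (3, 5), (4, 4), (4, 6),
        (5, 6), (5, 10), (6, 7), (6, 11)],
    5: [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 11)],
    6: [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)],
}

def row_sums(a):
    """Row sums of b = sum_k a_k * b^0_k for coefficients a = (a_0, ..., a_6)."""
    b = {}
    for k, coeff in enumerate(a):
        for key in SUPPORTS[k]:
            b[key] = b.get(key, 0) + coeff
    return [sum(b.get((i, j), 0) for j in range(i, i + 6)) for i in range(1, 7)]

a = (1, 0, 2, 0, 1, 3, 0)                 # an arbitrary nonnegative choice
l = a[0] + a[1] + 2 * (a[2] + a[3] + a[4]) + a[5] + a[6]
assert row_sums(a) == [l] * 6
print(l)  # -> 10
```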
Then for $\lambda, \mu \in P_{cl}$, $T_{\lambda}\otimes B^{6,l} \otimes T_{\mu}$ is a crystal with the structure given by \begin{align*} \tilde{e_k}(t_{\lambda}\otimes b \otimes t_{\mu})&= t_{\lambda}\otimes \tilde{e_k}b \otimes t_{\mu}, &\tilde{f_k}(t_{\lambda}\otimes b \otimes t_{\mu})&= t_{\lambda}\otimes \tilde{f_k}b \otimes t_{\mu}, \\ \varepsilon_k(t_{\lambda}\otimes b \otimes t_{\mu})&= \varepsilon_k(b) - \langle \check\alpha_k,\lambda \rangle, &\varphi_k(t_{\lambda}\otimes b \otimes t_{\mu})&= \varphi_k(b) + \langle \check\alpha_k,\mu \rangle, \\ \text{wt}(t_{\lambda}\otimes b \otimes t_{\mu})&= \lambda+\mu+\text{wt}(b) \end{align*} where $t_{\lambda}\otimes b \otimes t_{\mu} \in T_{\lambda}\otimes B^{6,l} \otimes T_{\mu}$. The notion of a coherent family of perfect crystals and its limit is defined in \cite{KKM}. In the following theorem we prove that the family of $D_6^{(1)}$ crystals $\{B^{6,l}\}_{l \geq 1}$ forms a coherent family with limit $B^{6, \infty}$ containing the special vector $b^{\infty} = {\bf{0}}$ (i.e. $(b^{\infty})_{ij} = 0$ for $i \leq j \leq i+5,\ 1 \leq i \leq 6$). \begin{theorem} \label{perfectcrystal} The family of perfect crystals $\{B^{6,l}\}_{l \geq 1}$ forms a coherent family and the crystal $B^{6,\infty}$ is its limit. \end{theorem} \begin{proof} Set $J=\{(l,b) \mid l \in \mathbb{Z}_{>0}, b \in (B^{6,l})_\text{min}\}$. By (\cite{KKM}, Definition 4.1), we need to show that \begin{enumerate} \item wt$(b^{\infty})=0, \varepsilon(b^{\infty})=\varphi(b^{\infty})=0,$ \item for any $(l,b) \in J$, there exists an embedding of crystals $$f_{(l,b)} : T_{\varepsilon(b)} \otimes B^{6, l} \otimes T_{-\varphi(b)} \longrightarrow B^{6, \infty}$$ where $f_{(l,b)}(t_{\varepsilon(b)} \otimes b \otimes t_{-\varphi(b)}) = b^{\infty}$, \item $B^{6, \infty} = \cup_{(l,b) \in J}$ Im $f_{(l,b)}$.
\end{enumerate} Since $\varepsilon_k(b^{\infty}) = 0, \varphi_k(b^{\infty}) = 0, 0 \leq k \leq 6$, we have $\varepsilon(b^{\infty}) = 0, \varphi(b^{\infty}) = 0$ and hence wt$(b^{\infty}) = 0$ which proves $(1)$. Let $l \in \mathbb{Z}_{>0}$ and $b^0 = (b^0_{ij})$ be an element of $(B^{6,l})_{\text{min}}$. Then there exist $a_k \in \mathbb{Z}_{\geq 0}, 0 \leq k \leq 6$ such that $a_0+a_1+2a_2+2a_3+2a_4+a_5+a_6=l$ and \begin{align*} b^0_{11} &= a_1+a_2+a_3+a_4+a_6, \ b^0_{12} =a_5, \ b^0_{13} = a_4, \ b^0_{14} =a_3, \ b^0_{15}=a_2, \ b^0_{16} =a_0, \\ b^0_{22} &= a_2+a_3+a_4+a_6, \ b^0_{23} =a_5, \ b^0_{24} = a_4, \ b^0_{25} = a_3, \ b^0_{26} = a_1+a_2, \ b^0_{27} = a_0, \\ b^0_{33} &= a_3+a_4+a_6,\ b^0_{34} =a_5, \ b^0_{35}=a_4, \ b^0_{36} =a_2+a_3, \ b^0_{37} = a_1,\ b^0_{38} =a_0+a_2, \\ b^0_{44} &= a_4+a_6, \ b^0_{45} = a_5, \ b^0_{46} = a_3+a_4,\ b^0_{47} =a_2, \ b^0_{48} =a_1, \ b^0_{49} = a_0+a_2+a_3, \\ b^0_{55} &=a_6, \ b^0_{56} = a_4+a_5, \ b^0_{57} = a_3, \ b^0_{58} = a_2, \ b^0_{59} = a_1, \ b^0_{5,10} =a_0+a_2+a_3+a_4, \\ b^0_{66} &=a_6, \ b^0_{67} = a_4, \ b^0_{68} = a_3, \ b^0_{69} = a_2, \ b^0_{6,10} = a_1, \ b^0_{6,11} =a_0+a_2+a_3+a_4+a_5, \\ \varepsilon(b^0)& = a_6\Lambda_0+a_5\Lambda_1+a_4\Lambda_2+a_3\Lambda_3+a_2\Lambda_4+a_1\Lambda_5+a_0\Lambda_6,\\ \varphi(b^0)&= a_0\Lambda_0+a_1\Lambda_1+a_2\Lambda_2+a_3\Lambda_3+a_4\Lambda_4+a_5\Lambda_5+a_6\Lambda_6. \end{align*} For any $b= (b_{ij}) \in B^{6,l}$, we define a map $$f_{(l,b^0)} : T_{\varepsilon(b^0)} \otimes B^{6,l} \otimes T_{-\varphi(b^0)} \longrightarrow B^{6,\infty}$$ by $f_{(l,b^0)}( t_{\varepsilon(b^0)} \otimes b \otimes t_{-\varphi(b^0)} )= b' =(b'_{ij}) $ where $b'_{ij} = b_{ij} - b^0_{ij}$ for all $i \leq j \leq i+5,\ 1 \leq i \leq 6$. 
Then it is easy to see that \begin{align*} \varepsilon_k(b') &= \varepsilon_k(b) - a_{6-k} = \varepsilon_k(b) - \langle \check\alpha_k, \varepsilon(b^0)\rangle \ \ \text{for} \; 0 \leq k \leq6,\\ \varphi_k(b') &= \varphi_k(b) - a_k = \varphi_k(b) + \langle \check\alpha_k, -\varphi(b^0)\rangle \ \ \text{for} \; 0 \leq k \leq6. \end{align*} Hence we have \begin{align*} \varepsilon_k(b') &= \varepsilon_k(b) - \langle \check\alpha_k, \varepsilon(b^0)\rangle = \varepsilon_k(t_{\varepsilon(b^0)}\otimes b\otimes t_{-\varphi(b^0)}), \\ \varphi_k(b') &= \varphi_k(b) + \langle \check\alpha_k, -\varphi(b^0)\rangle=\varphi_k(t_{\varepsilon(b^0)}\otimes b\otimes t_{-\varphi(b^0)}),\\ \text{wt}(b') &= \sum_{k=0}^6(\varphi_k(b') - \varepsilon_k(b'))\Lambda_k = \text{wt}(b) + \sum_{k=0}^6 \langle \check\alpha_k, -\varphi(b^0)\rangle\Lambda_k + \sum_{k=0}^6 \langle \check\alpha_k, \varepsilon(b^0)\rangle\Lambda_k \\ &= \text{wt}(b) - \varphi(b^0) + \varepsilon(b^0) = \text{wt}(t_{\varepsilon(b^0)}\otimes b\otimes t_{-\varphi(b^0)}). \end{align*} For $0 \leq k \leq 6, b \in B^{6,l}$, it can be checked easily that the conditions for the action of $\tilde{e}_k$ on $b' = b - b^0$ hold if and only if the conditions for the action of $\tilde{e}_k$ on $b$ hold. Hence from the defined action of $\tilde{e}_k$, we see that $\tilde{e}_k(b') = \tilde{e}_k(b) - b^0, 0\leq k \leq6$. This implies that \[ f_{(l,b^0)} (\tilde{e_k}( t_{\varepsilon(b^0)} \otimes b \otimes t_{-\varphi(b^0)})) = f_{(l,b^0)} ( t_{\varepsilon(b^0)} \otimes \tilde{e}_k(b) \otimes t_{-\varphi(b^0)}) \] \[= \tilde{e}_k(b) - b^0 = \tilde{e}_k(b') = \tilde{e}_k ( f_{(l, b^0)}( t_{\varepsilon(b^0)} \otimes b \otimes t_{-\varphi(b^0)})). \] Similarly, we have $f_{(l,b^0)} (\tilde{f_k}( t_{\varepsilon(b^0)} \otimes b \otimes t_{-\varphi(b^0)})) = \tilde{f}_k ( f_{(l, b^0)}( t_{\varepsilon(b^0)} \otimes b \otimes t_{-\varphi(b^0)}))$. 
Clearly the map $f_{(l,b^0)}$ is injective with $f_{(l,b^0)} ( t_{\varepsilon(b^0)} \otimes b^0 \otimes t_{-\varphi(b^0)})=b^{\infty}$. This proves (2). We observe that $ \sum_{j=i}^{i+5} b'_{ij} = \sum_{j=i}^{i+5} b_{ij} - \sum_{j=i}^{i+5} b^0_{ij} = l-l = 0$ for all $1 \leq i \leq 6$. Also, \begin{align*} b'_{11} &= b_{11} - b^0_{11} \\ &= b_{66} + b_{67} + b_{68} + b_{69} + b_{6,10} - a_1 - a_2 - a_3 -a_4 - a_6\\ &= b'_{66} + b'_{67} + b'_{68} + b'_{69} + b'_{6,10},\\ b'_{11} + b'_{12} &= b_{11} - b^0_{11} + b_{12} - b^0_{12} \\ &= b_{55} + b_{56} + b_{57} + b_{58} + b_{59} - a_1 - a_2 - a_3 - a_4 - a_5 - a_6 \\ &= b'_{55} + b'_{56} + b'_{57} + b'_{58} + b'_{59},\\ b'_{11} + b'_{12} + b'_{13} &= b_{11} - b^0_{11} + b_{12} - b^0_{12} + b_{13} - b^0_{13} \\ &= b_{44} + b_{45} + b_{46} + b_{47} + b_{48} - a_1 - a_2 - a_3 - 2a_4 - a_5 - a_6 \\ &= b'_{44} + b'_{45} + b'_{46} + b'_{47} + b'_{48},\\ b'_{11} + b'_{12} + b'_{13} + b'_{14} &= b_{11} - b^0_{11} + b_{12} - b^0_{12} + b_{13} - b^0_{13} + b_{14} - b^0_{14} \\ &= b_{33} + b_{34} + b_{35} + b_{36} + b_{37} - a_1 - a_2 - 2a_3 - 2a_4 - a_5 - a_6 \\ &= b'_{33} + b'_{34} + b'_{35} + b'_{36} + b'_{37}, \\ b'_{11} + b'_{12} + b'_{13} + b'_{14}+b'_{15} &= b_{11} - b^0_{11} + b_{12} - b^0_{12} + b_{13} - b^0_{13} + b_{14} - b^0_{14} + b_{15} - b^0_{15} \\ &= b_{22} + b_{23} + b_{24} + b_{25} + b_{26} - a_1 - 2a_2 - 2a_3 - 2a_4 - a_5 - a_6 \\ &= b'_{22} + b'_{23} + b'_{24} + b'_{25} + b'_{26}. \end{align*} Similarly, we can show that $\sum_{j=i}^{6-t} b'_{ij} = \sum_{j=i+t}^{5+t} b'_{i+t,j},$\ for $2 \leq i \leq 5,\ 1 \leq t \leq 5$. Hence we have $B^{6,\infty} \supseteq \cup_{(l,b) \in J}$ Im $f_{(l,b)}$. To prove (3) we also need to show that $B^{6,\infty} \subseteq \cup_{(l,b) \in J}$ Im $f_{(l,b)}$. Let $b' = (b'_{ij}) \in B^{6,\infty}$. By (2), we can assume that $b' \neq b^{\infty}$. 
Set \begin{align*} a_1 &= \text{max} \{- b'_{11} + b'_{22} ,- b'_{11}- b'_{12}+b'_{22}+b'_{23}, - b'_{11}- b'_{12}- b'_{13}+b'_{22}+b'_{23}+b'_{24}, \\ &\hspace{40pt} - b'_{11}- b'_{12}- b'_{13} - b'_{14}+b'_{22}+b'_{23}+b'_{24}+b'_{25}, 0\},\\ a_2 &= \text{max} \{- b'_{22}+b'_{33} , - b'_{22} - b'_{23}+b'_{33}+b'_{34}, - b'_{22} - b'_{23} - b'_{24}+b'_{33}+b'_{34}+b'_{35}, \\ &\hspace{40pt} - b'_{15}, -b'_{26} - a_1, 0\}, \\ a_3 &= \text{max} \{- b'_{33}+b'_{44} , - b'_{33} - b'_{34}+b'_{44}+b'_{45},- b'_{14}, - b'_{25}, -b'_{36} - a_2, 0\}, \\ a_4 &= \text{max} \{ -b'_{44}+b'_{55}, - b'_{13}, - b'_{24}, - b'_{35}, -b'_{46} - a_3, 0\},\\ a_5 &= \text{max} \{ -b'_{12}, -b'_{23}, -b'_{34}, -b'_{45}, -b'_{56} - a_4, 0 \}, \\ a_6 &= \text{max} \{ -b'_{11} - a_1 - a_2 - a_3-a_4, -b'_{22} - a_2 - a_3 - a_4, -b'_{33} - a_3 - a_4, -b'_{44} \\ &\hspace{40pt} - a_4, -b'_{55}, 0 \}, \\ a_0 &= \text{max} \{ b'_{11} - a_2 - a_3 - a_4-a_5, b'_{11}+b'_{12} - a_2 - a_3 -a_4, b'_{11}+b'_{12}+b'_{13} - a_2 \\ &\hspace{40pt} -a_3,b'_{11}+b'_{12}+b'_{13}+b'_{14} - a_2, b'_{11}+b'_{12}+b'_{13}+b'_{14}+b'_{15}, 0 \}. \end{align*} Let $l=a_0+a_1+2a_2+2a_3+2a_4+a_5+a_6$. Let $b^0 = (b^0_{ij})$ where \begin{align*} b^0_{11} &= a_1+a_2+a_3+a_4+a_6, \ b^0_{12} =a_5, \ b^0_{13} = a_4, \ b^0_{14} =a_3, \ b^0_{15}=a_2, \ b^0_{16} =a_0, \\ b^0_{22} &= a_2+a_3+a_4+a_6, \ b^0_{23} =a_5, \ b^0_{24} = a_4, \ b^0_{25} = a_3, \ b^0_{26} = a_1+a_2, \ b^0_{27} = a_0, \\ b^0_{33} &= a_3+a_4+a_6, \ b^0_{34} =a_5, \ b^0_{35}=a_4, \ b^0_{36} =a_2+a_3, \ b^0_{37} =a_1, \ b^0_{38} = a_0+a_2, \\ b^0_{44} &= a_4+a_6, \ b^0_{45} = a_5, \ b^0_{46} = a_3+a_4,\ b^0_{47} =a_2, \ b^0_{48} =a_1, \ b^0_{49} =a_0+a_2+a_3, \\ b^0_{55} &= a_6, \ b^0_{56} = a_4+a_5, \ b^0_{57} = a_3,\ b^0_{58} =a_2, \ b^0_{59} =a_1, \ b^0_{5,10} =a_0+a_2+a_3+a_4, \\ b^0_{66} &=a_6, \ b^0_{67} = a_4, \ b^0_{68} = a_3, \ b^0_{69} = a_2, \ b^0_{6,10} = a_1, \ b^0_{6,11} =a_0+a_2+a_3+a_4+a_5.
\end{align*} Then $\varepsilon(b^0) = a_6\Lambda_0+a_5\Lambda_1+a_4\Lambda_2+a_3\Lambda_3+a_2\Lambda_4+a_1\Lambda_5+a_0\Lambda_6$ and $\varphi(b^0) = a_0\Lambda_0+a_1\Lambda_1+a_2\Lambda_2+a_3\Lambda_3+a_4\Lambda_4+a_5\Lambda_5+a_6\Lambda_6$. It is easy to see that $b^0 \in (B^{6,l})_{\text{min}}$. Set $b = (b_{ij})$ where $b_{ij} = b'_{ij} +b^0_{ij}$. Then $\sum_{j=i}^{i+5} b_{ij} = \sum_{j=i}^{i+5} b'_{ij} + \sum_{j=i}^{i+5} b^0_{ij} = 0 + l = l,\ 1 \leq i \leq 6$ and we observe that \begin{align*} b_{11} &= b'_{11} + b^0_{11} = b'_{11} + a_1+a_2+a_3+a_4+a_6 \geq 0, \\ & \hspace{4.5cm} \text{since} \; a_6 \geq -b'_{11} - a_1 - a_2 - a_3-a_4,\\ b_{12} &= b'_{12} + b^0_{12} = b'_{12} + a_5 \geq 0, \ \text{since} \; a_5 \geq - b'_{12},\\ b_{13} &= b'_{13} + b^0_{13} = b'_{13} + a_4 \geq 0, \ \text{since} \; a_4 \geq - b'_{13},\\ b_{14} &= b'_{14} + b^0_{14} = b'_{14} + a_3 \geq 0, \ \text{since} \; a_3 \geq - b'_{14},\\ b_{15} &= b'_{15} + b^0_{15} = b'_{15} + a_2 \geq 0, \ \text{since} \; a_2 \geq - b'_{15},\\ b_{16} &= b'_{16} + b^0_{16} = b'_{16} + a_0 = - b'_{11} - b'_{12} - b'_{13} - b'_{14} - b'_{15} + a_0 \geq 0, \\ & \hspace{4.5cm} \text{since} \; a_0 \geq b'_{11}+b'_{12}+b'_{13}+b'_{14} +b'_{15}. \end{align*} Similarly, we can show that $b_{ij} \in \mathbb{Z}_{\geq 0}$ for $i \leq j \leq i+5,\ 2 \leq i \leq 6$. We also have \begin{align*} b_{44} &= b'_{44} + b^0_{44} = b'_{66} + b'_{67} + a_4 + a_6 = b_{66} + b_{67},\\ b_{44} + b_{45} &= b'_{44} + b^0_{44} + b'_{45} + b^0_{45} = b'_{55} + b'_{56} + a_4 + a_5 + a_6 = b_{55} + b_{56},\\ b_{55} &= b'_{55} + b^0_{55} = b'_{66} + a_6 = b_{66}. \end{align*} Similarly, we see that $\sum_{j=i}^{6-t} b_{ij} = \sum_{j=i+t}^{5+t} b_{i+t,j},\ 1 \leq i \leq 3, 1 \leq t \leq 5$.
\ Also, \begin{align*} b_{11} &= b'_{11} + b^0_{11} \geq b'_{22} + b^0_{22} = b_{22},\ \text{since} \; b^0_{11} - b^0_{22} = a_1 \geq - b'_{11} + b'_{22},\\ b_{22} &= b'_{22} + b^0_{22} \geq b'_{33} + b^0_{33} = b_{33},\ \text{since} \; b^0_{22} - b^0_{33} = a_2 \geq - b'_{22} + b'_{33},\\ b_{33} &= b'_{33} + b^0_{33} \geq b'_{44} + b^0_{44} = b_{44},\ \text{since} \; b^0_{33} - b^0_{44} = a_3 \geq - b'_{33} + b'_{44}, \\ b_{44} &= b'_{44} + b^0_{44} \geq b'_{55} + b^0_{55} = b_{55},\ \text{since} \; b^0_{44} - b^0_{55} = a_4 \geq - b'_{44} + b'_{55}. \end{align*} Similarly, $\sum_{j=i}^{t} b_{ij} \geq \sum_{j=i+1}^{t+1} b_{i+1,j}, 1 \leq i < t \leq 5$. Hence $b \in B^{6,l}$. Then $f_{(l,b^0)}(t_{\varepsilon(b^0)} \otimes b \otimes t_{-\varphi(b^0)}) = b',$ and $b' \in \cup_{(l,b) \in J}$ Im $f_{(l,b)}$, which proves (3). \end{proof} \section{Ultra-discretization of \bf{$\mathcal{V}(D_6^{(1)})$}} It is known that the ultra-discretization of a positive geometric crystal is a Kashiwara crystal \cite{BK, N}. In this section we apply the ultra-discretization functor $\mathcal{UD}$ to the positive geometric crystal $\mathcal{V}=\mathcal{V}(D_6^{(1)})$ for the affine Lie algebra $D_6^{(1)}$ at the spin node $k=6$ in (\cite{MP}, Theorem 5.1). Then we show that, as a crystal, it is isomorphic to the crystal $B^{6,\infty}$ given in the previous section, which proves the conjecture in \cite{KNO} for this case. As a set, $\mathcal{X}=\mathcal{UD}(\mathcal{V}) = \mathbb{Z}^{15}$. We denote the variables $x_m^{(l)}$ in $\mathcal{V}$ by the same notation $x_m^{(l)}$ in $\mathcal{UD}(\mathcal{V}) = \mathcal{X}$. Let $x=(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \\ \in \mathcal{X}$.
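The piecewise-linear formulas below simplify considerably under the specializations $c = \pm 1$. As an illustrative check (a sketch for the single index $k=2$ only, phrased in terms of the quantity $\breve{c_2}$ defined later in this section), the specialization $c=-1$ recovers the explicit action of $\tilde{f_2}$ recorded at the end of the section:

```latex
Write $A = x_2^{(2)}+x_2^{(1)}$ and $B = x_1^{(1)}+x_3^{(2)}$, so that
$\breve{c_2} = \text{max}\{c+A, B\} - \text{max}\{A, B\}$. Since $A, B \in \mathbb{Z}$,
\[
  \breve{c_2}\arrowvert_{c=-1} = \text{max}\{A-1, B\} - \text{max}\{A, B\}
  = \begin{cases} -1, & A > B,\\ \hphantom{-}0, & A \leq B. \end{cases}
\]
Hence $\mathcal{UD}(e_2^{c})(x)\arrowvert_{c=-1}$ replaces $(x_2^{(2)}, x_2^{(1)})$ by
$(x_2^{(2)}-1,\, x_2^{(1)})$ if $x_2^{(2)}+x_2^{(1)} > x_1^{(1)}+x_3^{(2)}$, and by
$(x_2^{(2)},\, x_2^{(1)}-1)$ otherwise, in agreement with the explicit action of
$\tilde{f_2}$.
```

The analogous computation at $c=1$ recovers the action of $\tilde{e}_2$.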
By applying the ultra-discretization functor $\mathcal{UD}$ to the positive geometric crystal $\mathcal{V}$ in (\cite{MP}, Theorem 5.1), we have for $0 \leq k \leq 6$: \begin{align*} \mathcal{UD}(\gamma_k)(x) &= \small \begin{cases} -x_2^{(2)}-x_2^{(1)}, &k=0,\\ 2x_1^{(1)}-x_2^{(2)}-x_2^{(1)}, &k=1,\\ -x_1^{(1)}+2x_2^{(2)}+2x_2^{(1)} -x_3^{(3)}-x_3^{(2)}-x_3^{(1)}, &k=2,\\ -x_2^{(2)}-x_2^{(1)}+2x_3^{(3)}+2x_3^{(2)}+2x_3^{(1)}-x_4^{(4)}-x_4^{(3)}&\\ \hspace{15pt}-x_4^{(2)}-x_4^{(1)}, &k=3,\\ -x_3^{(3)}-x_3^{(2)}-x_3^{(1)}+2x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+2x_4^{(1)} &\\ \hspace{15pt} -x_5^{(2)}-x_5^{(1)}-x_6^{(3)}-x_6^{(2)}-x_6^{(1)}, &k=4,\\ -x_4^{(4)}-x_4^{(3)}-x_4^{(2)}-x_4^{(1)}+2x_5^{(2)}+2x_5^{(1)}, &k=5, \\ -x_4^{(4)}-x_4^{(3)}-x_4^{(2)}-x_4^{(1)}+2x_6^{(3)}+2x_6^{(2)}+2x_6^{(1)}, &k=6. \end{cases} \\ \mathcal{UD}(\varepsilon_k)(x) &= \small \begin{cases} \text{max} \{x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)} &\\ \hspace{15pt} +x_3^{(1)}-x_4^{(2)}+x_5^{(1)},x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}&\\ \hspace{15pt} -x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}-x_4^{(3)}+x_4^{(1)},x_4^{(2)}+x_4^{(1)} &\\ \hspace{15pt} -x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)} &\\ \hspace{15pt} -x_4^{(3)},x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}&\\ \hspace{15pt} -x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},x_2^{(2)}+x_3^{(1)}-x_4^{(4)},&\\ \hspace{15pt} x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, &k=0,\\ -x_1^{(1)}+x_2^{(2)}, &k=1,\\ \text{max} \{-x_2^{(2)}+x_3^{(3)}, x_1^{(1)}-2x_2^{(2)}-x_2^{(1)}+x_3^{(3)}+x_3^{(2)}\}, &k=2,\\ \text{max} \{-x_3^{(3)}+x_4^{(4)}, x_2^{(2)}-2x_3^{(3)}-x_3^{(2)}+x_4^{(4)}+x_4^{(3)}, &\\ \hspace{15pt}x_2^{(2)}+x_2^{(1)}-2x_3^{(3)}-2x_3^{(2)}-x_3^{(1)}+x_4^{(4)}+x_4^{(3)}+x_4^{(2)}\}, &k=3,\\ \end{cases} \\ \mathcal{UD}(\varepsilon_k)(x) &= \small \begin{cases} \text{max}
\{-x_4^{(4)}+x_6^{(3)}, x_3^{(3)}-2x_4^{(4)}-x_4^{(3)}+x_5^{(2)}+x_6^{(3)}, &\\ \hspace{15pt} x_3^{(3)}+x_3^{(2)}-2x_4^{(4)}-2x_4^{(3)}-x_4^{(2)}+x_5^{(2)}+x_6^{(3)}+x_6^{(2)},&\\ \hspace{15pt} x_3^{(3)}+x_3^{(2)}+x_3^{(1)} -2x_4^{(4)}-2x_4^{(3)}-2x_4^{(2)}-x_4^{(1)}+x_5^{(2)}&\\ \hspace{15pt} +x_5^{(1)}+x_6^{(3)}+x_6^{(2)} \}, &k=4,\\ \text{max} \{x_4^{(4)}-x_5^{(2)}, x_4^{(4)}+x_4^{(3)}+x_4^{(2)}-2x_5^{(2)}-x_5^{(1)}\}, &k=5, \\ \text{max} \{-x_6^{(3)}, x_4^{(4)}+x_4^{(3)}-2x_6^{(3)}-x_6^{(2)}, x_4^{(4)}+x_4^{(3)}+x_4^{(2)} &\\ \hspace{15pt} +x_4^{(1)}-2x_6^{(3)}-2x_6^{(2)}-x_6^{(1)}\}, &k=6. \end{cases} \end{align*} We define \begin{align*} \breve{c_2} &=\text{max}\{c+x_2^{(2)}+x_2^{(1)}, x_3^{(2)}+x_1^{(1)}\}-\text{max}\{x_2^{(2)}+x_2^{(1)}, x_3^{(2)}+x_1^{(1)}\},\\ \breve{c_{3_1}} &=\text{max}\{c+x_3^{(3)}+2x_3^{(2)}+x_3^{(1)}, x_2^{(2)}+x_3^{(2)}+x_3^{(1)}+x_4^{(3)},x_2^{(2)}+x_2^{(1)}+x_4^{(3)}+x_4^{(2)}\}\\ & \hspace{15pt} - \text{max}\{x_3^{(3)}+2x_3^{(2)}+x_3^{(1)}, x_2^{(2)}+x_3^{(2)}+x_3^{(1)}+x_4^{(3)},x_2^{(2)}+x_2^{(1)}+x_4^{(3)}+x_4^{(2)}\},\\ \breve{c_{3_2}} &=\text{max}\{c+x_3^{(3)}+2x_3^{(2)}+x_3^{(1)}, c+x_2^{(2)}+x_3^{(2)}+x_3^{(1)}+x_4^{(3)},x_2^{(2)}+x_2^{(1)}+x_4^{(3)}\\ & \hspace{15pt} +x_4^{(2)}\}-\text{max}\{c+x_3^{(3)}+2x_3^{(2)}+x_3^{(1)}, x_2^{(2)}+x_3^{(2)}+x_3^{(1)}+x_4^{(3)},x_2^{(2)}+x_2^{(1)}\\ & \hspace{15pt} +x_4^{(3)}+x_4^{(2)}\},\\ \breve{c_{4_1}} &=\text{max}\{c+x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}, x_3^{(3)}+x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}+x_5^{(2)}, x_3^{(3)} \\ & \hspace{15pt} +x_3^{(2)}+x_4^{(2)}+x_4^{(1)}+x_5^{(2)}+x_6^{(2)}, x_3^{(3)}+x_3^{(2)}+x_3^{(1)}+x_5^{(2)}+x_5^{(1)}+x_6^{(2)}\}\\ & \hspace{15pt} - \text{max}\{x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}, x_3^{(3)}+x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}+x_5^{(2)}, x_3^{(3)}+x_3^{(2)} \\ & \hspace{15pt}+x_4^{(2)}+x_4^{(1)}+x_5^{(2)}+x_6^{(2)}, x_3^{(3)}+x_3^{(2)}+x_3^{(1)}+x_5^{(2)}+x_5^{(1)}+x_6^{(2)}\}\\ \breve{c_{4_2}} 
&=\text{max}\{c+x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}, c+x_3^{(3)}+x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}+x_5^{(2)}, x_3^{(3)}+ \\ & \hspace{15pt} x_3^{(2)}+x_4^{(2)}+x_4^{(1)}+x_5^{(2)}+x_6^{(2)}, x_3^{(3)}+x_3^{(2)}+x_3^{(1)}+x_5^{(2)}+x_5^{(1)}+x_6^{(2)}\}-\\ & \hspace{15pt} \text{max}\{c+x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}, x_3^{(3)}+x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}+x_5^{(2)}, \\ & \hspace{15pt} x_3^{(3)}+x_3^{(2)} +x_4^{(2)}+x_4^{(1)}+x_5^{(2)}+x_6^{(2)}, x_3^{(3)}+x_3^{(2)}+x_3^{(1)}+x_5^{(2)}+x_5^{(1)}+x_6^{(2)}\}\\ \breve{c_{4_3}} &=\text{max}\{c+x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}, c+x_3^{(3)}+x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}+x_5^{(2)}, \\ & \hspace{15pt} c+x_3^{(3)}+x_3^{(2)}+x_4^{(2)}+x_4^{(1)}+x_5^{(2)}+x_6^{(2)}, x_3^{(3)}+x_3^{(2)}+x_3^{(1)}+x_5^{(2)}+x_5^{(1)}\\ & \hspace{15pt} +x_6^{(2)}\}-\text{max}\{c+x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)}, c+x_3^{(3)}+x_4^{(3)}+2x_4^{(2)}+x_4^{(1)} \\ & \hspace{15pt} +x_5^{(2)},x_3^{(3)}+x_3^{(2)}+x_4^{(2)}+x_4^{(1)}+x_5^{(2)}+x_6^{(2)}, x_3^{(3)}+x_3^{(2)}+x_3^{(1)}+x_5^{(2)}+x_5^{(1)}\\ & \hspace{15pt} +x_6^{(2)}\}\\ \breve{c_5} &=\text{max}\{c+x_5^{(2)}+x_5^{(1)}, x_4^{(3)}+x_4^{(2)}\}-\text{max}\{x_5^{(2)}+x_5^{(1)}, x_4^{(3)}+x_4^{(2)}\},\\ \breve{c_{6_1}} &=\text{max}\{c+x_6^{(3)}+2x_6^{(2)}+x_6^{(1)}, x_4^{(4)}+x_4^{(3)}+x_6^{(2)}+x_6^{(1)},x_4^{(4)}+x_4^{(3)}+x_4^{(2)}+x_4^{(1)}\}\\ & \hspace{15pt} - \text{max}\{x_6^{(3)}+2x_6^{(2)}+x_6^{(1)}, x_4^{(4)}+x_4^{(3)}+x_6^{(2)}+x_6^{(1)},x_4^{(4)}+x_4^{(3)}+x_4^{(2)}+x_4^{(1)}\},\\ \breve{c_{6_2}} &=\text{max}\{c+x_6^{(3)}+2x_6^{(2)}+x_6^{(1)}, c+x_4^{(4)}+x_4^{(3)}+x_6^{(2)}+x_6^{(1)},x_4^{(4)}+x_4^{(3)}+x_4^{(2)}\\ & \hspace{15pt} +x_4^{(1)}\} - \text{max}\{c+x_6^{(3)}+2x_6^{(2)}+x_6^{(1)}, x_4^{(4)}+x_4^{(3)}+x_6^{(2)}+x_6^{(1)},x_4^{(4)}+x_4^{(3)}\\ & \hspace{15pt} +x_4^{(2)}+x_4^{(1)}\}, \\ \breve{K} & = \text{max} \{x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},\\ &\hspace{15pt} 
x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}-x_4^{(3)}+x_4^{(1)},\\ &\hspace{15pt} x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, \\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},\\ &\hspace{15pt} x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, \end{align*} Then we have \begin{align*} \mathcal{UD}(e_k^{c})(x) &= \begin{cases} (x_6^{(3)'},x_4^{(4)'}, x_3^{(3)'}, x_2^{(2)}-c, x_5^{(2)'}, x_4^{(3)'}, x_3^{(2)'}, x_6^{(2)'}, x_4^{(2)'}, x_5^{(1)'},&\\ \hspace{15pt} x_1^{(1)}-c, x_2^{(1)}-c, x_3^{(1)'}, x_4^{(1)'}, x_6^{(1)'}), &k=0,\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)}, x_1^{(1)}+c,&\\ \hspace{15pt} x_2^{(1)}, x_3^{(1)},x_4^{(1)}, x_6^{(1)}), &k=1,\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}+\breve{c_2}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},&\\ \hspace{15pt} x_1^{(1)}, x_2^{(1)}+c-\breve{c_2}, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}), &k=2,\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}+\breve{c_{3_1}}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}+\breve{c_{3_2}}, x_6^{(2)},x_4^{(2)}, &\\ \hspace{15pt} x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}+c-\breve{c_{3_1}}-\breve{c_{3_2}}, x_4^{(1)}, x_6^{(1)}), &k=3,\\ (x_6^{(3)},x_4^{(4)}+\breve{c_{4_1}}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}+\breve{c_{4_2}}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}&\\ \hspace{15pt} +\breve{c_{4_3}},x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}+c-\breve{c_{4_1}}-\breve{c_{4_2}}-\breve{c_{4_3}}, x_6^{(1)}), &k=4,\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}+\breve{c_5}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)}&\\ \hspace{15pt} +c-\breve{c_5},x_1^{(1)}, x_2^{(1)},x_3^{(1)}, x_4^{(1)}, x_6^{(1)}), &k=5,\\ 
(x_6^{(3)}+\breve{c_{6_1}},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}+\breve{c_{6_2}}, x_4^{(2)}, &\\ \hspace{15pt} x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}+c-\breve{c_{6_1}}-\breve{c_{6_2}}), &k=6,\\ \end{cases} \\ \text{where} &\\ x_3^{(1)'} &=x_3^{(1)} + \breve{K} - \text{max} \{c+x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},\\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},c+x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, c\\ &\hspace{15pt} +x_2^{(2)} -x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, \\ x_3^{(2)'} &=-c + x_3^{(2)} + \text{max} \{c+x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)},c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},\\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, c+x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ &\hspace{15pt} c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\} - \text{max} \{c\\ &\hspace{15pt} +x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)},x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, 
c+x_3^{(2)}\\ &\hspace{15pt} -x_4^{(3)}+x_4^{(1)},c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, c+x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , c+x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},c+x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)} -x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, \\ x_3^{(3)'} &=-c + x_3^{(3)} + \text{max} \{c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, c+x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)} -x_6^{(3)} , c+x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},\\ &\hspace{15pt} x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)} -x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}\\ &\hspace{15pt} -x_5^{(2)}\} - \breve{K},\\ x_4^{(1)'} &=x_4^{(1)} + \breve{K} - \text{max} \{c+ x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} ,\\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, \\ x_4^{(2)'} &=x_4^{(2)} + \breve{K} + 
\text{max} \{c+ x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}-x_4^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} ,\\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\} - \text{max} \{c+ x_6^{(1)} + \text{max} \{ c\\ &\hspace{15pt} +x_6^{(1)},c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}\\ &\hspace{15pt} -x_4^{(2)} +x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, c+x_3^{(2)} -x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , \\ &\hspace{15pt} x_2^{(2)} +x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},c\\ &\hspace{15pt} +x_2^{(2)}+x_2^{(1)}- x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, c+x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ &\hspace{15pt} c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}- x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, c+x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)} -x_3^{(3)}-x_3^{(2)}+x_5^{(1)} + \breve{K}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} \\ &\hspace{15pt} + \breve{K}, c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)} + \breve{K}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+ x_5^{(1)} \\ &\hspace{15pt}+ \breve{K}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)} + \text{max} \{x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}\\ &\hspace{15pt} 
-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}- x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}\\ &\hspace{15pt} +x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)},\\ &\hspace{15pt} x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, c+x_4^{(2)}+x_4^{(1)}\\ &\hspace{15pt} -x_6^{(2)} + \text{max} \{ c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}\\ &\hspace{15pt} +x_4^{(2)},c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, c+x_2^{(2)}+x_3^{(1)}\\ &\hspace{15pt} -x_4^{(4)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_6^{(3)} +\text{max} \{c+x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , \\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, x_2^{(2)}+x_2^{(1)}+x_6^{(2)} -x_4^{(4)}\\ 
&\hspace{15pt} -x_4^{(3)}+\text{max} \{c+x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}-x_4^{(2)}+x_5^{(1)},x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ &\hspace{15pt} x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} +x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}\\ &\hspace{15pt} -x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}\\ &\hspace{15pt} -x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}- x_4^{(4)}+x_4^{(2)} + \breve{K}, \\ &\hspace{15pt} c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} + \breve{K}, c+x_2^{(2)}+x_3^{(1)}\\ &\hspace{15pt} -x_4^{(4)} + \breve{K}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} + \breve{K}, c+x_3^{(2)}+x_3^{(1)}\\ &\hspace{15pt} -x_5^{(2)} + \breve{K}\} ,\\ x_4^{(3)'} &=-c +x_4^{(3)} + \text{max} \{c+ x_6^{(1)} + \text{max} \{ c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c\\ &\hspace{15pt} +x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}\\ &\hspace{15pt} +x_4^{(1)}-x_6^{(2)},x_2^{(2)}+x_2^{(1)}-x_6^{(3)} ,x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c\\ &\hspace{15pt} +x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}\\ &\hspace{15pt} +x_4^{(2)}-x_5^{(2)}, c+x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}\\ &\hspace{15pt} -x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \\ &\hspace{15pt} + \breve{K}, c+x_2^{(2)}-x_3^{(3)}+ x_3^{(1)}-x_4^{(2)}+x_5^{(1)} + \breve{K}, 
c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)} \\ &\hspace{15pt} + \breve{K}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} + \breve{K}, c+x_3^{(2)}-x_4^{(3)} +x_4^{(1)} \\ &\hspace{15pt} + \text{max} \{x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)} -x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} +x_4^{(1)}, x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}\\ &\hspace{15pt} - x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}\\ &\hspace{15pt} +x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ &\hspace{15pt} x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}+ \text{max} \{ c+x_6^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, c\\ &\hspace{15pt} +x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} +x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}\\ &\hspace{15pt} -x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}\\ &\hspace{15pt} -x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, c+x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, c+x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)} +x_4^{(3)}-x_5^{(2)}, c +x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, x_2^{(2)}+x_2^{(1)}- x_6^{(3)} +\text{max} \{c\\ &\hspace{15pt} +x_6^{(1)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)} +x_5^{(1)},\\ &\hspace{15pt} x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}-x_4^{(3)}+x_4^{(1)},\\ &\hspace{15pt} c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, 
x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)} \\ &\hspace{15pt} -x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}\\ &\hspace{15pt} +x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}\\ &\hspace{15pt} +x_3^{(1)}-x_5^{(2)}\}, x_2^{(2)}+x_2^{(1)}+x_6^{(2)} -x_4^{(4)}-x_4^{(3)} +\text{max} \{c+x_6^{(1)}, x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, x_4^{(2)}+x_4^{(1)}\\ &\hspace{15pt} -x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)} -x_4^{(3)}, \ x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)} +x_4^{(2)}-x_5^{(2)}, x_2^{(2)}\\ &\hspace{15pt} +x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)} +x_3^{(1)}-x_5^{(2)}\}, c\\ &\hspace{15pt} +x_2^{(2)}+x_2^{(1)}-x_3^{(2)}- x_4^{(4)}+x_4^{(2)} + \breve{K}, c+x_2^{(2)}+x_2^{(1)} -x_3^{(3)}-x_3^{(2)}\\ &\hspace{15pt} +x_4^{(3)}+x_4^{(2)}-x_5^{(2)} + \breve{K}, c+x_2^{(2)}+x_3^{(1)}- x_4^{(4)} + \breve{K}, c +x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}+x_4^{(3)}-x_5^{(2)} + \breve{K}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)} + \breve{K}\} - \breve{K} - \text{max} \{c\\ &\hspace{15pt} +x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}\\ &\hspace{15pt} +x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, c\\ &\hspace{15pt} +x_3^{(2)}-x_4^{(3)}+x_4^{(1)},c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},c+x_2^{(2)}\\ 
&\hspace{15pt} +x_2^{(1)}-x_3^{(3)}-x_3^{(2)} +x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\} \\ x_4^{(4)'} &=-c + x_4^{(4)} + \text{max} \{c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}\\ &\hspace{15pt} -x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}\\ &\hspace{15pt} +x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}\\ &\hspace{15pt}+x_4^{(2)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},x_2^{(2)}+x_3^{(1)}\\ &\hspace{15pt}-x_4^{(4)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\} - \breve{K},\\ x_5^{(1)'} &=x_5^{(1)} + \breve{K} - \text{max} \{c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)},c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},\\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, \\ x_5^{(2)'} &=-c + x_5^{(2)} + \text{max} \{c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, 
c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)},c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},\\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\} - \breve{K},\\ x_6^{(1)'} &=x_6^{(1)} + \breve{K} - \text{max} \{c+ x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}\\ &\hspace{15pt} +x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ &\hspace{15pt} x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} +x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}\\ &\hspace{15pt} -x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}\\ &\hspace{15pt} -x_5^{(2)}, x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\}, \\ x_6^{(2)'} &=x_6^{(2)} + \text{max} \{c+ x_6^{(1)}, x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, x_3^{(3)}+x_3^{(2)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, x_3^{(2)}\\ &\hspace{15pt} -x_4^{(3)}+x_4^{(1)}, x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , x_2^{(2)}+x_2^{(1)}+x_6^{(2)}\\ &\hspace{15pt} -x_4^{(4)}-x_4^{(3)}, x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)},x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}\\ &\hspace{15pt} +x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)},\\ &\hspace{15pt} x_3^{(2)}+x_3^{(1)}-x_5^{(2)}\} - \text{max} \{ c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ &\hspace{15pt} 
c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)},c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}\\ &\hspace{15pt} +x_3^{(2)} -x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ &\hspace{15pt} x_2^{(2)}+x_2^{(1)}-x_6^{(3)} , c+x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_3^{(2)}-x_4^{(4)}+x_4^{(2)},c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},\\ &\hspace{15pt} c+x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}\\ &\hspace{15pt} +x_3^{(1)}-x_5^{(2)}\}, \\ x_6^{(3)'} &=-c + x_6^{(3)} + \text{max} \{ c+x_6^{(1)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, c+x_2^{(2)}\\ &\hspace{15pt} -x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, c+x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, c+x_3^{(3)}+x_3^{(2)}-x_4^{(3)}\\ &\hspace{15pt} -x_4^{(2)}+x_5^{(1)}, c+x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, c+x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, x_2^{(2)}+x_2^{(1)}\\ &\hspace{15pt} -x_6^{(3)} , c+x_2^{(2)}+x_2^{(1)}+x_6^{(2)}-x_4^{(4)}-x_4^{(3)}, c+x_2^{(2)}+x_2^{(1)}-x_3^{(2)}\\ &\hspace{15pt} -x_4^{(4)} +x_4^{(2)},c+x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)},c+x_2^{(2)}\\ &\hspace{15pt} +x_3^{(1)} -x_4^{(4)}, c+x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, c+x_3^{(2)}+x_3^{(1)}\\ &\hspace{15pt} -x_5^{(2)}\} - \breve{K}. 
\end{align*} As shown in \cite{BK, N}, $\mathcal{X}$ with maps $\tilde{e}_k, \tilde{f}_k :\mathcal{X} \longrightarrow \mathcal{X}\cup \{0\}, \; \varepsilon_k, \varphi_k : \mathcal{X} \longrightarrow \mathbb{Z}, \; 0\leq k \leq 6$ and $\text{wt}: \mathcal{X} \longrightarrow P_{cl}$ is a Kashiwara crystal, where for $x \in \mathcal{X}$ \begin{align*} \tilde{e}_k(x) &= \mathcal{UD}(e_k^c)(x)\arrowvert_{c=1}, \; \tilde{f}_k(x) = \mathcal{UD}(e_k^c)(x)\arrowvert_{c=-1}, \\ \text{wt}(x) &= \sum_{k=0}^6\text{wt}_k(x)\Lambda_k, \ \text{where} \; \text{wt}_k(x) = \mathcal{UD}(\gamma_k)(x), \\ \varepsilon_k(x) &= \mathcal{UD}(\varepsilon_k)(x), \; \varphi_k(x) = \text{wt}_k(x) + \varepsilon_k(x). \end{align*} In particular, the explicit actions of $\tilde{f}_k, 1\leq k \leq 6$ on $\mathcal{X}$ are given as follows. \begin{align*} \tilde{f_1}(x) & =(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}-1, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ & \hspace{15pt} x_6^{(1)}),\\ \tilde{f_2}(x) & =\begin{cases} (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}-1, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_2^{(2)}+x_2^{(1)} > x_1^{(1)}+x_3^{(2)},\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}-1, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_2^{(2)}+x_2^{(1)} \leq x_1^{(1)}+x_3^{(2)}, \end{cases}\\ \tilde{f_3}(x) & =\begin{cases} (x_6^{(3)},x_4^{(4)}, x_3^{(3)}-1, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_3^{(3)}+x_3^{(2)} > x_2^{(2)}+x_4^{(3)},\\ \hspace{1.7cm} x_3^{(3)}+2x_3^{(2)}+x_3^{(1)} > x_2^{(2)}+x_2^{(1)}+x_4^{(3)}+x_4^{(2)}, \end{cases}\\ \tilde{f_3}(x) & =\begin{cases} (x_6^{(3)},x_4^{(4)},
x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}-1, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_3^{(3)}+x_3^{(2)} \leq x_2^{(2)}+x_4^{(3)}, \ x_3^{(2)}+x_3^{(1)} > x_2^{(1)}+x_4^{(2)},\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}-1, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_3^{(2)}+x_3^{(1)} \leq x_2^{(1)}+x_4^{(2)}, \\ \hspace{1.7cm} x_3^{(3)}+2x_3^{(2)}+x_3^{(1)} \leq x_2^{(2)}+x_2^{(1)}+x_4^{(3)}+x_4^{(2)}, \end{cases}\\ \tilde{f_4}(x) & =\begin{cases} (x_6^{(3)},x_4^{(4)}-1, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_4^{(4)}+x_4^{(3)}> x_3^{(3)}+x_5^{(2)}, \\ \hspace{1.7cm} x_4^{(4)}+2x_4^{(3)}+x_4^{(2)} > x_3^{(3)}+x_3^{(2)}+x_5^{(2)}+x_6^{(2)}, \\ \hspace{25pt} x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)} > x_3^{(3)} +x_3^{(2)} +x_3^{(1)} +x_5^{(2)}+x_5^{(1)}+x_6^{(2)}, \\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}-1, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_4^{(4)}+x_4^{(3)} \leq x_3^{(3)}+x_5^{(2)}, \ x_4^{(3)}+x_4^{(2)}> x_3^{(2)}+x_6^{(2)}, \\ \hspace{1.7cm} x_4^{(3)}+2x_4^{(2)}+x_4^{(1)} > x_3^{(2)}+x_3^{(1)}+x_5^{(1)}+x_6^{(2)}, \\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}-1, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt}x_6^{(1)}) \ \text{if} \ x_4^{(4)}+2x_4^{(3)}+x_4^{(2)} \leq x_3^{(3)}+x_3^{(2)}+x_5^{(2)}+x_6^{(2)}, \\ \hspace{1.7cm} x_4^{(3)}+x_4^{(2)} \leq x_3^{(2)}+x_6^{(2)},x_4^{(2)}+x_4^{(1)}> x_3^{(1)}+x_5^{(1)}, \\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, 
x_2^{(1)}, x_3^{(1)}, x_4^{(1)}-1, \\ \hspace{15pt}x_6^{(1)}) \ \text{if} \ x_4^{(2)}+x_4^{(1)} \leq x_3^{(1)}+x_5^{(1)}, \\ \hspace{1.7cm} x_4^{(3)}+2x_4^{(2)}+x_4^{(1)} \leq x_3^{(2)}+x_3^{(1)}+x_5^{(1)}+x_6^{(2)}, \\ \hspace{25pt} x_4^{(4)}+2x_4^{(3)}+2x_4^{(2)}+x_4^{(1)} \leq x_3^{(3)}+x_3^{(2)}+x_3^{(1)} +x_5^{(2)}+x_5^{(1)}+x_6^{(2)}, \end{cases}\\ \tilde{f_5}(x) & =\begin{cases} (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}-1, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt}x_6^{(1)}) \ \text{if} \ x_5^{(2)}+x_5^{(1)} > x_4^{(3)}+x_4^{(2)},\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)}-1,x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt}x_6^{(1)}) \ \text{if} \ x_5^{(2)}+x_5^{(1)} \leq x_4^{(3)}+x_4^{(2)}, \end{cases} \\ \tilde{f_6}(x) & =\begin{cases} (x_6^{(3)}-1,x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_6^{(3)}+x_6^{(2)} > x_4^{(4)}+x_4^{(3)},\\ \hspace{1.7cm} x_6^{(3)}+2x_6^{(2)}+x_6^{(1)} > x_4^{(4)}+x_4^{(3)}+x_4^{(2)}+x_4^{(1)}, \\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}-1, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}) \ \text{if} \ x_6^{(3)}+x_6^{(2)} \leq x_4^{(4)}+x_4^{(3)}, \ x_6^{(2)}+x_6^{(1)} > x_4^{(2)}+x_4^{(1)},\\ (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \\ \hspace{15pt} x_6^{(1)}-1) \ \text{if} \ x_6^{(2)}+x_6^{(1)} \leq x_4^{(2)}+x_4^{(1)}, \\ \hspace{2.3cm} x_6^{(3)}+2x_6^{(2)}+x_6^{(1)} \leq x_4^{(4)}+x_4^{(3)}+x_4^{(2)}+x_4^{(1)}. 
\end{cases} \end{align*} To determine the explicit action of $\tilde{f_0}(x)$ we define conditions $(\breve{F1})-(\breve{F14})$ as follows. \begin{align*} (\breve{F1}) & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_6^{(3)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F2}) & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & 
\hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F3}) & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq 
x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F4}) & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & 
\hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F5}) & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_3^{(1)}-x_4^{(4)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F6}) & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & 
\hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F7}) & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} > 
x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F8}) & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(3)}-x_5^{(2)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ (\breve{F9}) & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} 
\geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F10}) & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} \geq x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & 
\hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_2^{(2)}-x_3^{(3)}+x_4^{(1)} \geq x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F11}) & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} \geq x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} \geq x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)} > x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F12}) & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & 
\hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} \geq x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_3^{(2)}-x_4^{(3)}+x_4^{(1)} > x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F13}) & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} \geq x_6^{(1)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & 
\hspace{5pt}x_4^{(2)}+x_4^{(1)}-x_6^{(2)} > x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \\ (\breve{F14}) & \hspace{5pt}x_6^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}-x_4^{(2)}+x_5^{(1)} , \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}-x_3^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_6^{(1)} > x_3^{(2)}+x_3^{(1)}-x_4^{(3)}-x_4^{(2)}+x_5^{(1)}, \\ & \hspace{5pt}x_6^{(1)} > x_3^{(2)}-x_4^{(3)}+x_4^{(1)}, \\ & \hspace{5pt}x_6^{(1)} > x_4^{(2)}+x_4^{(1)}-x_6^{(2)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}+x_2^{(1)}-x_6^{(3)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}+x_2^{(1)}-x_4^{(4)}-x_4^{(3)}+x_6^{(2)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(2)}-x_4^{(4)}+x_4^{(2)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}+x_2^{(1)}-x_3^{(3)}-x_3^{(2)}+x_4^{(3)}+x_4^{(2)}-x_5^{(2)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}+x_3^{(1)}-x_4^{(4)}, \\ & \hspace{5pt}x_6^{(1)} > x_2^{(2)}-x_3^{(3)}+x_3^{(1)}+x_4^{(3)}-x_5^{(2)}, \\ & \hspace{5pt}x_6^{(1)} > x_3^{(2)}+x_3^{(3)}-x_5^{(2)}, \end{align*} Then for $x \in \mathcal{X}$ we have $\tilde{f}_0(x) = \mathcal{UD}(e_0^c)(x)\arrowvert_{c=-1}$ given by \begin{align*} \tilde{f_0}(x) = \begin{cases} &(x_6^{(3)}+1,x_4^{(4)}+1, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}+1, x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}, x_4^{(2)}, \\ & \hspace{15pt} x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F1}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}+1, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}+1, x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}, \\ & \hspace{15pt}x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F2}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}+1, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}+1, x_4^{(3)}, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F3}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}+1, 
x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F4}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}+1, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}+1, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F5}), \vspace{1pt} \end{cases} \end{align*} \begin{align*} \tilde{f_0}(x) = \begin{cases} &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F6}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}+1, x_4^{(3)}+1, x_3^{(2)}, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F7}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}+1, x_5^{(2)}+1, x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)},x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F8}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}+1, x_3^{(2)}, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F9}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}+1, x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}+1, x_3^{(2)}, x_6^{(2)}+1, x_4^{(2)}, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}+1, x_6^{(1)}) \ \text{if} \ (\breve{F10}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}, x_6^{(1)}) \ \text{if} \ (\breve{F11}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, 
x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}+1, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}+1, x_6^{(1)}) \ \text{if} \ (\breve{F12}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}+1, x_6^{(2)}+1, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}+1, x_6^{(1)}) \ \text{if} \ (\breve{F13}), \vspace{1pt}\\ &(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}+1, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}+1, x_6^{(2)}, x_4^{(2)}+1, \\ & \hspace{15pt}x_5^{(1)}+1,x_1^{(1)}+1, x_2^{(1)}+1, x_3^{(1)}+1, x_4^{(1)}+1, x_6^{(1)}+1) \ \text{if} \ (\breve{F14}). \end{cases} \end{align*} \begin{theorem} The map \begin{displaymath} \begin{array}{lccc} \Omega : & B^{6,\infty} & \rightarrow & \mathcal{X},\\ &b=(b_{ij})_{i \leq j \leq i+5,\ 1 \leq i \leq 6} &\mapsto & x=(x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, \\ & & & \qquad x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) \end{array} \end{displaymath} defined by \begin{align*} x_m^{(l)} &= \begin{cases} \sum_{j=m-l+1}^m b_{m-l+1, j}, \ \ \text{for} \; \ \ m= 1, 2, 3, 4\\ \sum_{j=m-2l+1}^m b_{m-2l+1, j}, \ \ \text{for} \; \ \ m= 5 \\ \sum_{j=m-2l+1}^{m-1} b_{m-2l+1, j}, \ \ \text{for} \; \ \ m= 6. \end{cases} \end{align*} is an isomorphism of crystals. 
\end{theorem} \begin{proof} First we observe that the map $\Omega^{-1} : \mathcal{X} \rightarrow B^{6, \infty}$ is given by $\Omega^{-1}(x)=b$ \begin{align*} &\text{where}\ b_{11} = x_1^{(1)}, \ b_{12} =x_2^{(2)}-x_1^{(1)}, \ b_{13} = x_3^{(3)}-x_2^{(2)}, \ b_{14} =x_4^{(4)}-x_3^{(3)}, \\ &b_{15}=x_6^{(3)}-x_4^{(4)}, \ b_{16} =-x_6^{(3)}, \ b_{22} = x_2^{(1)}, \ b_{23} =x_3^{(2)}-x_2^{(1)}, \ b_{24} = x_4^{(3)}-x_3^{(2)}, \\ & b_{25} = x_5^{(2)}-x_4^{(3)}, \ b_{26} = x_6^{(3)}-x_5^{(2)}, \ b_{27} = -x_6^{(3)}, \ b_{33} = x_3^{(1)}, \ b_{34} =x_4^{(2)}-x_3^{(1)}, \\ & b_{35}=x_6^{(2)}-x_4^{(2)}, \ b_{36} =x_5^{(2)}-x_6^{(2)}, \ b_{37} = x_4^{(4)}-x_5^{(2)},\ b_{38} = -x_4^{(4)}, \ b_{44} = x_4^{(1)}, \\ & b_{45} = x_5^{(1)}-x_4^{(1)}, \ b_{46} = x_6^{(2)}-x_5^{(1)},\ b_{47} =x_4^{(3)}-x_6^{(2)}, \ b_{48} =x_3^{(3)}-x_4^{(3)}, \ b_{49} =-x_3^{(3)}, \\ & b_{55} = x_6^{(1)}, \ b_{56} = x_5^{(1)}-x_6^{(1)}, \ b_{57} = x_4^{(2)}-x_5^{(1)}, \ b_{58} = x_3^{(2)}-x_4^{(2)}, \ b_{59} = x_2^{(2)}-x_3^{(2)}, \\ &b_{5,10} = -x_2^{(2)},\ b_{66} = x_6^{(1)}, \ b_{67} = x_4^{(1)}-x_6^{(1)}, \ b_{68} = x_3^{(1)}-x_4^{(1)}, \ b_{69} = x_2^{(1)}-x_3^{(1)}, \\ &b_{6,10} = x_1^{(1)}-x_2^{(1)}, \ b_{6,11} = -x_1^{(1)}. \end{align*} Hence the map $\Omega$ is bijective. To prove that $\Omega$ is an isomorphism of crystals we need to show that for $b \in B^{6,\infty}$ and $0 \leq k \leq 6$ we have: \begin{align*} \Omega(\tilde{f_k} (b)) &= \tilde{f_k} (\Omega(b)),\\ \Omega(\tilde{e_k} (b)) &= \tilde{e_k} (\Omega(b)),\\ \text{wt}_k (\Omega(b)) &= \text{wt}_k(b),\\ \varepsilon_k (\Omega(b)) &= \varepsilon_k(b). \end{align*} Hence $\varphi_k(\Omega(b)) = \text{wt}_k (\Omega(b)) + \varepsilon_k (\Omega(b)) = \text{wt}_k(b) + \varepsilon_k(b) = \varphi_k(b)$. We observe that the conditions for the action of $\tilde{f}_k$ on $\Omega(b)$ in $\mathcal{X}$ hold if and only if the corresponding conditions for the action of $\tilde{f}_k$ on $b$ in $B^{6, \infty}$ hold for all $0\leq k \leq 6$. 
Suppose $\Omega(b) = x$ and $x_2^{(2)} + x_2^{(1)} > x_1^{(1)} + x_3^{(2)}$. Then $b_{11} + b_{12} + b_{22} > b_{11} + b_{22} + b_{23}$ and $\tilde{f}_2(x) = (x_6^{(3)},x_4^{(4)}, x_3^{(3)}, x_2^{(2)}-1, x_5^{(2)}, x_4^{(3)}, x_3^{(2)}, x_6^{(2)}, x_4^{(2)}, x_5^{(1)},x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, x_6^{(1)}) = \\\Omega(\tilde{f}_2(b))$. Similarly, we can show $\Omega(\tilde{f_k} (b)) = \tilde{f_k} (\Omega(b))$ and $\Omega(\tilde{e_k} (b)) = \tilde{e_k} (\Omega(b))$ for $k=0,1,3,4,5,6$. We also have $\text{wt}_0(\Omega(b)) = \text{wt}_0(x) = -x_2^{(2)} - x_2^{(1)} = - b_{11} - b_{12} - b_{22} = - b_{11} - b_{12} + b_{23} + b_{24} + b_{25} + b_{26} + b_{27} = \text{wt}_0(b)$ for all $b \in B^{6,\infty}$. Similarly, $\text{wt}_k (\Omega(b)) = \text{wt}_k(b)$ for $1 \leq k \leq 6$. Also, $\varepsilon_6 (\Omega(b)) = \varepsilon_6(x) = \text{max} \{-x_6^{(3)}, x_4^{(4)}+x_4^{(3)}-2x_6^{(3)}-x_6^{(2)}, x_4^{(4)}+x_4^{(3)}+x_4^{(2)}+x_4^{(1)}-2x_6^{(3)}-2x_6^{(2)}-x_6^{(1)}\} = \text{max} \{-b_{11}-b_{12}-b_{13}-b_{14}-b_{15}, -b_{11}-b_{12}-b_{13}-b_{14}-2b_{15}+ b_{22}+b_{23}+b_{24}-b_{33}-b_{34}-b_{35}, -b_{11}-b_{12}-b_{13}-b_{14}-2b_{15}+b_{22}+b_{23}+b_{24}-b_{33}-b_{34}-2b_{35}+b_{44}-b_{55}\}= \varepsilon_6(b)$. Similarly, $\varepsilon_k (\Omega(b)) = \varepsilon_k(b)$ for $0 \leq k \leq 5$, which completes the proof. \end{proof} \bibliographystyle{amsalpha}
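Since the theorem rests on the displayed formulas for $\Omega$ and $\Omega^{-1}$ being mutually consistent coordinate changes, the telescoping sums above can be checked mechanically. The following Python sketch (our own illustration; names are not from the text) encodes the entries $b_{ij}$ of $b=\Omega^{-1}(x)$ as in the proof, recomputes $x$ from the summation formula defining $\Omega$, and also verifies the $\mathrm{wt}_0$ identity used in the proof.

```python
# Consistency check for the coordinate changes Omega and Omega^{-1}
# (an illustrative sketch; not part of the paper itself).
import random

# Coordinate order of x in the text:
# (x6^(3), x4^(4), x3^(3), x2^(2), x5^(2), x4^(3), x3^(2), x6^(2),
#  x4^(2), x5^(1), x1^(1), x2^(1), x3^(1), x4^(1), x6^(1))
KEYS = [(6, 3), (4, 4), (3, 3), (2, 2), (5, 2), (4, 3), (3, 2), (6, 2),
        (4, 2), (5, 1), (1, 1), (2, 1), (3, 1), (4, 1), (6, 1)]

def omega_inverse(x):
    """Entries b_{ij} of b = Omega^{-1}(x), copied from the proof
    (only the entries needed below are listed)."""
    X = dict(zip(KEYS, x))          # X[m, l] = x_m^(l)
    return {
        (1, 1): X[1, 1], (1, 2): X[2, 2] - X[1, 1], (1, 3): X[3, 3] - X[2, 2],
        (1, 4): X[4, 4] - X[3, 3], (1, 5): X[6, 3] - X[4, 4],
        (2, 2): X[2, 1], (2, 3): X[3, 2] - X[2, 1], (2, 4): X[4, 3] - X[3, 2],
        (2, 5): X[5, 2] - X[4, 3], (2, 6): X[6, 3] - X[5, 2], (2, 7): -X[6, 3],
        (3, 3): X[3, 1], (3, 4): X[4, 2] - X[3, 1], (3, 5): X[6, 2] - X[4, 2],
        (4, 4): X[4, 1], (4, 5): X[5, 1] - X[4, 1],
        (5, 5): X[6, 1],
    }

def omega(b):
    """x = Omega(b) via the summation formula of the theorem."""
    x = []
    for m, l in KEYS:
        if m <= 4:
            i = m - l + 1
            x.append(sum(b[i, j] for j in range(i, m + 1)))
        elif m == 5:
            i = m - 2 * l + 1
            x.append(sum(b[i, j] for j in range(i, m + 1)))
        else:                        # m == 6: the sum stops at j = m - 1
            i = m - 2 * l + 1
            x.append(sum(b[i, j] for j in range(i, m)))
    return x

random.seed(0)
for _ in range(100):
    x = [random.randint(-5, 5) for _ in range(15)]
    b = omega_inverse(x)
    # Omega(Omega^{-1}(x)) = x: the sums telescope coordinate by coordinate.
    assert omega(b) == x
    # wt_0 identity from the proof:
    # -x2^(2) - x2^(1) = -b11 - b12 + b23 + b24 + b25 + b26 + b27.
    X = dict(zip(KEYS, x))
    assert -X[2, 2] - X[2, 1] == -b[1, 1] - b[1, 2] + sum(b[2, j] for j in range(3, 8))
```

This only checks the composition $\Omega \circ \Omega^{-1} = \mathrm{id}$ on arbitrary integer tuples; the opposite composition additionally uses the defining relations of $B^{6,\infty}$, as in the proof.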
\section{Introduction} A new class of compact dwarf galaxies, called UCDs, was discovered a decade ago in the Fornax cluster \citep{Hilker+99,Drinkwater+00,PDGJ01}. These objects, being brighter and much larger than globular clusters (GCs), but still far below the luminosities and sizes of both dwarf elliptical (dE) and compact elliptical (cE) galaxies, fill an empty region on the Fundamental Plane \citep{DD87}. Their origin still remains a matter of debate; several alternatives are considered: (1) UCDs are the result of the evolution of primordial density fluctuations \citep{PDGJ01}; (2) they have been formed through mergers of GCs or simply represent the extreme high-luminosity end of the GC luminosity function \citep{MHI02}; (3) UCDs are nuclei of tidally stripped (``threshed'') nucleated dE (dE,N) galaxies \citep{BCDS03} or dE,Ns with very low surface brightness; (4) UCDs are created as tidal superclusters during major mergers of galaxies \citep{FK05,KJB06}. Mass-to-light ratios of UCDs vary quite significantly \citep{Drinkwater+03,Hasegan+05,Hilker+07}, suggesting the presence of dark matter in some of them. \cite{Hasegan+05} propose to use the $M/L$ ratio (i.e. the presence of dark matter) as a criterion to distinguish between ``UCD galaxies'' and massive GCs. Developing this idea, we conclude that the presence of dark matter in a compact stellar system rules out two formation scenarios -- in this case UCDs can neither be GCs nor have been created as tidal superclusters during galaxy mergers. On the other hand, if the stellar population is not old and metal-poor, the primordial density fluctuation scenario will become implausible, leaving the only channel of UCD formation to be the tidal stripping of dE,Ns. Stellar population analysis may also help to choose the formation scenario. Presently published data \citep{MHIJ06,EGDH07} based on the analysis of absorption line strengths (Lick indices, \citealp{WFGB94}) suggest that UCDs are old and rather metal-poor. 
In this paper, we present stellar population parameters for 6 Fornax cluster UCDs, compare them with dE,N nuclei, and derive stellar masses to check for the presence of dark matter assuming different stellar IMFs. \section{Data: Sources, Reduction, Analysis} We have used the data obtained in the course of two independent studies of compact stellar systems in the Fornax cluster by G. Bergond et al. (program 074.A-0756) and M. Drinkwater et al. (program 074.A-0508). Both datasets are publicly available through the ESO Data Archive. The data have been obtained with the ESO Very Large Telescope using the FLAMES/Giraffe spectrograph \citep{Pasquini+02} in the multi-object ``MEDUSA'' mode (130 fibres in a 25~arcmin circular field of view), in the LR04 setup giving a resolving power $R\approx6300$ in the wavelength range 5010--5831~\AA\ (dispersion 0.2 \AA\,pix$^{-1}$, $\sigma_{\rm{inst}} \approx 18$ km\,s$^{-1}$), and reduced in exactly the same way as the data for the Abell~496 cluster as described in \cite{Chil08A496}. The 1.2~arcsec-wide FLAMES/Giraffe fibres, corresponding to a spatial size of about 110~pc at the Fornax distance (19 Mpc), are significantly larger than the typical effective radii of UCDs \citep{EGDH07}. Therefore, our stellar population and velocity dispersion measurements reported below should be considered as global values (however, see the discussion about aperture corrections in \citealp{Mieske+08}). Observations of the central part of the Fornax cluster produced about 900 individual spectra (see details on observations in \citealp{Bergond+07} and \citealp{Firth+07}). We inspected them visually to identify those having reasonable signal-to-noise ratios and also to remove background galaxies; spectra of the objects common to the two studies have been co-added. We ended up with a list including about 40 spectra of foreground Milky Way stars and members of the Fornax cluster of different nature: GCs, UCDs, dE and non-dwarf galaxies.
In this paper we analyse six UCDs and two dE,N nuclei (FCC~182 and FCC~266), while the brightest GCs and other Fornax cluster members will be presented in detail in the forthcoming paper. Our sample is presented in Table~\ref{tabucdlist}. \begin{table} \caption{Final sample of UCDs and dEs. (2) and (3) give identification according to Bergond et al. (2007) and Evstigneeva et al. (2007), (4)--(7) provide the number of individual exposures and the total exposure times (seconds) in the two observational programmes (B and D indices are for Bergond et al. and Drinkwater et al.), (8) lists approximate signal-to-noise ratio per pixel at $\lambda = 5300$~\AA for the combined spectra. \label{tabucdlist} } \begin{tabular}{llcccccc} \hline $n$ & ID$_{\rm B}$ & ID$_{\rm E}$ & $n_{\rm B}$ & $t_{\rm B}$, s & $n_{\rm D}$ & $t_{\rm D}$, s & S/N\\ \hline 1 & ucd257.5 & {\bf UCD1} & 3 & 10200 & 3 & 7200 & 19 \\ 2 & ucdA & {\bf UCD2} & 3 & 10200 & - & - & 9 \\ 3 & & {\bf UCD3} & - & - & 3 & 6600 & 15 \\ 4 & ucdB & {\bf UCD4} & 2 & 7200 & 3 & 6600 & 13 \\ 5 & & {\bf UCD5} & - & - & 3 & 6000 & 9 \\ 6 & {\bf ucd329.7} & & 3 & 11200 & - & - & 15 \\ \hline 7 & {\bf FCC182} & & 3 & 10200 & - & - & 40 \\ 8 & {\bf FCC266} & & 3 & 9704 & - & - & 17 \\ \hline \end{tabular} \end{table} We have fit the high-resolution {\sc pegase.hr} (Le Borgne et al. 2004) simple stellar population (SSP) models against the observational data using the {\sc NBursts} full spectral fitting technique \citep{CPSA07,CPSK07}. 
The fitting algorithm works as follows: (1) a grid of SSP spectra with a fixed set of ages (nearly logarithmically spaced from 20~Myr to 18~Gyr) and metallicities (from $-$2.0 to $+$0.5~dex) is convolved with the instrumental response of FLAMES/Giraffe as explained in Section~4.1 of \cite{CPSA07}; (2) a non-linear least square fitting against an observed spectrum is done for a template picked from the pre-convolved SSP grid using 2D-spline interpolation on $\log t$ and $Z$, broadened according to the line-of-sight velocity distribution (LOSVD) parametrised by $v$ and $\sigma$ and multiplied pixel-by-pixel by the $n^{\rm{th}}$ order Legendre polynomial, resulting in $n + 5$ parameters determined by the non-linear fitting. Because of the low signal-to-noise ratios and insufficient sampling of the LOSVD, for the spectra presented in this paper we used the pure Gaussian representation of the LOSVD and did not fit the $h_3$ and $h_4$ coefficients of the Gauss-Hermite LOSVD parametrization \citep{vdMF93} often used to perform the dynamical modelling of galaxies. The procedure and input parameters of the fitting (15$^{\rm{th}}$ order multiplicative continuum, etc.) were exactly the same as those applied to the sample of Abell~496 low-luminosity early-type galaxies \citep{Chil08A496}, thus we refer to that paper for all details concerning the spectral fitting. The only difference introduced here is that we use SSP models computed for two different stellar IMFs: \cite{Salpeter55} and \cite{KTG93}. We use the two grids of template spectra (Salpeter and Kroupa SSPs hereafter) in a completely independent way and provide a comparison of the results obtained. The key to the precise determination of low velocity dispersions from absorption-line spectra is the knowledge of the instrumental resolution as a function of wavelength and fibre number.
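The LOSVD broadening in step (2) above can be illustrated with a minimal numerical sketch (this is not the {\sc NBursts} code itself; the toy spectrum, grid step and dispersion value are invented for illustration): on a logarithmic wavelength grid a constant velocity dispersion corresponds to a constant width in pixels, so the broadening reduces to a convolution with a Gaussian kernel.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def broaden(template, dlnlam, sigma_kms):
    """Convolve a template sampled on a log-lambda grid with a Gaussian
    LOSVD of dispersion sigma_kms (the pure Gaussian case, h3 = h4 = 0)."""
    sigma_pix = sigma_kms / C_KMS / dlnlam   # dispersion in pixels
    half = int(np.ceil(5 * sigma_pix))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()                   # flux-conserving kernel
    return np.convolve(template, kernel, mode="same")

# Toy template: flat continuum with a single unresolved absorption line
dlnlam = 1e-4                 # ~30 km/s per pixel
spec = np.ones(400)
spec[200] = 0.0
broadened = broaden(spec, dlnlam, sigma_kms=25.0)
# The line becomes shallower and wider; the continuum is unchanged.
```

In the real problem this kernel is replaced by the wavelength-dependent instrumental line spread combined with the intrinsic LOSVD, which is why the instrumental resolution discussed next matters so much.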
We found that fibre-to-fibre variations in the ``MEDUSA'' mode of FLAMES/Giraffe are negligible, while the wavelength dependence has to be taken into account. We performed Monte-Carlo simulations aimed at studying the precision of kinematical and stellar population parameters determined by our spectral fitting technique for objects having very low intrinsic velocity dispersions, close to or smaller than the instrumental resolution of the spectrograph. We used two {\sc pegase.hr} SSP models for the age of 10~Gyr and [Fe/H] $=-1.0$ and $-$0.3~dex, and broadened them using the wavelength-dependent information about the spectral line spread of FLAMES/Giraffe in the LR04 setup. Then we generated sets of mock data (20 realisations for every parameter set) for signal-to-noise ratios of 5, 10, 20, and 30 and internal velocity dispersions of 6, 8, 10, 15, and 20~km\,s$^{-1}$, resulting in 800 mock spectra, which were then fitted using the {\sc NBursts} code. The results are presented in Fig.~\ref{figsimsig}. Our simulations clearly demonstrate that: (1) the FLAMES/Giraffe LR04 resolution is sufficient to measure internal velocity dispersions down to 8--10~km\,s$^{-1}$ at a signal-to-noise ratio of 20 with a precision of 10--15 per cent even for metal-poor ([Fe/H] $=-1.0$) objects; (2) for metal-rich ([Fe/H] $=-0.3$) objects we reach twice the precision of internal velocity dispersion measurements compared to metal-poor ones. We note that the uncertainties of the age, metallicity and radial velocity determinations returned by the least-square fitting are consistent with the results of the Monte-Carlo simulations. \begin{figure} \includegraphics[width=\hsize]{sig_precision.ps} \caption{Precision of velocity dispersion determinations from the full spectral fitting of FLAMES/Giraffe-LR04 spectra computed with Monte-Carlo simulations for different signal-to-noise ratios, internal velocity dispersions and metallicities.
5 curves correspond to input velocity dispersions from 6 to 20~km\,s$^{-1}$.\label{figsimsig}} \end{figure} \section{Results} In Table~\ref{tabresucd} we present the values of the (heliocentric) radial velocities, velocity dispersions, SSP-equivalent ages and metallicities, and $B$-band stellar mass-to-light ratios for six UCD galaxies and two dE,N nuclei computed for the two IMFs mentioned above. We compare our results with the literature for some of the objects. Absolute magnitudes for UCD 1 to 5 are taken from Evstigneeva et al. (2006) and converted into the $B$ band, for ucd329.7 values are from Bergond et al. (in prep.), and for FCC~182 and FCC~266 from \cite{KDG03}. Velocity dispersion values ($\sigma_{\rm{lit}}$) for UCD 1 to 5 are ``adopted global velocity dispersions'' from Table~6 of \cite{Hilker+07}; metallicities [Fe/H]$_{\rm{lit}}$ for UCD 2, 3, and 4 are from \cite{MHIJ06}. \begin{table*} \caption{Internal kinematics, stellar populations and stellar $B$-band mass-to-light ratios of 6 UCDs and 2 dEs ($n=7,8$) in the Fornax cluster. Columns (5)--(7) and (8)--(10) are for SSP models computed with Salpeter and Kroupa et al. (2003) IMF, respectively. 
\label{tabresucd}} \begin{tabular}{lccccccccc|cc} \hline $n$ & $M_B$ & $v_{\rm{hel}}$ & $\sigma$ & $t_{\rm{Salp.}}$ & [Fe/H]$_{\rm S.}$ & ($M/L$)$_{*B}$ & $t_{\rm{Kroupa}}$ & [Fe/H]$_{\rm K.}$ & ($M/L$)$_{*B}$ & $\sigma_{\rm{lit}}$ & [Fe/H]$_{\rm{lit}}$ \\ & mag & km\,s$^{-1}$ & km\,s$^{-1}$ & Gyr & dex & Salpeter & Gyr & dex & Kroupa & km\,s$^{-1}$ & dex \\ \hline 1 & -11.39 & 1557$\pm$1 & 29$\pm$1 & 9.1$\pm$2.4 & -0.46$\pm$0.04 & 5.2$\pm$1.3 & 13.1$\pm$3.1 & -0.51$\pm$0.07 & 3.8$\pm$0.9 & 27$\pm$2 & \\ 2 & -11.47 & 1230$\pm$1 & 23$\pm$2 & 5.0$\pm$1.7 & -0.24$\pm$0.07 & 3.8$\pm$1.3 & 5.0$\pm$1.9 & -0.23$\pm$0.08 & 2.3$\pm$0.8 & 22$\pm$2 & -0.90 \\ 3 & -12.77 & 1500$\pm$1 & 26$\pm$1 & 13.0$\pm$2.5 & -0.23$\pm$0.05 & 8.6$\pm$1.5 & 17.6$\pm$2.7 & -0.22$\pm$0.05 & 6.3$\pm$1.1 & 23$\pm$3 & -0.52 \\ 4 & -11.65 & 1889$\pm$1 & 26$\pm$1 & 8.1$\pm$2.4 & -0.67$\pm$0.06 & 4.0$\pm$1.1 & 9.9$\pm$3.7 & -0.68$\pm$0.07 & 2.6$\pm$0.9 & 25$\pm$3 & -0.85 \\ 5 & -11.19 & 1280$\pm$2 & 16$\pm$3 & 3.9$\pm$0.9 & -0.95$\pm$0.04 & 1.8$\pm$0.4 & 3.9$\pm$0.9 & -0.93$\pm$0.04 & 1.1$\pm$0.3 & 19$\pm$3 & \\ 6 & -10.78 & 1379$\pm$1 & 28$\pm$1 & 11.2$\pm$2.4 & -0.30$\pm$0.04 & 7.1$\pm$1.4 & 14.0$\pm$4.0 & -0.33$\pm$0.08 & 4.8$\pm$1.3 & & \\ \hline 7 & -16.50 & 1700$\pm$1 & 38$\pm$1 & 6.8$\pm$0.5 & -0.10$\pm$0.02 & 5.6$\pm$0.5 & 8.0$\pm$0.7 & -0.13$\pm$0.01 & 3.6$\pm$0.2 & & \\ 8 & -15.40 & 1551$\pm$1 & 19$\pm$1 & 1.8$\pm$0.3 & -0.24$\pm$0.02 & 1.4$\pm$0.3 & 2.4$\pm$0.4 & -0.30$\pm$0.02 & 1.2$\pm$0.2 & 22$\pm$7$^{1}$ & -0.47$^{2}$\\ \hline \multicolumn{12}{l}{$^1$\footnotesize{Central $\sigma$ given by S. De Rijcke (priv. comm.) 
is lower than the global value (44~km\,s$^{-1}$) from \cite{deRijcke+05}.}} \\ \multicolumn{12}{l}{$^2$\footnotesize{Metallicity estimate from the full spectral fitting of the VLT FORS1 spectrum \citep{Michielsen+07}.}} \end{tabular} \end{table*} \begin{figure*} \includegraphics[angle=90,width=\hsize]{UCD_spectra.ps} \caption{FLAMES/Giraffe spectra, their best-fitting templates (Kroupa IMF), fitting residuals and confidence levels of the age and metallicity determinations (inner panels) for 6 UCDs and 2 dE,Ns. All graphs are smoothed with a 7~pixel wide box-car for presentation purposes. \label{figspec}} \end{figure*} In Fig.~\ref{figspec} the discussed spectra are displayed together with their best-fitting {\sc pegase.hr} SSPs. Inner panels show confidence levels of the age and metallicity determinations for the \cite{KTG93} IMF. The velocity dispersions for 5 of the 6 UCDs are between 23 and 30~km\,s$^{-1}$, which is higher than typical values for GCs (e.g. \citealp{EGDH07}). One notices an excellent agreement between our velocity dispersion measurements for the five UCDs (UCD1--5) and the global velocity dispersions from \cite{Hilker+07}, obtained using completely different instrumentation (VLT UVES, Keck ESI for UCD1) and data analysis techniques. Stellar populations of UCD1, UCD3, UCD4 and ucd329.7 are older than 8~Gyr and exhibit metallicities between $-$0.67 and $-$0.23~dex. UCD2 has an intermediate-age and quite metal-rich stellar population, although with large uncertainties. Our estimates of metallicities for UCD3 and UCD4 are somewhat (0.2--0.25~dex) higher than the values reported by \cite{MHIJ06}, but the discrepancy is even larger ($\approx$0.65~dex) for UCD2. The UCD5 spectrum does not contain strong absorption lines in the FLAMES/Giraffe LR04 spectral range, and the Mg$b$ triplet is strongly contaminated by a cosmic ray hit. This leaves the stellar population parameters poorly constrained: the 3~$\sigma$ confidence contour remains open in the age direction, i.e.
age is undetermined. The global $\chi^2$ minimum corresponds to a young population ($t=1.2$~Gyr, [Fe/H] $=-$0.49~dex). There is a 1.7$\sigma$ confidence secondary minimum ($t=3.9$~Gyr, [Fe/H] $= -$0.95~dex). In order to choose between the two possible solutions for UCD5 we have reduced and fitted its FLAMES/Giraffe spectra obtained in the blue LR02 setup with the wavelength coverage between 3970 \AA\ and 4545 \AA. The quite low efficiency of the spectrograph in this setup is well compensated for by the blue colour of the object and the presence of very strong age- and metallicity-sensitive absorption lines (Ca~{\sc ii} H, G-band, H$\gamma$). The fitting of the LR02 data yields stellar population parameters compatible with the secondary minimum of the LR04 setup, therefore we adopt the secondary LR04 solution, corresponding to a metal-poor intermediate-age population, throughout the rest of the paper. Fitting with the Kroupa SSPs results in slightly older ages for intermediate and old stellar populations, whereas the metallicities and velocity dispersions remain unchanged. Mass-to-light ratios are 40--50 per cent lower than for the Salpeter IMF. This effect is easy to understand keeping in mind that stellar populations with the Salpeter IMF contain a larger number of faint red dwarf stars, which contribute weakly to the total light but increase the total mass. Therefore, older Kroupa IMF-based SSPs are required to fit equally ``red'' spectra. \section{Discussion} \subsection{Comparison of Stellar and Dynamical Masses} Given the stellar population parameters and luminosities, we derive stellar masses of UCDs in our sample. The stellar mass estimates, computed from the mass-to-light ratios provided by {\sc pegase.2} \citep{FR97} for the Salpeter and Kroupa et al. IMFs, are given in Table~\ref{tabmlcomp}.
In the fourth column we provide the corrected dynamical masses, derived by re-normalising the values of \cite{Hilker+07} by our velocity dispersion measurements ($M_{\rm{d, corr}} = M_{\rm{d}} (\sigma / \sigma_{\rm{lit}})^2$). The fifth and sixth columns contain the dark matter fractions estimated from the Salpeter and Kroupa SSPs and the corrected dynamical masses. For UCD1, 2, 4, and 5 the Salpeter SSP stellar masses are consistent with the dynamical ones within uncertainties, although on average the stellar masses tend to be lower. The Kroupa et al. IMF decreases them further, resulting in 40--50 per cent (65 per cent for UCD5) upper limits on the dark matter content. However, for UCD3 the stellar mass derived using the Salpeter IMF becomes significantly larger than the dynamical estimate (note the negative dark matter fraction), whereas the Kroupa IMF provides almost a perfect match between the two, suggesting zero dark matter content. Therefore, if we assume no object-to-object IMF variation, the observations are more in favour of the Kroupa et al. IMF. There is a possibility that the dynamical model of UCD3 used by \cite{Hilker+07} was not correct (for example, (1) the outer component of UCD3 is not spherically symmetric, or (2) there is significant rotation not taken into account, or (3) velocity dispersions are anisotropic), which may lead to an underestimated dynamical mass. At the same time, we cannot exclude that the SSP models do not represent the spectrum well (e.g. the object contains a metal-rich sub-population, which is not properly modelled). In this case the stellar mass may be overestimated.
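The mass bookkeeping in this comparison is simple enough to spell out explicitly (a minimal sketch using the UCD1 numbers from Tables~\ref{tabresucd} and \ref{tabmlcomp}; the dark matter fractions in the table are rounded to the nearest 5 per cent):

```python
def m_dyn_corr(m_dyn_lit, sigma, sigma_lit):
    """Rescale a literature dynamical mass to our velocity dispersion:
    M_d,corr = M_d * (sigma / sigma_lit)**2."""
    return m_dyn_lit * (sigma / sigma_lit) ** 2

def dm_fraction(m_star, m_dyn):
    """Dark matter fraction implied by stellar vs. dynamical mass."""
    return 1.0 - m_star / m_dyn

# UCD1: sigma = 29 km/s here vs. sigma_lit = 27 km/s in Hilker et al. (2007),
# so the literature dynamical mass grows by (29/27)**2, about 15 per cent.
# With M_d,corr = 3.7e7 Msun and the Salpeter/Kroupa stellar masses
# (2.9e7 and 2.1e7 Msun, all in units of 1e7 Msun):
f_salpeter = dm_fraction(2.9, 3.7)   # ~0.22, quoted as ~20 per cent
f_kroupa = dm_fraction(2.1, 3.7)     # ~0.43, quoted as ~45 per cent
```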
Given the large uncertainties of the stellar population parameters and, consequently, the stellar mass-to-light ratios, we cannot give a decisive answer to the question \emph{``Is there dark matter in UCDs?''} However, the main conclusion we draw is that \emph{UCDs are not dark matter dominated objects} and, at the present level of detection, the dark matter content can be explained by the uncertainties of the measurements and the limitations of the models used to derive dynamical and stellar masses. \begin{table} \caption{Comparison of stellar masses of 6 UCDs for the Salpeter (2) and Kroupa (3) IMFs; the dynamical masses for 5 objects (4) from Hilker et al. (2007) corrected using our velocity dispersion estimates; (5)--(6) the dark matter content (per cent) for the Salpeter and Kroupa et al. IMFs.\label{tabmlcomp}} \begin{tabular}{cccccc} \hline $n$ & $M_{*{\rm{Salp.}}}$ & $M_{*{\rm{Kroupa}}}$ & $M_{\rm{d, corr}}$ & DM$_{\rm{S.}}$ & DM$_{\rm{K.}}$ \\ & 10$^7 M_{\odot}$ & 10$^7 M_{\odot}$ & 10$^7 M_{\odot}$ & \% & \% \\ \hline 1 & 2.9$\pm$0.7 & 2.1$\pm$0.5 & 3.7$\pm$0.5 & 20 & 45 \\ 2 & 2.3$\pm$0.8 & 1.4$\pm$0.5 & 2.4$\pm$0.3 & 0 & 40 \\ 3 & 17.5$\pm$3.0 & 12.8$\pm$2.2 & 12.0$\pm$2.4 & $-$45 & 0 \\ 4 & 2.9$\pm$0.8 & 1.9$\pm$0.6 & 4.0$\pm$1.0 & 30 & 50 \\ 5 & 0.9$\pm$0.2 & 0.5$\pm$0.1 & 1.4$\pm$0.5 & 35 & 65 \\ 6 & 2.3$\pm$0.4 & 1.6$\pm$0.4 & & & \\ \hline \end{tabular} \end{table} \subsection{Metallicity-$M_B$ and metallicity-$\sigma$ relations} \begin{figure} \includegraphics[width=0.95\hsize]{UCD_A496_sigma_met.ps}\\ \includegraphics[width=0.95\hsize]{UCD_A496_MB_met.ps} \caption{Metallicity-velocity dispersion (top) and metallicity-luminosity (bottom) relations for early-type galaxies, UCDs, and GCs. The data sources are described in the text. Outlined crosses represent Galactic GCs with multi-component stellar populations (see Section 4.2 for details).
\label{figSLZ}} \end{figure} In Fig.~\ref{figSLZ} we plot metallicities versus velocity dispersions and luminosities of the 6 Fornax cluster UCDs and 2 dE,N nuclei from our sample. They are compared to: (a) Milky Way GCs from \cite{MvdM05}, where outlined crosses show GCs with direct evidence of multiple stellar populations revealed by the analysis of colour-magnitude diagrams \citep{Piotto08}; (b) Local Group dEs and dwarf spheroidals (dSph) from \cite{Mateo98}; (c) Abell~496 low-luminosity early-type galaxies from \cite{Chil08A496}; (d) a sample of intermediate-luminosity and bright early-type galaxies from \cite{SGCG06a,SGCG06c}; (e) two compact stellar systems in the Virgo cluster with spectra available in SDSS DR6 \citep{SDSS_DR6}: the transitional UCD/cE ``M59cO'' \citep{CM08} and VUCD~7, where the data have been processed in exactly the same way as for M59cO. The aperture size for the Abell~496 dE/dS0 is around 0.8~kpc, so the dE nuclei do not dominate the light; therefore, the metallicities and velocity dispersions should be closer to the global than to the central values, given the flat velocity dispersion profiles usually observed in dEs \citep{SP02, GGvdM03, vZSH04}. In the metallicity-luminosity relation (Fig.~\ref{figSLZ}, bottom panel), there is a continuous sequence $Z \varpropto L_B^{0.45}$, spanning over 6 orders of magnitude in luminosity, formed by early-type galaxies: from the faintest dSph on the left to the brightest cluster ellipticals on the right. UCD galaxies lie significantly (0.7--1.0~dex) above this sequence compared to the brightest Local Volume dSphs of similar luminosities (see \citealp{Mateo98} and references therein). In this sense, UCDs are similar to cE galaxies (e.g. \citealp{Chilingarian+07}), which have high metallicities for their luminosities and probably represent end-products of the galaxy tidal threshing \citep{BCD01}.
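The slope of this sequence yields a useful rule of thumb (a sketch; the conversion merely restates $Z \varpropto L_B^{0.45}$ in magnitudes):

```python
def delta_feh(delta_mb, slope=0.45):
    """Metallicity offset along the Z ~ L_B**slope sequence:
    d[Fe/H] = slope * dlog(L_B) = -0.4 * slope * dM_B."""
    return -0.4 * slope * delta_mb

# A UCD lying 0.7--1.0 dex above the sequence is as metal-rich as a
# galaxy on the sequence that is roughly 4--5.5 mag more luminous,
# consistent with a tidally stripped, much brighter progenitor:
dmag_low = 0.7 / (0.4 * 0.45)    # ~3.9 mag
dmag_high = 1.0 / (0.4 * 0.45)   # ~5.6 mag
```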
At the same time, on the metallicity-velocity dispersion plot (Fig.~\ref{figSLZ}, top panel) the loci of UCDs practically coincide with those of dEs. We consider this an argument for the scenario of tidal threshing of dE,Ns as a way to create UCDs. In this case, the velocity dispersion of a compact nucleus will not change very strongly, while the total luminosity of a progenitor will drop by several magnitudes. For comparison with massive GCs \citep{MvdM05} we chose a subsample of Galactic GCs with available measurements of velocity dispersions. Many GCs follow the behaviour of early-type galaxies. The three strongest outliers, namely NGC~104, NGC~6388, and NGC~6441, similarly to UCDs, reside significantly above the sequence of early-type galaxies on the metallicity--luminosity plot (Fig.~\ref{figSLZ}). It is remarkable that the latter two exhibit direct evidence of multiple stellar populations \citep{Piotto08}. Among the three other GCs (NGC~1851, NGC~2808, $\omega$~Cen) demonstrating composite stellar populations, only $\omega$~Cen follows exactly the trend defined by early-type galaxies. We note that the third strongest outlier, 47~Tuc (NGC~104), while at least as massive as NGC~6388, does not have an evident double main sequence \citep{Piotto08}. In the framework of the tidal stripping scenario, we can also propose an explanation for the large spread of UCD metallicities ($-$0.25 to $-$0.93~dex) on the [Fe/H]~vs.~$\sigma$ plot. It may be a superposition of two factors: (1) the relatively high spread of metallicities of dE progenitors of UCDs due to their own environmentally-driven evolution (see discussion in \citealp{Chil08A496}); (2) different conditions during the tidal stripping, which may lead to some changes of the velocity dispersion values compared to the progenitors.
\subsection{Comparison of dE,N nuclei and UCDs} In Fig.~\ref{figtZ} we compare ages and metallicities of the 6 Fornax cluster UCDs with nuclei of dE,Ns in the Fornax (2 objects, this study) and Virgo (26 objects) clusters, the transitional cE/UCD object M59cO \citep{CM08}, and VUCD7, another UCD in the Virgo cluster. Stellar population parameters for 22 Virgo cluster galaxies (shown in red), as well as for VUCD7 and M59cO, are obtained by analysing SDSS DR6 spectra. For the four remaining Virgo dE,Ns shown in light blue we used the results based on the 3D spectroscopic observations presented in \cite{CPSA07,CSAP07}: diamonds with error-bars correspond to the ages and metallicities of the nuclei and the blue vectors point to the parameters of the ``main bodies'' of the galaxies. Apart from the two intermediate-age objects (UCD4 and UCD5), all UCDs are old, while spanning a large range of metallicities. Most of the dE,Ns exhibit considerably younger stellar populations than UCDs. However, there are a number of old dE,N nuclei (including FCC~182) with ages comparable to those of UCDs. A scenario in which dE,N nuclei result from repeated or extended star formation episodes in the dE centres, leading to metal enrichment, can explain their rather high observed metallicities. \begin{figure} \includegraphics[width=0.95\hsize]{agemet_ucd_new.ps} \caption{Comparison of ages and metallicities of UCDs with a sample of Virgo cluster dE,N galaxies from SDSS.\label{figtZ}} \end{figure} \subsection{Origin of UCD galaxies} The stellar population parameters obtained by our SSP fitting, namely metallicities higher than $-$1.0 dex, allow us to exclude the scenario of evolving primordial density fluctuations \citep{PDGJ01}, because one would expect much lower metallicities for objects in this mass range.
The low dark matter content leaves space for all remaining channels of UCD formation: \cite{BCDS03} showed that in the case of dE stripping (``threshing'') the progenitor's nucleus is not expected to be dark matter dominated; the two other alternatives, GC merging and formation of UCDs as tidal superclusters, assume zero dark matter content. However, the scenario of merging GCs \citep{MHI02} fails to explain why we do not observe metal-poor UCDs. It is known that GCs exhibit a dichotomy in the metallicity distribution (e.g. the review by \citealp{BS06}), but observed UCDs correspond only to metal-rich GCs. Why don't metal-poor GCs merge? Moreover, in the case of a merger of metal-poor and metal-rich GCs of the same mass, the resulting luminosity-weighted metallicity in our wavelength range will be lower than the mean value, because metal-poor stellar populations have lower $M/L$ ratios than metal-rich ones. Composite stellar populations observed in massive Galactic GCs \citep{Piotto08} comprise two to four SSPs, sometimes significantly different in metallicities, which is evident from deep colour-magnitude diagrams. These objects look like good candidates for the GC merger scenario. In addition, we do see metal-poor ``composite'' GCs (NGC~1851, NGC~2808, $\omega$~Cen). However, they are an order of magnitude (except $\omega$~Cen, where at least four SSPs are evident) fainter than the UCDs we are discussing here. On a statistical basis, UCDs are too frequent to be representatives of the high-luminosity end of the GC luminosity function (GCLF). After the initial discussion in \cite{MHI02} a significant number of UCDs has been discovered. The extrapolation of the GCLF (see e.g. \citealp{ML07}) towards bright objects reveals a statistical over-population of $M_V < -11$ objects. Although the exact number depends on the adopted GCLF parameters and representation (i.e. Gaussian or $t_5$), we consider this fact important, making this channel of UCD formation rather improbable.
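The over-population argument can be quantified with a Gaussian GCLF (a sketch with illustrative parameters only: the turnover magnitude $M_{V,\rm TO}=-7.4$ and width $\sigma=1.2$~mag assumed below are typical textbook values, not fitted ones):

```python
import math

def frac_brighter(m_v, m_to=-7.4, sigma=1.2):
    """Fraction of a Gaussian GCLF brighter than magnitude m_v
    (illustrative turnover and width, not fitted values)."""
    z = (m_to - m_v) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# M_V < -11 lies ~3 sigma above the turnover, i.e. roughly one GC per
# thousand -- far fewer than the observed UCD counts would require if
# UCDs were simply the bright tail of the GCLF.
p = frac_brighter(-11.0)
```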
The old ages of most UCDs, compared to dE,N nuclei, suggest that if we consider the dE,N tidal stripping scenario, it must have happened a long time ago. However, there is a difficulty in explaining the formation of the most metal-rich UCDs, because 8--10~Gyr ago dE,N nuclei must have been less metal-rich than presently observed. A possible explanation is the tidal stripping of the most massive dE representatives (dE/E transitional objects) such as IC~3653 or FCC~182. Another possibility is to create them as stellar superclusters (Fellhauer \& Kroupa 2005) during interactions of massive galaxies; in this case, a metal-rich population formed from the metal-rich gas of the progenitors will be observed in the UCDs. Both scenarios are compatible with a low dark matter content. A possible diagnostic is to measure $\alpha$/Fe abundance ratios: populations formed in a short and intense star formation episode will be $\alpha$-overabundant (e.g. \citealp{Matteucci94}). With the present low S/N UCD data we are not able to carry out this test. Finally, we are left with the two alternatives of UCD formation: UCDs with low metallicities ($[\mbox{Fe/H}] < -0.5$~dex) are in favour of dE,N tidal stripping, while tidally created superclusters better explain metal-rich UCDs. At present we cannot exclude the diversity of UCD origins suggested by \cite{MHIJ06}. \section*{Acknowledgments} We thank participants of the ``Nuclear Star Clusters Across the Hubble Sequence'' workshop for fruitful discussions of the preliminary results, S. Mieske for useful advice and discussions of UCD origin, our anonymous referee for valuable comments, and P. Di Matteo for providing a link to the presentation of G.~Piotto. Special thanks to Gary Mamon for the critical reading of the manuscript. GB is supported at the IAA/CSIC by an I3P contract (I3P-PC2005-F) funded by the European Social Fund, with additional support by DGI grant AYA 2005-07516-C02-01 and the Junta de Andaluc\'\i a. \bibliographystyle{mn2e}
\section{Introduction} Non-Abelian strings in a class of four-dimensional ${\cal N}=2\;$\, gauge theories were discovered and explored recently \cite{ht1,ABEKY,SYmon,ht2} (for reviews see \cite{Trev}). In addition to translational (and supertranslational) moduli characterizing the position of the string center in the perpendicular plane, non-Abelian strings are endowed with orientational (and superorientational) moduli on the string world sheet. The orientational moduli emerge from the fact that the bulk theories supporting such strings possess a color-flavor locked SU$(N)_{c+f}$ global symmetry while a particular string solution preserves only an ${\rm SU}(N-1) \times {\rm U}(1)$ subgroup. Therefore, in fact, we deal with a $\mathbb{CP}^{N-1}$ family of solutions; the orientational moduli describe how each particular string solution from this family is embedded in SU$(N)_{c+f}$. These strings are BPS saturated, and the worldsheet theory retains ${\cal N}=(2,2)$ supersymmetry. As a result, holomorphy protects certain (chiral) quantities, such as tensions, which are then exactly calculable. Soon after the discovery of non-Abelian strings, it was found that kinks in the world-sheet theories on non-Abelian strings describe confined monopoles \cite{SYmon,ht2}. These kinks cannot detach themselves from the strings and can be at strong coupling even in the weakly coupled bulk theory. This observation provides a physical, and very transparent, explanation for the earlier-detected coincidence of the BPS spectra of two theories \cite{Dorey:1998yh}: the one on the world sheet and the four-dimensional ${\cal N}=2\;$\, theory in the $r=N$ vacuum on the Coulomb branch. Deformations of various parameters of the bulk theory provide an excellent research laboratory. The gauge symmetry of the bulk ${\cal N}=2\;$ theories is U$(N)$, and they have $N$ quark flavors (i.e. $N$ hypermultiplets in the fundamental representation).
Moreover, they are endowed with the Fayet--Iliopoulos (FI) term $\xi$. If $\xi\gg \Lambda^2$ the bulk theory is at weak coupling (here $\Lambda$ is the scale parameter of ${\cal N}=2\;$ SQCD). Other dimensional parameters of the bulk theory are the (s)quark mass terms. Physically observable are the differences $\Delta m = m_i-m_j$. As was mentioned, the world-sheet theory is the $\mathbb{CP}^{N-1}$ sigma model \cite{SYmon,ht2}. In fact, if $\Delta m \neq 0$, we deal with the $\mathbb{CP}^{N-1}$ model with twisted masses \cite{twisted}. One can start from $\xi=0$ and $|\Delta m |$ large (compared to $\Lambda$), and continuously deform $\xi$, increasing its value, and, simultaneously, decreasing $|\Delta m |$. One can trace this deformation from the beginning to the end. At $\xi=0$ we have conventional 't~Hooft--Polyakov monopoles; then, as $\xi$ increases, the non-Abelian strings are formed and attach themselves to the 't~Hooft--Polyakov monopoles, squeezing their magnetic flux into flux tubes. The tension of the flux tubes grows and they become thinner, while the monopoles become increasingly fuzzy, albeit retaining their BPS nature. At the end, at $\sqrt\xi \gg |\Delta m |$, they turn into kinks in the world-sheet theory. The mass of the monopoles/kinks does not depend on $\xi$. At $|\Delta m | \gg \Lambda$ this mass stays the same independently of whether the monopoles are confined or unconfined. The deformation process is described in detail in \cite{SYrev}. Let us discuss in more detail the bulk theory which has the U(2) gauge group and two flavors. If $\Lambda \ll |\Delta m | \ll \sqrt{\xi}$, quantum fluctuations on the string world sheet are tempered, and two distinct elementary strings (i.e. those with the minimal tension $2\pi\xi$) are easily identifiable. The SU(2) orientational moduli (described by the O(3)=$\mathbb{CP}^{1}$ model with the twisted masses) weakly fluctuate around two (vacuum) points: either $S_3=1$ or $S_3=-1$, i.e.
the flux is oriented in the group space in the direction of either the north or south pole.\footnote{ We will refer to them as $| \pm \rangle$ states. Needless to say, geometrically both magnetic fields, from ${\rm U}(1)_0$ and $ {\rm U}(1)_3$, are aligned along the string axis.} The magnetic flux has the following decomposition in terms of ${\rm U}(1)_0$ and ${\rm U}(1)_3$: \begin{eqnarray} (1,0): &&\quad \frac{1}{2}\left( \begin{array}{ccc} 1 & \\ & 1 \end{array} \right)_0 + \frac{1}{2} \left( \begin{array}{ccc} 1 & \\ & -1 \end{array} \right)_3 = \left( \begin{array}{ccc} 1 & \\ & 0 \end{array} \right)\ ;\nonumber \\ [3mm] (0,1): &&\quad \frac{1}{2} \left( \begin{array}{ccc} 1 & \\ & 1 \end{array} \right)_0 - \frac{1}{2} \left( \begin{array}{ccc} 1 & \\ & -1 \end{array} \right)_3 = \left( \begin{array}{ccc} 0 & \\ & 1 \end{array} \right) \,, \label{dom2} \end{eqnarray} where the subscript 3 marks the U(1) subgroup generated by the third generator of SU(2). We call these strings $(1,0)$ and $(0,1)$, respectively, since in the former case it is only the first flavor that winds, while in the latter case it is the second flavor. Note that a ``basic'' winding in ${\rm U}(1)_0$ for the non-Abelian string is by $\pi$ rather than by the conventional $2\pi$ of the Abrikosov--Nielsen--Olesen (ANO) string. But the sum of the two ${\rm U}(1)$ windings (in ${\rm U}(1)_0$ and ${\rm U}(1)_3$) creates an ordinary $2\pi$ winding locked to the first flavor or to the second. If the ${\rm U}(1)_0$ magnetic field $\vec B$ inside the string points from right to left, then in the $(1,0)$ string the ${\rm U}(1)_3$ magnetic field $\vec B^{\, 3}$ is directed from right to left too, while it is directed from left to right in the $(0,1)$ string.
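The flux bookkeeping behind the monopole interpretation can be made explicit (a schematic count following Eq.~(\ref{dom2}); here $\Phi$ denotes the total ${\rm U}(1)_0$ flux of an elementary string): the two elementary strings carry opposite half-unit ${\rm U}(1)_3$ fluxes,
\[
\Phi_3^{(1,0)} = +\tfrac{1}{2}\,\Phi\,, \qquad \Phi_3^{(0,1)} = -\tfrac{1}{2}\,\Phi\,,
\]
so at a $(1,0)$-$(0,1)$ junction the net ${\rm U}(1)_3$ flux emitted or absorbed is $\Phi_3^{(1,0)} - \Phi_3^{(0,1)} = \Phi$, one full unit, which is the magnetic flux of a single monopole.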
The combined $\vec B^{\, 3}$ magnetic flux of the two strings attached to the kink (which either flows into or out of the kink, depending on whether we have the $(1,0)$-$(0,1)$ or $(0,1)$-$(1,0)$ string junction) is one unit of the magnetic monopole flux. The monopole carries flux under ${\rm U}(1)_3$. This is depicted in Fig.~\ref{fone}.
\begin{figure}[h!t] \epsfxsize=8cm \centerline{\epsfbox{fone.eps}} \caption{{\footnotesize The confined monopole is a kink that changes the string state from $| + \rangle$ to $| - \rangle$ or vice versa. }} \label{fone} \end{figure}
Given the confined-monopole/kink correspondence outlined above, it seems necessary and timely to address two questions: (a) the manifestation of the unit-flux monopoles in {\em composite} strings; (b) {\em multiple-monopole} configurations. We will show that monopoles with the {\em unit} magnetic charge manifest themselves as junctions of the type (2,0)-(1,1), while {\em multi}monopole states, with the magnetic charge 2 and higher, exist as chains of junctions of the {\em composite} strings. It is impossible to confine two monopoles\,\footnote{We mean here two monopoles rather than a monopole-antimonopole pair with the vanishing net magnetic charge.} on the elementary non-Abelian string. Magnetic charge-2 configurations necessarily belong to composite strings built of two (or more) constituent strings. We explicitly construct, in the U(2) bulk theory with two coaxial elementary strings, a continuous family of composite kink solutions (2,0)-(1,1)-(0,2). This is depicted in Fig.~\ref{ftwo}.
\begin{figure}[h!t] \epsfxsize=11cm \centerline{\epsfbox{ftwo.eps}} \caption{{\footnotesize Two monopoles can be confined on a composite string as a composite kink. }} \label{ftwo} \end{figure}
\vspace{5mm}
If $|\Delta m | = 0$, the two-string configuration acquires a compact part of the moduli space associated with the relative orientations in the group space.
Switching on $\Delta m \neq 0$, we lift the continuous degeneracy of this part of the moduli space. One of the goals of this paper is to trace how exactly the moduli space of multiple strings is affected by the twisted-mass deformation. Assume we have two separate elementary strings (at rest) at a certain fixed distance $L$ from each other. How many states does this system have? Since all bulk excitations are massive (there is a mass gap in the bulk), the thickness $\ell$ of each elementary string is finite and is related to the inverse masses of the bulk particles. We assume that $L>\ell$. Since each string can be in two different states, we have a total of four states. The four states can then be grouped into three possible two-string configurations,
\begin{eqnarray} {\rm (i)}\,\,\qquad\qquad && \quad \; (1,0) +(1,0)\ ; \label{B2}\\[1mm] {\rm (ii)}\,\qquad\qquad && \quad \; (0,1) +(0,1)\ ; \label{B3} \\[1mm] {\rm (iii)}\qquad\qquad && \left\{ \begin{array}{c} (1,0) +(0,1)\ ; \\[1mm] (0,1) +(1,0) \ . \end{array} \right. \label{Bfour} \end{eqnarray}
In all three cases, if we take a large circle encompassing both strings in the perpendicular plane, the U(1)$_0$ winding of the matter fields is $2\pi$. This winding is noncontractible. In the first two cases (\ref{B2}), (\ref{B3}), the ${\rm U}(1)_3$-winding in SU(2) is $\pm 2\pi$. It is topologically contractible to no winding in SU(2). (There is a potential barrier, however, determined by $\Delta m\neq 0$.) In the third case the overall ${\rm U}(1)_3$-winding in SU(2) can be contracted to no winding without any barrier. The ANO string is a part of this sector, with no separating barrier. These configurations are nevertheless dynamically stable. A way to see that the last two must belong to the same sector is to realize that they can be connected by a physical exchange of two strings. If the two-string configurations above are BPS saturated,\footnote{At $L\to\infty$ all three configurations, (i), (ii) and (iii) above, are BPS saturated.
Since the multiplet is short in ${\cal N}=2\;$, the property of the BPS saturation cannot disappear as we vary $L$.} the tension of the composite object is $4\pi\xi$, i.e. twice the tension of an elementary string. The elementary string has two ground states, $| \pm \rangle$. Since each of the two strings can be in two different states, we have a total of four states. The moduli space (at $\Delta m \neq 0$) has only three disconnected components, not four (Fig.~\ref{4vs3}). The two states (\ref{Bfour}) belong to one and the same manifold ${\mathcal M}_{+-}$. They could be classified according to the interchange symmetry. However, when the inter-string distance $L$ tends to zero, only one state survives on ${\mathcal M}_{+-}$. Therefore, in our set-up, we will deal with three distinct composite strings corresponding to the three points marked by ``x'' in the three plots in Fig.~\ref{4vs3}. The manifolds ${\mathcal M}_{++}$ and ${\mathcal M}_{--}$ are similar to the moduli space of the double vortex in the U$(1)$ theory \cite{Samols:1991ne}. Asymptotically it is the cone obtained from the complex plane modded out by a $Z_2$ reflection. The singularity at the tip of the cone is resolved at the scale of the string thickness. This implies, in particular, the $\pi/2$ scattering in head-on collisions. The manifold ${\mathcal M}_{+-}$ does not have this $Z_2$ factorization and is asymptotically a plane. In a head-on collision the two strings pass one through the other, and the scattering angle is $\pi$ rather than $\pi/2$.
\begin{figure}[h!t] \epsfxsize=13cm \centerline{\epsfbox{4vs3.eps}} \caption{{\footnotesize The moduli space of vortices for the mass-deformed theory, for $n=2$, has three disconnected components: ${\cal M}_{++}$, ${\cal M}_{+-}$, and ${\cal M}_{--}$. }} \label{4vs3} \end{figure}
Solutions for the solitonic 2-strings with coinciding axes in the given bulk theory were found and studied previously \cite{knp,knp2,mmatrix,knp3} for $\Delta m =0$.
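The $\pi/2$ versus $\pi$ scattering mentioned above can be illustrated with an elementary numerical sketch. We assume the asymptotic (flat) metric far from the resolved tip, where ${\mathcal M}_{\pm\pm}$ is the $w=z^2$ plane; a geodesic in $w$ passing through the tip then forces a right-angle turn of the relative coordinate $z$:

```python
import cmath

# On M_++ the good coordinate is w = z^2 (the Z_2 quotient of the relative
# position z; the two strings sit at +z and -z). A head-on collision is a
# straight line w(t) = v*t through the tip w = 0 of the cone.
v = 1.0
def z_of(t):
    return cmath.sqrt(v * t)   # one branch of z = sqrt(w)

z_in, z_out = z_of(-1.0), z_of(+1.0)
angle = abs(cmath.phase(z_out) - cmath.phase(z_in))
assert abs(angle - cmath.pi / 2) < 1e-12   # pi/2 scattering on M_++ and M_--

# On M_+- the relative coordinate itself is good asymptotically (no Z_2
# quotient), so a straight line simply passes through: the angle is pi.
```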
The reduced moduli space (with $L=0$) was shown \cite{knp2,knp3} to be topologically equivalent to $\mathbb{CP}^{2}/{Z}_{2}$. The metric of the full moduli space, including the collective coordinates associated with $L\neq 0$, remains unknown. Unlike the metric for the elementary string moduli space, for composite strings it cannot be determined on the basis of symmetry considerations due to entanglement of the orientational and translational moduli. What is available at the moment is a model suggested by Hanany and Tong \cite{ht1,ht2} who embedded the bulk gauge theory in a stringy set-up made of intersections of D4 and NS5 branes in type IIA string theory. The bulk gauge theory of interest is defined as a certain decoupling limit of the low-energy description of the D4 branes. The flux tubes then correspond to D2 branes. The $(1+1)$-dimensional world-sheet theory is a ${\rm U}(k)$ sigma model with ${\cal N}=(2,2)$, one adjoint field $Z$ and $N$ fundamentals $n$.\footnote{We will consider the case $N=k=2$.} The Hanany--Tong (HT) model admittedly captures only some features of the 2-string solutions. For instance, at large $L$ the string interaction in the HT model falls off in a power-like manner, while in fact, with the gapped bulk theory, it should fall off exponentially. It was argued, however, that the HT model is in the same universality class as the (unknown) genuine world-sheet theory and, therefore, correctly describes holomorphic quantities and reproduces physics of the BPS objects. We will use the HT model (with the twisted masses switched on) just for these purposes. Our findings can be seen as a confirmation that it works well in this context. \vspace{1mm} Our main results can be summarized as follows. We introduce the twisted masses in the ${\cal N}=(2,2)$ HT model, and find three distinct minima of the potential energy, corresponding to three different 2-strings (i) -- (iii). 
Acting in the subspace $L=0$ of the moduli space, we find BPS-saturated kinks interpolating between each pair of vacua. The two kinks interpolating between (2,0) and (1,1) and between (1,1) and (0,2) can be called elementary. Each emanates one unit of the magnetic flux. In essence, they are the same confined monopoles as those found in \cite{SYmon,ht2}. They have the same mass as the kinks in \cite{SYmon,ht2}, which, in turn, have the same mass as the conventional 't~Hooft--Polyakov monopole in the $r=N$ vacuum on the Coulomb branch of the bulk theory ($\xi =0$). The kink interpolating between (2,0) and (0,2) represents a composite monopole, with twice the minimal magnetic flux. Its mass is twice the mass of the elementary confined monopole (see the bottom part of Fig.~\ref{ftwo}). We discuss instanton effects in composite strings in the limit $\Delta m \to 0$. We are able to find an explicit instanton solution in the Hanany--Tong model. At $L\to 0$, this is the strong-coupling limit on the world sheet. We argue that the quantum moduli space of two coincident strings is in fact built of three disconnected components. Finally, we study the renormalization-group flow.
\vspace{2mm}
The paper is organized as follows. In Sect.~\ref{theosetting} we briefly review the basic bulk theory supporting non-Abelian strings. We review both elementary strings and what is known about composite strings of nonminimal winding. In Sect.~\ref{httheo} we introduce the Hanany--Tong model including the twisted-mass deformation. The limits of validity of the HT model following from the string set-up are discussed. We then explore in detail the moduli space of composite vortices, with the twisted-mass-generated potential, at $L=0$. Three isolated supersymmetric vacua are identified. Section~\ref{spoex} treats the spectrum of excitations. There are elementary excitations -- oscillations near the vacua. Of more interest to us are solitonic excitations -- BPS kinks -- on which we focus.
In Sect.~\ref{costr} we discuss the limit $|\Delta m| \ll \Lambda$ in which the dynamics is determined by strong quantum effects. Section \ref{rg} is devoted to quantum effects from the standpoint of the sigma-model renormalization-group flow. Section~\ref{conclu} summarizes our findings. In the Appendix we consider strings with the opposite directions of $\vec{B}^3$ and generic $L$ (i.e. $L\neq 0$).
\section{Flux tube in four dimensions}
\label{theosetting}
\subsection{Theoretical setting}
\label{wthse}
We consider $\mathcal{N}=2$ SQCD with $N_f=N_c=N=2$ in the bulk, with the Fayet--Iliopoulos term ($D$ term) and masses for the quark hypermultiplets,
\begin{equation} m_1=-m_2=m \, . \label{twisma} \end{equation}
The original gauge group is U(2). The bosonic part of the action (in the Euclidean notation) is
\begin{eqnarray} S &=& \int d^4 x \bigg[ \frac{1}{4 e_3^2} |F_{\mu \nu}^k|^2 +\frac{1}{4 e_0^2} |F_{\mu \nu}|^2+ \frac{1}{ e_3^2} |D_{\mu} a^k|^2+ \frac{1}{ e_0^2} |\partial_{\mu} a|^2 \nonumber\\[3mm] &+& \hbox{\rm Tr}\, (\nabla_\mu Q)^{\dagger} (\nabla_\mu Q)+ \hbox{\rm Tr}\, (\nabla_\mu \tilde{Q}) (\nabla_\mu \tilde{Q}^{\dagger})+ V(Q,\tilde{Q},a^k,a) \bigg], \label{azione-tutta} \end{eqnarray}
where $e_0$ and $e_3$ are the gauge couplings for the U(1) and SU(2) factors, respectively, and
\begin{eqnarray} V&=& \frac{e_3^2}{8} \left( \frac{2}{e_3^2} \epsilon^{ijk} \bar{a}^{j} a^k + \hbox{\rm Tr}\, (Q^\dagger \sigma^i Q) - \hbox{\rm Tr}\,(\tilde{Q} \sigma^i \tilde{Q}^\dagger) \right)^2 \nonumber\\[3mm] & +& \frac{e_0^2}{8} \left( \hbox{\rm Tr}\, (Q^\dagger Q)- \hbox{\rm Tr}\,(\tilde{Q} \tilde{Q}^\dagger)- 2 \xi \right)^2 \nonumber\\[3mm] & +& \frac{e_3^2}{2} \left| \hbox{\rm Tr}\, (\tilde{Q} \sigma^i Q)\right|^2 + \frac{e_0^2}{2} \left| \hbox{\rm Tr}\, (\tilde{Q} Q ) \right|^2 \nonumber\\[3mm] & +& \frac{1}{2} \sum_{f=1}^2 \left[ |(a+\sigma^i a^i- m_f) Q_f |^2+ |(a+\sigma^i a^i-m_f) \tilde{Q}^{\dagger}_{f} |^2 \right] \,.
\end{eqnarray}
The vacuum expectation values (VEVs) of the squark fields are given by the following expression:
\begin{equation} Q=\sqrt{\xi} \left(\begin{array}{cc} 1 & 0\\ 0 & 1 \\ \end{array}\right) \, , \qquad \tilde{Q}= \left(\begin{array}{cc} 0 & 0\\ 0 & 0 \\ \end{array}\right) \, , \qquad a_3=m \, . \end{equation}
For a thorough review see \cite{SYrev}.
\subsection{Minimal-winding flux tube}
\label{dom3}
The minimal-winding vortex solution can be found using the {\em ansatz}
\begin{eqnarray} Q &=& \left(\begin{array}{cc} \phi_1 e^{i \varphi}& 0\\[2mm] 0 & \phi_2 \\ \end{array}\right) \, , \nonumber\\[3mm] {A_i} &=& \frac { \epsilon_{ij} x_j} {r^2} \left( \sigma^3 \frac{1-f_3}{2}+ 1 \frac{1-f}{2} \right) \, . \label{wansatz} \end{eqnarray}
The classical solution is $1/2$ BPS-saturated, leaving four supercharges unbroken. Using a color+flavor rotation, we can write a family of solutions,
\begin{eqnarray} Q &=& U\cdot \left(\begin{array}{cc} \phi_1 e^{i \varphi}& 0\\ 0 & \phi_2 \\ \end{array}\right) \cdot U^\dagger=\frac{\phi_1 e^{i \varphi}+\phi_2}{2} 1 + n^a \sigma^a \frac{\phi_1 e^{i \varphi} -\phi_2}{2} \, , \nonumber\\[3mm] {A_i} &=& \frac { \epsilon_{ij} x_j} {r^2} \left[ \left(n^a \sigma^a\right) \frac{1-f_3}{2}+ 1 \frac{1-f}{2} \right] , \label{wcloso} \end{eqnarray}
where $U$ is an arbitrary SU(2) matrix, and $n^a$ parametrize the internal $\mathbb{CP}^1$ moduli. Moreover, $x_j$ ($j=1,2$) are the two coordinates in the perpendicular plane. For general $N$, the compact part of the classical moduli space is obviously
\begin{equation} \frac{{\rm SU}(N)_{c+f}}{\left({\rm U}(1) \times {\rm SU}(N-1)\right)_{c+f}}=\mathbb{CP}^{N-1} \, , \label{copams} \end{equation}
rather than $\mathbb{CP}^{1}$. Next, we promote the classical moduli to fields living on the string world sheet. The resulting effective theory is the ${\cal N}=(2,2)$ $\mathbb{CP}^{N-1}$ sigma model.
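As an elementary cross-check of the vacuum quoted above, one can verify numerically that these VEVs annihilate every term of the potential $V$; the values of $\xi$, $m$ and the couplings below are arbitrary illustrative choices (a NumPy sketch, not part of the derivation):

```python
import numpy as np

# Arbitrary illustrative values for the FI term, mass and couplings
xi, m, e0, e3 = 1.0, 0.7, 0.3, 0.4

sigma = [np.array([[0, 1], [1, 0]], complex),        # Pauli matrices
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
eps = np.zeros((3, 3, 3))                            # epsilon^{ijk}
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

Q = np.sqrt(xi) * np.eye(2, dtype=complex)           # squark VEV
Qt = np.zeros((2, 2), complex)                       # tilde-Q VEV
a, ak = 0.0, np.array([0.0, 0.0, m])                 # adjoint scalars, a^3 = m
mf = [m, -m]                                         # m_1 = -m_2 = m

V = 0.0
for i in range(3):
    di = (2 / e3**2) * np.einsum('jk,j,k->', eps[i], ak, ak) \
         + np.trace(Q.conj().T @ sigma[i] @ Q) \
         - np.trace(Qt @ sigma[i] @ Qt.conj().T)
    V += (e3**2 / 8) * abs(di)**2 \
         + (e3**2 / 2) * abs(np.trace(Qt @ sigma[i] @ Q))**2
V += (e0**2 / 8) * abs(np.trace(Q.conj().T @ Q) - np.trace(Qt @ Qt.conj().T) - 2 * xi)**2
V += (e0**2 / 2) * abs(np.trace(Qt @ Q))**2
M = a * np.eye(2) + sum(ak[i] * sigma[i] for i in range(3))   # a + sigma^i a^i
for f in range(2):
    V += 0.5 * (np.linalg.norm((M - mf[f] * np.eye(2)) @ Q[:, f])**2
                + np.linalg.norm((M - mf[f] * np.eye(2)) @ Qt.conj().T[:, f])**2)

assert abs(V) < 1e-12   # the configuration is an exact vacuum, V = 0
```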
The quark mass terms (more exactly, their differences) descend to the world sheet in the form of the twisted masses. \subsection{Two coincident strings} Using the index theorem, one can show \cite{ht1} that in the $\mathcal{N}=2$ theory with $N_c=N_f=N$, the moduli space of the winding-$k$ vortices is a manifold with real dimension $ 2 k N $. In the limit of large distance between the $k$ elementary vortices, this has a simple interpretation: $2 k$ of these coordinates correspond to the position of each elementary string (translational moduli) while $2 k (N-1)$ correspond to the orientation of each constituent in the internal $\mathbb{CP}^{N-1}$ space (orientational moduli). As was mentioned, we focus on the case $k=N=2$. An explicit solution for two coincident vortices was found in \cite{knp2} by virtue of the {\em ansatz} \begin{eqnarray} Q &=& \left(\begin{array}{cc} -\cos \frac{\gamma}{2} e^{ 2 i \varphi} \kappa_1 & \sin \frac{\gamma}{2} e^{ i \varphi} \kappa_2 \\[3mm] - \sin \frac{\gamma}{2} e^{ i \varphi} \kappa_3 & -\cos \frac{\gamma}{2} \kappa_4 \\ \end{array}\right), \label{ans1} \\[4mm] A^0_{(i)} &=& -\frac{\epsilon_{ij} x_j}{r^2} (2-f_0)\,, \qquad A^3_{(i)} = -\frac{\epsilon_{ij} x_j}{r^2} \left[(1+\cos \gamma )-f_3\right], \label{ans2} \\[3mm] A^1_{(i)} &=& -\frac{\epsilon_{ij} x_j}{r^2} (\sin \gamma) (\cos\varphi) (1-g), \label{ans3} \\[3mm] A^2_{(i)} &=& +\frac{\epsilon_{ij} x_j}{r^2} (\sin \gamma) (\sin \varphi) (1-g)\,, \label{ans4} \end{eqnarray} where the functions $\kappa_{1,2,3,4}$, $f_{0,3}$, and $g$ depend only on $r=\sqrt{x_1^2+x_2^2}$, the angle $\varphi$ is the polar angle in the plane perpendicular to the string axis, while $\gamma$ is the angle characterizing the relative group orientation of two strings comprising the 2-string in question (for further details see \cite{knp2}). Now we can apply an ${\rm SU}(2)_{c+f}$ rotation to this solution. For generic $\gamma$ all generators of this symmetry are spontaneously broken on the string. 
Thus, the moduli space for coincident strings has dimension four. A more general solution, corresponding to strings with arbitrary orientations and relative separation, can be found in the framework of the moduli-matrix approach \cite{mmatrix}. It is difficult to carry out an honest first-principles derivation of the ${\cal N}=(2,2)$ sigma model on the 2-string world sheet directly from the bulk theory. The world-sheet description involves a sigma model with a highly nontrivial metric, not determined by the symmetries of the problem. With nonvanishing masses for the quark hypermultiplets (with $|\Delta m |\ll\sqrt{\xi}$), there is, in addition, a nontrivial potential on the moduli space, which is also difficult to calculate in full from the four-dimensional theory. In the absence of a genuine world-sheet model derived from first principles, we will settle for a simplified substitute believed to describe well some crucial aspects of the world-sheet physics.
\section{Two-dimensional effective theory}
\label{httheo}
\subsection{The brane construction}
To begin with, let us briefly review the Hanany--Tong construction \cite{ht1,ht2}, based on the string-theory/brane realization \cite{hw,witten97}, type IIB or IIA, for a $(2+1)$- or $(3+1)$-dimensional bulk, respectively. Focusing on the latter case, we start from two parallel NS$5$-branes extended in the directions $x^{0,1,2,3,4,5}$ and separated by some distance $\Delta x^6$ in the direction $x^{6}$ (see Fig.~\ref{fonep}). The gauge D$4$-branes (we have $N$ such branes) are extended in the directions $x^{0,1,2,3}$ and $x^6$, between the above two NS$5$-branes. Moreover,
$$ 1/e^2 \sim \frac{\Delta x^6}{g_s l_s}\,, $$
where $e$ is the induced gauge coupling. The flavor D$4$-branes are semi-infinite in $x^{6}$ and attached only to one of the NS$5$-branes, say NS$5'$.
When the gauge and flavor branes are locked, the NS$5'$ brane can be moved; a global translation in the $x^9$ direction corresponds to the induced FI term
$$\xi \sim \frac{\Delta x^9}{ g_s l_s^3}\,.$$
The field theory living on $x^{0,1,2,3}$ of the D$4$-branes is obtained by decoupling the Kaluza--Klein modes ($\sim 1/\Delta x^6$) as well as the string modes ($\sim 1/l_s$). This decoupling limit is
\begin{equation} \Delta x^6 = \delta_6 g_s l_s\,, \qquad \Delta x^9 = \delta_9 g_s l_s\,, \qquad g_s \to 0\,. \label{wachie} \end{equation}
The scaling formula (\ref{wachie}) reproduces, at energy scales much lower than $1/l_s$, a $(3+1)$-dimensional theory with fixed values of $e$ and $\xi$. To be able to consistently include Higgsing of the bulk theory, we must require
\begin{equation} \delta_9 \ll 1\,, \,\delta_6\,. \label{tbad} \end{equation}
To impose the classical limit $e \to 0$, it is necessary to have
\begin{equation} \delta_6 \gg 1\,. \label{tbadd} \end{equation}
We do not take into account strong-coupling effects here.
\begin{figure}[h!t] \epsfxsize=8cm \centerline{\epsfbox{branes.eps}} \caption{{\footnotesize The brane set-up in Type IIA string theory. The $k$-string configurations correspond to $k$ D2-branes stretching between the NS$5$ and $N$ D$4$-branes. $N$ chiral multiplets $n_j$ in the fundamental representation arise from fundamental strings stretching between the D$4$ and the D$2$ branes.}} \label{fonep} \end{figure}
In this set-up the flux tubes correspond to D$2$-branes extended in the directions $x^{0,3,9}$ and stretched between one NS$5$-brane and the $N$ distinct D$4$ branes.
As a result, the two-dimensional ${\cal N}=(2,2)$ theory on the world sheet of $k$ parallel strings is a ${\rm U}(k)$ gauge theory with one chiral multiplet $Z$ in the adjoint representation (which corresponds to the position moduli of the vortex strings in the transverse plane $x_{1,2}$) and $N$ chiral multiplets $n_j$ in the fundamental representation (which arise from fundamental strings stretching between the D$4$ and the D$2$ branes). The adjoint gauge multiplet is the remnant of the D$2$-brane gauge theory, compactified on the segment $\Delta x^9$. The flavor multiplets correspond to the strings with one end on the D$2$-branes and the other on the D$4$-branes. For the strings far apart,
$$ {\rm U}(k) \to {\rm U}(1)^k\,, $$
and
$$Z=\hbox{\rm diag}(Z_1,\dots, Z_k)\,.$$
The theory reduces to $k$ distinct factorized ${\cal N}=(2,2)$ $\mathbb{CP}^{N-1}$ models. The induced gauge coupling and the FI term of this two-dimensional theory are
$$1/g^2 \sim \frac{\Delta x^9 l_s }{ g_s}\, ,\qquad r \sim \frac{\Delta x^6 }{ g_s l_s} \sim \frac{1}{e^2}\,,$$
respectively. The field theory described above is valid in the limit in which we can honestly treat the vortices as stretched D$2$ branes. In other words, we must be able to neglect the effect of the junctions between the D$2$ and D$4$ branes. A D$2$ brane terminating on a D$4$ brane can be described as a spike of the D$4$. The profile of the spike is $\propto l_s^2 / r$. The decoupling of the junction happens for $\Delta x^9 \gg l_s$, so that the junction is very small, namely,
\begin{equation} \frac{1}{g_s} \ll \delta_9 \,. \label{ptpbad} \end{equation}
This ensures, in particular, that the gauge coupling $g \sim (\sqrt{\delta_9} l_s)^{-1}$ is smaller than the string scale. We see a conflict between the two validity limits, (\ref{ptpbad}) on the one hand and (\ref{wachie}) -- (\ref{tbadd}) on the other.
Indeed, (\ref{tbad}) (with $ g_s \to 0$) is the requirement that the scale of Higgsing in the bulk theory be smaller than the string scale. It is obviously incompatible with the constraint (\ref{ptpbad}). Thus, the Hanany--Tong model on the world sheet cannot be obtained in the field-theoretic set-up.
\subsection{Some preliminary comments}
\label{dom4}
Here we pause to mention an issue which elucidates the distinctions between the two formulations: the D-brane description and the soliton description. If the bulk theory is a weakly coupled field theory (e.g. the model described in Sect.~\ref{wthse}, see \cite{SYrev}), the string thickness is $\ell \sim 1/(e \sqrt{\xi})$ with $e\ll 1$. Because $e\ll 1$, it is parametrically larger than $1/\sqrt{\xi}$, the length scale set by the tension $T$ ($T\sim \xi$). Under these circumstances, a weakly coupled sigma-model description for the translational modes is possible. The metric starts varying when the strings start to overlap in the perpendicular plane. Thus, it is very smooth on the scale set by the tension. In other words, if we change the distance between the strings by $\delta L \sim 1/\sqrt{\xi}$, the variation of the metric is negligible. The D-brane description is, instead, completely different. The D-branes are infinitely thin objects, and the low-energy physics is described by the massless open-string modes: a non-Abelian gauge theory with the translational modes in the adjoint representation. The non-Abelian gauge symmetry is spontaneously broken by the inter-brane distances. At large distances, the number of translational modes is $N$, as is the number of branes. When the separation is zero, the number of massless modes becomes $N^2$. This is a crucial difference from the sigma-model description of solitons, where the dimension of the moduli space is never enhanced. Let us consider two $\mathbb{CP}^{1}$ non-Abelian strings of thickness $\ell$, tension $T$ and relative distance $L$.
Focus on the state in which these strings have the opposite $\vec{B}^3$ orientations (i.e. (1,0) + (0,1)). In field theory we have in general $\ell \gg 1/\sqrt{T}$. If we descend to $L<\ell$, it is not possible to describe separately the orientational moduli for the two strings. If the elementary strings comprising the 2-string overlap in the transverse plane, the non-Abelian magnetic fluxes are summed up and the ${\rm U}(1)_3$ magnetic fluxes in the (1,1) string should annihilate each other, with no relative orientation moduli surviving. For a field-theoretic realization of the D-brane physics, we would need $\ell \ll T^{-1/2}$. Then we could have, simultaneously, $\ell\ll L$ and $LT^{1/2}\lesssim 1$. If $LT^{1/2}\lesssim 1$, the elementary strings in the 2-string configuration can be viewed as coinciding. At the same time, the magnetic fluxes of the constituent strings do not overlap, because $\ell\ll L$. Then, the configuration (\ref{Bfour}) would indeed be characterized by a well-defined set of independent orientational moduli. That is what we see in the D-brane description. This regime does not seem to be achievable in weakly coupled bulk theories. The strategy we use in this paper is to take the HT model {\em per se}, and then use it in the field-theory domain of validity. That is, we consider the sigma model obtained upon integrating out the gauge fields of the HT model. This is the limit in which the gauge fields become just auxiliary fields. Needless to say, this is not going to reproduce the ``exact'' sigma model that one could derive in field theory, nor even describe the D2-brane dynamics in the limit of validity (\ref{ptpbad}). But many features are hopefully captured. (For example, in the HT model the elementary strings start interacting when $L\sim 1/(\sqrt{\xi} e_3)$, which is consistent with the bulk expectations, see Eq.~(\ref{omegamax})).
The BPS sector lives up to this promise in full.
\subsection{Hanany--Tong model}
\label{whtmo}
As was mentioned, the bosonic sector of the HT model is described by a U$(k)$ gauge field with field strength $F_{01}$; a complex scalar $\sigma$ in the adjoint of U$(k)$ (which corresponds to the position of the D$2$ branes in the $x_{4,5}$ plane), in the same supermultiplet as the gauge field; a complex scalar $Z$ in the adjoint representation of U$(k)$ (which corresponds to the position of the D$2$ branes in the $x_{1,2}$ plane); and $N$ scalars $n_j$ in the fundamental of U$(k)$, which we can combine into a $k \times N$ matrix $n_j^l$ (where $j$ is a global SU$(N)$ index and $l$ is a gauge U$(k)$ index). The parameters of the model are: (i) the two-dimensional U$(k)$ gauge coupling $g$ (with the dimension of a mass); (ii) the twisted masses $m_j$; (iii) the dimensionless Fayet--Iliopoulos parameter $r$; and (iv) the theta angle $\theta$. (In the notation of Ref.~\cite{SYrev}, one has $r=2\beta$.) The FI parameter $r$ is not to be confused with $r=\sqrt{x_1^2+x_2^2}$, which will not appear below. The classical value of the FI term $r$ is directly related to the four-dimensional gauge coupling,
\begin{equation} r=\frac{4 \pi}{e^2_{3}} \, . \end{equation}
For each of the $N$ chiral multiplets $n_j$ one can introduce a different twisted-mass parameter $m_j$. Only the differences between the twisted masses are physically significant; $\sum_{i=1,...,N} m_i$ can be set to zero by a linear shift in the trace of $\sigma$. Due to the chiral anomaly, one can always set the vacuum angle $\theta=0$ by virtue of a phase rotation of the complex mass parameters $m_i$. The action of the ${\cal N}=(2,2)$ U$(k)$ two-dimensional gauge model can be obtained by dimensional reduction of the four-dimensional $\mathcal{N}=1$ theory. The standard conventions are summarized in \cite{witten}.
The bosonic part of the Lagrangian takes the form \begin{eqnarray} && \frac{1}{g^2} \hbox{\rm Tr}\, \left( -\frac{1}{2} F^{\mu \nu } F_{\mu \nu} +\frac{1}{2} | {\mathcal D}_\mu \sigma |^2 - \frac{1}{8} ([\sigma,\sigma^\dagger])^2+\frac{1}{2} D^2 -g^2 \, r \, D \right) \nonumber\\[3mm] && +\left( ({\mathcal D}^\mu n_i^\dagger) ({\mathcal D}_\mu n_i)- \frac{1}{2} n_i^\dagger \{ \sigma- \mathbb{I} \, m_i, \sigma^\dagger - \mathbb{I} \, m_i^* \} n_i + n_i^\dagger D n_i \right) \nonumber\\[3mm] && + \hbox{\rm Tr}\, \left( | {\mathcal D}_\mu Z |^2 -\frac{1}{2} \{\sigma,\sigma^\dagger \} \{ Z, Z^\dagger \} + (Z^\dagger \sigma Z \sigma^\dagger+ Z^\dagger \sigma^\dagger Z \sigma ) +Z^\dagger [D,Z] \right) . \nonumber\\ \label{ans44} \end{eqnarray} The symbol $\mathbb{I}$ is used for the $k \times k$ identity matrix. The scalar fields in this action have the following dimensions: \[ Z, \, n_i \propto [{\rm mass}]^0 \, , \qquad D \propto [{\rm mass}]^2 \, , \qquad \sigma \propto [{\rm mass}] \, . \] The eigenvalues of $Z$ correspond to the positions of the component strings in the perpendicular plane, measured in the units of $1/\sqrt{T}$ where $T$ is the vortex tension. The trace of $Z$ is completely decoupled from dynamics; therefore, we can (and will) set it to zero.\footnote{In terms of the parameter $L$ used previously, $2|z| = L\sqrt{T}$, see Eq.~(\ref{tfiga}).} The classical vacua are given by the condition of vanishing of the $D$-terms, \begin{equation} D=-g^2 \left( [Z,Z^\dagger]+n n^\dagger - \mathbb{I} \, r \right) = 0 \, . \label{tvand} \end{equation} For $m_i=0$ this constraint gives us the classical moduli space. If the adjoint field $Z$ were not present, the theory would correspond to the gauged formulation of the ${\cal N}=(2,2)$ sigma model with target space in the Grassmannian space \begin{equation} G_{N,k}=\frac{{\rm U}(N)}{{\rm U}(N-k) \times {\rm U}(k)} \, . 
\label{tgrassm}
\end{equation}
The $Z$ field introduces new degrees of freedom in the Lagrangian, making the sigma model at hand more contrived. The eigenvalues of $Z$ are the classical moduli, which must survive switching on quantum corrections. In the limit when the difference between the eigenvalues of $Z$ is $\gg 1$, the U$(k)$ gauge group is Higgsed down to U(1)$^k$. The adjoint field $Z$ is then decoupled, and we recover $k$ copies of the supersymmetric sigma model with the target space
\begin{equation} \mathbb{CP}^{N-1}=\frac{{\rm U}(N)}{{\rm U}(N-1) \times {\rm U}(1)} \, . \label{ttargsp} \end{equation}
In the opposite limit, in which the eigenvalues of ${Z}$ fuse at a common value $z_0$, the corresponding dynamics is richer and more interesting. In this limit the matrix $Z$ can be put in a triangular form (with nonvanishing elements on the main diagonal and above it). Both diagonal entries are $z_0$. The degrees of freedom corresponding to the upper-triangle elements of $Z$ are classically massless and couple nontrivially to other degrees of freedom of the U$(k)$ theory. At the quantum level the Fayet--Iliopoulos term $r$, which determines the strength of interaction on the world sheet, runs logarithmically at one loop; by dimensional transmutation it is traded for a dynamical scale $\Lambda_{1+1}$ (see Sec.~\ref{rg}). For $k=1$ this corresponds to the running coupling of the asymptotically free $\mathbb{CP}^{N-1}$ sigma model. In what follows we limit ourselves to $N=k=2$. In order to study the system at weak coupling we introduce the twisted mass term
\begin{equation} m_1=-m_2=m \, , \qquad | m |\gg \Lambda_{1+1}\,. \label{ttm} \end{equation}
For our purposes it is sufficient to assume $m$ real.
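A quick arithmetic cross-check of the counting (plain Python; the only input is the coset dimensions quoted above): the Grassmannian parametrized by the $n$ fields, plus the $k^2$ complex entries of $Z$, reproduces the real dimension $2kN$ of the $k$-vortex moduli space given by the index theorem quoted earlier.

```python
# Real dimension of the Grassmannian G_{N,k} = U(N) / (U(N-k) x U(k))
def dim_grassmannian(N, k):
    return N**2 - (N - k)**2 - k**2      # = 2 k (N - k)

# Adding the k x k complex adjoint Z restores the full vortex moduli space
def dim_moduli(N, k):
    return dim_grassmannian(N, k) + 2 * k**2

for N in range(1, 6):
    for k in range(1, N + 1):
        assert dim_grassmannian(N, k) == 2 * k * (N - k)
        assert dim_moduli(N, k) == 2 * k * N   # matches the index theorem

# The case studied here, N = k = 2: eight real moduli
assert dim_moduli(2, 2) == 8
```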
\subsection{Moduli space}
\label{tvms}
For $N=2$ and $k=2$, we can use the gauge fixing
\begin{equation} Z= \left(\begin{array}{cc} z & r^{1/2} \, \omega\, e^{i \zeta} \\ 0 & -z \\ \end{array}\right) \, , \qquad n=\left(\begin{array}{cc} a_1 & a_2 \\ b_1 & b_2 \\ \end{array}\right) \, , \label{tfiga} \end{equation}
where $\omega$ is a real positive parameter. This does not completely fix the gauge; it remains to fix the continuous U(1)'s,
\begin{equation} {\rm U}(1)_1 \, {\rm :} \quad U= \left(\begin{array}{cc} e^{i \varphi} & 0 \\ 0 & 1 \\ \end{array} \right) \,, \qquad {\rm U}(1)_2 \, {\rm :} \quad U=\left(\begin{array}{cc} 1 & 0 \\ 0 & e^{i \varphi} \\ \end{array}\right) , \label{t27} \end{equation}
under which $z$ is uncharged, $$\tilde{\omega}=\omega e^{i \zeta}$$ transforms as $(1,-1)$, $\, a_i$ as $(1,0)$ and $b_i$ as $(0,1)$. There is also a discrete subgroup of the gauge group that remains to be fixed. With this parametrization, the $D$-term constraints take the form
\begin{equation} \sum_i |a_i|^2 = r \, (1-\omega^2) \, , \qquad \sum_i |b_i|^2= \, r(1+\omega^2) \, , \qquad a_1 b_1^* +a_2 b_2^*=2 \sqrt{r} \, z^* \omega \, . \label{tdtc} \end{equation}
It follows that for fixed $|z|$ the allowed range of $\omega$ is
\begin{equation} 0 \leq \omega \leq \omega_{\rm max}=\sqrt{\frac{\sqrt{r^2+4 |z|^4}-2 |z|^2}{r}} \, . \label{omegamax} \end{equation}
The value of $\omega_{\rm max}$ gives us a measure of how strongly the two elementary strings interact with each other. In the limit $|z| \rightarrow \infty $,
$$ \omega_{\rm max} \approx \sqrt{r} /(2|z|) \,. $$
In this limit $a_i$ and $b_i$ parametrize two decoupled $\mathbb{CP}^1$'s with radii $\sqrt{r}$. In order for the two copies of $\mathbb{CP}^1$ to interact, $z$ should be of the same order of magnitude as $\sqrt{r}$.
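The bound (\ref{omegamax}) is nothing but the Cauchy--Schwarz inequality applied to the constraints (\ref{tdtc}); this can be confirmed symbolically (a SymPy sketch, with $z2=|z|^2$ and $w2=\omega^2$):

```python
import sympy as sp

r, z2, w2 = sp.symbols('r z2 w2', positive=True)   # z2 = |z|^2, w2 = omega^2

# Cauchy-Schwarz, |sum_i a_i b_i^*|^2 <= |a|^2 |b|^2, applied to the D-term
# constraints: 4 r z2 w2 <= r^2 (1 - w2^2); saturation gives the maximal omega.
sols = sp.solve(r**2 * (1 - w2**2) - 4 * r * z2 * w2, w2)
w2_max = [s for s in sols if s.subs({r: 1, z2: 1}) > 0][0]

claimed = (sp.sqrt(r**2 + 4 * z2**2) - 2 * z2) / r   # square of Eq. (omegamax)
assert sp.simplify(w2_max - claimed) == 0

# Large |z|: omega_max^2 * |z|^2 / r -> 1/4, i.e. omega_max = O(sqrt(r)/|z|)
assert sp.limit((w2_max * z2 / r).subs(r, 1), z2, sp.oo) == sp.Rational(1, 4)
```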
This is completely consistent with what we expect from the bulk theory in the weakly coupled limit: we know that the string thickness is of the order of \begin{equation} \sqrt{\frac{r}{T}} \propto \sqrt{\frac{1}{\xi}} \frac{1}{e_3} \, . \end{equation} It is straightforward to check that the corrections to the metric of the two decoupled $\mathbb{CP}^1$'s for large $z$ are proportional to $1/z^2$; this is inconsistent with what we expect from the four-dimensional gapped bulk theory in which these corrections should fall off exponentially. The opposite limit $z=0$ corresponds to the requirement of orthogonality of the vectors $a_i$ and $b_i$. In this case $$ 0 \leq \omega \leq 1\,. $$ The section with $\omega^2=1$ corresponds to a $\mathbb{CP}^1$ submanifold (the orientational moduli of the component strings are aligned in the group space). The section with $\omega^2=0$ corresponds to a point (the component strings' orientations in the group space are antiparallel). At $z=0$ we use the following gauge fixing: \begin{eqnarray} a_i &=& r^{1/2} \sqrt{1-\omega^2} \, (\cos \alpha, e^{i \beta} \sin \alpha) \, , \nonumber\\[3mm] b_i &=& r^{1/2} \sqrt{1+\omega^2} \, ( e^{-i \beta} \sin \alpha, - \cos \alpha) \, . \label{tfgf} \end{eqnarray} As a result, the matrix $Z$ takes a very simple form \begin{equation} Z= \left(\begin{array}{cc} 0 & \sqrt{r} \, \omega \, e^{i \zeta} \\[1mm] 0 & 0 \\ \end{array}\right) \, . \label{tvsf} \end{equation} The orientational moduli are encoded in the real parameter $\omega$ and three angles, $(\zeta,\alpha,\beta)$. \subsection{Kinetic term} \label{tkintt} In order to get the metric on the moduli space, we have to find the saddle-point value of the gauge field $A_{\mu}$ and plug it back in the Lagrangian. We work in the limit of coincident strings, $z=0$. 
With our gauge choice a straightforward calculation gives \begin{eqnarray} A_\mu^0 &=& \frac{2 \omega^4 \left[(\partial_\mu \zeta)-2 \sin^2 \alpha (\partial_\mu \beta)\right]} {1+2 \omega^2-\omega^4} \, , \nonumber\\[3mm] A_\mu^3 &=& \frac{2 \left[\sin^2 \alpha \, (1-\omega^4) (\partial_\mu \beta) +r \omega^2 (\partial_\mu \zeta)\right]} {1+2 \omega^2-\omega^4} \, , \nonumber\\[3mm] A_\mu^1 &=& -2 \sqrt{\frac{1-\omega^2}{1+\omega^2}} \left[ \sin \beta \, (\partial_\mu \alpha)+ \sin \alpha \, \cos \alpha \, \cos \beta \, (\partial_\mu \beta) \right] \, , \nonumber\\[3mm] A_\mu^2 &=& 2 \sqrt{\frac{1-\omega^2}{1+\omega^2}} \left[ - \cos \beta \, (\partial_\mu \alpha)+ \sin \alpha \, \cos \alpha \, \sin \beta \, (\partial_\mu \beta )\right] \, , \label{tspv} \end{eqnarray} where \[ A_\mu= \frac{A_\mu^0 \, \mathbb{I} + A_\mu^k \, \sigma_k}{2} \, ,\] and $\sigma_k$ are the Pauli matrices. To find the moduli space metric we have to substitute these expressions in the kinetic term, \begin{eqnarray} && r \left( \, \frac{1+2 \omega^2 -\omega^4 }{1-\omega^4} (\partial_\mu \omega)^2 + 2 \omega^2 \left( (\partial_\mu \alpha)^2 +\left( \frac{\sin 2 \alpha}{2}\ \partial_\mu \beta \right)^2 \right) + \right. \nonumber\\[3mm] &&\left. + \frac{\omega^2(1-\omega^4)}{1+2 \omega^2-\omega^4} (\partial_\mu \zeta- 2 (\sin^2 \alpha) \partial^\mu \beta)^2 \right) \, . \label{tsppv} \end{eqnarray} The term proportional to $(\partial_\mu \omega)^2$ diverges at $\omega=1$. Luckily this is not a bad divergence. It can be eliminated by virtue of a change of variables. Indeed, define \begin{equation} \kappa=\sqrt{1-\omega}\, , \qquad \omega=1-\kappa^2 \, . \label{kappona} \end{equation} Then the relevant piece of the metric is \begin{equation} 4 r \, A \, (\partial_\mu \kappa)^2 \, ,\qquad A=\frac{\kappa^8 -4 \kappa^6 + 4 \kappa^4-2 } {\kappa^6 -4 \kappa^4 + 6 \kappa^2 -4 } \, . 
\end{equation} It is completely smooth at $\kappa=0$ (which corresponds to $\omega=1$ in the previous choice of variables). \subsection{Some topology} \label{tsoto} The coordinates in the moduli space that we have introduced vary in the following intervals: \begin{equation} 0 \leq \omega \leq 1 \, , \qquad 0 \leq \alpha \leq \frac{\pi}{2} \, , \qquad 0 \leq \zeta \leq 2 \pi \,, \qquad 0 \leq \beta \leq 2 \pi \, . \label{tinter} \end{equation} First we will consider sections at generic values of $\omega \neq 0,\, 1$. We can pass to an alternative gauge fixing, \begin{eqnarray} a_i &=& r^{1/2} \, \sqrt{1-\omega^2} \, (\cos \alpha, e^{i \beta} \sin \alpha) \, e^{-i \zeta/2}\, , \nonumber\\[3mm] b_i &=& r^{1/2} \, \sqrt{1+\omega^2} \, ( e^{-i \beta} \sin \alpha, - \cos \alpha) \, e^{+i \zeta/2} \, , \nonumber\\[3mm] \tilde{\omega} &=& \omega\,. \label{taltgf} \end{eqnarray} The point with the coordinates $(\omega,\alpha,\beta,\zeta)$ is then identified with the point with the coordinates $(\omega,\alpha,\beta,\zeta+ 2 \pi)$. The topology of the sections at constant $\omega$ is then given by $S^3 / \mathbb{Z}_2$. This is due to the fact that the point $(a_i,b_i)$ is identified with $-(a_i,b_i)$. At $\omega=0$ the section is given by just a point. At $\omega=1$ the section is given by $S^2=\mathbb{CP}^1$, parametrized by $(\alpha,\beta)$. The topology of the moduli space is $\mathbb{CP}^2/\mathbb{Z}_2$. \subsection{Twisted mass term} \label{ttmtt} To warm up we start with the simple case of the elementary string, $k=1$. Then we can choose the gauge in such a way that \begin{equation} n_1=\cos \alpha \, , \qquad n_2=e^{i \beta} \sin \alpha \, , \label{tngc} \end{equation} where $(\alpha,\beta)$ parametrize the $\mathbb{CP}^1$ moduli. To find the mass-term-generated effective potential we integrate out $\sigma$.
The only nonvanishing part of the potential is \begin{equation} V= \sum_i n_i^\dagger (\sigma-m_i) (\sigma^*-m_i^*) n_i \label{tnonvpp} \end{equation} implying the following saddle-point value of $\sigma$: \begin{equation} \sigma=m (\cos^2 \alpha -\sin^2 \alpha) \, . \label{tspvs} \end{equation} Substituting (\ref{tspvs}) in (\ref{tnonvpp}) we get \begin{equation} V= m^2 \, r \, \sin^2 (2 \alpha) \, . \label{ttpot} \end{equation} This is the standard twisted mass term in the ${\cal N}=(2,2)$ $\mathbb{CP}^1$ sigma model. After this successful exercise we turn to the $k=2$ case. For 2-strings we have to determine $\sigma$ from the potential \begin{eqnarray} V &=& \frac{1}{8} ([\sigma,\sigma^\dagger])^2 + \frac{1}{2} n_i^\dagger \{ \sigma-\mathbb{I} \,m_i, \sigma^\dagger - \mathbb{I} \, m_i^*\} n_i \nonumber\\[3mm] &+& \frac{1}{2} \{\sigma,\sigma^\dagger \} \{ Z, Z^\dagger \} - (Z^\dagger \sigma Z \sigma^\dagger+ Z^\dagger \sigma^\dagger Z \sigma ) \, . \label{tanpot} \end{eqnarray} Integrating out $\sigma$, we arrive at \begin{equation} \sigma=m \, \left( \begin{array}{cc} \frac{(1-3 \omega^4) \cos 2 \alpha }{1+2 \omega^2 -\omega^4} & \frac{e^{i \beta} \sqrt{1-\omega^2} \sin 2 \alpha }{\sqrt{1+\omega^2}} \\[3mm] \frac{e^{-i \beta} \sqrt{1-\omega^2} \sin 2 \alpha }{\sqrt{1+\omega^2}} & -\frac{(1+ \omega^4) \cos 2 \alpha }{1+2 \omega^2 -\omega^4} \\[2mm] \end{array} \right) \, . \label{ssigma} \end{equation} With this saddle-point value of $\sigma$ the potential takes the form \begin{equation} V=m^2 \, r \, \frac{\omega^2 (3 + 2 \omega^2 - 3 \omega^4 +(1-2 \omega^2-\omega^4) \cos 4 \alpha ) }{1+2 \omega^2-\omega^4} \, . \label{tsadpop} \end{equation} It depends only on $\omega$ and $\alpha$. A plot of the potential (\ref{tsadpop}) is displayed in Fig.~\ref{led}. 
\begin{figure}[h] \begin{center} \leavevmode \epsfxsize 6 cm \epsffile{plotpotential.eps} \end{center} \caption{\footnotesize Potential as a function of $\omega$ and $\alpha$.} \label{led} \end{figure} Note that in this plot the line $\omega=0$ corresponds to a single point in the moduli space (the $(1,1)$ string). At $\omega^2=1$ the potential reduces to $$V=2 m^2 \, r \, \sin^2 2 \alpha\,,$$ exactly twice the potential on the elementary string (cf. Eq.~(\ref{ttpot})). \section{Spectrum of excitations} \label{spoex} \subsection{Perturbative excitations} \label{ptpeex} After the potential on the 2-string world sheet is found, we can compute the mass of the perturbative excitations near each of three vacua. Let us start from the $(1,1)$ string, which corresponds to $\omega=0$ and $\kappa=1$ (the minimum on the left-hand side in Fig.~\ref{led}). The mass-squared of the excitations is given by \begin{equation} M^2=\frac{\partial^2_{\kappa,\kappa} V}{4 A r}=2 m^2 (3+\cos 4 \alpha) \, . \label{tmse} \end{equation} There are two normal modes, one at $\alpha=0$ and another at $\alpha=\pi/4$. Thus, there are two scalar excitations with mass $2\sqrt{2} m$ plus two scalar excitations with mass $2 m$ (and their superpartners, of course). For the $(2,0)$ string (at $\omega=1$ and $\kappa=0$) the situation is slightly different. The oscillations can be both in the $\alpha$ and $\kappa$ coordinates. The mixed term $\partial^2_{\kappa,\alpha} V$ vanishes. The mass of each of these excitations is \begin{equation} M_\kappa^2=\frac{\partial^2_{\kappa,\kappa} V}{4 A r}= 8 m^2 \, , \qquad M_\alpha^2=\frac{\partial^2_{\alpha,\alpha} V}{2 r \omega^2}= 8 m^2 \, .\end{equation} So there are a total of four scalar states with masses $2\sqrt{2} m$ (plus their superpartners). It is, of course, the same for the $(0,2)$ string. 
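The limits quoted above can be checked symbolically. The sketch below (ours, for verification only; sympy assumed) confirms that the potential (\ref{tsadpop}) reduces to $2m^2 r \sin^2 2\alpha$ at $\omega=1$ and reproduces the masses quoted at the $(1,1)$ and $(2,0)$ vacua.

```python
import sympy as sp

m, r, w, al, k = sp.symbols('m r omega alpha kappa', positive=True)

# world-sheet potential (tsadpop)
V = m**2*r*w**2*(3 + 2*w**2 - 3*w**4
                 + (1 - 2*w**2 - w**4)*sp.cos(4*al)) / (1 + 2*w**2 - w**4)

# at omega = 1 it should equal twice the elementary-string potential (ttpot)
diff_at_1 = sp.simplify(sp.expand_trig(V.subs(w, 1) - 2*m**2*r*sp.sin(2*al)**2))

# metric factor A(kappa), with omega = 1 - kappa^2 as in (kappona)
A = (k**8 - 4*k**6 + 4*k**4 - 2)/(k**6 - 4*k**4 + 6*k**2 - 4)
Vk = V.subs(w, 1 - k**2)

# M^2 = V''_{kappa kappa}/(4 A r), Eq. (tmse), at the (1,1) vacuum (kappa = 1)
M2_11 = sp.simplify(sp.diff(Vk, k, 2).subs(k, 1) / (4*A.subs(k, 1)*r))
# and for the kappa excitation at the (2,0) vacuum (kappa = 0, alpha = 0)
M2_20 = sp.simplify(sp.diff(Vk, k, 2).subs({k: 0, al: 0}) / (4*A.subs(k, 0)*r))
```

The expected outputs are $M^2_{(1,1)}=2m^2(3+\cos 4\alpha)$ and $M^2_{(2,0)}=8m^2$, in agreement with the masses $2\sqrt{2}\,m$ and $2m$ listed above.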
\subsection{The BPS-saturated kinks} \label{kkki} For the elementary kink (which interpolates between the vacuum at $\omega=1$, $\alpha=0$ and the vacuum at $\omega=0$ and has the unit magnetic flux), we can choose the ansatz $\alpha=\beta=\zeta=0$ and introduce a profile function $\kappa(x)$. Using the variable (\ref{kappona}), the energy functional for this kink can be written as \begin{equation} {\mathcal E}= \int d x \, r \, \left[ 4 \,A \, (\partial_x \kappa)^2 + \frac{4 m^2 \kappa^2}{A} (1-\kappa^2 )^2 \right] \, , \label{tenfuek} \end{equation} where $x$ is the coordinate along the 2-string axis, and the boundary conditions on $\kappa$ are \begin{equation} \kappa (x= - \infty ) =0 \,,\qquad \kappa (x=+\infty ) = 1 \,. \label{tboco} \end{equation} The Bogomol'nyi completion is straightforward, \begin{equation} {\mathcal E}= \int d x \, r \left[ \left( 2 \sqrt{A} (\partial_x \kappa ) \pm \frac{(2 m \kappa)(1-\kappa^2)}{\sqrt{A}} \right)^2 \mp 2 m \, \partial_x \left( 2 \kappa^2-\kappa^4\right) \right]. \label{tbogc} \end{equation} For BPS (elementary) kinks one must have \begin{equation} 2 \, \sqrt{A} (\partial_x \kappa ) \pm \frac{(2 m \kappa)(1-\kappa^2)}{\sqrt{A}} =0\,. \label{bpsele} \end{equation} If this equation is satisfied (and it is, see below) the tension of the elementary kink is \begin{equation} T_{(2,0)\to (1,1)}=2 m r \, . \label{tmek} \end{equation} The solution to Eq.~(\ref{bpsele}) with the boundary conditions (\ref{tboco}) can be found numerically (see Fig.~\ref{kink1}). \begin{figure}[h] \begin{center} \leavevmode \epsfxsize 6 cm \epsffile{single.eps} \end{center} \caption{\footnotesize The profile function $\kappa(x)$ for the elementary kink between the $(2,0)$ and $(1,1)$ 2-strings. } \label{kink1} \end{figure} Now, we can consider a composite kink, interpolating between the $(2,0)$ and the $(0,2)$ strings.
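Before doing so, note that Eq.~(\ref{bpsele}) with the boundary conditions (\ref{tboco}) is straightforward to integrate numerically. The sketch below (ours, for illustration; Python with numpy/scipy assumed) reproduces the profile of Fig.~\ref{kink1} and checks that its energy reproduces the tension (\ref{tmek}), $T=2mr$.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, r = 1.0, 1.0

def A(k):
    # metric factor in the kappa variable, Eq. (kappona); A(0) = 1/2, A(1) = 1
    return (k**8 - 4*k**6 + 4*k**4 - 2)/(k**6 - 4*k**4 + 6*k**2 - 4)

def kink_rhs(x, k):
    # BPS equation (bpsele): 2 sqrt(A) kappa' = 2 m kappa (1 - kappa^2)/sqrt(A)
    return m*k*(1.0 - k**2)/A(k)

xs = np.linspace(0.0, 12.0, 4001)
sol = solve_ivp(kink_rhs, (xs[0], xs[-1]), [1e-4], t_eval=xs,
                rtol=1e-10, atol=1e-13)
kappa = sol.y[0]

# energy density of (tenfuek); its integral should equal T = 2 m r of (tmek)
dens = r*(4*A(kappa)*kink_rhs(0.0, kappa)**2
          + 4*m**2*kappa**2*(1.0 - kappa**2)**2/A(kappa))
tension = float(np.sum(0.5*(dens[1:] + dens[:-1])*np.diff(xs)))
```

The profile rises monotonically from $\kappa\simeq 0$ to $\kappa\simeq 1$, and the integrated energy approaches $2mr$ (the tiny tail at $x<0$ is exponentially suppressed).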
In our notation this corresponds to an interpolation between the vacuum at $\alpha=0$, $\omega=1$ and the one at $\alpha=\pi/2$, $\omega=1$. As we will show shortly, the mass of the composite BPS-saturated kink is $4 m r$, twice as large as in Eq.~(\ref{tmek}). This means that there is no interaction between the elementary kinks $(2,0)\to (1,1)$ and $(1,1)\to (0,2)$ comprising the $(2,0)\to (0,2)$ kink. Hence, the relative distance between the component elementary kinks is a modulus. The simplest solution (one of a family) can be found keeping $\omega$ constant. The energy functional then reduces to that given by the sine-Gordon model, \begin{equation} {\mathcal E}= \int d x \, \left[ 2 r (\partial_x \alpha)^2 + 2 m^2 r \sin^2 (2 \alpha) \right] \, . \end{equation} The Bogomol'nyi completion is \begin{equation} {\mathcal E}=\int d x \, \left\{ \left( \sqrt{2 r} (\partial_x \alpha) \pm \sqrt{2 r} \, m \, \sin (2 \alpha) \right)^2 \pm \partial_x \left( 2 r m \cos (2 \alpha) \right) \right\} \, . \end{equation} Assuming that \begin{eqnarray} && \sqrt{2 r} (\partial_x \alpha) \pm \sqrt{2 r} \, m\, \sin (2 \alpha) =0\,, \nonumber\\[2mm] && \alpha (x= - \infty ) =0 \,,\qquad \alpha(x=+\infty ) = \frac{\pi}{2} \,, \label{tabc} \end{eqnarray} we find the tension \begin{equation} T_{(2,0)\to (0,2)}=4 m r = 2T_{(2,0)\to (1,1)} \, . \label{22tens} \end{equation} Next, we have to check that the first-order equation~(\ref{tabc}) does have solutions. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize 6 cm \epsffile{doppiokink.eps} \end{center} \caption{\footnotesize The family of degenerate composite kinks (interpolating between the $(2,0)$ and the $(0,2)$ strings) in the $(\omega,\alpha)$ plane. The line at $\omega=1$ corresponds to the kink with the smallest thickness. In the large thickness limit the solution degenerates into two elementary kinks at an (almost) infinite distance.
} \label{kinks} \end{figure} To find the most general solution we now have to introduce two profile functions, $\alpha(x)$ and $\kappa(x)$, determining the energy functional \begin{equation} {\mathcal E}= \int d x \, \left[ 4 r \,A \, (\partial_x \kappa)^2 + 2 r (1-\kappa^2)^2 (\partial_x \alpha)^2+ V \right] \, , \label{tdetef} \end{equation} where \begin{equation} V= 2 \, r \, m^2 \, (\sin^2 2 \alpha )\, (1-\kappa^2)^2 + \frac{4 m^2 \, r \, (\cos^2 2 \alpha ) \, \kappa^2 \, (1-\kappa^2)^2}{A} \, . \end{equation} The generic Bogomol'nyi completion takes the form \begin{eqnarray} &&{\mathcal E} = \int d x \, r \, \left\{ \left( 2 \sqrt{A} (\partial_x \kappa ) \mp \frac{(2 m \kappa)(1-\kappa^2)(\cos 2 \alpha)}{\sqrt{A}} \right)^2 + \right. \nonumber\\[3mm] && \left. +\left( \sqrt{2} (1-\kappa^2) (\partial_x \alpha \mp m \sin 2 \alpha) \right)^2 \mp 2 m \partial_{x} \left( (1-\kappa^2)^2 \cos 2 \alpha \right) \right\} \, . \nonumber \\ \label{genericB} \end{eqnarray} The BPS equations are \begin{eqnarray} && 2 \sqrt{A} (\partial_x \kappa ) \mp \frac{(2 m \kappa)(1-\kappa^2)(\cos 2 \alpha)}{\sqrt{A}} =0\,, \nonumber\\[3mm] && \partial_x \alpha \mp m \sin 2 \alpha =0\,. \end{eqnarray} They can be solved numerically, as shown in Fig.~\ref{kinks} in the $(\omega,\alpha)$ plane. Needless to say, the mass of every solution in this family obeys Eq.~(\ref{22tens}).
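At constant $\omega$ the first equation in (\ref{tabc}) is the sine-Gordon BPS equation; for one choice of sign it is solved explicitly by $\alpha(x)=\arctan e^{2 m x}$. The following numerical check (ours, for illustration; numpy assumed) confirms this solution and the tension (\ref{22tens}).

```python
import numpy as np

m, r = 1.0, 1.0

xs = np.linspace(-10.0, 10.0, 20001)
al = np.arctan(np.exp(2.0*m*xs))     # candidate kink profile
dal = np.gradient(al, xs)

# residual of the first-order equation alpha' = m sin(2 alpha)
residual = np.max(np.abs(dal - m*np.sin(2.0*al)))

# energy of the constant-omega configuration; should equal 4 m r, Eq. (22tens)
dens = 2.0*r*dal**2 + 2.0*m**2*r*np.sin(2.0*al)**2
tension = float(np.sum(0.5*(dens[1:] + dens[:-1])*np.diff(xs)))
```

The residual vanishes up to finite-difference accuracy, the boundary values $\alpha(-\infty)=0$, $\alpha(+\infty)=\pi/2$ are reproduced, and the energy integrates to $4mr$.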
\end{equation} This classical symmetry is broken neither by quantum effects nor by the twisted mass term. In addition, in the limit of vanishing twisted masses, there is an axial U(1) symmetry which is broken to $\mathbb{Z}_{2N}$ by the quantum (chiral) anomaly, \begin{eqnarray} && \psi^{n_i}_{L} \rightarrow e^{- i \gamma} \psi^{n_i}_{L} \, , \qquad \psi^{n_i}_{R} \rightarrow e^{ i \gamma} \psi^{n_i}_{R} \, , \nonumber\\[3mm] && \psi^{Z}_{L} \rightarrow e^{- i \gamma} \psi^{Z}_{L} \, , \qquad \psi^{Z}_{R} \rightarrow e^{ i \gamma} \psi^{Z}_{R} \, , \nonumber\\[3mm] && \lambda_{L} \rightarrow e^{- i \gamma} \lambda_{L} \, , \qquad \lambda_{R} \rightarrow e^{ i \gamma} \lambda_{R} \, , \qquad \sigma \rightarrow e^{2 i \gamma} \sigma \, . \label{qchan} \end{eqnarray} The twisted mass terms generically break this symmetry. However, with the particular choice \begin{equation} m_i = m \left(e^{2 \pi i/N}, e^{4 \pi i/N }, \ldots , e^{2(N-1) \pi i/N}, 1 \right) \, \label{znsymmc} \end{equation} a discrete $\mathbb{Z}_{2N}$ subgroup survives the inclusion of both the anomaly and mass terms, \begin{eqnarray} && \psi^{n_i}_{L} \rightarrow e^{- i \gamma_k} \psi^{n_{i-k}}_{L} \, , \qquad \psi^{n_i}_{R} \rightarrow e^{ i \gamma_k} \psi^{n_{i-k}}_{R} \, , \nonumber\\[3mm] && \psi^{Z}_{L} \rightarrow e^{- i \gamma_k} \psi^{Z}_{L} \, , \qquad \psi^{Z}_{R} \rightarrow e^{ i \gamma_k} \psi^{Z}_{R} \, , \nonumber\\[3mm] && \lambda_{L} \rightarrow e^{- i \gamma_k} \lambda_{L} \, , \qquad \lambda_{R} \rightarrow e^{ i \gamma_k} \lambda_{R} \, , \qquad \sigma \rightarrow e^{2 i \gamma_k} \sigma \, , \nonumber\\[3mm] && n_i \rightarrow n_{i-k} \, , \qquad \gamma_k=\frac{\pi k}{2N} \, \, \, \, {\rm with} \, \, \, \, k=1, \ldots ,2N \, . \label{survi} \end{eqnarray} In the special case $k=N=2$ under consideration, we choose $m_1=-m_2=m$. As a result, there is a discrete $\mathbb{Z}_4$ symmetry. From Eq.
(\ref{ssigma}) we can check that for the $(2,0)$ vacuum $\sigma_0 \neq 0$ and $\vec{\sigma}=0$ while for the $(1,1)$ vacuum $\sigma_0=0$ and $\vec{\sigma}\neq 0$. A VEV for $\sigma_0$ spontaneously breaks $\mathbb{Z}_4$ to $\mathbb{Z}_2$, while a VEV for $\vec{\sigma}$ does {\em not} break the $\mathbb{Z}_4$ symmetry at all, because the phase can be eliminated by a gauge transformation. Hence, the discrete $\mathbb{Z}_4$ symmetry is spontaneously broken to $\mathbb{Z}_2$ in the $(2,0)$ and $(0,2)$ vacua. It is unbroken in the $(1,1)$ vacuum. \subsection{A general perspective} \label{tsum} The sigma model on the 2-string world sheet is quite unconventional; the moduli space is not a homogeneous space and its topology, that of $\mathbb{CP}^2/\mathbb{Z}_2$, is rather unusual. At $m=0$ the physics described by this model is strongly coupled and hard to work with. On the other hand, in the limit $m\gg\Lambda_{1+1}$ we are at weak coupling and can study the problem in a (quasi)classical way. We found three vacua which can be identified with the $(2,0)$, $(0,2)$ and $(1,1)$ strings of the four-dimensional theory. In the ${\cal N}=(2,2)$ theory, because of the Witten index, the number of vacua should not change as a function of $m$. Therefore, we conclude that the theory has three vacua not only at large $m$, but also in the $m\rightarrow 0$ limit. We see that two of these three vacua (which correspond to the $(2,0)$ and the $(0,2)$ strings in the $m\gg \Lambda_{1+1}$ limit) spontaneously break the anomaly-free $\mathbb{Z}_4$ symmetry of the model down to $\mathbb{Z}_2$. The third vacuum (which corresponds to the $(1,1)$ vortex in the $m \gg \Lambda_{1+1}$ limit) leaves this symmetry unbroken. This implies, in turn, that in the latter vacuum the fermionic condensate must vanish. This is an important finding. The BPS kinks interpolating between various pairs of vacua which we found correspond to monopoles of the four-dimensional theory.
It is remarkable that in all three cases the masses of the kinks are exactly equal to the mass of the 't Hooft--Polyakov monopole (and of the double monopole in the third case) on the Coulomb branch of the bulk theory. This is exactly the phenomenon first observed in \cite{SYmon}. It lends credence to the HT model as the theory correctly describing the BPS sector in the composite strings. It should also be possible to study dyonic kinks. Moreover, for $k$-strings with $k>2$ we should be able to see kinks describing confined monopoles with magnetic charges $1, 2, \ldots, k$. \section{Composite strings at $m\to 0$} \label{costr} \subsection{Quantum moduli space} \label{quantum} The problem of complete characterization of the quantum moduli space for 2-strings is quite complicated; no final solution is known at the moment. However, our previous analysis of the $m \neq 0$ case provides us with some hints which we would like to summarize here. If $m\to 0$ the potential vanishes, and we are left with the sigma model dynamics. When we speak of the elementary non-Abelian strings, the translational sector is decoupled, and we can consider the ${\cal N}=(2,2)$ ${\mathbb{CP}^{N-1}}$ sigma model living on the world sheet of an infinite straight string. In composite strings, even if we restrict ourselves to the low-energy approximation, we cannot decouple the translational sector from the orientational one. Only the overall translational coordinate can be factored out, while the relative translations are inevitably entangled with the orientational modes. Thus, we have to quantize a theory of entangled moduli, some of which are noncompact (the relative positions) while others are compact (the orientational moduli). The classical moduli space of $k$ non-Abelian elementary strings in the bulk theory with $N$ colors and $N$ flavors will be referred to as ${\cal M}_{k,N}$. The real dimension of this moduli space is $2 k N$.
For well separated constituent strings, this moduli space decomposes into the product of $k$ distinct factors ${\mathcal M}_{1,N} = {\mathbb{CP}^{N-1}} \times \mathbb{C}$, modulo the permutation group $S_k$. Intuition obtained in the elementary-string problem teaches us that quantum effects have a very different impact on the compact and noncompact parts of the moduli space. Sigma models on compact manifolds are generically subject to strong-coupling effects and develop a mass gap -- only a finite number of vacuum states survives. Noncompact directions, instead, survive in the infrared as genuine moduli. Thus, we expect that in the 2-string problem the quantum vacuum manifold will be spanned by the moduli describing the relative positions of the elementary strings and will consist of a few sectors labeled by appropriate fermion condensates. That is the quantum counterpart of Fig.~\ref{4vs3}. Since the problem is defined in $1+1$ dimensions, there are also long-range logarithmic fluctuations of the non-compact moduli to be considered (see Sect.~\ref{tf}). One can apply the following strategy: fix the spatial distance between the constituent strings and then quantize the compact manifold obtained in this way. Then vary the distance adiabatically. Finally, check whether or not quantum fluctuations of the translational moduli (the non-compact part of the moduli space) alter the result.\footnote{We are grateful to D.~Tong for pointing out to us the necessity of such a verification.} The number of states we start from may be larger than the number of discrete moduli subspaces in which they are grouped. This was the case with $m\neq 0$. Let us see how the vacua evolve as the distance varies from infinity to zero, in the specific example of ${\cal M}_{2,2}$. When the distance is large, $L \gg \ell$, we have to quantize two separate $ {\mathbb{CP}^{1}}$ models on the world sheets of two separate strings.
More precisely, the overall theory is a sigma model with the target space $(\mathbb{C} \times {\mathbb{CP}^{1}} \times {\mathbb{CP}^{1}}) /\mathbb{Z}_2$, where the $\mathbb{Z}_2$ factor is the simultaneous exchange of the two ${\mathbb{CP}^{1}}$'s and parity in $\mathbb{C}$ (the relative position coordinate). This $\mathbb{Z}_2$ factor is crucial in what follows. At infinite separation we can quantize the two ${\mathbb{CP}^{1}}$'s separately, and then impose the $\mathbb{Z}_2$ identification at the level of the spectrum. Each string has two ground states where the wave function is spread uniformly around the ${\mathbb{CP}^{1}}$ manifold, while the (bi)fermion condensates are $\langle \bar\psi\psi \rangle = \pm \Lambda$. We call these ground states $|\pm\rangle_{1}$ and $|\pm\rangle_{2}$ respectively for the first and second strings. In total we have four states, \begin{equation} |+\rangle_1|+\rangle_2 \ , \qquad |+\rangle_1|-\rangle_2 \ , \qquad |-\rangle_1|+\rangle_2 \ , \qquad |-\rangle_1 |-\rangle_2 \ . \label{stati} \end{equation} Now we have to take into account the $\mathbb{Z}_2$ factor. The first and fourth states are invariant under the exchange $1 \leftrightarrow 2$. They thus belong to two separate manifolds ${\cal M}_{++}$ and ${\cal M}_{--}$. Since the exchange acts also on the relative position, the two manifolds are cones over $S^1 / \mathbb{Z}_2$ in the angular variable. The second and third states interchange under $\mathbb{Z}_2$. That means that they belong to the same manifold ${\cal M}_{+-}$, which is asymptotically a cone over $S^1$. The two states are antipodal with respect to the angular variable. The ground states of the 2-string are thus grouped exactly as in Fig.~\ref{4vs3}. There is a conceptual difference, though. In the mass-deformed theory, the three manifolds are distinguished by the total, conserved, non-Abelian magnetic flux.
In the $m=0$ case the wave function is always spread uniformly around the ${\mathbb{CP}^{1}}$ manifolds, and thus the average non-Abelian magnetic flux vanishes for all of them. What distinguishes them is the action of the residual $\mathbb{Z}_4$ $R$-symmetry. This symmetry exchanges ${\cal M}_{++}$ and ${\cal M}_{--}$. Inside the manifold ${\cal M}_{+-}$ it acts as parity. Clearly, the central element of ${\cal M}_{+-}$ is the only state invariant under the residual $R$ symmetry. As the distance between the elementary strings becomes small enough, one can no longer quantize two ${\mathbb{CP}^{1}}$'s separately, in isolation from the translational part of the moduli space. One can argue, however, that the number of the ground states must remain the same. In particular, at zero separation there are only three ground states. The second and third in (\ref{stati}) must coalesce into a unique state, the central element of ${\cal M}_{+-}$. The fourth vacuum state is not seen at zero separation. In the HT model the separated strings imply an expectation value of $Z$ of the form \begin{equation} Z=d t_3 = \left( \begin{array}{cc} L/2 & 0 \\[2mm] 0 & -L/2 \end{array} \right) \ , \label{dom8} \end{equation} which leads, in turn, to the gauge group breaking, ${\rm U}(2) \to {\rm U}(1) \times {\rm U}(1)$. In this language, the fermion condensate is represented by the adjoint scalar field $\sigma$, \begin{equation} \sigma = \bar\psi^{n_i} \psi^{n_i} =\sigma_0 \,\mathbb{I} +\vec{\sigma}\cdot \vec{\tau}\,, \label{dom8p} \end{equation} which is a member of the auxiliary gauge multiplet. If we could compute the quantum-generated effective potential $V_{\rm eff}(\sigma)$ for this scalar field, the problem would be solved. At large separations $V_{\rm eff}(\sigma)$ reduces to \begin{equation} V_{\rm eff}(\sigma) = V_{\mathbb{CP}^{1}}(\sigma_0 + \sigma_3) + V_{\mathbb{CP}^{1}}(\sigma_0 - \sigma_3)\,, \label{dom8pp} \end{equation} i.e.
the sum of two ${\mathbb{CP}^{1}}$ effective potentials, which are, of course, known in the literature. The four vacua (\ref{stati}) can be pictured in the space of fermion condensates, see Fig.~\ref{three-vacua}. \begin{figure}[h!t] \epsfxsize=8cm \centerline{\epsfbox{three-vacua.eps}} \caption{{\footnotesize The four vacua in the space of fermion condensate $\sigma$. As the separation goes to zero, two of them coalesce into a unique $\mathbb{Z}_4$-invariant state.}} \label{three-vacua} \end{figure} Now let us vary the separation and try to infer what happens to the vacua at $L\to 0$. At $L=0$ the ${\rm SU}(2)$ symmetry is restored; hence, the effective potential must depend only on $\sigma_0$ and $|\vec{\sigma}|$. Choosing the unitary gauge one can always align $\vec\sigma$ with the third direction. Conservation of the number of vacua, together with the symmetry $\sigma_0 \to - \sigma_0$, implies that the second and third vacua in (\ref{stati}) must coalesce at $\sigma_0 =0$. Invariance under $\mathbb{Z}_4$ implies $\sigma_3 =0$. This state is topologically equivalent to the ANO string. See Sect.~\ref{tf} for a discussion of transversal fluctuations. Summarizing, the quantum effective potential for $\sigma$, at zero separation, must have three vacua at $\sigma_0 =\pm \Lambda$ and $\sigma_0 =0$. These are the quantum analogs of the three states $(0,2)$, $(2,0)$, and $(1,1)$ in the mass-deformed theory. \subsection{Instantons} \label{instantons} Now we will address another topological aspect of the HT model, namely instantons. Their role is important. By virtue of the index theorem they generate fermion zero modes, which, in turn, in conjunction with ${\cal N}=(2,2)$ supersymmetry lead to bifermion condensates (for a review see e.g. \cite{nsvz}). We again fix the separation $L$, and consider quantization of the compact manifold obtained at given $L$. The topology of this manifold, for $L \neq 0$, is the same as that of ${\mathbb{CP}^{1}} \times {\mathbb{CP}^{1}}$.
The existence/nonexistence of instantons is determined by the second homotopy group of the manifold, $\pi_2({\mathbb{CP}^{1}} \times {\mathbb{CP}^{1}}) =\mathbb{Z} \oplus \mathbb{Z}$, and, thus, we have two distinct winding numbers, one for each ${\mathbb{CP}^{1}} $. At $L=0$ the topology is that of $\mathbb{CP}^{2}/\mathbb{Z}_{2}$. Defining $\mathbb{CP}^{2}$ via the identification $$ (z_1,z_2,z_3) \simeq (\lambda z_1, \lambda z_2, \lambda z_3)\,, $$ the $\mathbb{Z}_2$ action is $(z_1,z_2,z_3) \to (z_1,-z_2,-z_3)$. The ANO string corresponds here to the fixed point of the orbifold $(1,0,0)$. Other fixed points are the $\mathbb{CP}^{1}$ submanifold defined by $z_1=0$. Note that the metric does not coincide exactly with that of $\mathbb{CP}^{2}/\mathbb{Z}_{2}$. However, for the purpose of discussion of the instanton numbers and their zero modes, the result is the same. The drastic change of topology in passing from $L\neq 0$ to $L=0$ affects the instanton number, which becomes $\pi_2 (\mathbb{CP}^{2}/\mathbb{Z}_{2}) = \mathbb{Z}$, where the single winding number is in one-to-one correspondence with the relative orientation. For example, the $(1,0)$ and $(0,-1)$ instantons merge at $L=0$ into a unique topological sector.\footnote{ The notation used above to mark instantons is self-evident.} They are two elements of the instanton moduli space, obtained by the action of the ${\rm SU}(2)$ symmetry between the coordinates $z_2$ and $z_3$ of the orbifold. The cycle $(1,1)$ becomes contractible at $L=0$. The instanton moduli space for $\mathbb{CP}^{1}$ has real dimension four: two translations, one phase and the scale factor (the instanton radius). By ${\cal N}=(2,2)$ supersymmetry this implies four fermion zero modes, which explicitly demonstrates that the axial U(1) symmetry is anomalous, and only a discrete subgroup of it survives, namely, $${\rm U}(1) \to \mathbb{Z}_4\,.$$ Further breaking $\mathbb{Z}_4 \to \mathbb{Z}_2$ due to the bifermion condensate is a dynamical, strong-coupling effect.
For homogeneous spaces, such as $\mathbb{CP}^{1}$, the choice of the base point for the homotopy cycle is irrelevant. In field theory this is the point in the target manifold onto which the boundary at infinity is mapped. For the $\mathbb{CP}^{2}/\mathbb{Z}_{2}$ orbifold we have to make a distinction between two cases: (i) the base is the fixed point $(1,0,0)$; (ii) the base is any other point. In case (ii) the extra moduli space generated by the ${\rm SU}(2)$ symmetry between the coordinates $z_2$ and $z_3$ of the orbifold moves the point at infinity, and thus does not generate any additional zero modes in the instanton moduli space. If the base is instead the Abelian fixed point (case (i)), the ${\rm SU}(2)$ symmetry generates zero modes. The total number of real bosonic zero modes for the instanton with the boundary at the fixed point is thus six.\footnote{ Alternatively, we could establish this fact by considering instantons in $\mathbb{CP}^{2}$, and then reducing by $\mathbb{Z}_2$. Instantons in $\mathbb{CP}^{2}$ have six bosonic zero modes -- the position, the size, the phase and two other extra coordinates that correspond to the choice of an $S^2$ inside $\mathbb{CP}^2$ -- and six fermion superpartners. If the base point is invariant under the orbifold projection, the six zero modes remain in the orbifold, even if the metric is not exactly that of $\mathbb{CP}^{2}/\mathbb{Z}_{2}$.} We want to explicitly derive the instanton solutions in the HT model. At $m=0$, the isometry group of our sigma model is SU(2)$_{c+f}$, acting in the standard way on the three-sphere parametrized by $(\zeta,\alpha,\beta)$. The coordinate $\omega$ does not transform under this SU(2). The isometry group of $\mathbb{CP}^2$ with the standard metric is SU(3), which is much larger. From the topological standpoint the $\mathbb{CP}^2/\mathbb{Z}_2$ instantons should be rather similar to the $\mathbb{CP}^2$ case.
The only difference is that in $\mathbb{CP}^2/{Z}_2$ configurations with the $\mathbb{CP}^2$ topological charge $1/2$ are allowed. In the sigma model under consideration the metric is very different from that on the homogeneous $\mathbb{CP}^2$ space. It has much fewer isometries. Hence, the explicit instanton solutions are different. Also, instantons, in principle, will change if we vary the vacuum expectation value of $\omega$. There is no symmetry of the theory which relates two different values of $\omega$. Let us consider some explicit instanton {\em ans\"atze}. In what follows $m=0$. \subsubsection{Instanton A} \label{instA} One possibility is to consider configurations at $\omega=1$ (which corresponds to $\kappa=0$) and generic $(\alpha,\beta)$. These are exactly the instantons of the classical $\mathbb{CP}^1$ sigma model at $\omega=1$. Let us parametrize by $(\rho,\varphi)$ the two-dimensional world sheet. We can use the {\em ansatz} \begin{equation} \alpha(\rho,\varphi)=\alpha(\rho) \, , \qquad \beta(\rho,\varphi)= \varphi \, . \label{dom7} \end{equation} Then the action is given by \begin{equation} S=4 \pi \, r \, \int \rho \, d\rho \left( (\partial_\rho \alpha)^2 +\frac{\sin^2 \alpha \cos^2 \alpha}{\rho^2} \right) \, . \label{f64} \end{equation} The Bogomol'nyi completion is \begin{equation} S= 4 \pi \, r \, \int d \rho \left[ \rho \, \left( \partial_\rho \alpha +\frac{\sin \alpha \cos \alpha}{\rho} \right)^2 + \partial_\rho ( \cos^2 \alpha ) \right]. \label{f65} \end{equation} For this action the instanton solution is given by the well-known result \begin{equation} \alpha=\frac{1}{2} \arccos \left(\frac{\rho^2-a^2}{\rho^2+a^2} \right)\, , \label{finss} \end{equation} where $a$ is the instanton size. The action for this instanton is \begin{equation} S_{\rm inst} = 4 \pi r\,. 
\label{actins} \end{equation} It is easy to check that this configuration has at least four real bosonic zero modes: the position, the size $a$ and a phase corresponding to a constant shift in $\beta$. We will see that it can be interpreted as a composite instanton. Therefore, in fact it must have more zero modes than those indicated above. The situation is similar to the composite kink discussed in Sect.~\ref{kkki}. \subsubsection{Instanton B} \label{instB} Now, let us try another simple {\em ansatz}. Choose $\alpha=0$ and a nontrivial $(\kappa,\zeta)$, \begin{equation} \kappa(\rho,\varphi)=\kappa(\rho) \, , \qquad \zeta(\rho,\varphi)= \varphi \, . \label{fnontr} \end{equation} Then the action is given by \begin{equation} S= 2 \pi r \, \rho \, \int d\rho \, \left[ 4A (\partial_\rho \kappa)^2 + \frac{1}{\rho^2} \frac{(1-\kappa^2)^2 (1-(1-\kappa^2)^4) }{2 -4 \kappa^4+4 \kappa^6 -\kappa^8} \right] \, , \label{fnontrp} \end{equation} and its Bogomol'nyi completion takes the form \begin{eqnarray} S &=& 2 \pi r \, \int d \rho \left[ \rho \, \left( 2 \sqrt{A} (\partial_\rho \kappa)-\frac{1}{\rho} \sqrt{ \frac{(1-\kappa^2)^2 (1-(1-\kappa^2)^4) }{2 -4 \kappa^4+4 \kappa^6 -\kappa^8}} \,\,\, \right)^{\! 2} \right. \nonumber\\[4mm] &-& \left. \partial_\rho \left( \kappa^2 \, (\kappa^2-2 ) \right) \rule{0mm}{8mm} \right] \, . \label{bogcoB} \end{eqnarray} The solution to the equation \begin{equation} 2 \sqrt{A} (\partial_\rho \kappa)=\frac{1}{\rho} \sqrt{ \frac{(1-\kappa^2)^2 (1-(1-\kappa^2)^4) }{2 -4 \kappa^4+4 \kappa^6 -\kappa^8}} \label{soltoe} \end{equation} can be found numerically. The instanton action in this case is \begin{equation} S_{\rm inst} = 2 \pi r\,. \label{actinsp} \end{equation} This instanton has a total of six bosonic zero modes: the position, the size and three extra zero modes which can be generated by using the SU(2)$_{c+f}$ rotation (one of these modes corresponds to a trivial constant shift in $\zeta$). 
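Both ans\"atze can be checked numerically. The sketch below (our own check, with $r=1$) verifies that the profile (\ref{finss}) solves its BPS equation and reproduces the action $4\pi r$, independently of the size $a$. For instanton B, we read $A(\kappa)$ off by matching the $\kappa$-kinetic term of (\ref{fnontrp}) with the metric (\ref{bou}), $4A=f_1/r$; with this $A$ the combination under the square root in (\ref{soltoe}) equals $\kappa^2(1-\kappa^2)^2/A$, so the BPS equation collapses to $\rho\,\partial_\rho\kappa = \kappa(1-\kappa^2)/(2A)$, and $\kappa$ flows from $0$ at the origin to $1$ at infinity, so that the boundary term of (\ref{bogcoB}) gives $S_{\rm inst}=2\pi r$:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

r = 1.0   # overall scale in front of the metric
a = 0.7   # instanton-A size modulus

# --- Instanton A, Eq. (finss) ---
alpha  = lambda p: 0.5 * np.arccos((p**2 - a**2) / (p**2 + a**2))
dalpha = lambda p: -a / (p**2 + a**2)   # analytic d(alpha)/d(rho)

# residual of the BPS equation  alpha' + sin(alpha) cos(alpha)/rho = 0
rho = np.linspace(0.1, 20.0, 200)
res_A = np.max(np.abs(dalpha(rho) + np.sin(alpha(rho)) * np.cos(alpha(rho)) / rho))

# action (f64); evaluates to 4*pi*r, independently of the size a
integrand = lambda p: p * (dalpha(p)**2 + (np.sin(alpha(p)) * np.cos(alpha(p)) / p)**2)
S_A = 4.0 * np.pi * r * quad(integrand, 0.0, np.inf)[0]

# --- Instanton B, Eq. (soltoe), with 4A = f_1/r read off the metric ---
A = lambda x: (x**8 - 4*x**6 + 4*x**4 - 2) / (x**6 - 4*x**4 + 6*x**2 - 4)

def rhs(p, y):   # d(kappa)/d(rho) = kappa (1 - kappa^2) / (2 A rho)
    x = y[0]
    return [x * (1.0 - x**2) / (2.0 * A(x) * p)]

sol = solve_ivp(rhs, (1.0, 1.0e4), [0.5], rtol=1e-10, atol=1e-12)
k_inf = sol.y[0, -1]   # kappa flows toward 1 at large rho

# action from the boundary term of (bogcoB): -[kappa^2 (kappa^2 - 2)] from 0 to 1 = 1
S_B = 2.0 * np.pi * r * 1.0
print(res_A, S_A / np.pi, k_inf, S_B / np.pi)
```

Note that $S_B=2\pi r$ follows from the boundary term alone; the explicit choice of $A(\kappa)$ only affects the shape of the profile, not the topological value of the action.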
Therefore, in the vacuum with $\omega=0$ the dimension of the bosonic part of the instanton moduli space is six. The instanton A is a configuration with the topological charge twice as large as that of the instanton B. The instanton B is, therefore, the elementary instanton, while the instanton A is a composite object. The instanton A is not the most general instanton with topological charge 2. It is just a very special solution which can be found by a trivial embedding of the $\mathbb{CP}^1$ instanton. \subsection{Transversal fluctuations} \label{tf} As was mentioned previously, fixing the position in the noncompact part of the manifold (the distance in the case of 2-strings), and then quantizing the compact part is an approximation. In quantum field theories in $2+1$ dimensions or higher this strategy is easily justifiable since distinct vacua labeled by different expectation values of scalar fields obviously form separate nonoverlapping sectors in the Hilbert space. In $1+1$ dimensions the situation is subtler, and we must check the effect of long-range transversal fluctuations. A free scalar field in $1+1$ dimensions has a correlation function \begin{equation} \langle \varphi(0) \varphi(z)\rangle \propto \log z \, . \end{equation} At large distance it diverges; therefore, it seems impossible to set $\varphi(z)$ to a constant (equal to $\varphi_0$) at every point $z$. Translated into our context, this seemingly implies that the string position cannot be set constant on the world sheet. To regularize the problem one can consider a flux tube with a {\em finite} length $R$, attached to some probe infinitely massive monopole and antimonopole. 
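The logarithmic growth quoted above is easy to exhibit numerically before turning to the regularized problem. A small sketch (our own illustration, a lattice Gaussian free field on a periodic $N\times N$ grid): the subtracted correlator $a(x)=\langle\varphi(0)^2\rangle-\langle\varphi(0)\varphi(x)\rangle$ grows as $(1/2\pi)\ln|x|$, so its increments between $x$ and $2x$ approach $\ln 2/(2\pi)$:

```python
import numpy as np

N = 1024  # periodic N x N lattice, free massless scalar (Gaussian free field)
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
lam = 4.0 - 2.0*np.cos(2*np.pi*i/N) - 2.0*np.cos(2*np.pi*j/N)  # Laplacian eigenvalues
lam[0, 0] = np.inf                                             # remove the zero mode

def a(x):
    """a(x) = <phi(0)^2> - <phi(0) phi(x)> for a separation x along one lattice axis."""
    return np.sum((1.0 - np.cos(2*np.pi*i*x/N)) / lam) / N**2

a8, a16, a32 = a(8), a(16), a(32)
print(a16 - a8, a32 - a16, np.log(2)/(2*np.pi))  # increments approach ln(2)/(2 pi)
```

The fluctuations of the field are therefore unbounded at large separation, which is exactly the statement that a constant string position cannot be imposed on an infinite world sheet.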
The quantum mechanical wave function of the flux tube connecting the probe charges has a nonvanishing width $\tilde \ell $, which was computed in \cite{lmw}, \begin{equation} \tilde \ell^2 =\frac{1}{\pi T } \, \ln \frac{R}{\lambda} \, , \label{tilel} \end{equation} where $T=2 \pi \xi$ is the flux tube tension and $\lambda$ is a parameter which is of the same order of magnitude as the intrinsic thickness of the string $\ell \approx 1/(e_3 \sqrt{\xi})$, beyond which the string model is no longer applicable to the flux tube. The intrinsic string thickness $\ell$ is the parameter that must be compared with the width of transversal fluctuations. We thus obtain an estimate for the critical distance $R_c$ at which the transversal fluctuations become comparable with the intrinsic string thickness, \begin{equation} R_c \approx \frac{c}{e_3 \sqrt{\xi}} \exp \left(\frac{1}{e_3^2}\right) \approx c \, \ell \, \exp \left(\frac{1}{e_3^2}\right) \, , \label{dom9} \end{equation} where $c$ is a positive constant. In the limit of the weak bulk coupling, $e_3^2\ll 1$, we have $R_c\gg \ell$. If the string length $R$ is smaller than $R_c$, it is fully legitimate to treat the component vortices as coincident and to quantize just the compact part of the moduli space. Note that $R_c$ is of the same order of magnitude as $1/\Lambda_{1+1}$. This is the natural infrared cutoff for the quantization of coincident vortices. In the mass-deformed theory with $|\Delta m | \gg \Lambda_{1+1}$, it is possible to consider flux tubes that are short enough so that the transversal fluctuations are completely irrelevant. In the quantum case $|\Delta m | \rightarrow 0$ one must be more careful. Quantization of the internal manifold gives rise to states -- the kinks -- with thickness $1/\Lambda_{1+1}$, and this is exactly the length scale where the transversal fluctuations are as large as the string thickness. We can trust the result of the previous approximation (i.e. 
keeping the distance fixed and then quantizing the internal manifold) only if the internal manifold does not vary considerably when the distance changes by an amount comparable with the string thickness. \section{Renormalization group flow: an attempt} \label{rg} The renormalization group (RG) flow for nonlinear sigma models with a generic metric was studied in \cite{friedan,agfm}. The basic idea is that the RG flow changes the geometry of the sigma model. In the case of homogeneous spaces (such as $\mathbb{CP}^{N-1}$) the change of geometry amounts just to a change in an overall factor in front of the metric. This factor is identified as the coupling constant; it describes the overall scale of the target space. Say, for $\mathbb{CP}^1$ this is related to the radius of the sphere $S^2$. For more general geometries all elements of the metric $g_{ij}$, not just the overall scale, change due to the RG flow. The renormalization is governed by a $\beta_{ij}$ function which generalizes the well-known $\beta$ function of the homogeneous spaces, \begin{equation} \mu \frac{\partial g_{ij}}{\partial \mu} =\beta_{ij} \, , \qquad \beta_{ij}=R_{ij} \, , \label{RG} \end{equation} where $R_{ij}$ is the Ricci tensor \footnote{In the mathematical literature, this corresponds to the Ricci flow. Ricci flow in relation to vortex moduli space has been considered in a classical context in \cite{manton}.}. Equation (\ref{RG}) is valid at one loop. The two-loop contribution is non-zero and is proportional to \cite{agfm}: \begin{equation} \mathcal{D}^k \mathcal{D}_k R_{ij} +2 R_{ikjl} R^{kl} + 2 R_{ik} R_j^k \, ,\end{equation} where $\mathcal{D}_k$ is the covariant derivative built from the standard Christoffel symbols obtained from the metric $g_{ij}$. As usual in this paper, we put $z=0$, so that the metric and the Ricci tensor depend on four coordinates. 
Then the Ricci tensor and the metric tensor have a similar structure which will allow us to write the RG flow equations in a relatively simple form (\ref{a1}) -- (\ref{a3}). The HT metric at $z=0$ is only topologically equivalent to that of $\mathbb{CP}^2/Z_2$, while geometrically they are different. That is why, while in $\mathbb{CP}^2$ the RG flow reduces to a variation of a single parameter, in the HT case we will have to introduce three functions. In addition to the RG change of the overall scale factor (which certainly does take place), the geometry gets ``distorted'' in all directions too. The RG variations are faster in some directions and slower in others. If it were not for these distortions we would have to conclude that $r$ runs in the same way as in the $\mathbb{CP}^2$ model. After these preliminary remarks we move on to consider a class of metrics which generalize the one obtained in Sect.~\ref{tkintt}, \begin{equation} f_1(\kappa) d \kappa^2+ f_2(\kappa) \left[ d \alpha^2 +\left(\frac{\sin 2 \alpha}{2}\right)^2 d \beta^2 \right]+ f_3(\kappa) \left(d \zeta -2 (\sin^2 \alpha) \, d \beta \right)^2 \, , \label{mmetric} \end{equation} where $f_{1,2,3}$ are functions of $\kappa$. The functions that we found in Sect.~\ref{tkintt} correspond to \begin{eqnarray} f_1&=& r \frac{4 (\kappa^8-4 \kappa^6 +4 \kappa^4-2)}{\kappa^6 -4 \kappa^4 +6 \kappa^2 -4} \, , \nonumber\\[2mm] f_2&=& r \, 2 (\kappa^2-1)^2 \, , \nonumber\\[2mm] f_3&=& r \frac{\kappa^6 -4 \kappa^4 +6 \kappa^2 -4}{\kappa^8-4 \kappa^6 +4 \kappa^4-2} (\kappa^2-1)^2 \kappa^2 \,,\label{bou} \end{eqnarray} with $0 \leq \kappa \leq 1$. The metric of $\mathbb{CP}^2/\mathbb{Z}_2$ is, instead, given by \begin{equation} f_1=r \, , \qquad f_2=r \cos^2 \kappa \, , \qquad f_3=r \frac{\sin^2(2 \kappa)}{16} \, ,\end{equation} with $0 \leq \kappa \leq \pi/2$. 
\begin{figure}[h] \begin{center} $\begin{array}{c@{\hspace{.2in}}c@{\hspace{.2in}}c} \epsfxsize=1.5in \epsffile{pl1.eps} & \epsfxsize=1.5in \epsffile{pl2.eps} & \epsfxsize=1.5in \epsffile{pl3.eps} \end{array}$ \end{center} \caption{The functions $f_1(\kappa),\,\, f_2(\kappa),\,\, f_3(\kappa)$ in Eq. (\ref{bou}) for $r=1$.} \label{profili} \end{figure} It is important to stress that in the metric (\ref{mmetric}) there is a freedom to redefine the variable $\kappa$ by an arbitrary function. In other words, the above parametrization in terms of three functions $f_1$, $f_2$ and $f_3$ is redundant. To fix this redundancy we can introduce a new variable, \begin{equation} \lambda(\kappa)=\int_0^\kappa \sqrt{ f_1(\eta) } d \eta \, , \label{nonred} \end{equation} and then express $f_2$ and $f_3$ in terms of $\lambda$. The resulting metric can then be written as \begin{equation} d \lambda^2+ f_2(\lambda) \left[ d \alpha^2 +\left(\frac{\sin 2 \alpha}{2}\right)^2 d \beta^2 \right]+ f_3(\lambda) \left(d \zeta -2 (\sin^2 \alpha) \, d \beta \right)^2 \, . \label{mmetric2} \end{equation} The functions $f_2(\lambda)$ and $f_3(\lambda)$, together with the range of the $\lambda$ variation, $$0<\lambda<\lambda_f\, ,$$ specify the metric in a way that is not redundant. However, to write the RG equations, it is inconvenient to fix the redundancy as in Eq. (\ref{mmetric2}). A nice property of the class of metrics (\ref{mmetric}) is that we can write the one-loop RG equations as a system of differential equations for $f_{1,2,3}$. If we compute the Ricci tensor from the metric (\ref{mmetric}) and plug it back in Eq. 
(\ref{RG}), we get the following system of equations: \begin{eqnarray} && r \mu \frac{\partial f_1}{\partial \mu} -\frac{f_3''}{2 f_3}+ \frac{(f_3')^2}{4 f_3^2} +\frac{f_1' \, f_3'}{4 f_1 f_3} -\frac{f_2''}{f_2} +\frac{(f_2')^2}{2 f_2^2}+ \frac{f_1' f_2'}{2 f_1 f_2} =0 \, , \label{a1} \\[3mm] && r \mu \frac{\partial f_2}{\partial \mu} -\frac{f_2''}{2 f_1}-\frac{f_2' f_3'}{4 f_1 f_3} + \frac{f_1' f_2'}{4 f_1^2} -8 \frac{f_3}{f_2} +4 =0 \, , \label{a2} \\[3mm] && r \mu \frac{\partial f_3}{\partial \mu} -\frac{f_3''}{2 f_1}+\frac{(f_3')^2}{4 f_1 f_3}-\frac{f_2' f_3'}{2 f_1 f_2} +\frac{f_1' f_3'}{4 f_1^2} +8 \frac{f_3^2}{f_2^2} =0 \, , \label{a3} \end{eqnarray} where the prime denotes differentiation with respect to $\kappa$. This is a nontrivial property of metrics of the form (\ref{mmetric}); usually the Ricci tensor is a very complicated expression in terms of the metric. In our case it is quite simple; this is the reason why we managed to convert Eq. (\ref{RG}) into Eqs. (\ref{a1}) -- (\ref{a3}). When we try to solve Eqs. (\ref{a1}) -- (\ref{a3}), we find problems near $\kappa=1$, corresponding to the $(1,1)$ vortex. The solution for the profile $f_1$ is highly unstable and is not trustworthy. \begin{figure}[h] \begin{center} \leavevmode \epsfxsize 6 cm \epsffile{curvature.eps} \end{center} \caption{\footnotesize Scalar curvature as a function of $\kappa$ for $r=1$. At $\kappa=1$ the scalar curvature $R$ diverges. This is a signal of a singularity associated with the $(1,1)$ vortex. } \label{curvature} \end{figure} We would like to emphasize that, strictly speaking, we can trust Eq.~({\ref{RG}}) for the RG flow only far away from $\kappa=1$ (and, remember, $\kappa=1$ corresponds to the $(1,1)$ string). 
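A useful consistency check of Eqs. (\ref{a1}) -- (\ref{a3}) is that, for the homogeneous $\mathbb{CP}^2/\mathbb{Z}_2$ metric, their right-hand sides must reduce to a uniform rescaling, $r\mu\,\partial f_i/\partial\mu = -6 f_i/r$ for all three functions, so that only the overall scale runs, as expected for an Einstein metric. This can be verified symbolically (a sketch in Python/SymPy; the function names are ours):

```python
import sympy as sp

k, r = sp.symbols('kappa r', positive=True)
d = lambda f: sp.diff(f, k)

def rg_rhs(f1, f2, f3):
    """Right-hand sides r*mu*(df_i/dmu) of Eqs. (a1)-(a3)."""
    b1 = (d(d(f3))/(2*f3) - d(f3)**2/(4*f3**2) - d(f1)*d(f3)/(4*f1*f3)
          + d(d(f2))/f2 - d(f2)**2/(2*f2**2) - d(f1)*d(f2)/(2*f1*f2))
    b2 = (d(d(f2))/(2*f1) + d(f2)*d(f3)/(4*f1*f3) - d(f1)*d(f2)/(4*f1**2)
          + 8*f3/f2 - 4)
    b3 = (d(d(f3))/(2*f1) - d(f3)**2/(4*f1*f3) + d(f2)*d(f3)/(2*f1*f2)
          - d(f1)*d(f3)/(4*f1**2) - 8*f3**2/f2**2)
    return b1, b2, b3

# CP^2/Z_2 metric: f1 = r, f2 = r cos^2(kappa), f3 = r sin^2(2 kappa)/16
fs = (r, r*sp.cos(k)**2, r*sp.sin(2*k)**2/16)
ratios = [sp.simplify(b/f) for b, f in zip(rg_rhs(*fs), fs)]
num_vals = [float(rat.subs({k: 0.37, r: 1.0})) for rat in ratios]
print(num_vals)  # all three close to -6 at r = 1
```

The equality of the three ratios confirms that on the homogeneous space the system collapses to a single equation for the overall scale $r$, while for the HT metric (\ref{bou}) the three functions genuinely run differently.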
The problem is that the one-loop expression is trustworthy only in the limit of small scalar curvature $R$, \begin{eqnarray} R &=& \frac{1}{r} \left( \frac{8}{f_2} -\frac{8 f_3}{f_2^2} +\frac{f_1' f_2'}{f_1^2 f_2} +\frac{(f_2')^2}{2 f_1 f_2^2} +\frac{f_1' f_3'}{2 f_1^2 f_3} \right. \nonumber\\[3mm] &-& \left. \frac{f_2' f_3'}{f_1 f_2 f_3} +\frac{(f_3')^2}{2 f_1 f_3^2} -\frac{2 f_2''}{f_1 f_2}-\frac{f_3''}{f_1 f_3} \right). \end{eqnarray} In our example this quantity diverges at $\kappa=1$, as shown in Fig.~\ref{curvature}. Hence, we cannot trust the one-loop calculation in this domain. This is probably the origin of the difficulties that we find when we try to solve (\ref{a1}) -- (\ref{a3}) numerically. This is also consistent with the fact that the subspace corresponding to coincident vortices is not a manifold near the $(1,1)$ vortex (there is a conical singularity already in the topology). A possible way out is to consider the full metric, including the $z$ dependence. It could be that the singularity in the metric which makes the scalar curvature diverge will disappear once we consider the full six-dimensional metric and that this will make the RG flow calculation well defined \footnote{It is important to stress that the moduli space of coincident vortices already has a singularity in the topology at the $(1,1)$ vortex, because the space, strictly speaking, is not a manifold in the neighborhood of this point. The singularity in the topology disappears if we consider the full moduli space with arbitrary separation and orientation \cite{recon}; the full moduli space is then topologically a manifold in the neighborhood of every point.}. It is also possible that the divergence of the curvature near the $(1,1)$ vortex signals a general problem in studying the physics of that vacuum in a weakly coupled regime. A more detailed study of the full six-dimensional sigma model would be desirable in order to understand this point. 
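The divergence is straightforward to reproduce from the curvature expression above (a Python/SymPy sketch, with $r=1$ as in Fig.~\ref{curvature}); as a byproduct one can check that the same formula gives a constant positive curvature on the homogeneous $\mathbb{CP}^2/\mathbb{Z}_2$ metric, as it must for an Einstein space:

```python
import sympy as sp

k = sp.symbols('kappa', positive=True)
d = lambda f: sp.diff(f, k)

def scalar_R(f1, f2, f3):
    """Scalar curvature of the metric (mmetric), with r = 1."""
    return (8/f2 - 8*f3/f2**2 + d(f1)*d(f2)/(f1**2*f2) + d(f2)**2/(2*f1*f2**2)
            + d(f1)*d(f3)/(2*f1**2*f3) - d(f2)*d(f3)/(f1*f2*f3)
            + d(f3)**2/(2*f1*f3**2) - 2*d(d(f2))/(f1*f2) - d(d(f3))/(f1*f3))

# homogeneous CP^2/Z_2 metric: constant positive curvature
R_hom = sp.simplify(scalar_R(sp.Integer(1), sp.cos(k)**2, sp.sin(2*k)**2/16))
R_hom_val = float(R_hom.subs(k, sp.Rational(2, 5)))
print(R_hom_val)  # constant, equal to 24 at r = 1

# HT metric at z = 0, Eq. (bou): |R| blows up approaching the (1,1) vortex, kappa -> 1
p = k**8 - 4*k**6 + 4*k**4 - 2
q = k**6 - 4*k**4 + 6*k**2 - 4
R_HT = scalar_R(4*p/q, 2*(k**2 - 1)**2, (q/p)*(k**2 - 1)**2*k**2)
vals = [float(R_HT.subs(k, sp.Float(kv))) for kv in (0.9, 0.99, 0.999)]
print(vals)  # |R| grows without bound
```
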
In the Appendix another section of the moduli space is considered; it corresponds to antiparallel vortices at arbitrary distance $z$. In this submanifold, too, the curvature diverges at the $(1,1)$ vortex. \section{Conclusions} \label{conclu} We studied several aspects of coincident non-Abelian vortex strings using an effective description proposed in \cite{ht1,ht2}, suggested by the D-brane realization of $\mathcal{N}=2$ SQCD in type IIA string theory \cite{hw,witten97}. In the case of coincident strings we argued that the HT model describes, in a consistent way, a number of ``protected'' aspects of the world-sheet dynamics, such as the number of vacua, their symmetries and the masses of the confined monopoles. The topology of the string moduli space in field theory and that found from the brane construction \cite{ht1,ht2} coincide \cite{knp,knp2,knp3}. The situation with the metric is murkier; we know that for large string separations the two metrics are different. For this reason the HT model cannot be viewed as fully realistic. Despite this, we claim that the results presented in this paper would stay valid in the ``true'' model of multiple strings. The most important of them is the fact that composite monopoles can be confined on composite strings, and retain their BPS nature. The HT model emerges as a valuable (and in some instances, unique) tool in analyzing non-Abelian strings. On the other hand, this model is of significant interest {\em per se}. There are two obvious problems which should be addressed in the future: the large-$N$ solution of the HT model in the regimes (i) $k \sim N$ and (ii) $k\sim N^0$. \section*{Acknowledgments} We are grateful to D. Tong, A. Vainshtein, W. Vinci and A. Yung for very useful discussions. The work of MS was supported in part by DOE grant DE-FG02-94ER408. 
\section*{Appendix: Antiparallel-flux strings} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} The general six-dimensional metric for 2-strings is difficult to write in an explicit way. The main topic of this paper was the metric restricted to $z=0$, a much simpler task. There is another natural section of the moduli space where it is easy to write down the metric and the potential; it can be obtained by restricting to $\omega=0$. It corresponds to elementary vortices with the opposite internal orientations, i.e., the composite system $(1,0)+(0,1)$. The following gauge fixing can be used: \begin{eqnarray} a_i &=& r^{1/2} \, (\cos \alpha, e^{i \beta} \sin \alpha) \, , \nonumber\\[3mm] b_i &=& r^{1/2} \, ( e^{-i \beta} \sin \alpha, - \cos \alpha) \, , \qquad Z= \left(\begin{array}{cc} z & 0 \\[1mm] 0 & -z \\ \end{array}\right) \, . \end{eqnarray} By a straightforward calculation similar to those in Sects. \ref{tkintt} and \ref{ttmtt}, we can find both the metric and the potential for this section. The kinetic term is \begin{equation} 2 (\partial_\mu z)^2+8 r \, \frac{z^2}{r^2+4 z^2} \, \left[(\partial_\mu \alpha)^2 +\left(\frac{\sin 2 \alpha}{2}\right)^2 (\partial_\mu \beta)^2 \right] \, , \label{pa2} \end{equation} while the potential induced by the twisted mass term is \begin{equation} V= 8 m^2 r \left(\sin^2 2 \alpha\right)\, \frac{z^2}{r^2+ 4 z^2} \, . \end{equation} From these expressions it is easy to infer that the kinetic term for the $S^2$ part approaches its asymptotic value in a power-like manner, instead of the exponential law we would expect in the gapped bulk theory (this is a bad feature of the model). We can also check that the interactions between the component strings become relevant at $z\approx \sqrt{r}$, which is consistent with the expected vortex thickness in the weakly coupled limit (this is a good feature). 
\begin{figure}[h] \begin{center} \leavevmode \epsfxsize 6 cm \epsffile{curvature2.eps} \end{center} \caption{\footnotesize Scalar curvature for the moduli space section corresponding to the strings with the opposite values of $\vec{B}^3$, as a function of $z$ for $r=1$. } \label{curvature2} \end{figure} It is instructive to compute the scalar curvature for the metric (\ref{pa2}); we get \begin{equation} R= \frac{-2 r^3+28 r^2 z^2+48 r z^4+64 z^6}{r z^2 \left(r+4 z^2\right)^2} \rule{0mm}{9mm} \, . \label{pa4} \end{equation} This expression is plotted in Fig.~\ref{curvature2}. It diverges, $R\to -\infty$, as $z\to 0$. It is unclear what would happen if we could lift the restriction $\omega =0$. In the full moduli space the scalar curvature could still remain finite as $z\to 0$, or tend to $-\infty$ as in (\ref{pa4}).
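The two asymptotic regimes of (\ref{pa4}) can be read off directly (a small numerical check of ours, with $r=1$): as $z\to 0$, $R\simeq -2r^2/z^2\to-\infty$, while as $z\to\infty$, $R\to 4/r$, which is exactly the curvature $2/\rho^2$ of the asymptotic two-sphere of radius squared $\rho^2=r/2$ contained in the kinetic term (\ref{pa2}):

```python
import numpy as np

def R_pa4(z, r=1.0):
    """Scalar curvature of Eq. (pa4)."""
    num = -2*r**3 + 28*r**2*z**2 + 48*r*z**4 + 64*z**6
    return num / (r * z**2 * (r + 4*z**2)**2)

small = [R_pa4(z) for z in (1e-1, 1e-2, 1e-3)]  # ~ -2/z^2 at r=1, diverges to -infinity
large = R_pa4(1e3)                              # -> 4/r, curvature of the asymptotic S^2
print(small, large)
```
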
\subsubsection*{\bibname}} \usepackage{header} \newcommand{\mathrm{stack}}{\mathrm{stack}} \newcommand{\mathrm{bayes}}{\mathrm{bayes}} \newcommand{\mathrm{boost}}{\mathrm{boost}} \newcommand{\textsc{MHGP}\xspace}{\textsc{MHGP}\xspace} \newcommand{Mean Hierarchical GP\xspace}{Mean Hierarchical GP\xspace} \newcommand{\textsc{HGP}\xspace}{\textsc{HGP}\xspace} \newcommand{Hierarchical GP\xspace}{Hierarchical GP\xspace} \newcommand{\textsc{SHGP}\xspace}{\textsc{SHGP}\xspace} \newcommand{Sequential Hierarchical GP\xspace}{Sequential Hierarchical GP\xspace} \newcommand{\textsc{MTGP}\xspace}{\textsc{MTGP}\xspace} \newcommand{Multi-Task GP\xspace}{Multi-Task GP\xspace} \newcommand{\textsc{MTKGP}\xspace}{\textsc{MTKGP}\xspace} \newcommand{Multi-Task-Single-$k$ GP\xspace}{Multi-Task-Single-$k$ GP\xspace} \newcommand{\textsc{BHGP}\xspace}{\textsc{BHGP}\xspace} \newcommand{Boosted Hierarchical GP\xspace}{Boosted Hierarchical GP\xspace} \newcommand{\textsc{WSGP}\xspace}{\textsc{WSGP}\xspace} \newcommand{Weighted Source GP\xspace}{Weighted Source GP\xspace} \newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\mathbf{X}}{\mathbf{X}} \newcommand{\funcdist}[3]{f_{{#2}}^{{#1}}{#3}} \newcommand{\stackdist}[2]{\funcdist{\mathrm{stack}}{#1}{#2}} \newcommand{\uidist}[2]{\funcdist{\mathrm{bayes}}{#1}{#2}} \newcommand{\spldist}[2]{\funcdist{\mathrm{boost}}{#1}{#2}} \newcommand{\expectation}[1]{\mathbb{E}\left(#1\right)} \newcommand{\funcmean}[3]{\expectation{\funcdist{#1}{#2}{#3}}} \newcommand{\stackmean}[2]{\funcmean{\mathrm{stack}}{#1}{#2}} \newcommand{\uimean}[2]{\funcmean{\mathrm{bayes}}{#1}{#2}} \newcommand{\splmean}[2]{\funcmean{\mathrm{boost}}{#1}{#2}} \newcommand{\cov}[1]{\mathrm{cov}\left(#1\right)} \newcommand{\var}[1]{\mathrm{var}\left(#1\right)} \newcommand{\funccov}[3]{\cov{\funcdist{#1}{#2}{#3}}} \newcommand{\stackcov}[2]{\funccov{\mathrm{stack}}{#1}{#2}} \newcommand{\uicov}[2]{\funccov{\mathrm{bayes}}{#1}{#2}} \newcommand{\splcov}[2]{\funccov{\mathrm{boost}}{#1}{#2}} 
\newcommand{\abs}[1]{\left|#1\right|} \newcommand{\PDF}[1]{p\left(#1\right)} \newcommand{\condPDF}[2]{p\left(#1\middle| #2\right)} \newcommand{\normalPDF}[3]{p\left(#1; #2, #3\right)} \newcommand{\GP}[2]{\mathcal{GP}\left(#1, #2\right)} \newcommand{\normaldist}[2]{\mathcal{N}\left(#1, #2\right)} \newcommand{\alpha_{*,t}}{\alpha_{*,t}} \begin{document} \twocolumn[ \aistatstitle{Transfer Learning with Gaussian Processes for Bayesian Optimization} \aistatsauthor{Petru Tighineanu \And Kathrin Skubch \And Paul Baireuther} \aistatsauthor{ Attila Reiss \And Felix Berkenkamp \And Julia Vinogradska} \aistatsaddress{Bosch Center for Artificial Intelligence, Renningen, Germany } \runningauthor{Tighineanu, Skubch, Baireuther, Reiss, Berkenkamp, Vinogradska} ] \doparttoc \faketableofcontents \begin{abstract} Bayesian optimization is a powerful paradigm to optimize black-box functions based on scarce and noisy data. Its data efficiency can be further improved by transfer learning from related tasks. While recent transfer models meta-learn a prior based on large amounts of data, in the low-data regime methods that exploit the closed-form posterior of Gaussian processes (GPs) have an advantage. In this setting, several analytically tractable transfer-model posteriors have been proposed, but the relative advantages of these methods are not well understood. In this paper, we provide a unified view on hierarchical GP models for transfer learning, which allows us to analyze the relationship between methods. As part of the analysis, we develop a novel closed-form boosted GP transfer model that fits between existing approaches in terms of complexity. We evaluate the performance of the different approaches in large-scale experiments and highlight strengths and weaknesses of the different transfer-learning methods. \end{abstract} \section{INTRODUCTION} Bayesian optimization (BO) is an elegant and powerful approach to black-box optimization. 
It has been successfully applied to several challenging black-box optimization problems, ranging from hyperparameter optimization~\citep{snoek_hpo} to materials design \citep{zhang2020} and controller tuning \citep{calandra2016bayesian}. One major advantage of BO is sample efficiency: it is specifically tailored to expensive black-box functions, where each function evaluation is costly, e.g., when laborious experiments on physical systems are involved. For such applications, the benefit of fewer function evaluations outweighs the increase in computational cost to make informed decisions about where to measure next. A key ingredient of BO's sample efficiency is a probabilistic surrogate model of the objective. In the low-data regime, the most commonly employed model is a Gaussian process (GP) \citep{rasmussengp}, as it provides closed-form posteriors with accurate uncertainty estimates to guide the search for the global optimum. Many black-box optimization problems are not one-off tasks; rather, several related instances of the same task are encountered. Here, the data efficiency of optimization can be further improved by transferring knowledge from related tasks. Especially in the low-data regime that BO specializes in, transfer learning provides great value. To this end, \citet{NIPS2013_swersky,EnvGP,pmlr-v54-shilton17a,golovin2017stackgp} propose several approaches to transfer learning for BO. These transfer learning models combine related task data and the data of the current task into a joint model that guides BO in the search for the current task's global optimum. This setting implies a certain asymmetry: while we may use data from related tasks, we are only interested in informative models for the current task. Further, the related task data are considered given and additional samples cannot be acquired. 
One common approach to account for this asymmetry, often referred to as a \emph{hierarchical model}, is to model the difference between the current and related tasks. However, when combining different task data into a single GP model, the computational complexity scales cubically with the number of data points. This complexity can easily become prohibitive if data from multiple related tasks are available. Several other transfer models with tractable posteriors and more favourable scaling have been proposed, e.g., an ensemble of GPs~\citep{feurer2018rgpe} or a hierarchical model for the mean prior~\citep{golovin2017stackgp}. However, their relation to the joint models with full cubic complexity, their relative strengths and weaknesses, and their underlying assumptions are not well understood. \section{RELATED WORK AND CONTRIBUTIONS} The problem of speeding up Bayesian optimization by using additional information besides the objective function evaluations is a central topic in the black-box optimization community. A large body of literature promotes the idea to meta-learn models \citep{pmlr-v70-finn17a,ABLR,flennerhag2018transferring}, whole optimizers \citep{pmlr-v70-chen17e} or acquisition functions \citep{Volpp2020Meta-Learning} for BO. These methods are powerful, but require large amounts of training data to extract the characteristics of a class of tasks. In the low-data regime, methods that rely on the uncertainty estimates of a GP posterior to guide the search outperform large-scale meta-learning. Several GP-based transfer learning approaches have been proposed, which can mostly be summarized into two categories that either \textit{(i)} build a global GP model, or \textit{(ii)} maintain separate GPs for each data set. 
Several options exist to aggregate this data into a single global model: multitask GPs by \citet{NIPS2013_swersky,pmlr-v33-yogatama14} model correlations between tasks, \citet{NIPS2017_MISO} model task biases with a joint linear coregionalization kernel and some independence assumptions, \citet{EnvGP} propose envelope GPs that adapt the noise model to accommodate all data, and DiffGPs by \citet{pmlr-v54-shilton17a} regress on prediction-bias-corrected related-task data. In multi-fidelity BO, which only differs from the transfer learning setting by removing the constraint that related task data cannot be acquired during optimization, \citet{forrester2007multi,Marco17VirtualvsReal} successfully employ GP models. All these approaches ultimately compute a GP model on a data set of the size of the joint related and target task data sets and suffer from the cubic complexity. On the other hand, \citet{golovin2017stackgp}, \citet{feurer2018rgpe} and \citet{Wistuba} maintain separate GPs for each data set to avoid the accumulated scaling. In the hierarchical model presented in \citet{golovin2017stackgp}, this reduction in computational complexity stems from neglecting the uncertainty of the source models for the transfer, which is detrimental to the optimization process. Instead, RGPE~\citep{feurer2018rgpe} builds a ranking-based ensemble of the separate GPs, which involves careful tuning of the weight estimator and also does not result in a Bayesian treatment of the uncertainty. In light of the described properties of these two classes of GP-based transfer learning models, the need for models that can bridge the gap between fully considering source uncertainty and not considering it at all becomes evident. Such an intermediate regime may be particularly relevant for optimizing monetarily expensive processes of relatively short duration, such as destructive measurements. 
In the engineering community, efficient hierarchical models have been developed for the special case where related tasks are evaluated on decreasing subsets of inputs, see e.g. \cite{kennedy2000predicting, le2014recursive}. However, this setting is not suitable for transfer learning from generic historical data. While computationally efficient approximations to GPs exist that could be applied to any GP model (including those proposed in this paper), e.g. by \citet{lazaro2010sparse}, this is an orthogonal research direction that is agnostic to the structure of the underlying data and treats every data point the same. Contextual optimization \citep{krause2011contextual} is another related line of research in which context variables influence the process to-be-optimized. By contrast, transfer learning leverages discrete historical tasks as context. \paragraph{Contributions} In this paper, we provide a unifying view on GP-based transfer learning models and shed light on their relations and assumptions. This unified view enables a thorough analysis of the methods: their computational complexity, their modelling assumptions and their respective advantages for different transfer scenarios. In addition, we fill the gaps in this unified framework. Firstly, we present a modular version of Hierarchical GP\xspace, which we name \emph{Sequential Hierarchical GP\xspace}, with improved scaling while maintaining competitive performance. Secondly, we develop a new transfer learning model, which we name \emph{Boosted Hierarchical GP\xspace}, that is a middle ground between existing approaches in terms of computational complexity and optimization performance. This method is conceptually inspired by boosting architectures, and yet results in an analytically tractable posterior. A comprehensive empirical evaluation of all methods supports our analytical insights and provides valuable guidance on the different methods' comparative advantages. 
\section{PROBLEM STATEMENT} \label{sec:problem_statement} Our goal is to find the optimum of a \emph{target} black-box function $f_t \colon D \to \mathbb{R}$, based on noisy observations $\mathcal{D}_t = \{ \mathbf{x}_n, y_n \}_{n=1}^{N_t}$ where each observation $y_n = f_t(\mathbf{x}_n) + \varepsilon_n$ is corrupted by \iid zero-mean Gaussian noise, $\varepsilon_n \sim \mathcal{N}(0, \sigma_t^2)$. We model our prior belief over the target function $f_t$ with a GP \citep{rasmussengp}, $ f_t \sim \GP{m}{k}, $ with mean function $m(\cdot)$ and kernel function $k(\cdot, \cdot)$. Conditioned on the data, the posterior distribution is a GP again with the posterior mean and variance at a query point $\mathbf{x}_*$ \begin{equation*} \begin{split} \label{eq:basic_posteriors} \expectation{f_t \mid \mathbf{x}_*, \mathcal D_t} &=m(\mathbf{x}_*) + k(\mathbf{x}_*, \mathbf{X}_t)\\ &\times\left(k(\mathbf{X}_t, \mathbf{X}_t) + \sigma_t^2 \mathbf{I}\right)^{-1} \left(\mathbf{y}_t-m(\mathbf{X}_t)\right), \\ \var{f_t \mid \mathbf{x}_*, \mathcal D_t} &= k(\mathbf{x}_*, \mathbf{x}_*) - k(\mathbf{x}_*, \mathbf{X}_t )\\ &\times\left(k(\mathbf{X}_t, \mathbf{X}_t) +\sigma_t^2\mathbf{I}\right)^{-1} k(\mathbf{X}_t, \mathbf{x}_*), \end{split} \end{equation*} where $\mathbf{X}_t = (\mathbf{x}_1, \dots, \mathbf{x}_{N_t})$ and $\mathbf{y}_t = (y_1, \dots, y_{N_t})$ is the vector of corresponding, noisy observations. Given this belief over the function $f_t$, BO aims to actively select a new query point $\mathbf{x}_{N_t + 1}$ that is informative about the optimum of $f_t$. This requires trading off exploitation and exploration and is usually accomplished through an acquisition function $\alpha$ that depends on the posterior, \begin{equation} \mathbf{x}_{N_t + 1} = \argmax_{\mathbf{x} \in D} \alpha( f_t \mid \mathbf{x}, \mathcal{D}_t ). 
\end{equation} Common choices for $\alpha$ are the expected improvement \citep{jones1998efficient} and the upper confidence bound \citep{srinivas10gaussian}, among several others. The data-efficiency of these methods crucially depends on how fast the posterior distribution collapses around the true target function $f_t$. To enable faster optimization, transfer learning additionally exploits data from $n_s \geq 1$ related \emph{source} tasks based on functions $f_s\colon D \rightarrow \mathbb{R}$. While all methods discussed in the following apply to this general setting, we focus on $n_s = 1$ for ease of exposition. In this case, we have $N_s$ additional data points $\mathcal D_s= \{\mathbf{X}_s, \mathbf{y}_s\}$ that can be used to improve the target model. Further, we assume without loss of generality that the source is modelled with a GP, $f_s \sim \GP{0}{k_s}$, with zero mean prior and arbitrary kernel function $k_s.$ \section{GAUSSIAN PROCESS TRANSFER MODELS} \label{sec:models} In this section, we provide a unified overview of existing GP models for transfer learning that allow for a closed-form posterior distribution. \paragraph{Multi-Task GP\xspace (\textsc{MTGP}\xspace)} Our starting point is the kernel function $k$ of the \emph{joint} model of source and target. Let $(\mathbf{x}, i), (\mathbf{x}', j)$ be two points from tasks $i,j \in \{ s, t \}$. We assume that $k$ is a sum of separable kernels \begin{equation} \label{eq:general_kernel} k(( \mathbf{x}, i), ( \mathbf{x}', j)) = \sum_{\nu\in \{ s, t \}} [\mathbf{W}_\nu]_{i, j} \, k_\nu( \mathbf{x}, \mathbf{x}') + \delta_{\mathbf{x} \mathbf{x}'}\delta_{ij}\sigma_i^2 , \end{equation} where the Kronecker delta $\delta_{ij}$ is equal to one if $i=j$ and zero otherwise, and $k_\nu$ are arbitrary kernel functions \citep{alvarez2012kernels}.
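Before specializing the coregionalization structure, the single-task posterior and acquisition step from \cref{sec:problem_statement} can be sketched in a few lines of NumPy. This is a didactic illustration with an RBF kernel and illustrative hyperparameter values, not the implementation used in our experiments:

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel; lengthscale and variance are illustrative."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xq, noise=0.1):
    """Posterior mean and variance of a zero-mean GP at query points Xq."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    Kq = rbf(Xq, X)                                  # k(x_*, X_t)
    mu = Kq @ np.linalg.solve(K, y)
    var = rbf(Xq, Xq).diagonal() - np.sum(Kq * np.linalg.solve(K, Kq.T).T, 1)
    return mu, var

# Toy target task: f_t(x) = x sin(x), observed with noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (8, 1))
y = (X * np.sin(X)).ravel() + 0.1 * rng.standard_normal(8)

# Upper-confidence-bound acquisition on a dense grid (beta = 2 is illustrative).
Xq = np.linspace(-3.0, 3.0, 200)[:, None]
mu, var = gp_posterior(X, y, Xq)
ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
x_next = Xq[np.argmax(ucb)]                          # next query point
```

In practice, the acquisition function is maximized with a gradient-based or multi-start optimizer rather than on a fixed grid.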
The positive semi-definite matrices $\mathbf{W}_\nu$ are often referred to as \emph{coregionalization matrices}, since their diagonal (off-diagonal) entries control correlations within (between) data sets. When counting the hyperparameters, we assume that each kernel function has at least one hyperparameter. For $n_s>1$ sources, this model has $\mathcal{O}(n_s^3)$ scalar hyperparameters from the matrices $\mathbf{W}_\nu$, $\mathcal{O}(n_s)$ scalar noise hyperparameters $\sigma_i$, and, in addition, the hyperparameters of the kernels $k_\nu$. The computational complexity of training this model with a given set of hyperparameters scales as $\mathcal{O}(N^3)$, where $N$ is the total number of data points from all tasks. This makes it expensive and challenging to optimize the hyperparameters of the model. In the following, we introduce several simplifications from the literature that have fewer hyperparameters and/or better scaling properties. \paragraph{Multi-Task-Single-$k$ GP\xspace (\textsc{MTKGP}\xspace)} A common heuristic is to assume that the source and target functions share a common structure and consider $k_s=k_t$. The joint kernel contains only one coregionalization matrix, which reduces the number of scalar hyperparameters to $\mathcal{O}(n_s^2)$, while the computational complexity is the same as for \cref{eq:general_kernel}. \paragraph{Weighted Source GP\xspace (\textsc{WSGP}\xspace)} Another common simplification is to set correlations between different source data sets to zero \begin{align}\label{linear-combinations} [\mathbf{W}_{s}]_{i,j} &= \delta_{i,s}\delta_{j,s} + w_{st}, &&[\mathbf{W}_{t}]_{i,j} = \delta_{i,t}\delta_{j,t}. \end{align} The hyperparameters $w_{st}$ quantify how much the source is correlated with the target. Each coregionalization matrix has at most one parameter, which reduces the total parameter number to $\mathcal{O}(n_s)$.
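The role of the coregionalization matrices can be made concrete with a short NumPy sketch that assembles the joint Gram matrix of \cref{eq:general_kernel} for one source and one target task. The correlation weight $w$ and all kernel choices below are illustrative, not tuned values:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel for 1-D inputs stored as column vectors."""
    return np.exp(-0.5 * (A - B.T) ** 2 / ls**2)

def joint_gram(X, tasks, Ws, kernels, noises):
    """Gram matrix of the sum-of-separable-kernels model:
    K[(x,i),(x',j)] = sum_nu [W_nu]_{ij} k_nu(x, x'), plus noise on the diagonal."""
    K = sum(W[np.ix_(tasks, tasks)] * k(X, X) for W, k in zip(Ws, kernels))
    K[np.diag_indices_from(K)] += noises[tasks] ** 2
    return K

# One source task (index 0) and one target task (index 1) with 1-D inputs.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (6, 1))
tasks = np.array([0, 0, 0, 1, 1, 1])
noises = np.array([0.1, 0.1])
w = 0.7  # illustrative source-target correlation weight

# HGP: W_s is all ones, W_t selects target-target entries only.
W_t = np.diag([0.0, 1.0])
K_hgp = joint_gram(X, tasks, [np.ones((2, 2)), W_t], [rbf, rbf], noises)

# WSGP: [W_s]_{ij} = delta_{is} delta_{js} + w.
W_s_wsgp = np.full((2, 2), w) + np.diag([1.0, 0.0])
K_wsgp = joint_gram(X, tasks, [W_s_wsgp, W_t], [rbf, rbf], noises)
```

Since each summand is a Schur product of positive semi-definite matrices, the resulting Gram matrices are valid covariance matrices.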
\paragraph{Hierarchical GP\xspace (\textsc{HGP}\xspace)} The asymmetric setting in transfer learning, i.e., modelling the target but not the source tasks, motivates a hierarchical approach to model the differences of target to source data with an additive kernel defined by % \begin{align} \label{eq:hierarchical-kernel} [\mathbf{W}_{s}]_{i,j} &= 1, &&[\mathbf{W}_{t}]_{i,j} = \delta_{i,t}\delta_{j,t}. \end{align} As in \textsc{WSGP}\xspace, the number of scalar parameters is of order $\mathcal{O}(n_s).$ If optimized jointly, the computational complexity is the same as of \textsc{MTGP}\xspace. \paragraph{Mean Hierarchical GP\xspace (\textsc{MHGP}\xspace)} \citet{golovin2017stackgp} propose a simpler way to transfer information from a pretrained source model to the target and only use the posterior mean of the source model as prior mean for the target. All correlations of the source model are neglected, which leads to \begin{align} [\mathbf{W}_{s}]_{i,j} &= \delta_{i,s}\delta_{j,s} &&[\mathbf{W}_{t}]_{i,j} = \delta_{i,t}\delta_{j,t}. \end{align} The kernel is block diagonal in the different tasks. The training complexity of \textsc{MHGP}\xspace is therefore the complexity of training the target model plus the additional cost of evaluating the source posterior mean at the target points. Note that \citet{golovin2017stackgp} additionally combine source and target uncertainty heuristically, which we do not consider for ease of exposition. \begin{table*} \caption{Computational complexity for one source task. Training the source model has a complexity of $\mathcal{O}(N_s^3)$. Querying the target has a complexity of $\mathcal{O}(N_t^2 + N_s)$ for MHGP and $\mathcal{O}((N_t + N_s)^2)$ for the other techniques. 
In \cref{ap:complexity_multiple_sources} we discuss the generalization to multiple sources.} \label{tab:computational_complexity} \begin{center} \begin{tabular}{lllll} \toprule Models & Abbreviation & HPs & Joint HPO & Training of target \\ \midrule Multi-Task GP\xspace & \textsc{MTGP}\xspace & $\mathcal{O}(n_s^3)$ & yes & $\mathcal{O}((N_t + N_s)^3)$ \\ Multi-Task-Single-$k$ GP\xspace & \textsc{MTKGP}\xspace & $\mathcal{O}(n_s^2)$ & yes & $\mathcal{O}((N_t + N_s)^3)$ \\ Weighted Source GP\xspace & \textsc{WSGP}\xspace & $\mathcal{O}(n_s)$ & yes & $\mathcal{O}((N_t + N_s)^3)$ \\ Hierarchical GP\xspace & \textsc{HGP}\xspace & $\mathcal{O}(n_s)$ & yes & $\mathcal{O}((N_t + N_s)^3)$ \\ Sequential Hierarchical GP\xspace & \textsc{SHGP}\xspace & $\mathcal{O}(n_s)$ & no & $\mathcal{O}(N_t^3 + N_t^2 N_s + N_t N_s^2)$ \\ Boosted Hierarchical GP\xspace & \textsc{BHGP}\xspace & $\mathcal{O}(n_s)$ & no & $\mathcal{O}(N_t^3 + N_t^2 N_s + N_t N_s^2)$ \\ Mean Hierarchical GP\xspace & \textsc{MHGP}\xspace & $\mathcal{O}(n_s)$ & no & $\mathcal{O}(N_t^3 + N_t N_s)$ \\ \bottomrule \end{tabular} \end{center} \end{table*} \section{INTERMEDIATE-COMPLEXITY TRANSFER MODELS} \label{sec:intermediate-complexity-models} The Bayesian methods presented above require an expensive optimization of the joint likelihood. In this section, we introduce our novel models, Sequential Hierarchical GP\xspace (\textsc{SHGP}\xspace) and Boosted Hierarchical GP\xspace (\textsc{BHGP}\xspace), which leverage the asymmetric setting of transfer learning to lower the complexity. \paragraph{Sequential Hierarchical GP\xspace (\textsc{SHGP}\xspace)} The starting point is the \textsc{HGP}\xspace model from \cref{eq:hierarchical-kernel} in which both the source, $p(\theta_s|\mathcal{D})$, and target, $p(\theta_t|\mathcal{D})$, model parameters are influenced by \emph{all} task data, $\mathcal{D}=\mathcal{D}_s\cup\mathcal{D}_t$. 
Here, the quantities $\theta_s$ and $\theta_t$ parametrize the prior distribution of the GP. The asymmetric nature of transfer learning, in which the target data are accumulated during optimization while the source data remain invariant, motivates the use of a modular Bayesian approach in which the target data do not influence the source model \citep{bayarri2009modularization}. Training such a model involves (i) finding a suitable $p(\theta_s | \mathcal{D}_s)$ by optimizing the \emph{partial} likelihood, $\mathcal{L}_s$, and (ii) finding $p(\theta_t | \mathcal{D})$ by optimizing the total likelihood while keeping $\mathcal{L}_s$ fixed to the value from the first step. This sequential training procedure is the defining feature of \textsc{SHGP}\xspace. Under more restrictive assumptions and scope, a similar approach was proposed by \citet{kennedy2000predicting, le2014recursive}. The modular nature of \textsc{SHGP}\xspace leads to a large decrease in training complexity compared to \textsc{HGP}\xspace, as shown later. The price to pay for the complexity reduction is the inability of the source model to react to the target data, which may become an issue in case $p(\theta_s|\mathcal{D}_s)$ is misspecified. We focus, for concreteness, on hierarchical models in this paper but note that such modularizations, which decrease the training complexity, can, in principle, be conducted for other Bayesian designs from \cref{sec:models}, too. Exploring such approaches is left for future work. \paragraph{Boosted Hierarchical GP\xspace (\textsc{BHGP}\xspace)} In \textsc{SHGP}\xspace, the source model parameters, $\theta_s$, do not depend on the target data but the source posterior distribution does. We propose \textsc{BHGP}\xspace, which also makes the latter independent of the target data.
This ad-hoc assumption takes \textsc{BHGP}\xspace outside the Bayesian realm but provides an elegant connection to boosting, a well-known approach in machine learning that combines an ensemble of weak learners into a strong one \citep{schapire2003boosting}. We study this connection in \cref{sec:equivalences}. In contrast to \textsc{HGP}\xspace, this leads to an asymmetric treatment of observed target data and query points. In \cref{sec:boostedgp} we show that this approach results in the kernel \begin{equation} \label{eq:boosted_kernel} k((\mathbf{x}, i), (\mathbf{x}', j)) = \sum_{\nu \in \{s, t, *\}} [\mathbf{W}_\nu]_{i, j} k_\nu(\mathbf{x}, \mathbf{x}') + \delta_{\mathbf{x}\mathbf{x}'}\delta_{ij}\sigma_i^2 \end{equation} with $i, j \in \{s, t, *\}$, $k_* = k_t + \Sigma_*^{\mathrm{boost}}$, $\sigma_* = 0$, and \begin{align*} [\mathbf{W}_*]_{ij} &= \delta_{i*} \delta_{j*}, &&[\mathbf{W}_{s}]_{ij} = \delta_{is}\delta_{js}, &&[\mathbf{W}_{t}]_{ij} = \delta_{it}\delta_{jt}. \end{align*} Note that \textsc{BHGP}\xspace introduces an additive term $ \Sigma_*^{\mathrm{boost}} = \Sigma^s_{*, *} + \alpha_{*,t}\Sigma^s_{t, t} \alpha_{*,t}^T -\alpha_{*,t}\Sigma^s_{t, *} -\Sigma^s_{*, t} \alpha_{*,t}^T $ to the covariance of the query points, which resembles a robustified version of the target model. Here $\alpha_{*,t} = k_t(\mathbf{x}_*, \mathbf{X}_t)\left(k_t(\mathbf{X}_t, \mathbf{X}_t) +\sigma_t^2\mathbf{I}\right)^{-1}$, and $\Sigma_{t,t}^s$, $\Sigma_{t,*}^s$, $\Sigma_{*,t}^s$, $\Sigma_{*,*}^s$ are the blocks of the posterior covariance matrix of the source evaluated at target and query points. The number of hyperparameters is equal to that of \textsc{MHGP}\xspace, while the computational complexity is the same as that of \textsc{SHGP}\xspace.
In the following, we present insights on the connections between these methods and their design choices. Mean Hierarchical GP\xspace is the simplest of the models. It trains a GP on the source data and propagates the posterior mean to be the prior mean of the target model. This approach has the lowest computational complexity, see \cref{tab:computational_complexity} (\textsc{MHGP}\xspace). Neglecting the transfer of uncertainty from source to target comes at a cost, as shown in the experimental section (\cref{sec:experiments}): well-calibrated uncertainty estimates are at the core of BO's sample efficiency, and discarding the source uncertainty may be detrimental to the optimization. So far we have adopted a Bayesian view when presenting the transfer models in \cref{sec:models,sec:intermediate-complexity-models}. This view is, however, not particularly useful for the non-Bayesian nature of \textsc{BHGP}\xspace. We therefore switch to an alternative and unifying view in which uncertainty propagates via the stochastic realizations of the models. In particular, we study the creation of an ensemble of target models based on functional samples from the source posterior. Averaging over the ensemble would then lead to the desired target model. In this section, we show that both our proposed transfer-learning models, \textsc{SHGP}\xspace and \textsc{BHGP}\xspace, are closed-form solutions to such averaging procedures. Performing model averaging over the ensemble of target \emph{priors} results in the target model of Hierarchical GP\xspace, as we show in \cref{sec:bayesian_averaging}. In light of the ensemble averaging, this kernel naturally lends itself to a sequential optimization of hyperparameters, resulting in our proposed Sequential Hierarchical GP\xspace.
The key advantages of this approach are the accurate uncertainty estimates with reduced computational complexity, see \cref{tab:computational_complexity} (\textsc{HGP}\xspace vs \textsc{SHGP}\xspace), and the sequential optimization of weakly correlated subsets of hyperparameters, which stabilizes training and may improve performance, see \cref{fig:experiments_multi_source}. By contrast, averaging over the ensemble of target \emph{posterior distributions} leads to Boosted Hierarchical GP\xspace, as we show in \cref{sec:boostedgp}. Here, each target posterior in the ensemble is a GP with a sample from the source posterior as prior mean function. This model inherits the uncertainty in a non-Bayesian fashion and is fundamentally different from Sequential Hierarchical GP\xspace. Before diving into the theoretical analysis, we illustrate the fundamental differences between the three hierarchical models in \cref{fig:comparison}. In the plots, we train the techniques on data generated from an Alpine function family, $f(x; c)=x\sin(x+\pi) + cx$, with one input, $x\in(-10, 10)$, and one parameter defining the family, $c\in\R$. The source data are generated uniformly within $x\in(-10,0)$ with $c=1/2$, which is why the source posterior distribution is uncertain on the right, $x\in(0,10)$, see \cref{fig:comparison}(a). The target data are generated with $c=-1/2$ for $X_t=(1, 2, 3, 4)$. The posterior of Mean Hierarchical GP\xspace\ underestimates the uncertainty of the target function on the right, since its uncertainty originates solely from the target data, see \cref{fig:comparison}(b). Boosted Hierarchical GP\xspace alleviates this shortcoming on the right, while on the left it has the same uncertainty since the source model is confident. The posterior of Sequential Hierarchical GP\xspace is Bayesian and correctly captures the variation of the target function on the right, see \cref{fig:comparison}(c).
On the left, Sequential Hierarchical GP\xspace follows the source model, since the target points are well explained by the source model alone, see \cref{fig:comparison}(a), and no extra uncertainty is required. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/comparison.pdf} \caption{Visualization of key hierarchical transfer-learning models presented in the paper. The posteriors of the source model (a), of \textsc{MHGP}\xspace\ and \textsc{BHGP}\xspace\ (b), and of \textsc{SHGP}\xspace\ (c) are shown in terms of the mean function and 95\% confidence intervals.} \label{fig:comparison} \end{figure} \subsection{Bayesian model averaging}\label{sec:bayesian_averaging} In this section, we show that Bayesian model averaging over prior mean functions of the target, which are distributed according to the posterior of the source, leads to the \textsc{HGP}\xspace model defined by \cref{eq:hierarchical-kernel}. \begin{restatable}{proposition}{claimuikernel}\label{claim:ui_kernel} Let $f_s \sim \GP{0}{k_s}$, $f_t|f_s\sim \GP{f_s}{k_t}$, and \begin{equation} \label{eq:bayesianaverage} p(\uidist{t}{} \mid \mathcal{D}_t)=\int p(f_t \mid f_s, \mathcal{D}_t) p(f_s \mid \mathcal{D}_t) df_s. \end{equation} Then, the joint model of $\funcdist{}{s}{}$ and $\uidist{t}{}$ is a GP with zero mean prior and the kernel \cref{eq:hierarchical-kernel}. \end{restatable} We provide a closed form solution for the marginalization of \cref{eq:bayesianaverage} over the source posterior in \cref{ap:equivalences}. Since both integrands are Gaussian, the integral yields a Gaussian distribution for $p(\uidist{t}{} \mid \mathcal{D}_t)$. By conditioning the joint model from \cref{claim:ui_kernel} on the source data, we can derive the following prior for the target model. \begin{restatable}{corollary}{claimuiprior} \label{claim:uiprior} Let $ k_\Sigma = k_t + \cov{f_s \mid \mathcal D_s}. 
$ Then under the assumptions of \cref{claim:ui_kernel} it holds that $\uidist{t}\sim\ \GP{\expectation{f_s \mid \mathcal D_s}}{k_\Sigma}.$ \end{restatable} We provide the detailed derivations in \cref{ap:equivalences}. The training complexity scales as $\mathcal{O}(N_t^3 + N_t^2 N_s + N_t N_s^2)$, see \cref{ap:subsec:complexity-shgp} for the proof. We therefore save, during training, the steep complexity contribution of $\mathcal{O}(N_s^3)$ of the conventional Bayesian methods. \subsection{Posterior prediction averaging}\label{sec:boostedgp} In an alternative approach we take inspiration from the well-known principle of boosting \citep{schapire2003boosting} and average over the posterior distributions of the target models. In contrast to Bayesian model averaging in Sequential Hierarchical GP\xspace, this approach neglects the implicit dependency of the source model on the target data, $p(f_s\mid\mathcal{D}_t)\rightarrow p(f_s)$ in~\cref{eq:bayesianaverage}. The resulting distribution is analytically tractable and equals the posterior distribution of a GP with kernel~\cref{eq:boosted_kernel}, i.e., the Boosted Hierarchical GP\xspace. \begin{restatable}{proposition}{claimboosting} \label{claim:boosting} Let $f_s, f_t$ be as in \cref{claim:ui_kernel} and \begin{equation*} p(\spldist{t}{} \mid \mathcal{D}_t)=\int p(f_t \mid f_s, \mathcal{D}_t) p(f_s) df_s. \end{equation*} Then, the joint model of $\funcdist{}{s}{}$ and $\spldist{t}{}$ is a GP with zero mean and the covariance function defined in \cref{eq:boosted_kernel}. Further, $\spldist{t}{}\mid\mathbf{x}_*, \mathcal D_t$ is multi-variate normal with mean $\expectation{f_s\mid\mathbf{x}_*, \mathcal D_s} + \alpha_{*,t}(\mathbf{y}_t-\expectation{f_s\mid\mathbf{X}_t, \mathcal D_s})$ and covariance matrix $k_t(\mathbf{x}_*, \mathbf{x}_*) -\alpha_{*,t} k_t(\mathbf{X}_t,\mathbf{x}_*)+\Sigma_*^{\mathrm{boost}}.$ \end{restatable} The proof follows the same ideas as for \cref{claim:uiprior} and can be found in \cref{ap:equivalences}.
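To make the closed-form \textsc{BHGP}\xspace posterior of \cref{claim:boosting} concrete, the following NumPy sketch evaluates it on toy data from the Alpine family used in \cref{fig:comparison}. RBF kernels, the noise level, and all other values are illustrative choices; this is a didactic sketch, not our experimental implementation:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    return np.exp(-0.5 * (A - B.T) ** 2 / ls**2)  # 1-D inputs as column vectors

def gp_post(X, y, Xq, noise):
    """Zero-mean GP posterior mean and covariance at query points Xq."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    mu = rbf(Xq, X) @ np.linalg.solve(K, y)
    cov = rbf(Xq, Xq) - rbf(Xq, X) @ np.linalg.solve(K, rbf(X, Xq))
    return mu, cov

# Alpine family f(x; c) = x sin(x + pi) + c x: source c = 1/2, target c = -1/2.
rng = np.random.default_rng(2)
Xs = rng.uniform(-10.0, 0.0, (15, 1))
ys = (Xs * np.sin(Xs + np.pi) + 0.5 * Xs).ravel()
Xt = np.array([[1.0], [2.0], [3.0], [4.0]])
yt = (Xt * np.sin(Xt + np.pi) - 0.5 * Xt).ravel()
Xq = np.linspace(-10.0, 10.0, 50)[:, None]
sigma_t = 0.1
Nt = len(Xt)

# Source posterior mean and covariance blocks (the Sigma^s blocks).
mu_s_t, _ = gp_post(Xs, ys, Xt, sigma_t)
mu_s_q, _ = gp_post(Xs, ys, Xq, sigma_t)
_, S = gp_post(Xs, ys, np.vstack([Xt, Xq]), sigma_t)
S_tt, S_tq, S_qt, S_qq = S[:Nt, :Nt], S[:Nt, Nt:], S[Nt:, :Nt], S[Nt:, Nt:]

# alpha_{*,t} = k_t(x_*, X_t)(k_t(X_t, X_t) + sigma_t^2 I)^{-1}
Kt = rbf(Xt, Xt) + sigma_t**2 * np.eye(Nt)
alpha = np.linalg.solve(Kt, rbf(Xt, Xq)).T  # shape (n_query, Nt)

# BHGP posterior: target posterior with the source posterior mean as prior
# mean, plus the boost term Sigma_*^boost.
mean = mu_s_q + alpha @ (yt - mu_s_t)
boost = S_qq + alpha @ S_tt @ alpha.T - alpha @ S_tq - S_qt @ alpha.T
cov = rbf(Xq, Xq) - alpha @ rbf(Xt, Xq) + boost
```

Since the boost term can be written as $B \Sigma^s B^T$ with $B = (\mathbf{I}, -\alpha_{*,t})$, it is positive semi-definite and only inflates the predictive covariance.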
Since the prior for the target of \textsc{BHGP}\xspace coincides with the prior of \textsc{MHGP}\xspace, the hyperparameter optimization is the same. The training complexity is the same as for \textsc{SHGP}\xspace, $\mathcal{O}(N_t^3 + N_t^2 N_s + N_t N_s^2)$, see \cref{ap:subsec:complexity-bhgp} for the proof. \section{EXPERIMENTS}\label{sec:experiments} We provide an experimental evaluation of the transfer-learning algorithms discussed in \cref{tab:computational_complexity}. In addition, we evaluate three baselines: GP-based BO (GPBO) without any source data, RGPE \citep{feurer2018rgpe} in which the target function is modelled as a weighted sum of the predictions of all task GPs, and GC3P \citep{salinas2020quantile} in which a Gaussian Copula regression with a parametric prior is employed to scale to large data. We implement the models using GPy \citep{gpy2014} and run BO with Emukit \citep{emukit2019}, licensed under BSD 3 and Apache 2.0, respectively. For GC3P we use the publicly available code~\citep{salinas2020quantile}. GP hyperparameters are optimized by maximizing the likelihood for the observed data. We focus on maximum likelihood for simplicity but, in high dimensions, it may be preferable to employ a Bayesian treatment of hyperparameters. Code to reproduce our results can be found on GitHub\footnote{\url{https://github.com/boschresearch/transfergpbo}}. \subsection{Synthetic Function Families} The synthetic function families are derived from conventional benchmark functions by placing probability distributions on their parameters. We consider a broad spectrum of families, where the optimum varies locally across tasks for some families and across the entire domain for others: the one-dimensional Alpine \citep{momin2013literature} function, and the multi-dimensional Branin, Hartmann3 and Hartmann6. Details about the function families are provided in \cref{ap:synthetic_functions}.
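A function family of this kind can be instantiated in a few lines. The sketch below uses the Alpine family from \cref{fig:comparison}, $f(x; c) = x\sin(x+\pi) + cx$, with a Gaussian prior on the family parameter $c$; the prior width and noise level are illustrative choices, not the exact protocol of \cref{ap:synthetic_functions}:

```python
import numpy as np

def alpine(x, c):
    """Alpine function family, f(x; c) = x sin(x + pi) + c x."""
    return x * np.sin(x + np.pi) + c * x

def sample_task(rng, n, c_std=0.5, noise=0.1, lo=-10.0, hi=10.0):
    """Draw one task: sample the family parameter c, then noisy observations.
    The Gaussian prior on c and the noise level are illustrative."""
    c = rng.normal(0.0, c_std)
    X = rng.uniform(lo, hi, (n, 1))
    y = alpine(X, c).ravel() + noise * rng.standard_normal(n)
    return X, y, c

rng = np.random.default_rng(3)
Xs, ys, cs = sample_task(rng, 20)  # source task
Xt, yt, ct = sample_task(rng, 5)   # related target task
```

Tasks drawn this way share the global structure of the family while differing in the parameter $c$, which is what the transfer-learning models are meant to exploit.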
Performance is reported via mean simple regret over 50 runs together with standard error of the mean. \begin{figure*} \includegraphics[width=\textwidth]{figures/exp_ss.pdf} \caption{Performance on the single-source benchmarks: the two-dimensional Branin (left), three-dimensional Hartmann3 (middle), and six-dimensional Hartmann6 (right). The source data are sampled randomly from the source function and contain $20N_\mathrm{dim}$ points with $N_\mathrm{dim}$ being the input dimension of the benchmark. We add i.i.d. observational noise of standard deviation $\sigma_s = \sigma_t = 0.1$ for Hartmann3, Hartmann6, and $\sigma_s = \sigma_t = 1.0$ for Branin during data generation. } \label{fig:experiments_single_source} \end{figure*} \paragraph{Single-source benchmark functions}\label{sec:single_source_experiments} We start the experimental analysis in the simplest setting with a single source task. This fundamental regime provides valuable insight into the trade-off between performance and scalability of the transfer learning algorithms. Applications with scarce historical data where the transfer efficiency is important may particularly benefit from this analysis. The algorithms are benchmarked on three function families, see \cref{fig:experiments_single_source}. Results on two further function families are available in \cref{ap:results_1d_benchmarks}. A notable difference in the convergence on the different benchmarks can be observed. This is likely to be caused by the different particularities of each function family, which we discuss in \cref{ap:synthetic_functions}. Despite these differences, there are a few important generalizing features that characterize the algorithms: (i) The Bayesian techniques \textsc{SHGP}\xspace, \textsc{HGP}\xspace, and \textsc{WSGP}\xspace outperform the non-Bayesian algorithms. This is not surprising, since Bayesian models compute highly informed probabilistic predictions. 
The best and most consistent performers are the \textsc{HGP}\xspace and \textsc{WSGP}\xspace algorithms. We attribute this to the flexibility of their design. During training, all kernel hyperparameters are jointly optimized, allowing for superior model quality. (ii) Our modular Bayesian technique, \textsc{SHGP}\xspace, is competitive and an attractive alternative with lower computational complexity. The non-Bayesian methods trail behind, which is likely due to their inability to reliably propagate model uncertainty. Among these, our technique, \textsc{BHGP}\xspace, performs consistently better than the other non-Bayesian techniques and provides a reasonable compromise between computational complexity and efficiency. (iii) Surprisingly, the most advanced and expensive techniques, \textsc{MTKGP}\xspace and \textsc{MTGP}\xspace, perform worse than the other Bayesian techniques. This is likely caused by the challenging training procedure, where a large number of hyperparameters are optimized. In addition to the asymptotic complexities in \cref{tab:computational_complexity}, we demonstrate the lower computational complexity of our methods in a direct runtime analysis of model training in \cref{fig:experiments_num_source_and_noise}(a). \textsc{SHGP}\xspace and \textsc{BHGP}\xspace are orders of magnitude faster than the full Bayesian methods for moderately large source data sets. The runtimes reported in \cref{fig:experiments_num_source_and_noise}(a) are representative of the entire optimization runtime since model training has steeper complexity than acquisition-function optimization, see \cref{ap:runtime_analysis} for empirical evidence. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/exp_abl.pdf} \caption{Runtime analysis (a) and the impact of the size of the source data set (b) and observational noise (c) on the algorithm performance.
In (a) we plot the training time in seconds versus number of points in the source data set for the Hartmann6 function family. We consider the timing of a single step of gradient ascent during the optimization of the likelihood function. The target data set consists of 100 randomly sampled points. Statistics are acquired via 7 independent runs. The final simple regret versus number of points for constant observational noise, $\sigma_s = \sigma_t = 0.01$ (b), and standard deviation of observational noise for 60 source points (c), is plotted for the Hartmann3 function family. The performance of GPBO is independent of the number of historical points. } \label{fig:experiments_num_source_and_noise} \end{figure*} \paragraph{Observational noise and propagation of uncertainty}\label{subsec:exp_impact_noise} Beyond these generic trends, the performance of the techniques depends on other factors like the amount of source data and the magnitude of the observational noise. Such a study is presented in \cref{fig:experiments_num_source_and_noise}(b, c); more results are available in \cref{ap:ablation_studies}. We distinguish three data regimes: (i) In the limit of scarce source data, which is insufficient to describe the global shape of the source function, the kernel methods with joint HPO, \textsc{HGP}\xspace and \textsc{WSGP}\xspace, outperform the other methods significantly. The reason is that they model all data jointly, in contrast to the sequentially trained methods, which end up with source models of poor quality. (ii) The case of a moderate amount of source data, which is sufficient to describe the source function probabilistically but not deterministically, is discussed in the previous section. (iii) In the limit of abundant source data, which is sufficient to deterministically describe the function, all transfer-learning methods converge to a similar performance.
Here, propagation of uncertainty is not required and the non-Bayesian approaches are appealing due to their favorable scaling. The observational noise affects the boundary between the aforementioned data regimes. Stronger noise increases the amount of data required to reach a fixed model quality. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/exp_ms.pdf} \caption{Performance on the multi-source benchmarks: the three-source Hartmann6 (left), five-source Alpine (middle), and ten-source OpenML SVM benchmark (right). The source data are sampled randomly and contain 60, 20, and 60 points per task for Hartmann6, Alpine, and SVM, respectively. I.i.d. observational noise of standard deviation $\sigma_s = \sigma_t = 0.1$ is added to the source and target data of the synthetic benchmarks.} \label{fig:experiments_multi_source} \end{figure*} \paragraph{Multi-source benchmark functions}\label{sec:multi_source_experiments} For the more general case of multiple source data sets, the algorithms are benchmarked on two function families: (i) the six-dimensional Hartmann6 family with three source data sets in which the functions are sampled randomly as in \cref{sec:single_source_experiments}, and (ii) the one-dimensional Alpine family with five source data sets, where the source and target functions are fixed as in \citet{feurer2018rgpe}. \textsc{MTGP}\xspace is not benchmarked here because of its complexity. The performance of the algorithms on the three-source Hartmann6 is consistent with the single-source benchmarks, compare \cref{fig:experiments_single_source,fig:experiments_multi_source}. \textsc{HGP}\xspace and \textsc{WSGP}\xspace still perform best, while our less complex \textsc{SHGP}\xspace provides a competitive compromise. The picture changes in the Alpine benchmark, where the more structured approaches to HPO show superior performance.
Our techniques, \textsc{SHGP}\xspace and \textsc{BHGP}\xspace, outperform the others despite their lower computational complexity. This is likely due to the stable, hierarchical training procedure, where at most three hyperparameters are optimized at a time. By contrast, \textsc{HGP}\xspace optimizes 18 hyperparameters jointly, \textsc{WSGP}\xspace 23, and \textsc{MTKGP}\xspace 33. Algorithms employing hierarchical hyperparameter optimization therefore become increasingly appealing as the number of source tasks grows. \subsection{Meta-Learning Surrogate Benchmarks} In the following we study the performance of the transfer-learning techniques on hyperparameter optimization problems in machine learning. We follow \cite{perrone2018scalable} and use evaluations of support vector machine (SVM) models on 28 data sets. The data were published by \citet{kuhn2018automatic} on the OpenML platform \citep{vanRijn2013Collaborative} under the Creative Commons license \citep{kuhn2018openml}. More details are available in \cref{app:meta-learning-benchmarks}. As in \cite{salinas2020quantile}, we carry out a discrete optimization at the observed evaluations to avoid potential problems related to the bias of a fitted surrogate model. Since the data sets contain over fifty thousand evaluations, which is too much for the closed-form Bayesian techniques to handle, we perform downsampling and choose 11 tasks at random. The data of the target task are used as the ground truth for the discrete optimization, while the data of the source tasks are downsampled to 60 points per task. We present results averaged over 500 independent runs. For better comparability, we employ a rescaled version of the regret function called \emph{average distance to the global minimum} \citep{Wistuba}, where the regret values on each data set are rescaled between zero and one \emph{before} the averaging procedure. The results are shown in the right panel of \cref{fig:experiments_multi_source}.
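The rescaling underlying this metric can be sketched as follows; a minimal illustration with made-up incumbent traces, assuming the per-data-set global minimum and worst observed value are known:

```python
import numpy as np

def adtm(best_so_far, y_min, y_max):
    """Average distance to the global minimum: per-data-set regrets are
    rescaled to [0, 1] *before* averaging across data sets.
    best_so_far: array (n_datasets, n_iterations) of incumbent values."""
    scaled = (best_so_far - y_min[:, None]) / (y_max - y_min)[:, None]
    return scaled.mean(axis=0)

# Toy example: two data sets whose objectives live on different scales.
best = np.array([[3.0, 2.0, 1.0],       # incumbents on data set 1
                 [300.0, 150.0, 100.0]])  # incumbents on data set 2
y_min = np.array([1.0, 100.0])          # global minima per data set
y_max = np.array([5.0, 500.0])          # worst observed values per data set
curve = adtm(best, y_min, y_max)        # -> [0.5, 0.1875, 0.0]
```

The rescaling prevents data sets with large objective ranges from dominating the averaged curve.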
Despite the markedly different nature of the data compared to the synthetic benchmarks, the performance of the different transfer-learning techniques exhibits somewhat similar trends. The performance is dominated by \textsc{WSGP}\xspace, which achieves a regret of 0.01 more than three times faster than classical GPBO. Interestingly, \textsc{HGP}\xspace performs worse. We hypothesize this to be caused by the incompatibility between the hierarchical nature of \textsc{HGP}\xspace and the structure of the SVM data set. Our techniques, \textsc{SHGP}\xspace and \textsc{BHGP}\xspace, provide, as before, a tradeoff between complexity and performance, and achieve the same regret level about twice as fast as GPBO. \section{DISCUSSION AND CONCLUSION}\label{sec:conclusion} In this paper, we have presented a unified view on transfer learning methods based on Gaussian processes with tractable model posteriors. One end of this spectrum is populated by models that maintain a full covariance matrix of all source and target data and perform proper Bayesian inference on all data jointly. While these are powerful models with well-calibrated uncertainty estimates, they quickly become computationally infeasible due to their cubic complexity in the cumulated number of source and target data points. The other side of the spectrum is occupied by heuristics that aggregate the predictions of separate, independently trained, task models. These methods exhibit a much more favourable computational complexity, but unfortunately fail to properly propagate uncertainty between the individual task models.
To remedy this unsatisfactory binary choice between computational feasibility and uncertainty propagation, we proposed two novel transfer learning models, Sequential Hierarchical GP\xspace and Boosted Hierarchical GP\xspace, that both provide a compromise between these extremes: they add modest computational complexity to account for the uncertainty of the involved models, at the price of slight approximations compared to the full covariance approaches. Our analysis is supported by comprehensive experiments that pinpoint strengths, weaknesses, and trade-offs of these transfer learning methods. The benchmarks demonstrate the appeal of Sequential Hierarchical GP\xspace and Boosted Hierarchical GP\xspace as robust and competitive techniques. \subsubsection*{Limitations} The low-data regime is challenging for any method. In particular, the empirical success of Gaussian processes hinges on the smoothness assumptions encoded in the kernel. At the same time, it is not obvious how non-smooth functions can be optimized efficiently. Another limitation is that Gaussian processes do not scale to large-scale data without approximations; this is not a problem in our setting, since we specifically focus on the low-data regime. We are not aware of any negative societal impacts of our work, since we focus on improving the data efficiency of optimization methods. \subsubsection*{Acknowledgements} We would like to thank Stefan Falkner for insightful discussions on the results. \clearpage \bibliographystyle{plainnat}
\section{\label{sec:intro}INTRODUCTION} The world's only \ep collider, HERA at DESY, Hamburg, concluded operations in 2007 after 15 years of data taking. Until then, integrated luminosities of about \unit{0.5}{\invfb} had been collected by each of the two colliding-beam experiments H1 and ZEUS. An integral part of the HERA physics programme is the study of the hadronic final state, often in the shape of hard hadronic jets, with the aim of testing predictions of (perturbative) QCD (pQCD). Consequently, much effort has gone into extractions of the strong coupling constant, \alpS, from jet measurements.\\ Values of \alpS have also been extracted from inclusive measurements which are sensitive to \alpS and to the gluon density in the proton, $g$, only via scaling violations. Furthermore, in these measurements $g$ and \alpS are strongly correlated. The corresponding results on \alpS from the HERA kinematic regime are therefore less precise than those from jets. One way out of this problem is to include jet data in fits of the proton \emph{parton distribution functions} (PDFs), thus introducing leading-order sensitivity to $g$ and \alpS and adding quark-induced contributions to the cross section which break the strong $g$--\alpS correlation mentioned above. A recent PDF fit result makes use of this feature (HERA-PDF 1.6). \\ This article reviews measurements of the strong coupling constant at HERA, with an emphasis on the relevant jet measurements, on combinations of \alpS values and on combined extractions of the PDFs and \alpS. \section{\label{sec:environment}EXPERIMENTAL ENVIRONMENT} The HERA \ep collider\footnote{ HERA could accelerate both electrons and positrons. Since the lepton charge sign is of no importance for the physics discussed here, the term ``electron'' will be used generically for both electrons and positrons. } was in operation from 1992 to 2007. The electron beam energy was fixed at \unit{27.5}{\GeV}.
The proton beam energy was raised from \unit{820 to 920}{\GeV} in 1998, during the ``HERA-I'' data-taking phase which lasted until 2000. With these beam energies, HERA achieved a centre-of-mass energy of about \unit{318}{\GeV}. A shutdown in the years 2001--2003 was used to prepare the HERA-II phase which brought an increase in luminosity by a factor 4--5 and the possibility of longitudinal lepton polarisation.\\ The HERA experiments H1 and ZEUS were typical high-energy physics detectors. The most striking feature of both detectors was their asymmetric structure which reflected the different beam energies of proton and lepton beam and the boosted centre-of-mass system. Both H1 and ZEUS provided tracking with silicon detectors close to the interaction point and large-volume jet drift chambers. Typical \pT resolutions were of the order of $\sigma\left(\pT\right) / \pT = 0.003 \cdot \pT \left[\GeV\right]$ for both experiments, with an additional constant contribution of $0.015$ in the case of H1. The tracking devices were surrounded by calorimeters (LAr sampling technology with 45\,000 cells and lead / steel absorbers in H1, depleted uranium with scintillator for the compensating ZEUS calorimeter with about 12\,000 cells). The electron energy resolutions were determined in test beam measurements to be 11(18)$\% / \sqrt{E \left[ \GeV \right]}$ in the case of H1 (ZEUS). For hadrons, the ZEUS performance was about 35$\% / \sqrt{E \left[ \GeV \right]}$, while H1 achieved 50$\% / \sqrt{E \left[ \GeV \right]}$. \\ In both experiments, superconducting coils provided magnetic fields of \unit{1.16(1.43)}{\tesla} (H1 / ZEUS). The experiments were additionally equipped with small-angle detectors for the measurement of particles close to the beam pipe, and with large-area muon chambers for the rejection of cosmic-ray background and the measurement of muons and hadronic leakage.
Luminosity detectors which employed the Bethe-Heitler (or \emph{bremsstrahlung}) process \ep$\rightarrow$\ep$\gamma$ complemented the detectors. \section{\label{sec:physics}QCD AND PHYSICS AT HERA} \subsection{\label{sec:physics:qcd}QCD and the Strong Coupling} The strong coupling constant \alpS is the central parameter of QCD. It obeys the \emph{renormalisation group equation} (RGE) \begin{equation} \qtwo \frac{ \partial \alpS \left( \qtwo \right) }{\partial \qtwo} = \beta\left( \alpS \left( \qtwo\right)\right)\, . \end{equation} The $\beta$ function can be calculated using perturbative methods; so far contributions up to four-loop precision have been derived. The solution for \alpS at the two-loop level (which is the relevant one for usage in NLO pQCD calculations of jet cross sections) is given by \begin{equation} \alpS\left(\qtwo\right) = \frac{1}{\beta_0 L} - \frac{1}{\beta_0^3 L\squared}\beta_1 \ln L\, , \end{equation} with known coefficients $\beta_{0,1}$ and $L = \ln \left( \qtwo / \Lambda\squared \right)$. $\Lambda$ defines the energy scale at which the perturbative approach breaks down. Its value has to be determined from data and is of the order of \unit{200}{\MeV}. The RGE fixes the behaviour of \alpS with energy scale but not the absolute normalisation which has to be taken from data. Alternatively, one can fix the coupling value for a given reference energy scale $\mu$ and express the value at other scales \qtwo as a function of the reference scale. For the one-loop solution this leads to \begin{equation} \alpS\left( \qtwo \right) = \frac{\alpS\left( \mu\squared \right)}{1+\alpS\left( \mu\squared\right) \beta_0 \ln \frac{\qtwo}{\mu\squared}}\, . \label{eq:evolution} \end{equation} Typically the reference scale $\mu$ is chosen to be the mass of the \Zo boson, \MZ. Equation~\ref{eq:evolution} can be used to evolve the strong coupling from the scale \MZ\, --- at which it is often measured --- to any arbitrary scale that might be needed in a calculation. 
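As an illustration of Equation~\ref{eq:evolution}, the following sketch evolves the coupling at one loop from the reference scale \MZ. The input value $0.118$ and the fixed $n_f = 5$ are assumptions of the example; a realistic evolution would also treat flavour thresholds.

```python
import math

M_Z = 91.1876  # Z-boson mass in GeV (PDG value)

def alpha_s_one_loop(q2, alpha_ref=0.118, mu2=M_Z**2, n_f=5):
    """One-loop running coupling, Eq. (evolution) in the text:
    alpha(Q^2) = alpha(mu^2) / (1 + alpha(mu^2) * beta0 * ln(Q^2/mu^2)),
    with beta0 = (33 - 2 n_f) / (12 pi). n_f is kept fixed here.
    """
    beta0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return alpha_ref / (1.0 + alpha_ref * beta0 * math.log(q2 / mu2))
```

Evolving down to $\qtwo = \left(\unit{10}{\GeV}\right)\squared$ increases the coupling to roughly 0.17, illustrating the growth of \alpS towards low scales.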
\subsection{\label{sec:physics:kinematics}Basics of HERA Physics} Figure~\ref{fig:feynman} (left) shows a Feynman diagram for a lowest-order \ep scattering process. The incoming electron with four-momentum $k$ interacts with the proton (momentum $P$) via the exchange of a boson of four-momentum $q$, resulting in a final state with a scattered lepton ($k'$) and a hadronic final state $X$. The kinematics of this process are fully described by the following quantities: The \emph{momentum transfer} $\qtwo = -q\squared = -(k-k')\squared$ defines the resolution power of the exchanged boson. The \emph{Bjorken scaling variable} $x = \qtwo / (2P\cdot q)$ defines the proton's momentum fraction that participates in the hard interaction. The \emph{inelasticity} $y = (P \cdot q)/(P \cdot k)$ is the fractional energy transferred from the electron to the proton in the latter's rest frame. The quantities \qtwo, $x$ and $y$ are related via the squared centre-of-mass energy, $s$, such that for given beam energies two of the three are sufficient to fully characterise the kinematics of the scattering process: $\qtwo = xys$. \begin{figure} \includegraphics[width=75mm]{diagramsnew.eps} \caption{Feynman diagrams of important processes in \ep physics.} \label{fig:feynman} \end{figure} Two further distinctions are made: First, the observable \qtwo is used to define two different kinematic regimes: \emph{photoproduction} (PHP) is defined by the condition $\qtwo \sim 0$; for \qtwo significantly larger than \unit{1}{\GeV\squared} one speaks of \emph{deep-inelastic scattering} (DIS). Second, the type of the exchanged boson is used to distinguish between \emph{neutral-current} (NC, for electron / \Zo-boson exchange) and \emph{charged-current} (CC, \Wpm-boson exchange) events. In the latter case, the final-state lepton is a neutrino and will escape the experiment undetected, leading to missing transverse energy in the final state.
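The relation $\qtwo = xys$ is easy to check numerically; a minimal sketch for the HERA beam energies (massless-beam approximation, illustrative function names):

```python
import math

def inelasticity(q2, x, e_e=27.5, e_p=920.0):
    """Inelasticity y from Q^2 = x * y * s, with s = 4 E_e E_p
    (massless-beam approximation for the HERA beam energies in GeV)."""
    s = 4.0 * e_e * e_p
    return q2 / (x * s)

def sqrt_s(e_e=27.5, e_p=920.0):
    """Centre-of-mass energy, about 318 GeV for HERA-II beams."""
    return math.sqrt(4.0 * e_e * e_p)
```

Given \qtwo and $x$, the third variable $y$ follows immediately, and physical events must satisfy $0 < y \leq 1$.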
For determinations of the strong coupling constant at HERA, CC jet physics is of negligible importance. The kinematics of the \ep scattering are typically derived from measurements of the scattered electron or, in the case of CC events or of photoproduction events (where the electron escapes the experiment undetected), from the properties of the hadronic final state. The hadronic final state was originally calculated from the energy depositions in the calorimeters. Later on, \emph{energy-flow algorithms} were developed in both experiments which aim at maximising the resolution by making the best use of both tracking and calorimetric information. Also jets were either reconstructed from calorimeter cells (mostly for ZEUS) or from the energy-flow objects (typically in H1). The hadronic energy scale is known, in both experiments, to about 1\%, and this uncertainty is in most cases the dominating experimental one, followed often by the model uncertainty that is evaluated by using different MC models for the correction of the data for detector effects. The experiments at HERA typically measure the so-called \emph{reduced cross section}, $\sigma_r$, that is closely related to the double-differential cross section in the kinematic quantities \qtwo and $x$ and that is to a good approximation given by the structure function $F_2$: \begin{equation} \sigma_r = \frac{1}{Y_+}\frac{\dif\squared\sigma_{NC}^{\pm}}{\dif x \dif \qtwo}\frac{xQ^4}{2\pi\alpha\squared}=F_2\left(1+\Delta\right)\, , \end{equation} with $Y_{\pm} = 1\pm\left(1-y\right)\squared$. In leading order, $F_2$ is related to the PDFs --- or more precisely the quark densities, $q_i\left(x,\qtwo\right)$ --- via the relation $F_2 = x \sum_i e\squared_i \left(q_i + \overline{q}_i\right)$, where the sum runs over all quark flavours and $e_i$ is the charge of the quark $i$ in units of the elementary charge. 
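The leading-order relation for $F_2$ can be made concrete with toy quark densities; the densities in the example below are invented for illustration, whereas realistic PDFs come from a QCD fit:

```python
def f2_leading_order(x, quark_densities):
    """F2 = x * sum_i e_i^2 * (q_i(x) + qbar_i(x)) at leading order.

    `quark_densities` maps a flavour to the pair (q(x), qbar(x));
    quark charges are given in units of the elementary charge.
    """
    charges = {"u": 2.0 / 3.0, "d": -1.0 / 3.0, "s": -1.0 / 3.0,
               "c": 2.0 / 3.0, "b": -1.0 / 3.0}
    return x * sum(charges[flav] ** 2 * (q + qbar)
                   for flav, (q, qbar) in quark_densities.items())
```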
From a measurement of $F_2$, therefore, the PDFs can be extracted using the DGLAP equations which govern the evolution of the PDFs with \qtwo. At HERA, the HERA-PDF working group of the H1 and ZEUS experiments has for several years extracted PDF sets from combined H1+ZEUS reduced cross section data. The latest HERA-PDF set based entirely on inclusive (structure-function) data is known as HERA-PDF 1.5~\cite{ringaile}. \subsection{\label{sec:physics:jets}Jet Physics at HERA} The inclusive measurements sketched above rely exclusively on the measurement of the scattered electron and the subsequent determination of the kinematics. Neither is the hadronic final state involved, nor does the leading-order contribution (left diagram in \fig{\ref{fig:feynman}}) involve a QCD coupling --- the process is of a purely electroweak nature ---, nor do the inclusive data provide a direct access to the gluon density in the proton, $g$ (an indirect access that is also strongly correlated with \alpS is given via the scaling violations of the DGLAP evolution equations). All these limitations for more extensive studies of QCD can be overcome by studies of jets. Jets are (collimated) bundles of hadrons which are created by the showering and hadronisation of final-state partons. Since the involved processes do not lead to significant transverse momenta, the resulting hadrons are close in phase space. A jet is formed from these hadrons by applying a specific procedure or \emph{jet algorithm} that defines which particles are combined into a jet and how the resulting jet four-momentum is calculated from the four-vectors of the contributing particles. The limited transverse momenta also ensure that the jet's four-momentum is close to the original parton's so that the jets can be regarded as the ``footprints'' of the final-state partons and can give direct access to the hard interaction. At HERA, typically the \emph{longitudinally invariant inclusive \kT algorithm}~\cite{inclkt} is used.
For DIS analyses, the algorithm is typically applied in the \emph{Breit reference frame} in which the transverse momenta of the jets are a measure for the hardness of the underlying QCD process. The two Feynman diagrams in the centre and on the right side of \fig{\ref{fig:feynman}} show the dominant contributions to jet production at HERA; they are of order $\mathcal{O}\left(\alpS\right)$ (centre: \emph{QCD-Compton process} or QCDC; right: \emph{boson-gluon fusion} or BGF). The BGF process introduces a dependence of the jet production cross section on the gluon density, $g$, already at leading order. The QCDC process, on the other hand, helps to break the aforementioned strong correlation between the gluon density and \alpS from which inclusive measurements suffer. The cross section for jet production can be written as a convolution of the proton PDFs $f_{i/p}$ with a hard scattering matrix element $\hat{\sigma}$, expanded in powers of \alpS: \begin{eqnarray} \sigma_{\text{jet}} & = & \sum_n \alpS^n\left(\murs\squared\right) \\ & & \cdot \sum_{i=q,\overline{q},g} \int \dif x f_{i/p} \left(x,\mufs\squared\right) \cdot C_{i,n}\left(x,\murs\squared,\mufs\squared \right)\, .\nonumber \end{eqnarray} In this equation, the $f_i$ are the parton distribution functions, and the coefficients $C_i$ can be calculated --- up to some order --- in pQCD. The \murs and \mufs are the renormalisation and factorisation scales. The terms proportional to $\alpha_s^2$ or higher correspond to corrections to the leading-order diagram. In pQCD, jet cross sections can be calculated up to next-to-leading order (NLO) for the case of inclusive-jet, dijet and trijet production (DIS) or dijet production (PHP). Measurements of jet cross sections have been performed for all of these scenarios in various regions of phase space (different regions of \qtwo, various requirements on transverse energies or pseudorapidities of the jets, etc.). 
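A toy numerical version of this convolution, using a simple midpoint quadrature with invented parton densities and coefficient functions (a real NLO computation requires dedicated programs):

```python
def toy_jet_cross_section(alpha_s, pdfs, coeffs, n_grid=1000):
    """sigma_jet = sum_n alpha_s^n * sum_i int_0^1 dx f_i(x) C_{i,n}(x),
    evaluated with a midpoint rule. `pdfs` maps a parton to f(x);
    `coeffs` maps (parton, order n) to the coefficient function C(x).
    Purely illustrative; scale dependences are suppressed here.
    """
    dx = 1.0 / n_grid
    xs = [(k + 0.5) * dx for k in range(n_grid)]
    sigma = 0.0
    for (parton, order), c in coeffs.items():
        f = pdfs[parton]
        sigma += alpha_s ** order * sum(f(x) * c(x) for x in xs) * dx
    return sigma
```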
The dominating theoretical uncertainty (and often the largest uncertainty overall) is typically the effect of unknown higher orders (beyond NLO) in the perturbative expansion. This uncertainty is typically estimated by a variation of the renormalisation scale, \murs, in the calculations. \section{\label{sec:alphas}THE STRONG COUPLING AT HERA} \subsection{\label{sec:alphas:strong}Deriving the Strong Coupling at HERA} Two different methods for deriving the strong coupling from HERA jet data exist. ZEUS~\cite{zeusalphasfit} typically performs NLO QCD calculations with PDF sets that use different input \alpS values, making it possible to parametrise the dependence of the theory cross section in a given analysis bin $i$ of an observable $A$ on \alpS by a quadratic function: \begin{equation} \frac{\dif\sigma}{\dif A}\bigg\vert_i = C_1 \cdot \alpS\left(\MZ\right) + C_2 \cdot \alpS\squared\left(\MZ\right)\, . \end{equation} The cross section measured in bin $i$ is then mapped to the parametrisation and the value of \alpS can easily be read off. In this way, the full coupling dependence of the PDFs and the hard scattering matrix element is preserved, and experimental and theoretical uncertainties can easily be derived. The method is applicable to both single data points and to sets of several points. H1, on the other hand, employs the Hessian method~\cite{hessian} for deriving values of \alpS. In this method a full $\chi\squared$ is evaluated for an arbitrary number of data points: \begin{equation} \chi\squared = \vec{V}^T \cdot M^{-1} \cdot \vec{V} + \sum_k \epsilon\squared_k\, , \label{eq:chi2} \end{equation} where the matrix $M$ comprises the statistical and uncorrelated systematic uncertainties and $V_i = \sigma_i^{exp} - \sigma_i^{theo} \left( 1- \sum_k\Delta_{ik}\epsilon_k\right)$.
The $\sigma_i$ are the measured and theoretically predicted cross sections for bin $i$, $\Delta_{ik}$ is the correlated systematic uncertainty of type $k$ for bin $i$, and $\epsilon_k$ is a fit parameter which in \eqn{\eqref{eq:chi2}} is used as a penalty term. The experimental uncertainty can be read off at the points where $\chi\squared = \chi\squared_{min} + 1$. The Hessian method is also used in the combined HERA \alpS determinations discussed below. \subsection{\label{sec:alphas:jetsphp}\boldmath{\alpS} from PHP Jet Data} ZEUS recently released a preliminary determination of \alpS from a PHP measurement of inclusive-jet cross sections in \unit{300}{\invpb} of HERA-II data~\cite{zeusphppaper}. The result of the measurement is shown in \fig{\ref{fig:zeusphp}} as a function of the transverse jet energy, \ET. The measurement is experimentally limited by the jet energy scale uncertainty which amounts to 1.8\% on the extracted values of $\alpS\left(\MZ\right)$; the theoretical uncertainties are dominated (as is the case for most HERA jet measurements) by the influence of missing higher orders in the perturbative expansion of the cross section. The effect on \alpS is estimated to be of the order of 2.5\%. The resulting value for \alpS is \begin{equation*} \alpS\left(\MZ\right) = 0.1206^{+0.0023}_{-0.0022}\left(exp\right)^{+0.0042}_{-0.0033}\left(theo\right)\, . \end{equation*} A similar measurement~\cite{zeusphppaperold} with slightly smaller statistics (\unit{189}{\invpb}) has also been carried out using the anti-\kT and \SISCONE jet algorithms (instead of only the \kT algorithm) which are by now the default choices at the LHC. Compatible values of the strong coupling have been extracted from all three measurements.
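The two extraction strategies of Sec.~\ref{sec:alphas:strong} can be sketched numerically; the toy versions below use invented inputs, whereas the real fits use the full experimental covariance information:

```python
import math

def alpha_from_quadratic(sigma_meas, c1, c2):
    """ZEUS-style extraction: solve sigma = C1*a + C2*a^2 for a > 0."""
    disc = c1 ** 2 + 4.0 * c2 * sigma_meas
    return (-c1 + math.sqrt(disc)) / (2.0 * c2)

def hessian_chi2(sigma_exp, sigma_theo, m_inv, deltas, eps):
    """H1-style chi^2, Eq. (chi2) in the text:
    V_i = sigma_exp_i - sigma_theo_i * (1 - sum_k Delta_ik eps_k),
    chi^2 = V^T M^{-1} V + sum_k eps_k^2.
    `deltas` is a list over sources k of per-bin relative shifts.
    """
    n = len(sigma_exp)
    v = [sigma_exp[i]
         - sigma_theo[i] * (1.0 - sum(d[i] * e for d, e in zip(deltas, eps)))
         for i in range(n)]
    chi2 = sum(v[i] * m_inv[i][j] * v[j]
               for i in range(n) for j in range(n))
    return chi2 + sum(e * e for e in eps)
```

In a fit, the $\epsilon_k$ would be varied together with \alpS until the $\chi\squared$ is minimal.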
\begin{figure} \includegraphics[width=65mm]{zeusphp.eps} \caption{ZEUS inclusive-jet cross section in PHP compared to NLO predictions.} \label{fig:zeusphp} \end{figure} \subsection{\label{sec:alphas:jetsdis}\boldmath{\alpS} from DIS Jet Data} In PHP jet measurements, the transverse jet energy, \ET, is used as a hard scale. In DIS, also the photon virtuality \qtwo can be used (or linear combinations of \ET and \qtwo). However, at low values of \qtwo, the theoretical uncertainty due to the effect of missing higher orders in the perturbative expansion is typically significant. H1 measured inclusive-jet, dijet and trijet cross sections at low values of \qtwo between \unit{5 and 100}{\GeV\squared} in HERA-I data~\cite{h1lowq2paper}. The measurements were performed (multi-)differentially as functions of e.g.\ \qtwo and jet \pT. As a first step, values of \alpS have been extracted from all 61 measured data points individually. The individual values are observed to be well compatible. In a next step, combinations of all data points in 4 different \qtwo intervals were fitted. Since the experimental uncertainties of the resulting \alpS values are typically much smaller than the theoretical ones, the results demonstrate the limited predictive power of NLO calculations at least at low \qtwo values and thus the necessity of going to higher orders in the perturbative expansion, i.e.\ of using calculations at next-to-next-to-leading order (NNLO). Finally, all 61 data points are included in one combined fit, resulting in an \alpS value of \begin{eqnarray*} \alpS\left(\MZ\right) &=& 0.1160\pm0.0014\left(exp\right)\\ & &^{+0.0093}_{-0.0077}\left(theo\right)\pm0.0016\left(PDF\right)\, . \end{eqnarray*} The di- and trijet cross sections just discussed were also used by the H1 collaboration for a measurement of the strong coupling from the ratio $R_{3/2}$ of trijet to dijet cross section. 
A possible benefit of this ratio is a (partial) cancellation of some systematic uncertainties between numerator and denominator (luminosity, scales, PDFs ...). However, mainly because of limited statistics the result alone is not competitive with the best determinations from e.g.\ inclusive-jet cross sections at high \qtwo (see later). A similar analysis has also been performed by ZEUS~\cite{zeuslowq2paper}. \subsection{\label{sec:alphas:jetshighq2}\boldmath{\alpS} at the Highest \boldmath{\qtwo} Values} \begin{figure} \includegraphics[width=60mm]{d09-162f12.eps} \caption{H1 \alpS values from combinations of 1/2/3-jet cross sections in high- and low-\qtwo measurements.} \label{fig:h1highq2a} \end{figure} A 2009 publication from H1 complements the above-mentioned low-\qtwo measurement of inclusive-, di- and trijets by the corresponding measurements in the high-\qtwo regime (above \unit{150}{\GeV\squared})~\cite{h1highq2paper1}. The extracted value is $\alpS = 0.1168\pm 0.0007(exp.)^{+0.0046}_{-0.0030}(th.)\pm 0.0016(PDF)$. Figure~\ref{fig:h1highq2a} shows the main result, namely the running of the coupling as derived from a combined fit to all high-\qtwo data points (solid line with error band), together with individual \alpS values (evolved to their respective scale) at high \qtwo (small circles). In the figure, the high-\qtwo value is extrapolated down to the low-\qtwo regime of the measurement described above. The good agreement of the extrapolation with the four low-\qtwo measurements (small squares) is as striking as is the strong reduction of the theoretical uncertainties with respect to the low-\qtwo \alpS result. It is due to this smaller sensitivity to theoretical uncertainties that most \alpS measurements at HERA have been performed in the region of high \qtwo values. 
The most recent published ZEUS and H1 results comprise a measurement of inclusive-jet cross sections at $\qtwo > \unit{125}{\GeV\squared}$ with the \kT, anti-\kT and \SISCONE jet algorithms in \unit{82}{\invpb} of HERA-I ZEUS data, and a multi-differential measurement of inclusive/di/tri-jet cross sections in \unit{351}{\invpb} at $\qtwo > \unit{150}{\GeV\squared}$ from H1. The ZEUS measurement~\cite{zeushighq2paper1} used the $\dif \sigma / \dif \qtwo$ distributions at \qtwo values above \unit{500}{\GeV\squared} for the extraction of the strong coupling, since this region proved to have the smallest theoretical uncertainties. The agreement observed between data and NLO QCD is excellent, and up to \qtwo values of about \unit{1000}{\GeV\squared} the theory uncertainty is found to dominate the measurement. ZEUS extracted a value of the strong coupling for each of the three jet algorithms; the three values are very well compatible, and the \kT result is given as \begin{eqnarray*} \alpS\left(\MZ\right) &=& 0.1207\pm0.0014\left(stat\right)\\ &&^{+0.0035}_{-0.0033}\left(exp\right)^{+0.0022}_{-0.0023}\left(theo\right)\, . \end{eqnarray*} The analysis has also been repeated in about \unit{300}{\invpb} of HERA-II data~\cite{zeushighq2paper2}, and the two results agree very well. \begin{figure} \includegraphics[width=75mm]{H1prelim-11-032-fig9c.eps} \caption{H1 ratio of trijet cross sections to NLO predictions in different regions of \qtwo as functions of the average jet \pT.} \label{fig:h1newhighq2trijetsalphas} \end{figure} The latest H1 high-\qtwo measurement~\cite{h1highq2paper2} was the first H1 jet measurement to profit from a strongly improved calibration of the hadronic final state and jet energy scale. In addition, the statistics of \unit{351}{\invpb} were large enough to allow the first double-differential measurement of trijet cross sections to be performed.
As in the low-\qtwo case, H1 extracted \alpS values from each measured data point of the (average) \pT distributions for inclusive jets, dijets and trijets in different regions of \qtwo. The individual measurements agree very nicely as can be seen in \fig{\ref{fig:h1newhighq2trijetsalphas}} which shows the resulting \alpS values for the trijet cross sections for different average jet \pT values and \qtwo regions. Typically, the scale uncertainties are larger than the experimental uncertainties by a factor of about 2. In a next step, again, \alpS values are extracted from a combined fit to all data points within a certain \qtwo region, and from all combined inclusive-/di/tri-jet data points. As it turns out, the value extracted from the trijet data points offers the smallest experimental uncertainty. This is due to the fact that the trijet cross section already at leading order is proportional to $\alpha_s^2$. The resulting value is \begin{eqnarray*} \alpS\left(\MZ\right) &=& 0.1196\pm0.0016\left(exp\right)\\ &&\pm0.0010\left(PDF\right)^{+0.0055}_{-0.0039}\left(theo\right)\, ; \end{eqnarray*} the inclusive-jet and dijet results are compatible. \subsection{\label{sec:alphas:shape}Alternatives: \boldmath{\alpS} from Jet Shapes} Jet final states offer further ways of accessing the strong coupling beyond the cross-section measurements discussed above. One example is a ZEUS extraction of \alpS from the averaged integrated jet shape, $\langle \psi\left( r \right) \rangle$, in DIS~\cite{DESY04072}. The quantity $\langle \psi\left( r \right) \rangle$ is defined as $\langle \psi\left( r \right) \rangle = \frac{1}{N_{jets}}\sum_{jets} \frac{\ET\left( r \right)}{\ETjet}$, where $N_{jets}$ is the number of jets studied, $\ET\left( r \right)$ is the transverse energy contained in a cone of radius $r$ ($0 < r < R$, with the jet radius $R$) around the jet axis, and \ETjet is the total jet transverse energy.
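This definition translates directly into a short computation; a minimal sketch, where the (distance, transverse energy) input format is an assumption of the example rather than the experiments' actual data model:

```python
def mean_integrated_jet_shape(jets, r):
    """<psi(r)> = (1/N_jets) * sum_jets E_T(r) / E_T(jet).

    Each jet is a list of (delta_r, e_t) pairs: a particle's distance
    from the jet axis and its transverse energy. E_T(r) sums the
    transverse energy inside a cone of radius r around the axis.
    """
    total = 0.0
    for jet in jets:
        et_jet = sum(e_t for _, e_t in jet)
        et_in_cone = sum(e_t for d, e_t in jet if d < r)
        total += et_in_cone / et_jet
    return total / len(jets)
```

By construction, $\langle \psi\left( r \right) \rangle$ rises towards one as $r$ approaches the jet radius $R$.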
$\langle \psi\left( r \right) \rangle$ is thus a measure for the distribution of transverse energy inside a jet, and an understanding of this quantity gives interesting insights into the fragmentation process and allows, among other things, gluon and quark jets to be discriminated against each other on a statistical basis. \begin{figure} \includegraphics[width=65mm]{DESY-04-072_17.eps} \caption{ZEUS \alpS values extracted from integrated jet shapes at radius 0.5 as a function of \ETjet. Dashed error bars: theory uncertainties; inner/outer solid error bars: statistical / full experimental uncertainties.} \label{fig:shapes} \end{figure} For the extraction of \alpS, ZEUS parametrised the cross section for $\langle \psi\left( r \right) \rangle$ at a radius $r = 0.5$ in different bins of \ETjet as $\langle \psi\left( r \right) \rangle = C_1 + C_2 \cdot \alpS\left( \MZ \right)$. The constants $C_i$ were taken from a fit to the theoretically predicted cross section at different values of $\alpS\left(\MZ\right)$. Since the predictions include only terms of order $\mathcal{O}\left( \alpS \right)$, a reduced sensitivity to \alpS is obtained: \begin{eqnarray*} \alpS\left(\MZ\right) &=& 0.1176\pm0.0009\left(stat\right)\\ &&^{+0.0009}_{-0.0026}\left(exp\right)^{+0.0091}_{-0.0072}\left(theo\right)\, . \end{eqnarray*} Figure~\ref{fig:shapes} shows the \alpS values extracted for the individual \ETjet bins and for the combined extraction. \section{\label{sec:heracomb}THE HERA \boldmath{\alpS} COMBINATIONS} Since 2004, the HERA collaborations have started to combine their \alpS measurements~\cite{heracomb2004}. Since 2007, combined fits to H1 and ZEUS data sets have been performed. 
The 2007 HERA average, which was based on an inclusive-jet $\dif \sigma/\dif \qtwo$ measurement from ZEUS and the double-differential inclusive-jet $\dif\squared \sigma / \dif \ET \dif \qtwo$ from H1, resulted in~\cite{heracomb2007} \begin{equation*} \alpS\left(\MZ\right) = 0.1198\pm0.0019\left(exp\right)\pm0.0026\left(theo\right)\, . \end{equation*} Again, the result is dominated by the theory uncertainties, underlining the necessity of higher orders in the jet cross-section calculations. All in all, the HERA combined \alpS measurements demonstrate the excellent agreement between results obtained from various jet final states, from different kinematic regimes (DIS and PHP), and with different methods (H1 and ZEUS). Furthermore, the running character of the coupling can be demonstrated with data from one experimental environment alone --- and it is in excellent agreement with the running as predicted by QCD based on the world average of \alpS (see also later). \section{\label{sec:pdf}COMBINED FITS OF PDFS AND \boldmath{\alpS}} Fits to the inclusive structure-function measurements alone provide good insight into the proton structure. However, there are regions in phase space which are not sufficiently covered by structure-function data to meaningfully constrain the PDFs. One example is the gluon density at large values of $x$. Here, the use of jet data might help: First, the jet data provide sensitivity to $g$ already at leading order, and specifically at large $x$, and second, the QCDC contributions to the jet cross sections break the strong correlation between gluon density and \alpS, allowing a meaningful determination of both parameters to be done simultaneously. This insight had early impacts e.g.\ in a 2005 ZEUS publication~\cite{desy05050} in which both structure-function data and DIS and PHP jet cross sections were used in PDF fits, leading to the ZEUS-JETS PDF set. 
The result was a competitive determination of \alpS and a reduction of the uncertainty on the gluon density at medium and large values of $x$ of up to 35\%. This idea was recently taken up by the HERA-PDF working group who also included jet data from H1 and ZEUS into their PDF fits, leading to the HERA-PDF 1.6 set~\cite{herapdf16}. Detailed studies with fixed \alpS as in HERA-PDF 1.5 showed that the inclusion of the jet data does not change the resulting PDF fit very much, the most remarkable difference being the slightly softer gluon density at high $x$. In the case of HERA-PDF 1.5, however, freeing \alpS in the fit suffers from the already mentioned gluon--\alpS correlation which increases the error on the gluon density. Here HERA-PDF 1.6 (with jets) is clearly superior, with much smaller uncertainties on the gluon density as can be seen from \fig{\ref{fig:herapdf1.6}} which shows, at the top, HERA-PDF 1.5 with free \alpS as a function of $x$ and, at the bottom, HERA-PDF 1.6 for $\qtwo = 10~\GeV\squared$. \begin{figure} \includegraphics[width=65mm]{HERAPDF15top.eps} \includegraphics[width=65mm]{HERAPDF16bottom.eps} \caption{Comparison of HERA-PDF 1.5 for free \alpS (top) and 1.6 (bottom).} \label{fig:herapdf1.6} \end{figure} The HERA-PDF 1.6 value for \alpS of \begin{eqnarray*} \alpS\left(\MZ\right) &=& 0.1202\pm0.0013\left(exp\right)\pm0.0007\left(model\right)\\ &&\pm0.0012\left(hadronisation\right)^{+0.0045}_{-0.0036}\left(scale\right) \end{eqnarray*} is comparable to the HERA 2007 average. \section{\label{sec:world}HERA AND {\boldmath $\alpha_S$} WORLD DATA} \begin{figure} \includegraphics[width=60mm]{H1prelim-11-034-fig8.eps} \caption{Comparison of different \alpS values, including the one from HERA-PDF 1.6.} \label{fig:heraandworld} \end{figure} Figure~\ref{fig:heraandworld} shows a comparison of the \alpS value obtained from HERA-PDF 1.6, from the included individual jet measurements from H1 and ZEUS, and the world average~\cite{Bethke:2009jm}.
There is excellent agreement, although of course the world average has significantly smaller uncertainties. In fact, the precision in the world average value (of order 1\%) comes mostly from the very precise derivations of \alpS from lattice calculations~\cite{davies} and from the analysis of $\tau$ decays. Among the high-energy collider \alpS values contributing to the world average, the determinations from HERA are very competitive. \section{\label{sec:conclusion}CONCLUSIONS} HERA has contributed massively to our current understanding of QCD, and in particular to the world knowledge of the proton structure and the central parameter of QCD, \alpS. Values for \alpS have been derived both in analyses of structure functions and from jet measurements in DIS and PHP. Lately, the H1 and ZEUS collaborations have also begun to derive combined values of \alpS from their data sets and to derive values for \alpS in PDF fits which take both inclusive structure-function and jet data into account. The HERA \alpS values are an important input to the \alpS world average. \bigskip
\makeatletter \renewcommand\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \makeatother \numberwithin{equation}{section} \def\({\left (} \def\){\right )} \newcommand{\red}[1]{\textcolor{red}{#1}} \newcommand{\blue}[1]{\textcolor{blue}{#1}} \begin{document} \begin{titlepage} \begin{flushright} \end{flushright} \begin{center} \vskip 1.5cm {\Large {\bf Holographic pump probe spectroscopy}} \vskip 1cm \renewcommand{\thefootnote}{\fnsymbol{footnote}} {\large A.~Bagrov,$^a$ B.~Craps,$^{b}$ F.~Galli,$^{c}$ V.~Ker\"anen,$^d$ \\ \vskip3mm E.~Keski-Vakkuri,$^d$ J.~Zaanen$^e$} \vskip5mm {$^a$Institute for Molecules and Materials,
Radboud University,\\ Nijmegen, The Netherlands \\ $^b$Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB) and \\ The International Solvay Institutes, Brussels, Belgium \\ $^c$Perimeter Institute for Theoretical Physics, Waterloo, Ontario, Canada\\ $^d$Department of Physics, University of Helsinki, Helsinki, Finland\\ $^e$Instituut-Lorentz for Theoretical Physics, Universiteit Leiden, \\Leiden, The Netherlands} \vskip 4mm {\small\noindent {\tt abagrov@science.ru.nl, Ben.Craps@vub.be, fgalli@perimeterinstitute.ca, vkeranen1@gmail.com, esko.keski-vakkuri@helsinki.fi, jan@lorentz.leidenuniv.nl }} \end{center} \vfill \begin{center} {\bf ABSTRACT} \vspace{3mm} \end{center} We study the non-linear response of a 2+1 dimensional holographic model with weak momentum relaxation and finite charge density to an oscillatory electric field pump pulse. Following the time evolution of one point functions after the pumping has ended, we find that deviations from thermality are well captured within the linear response theory. For electric pulses with a negligible zero frequency component the response approaches the instantaneously thermalizing form typical of holographic Vaidya models. We link this to the suppression of the amplitude of the quasinormal mode that governs the approach to equilibrium. In the large frequency limit, we are also able to show analytically that the holographic geometry takes the Vaidya form. A simple toy model captures these features of our holographic setup. Computing the out-of-equilibrium probe optical conductivity after the pump pulse, we similarly find that for high-frequency pulses the optical conductivity reaches its final equilibrium value effectively instantaneously. Pulses with significant DC components show exponential relaxation governed by twice the frequency of the vector quasinormal mode that governs the approach to equilibrium for the background solution. We explain this numerical factor in terms of a simple symmetry argument. 
\end{titlepage} \tableofcontents \section{Introduction} The gauge/gravity duality applied within the context of strongly correlated many-body quantum systems started out as an interesting, yet limited, source of intuition on some properties of quantum critical matter. In the past decade, it has evolved into a powerful framework capable of taking into account a number of phenomenological aspects that should not be neglected when dealing with realistic models, such as crystal lattices, disorder, non-relativistic dispersion relations, etc. \cite{Ammon:2015wua,Zaanenetalbook}. An important advantage of the holographic approach is its capacity of describing within a unique framework both equilibrium and out-of-equilibrium quantum systems by mapping them to tractable problems in general relativity, which can be systematically analyzed in real time without any need for conceptually new approaches. In the past, most of the attention towards far-from-equilibrium situations in this framework has gone to the formation of quark gluon plasmas in heavy ion collisions. Holographic models relevant for this process incorporating a number of realistic features have been suggested and explored, with interesting results in relation with experiments \cite{DeWolfe:2013cua}. Studies of far from equilibrium situations directly relevant to condensed matter systems have been, on the other hand, relatively scarce and mostly limited to toy models (see e.g. \cite{Murata:2010dx,Bhaseen:2012gg,Chesler:2014gya,Sonner:2014tca,Callebaut:2014tva,Das:2014lda,Zeng:2016api,Zeng:2016gqj,Camilo:2015wea,Withers:2016lft}). At the same time, recent advances in ultrafast experimental techniques in condensed matter physics have put a demand for a theoretical framework capable of explaining and predicting observed phenomena \cite{Orenstein,DalConte,Giannettietal,Freericks}. 
One is therefore led to ask whether, given the current state-of-the-art, time-dependent holographic models can make contact with experiments. In this paper we take a step in this direction by proposing a model for pump-probe experiments in which one follows the optical response of a holographic strange metal after it has been taken into a highly excited state by an electromagnetic pulse. Our starting point is the minimal model considered in \cite{Andrade:2013gsa}, which describes a 2+1 dimensional strange metal at finite temperature and density, in the presence of a weak momentum relaxation mechanism obtained through axion fields linear in the boundary spatial coordinates. This efficiently reproduces the effects of explicit translational symmetry breaking \cite{Horowitz:2012ky} while preserving a homogeneous and isotropic bulk geometry (see also \cite{Vegh:2013sk,Blake:2013owa,Donos:2013eha} for related holographic models). To mimic a pump pulse, we quench the holographic system by applying for a finite amount of time an oscillatory electric field. For simplicity we take it to be in the form of a modulated Gaussian wave packet of mean frequency $\omega_P$. In this way the system is driven into a highly excited out-of-equilibrium state, which then relaxes towards a new equilibrium state at a higher temperature, but equal charge density. In contrast with the zero density case where, both with \cite{Horowitz:2013mia} and without \cite{Bardoux:2012aw} momentum relaxation, the bulk dynamics results in a simple Vaidya geometry, at finite density the response of the system to the external electric field becomes more complicated. In fact, although the electric field always sets the charges into motion, explicitly breaking spatial isotropy and inducing on the boundary non-trivial currents, at finite density this also causes non-zero momentum densities.
Holographically this corresponds to having additional metric components and field excitations, which generically make the problem intractable analytically. We study the resulting non-linear bulk dynamics with numerical methods and follow the evolution of the boundary one point functions as $\omega_P$ is varied. Although we work in the non-linear regime, we find that deviations from thermality after the pump pulse ends are surprisingly well captured by linear response theory, with their decay controlled by quasinormal modes (QNMs). In particular, at zero frequency the purely imaginary longest-lived QNM of the vector sector governs the decay toward the new equilibrium configuration.\footnote{For the specific case of zero mean frequency, a related analysis has previously been performed in \cite{Withers:2016lft}. There the one point functions of electric and heat currents, as well as the QNMs that control their decay, were studied in detail, also away from the weak momentum relaxation regime considered here.} As the pump frequency is dialed up, we find that the response of the bulk geometry is increasingly well approximated by a bulk solution of the Vaidya form. That is, we observe that as soon as the pump electric field is turned off the boundary one point functions almost instantaneously approach their final equilibrium configurations. In fact, in the limit $\omega_P \to \infty$ we are also able to show analytically that the bulk solution takes precisely the Vaidya form, and one point functions thermalize instantaneously. From the bulk point of view the origin of this dynamics can be understood from an analysis of QNM amplitudes. As we show explicitly, the amplitude of each QNM contribution is determined by the Fourier transform of the electric pump pulse evaluated at the frequency of the mode in question.
The almost instantaneous approach to thermality at large frequencies is then explained by the absence of overlap between the pump spectrum and the frequency of long lived modes. A very simple toy model realized in terms of a driven harmonic oscillator effectively captures the main features of the bulk solution. With the numerical background in hand we then proceed to compute the main observable of interest for a pump-probe experiment, the probe optical conductivity after the quench. In the same way as in a pump-probe experiment, we consider the optical response of the holographic strange metal to a probe pulse that is applied only after the pump has ended. This is incorporated in the definition of out-of-equilibrium conductivity we adopt. Similarly to what happens for the background solution, the conductivity thermalizes almost instantaneously whenever the pump pulse has a negligible DC component. This behavior, although surprising from the boundary point of view, is completely natural with the insight provided by the analysis of the bulk background solution: If the geometry is described by a Vaidya solution, by causality in the bulk, the response to any perturbation applied after the light-like Vaidya shell will be insensitive to any detail of the quench other than the final equilibrium configuration. On the other hand, we find that for pump pulses with a DC component the optical conductivity relaxes with a rate set by twice the lowest vector QNM frequency. The appearance of this QNM, which governs momentum relaxation, can be understood from the fact that the zero frequency component in the pulse corresponds to a static electric field, which accelerates the finite density system. When the pulse is over, the resulting finite momentum has to relax in order to reach equilibrium. 
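The role of the pulse spectrum can be previewed with a minimal stand-in, independent of the holographic details: a single damped oscillator (playing the part of one quasinormal mode) driven by a Gaussian wave packet. The ringdown amplitude left after the pulse tracks the overlap of the drive spectrum with the mode frequency, so a fast pulse leaves essentially no ringdown. This is only an illustrative sketch with made-up parameter values, not part of the paper's numerics.

```python
import numpy as np

def ringdown_amplitude(omega_p, omega_0=1.0, gamma=0.05,
                       t0=20.0, width=4.0, t_end=200.0, h=0.01):
    """Integrate x'' + 2*gamma*x' + omega_0^2 x = F(t) for a pump-like
    drive F(t) = cos(omega_p t) exp(-((t - t0)/width)^2), and return the
    maximum |x| well after the drive has died out (the 'ringdown')."""
    def rhs(t, y):
        x, v = y
        F = np.cos(omega_p * t) * np.exp(-((t - t0) / width) ** 2)
        return np.array([v, F - 2.0 * gamma * v - omega_0 ** 2 * x])

    y = np.array([0.0, 0.0])
    ringdown = 0.0
    for t in np.arange(0.0, t_end, h):
        # classical fourth-order Runge-Kutta step
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if t > t0 + 5 * width:          # drive is negligible from here on
            ringdown = max(ringdown, abs(y[0]))
    return ringdown

# A pulse whose spectrum overlaps the mode frequency rings long after it
# ends; a fast pulse with no spectral overlap leaves almost nothing.
a_resonant = ringdown_amplitude(omega_p=1.0)
a_fast = ringdown_amplitude(omega_p=20.0)
```

The late-time amplitude is set by the Fourier transform of the drive at the (complex) mode frequency, which is the same mechanism at work for the bulk QNM amplitudes.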
To explain the factor of two, which is less intuitive from a boundary point of view, we provide a careful but general analysis of linearized bulk fluctuations relying on the symmetries of the final equilibrium configuration. A brief summary of our main results appeared before in \cite{Bagrov:2017tqn}. There we proposed this model as an idealized setup for realistic pump-probe experiments, and the almost instantaneous thermalization as an extreme limit of fast thermalization that might manifest itself experimentally in certain regimes, similarly to what has been observed in the creation of quark gluon plasma in heavy ion collisions. In this paper, we present the computations behind them, as well as a number of new results, including the surprisingly good estimate of the size of non-thermality from a linear response analysis, and the explanation based on symmetry of the appearance of twice the lowest vector QNM frequency. The rest of the paper is organized as follows. In the next section, we define the bulk model of a strange metal with momentum dissipation and briefly review its equilibrium properties. In Sec.~\ref{sec:numerics}, details of the used numerical techniques are outlined. In Sec.~\ref{sec:noneqbst}, we provide the non-equilibrium background solution computed numerically and discuss the behavior of the corresponding boundary one point functions. In Sec.~\ref{sec:toy}, we introduce a toy model of a rapidly driven oscillator, which captures some of the important features of our holographic model and makes the phenomenological picture more transparent. Sec.~\ref{sec:conduct} contains the main physical result of the paper, the time-dependent AC conductivity. Finally, in Sec.~\ref{sec:conjecture} we conclude with a general discussion of our results, including prospects and challenges for comparison with experiment. 
\section{The model} \label{sec:model} The model we want to consider is specified by the action \begin{equation} \label{eq:action} S = \frac{1}{2\kappa_4^2}\int d^4x\,\sqrt{-g}\Big[R - 2\Lambda - \frac{1}{2}\sum_{I = 1}^2(\partial\phi_I)^2 - \frac{1}{4} F^2\Big] \, , \end{equation} with $\Lambda = -3$ and equations of motion \begin{align} \label{eq:eqs} &E_{\mu\nu} \equiv G_{\mu\nu} + g_{\mu\nu} \Lambda - \frac{1}{2}\( g^{\rho\sigma} F_{\mu\rho}F_{\nu\sigma} -\frac{1}{4}g_{\mu\nu}F^2 \) -\frac{1}{2} \sum^{d-1}_{I}\( \partial_{\mu} \phi_I \partial_{\nu} \phi_I - \frac{1}{2}g_{\mu\nu} (\partial\phi_I)^2 \) =0 \nonumber \\ &M_{\nu} = \nabla_{\mu}F^{\mu}_{\phantom{\mu}\nu} = \frac{1}{\sqrt{-g}}\partial_{\mu}\(\sqrt{-g} F^{\mu\sigma} \)g_{\sigma\nu} = 0\, ,\\ &\Box \phi_{I} \equiv \frac{1}{\sqrt{-g}}\partial_{\mu}\(\sqrt{-g} \partial^{\mu} \phi_I \) = 0 \, . \nonumber \end{align} This admits a homogeneous and isotropic charged black brane configuration with non-trivial scalar field profiles \cite{Bardoux:2012aw} that was explored in \cite{Andrade:2013gsa} as a simple holographic model for spatial translational symmetry breaking. Such a configuration has scalar fields depending linearly on the spatial coordinates $x^i = x, y$ of the dual field theory \begin{equation} \phi_1 = k x,\quad \phi_2 = k y, \label{eq:scalarsk} \end{equation} and translationally invariant geometry and gauge field \begin{align} \label{eq:eqsol} &ds^2 = \frac{1}{z^2}\Big( - f dt^2 + \frac{dz^2}{f} + dx^2 + dy^2\Big) \, ,\nonumber \\ &f(z) = 1 - \frac{1}{2}k^2 z^2 - m z^3 +\frac{1}{4}\rho^2 z^4 \, , \\ &A= (- \mu + \rho z)dt \, . \nonumber \end{align} The dual field theory state is a thermal state with finite charge density and with translational symmetry breaking, whose properties can be fully specified in terms of $T, \rho$ and $k$.
The chemical potential $\mu$ is determined in terms of the charge density $\rho$ by requiring the regularity condition that $A$ should vanish at the horizon of the black brane, leading to $\mu = \rho z_0$, with $z_0$ being the horizon location where $f(z_0)=0$. The location of the horizon $z_0$ is associated with the temperature $T$ of the field theory state through \begin{equation} T = \frac{1}{4\pi z_0} \left( 3-\frac{k^2z^2_0}{2}-\frac{\mu^2z^2_0}{4}\right) \, , \end{equation} which gives the Hawking temperature of the black brane geometry. Notice that for fixed $k, \mu$ and $z_0$, the mass parameter $m$ appearing in the gravitational solution is not an independent quantity. It is fixed by the condition $f(z_0)=0$ and is directly related to the energy density of the dual equilibrium state \begin{equation} \epsilon = 2m = \frac{2}{z^3_0}\left(1-\frac{k^2z^2_0}{2}+\frac{\mu^2z^2_0}{4}\right) \, \label{eq:epsilon} \end{equation} and the isotropic pressure \begin{equation} p = \frac{\epsilon}{2} =m \, . \end{equation} Finally, the entropy density of this configuration is \begin{equation} s = \frac{4 \pi }{z_0^2} \label{eq:entropy} \, . \end{equation} The reason why such a holographic solution with a completely homogeneous and isotropic geometry can be used to effectively describe momentum dissipation can be grasped from the Ward identities \begin{align} &\nabla_{\mu} \langle T^{\mu\nu} \rangle= \nabla^{\nu}\varphi_{I} \langle \mathcal{ O}_I \rangle + {\cal F}^{\nu\mu} \langle J_{\mu} \rangle\ , \label{eq:wardT} \\ &\nabla_{\mu} \langle J^{\mu} \rangle =0 \, . \label{eq:wardJ} \end{align} Following the standard AdS/CFT dictionary, the operators $\mathcal{ O}_{I}$ are dual to the bulk scalars and the couplings $\varphi_{I}$ are directly related to the asymptotic values of the bulk scalar profiles, that is $\varphi_{I} = k x^i \delta_{i,I}$ in our case. 
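The relations above can be bundled into a short numerical consistency check. The sketch below (with illustrative parameter values, not those of any figure) computes $\mu$, $m$, $T$, $\epsilon$, $p$ and $s$ from $(z_0, k, \rho)$ and verifies that $f(z_0)=0$ and that $T$ agrees with the surface-gravity expression $-f'(z_0)/4\pi$.

```python
import numpy as np

def equilibrium(z0, k, rho):
    """Equilibrium black-brane data: f(z) = 1 - k^2 z^2/2 - m z^3 + rho^2 z^4/4,
    with mu = rho*z0 fixed by regularity of A at the horizon and the mass
    parameter m fixed by the horizon condition f(z0) = 0."""
    mu = rho * z0
    m = (1.0 - 0.5 * k**2 * z0**2 + 0.25 * rho**2 * z0**4) / z0**3
    T = (3.0 - 0.5 * k**2 * z0**2 - 0.25 * mu**2 * z0**2) / (4.0 * np.pi * z0)
    return dict(mu=mu, m=m, T=T, eps=2.0 * m, p=m, s=4.0 * np.pi / z0**2)

z0, k, rho = 1.0, 0.2, 0.5          # illustrative values
eq = equilibrium(z0, k, rho)

# The blackening factor with m determined as above:
f = lambda z: 1.0 - 0.5 * k**2 * z**2 - eq['m'] * z**3 + 0.25 * rho**2 * z**4

# Hawking temperature as surface gravity, T = -f'(z0)/(4 pi),
# evaluated with a central finite difference:
h = 1e-6
T_surface_gravity = -(f(z0 + h) - f(z0 - h)) / (2.0 * h) / (4.0 * np.pi)
```

Both expressions for $T$ agree, and $\epsilon = 2p$ reproduces the conformal equation of state quoted in the text.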
Similarly the $U(1)$ current $J^{\mu}$ is dual to the AdS gauge field $A_{\mu}$ and the boundary field strength ${\cal F}^{\nu\mu}$ is determined in terms of the asymptotic value of $A_{\mu}$. Let us first notice that \eqref{eq:wardJ} implies that the charge density $\rho = \langle J^{t} \rangle$ is conserved. From \eqref{eq:wardT} instead it follows that the spatial momenta $\langle T^{t i} \rangle$ will generically not be conserved whenever the vevs on the r.h.s. are non-vanishing. The coupling between the scalar and the gauge field in the bulk is such that the boundary electric field $E_i = {\cal F}_{it}$ induces a non-zero expectation value for $\mathcal{ O}_{I}$, and thus \begin{equation} \partial_t \langle T^{t i} \rangle = k \langle \mathcal{ O}_I \rangle \delta_{i,I} + \rho E_{i} \, . \end{equation} Before concluding this section, let us quickly review the holographic result for the equilibrium optical conductivity computed in this model \cite{Andrade:2013gsa}, which will be of use in the rest of the paper. The probe optical conductivity $\sigma$ measures the linear response of the boundary current $J_x$ to a boundary probe electric field $E_x = - \partial_t A^{0}_{x}$. To compute it holographically one can consider the minimal consistent set of ``vector'' bulk fluctuations \begin{eqnarray} \label{eq:linearized} \delta A_{x} &=& e^{-i \omega t } \delta a_x(z) \, ,\nonumber \\ \delta g_{tx} &=& e^{-i \omega t } z^{-2} \delta h_{tx}(z) \, , \\ \delta \phi_{1} &=& e^{-i \omega t} k^{-1} \delta \varphi(z) \, , \nonumber \end{eqnarray} around the equilibrium background (\ref{eq:scalarsk}\,--\,\ref{eq:eqsol}), and use the relation between the AdS asymptotic modes of $\delta A_{x}$ and the boundary quantities \begin{equation} \delta A_{x} \approx A^{0}_{x} + z \langle J_{x} \rangle + \dots \, \end{equation} to write \begin{equation} \sigma(\omega) = \frac{\langle J_{x} \rangle}{i \omega A^{0}_{x} } \, .
\end{equation} The computation of the optical conductivity therefore amounts to solving the following system of linearized equations for the fluctuations \eqref{eq:linearized} \begin{align} & \partial^2_{t}\delta \phi_{1} - z^2 f^2 \partial_z \( \frac{f}{z^2}\partial_z \delta \phi_{1} \) -k z^2 \partial_{t}\delta g_{tx} = 0 \, , \nonumber \\ &\partial^2_{t}\delta A_x - f \partial_z\( f \partial_z \delta A_x\) - \rho f \partial_{z}\(z^2 \delta g_{tx} \) = 0 \, , \label{eq:linfluc} \\ & \partial_{z}\(z^2 \partial_{t}\delta g_{tx} \) + \rho z^2 \partial_{t}\delta A_x - k f \partial_z \delta \phi_{1} =0 \nonumber\, , \end{align} subject to appropriate asymptotically AdS boundary conditions for the metric, a non-vanishing source for the gauge field, and a vanishing source for the scalar fluctuation. Away from the zero-frequency limit these equations can be solved numerically, and one finds that for small enough values of $k$ as compared to the other parameters of the equilibrium solution the resulting optical conductivity has the low-frequency Drude form \cite{Davison:2013jba,Vegh:2013sk,Andrade:2013gsa}. In Fig.~\ref{fig:eqconductivity}, we reproduce a sample plot of the optical conductivity for $k = 0.2$, $T = 0.2$, and $\mu = 1.0$. \begin{figure}[t] \begin{center} \includegraphics[width=0.48 \textwidth]{re_thermal_sigma.pdf} \hfill \includegraphics[width=0.48 \textwidth]{im_thermal_sigma.pdf} \end{center} \caption{The real (left) and imaginary part (right) of the optical conductivity for $k = 0.2$, $T = 0.2$, and $\mu = 1.0$. } \label{fig:eqconductivity} \end{figure} The finite value of the zero-frequency DC conductivity can be obtained analytically \cite{Andrade:2013gsa} \begin{equation} \sigma_{DC} = 1+ \frac{\mu^2}{k^2} \, . \end{equation} The relaxation rate $\tau_{Q}$ associated with the Drude peak corresponds to the purely imaginary frequency of the lowest-lying quasinormal mode of the bulk vector perturbations.
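In the small-$k$ regime the two analytic ingredients, $\sigma_{DC}$ and the relaxation time $\tau_Q$ of \cite{Davison:2013jba}, already fix the low-frequency conductivity through the Drude form $\sigma(\omega) \approx \sigma_{DC}/(1 - i\omega\tau_Q)$. A minimal sketch (parameter values are illustrative only, not those of the figures):

```python
import numpy as np

def drude_sigma(w, mu, k, s, eps):
    """Drude approximation to the small-k optical conductivity:
    sigma(w) = sigma_DC / (1 - i w tau_Q), with sigma_DC = 1 + mu^2/k^2
    and 1/tau_Q = s k^2 / (6 pi eps)."""
    sigma_dc = 1.0 + mu**2 / k**2
    tau_q = 6.0 * np.pi * eps / (s * k**2)
    return sigma_dc / (1.0 - 1j * w * tau_q)

mu, k, s, eps = 1.0, 0.2, 4.0 * np.pi, 2.0   # illustrative equilibrium data
w = np.linspace(0.0, 2.0, 201)
sigma = drude_sigma(w, mu, k, s, eps)
sigma_dc = sigma[0].real                      # DC limit: 1 + mu^2/k^2
```

The real part is a Lorentzian of half-width $1/\tau_Q$ peaked at $\omega = 0$, qualitatively matching the Drude peak visible in Fig.~\ref{fig:eqconductivity}.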
In the small $k$ regime we will be interested in, this has been obtained analytically in \cite{Davison:2013jba} and reads \begin{equation} \label{eq:relax} \frac{1}{\tau_{Q} }\approx \frac{s k^2}{6 \pi \epsilon} \, . \end{equation} \section{Setup and details on numerical calculation \label{sec:numerics}} To study the response of the model of the previous section to the boundary electric field, we go to ingoing Eddington-Finkelstein coordinates and, following \cite{Withers:2016lft}, consider the ansatz \begin{align} \label{eq:ansatz} ds^2 &= - F_z( z,v)dv^2 - \frac{2 dv dz}{z^2} + 2F_x(z, v) dx dv + \Sigma(z, v)^2(e^{-B(z, v)}dx^2 + e^{B(z, v)}dy^2)\, .\nonumber \\ A &= (E_x(v) x + a_v(z, v))dv + a_x(z, v)dx \, ,\nonumber \\ \phi_1 &= k x + \Phi(z, v)\, , \\ \phi_2 &= k y \, .\nonumber \end{align} We solve the resulting system imposing appropriate asymptotically AdS boundary conditions under the assumption that for early enough times, when the pulse $E_x(v)$ has not been turned on yet, the solution coincides with the equilibrium configuration of Sec.~\ref{sec:model}. By now, there are standard methods for solving such numerical relativity systems (see e.g. \cite{Chesler:2008hg,Heller:2013oxa, Chesler:2013lia, Ecker:2015kna}). We will review the main ingredients of this procedure below. Inspecting Einstein's equations, it is convenient to define derivative operators along ingoing and outgoing radial null geodesics, which act on a field $X(z, v)$ as follows: \begin{align} X' &= \partial_z X \, , \\ \dot{X} &= \partial_v X - \frac{z^2}{2}F_z\partial_z X \, . 
\end{align} With this notation, the equations of motion become { \allowdisplaybreaks \begin{align} 0 &= \Sigma'' + \frac{2}{z}\Sigma' + \frac{1}{4}((B')^2 + (\Phi')^2)\Sigma + \frac{e^{B}(a_x')^2}{4\Sigma}, \label{eq:eom1} \\ 0 &= F_x'' + \Big(\frac{2}{z} + B'\Big) F_x' + \Big(B'' - \frac{2(\Sigma')^2}{\Sigma^2} + \frac{2B'\Sigma'}{\Sigma} + \frac{(\Phi')^2}{2} + \frac{(B')^2}{2} + \frac{2B'}{z} \nonumber \\ &\hspace{5cm} - \frac{e^B (a_x')^2}{2\Sigma^2}\Big) F_x + \frac{k}{z^2}\Phi' + a_v'a_x' \, , \label{eq:eom2} \\ 0 &= a_v'' + \Big(\frac{2}{z} + \frac{2\Sigma'}{\Sigma}\Big)a_v' - \frac{e^B F_x a_x''}{\Sigma^2} - \frac{e^B}{\Sigma^2} \Big(F_x B' + F_x' + \frac{2 F_x}{z}\Big)a_x' \, , \label{eq:eom3} \\ 0 &= \dot{\Sigma}' + \frac{\Sigma'}{\Sigma}\dot{\Sigma} + \frac{3\Sigma}{2z^2} - \frac{z^2}{8}\Sigma (a_v')^2 - \frac{k^2 e^{-B}}{8z^2\Sigma} - \frac{e^{B}}{\Sigma}\Big(\frac{k^2}{8z^2} - \frac{z^2 e^B F_x^2(a_x')^2}{8\Sigma^2} \nonumber \\ &+ \frac{z}{2} F_x^2B' + \frac{z^2}{8}F_x^2(B')^2 + z F_x F_x' + \frac{z^2}{2}B' F_x F_x' + \frac{z^2}{8}(F_x')^2 + \frac{1}{4}k F_x \Phi' \nonumber \\ &+ \frac{z^2}{2\Sigma}F_x^2 B'\Sigma' + \frac{z^2}{2\Sigma}F_x F_x'\Sigma' - \frac{z^2 F_x^2 (\Sigma')^2}{2\Sigma^2} + \frac{z^2}{4}F_x^2(B'' + F_x'')\Big) \, , \label{eq:eom4} \\ 0&=\dot{B}' +\frac{\dot{B} \Sigma'}{\Sigma} -\frac{e^{-B} k^2}{4 z^2 \Sigma^2}+\frac{e^{B} k^2}{4 z^2 \Sigma^2}+\frac{e^{2 B} z^2 F_x^2 a_x'^2}{4 \Sigma^4}-\frac{e^{B} z^2 F_x^2 B'^2}{4 \Sigma^2}\nonumber \\ &+\frac{e^{B} z^2 F_x'^2}{4 \Sigma^2}+\frac{e^{B} z^2 F_x^2 \Sigma'^2}{\Sigma^4}-\frac{e^{B} \dot{a}_x a_x'}{2 \Sigma^2}+\frac{e^{B} E_x(v) a_x'}{2 \Sigma^2}+\frac{\dot{\Sigma} B'}{\Sigma}-\frac{e^{B} z F_x^2 B'}{\Sigma^2} \nonumber \\ &-\frac{e^{B} z^2 F_x B' F_x'}{2 \Sigma^2}-\frac{e^{B} z^2 F_x^2 B' \Sigma'}{\Sigma^3}-\frac{e^{B} z^2 F_x F_x' \Sigma'}{\Sigma^3}-\frac{e^{B} z^2 F_x^2 B''}{2 \Sigma^2} \, , \label{eq:eom5} \\ 0&=\dot{a}_x'+\frac{1}{2} \dot{a}_x B' +\frac{1}{2} \dot{B} 
a_x'-\frac{1}{2} F_x a_v' B' z^2-\frac{1}{2} a_v' F_x' z^2 -\frac{1}{2} F_x a_v'' z^2 \nonumber \\ &-F_x a_v' z -\frac{1}{2} E_x(v) B' \, , \label{eq:eom6} \\ 0&= \dot{\Phi}'+\frac{\dot{\Phi} \Sigma'}{\Sigma} -\frac{e^{B} F_x^2 B' \Phi' z^2}{2 \Sigma^2}-\frac{e^{B} F_x F_x' \Phi' z^2}{\Sigma^2}-\frac{e^{B} F_x^2 \Phi'' z^2}{2 \Sigma^2}-\frac{e^{B} F_x^2 \Phi' z}{\Sigma^2}\nonumber \\ &-\frac{e^{B} k F_x B'}{2 \Sigma^2}-\frac{e^{B} k F_x'}{2 \Sigma^2}+\frac{\dot{\Sigma} \Phi'}{\Sigma} \, , \label{eq:eom7} \\ 0&=F_z'' +\frac{2 F_z'}{z} + \frac{e^{-B} k^2}{2 z^4 \Sigma^2}-\frac{e^{B} k^2}{2 z^4 \Sigma^2}+\frac{e^{B} F_x \Phi' k}{z^2 \Sigma^2}-\frac{1}{2} a_v'^2-\frac{e^{2 B} F_x^2 a_x'^2}{\Sigma^4}\nonumber \\ &+\frac{e^{B} F_x^2 B'^2}{\Sigma^2}+\frac{e^{B} F_x'^2}{2 \Sigma^2}+\frac{e^{B} F_x^2 \Phi'^2}{2 \Sigma^2}-\frac{4 e^{B} F_x^2 \Sigma'^2}{\Sigma^4}+\frac{e^{B} F_x a_v' a_x'}{\Sigma^2}+\frac{e^{B} \dot{a}_x a_x'}{z^2 \Sigma^2}\nonumber \\ &-\frac{e^{B} E_x(v) a_x'}{z^2 \Sigma^2}-\frac{\dot{B} B'}{z^2}-\frac{2 \dot{\Sigma} B'}{z^2 \Sigma}+\frac{4 e^{B} F_x^2 B'}{z \Sigma^2}-\frac{2 \dot{B}'}{z^2}+\frac{3 e^{B} F_x B' F_x'}{\Sigma^2} \nonumber \\ &+\frac{4 e^{B} F_x F_x'}{z \Sigma^2}-\frac{\dot{\Phi} \Phi'}{z^2}+\frac{4 e^{B} F_x^2 B' \Sigma'}{\Sigma^3}+\frac{2 e^{B} F_x F_x' \Sigma'}{\Sigma^3}-\frac{2 \dot{B} \Sigma'}{z^2 \Sigma}-\frac{4 \dot{\Sigma}'}{z^2 \Sigma} \nonumber \\ &+\frac{2 e^{B} F_x^2 B''}{\Sigma^2}+\frac{2 e^{B} F_x F_x''}{\Sigma^2}-\frac{6}{z^4}\, . \label{eq:eom8} \end{align}} Notice that we are denoting, for example, $\dot{\Phi}' = \partial_z(\dot{\Phi})$, i.e., the dot derivatives are taken before the $z$ derivatives. The above set of equations represents a convenient set of non-redundant equations that can be obtained from all the non identically vanishing equations of motion following from our ansatz. 
More precisely, the first three equations correspond to the $E_{zz}$ and $E_{zx}$ components of Einstein's equations and to the $M_{v}$ component of Maxwell's equations, respectively. The fourth equation is given by $E_{zv}$ once $E_{zz}$ is used to eliminate $\Sigma''$. The fifth equation is given by the linear combination $e^{2B}E_{xx} + E_{yy}$. The remaining three equations are respectively the $M_x$ component of Maxwell's equations, the $E_{xx}$ component of Einstein's equations and the equation for the scalar field $\phi_1$. The strategy for numerically solving the equations from the specified initial and boundary conditions proceeds iteratively as follows. The fields $(B, \Phi, a_x)$ represent the free initial data. All other fields can then be obtained from the equations of motion on a constant time slice. In more detail, after specifying the initial data at a given lightcone time, we solve (\ref{eq:eom1}) for the field $\Sigma$. This is a non-linear ordinary differential equation as no time derivatives appear. Next, equations (\ref{eq:eom2}) and (\ref{eq:eom3}) provide two coupled linear ordinary differential equations for $F_x$ and $a_v$. Given $\Sigma, F_x$ and $a_v$ on the fixed time slice, one can proceed similarly to solve for the dotted fields. First we can solve (\ref{eq:eom4}) for $\dot{\Sigma}$. Then, we can solve the two coupled linear differential equations (\ref{eq:eom5}) and (\ref{eq:eom6}) for $\dot{B}$ and $\dot{a}_x$ and, subsequently, the linear ordinary differential equation (\ref{eq:eom7}) for $\dot{\Phi}$. Finally, via the linear ordinary differential equation (\ref{eq:eom8}) we determine $F_z$. This way we obtain all the fields and dotted fields on the initial time slice.
The time evolution is obtained by undoing the definition of the dotted derivative fields, which yields a set of dynamical equations for $(B, \Phi, a_x)$ \begin{align} \partial_v B &=\dot{B} + \frac{z^2}{2}F_z\partial_z B\, , \label{eq:teom1} \\ \partial_v \Phi & =\dot{\Phi} + \frac{z^2}{2}F_z\partial_z \Phi \, ,\label{eq:teom2} \\ \partial_v a_x &=\dot{a}_x + \frac{z^2}{2}F_z\partial_z a_x\, .\label{eq:teom3} \end{align} Knowing their time derivative at a given time, we can time-evolve $(B, \Phi, a_x)$ to the next time step and repeat the above procedure to solve for all the fields on that time step. Iterating this algorithm, we can evolve the system up to any desired time. To solve the equations (\ref{eq:eom1}\,--\,\ref{eq:eom8}), we use the Chebyshev spectral method and introduce a Chebyshev grid $z_i$ in the $z$-direction. This way fields are replaced by vectors, $X(z) \rightarrow X_i = X(z_i)$, derivative operators become matrices acting on the vectors, and the differential equations translate into sets of coupled equations for the different field components $X_i$. More precisely, the equation for $\Sigma$ is non-linear and becomes a set of non-linear coupled algebraic equations for the coefficients $\Sigma_i$, which we can collectively denote by \begin{equation} f_j(\Sigma_i) = 0 \, . \end{equation} The index $j$ counts the number of components in the equation, which is the same as the number of variables $\Sigma_i$. We solve this set of non-linear equations using the Newton-Raphson method. This method finds an approximate solution to $f_j = 0$ as follows. First we start from a guess solution $\Sigma^{(0)}_i$. Then we use the updating routine \begin{equation} \Sigma^{(1)}_i = \Sigma^{(0)}_i - (J^{-1})_{ij}f_j(\Sigma_i^{(0)}),\label{eq:newton} \end{equation} where $J$ is the Jacobian matrix \begin{equation} J_{ij} = \frac{\partial f_i}{\partial\Sigma_j}.
\end{equation} Now $\Sigma^{(1)}$ should provide a vector which is closer to the solution of $f_j = 0$ than our original guess. Repeating the algorithm by taking $\Sigma^{(1)}$ as a new guess and using (\ref{eq:newton}), we again get closer to the correct solution. Iterating this algorithm many times, one should converge to the solution of $f_j = 0$. In practice the number of iterations needed depends on how good the initial guess was. In our numerical algorithm we use the solution from the previous time step as the initial guess. This way we only need a few (typically 3) iterations to solve the equation of motion to the desired accuracy (around $10^{-13}$ accuracy). The rest of the equations (\ref{eq:eom2}\,-\,\ref{eq:eom8}) are all linear in the unknown variables and can be straightforwardly solved by standard matrix inversion methods. We have used the numpy.linalg.solve and numpy.linalg.inv functions, which are included in the Python Numpy package and are based on the LAPACK library. At the practical level, when solving (\ref{eq:eom1}\,-\,\ref{eq:eom8}), in order to simplify the numerics, we also find it convenient to subtract or rescale the near boundary behavior of some of the fields. In particular, we work with the set of regularized fields $X_{r}$ defined as follows \begin{align} \label{eq:asysub} &F_z = \frac{1}{z^2}(1 - \frac{1}{2}k^2 z^2 + z^2 F_{z, r}) \, , & & \dot{\Phi} = -\frac{3}{2}\dot{\Phi}_r \, , \nonumber \\ &\Sigma = \frac{1}{z}( 1 + z^2 \Sigma_r) \, , & & \dot{\Sigma} = \frac{1}{2z^2}-\frac{k^2}{4} + z\dot{\Sigma}_r\, ,\nonumber \\ & B= z^2 B_r \, , &\qquad & \dot{B} = -\frac{3}{2}z^2\dot{B}_r \, , \\ & \Phi = z^2 \Phi_r \,, & & \dot{a}_x = -\frac{1}{2}\dot{a}_{x, r}\, . \nonumber \end{align} The functions $F_x,\,a_x,\,a_v$ are left intact. Note that in these redefinitions, $\dot{\Phi}_r$ is not the dot derivative acting on $\Phi_r$, but a new variable defined through the equation in (\ref{eq:asysub}). 
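The spectral Newton-Raphson step can be made concrete on a toy boundary-value problem standing in for the (far more involved) $\Sigma$ equation: $u''(x) = e^{u}$ on $[-1,1]$ with $u(\pm 1) = 0$. The Chebyshev differentiation matrix and the iteration (\ref{eq:newton}) are as described in the text; everything else in this sketch is illustrative.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative sum trick for the diagonal
    return D, x

N = 32
D, x = cheb(N)
D2 = D @ D

def residual(u):                 # plays the role of f_j(Sigma_i)
    r = D2 @ u - np.exp(u)
    r[0], r[-1] = u[0], u[-1]    # Dirichlet conditions u(+-1) = 0
    return r

def jacobian(u):                 # J_ij = d f_i / d u_j
    J = D2 - np.diag(np.exp(u))
    J[0, :] = 0.0; J[0, 0] = 1.0
    J[-1, :] = 0.0; J[-1, -1] = 1.0
    return J

u = np.zeros(N + 1)              # initial guess (previous-slice analogue)
iterations = 0
while np.max(np.abs(residual(u))) > 1e-12 and iterations < 20:
    u = u - np.linalg.solve(jacobian(u), residual(u))
    iterations += 1

final_residual = np.max(np.abs(residual(u)))
```

From the trivial guess the iteration converges in a handful of steps; with a good guess from the previous time slice, as in the text, even fewer are needed.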
The same applies for all the other dotted fields. In addition to (\ref{eq:eom1}\,-\,\ref{eq:eom8}), there are other components of Einstein's and Maxwell's equations that do not vanish identically for our ansatz. These are in principle redundant with (\ref{eq:eom1}\,-\,\ref{eq:eom8}), but in practice they are useful for testing our numerical solutions. To evolve $(B, \Phi, a_x)$ we use the fourth order explicit Runge-Kutta method. The time domain is divided into discrete time-steps $v_n$ and the value of a field at step $n+1$, $X(v_{n+1})$, is obtained as the value at the previous step, $X(v_n)$, plus the weighted average of four different time increments determined in terms of (\ref{eq:teom1}\,-\,\ref{eq:teom3}). Using this algorithm, we can solve the full numerical problem modelling the pump probe experiment. In practice we have found it computationally faster to treat the determination of the probe conductivity as a separate problem. Thus, we use the above algorithm to solve for the spacetime corresponding to the system subject to the pump electric field. To obtain the probe conductivity, we linearize the equations of motion around the numerically known background spacetime and solve them using a similar numerical procedure as above. The main advantage of this procedure is that when solving the linearized equations of motion, we do not need to use the Newton-Raphson method, but all the equations are solved using linear algebra, which is faster. This becomes particularly useful when we consider several probe ``experiments'' for the same pump pulse. We have checked the numerical accuracy of solving the linearized system by comparing the results to those obtained using the (slower) full code for both pump and probe parts. Further checks are provided by testing the system on the equilibrium states, in particular by comparing the numerically obtained conductivity with the analytic formula for the DC conductivity.
These agree to a very good accuracy (in the cases we have tested, to within $10^{-7}\,\%$). \subsection{Numerical error estimate} There are two sources for the numerical error in our procedure. The first one arises from discretizing the $z$ coordinate, and the second one arises from discretizing the time coordinate. As a measure of the numerical error we use the remaining three redundant equations of motion. For an exact solution, these equations would be satisfied automatically. Denoting the equations as Eq$_i=0$ where $i = 1,2,3$, we consider the following quantity \begin{equation} \textrm{Err} = \sqrt{\sum_{i=1}^3 \textrm{max}_{z,v}\(\textrm{Eq}_i\)^2}. \end{equation} We evaluate the equations on the spacetime grid using fourth order finite differences for approximating time derivatives and then find the maximum value of $|\textrm{Eq}_i|$ within the grid. Finally we take the root mean square of the maximum error of the three equations. This error measure is displayed in Fig.~\ref{fig:error} as a function of the number of timelike $N_t$ and spacelike $N_z$ lattice points for fixed timelike and spacelike size of the computational domain. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.6]{error.pdf} \caption{\label{fig:error} Numerical error as a function of the number of timesteps $N_t$. The different curves correspond to different numbers $N_z$ of spatial lattice points. Here we study a Gaussian pulse $E_x(t) = A \cos(\omega_{P} t) \exp\left( -\frac{(t - t_0)^2}{(\Delta t)^2}\right)$ with the choice of parameters: $A = 0.5$, $t_0 = 3$, $\Delta t = 1$, $\omega_P = \pi/2$, $\rho = 0.5$, $k = 1.0$, $m_{I} = 0.5$, where $m_{I}$ is the initial mass of the black hole. Here we have chosen a shorter pulse to keep the computational time shorter.
The spatial size of the computational domain is $z\in [0, 1]$ while the timelike size is $v\in [0, 10]$.} \end{center} \end{figure} From this error measure, we find that the numerical error first decays approximately as $\textrm{Err}\propto N_t^{-4}$ as $N_t$ is increased and then saturates to a constant value. This is expected as there is a remaining error due to the finite number of spatial lattice sites $N_z$. Increasing this number then decreases the saturated value approximately exponentially. Thus, as both $N_t$ and $N_z$ are increased the error is found to decrease rapidly, which gives strong evidence that the numerical calculation is converging towards a solution of the continuum equations of motion.\footnote{Eventually, when $N_t$ and $N_z$ are sufficiently large, the error saturates again due to the finite accuracy of Python floating point numbers.} In practice we have found that rather small numbers of spatial lattice sites such as $N_z=8$ or $N_z = 10$ are sufficient to give reliable results. For $N_t$ we use the highest values from those shown in Fig.~\ref{fig:error}. This is forced by the fact that the probe pulses have to be short in order to reasonably approximate delta functions. On the other hand the timelike computational domain has to be large in order to get a reliable Fourier transform of the differential conductivity, without finite size effects. For example, for a computational domain of length of order $10^3$ we use $N_t$ of the order $10^{6}$. This results in a computational time of the order of tens of hours on a laptop. \section{Non-equilibrium background spacetimes} \label{sec:noneqbst} To model the process of applying the pump electric field, we start from an initial state corresponding to an equilibrium black brane dual to a state at a given temperature $T_I$.
The time dependent pump electric field then takes the system out of equilibrium to a configuration captured by the ansatz \eqref{eq:ansatz} to finally reach a new equilibrium configuration at a different temperature $T_F$. Throughout this process we keep $k$ fixed and $\rho$ is conserved, as guaranteed by Ward's identities. In more detail, the starting equilibrium configuration in terms of the regularized fields defined in \eqref{eq:asysub} corresponds to setting \begin{equation} F_{z,r} = -m_I z + \frac{1}{4}\rho^2 z^2, \qquad a_{v} = \rho z - \mu_I, \end{equation} and all the other regularized fields in \eqref{eq:asysub} to zero. The parameters $m_I$ and $\mu_I$ are determined in terms of $T_I, k$ and $\rho$ according to the relations of Sec.~\ref{sec:model}. The specific form we use for the pump field is given by \begin{equation}\label{eq:pump_pulse_form} E_x(t) = A \cos(\omega_{P} t) e^{-\frac{(t - t_0)^2}{(\Delta t)^2}} \frac{1 - \tanh \frac{t - t_0 - 3\Delta t}{\delta}}{2} \, . \end{equation} This represents a Gaussian wavepacket with central frequency $\omega_{P}$ and width $\Delta t$, centered at $t_0$ and cut off by a smoothed step function at $t_{\rm end}\equiv t_0+3\Delta t$ (from which time onwards we consider the pumping to have finished). Throughout this section, we choose the parameters $t_0 = 50$, $\Delta t = 15$ and $\delta = 0.01$. The pulse amplitude $A$ is instead tuned in order to obtain the desired increase in temperature. At the time of the pulse, the metric functions start time-evolving, exciting all the rescaled fields defined in \eqref{eq:asysub}. Notice that in particular, this will give a nontrivial expectation value for the boundary operators associated to the bulk fields. 
More specifically, according to our ansatz, the current $J_{\mu}$ associated to the bulk gauge field, the operator $\mathcal{O}$ associated to the scalar excitation $\Phi$ and the non-isotropic stress-energy tensor $T_{\mu\nu}$ associated to the bulk metric will acquire a time dependent expectation value. At late times they will all settle to new equilibrium values with \begin{equation} F_{z,r} = -m_F z + \frac{1}{4}\rho^2 z^2, \qquad a_{v} = \rho z - \mu_F, \end{equation} and again all other rescaled fields vanishing. The final mass parameter of the black hole will increase throughout the process, $m_F > m_I$, consistently with the fact that energy has been pumped into the system. As an example, Fig.~\ref{fig:bulk_metric_fns} shows plots of some metric function components obtained from the numerical solution. \begin{figure}[ht] \begin{center} \includegraphics[width=0.49\textwidth]{F_x_plot.pdf} \includegraphics[width=0.49\textwidth]{F_z_plot.pdf} \caption{\label{fig:bulk_metric_fns} Plots of bulk profile of the metric functions $F_{z,r}$ and $F_x$ interpolating between initial and final equilibrium state. The parameters corresponding to the plot are: $\mu_I = 1, T_I = 0.2, T_F = 0.3, k = 0.2, \omega_P = 0$.} \end{center} \end{figure} To obtain boundary theory expectation values from the bulk solution, one has to perform the corresponding holographic renormalization procedure \cite{Andrade:2013gsa}. The resulting one point functions are given in terms of asymptotics of the bulk fields as \cite{Withers:2016lft} \begin{align} \epsilon = \langle T_{tt}\rangle &= -2 F_{z,r}' \, ,\nonumber \\ \langle T_{tx}\rangle &= 3 F_x' \, ,\nonumber \\ p_x-p_y=\langle (T_{xx} - T_{yy})\rangle &= 12 B_r' \, , \\ \langle \mathcal{O}\rangle &= 3 \Phi_r' \, ,\nonumber \\ \rho=\langle J_t\rangle &= a_{v}' \, ,\nonumber \\ \langle J_x\rangle &= E_x(t) + a_{x}' \, , \nonumber \end{align} where the different bulk functions appearing on the r.h.s.\ are all evaluated at the AdS boundary $z\to0$.
\subsection{Vanishing pulse frequency} \label{sec:0pulsefreq} We start by considering the particular case where the pump field is not oscillating, $\omega_P =0$. Fig.~\ref{fig:1pt1} shows the boundary theory expectation values for the same type of time dependent state represented in Fig.~\ref{fig:bulk_metric_fns}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.32\textwidth]{pt1.pdf} \includegraphics[width=0.32\textwidth]{pt2.pdf} \includegraphics[width=0.32\textwidth]{pt3.pdf}\\ \includegraphics[width=0.32\textwidth]{pt4.pdf} \includegraphics[width=0.32\textwidth]{pt5.pdf} \caption{\label{fig:1pt1} Time dependence of the expectation values of one point functions in the dual field theory. The parameters considered here are the same as in Fig.~\ref{fig:bulk_metric_fns}. } \end{center} \end{figure} In particular one can observe that the pump electric field $E_{x}(t)$ induces an electric current and a momentum current in the field theory. Furthermore, there is a substantial pressure anisotropy induced and, in the case represented here, the energy density increases by more than a factor of two. All one point functions, except for the energy density, seem to have a relaxation time far longer than the time scale of the pump field. Inspecting the logarithmic plots in Fig.~\ref{fig:1ptlog} one finds that they decay towards equilibrium exponentially in time. The rates of the exponentials are consistent with \begin{equation} \langle T_{tx}\rangle \propto e^{-\omega_i t}, \quad \langle (T_{xx} - T_{yy})\rangle \propto e^{-2\omega_i t}, \quad \langle \mathcal{O}\rangle \propto e^{-\omega_i t}, \quad \langle J_{x}\rangle \propto e^{-\omega_i t}, \end{equation} where $\omega_* = -i\omega_i$ is the purely imaginary lowest quasinormal mode frequency in the vector channel. It is important to note that the pressure anisotropy is decaying with double the rate of the other expectation values.
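The rates above are read off by fitting straight lines to the late time logarithmic plots. A minimal sketch of this extraction on synthetic tails (the amplitudes and the rate $\omega_i = 0.4$ are made-up illustrative values, not the actual quasinormal frequency):

```python
import numpy as np

# Synthetic late-time tails: a vector-channel tail ~ exp(-omega_i t)
# and a pressure-anisotropy tail ~ exp(-2 omega_i t).
omega_i = 0.4
t = np.linspace(20.0, 60.0, 400)
T_tx = 3.0 * np.exp(-omega_i * t)
aniso = 0.5 * np.exp(-2.0 * omega_i * t)

# The slope of log|signal| against t gives (minus) the decay rate.
rate_tx = -np.polyfit(t, np.log(np.abs(T_tx)), 1)[0]
rate_aniso = -np.polyfit(t, np.log(np.abs(aniso)), 1)[0]
ratio = rate_aniso / rate_tx   # the factor of two seen in the anisotropy decay
```

On exact exponential data the fitted rates reproduce the input values, and the ratio of the anisotropy rate to the vector-channel rate comes out as two.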
\begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{lp1.pdf} \includegraphics[width=0.45\textwidth]{lp2.pdf} \includegraphics[width=0.45\textwidth]{lp3.pdf} \includegraphics[width=0.45\textwidth]{lp4.pdf} \caption{\label{fig:1ptlog} Logarithmic plots of expectation values $\langle T_{tx}\rangle, \langle (T_{xx} - T_{yy})\rangle, \langle \mathcal{O}\rangle,$ and $\langle J_{x}\rangle$. The dashed line corresponds to $e^{-\omega_i t}$ while the dot-dashed line corresponds to $e^{-2\omega_i t}$.} \end{center} \end{figure} We are able to provide an explanation for this, working under the reasonable assumption -- supported by the numerical calculations -- that the deviations from thermality are sufficiently small at late times, so that the equations of motion can be expanded in powers of the deviations. For this, consider an expansion around the final thermal black brane configuration \begin{align} \label{eq:fluctdef} &F_z = \frac{1}{z^2} f(z) + \delta F_z \, ,& & F_x = \delta F_x \, , \nonumber \\ &\Sigma = \frac{1}{z} + \delta \Sigma \, , & &a_x = \delta a_x \, ,\nonumber \\ & a_v = -\mu + \rho z + \delta a_v \, ,& & \Phi = \delta \Phi \, , \\ &B = \delta B \nonumber \end{align} with $f(z)$ being the equilibrium metric function defined in \eqref{eq:eqsol} and $\delta X$ indicating fluctuations. At the linear level in the deviations, the equations of motion decouple into two sets. One set describes the vector fluctuations \begin{align} \label{eq:linerizedV} & z^2 \partial_z\( \frac{f}{z^2} \partial_z\delta\Phi\) - 2 z \partial_z\( \frac{1}{z} \partial_v\delta\Phi\) + kz^2\partial_z\delta F_x = 0\, , \nonumber \\ & \partial_z\( f \partial_z \delta a_x \) - 2\partial_z\partial_v\delta a_x + \rho \partial_z\( z^2\delta F_x\) = 0 \, , \\ & \partial_z \( \frac{1}{z^2} \partial_z \(z^2 \delta F_x\)\) + \rho \partial_z\delta a_x - \frac{k}{z^2} \partial_z\delta \Phi = 0 \, .
\nonumber \end{align} The other set describes tensor fluctuations (also often called scalar fluctuations). Imposing AdS boundary and initial conditions, one finds that $\delta \Sigma$, $\delta a_v$ and $\delta F_z$ vanish, after which the remaining tensor fluctuation $\delta B$ is governed by \begin{equation} z^2 \partial_z\( \frac{f}{z^2} \partial_z\delta B\) - 2 z \partial_z\( \frac{1}{z} \partial_v\delta B\) - k^2\delta B = 0. \end{equation} At late times, the vector fluctuations decay with a rate set by the lowest vector quasinormal mode. At linear order, the $\delta B$ field is decoupled from the vector fluctuations and therefore remains zero. If we go to quadratic order in the fluctuations, however, the two sectors are no longer decoupled. In particular, the $B$ field equation of motion is now sourced by terms quadratic in the vector sector fields $\delta \Phi, \delta a_x$ and $\delta F_x$. Setting the linearized tensor fluctuations to zero and indicating with $\delta \delta B$ the quadratic fluctuation for $B$, from the linear combination of Einstein's equations $E_{xx} -E_{yy}$ one gets \begin{align} z^2 \partial_z\( \frac{f}{z^2} \partial_z\delta\delta B\) - 2 z \partial_z\( \frac{1}{z} \partial_v\delta\delta B\) - k^2\delta\delta B = \frac{1}{z}\(\partial_z\(z^2 \delta F_x \)\)^2 + z \partial_{z} \delta a_x \(f \partial_{z} \delta a_x - \partial_v \delta a_x \) \, . \end{align} From this we can argue that the decay rate of the source term sets the decay rate of the $B$ field, which is thus twice the decay rate of the vector perturbations. That is, twice the imaginary part of the lowest vector quasinormal mode. This explains the factor of two in the decay rate we see from the numerics in Fig.~\ref{fig:1ptlog}. \subsection{Increasing the pulse frequency} So far we have studied the case of an approximately Gaussian pump electric field (\ref{eq:pump_pulse_form}) with a vanishing mean frequency $\omega_P = 0$.
This has reproduced the by now standard story that the late time relaxation of the black brane solution is dominated by the lowest quasinormal mode that gets excited. A slight subtlety was that some of the metric components relax with a rate given by twice the lowest quasinormal mode frequency of a different sector. Next, we will study how this picture changes as we increase the mean frequency of the pump pulse. In Fig.~\ref{fig:1pts_omega} we show the one point functions for increasing values of $\omega_P = (0.1, 0.2, 0.5, 1.0)$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.32\textwidth]{pt1_om.pdf} \includegraphics[width=0.32\textwidth]{pt2_om.pdf} \includegraphics[width=0.32\textwidth]{pt3_om.pdf}\\ \includegraphics[width=0.32\textwidth]{pt4_om.pdf} \includegraphics[width=0.32\textwidth]{pt5_om.pdf} \caption{\label{fig:1pts_omega} Plots of expectation values of one point functions for different pump frequencies $\omega_P=(0.1, 0.2, 0.5, 1.0)$. Again $\mu_I = 1, T_I = 0.2, T_F = 0.3, k = 0.2.$ The plots show the exponential QNM approach to equilibrium, with decreasing amplitudes for increasing $\omega_P$.} \end{center} \end{figure} The one point functions still exhibit the late time quasinormal mode tails. The magnitudes of the tails decrease rapidly with increasing $\omega_P$. Already at $\omega_P = 0.5$ the tail becomes invisible to the eye, so for practical purposes the quasinormal mode tail has disappeared. Furthermore, the magnitudes of $\langle T_{tx}\rangle, \langle (T_{xx} - T_{yy}) \rangle, \langle \mathcal{O} \rangle$ are decreasing with increasing $\omega_P$, while the magnitudes of $\langle T_{tt}\rangle$ and $\langle J_x\rangle$ stay fixed. We note that, strictly speaking, even if the leading QNM has only infinitesimal amplitude, one could still choose to refer to its decay constant as the decay time. If the amplitude of the QNM is below the experimental resolution, however, then it is not measurable, and in that sense irrelevant.
In this paper, we therefore refer to thermalization as instantaneous or very fast if the slowly decaying modes have zero or negligible amplitude. These observations suggest that the spacetime could be approximated with the Vaidya spacetime, with an appropriately chosen time dependent mass function, at large enough $\omega_P$. In the rest of this section we provide evidence in support of this claim. First we will show that, in the limit of a large pulse frequency, the leading order solution is exactly of the Vaidya form. We obtain this result working analytically in the large frequency expansion. Next, we analyze the amplitude associated to the quasinormal mode decay described above to show how this is determined by the relation between the power spectrum of the pump pulse and the quasinormal mode frequency. \subsection{Large frequency solution} In the regime where the pump frequency is very large compared to the other parameters of the gravitational background the bulk solution can be studied analytically.\footnote{A related but distinct situation where an analytical treatment is also possible and the resulting geometry takes the Vaidya form was considered in \cite{Buchel:2013gba} in the context of abrupt holographic quenches.} We assume the electric field is of the simple oscillating form \begin{equation} E_x(t) = \cos(\omega_P t)\Omega(t)\, , \label{eq:pulsesimple} \end{equation} where the enveloping function $\Omega(t)$ is assumed to have compact support and slow variation compared to the cosine. Using the knowledge obtained from the numerical solution and inspecting the equations of motion, we formulate an ansatz for the $1/\omega_{P}$ expansion of each field and for the type of time dependence (rapidly or slowly varying) for each term in the expansion, and proceed to solve the resulting system of equations order by order. The details of the analysis are reported in Appendix \ref{app:largew}.
For the different fields and metric components, the leading correction to the unperturbed solution induced by the rapidly varying source $E_x$ takes the form % \begin{align} &F_z =\frac{1}{z^2} \left(1 - \frac{1}{2}k^2 z^2 - m z^3 + \frac{1}{4}\rho^2 z^4 \right)+ F^{(0)}_{z} + \dots& \qquad & \Sigma = \frac{1}{z}+ \frac{1}{\omega_{P}^5} \Sigma^{(5)}+ \dots& \nonumber \\ &F_x = \frac{1}{\omega_{P}} F^{(1)}_{x}+ \dots & \qquad & B = \frac{1}{\omega_{P}^3} B^{(3)}+ \dots & \\ &a_v = - \mu + \rho z + \frac{1}{\omega_{P}^3} a^{(3)}_{v} + \dots& \qquad & a_x = \frac{1}{\omega_{P}^2} a^{(2)}_{x} + \dots \nonumber \\ &\Phi = \frac{1}{\omega_P^2} \Phi^{(2)}+ \dots\, .\nonumber \end{align} At leading order in the frequency expansion only $F_z$ gets corrected by\footnote{Strictly speaking here and in the expression \eqref{eq:m(v)} below we are assuming that on the r.h.s.\ we are consistently taking only the leading contribution from the integral $ \frac{1}{2}\int^v_{-\infty} dv' E_x(v')^2 $, which in general will also have subleading terms in $1/\omega_P$ (see Appendix \ref{app:largew}).} \begin{equation} F^{(0)}_{z}(z,v) = -\frac{z}{2}\int^v_{-\infty} dv' E_x(v')^2 \, . \end{equation} This directly shows that the response to the rapidly oscillating electric field takes at leading order the Vaidya spacetime form \begin{align} ds^2 = \frac{1}{z^2} \Big[ - \left(1 - \frac{1}{2}k^2 z^2 - M(v) z^3 + \frac{1}{4}\rho^2 z^4 \right) dv^2 - 2 dv dz + dx^2 + dy^2 \Big] \, , \end{align} % with the mass function $M(v)$ given by the background $m$ value plus the contribution coming from $F^{(0)}_{z}$, that is \begin{equation} M(v) = m+\frac{1}{2}\int^v_{-\infty} dv' E_x(v')^2 \label{eq:m(v)}\, . \end{equation} The first correction to the Vaidya form of the geometry comes from the $F_x$ component of the metric at order $1/\omega_P$ % \begin{equation} F_{x}(z,v) = \frac{1}{3} \rho z \int_{-\infty}^{v} d v'~ E_x (v') + O(\omega_P^{-2}) \, \label{eq:FXleading} \, .
\end{equation} In the limiting approximation where one can treat the function $\Omega(t)$ as a constant under the integral, we would simply have $F_{x} \approx \rho z \sin(\omega_P v ) \Omega(v) / (3 \omega_P)$. Notice however that in our case for those times $v$ where $E_x(v)$ has no support, that is times where the pump pulse has been turned off, the suppression of this correction is even stronger. In fact, with a choice of $E_x(v)$ of the form \eqref{eq:pulsesimple} and for $\Omega$ any smooth function, $F_{x}$ is suppressed more strongly than any inverse power of $\omega_P$, as follows from basic Fourier analysis. At order $1/\omega_P^2$ also $\Phi$ and $a_x$ get their leading contribution from the pulse, which we report here for comparison with the result obtained from the numerics % \begin{align} &\Phi(z , v) = \frac{ z^3 \rho k }{12} \int^{v}_{-\infty} \int^{v'}_{-\infty}E_x(v'') dv'' dv' + O(\omega_P^{-3}) \, , \\ &a_{x}(z , v) = \frac{1}{6} z^3 \rho^2 \int^{v}_{-\infty} \int^{v'}_{-\infty}E_x(v'') dv'' dv' + O(\omega_P^{-3}) \, , \end{align} while the order $1/\omega_{P}^3$ gives the leading corrections to the background values of $a_v$ and $B$ \begin{align} a_{v}(z, v ) & = - \mu + \rho z + \frac{\rho^3 z^6}{36}\( \int^{v}_{-\infty}E_x(v') dv'\) \( \int^{v}_{-\infty} \int^{v'}_{-\infty}E_x(v'') dv'' dv'\) + O(\omega_P^{-4})\, , \\ B(z, v ) &= - \frac{\rho^2 z^5}{16}\( \int^{v}_{-\infty}E_x(v') dv'\) \( \int^{v}_{-\infty} \int^{v'}_{-\infty}E_x(v'') dv'' dv'\) + O(\omega_P^{-4}) \, . \end{align} In Fig.~\ref{fig:vaidya_1pt} we show the one point functions obtained from the full numerical solution (solid curves) together with the large $\omega_P$ analytic solution. In this example we have a fairly small value of $\omega_P=0.5$. Thus, the two results do not agree quantitatively very precisely, although the qualitative form of the solutions is already very similar.
Furthermore for $T_{tx}$, the difference between the full solution and the approximate analytic one is already surprisingly small. \begin{figure}[ht] \begin{center} \includegraphics[width=0.49\textwidth]{T_tt_vaidya.pdf} \hfill \includegraphics[width=0.49\textwidth]{T_tx_vaidya.pdf} \includegraphics[width=0.49\textwidth]{O_vaidya.pdf} \hfill \includegraphics[width=0.49\textwidth]{J_vaidya.pdf} \caption{\label{fig:vaidya_1pt} One point functions from the full numerical solution (solid) compared to the analytic large $\omega_P$ approximation at leading order (dashed), for $\omega_P = 0.5$.} \end{center} \end{figure} To test the convergence of the approximate analytic solution to the full numerical solution at large $\omega_P$, we define the subtracted one point functions \begin{align} \delta \langle T_{tt}(t)\rangle &= \langle T_{tt}(t)\rangle - 2 m - \int^{t}_{-\infty} dt' E_x(t')^2, \\ \delta \langle T_{tx}(t)\rangle &= \langle T_{tx}(t)\rangle - \rho \int_{-\infty}^t dt' E_x(t'), \\ \delta \langle \mathcal{O}(t)\rangle & = \langle \mathcal{O}(t)\rangle - \frac{\rho k}{4}\int^{t}_{-\infty} dt'\int^{t'}_{-\infty} dt'' E_x(t''). \end{align} These are plotted in Fig.~\ref{fig:delta_1pt}, where we have multiplied them with appropriate powers of $\omega_P$ in order to make the corresponding expectation values order one in the large $\omega_P$ limit. \begin{figure}[ht] \begin{center} \includegraphics[width=0.48\textwidth]{delta_T_tt.pdf}\hfill \includegraphics[width=0.48\textwidth]{delta_T_tx.pdf} \includegraphics[width=0.47\textwidth]{delta_O.pdf}\hfill \includegraphics[width=0.48\textwidth]{delta_J.pdf} \caption{\label{fig:delta_1pt} Difference between the expectation value obtained from numerics and from the large $\omega_P$ analytic solution. The differences are seen to decrease as $\omega_P$ increases.} \end{center} \end{figure} It can be seen that the full numerical solution is converging well to the approximate analytic one. 
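As a consistency check of the leading order mass formula \eqref{eq:m(v)}, one can verify numerically that for a rapidly oscillating pulse the injected mass $\Delta M = \frac{1}{2}\int E_x^2\, dt$ reduces to the envelope contribution with $\cos^2$ replaced by its average $1/2$, i.e.\ $\frac{1}{4}A^2\,\Delta t\sqrt{\pi/2}$ for a Gaussian envelope. A sketch using the pulse \eqref{eq:pump_pulse_form} with the envelope parameters quoted in the text and an illustrative $\omega_P = 2$:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule (kept explicit to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))

# Pump pulse of the form used in the text; omega_P = 2 is an illustrative choice.
A, t0, Dt, delta, omega_P = 0.5, 50.0, 15.0, 0.01, 2.0
t = np.linspace(0.0, 150.0, 300001)
envelope = np.exp(-(t - t0)**2 / Dt**2) * 0.5 * (1.0 - np.tanh((t - t0 - 3.0 * Dt) / delta))
E_x = A * np.cos(omega_P * t) * envelope

# Total mass pumped into the black brane, Delta M = (1/2) * int E_x^2 dt.
dM = 0.5 * trapezoid(E_x**2, t)

# Replacing cos^2 by its average 1/2 gives (1/4) A^2 Dt sqrt(pi/2) for the Gaussian envelope.
dM_avg = 0.25 * A**2 * Dt * np.sqrt(np.pi / 2.0)
```

For $\omega_P \Delta t \gg 1$ the oscillatory correction to the averaging is exponentially suppressed, so the two numbers agree to high precision.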
\subsection{Estimating the size of non-thermality from linear response theory} In this subsection, we estimate the size of the quasinormal mode contributions to the one point functions (and thus, to the gravitational background solution) from linear response theory. That is, we assume that the electric field $E_x$ is small and we evaluate at linearized level its effect on one point functions. This assumption is clearly not a priori valid for our setup, but the final result we obtain this way is surprisingly close to the exact numerical results. Linear response theory tells us that the leading contribution to the expectation value of an operator $\chi(t)$ due to the presence of an external electric field is given by\footnote{Here for simplicity we work in the equivalent gauge where the electric field is generated by $A_x$, that is $E_x = -\partial_t A_x$. In writing the linear response \eqref{eq:lin_resp} we are also assuming that the expectation value of the operator $\chi(t)$ is zero when the electric field is absent, which is the case for the operators we will be considering.} \begin{equation} \langle\chi(t)\rangle = \int_{-\infty}^{t} dt' G_R^{\chi,J_x}(t,t') A_x(t') \label{eq:lin_resp} \, , \end{equation} where \begin{equation} G_R^{\chi,J_x}(t,t') = - i\theta(t-t')\int d^2x' \langle [\chi(t, x), J_x(t',x')]\rangle \, . \label{eq:GR} \end{equation} The expectation value is taken in the final equilibrium thermal state, and in writing \eqref{eq:GR} we have used the fact that we are considering a spatially homogeneous configuration and consequently $G_R^{\chi,J_x}$ is independent of the spatial position. We will be considering the operator $\chi$ to be one of the vector sector operators $T_{tx}, J_x, \mathcal{O}$.
The retarded correlator can be expanded at late times (i.e.\ large $|t-t'|$) in terms of quasinormal modes \begin{equation} G_R^{\chi,J_x}(t,t') = \theta(t-t')\sum_n g_n e^{-i\omega_n (t-t')} \, , \end{equation} where $\omega_n$ are the quasinormal mode frequencies shared by the correlators of the vector sector operators and $g_n$ are the residues of the quasinormal mode poles in the Fourier transformed correlator. In the situation we are interested in, $A_x$ vanishes after the pulse is turned off. For times $t$ late enough compared to the time when the source was turned off, we can reliably substitute the quasinormal mode expansion inside the integral (\ref{eq:lin_resp}). This way we obtain the late time expression \begin{equation} \langle\chi(t)\rangle = \sum_{n} g_n e^{-i\omega_n t}\int_{-\infty}^{t} dt' A_x(t') e^{i\omega_n t'} \, . \end{equation} Thus, $\langle\chi(t)\rangle$ decays at late times with a rate set by the quasinormal modes and an amplitude set by the integral involving $A_x$. When $t$ is large enough to be outside the support of $A_x$, the time integral with upper limit $t$ is formally equal to the integral over the entire temporal domain. Integrating by parts we can express this in terms of the electric field \begin{equation} \int_{-\infty}^{\infty} dt' A_x(t') e^{i\omega_n t'} = -\frac{i}{\omega_n}\int_{-\infty}^{\infty} dt' E_x(t') e^{i\omega_n t'}\, ,\label{eq:QNM_size} \end{equation} which gives for the linear response of the operator $\chi$ at late enough times where the electric pump field has been turned off \begin{equation} \label{eq:QNMdecomposition} \langle\chi(t)\rangle = - i \sum_{n} \frac{g_n}{\omega_n} e^{-i\omega_n t} \int_{-\infty}^{\infty} dt' E_x(t') e^{i\omega_n t'} \, . \end{equation} This shows how the amplitude associated to each quasinormal mode contribution is determined by the Fourier transform of the pulse electric field evaluated at the frequency of the quasinormal mode itself.
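The integration by parts identity \eqref{eq:QNM_size} can be checked directly for a compactly supported pulse. In this sketch the pulse parameters and the complex frequency $\omega$ (a stand-in for a generic quasinormal frequency with negative imaginary part) are arbitrary illustrative values:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule for (possibly complex) samples."""
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))

# Compactly supported pulse; its total time integral is exponentially small,
# so A_x(t) = -int_{-inf}^t E_x dt' also has (numerically) compact support.
omega_P, t0, Dt = 4.0, 15.0, 3.0
t = np.linspace(0.0, 30.0, 60001)
E_x = np.cos(omega_P * t) * np.exp(-(t - t0)**2 / Dt**2)

# Cumulative trapezoid for the gauge potential A_x(t) = -int E_x dt'.
incr = (E_x[1:] + E_x[:-1]) * 0.5 * np.diff(t)
A_x = -np.concatenate(([0.0], np.cumsum(incr)))

# Generic complex "quasinormal" frequency with Im(omega) < 0.
omega = 4.0 - 0.3j
lhs = trapezoid(A_x * np.exp(1j * omega * t), t)
rhs = (-1j / omega) * trapezoid(E_x * np.exp(1j * omega * t), t)
rel = abs(lhs - rhs) / abs(rhs)   # relative mismatch of the two sides
```

The boundary term in the integration by parts is controlled by the (exponentially small) value of $A_x$ at the end of the domain, so the two sides agree to the accuracy of the quadrature.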
At the times we focus on in our analysis, the lowest lying QNM with purely imaginary frequency generically dominates over the others. There can be in principle cases where other QNMs, with frequency having a non-vanishing real part, may be in resonance with the pump pulse and compete with the leading QNM. Nonetheless, these contributions would decay fast in time and quickly become negligible. In Fig.~\ref{fig:QNM_comp} we plot the expectation value $\langle T_{tx}\rangle$ at the time $t=t_{end}$ when the pump pulse turns off. In the large $\omega_P$ approximation the expectation value is immediately zero at this time. In the numerics we see significant deviations of $\langle T_{tx}\rangle$ from zero for small $\omega_P$. The deviation mainly arises from the lowest quasinormal mode contribution, whose size is given by the Fourier transform (\ref{eq:QNM_size}) in the linear response approximation. The blue curve in Fig.~\ref{fig:QNM_comp} is obtained by fitting (\ref{eq:QNM_size}) with a constant coefficient in front as a fitting parameter. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{T_tx_and_E_x.pdf} \includegraphics[width=0.48\textwidth]{chi_vs_E_x.pdf} \caption{\label{fig:QNM_comp} Left: The red dots are data points obtained from the full numerical solution. The blue curve is a fit to the functional form (\ref{eq:QNM_size}) with a single fitting parameter (the overall scale). The root-mean-square error of the fit is $RMSE\approx 0.0036$. Right: Different one point functions evaluated at $t=t_{end}$ plotted as functions of the Fourier transformed electric field (\ref{eq:QNM_size}). 
The blue solid line corresponds to a linear relation, while the green dashed line corresponds to a quadratic relation.} \end{center} \end{figure} As the figure shows, the linear response curve fits the numerical data very well with a root-mean-square error\footnote{The root-mean-square error is the positive root of $RMSE^2 = MSE = n^{-1}\sum_{i=1}^n(y_i - y_i^{fit})^2$.} of $RMSE\approx 0.0036$. Similarly, the other one point functions $\langle J_x\rangle$ and $\langle\mathcal{O}\rangle$ can be fitted with the form (\ref{eq:QNM_size}). An exception is the one point function $\langle(T_{xx} - T_{yy})\rangle$ which is better fitted by $(E_x(\omega = \omega_*))^2$. In the right part of Fig.~\ref{fig:QNM_comp}, we show a log-log plot of the one point functions versus the Fourier transformed electric field. For the vector sector operators, the relation is approximately linear while for $\langle(T_{xx} - T_{yy})\rangle$ the relation is approximately quadratic. The quadratic dependence appears for the same reason as the factor of two in the decay rate of $\langle(T_{xx} - T_{yy})\rangle$, as the corresponding field is sourced by the squares of the vector sector fields. \section{Driven oscillator toy model} \label{sec:toy} As we will show in this section, two signatures in the evolution of our holographic setup can be captured and explained by a simple toy model given in terms of a driven damped harmonic oscillator. These are the instantaneous relaxation at large driving frequency $\omega_P\rightarrow \infty$ and the smallness of the quasinormal mode contributions when $\omega_P$ is separated from the real parts of the quasinormal mode frequencies. The equation of motion of the driven oscillator is given by \begin{equation} \ddot{x} = -\omega_0^2 x - \kappa \dot{x} + f(t) \, ,\label{eq:osc} \end{equation} where $\omega_0$ is the undamped oscillation frequency and $\kappa$ the damping strength.
We will choose the driving force to have the form \begin{equation} f(t) = \cos(\omega_P t) \Omega(t), \label{eq:time_decomp} \end{equation} where $\Omega(t)$ varies much more slowly than the cosine. Without a driving force, displacements of $x$ decay back to zero exponentially in time as \begin{equation} x(t) = A_{-} e^{ -i \omega_- t} + A_{+} e^{- i \omega_+ t}\, , \label{eq:OQNM} \end{equation} where \begin{equation} \omega_{\pm} = -\frac{i}{2}(\kappa \pm \nu)\, , \quad \nu = \sqrt{\kappa^2 - 4\omega_0^2} \, . \end{equation} These complex frequencies represent the analogue of the quasinormal mode frequencies in our system. The driven system \eqref{eq:osc} is solved explicitly as \begin{equation} x(t) = A_-(t)e^{- i \omega_- t } + A_+(t) e^{-i \omega_+ t } \, , \label{eq:Osol} \end{equation} where the time dependent amplitudes associated to each mode are \begin{equation}\label{Apm} A_{\pm}(t) = \mp\frac{1}{\nu}\int^{t }_{-\infty} dt' e^{i \omega_\pm t' } f(t' ) \, . \end{equation} In writing the solution we have assumed that $x(t)$ vanishes at early times before the driving force has been turned on. Let us consider the large $\omega_P$ limit of the solution. The basic intuition is that in this limit $x(t)$ oscillates fast, so that the equation of motion can be approximated as $\ddot{x}(t) \approx f(t)$, with the approximate solution $x(t) \approx -\cos(\omega_P t)\Omega(t)/\omega_P^2$. We can see this from the exact solution by integrating by parts twice,\footnote{Integrating $\cos(\omega_P t)$ and taking derivatives of the rest.} leading to \begin{equation} A_{\pm}(t) = \mp\frac{1}{ \nu\omega_P} e^{i\omega_{\pm} t}\Omega(t) \sin(\omega_P t) \mp \frac{1}{ \nu\omega_P^2}(i\omega_{\pm} \Omega(t)+ \Omega'(t))e^{i\omega_{\pm} t}\cos(\omega_P t) + O(\omega_P^{-3}).
\end{equation} Substituting into (\ref{eq:Osol}), we arrive at \begin{equation} x(t) = -\frac{1}{\omega_P^2}\cos(\omega_P t) \Omega(t) + O(\omega_P^{-3}) = -\frac{f(t)}{\omega_P^2} + O(\omega_P^{-3}).\label{eq:largewsol} \end{equation} Thus, we see that the oscillator follows the driving force instantaneously and, in particular, it relaxes instantaneously as the force vanishes. This behavior is similar to the instantaneous thermalization of Vaidya spacetimes, where one point functions relax to their thermal values as soon as the boundary source turns off. Notice also that the series expansion in $1/\omega_P$ can be computed to arbitrary orders by repeatedly integrating by parts. This way the instantaneous behavior is seen to hold to all orders in the $1/\omega_P$ expansion. The next question we want to address is what happens to the quasinormal mode contributions and whether a large $\omega_P$ is a necessary condition to have instantaneous thermalization. At late times, that is for times after the driving function has been turned off, it is apparent that the solution written in the form \eqref{eq:Osol} takes the ``quasinormal mode'' decay form \eqref{eq:OQNM}. In fact, $A_{\pm}(t)$ become time independent as long as one considers values of $t$ where the driving force has been turned off, and therefore at late times give the amplitudes associated to the quasinormal modes. For concreteness, let us assume that $\Omega(t)$ has support on a compact region so that the driving function is turned off after some time $t_{\rm end}$.
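As a quick numerical illustration of the instantaneous following in \eqref{eq:largewsol} (this is not part of the holographic computation, and the parameter values below are purely illustrative), one can integrate \eqref{eq:osc} directly for a Gaussian envelope:

```python
import math

# Illustrative over-damped parameters (kappa > 2*omega0) and a fast pump;
# these numbers are chosen for the demo and are not the paper's values.
OMEGA0, KAPPA = 1.0, 3.0
OMEGA_P, DT_ENV = 30.0, 2.0

def force(t):
    """f(t) = cos(omega_P t) * Omega(t), with a Gaussian envelope Omega."""
    return math.cos(OMEGA_P * t) * math.exp(-(t / DT_ENV) ** 2)

def x_at(t_target, h=1e-3):
    """Integrate x'' = -omega0^2 x - kappa x' + f(t) from rest at t = -10
    up to t_target with classical RK4."""
    def rhs(t, x, v):
        return v, -OMEGA0 ** 2 * x - KAPPA * v + force(t)
    x, v = 0.0, 0.0
    n = int(round((t_target + 10.0) / h))
    for i in range(n):
        t = -10.0 + i * h
        k1x, k1v = rhs(t, x, v)
        k2x, k2v = rhs(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = rhs(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = rhs(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x

# At the pulse peak the oscillator tracks -f(t)/omega_P^2 = -1/900 here,
# and a few envelope widths after the pulse it has relaxed to (numerically) zero.
print(x_at(0.0), -force(0.0) / OMEGA_P ** 2)
print(abs(x_at(8.0)))
```

At the envelope peak the response agrees with $-f(t)/\omega_P^2$ at the percent level, and well after the pulse the displacement is compatible with zero, in line with the all-orders argument above.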
For all those times $t>t_{\rm end}$ where the driving force has been turned off, one can formally replace the integrals in \eqref{Apm} with integrals over the entire time range, that is \begin{equation} A_{\pm} = \mp\frac{1}{\nu}\int^{\infty}_{-\infty} dt' e^{i \omega_\pm t' } f(t' ) = \mp\frac{1}{2\nu}\(\hat{\Omega}( \omega_\pm+ \omega_P ) + \hat{\Omega}(\omega_\pm-\omega_P)\), \end{equation} where we denote the Fourier transforms as \begin{equation} \hat{\Omega}(\omega) = \int_{-\infty}^{\infty} dt\, e^{i\omega t}\Omega(t) \, . \end{equation} For any smooth choice of $\Omega(t)$, the coefficients $\hat\Omega$ of the quasinormal modes will be suppressed more strongly than any inverse power of the arguments $ \omega_\pm \pm \omega_P$. To build some more explicit intuition, we can specialize to an example close to our holographic calculation by choosing a Gaussian envelope \begin{equation} \Omega(t) = e^{-\frac{t^2}{\Delta t^2} } \, . \label{eq:gaussOmega} \end{equation} Strictly speaking, with this choice the forcing pulse is never turned off. However, for all practical purposes, at large enough times compared to the Gaussian width we can consider the driving force to be vanishing. With this choice, the amplitudes of QNMs take the form \begin{equation} \begin{aligned} \label{eq:modeampl} A_{\pm} = \mp\frac{ \sqrt{\pi} \Delta t }{{2\nu}} \( e^{-\frac{( \omega_{\pm} + \omega_P)^2 \Delta t^2}{4} } + e^{-\frac{( \omega_{\pm}- \omega_P)^2 \Delta t^2}{4} } \). \end{aligned} \end{equation} Notice that, modulo a factor $2\nu$, these are nothing other than the Fourier transform of the driving force % \begin{equation} \hat f(p) = \frac{\sqrt{\pi} \Delta t }{2} \(e^{-\frac{\Delta t^2 (p+\omega_P)^2}{4}} + e^{-\frac{\Delta t^2 (p -\omega_P)^2}{4}} \) \ \end{equation} evaluated at the QNM frequencies $p = \omega_{\pm }$.
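The closed form \eqref{eq:modeampl} can be verified directly against the defining integral \eqref{Apm}; the short sketch below does so for one (illustrative, non-paper) choice of parameters, confirming that the QNM amplitude is just the driving spectrum sampled at the complex QNM frequency.

```python
import cmath, math

# Illustrative over-damped toy-model parameters (kappa > 2*omega0);
# not the values used in the holographic numerics.
OMEGA0, KAPPA = 1.0, 3.0
OMEGA_P, DT_ENV = 3.0, 2.0
NU = math.sqrt(KAPPA ** 2 - 4 * OMEGA0 ** 2)
OMEGA_QNM = {+1: -0.5j * (KAPPA + NU), -1: -0.5j * (KAPPA - NU)}

def amplitude_integral(sign, t_max=20.0, h=1e-3):
    """A_pm from the defining integral, with f(t) = cos(omega_P t) exp(-t^2/Dt^2)."""
    w = OMEGA_QNM[sign]
    n = int(round(2 * t_max / h))
    total = 0j
    for i in range(n + 1):
        t = -t_max + i * h
        weight = 0.5 if i in (0, n) else 1.0  # trapezoidal rule
        total += weight * cmath.exp(1j * w * t) * math.cos(OMEGA_P * t) \
                 * math.exp(-(t / DT_ENV) ** 2)
    return -sign * total * h / NU

def amplitude_closed_form(sign):
    """The Gaussian Fourier transform appearing in the closed-form amplitude."""
    w = OMEGA_QNM[sign]
    g = lambda p: cmath.exp(-(p * DT_ENV / 2) ** 2)
    return -sign * math.sqrt(math.pi) * DT_ENV / (2 * NU) \
           * (g(w + OMEGA_P) + g(w - OMEGA_P))

for s in (+1, -1):
    print(amplitude_integral(s), amplitude_closed_form(s))
```

Since the integrand and all of its derivatives vanish at the endpoints, the trapezoidal rule is extremely accurate here; numerical integral and closed form agree far beyond plotting precision.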
This shows explicitly how the amplitudes associated to QNMs depend on the overlap between the spectrum of the driving force and the real part of the QNM frequencies: If the driving frequency $\omega_P$ coincides with the real part of a QNM frequency there will be no Gaussian suppression of the amplitude of the corresponding mode. Conversely, for ${\rm Re}( \omega_{\pm} \pm \omega_P)\neq 0$ the QNM amplitude is exponentially suppressed in the square of this combination. Let us further specialize to the over-damped case, where $\kappa > 2\omega_0$. This exactly mimics our holographic setup in that the relevant QNM frequencies are purely imaginary. Since $\omega_{\pm}$ have no real part, the amplitudes of the QNM excitations are exponentially suppressed in $\omega_P$ as \begin{align} A_{\pm} =\mp \frac{1}{\nu} \sqrt{\pi} \Delta t \cos(\frac{\omega_P |\omega_{\pm}| \Delta t^2}{2}) e^{\frac{ |\omega_{\pm}|^{2} \Delta t^2} {4} } e^{-\frac{ \omega^2_P \Delta t^2}{4} } \, . \end{align} % Thus, for large $\omega_P \Delta t$ the quasinormal mode contributions are very strongly suppressed. \section{Out of equilibrium conductivity \label{sec:conduct}} Next, we want to study the conductivity properties of the non-equilibrium states we have prepared with a pump pulse. For this purpose one introduces a smaller ``probe'' electric field $\delta E_x(t)$. This electric field induces a change in the current $\delta \langle J_x(t)\rangle$.
This way we can define a real-time conductivity called the differential conductivity $\sigma(t,t')$ through the relation \begin{equation} \delta \langle J_x(t)\rangle = \int_{-\infty}^t dt'\sigma(t, t') \delta E_x(t')\, .\label{eq:difcond} \end{equation} Although there is no standard definition of frequency-space conductivity out of equilibrium, we will follow \cite{Lenarcic2014} and define \begin{equation} \sigma(\omega, t) = \int_t^{t_m}dt' e^{i\omega(t' - t)}\sigma(t', t)\, ,\label{eq:cond} \end{equation} where $t_m$ is the time at which the experiment ends. The conductivity defined this way can be related to the current two point function as discussed in Appendix \ref{sec:curcurcor}. In thermal equilibrium, the conductivity (\ref{eq:cond}) approaches the standard definition of optical conductivity at frequency $\omega$ and spatial momentum $k=0$ as the observation time $t_m$ is sent to infinity. This is also shown in Appendix \ref{sec:curcurcor}. In practice, we calculate the conductivity in two steps. First, we calculate the differential conductivity appearing in (\ref{eq:difcond}), and then perform a Fourier transform to obtain (\ref{eq:cond}). By choosing a probe field $\delta E_x(t) = \epsilon \delta(t - t_0)$, (\ref{eq:difcond}) becomes \begin{equation} \delta \langle J_x(t)\rangle = \epsilon \sigma(t, t_0) \, . \end{equation} This way we obtain the differential conductivity directly from the knowledge of $\delta \langle J_x(t)\rangle$. In the numerical implementation, we actually use a smoothed version of a delta function, which we choose to be a Gaussian \begin{equation} \delta E_x(t) = \epsilon \frac{1}{\sqrt{2\pi}\delta t} e^{-\frac{(t-t_0)^2}{2(\delta t)^2}}\, . \end{equation} In the limit $\delta t\rightarrow 0$, this approaches a delta function, while in practice we keep $\delta t$ non-vanishing but small enough that it does not affect our results considerably.
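The effect of this smoothing is easy to quantify in the oscillator toy model of Sec.~\ref{sec:toy}, used here as a stand-in for the full bulk computation (the parameter values are again illustrative): kicking the over-damped oscillator with a unit-area Gaussian of width $\delta t$ reproduces its exact impulse response up to corrections of order $(\delta t)^2$.

```python
import math

# Over-damped toy oscillator (illustrative values), probed with a narrow
# Gaussian kick; compare with the exact delta-kick (Green's function) response.
OMEGA0, KAPPA = 1.0, 3.0
NU = math.sqrt(KAPPA ** 2 - 4 * OMEGA0 ** 2)
GAMMA_M, GAMMA_P = (KAPPA - NU) / 2, (KAPPA + NU) / 2

def green(tau):
    """Exact response to a unit delta kick at tau = 0 (zero for tau <= 0)."""
    return (math.exp(-GAMMA_M * tau) - math.exp(-GAMMA_P * tau)) / NU if tau > 0 else 0.0

def response(t_target, width, h=5e-4):
    """RK4 evolution from rest with a unit-area Gaussian probe centred at t = 0."""
    def probe(t):
        return math.exp(-t ** 2 / (2 * width ** 2)) / (math.sqrt(2 * math.pi) * width)
    def rhs(t, x, v):
        return v, -OMEGA0 ** 2 * x - KAPPA * v + probe(t)
    x, v = 0.0, 0.0
    n = int(round((t_target + 1.0) / h))  # start well before the kick
    for i in range(n):
        t = -1.0 + i * h
        k1x, k1v = rhs(t, x, v)
        k2x, k2v = rhs(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = rhs(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = rhs(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x

# A width-0.05 probe already reproduces the delta response at the 0.1% level,
# and halving the width shrinks the discrepancy roughly fourfold.
print(response(3.0, 0.05), green(3.0))
```

This is the toy-model analogue of the statement that keeping $\delta t$ small but finite does not appreciably affect the extracted $\sigma(t, t_0)$.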
Smoothing out the delta function over a small scale $\delta t$ affects the conductivity at frequencies $\omega \propto 1/\delta t$ and larger, while the main interesting time-dependence in the conductivity is seen for frequency $\omega = O(1)$. In our analysis, we have taken $\delta t = 0.05$, which has only a small (order $1\%$) effect on the conductivity in the regime of interest. \subsection{Numerical results} First, we consider the pump pulse profile (\ref{eq:pump_pulse_form}) with different values of the pump frequency $\omega_P$. Fig.~\ref{fig:sigma_omega_non_eq} shows the optical conductivity as a function of $\omega$ at different times. The time $\delta t$ is measured from the time $t_m = 3\Delta t + t_0$ at which the pump pulse practically turns off (due to the smoothed theta function in (\ref{eq:pump_pulse_form})), \begin{equation} \delta t = t - t_m. \end{equation} The left panel of Fig.~\ref{fig:sigma_omega_non_eq} shows the conductivity for the large frequency pump pulse, which appears thermal immediately at time $\delta t=0$. This is consistent with the results of the previous section, which showed that the larger $\omega_P$, the closer to the Vaidya spacetime we get. \begin{figure}[t] \begin{center} \includegraphics[width=1 \textwidth]{sigma_omega_non_eq.pdf} \caption{\label{fig:sigma_omega_non_eq} The optical conductivity at different times and in the initial and the final thermal equilibrium states. Left: The pump frequency is $\omega_P = 0.5$, and the optical conductivity right after the pump pulse has ended coincides with the final equilibrium one. Right: The pump frequency is $\omega_P=0$, and the optical conductivity is seen to interpolate in time between the initial and final equilibrium values.} \end{center} \end{figure} On the other hand, the right panel of Fig.~\ref{fig:sigma_omega_non_eq} shows the conductivity for $\omega_P=0$, in which case the conductivity deviates significantly from the final thermalized conductivity for all times displayed.
Next, we study how the conductivity approaches its final thermalized value. We will focus on the DC conductivity, that is, the optical conductivity $\sigma(\omega)$ at $\omega=0$. Fig.~\ref{fig:sigma_dc_non_eq} shows $\sigma_{DC}$ as a function of time. \begin{figure}[t] \begin{center} \includegraphics[width=0.47 \textwidth]{sigma_vs_t.pdf} \includegraphics[width=0.47 \textwidth]{log_sigma_vs_delta_t.pdf} \caption{\label{fig:sigma_dc_non_eq} The DC conductivity as a function of time.} \end{center} \end{figure} In the previous section we saw that the background spacetime approaches a static black hole with a rate set by the lowest quasinormal mode. Thus, we might expect that the conductivity approaches its final thermalized value with the same rate. This is where we find a slightly surprising result. The conductivity approaches its thermalized value with a rate $2 |\textrm{Im}(\omega_*)|$. We will come back to this factor of two in the next subsection. Thus, the DC conductivity is consistent with the approximate form \begin{equation} \sigma_{DC}(\delta t) = \sigma_{DC}^{th} + C e^{-2 |\textrm{Im}(\omega_*)| \delta t},\label{eq:approx_decay} \end{equation} where $\sigma_{DC}^{th}$ is the final thermalized value of the DC conductivity. The coefficient $C$ quantifies how far out of equilibrium the conductivity gets. Defining \begin{equation} \delta \sigma_{DC}= \sigma_{DC}^{th} - \sigma_{DC}(\delta t = 0)\, , \end{equation} with the approximate form (\ref{eq:approx_decay}) we have $\delta \sigma_{DC}=-C$. In Fig.~\ref{fig:C_vs_omega_P} we show how $\delta \sigma_{DC}$ behaves as a function of $\omega_P$. As before, the initial and final temperatures are kept fixed while varying $\omega_P$. \begin{figure}[t] \begin{center} \includegraphics[width=0.8 \textwidth]{C_vs_omega_p.pdf} \caption{\label{fig:C_vs_omega_P} The maximum deviation of the DC conductivity from its thermalized value as a function of the pump frequency $\omega_P$.
The solid green curve is a one-parameter fit to the form $(E_x(\omega_*))^2$.} \end{center} \end{figure} Clearly the largest deviation appears at $\omega_P=0$. Recalling that the amplitude of the deviation of the background spacetime from the equilibrium one was well approximated by the Fourier transformed electric field $E_x(\omega_*)$, we are motivated to also attempt a similar fit to the deviation of the conductivity from its equilibrium value. Fitting $\delta \sigma_{DC}$ with $E_x(\omega_*)$ does not give a good fit, but $E_x(\omega_*)^2$ does. A fit to $E_x(\omega_*)^2$ is shown in Fig.~\ref{fig:C_vs_omega_P} as the solid green curve. This is a one-parameter fit with the overall amplitude being the only parameter. Thus, our results for the time dependence and $\omega_P$ dependence of the non-equilibrium conductivity can be summarized in \begin{equation} \sigma_{DC}(\delta t) = \sigma_{DC}^{th} + \gamma E_x(\omega_*)^2 e^{-2 |\textrm{Im}(\omega_*)| \delta t},\label{approx_decay_2} \end{equation} with a constant coefficient $\gamma\approx -0.0798$. Finally, we study how the deviation from thermality behaves as we change the difference between the initial and final temperatures. We have chosen to keep the final temperature $T_f$ fixed and to vary the difference $\Delta T = T_f-T_i$ by changing the initial temperature $T_i$. A plot of $\delta \sigma_{DC}$ as a function of $\Delta T$ is shown in Fig.~\ref{fig:dsigma_vs_dT}. \begin{figure}[t] \begin{center} \includegraphics[width=0.8 \textwidth]{dsigma_vs_dT.pdf} \caption{\label{fig:dsigma_vs_dT} The maximum deviation of the DC conductivity from its thermalized value as a function of the temperature difference $\Delta T = T_f - T_i$.} \end{center} \end{figure} This confirms the intuition that the more we increase the magnitude of the pumping electric field, the further from equilibrium the conductivity deviates (in units of the final thermalized conductivity $\sigma_{DC}^{th}$).
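The fits entering \eqref{eq:approx_decay} and Fig.~\ref{fig:C_vs_omega_P} are elementary least-squares operations. As a sanity check of the procedure (on synthetic data with made-up numbers, not the actual numerical output), one can verify that a log-linear fit recovers the decay rate and a one-parameter, amplitude-only fit recovers the overall coefficient:

```python
import math

# Synthetic data sigma_DC(dt) = sigma_th + C * exp(-rate * dt); the goal is to
# recover "rate" from a log-linear fit and C from an amplitude-only fit.
SIGMA_TH, C_TRUE, RATE_TRUE = 0.8, -0.05, 1.2  # illustrative numbers

ts = [0.1 * i for i in range(40)]
sigma = [SIGMA_TH + C_TRUE * math.exp(-RATE_TRUE * t) for t in ts]

# Rate: least-squares slope of log|sigma - sigma_th| versus time.
ys = [math.log(abs(s - SIGMA_TH)) for s in sigma]
tbar, ybar = sum(ts) / len(ts), sum(ys) / len(ys)
slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) \
        / sum((t - tbar) ** 2 for t in ts)
rate_fit = -slope

# Amplitude: one-parameter least squares against the model exp(-rate * dt),
# C = sum(d_i m_i) / sum(m_i^2) for data d_i and model values m_i.
model = [math.exp(-rate_fit * t) for t in ts]
data = [s - SIGMA_TH for s in sigma]
c_fit = sum(d * m for d, m in zip(data, model)) / sum(m * m for m in model)

print(rate_fit, c_fit)  # recovers RATE_TRUE and C_TRUE
```

The same amplitude-only formula, with $E_x(\omega_*)^2$ playing the role of the model, is all that is needed for a one-parameter fit of the type shown in the figure.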
\subsection{A symmetry argument for the thermalization rate} We have given numerical evidence that the conductivity thermalizes as $e^{-2i\omega_* t}$, where $\omega_*$ is the lowest vector quasinormal mode. The conductivity deviates from thermality simply because the background spacetime deviates from a static black hole. Thus, it is not surprising that the thermalization time scale of the conductivity is related to that of the background spacetime. What is somewhat surprising instead is the factor of $2$ relating the two rates. Here we provide a symmetry argument for it. Let us start by considering the symmetries of the bulk action \eqref{eq:action} of the holographic model we are studying. These include the subgroup $SO(2) \times SO(2)$, where the first factor represents the rotations $M$ acting on the spatial coordinates $x^{i}=(x,y)$ common to the boundary, and the second factor the global rotations $R$ that act non-trivially only on the scalar doublet $\phi_{I} = (\phi_1,\phi_2)$, rotating the two fields into one another. The equilibrium solution, and more specifically the scalar field configuration \eqref{eq:scalarsk} \begin{equation} \phi_I = k \delta_{I i} x^i \, , \end{equation} explicitly breaks this symmetry as $SO(2) \times SO(2) \to SO(2)_{\textrm{res}} $, with the residual $SO(2)$ being the subgroup that leaves the scalar field configuration invariant, \begin{equation} \phi_{I}(x^i) \to R_{I}^{~J}\phi_{J}(M^{k}_{~i}x^i) = \phi_{I}(x^i) \, , \end{equation} that is \begin{equation} R_{I}^{~J}\delta_{Jk}M^{k}_{~i} = \delta_{Ii} \, . \end{equation} The bulk metric and the bulk gauge field configurations \eqref{eq:eqsol} do not break any of the original $SO(2) \times SO(2)$ symmetry, as they are isotropic (and homogeneous) in the boundary spatial coordinates and they do not transform under the global $SO(2)$ symmetry.
Hence the residual $SO(2)$ is automatically preserved by the bulk metric and gauge fields, and the entire equilibrium solution is a scalar of $ SO(2)_{\textrm{res}} $. One can conveniently organize deviations from the equilibrium background solution according to representations of the $SO(2)_{\textrm{res}} $. Starting with the ansatz \eqref{eq:ansatz} for the non-equilibrium solution, and employing the definitions given in \eqref{eq:fluctdef}, we can split the fluctuations as \begin{align} &\delta F_z, \delta\Sigma, \delta a_v && \textrm{scalar},\nonumber \\ &\delta F_x, \delta a_x, \delta \Phi & &\textrm{vector},\nonumber \\ &\delta B & &\textrm{symmetric traceless tensor}.\nonumber \end{align} As we described, at sufficiently late times the spacetime is close to thermal equilibrium and one can to a good approximation expand the equations of motion in perturbations around the equilibrium spacetime. At the linearized level the three types of perturbations completely decouple from each other. This can be immediately understood by thinking about the equations of motion in terms of the $SO(2)_{\textrm{res}} $ symmetry. In general, the linearized equations for the fluctuations can be schematically written as \begin{align} \label{eq:lin_eqs_schem} \mathcal{L}^{(1)}_{SS} \delta S + \mathcal{L}^{(1)}_{SV} \delta V +\mathcal{L}^{(1)}_{ST} \delta T = 0 , \nonumber \\ \mathcal{L}^{(1)}_ {VS}\delta S+ \mathcal{L}^{(1)}_{VV} \delta V +\mathcal{L}^{(1)}_{VT} \delta T = 0, \\ \mathcal{L}^{(1)}_ {TS}\delta S+ \mathcal{L}^{(1)}_{TV} \delta V +\mathcal{L}^{(1)}_{TT} \delta T = 0 . \nonumber \end{align} Here $\delta S , \delta V,\delta T$ collectively indicate fluctuations belonging to the scalar, vector and traceless tensor sector respectively. $\mathcal{L}^{(1)}_{XY}$ indicates an operator constructed from the equilibrium solution and derivative operators, which, acting on linearized fluctuations, establishes a map from the sector $Y$ to the sector $X$.
Since we only consider spatially homogeneous background fields and fluctuations, it follows that the operators $\mathcal{L}$ effectively do not contain any derivative operator in the boundary spatial coordinates -- or rather these have a trivial effect. This, together with the fact that the equilibrium solution completely belongs to the singlet representation of $SO(2)_{\textrm{res}} $, implies that the operators $\mathcal{L}^{(1)}_{XY}$ are diagonal in $XY$. Thus the three sectors decouple completely \begin{align} \mathcal{L}^{(1)}_{SS} \delta S = 0 , \qquad \mathcal{L}^{(1)}_{VV} \delta V = 0, \qquad \mathcal{L}^{(1)}_{TT} \delta T = 0 \, , \end{align} as we explicitly discussed in Sec.~\ref{sec:0pulsefreq}. At the linearized level the only non-vanishing perturbations we considered were the vector fluctuations, which decayed towards equilibrium with a rate $q = e^{-i\omega_* t}$ set by the lowest vector quasinormal mode $\omega_*$. Working at the quadratic level in the fluctuations, the equations of motion can now be written according to the structure given by $SO(2)_{\textrm{res}} $ in the form \begin{align} \label{eq:2nd_eqs_schem} \mathcal{L}^{(2)}_{SS} \delta\d S + \mathcal{L}^{(2)}_{SV} \delta\d V +\mathcal{L}^{(2)}_{ST} \delta\d T= \mathcal{J}_{S} , \nonumber \\ \mathcal{L}^{(2)}_ {VS}\delta\d S + \mathcal{L}^{(2)}_{VV} \delta\d V +\mathcal{L}^{(2)}_{VT} \delta\d T = \mathcal{J}_{V}, \\ \mathcal{L}^{(2)}_ {TS}\delta\d S+ \mathcal{L}^{(2)}_{TV} \delta\d V +\mathcal{L}^{(2)}_{TT} \delta\d T = \mathcal{J}_{T}. \nonumber \end{align} Now $\delta\d X $ indicates quadratic fluctuations and, in a similar way as above, $\mathcal{L}^{(2)}_{XY}$ indicates an operator constructed from the equilibrium solution and derivative operators. The sources $\mathcal{J}_{X}$ collect terms that are quadratic in the linearized fluctuations $\delta S , \delta V, \delta T$.
Using the same symmetry argument as above, together with the fact that we only have vectorial perturbations at the linear order, we can conclude that the equations for the quadratic fluctuations relevant to our case take the form \begin{align} \mathcal{L}^{(2)}_{SS} \delta\d S = \mathcal{J}_{S} , \qquad \mathcal{L}^{(2)}_{VV} \delta\d V = 0 , \qquad \mathcal{L}^{(2)}_{TT} \delta\d T = \mathcal{J}_{T} \, . \end{align} Again, this consistently reproduces what we observed in Sec.~\ref{sec:0pulsefreq}. In particular, as the leading excitations of the scalar and tensor sector are sourced by expressions that are quadratic in $\delta V \propto q$, we have that $\delta\d S, \delta\d T \propto \delta V^2 \propto q^2$. All in all, this analysis teaches us that the non-equilibrium solution written in a perturbative expansion has the schematic structure \begin{equation} \label{eq:backstruct} \textrm{Background} \approx S + \delta V + \delta\d S+ \delta\d T +O(q^3) \, , \end{equation} with $S \propto q^0$ indicating the equilibrium, scalar sector, solution. Given this, we can proceed to analyze the linearized perturbations that compute the optical conductivity in the background \eqref{eq:backstruct}. We indicate these new sets of linearized fluctuations collectively as $\tilde \delta s , \tilde \delta v, \tilde \delta t$ to distinguish them from the background ones. Similarly, we use $\tilde{ \mathcal{L}}$ to indicate the operators that act on these fluctuations in the equations of motion, which take a form analogous to \eqref{eq:lin_eqs_schem}. We are interested in how much these fluctuations deviate from the form they would take around the background equilibrium, $O(q^0)$, solution. For this we organize our computation in an expansion in $q$.
Making this perturbative structure explicit, up to and including second order in $q$ we have \begin{align} &\tilde{ \mathcal{L}}_{ss} \approx \tilde{\mathcal{L}}_{ss, 0} + \tilde{\mathcal{L}}^{S}_{ss,2}\delta\d S , \quad &\tilde{ \mathcal{L}}_{sv} &\approx \tilde{\mathcal{L}}^{V}_{sv,1}\delta V \, , \quad &\tilde{ \mathcal{L}}_{st} &\approx \tilde{\mathcal{L}}^{T}_{st,2}\delta\d T \, ,\nonumber \\ &\tilde{ \mathcal{L}}_{vv} \approx \tilde{\mathcal{L}}_{vv, 0} + \tilde{\mathcal{L}}^{S}_{vv,2}\delta\d S + \tilde{\mathcal{L}}^{T}_{vv,2}\delta\d T \, , & \tilde{ \mathcal{L}}_{vs} &\approx \tilde{\mathcal{L}}^{V}_{vs,1}\delta V \, , & \tilde{ \mathcal{L}}_{vt} &\approx \tilde{\mathcal{L}}^{V}_{vt,1}\delta V \, , \\ &\tilde{ \mathcal{L}}_{tt} \approx \tilde{\mathcal{L}}_{tt, 0} + \tilde{\mathcal{L}}^{S}_{tt,2}\delta\d S \, , & \tilde{ \mathcal{L}}_{ts}& \approx \tilde{\mathcal{L}}^{T}_{ts,2}\delta\d T \, , &\tilde{ \mathcal{L}}_{tv} &\approx \tilde{\mathcal{L}}^{V}_{tv,1}\delta V \, . \nonumber \end{align} Similarly, we write the fluctuations as $\tilde \delta s = \tilde\delta s_0 + \tilde \delta s_1 + \dots $, with $ \tilde \delta s_i$ being order $q^i$, and analogously for the other sectors. Solving the resulting equations order by order in $q$, the zeroth order problem corresponds to the study of linearized fluctuations around the thermal equilibrium value. All three sectors remain decoupled and to compute the optical conductivity one only excites the vector sector, so only $\tilde\delta v_0$ is non-vanishing. At the next order in $q$ we have \begin{equation} \tilde{ \mathcal{L}}_{XY,0} \tilde \delta Y_1 + \tilde{ \mathcal{L}}_{XY,1} \tilde \delta Y_0 = 0 , \end{equation} and using the fact that $\tilde\delta s_0 =\tilde\delta t_0 =0$ and the explicit form of $\tilde{ \mathcal{L}}_{XY,1}$ that can be read off from above, it is easy to see that the equation for $\tilde \delta v_1$ reduces to \begin{equation} \tilde{ \mathcal{L}}_{vv,0} \tilde \delta v_1 = 0 \, .
\end{equation} Thus, there is no order $q$ contribution to the optical conductivity. At the next order instead \begin{equation} \tilde{ \mathcal{L}}_{XY,0} \tilde \delta Y_2 + \tilde{ \mathcal{L}}_{XY,1} \tilde \delta Y_1 +\tilde{ \mathcal{L}}_{XY,2} \tilde \delta Y_0 = 0 , \end{equation} and generically all the sectors get sourced by the lower order solutions. Concentrating on $\tilde \delta v_2$, which is the relevant sector for the optical conductivity, \begin{align} \tilde{ \mathcal{L}}_{vv,0} \tilde \delta v_2 &= - \tilde{ \mathcal{L}}_{vs,1} \tilde \delta s_1 - \tilde{ \mathcal{L}}_{vt,1} \tilde \delta t_1 - \tilde{ \mathcal{L}}_{vv,2} \tilde \delta v_0 \nonumber \\ &= - \tilde{\mathcal{L}}^{V}_{vs,1}\delta V \tilde \delta s_1 - \tilde{\mathcal{L}}^{V}_{vt,1}\delta V \tilde \delta t_1 - (\tilde{\mathcal{L}}^{S}_{vv,2}\delta\d S + \tilde{\mathcal{L}}^{T}_{vv,2}\delta\d T) \tilde \delta v_0 \,. \end{align} We therefore conclude that for the vector fluctuations the leading deviation from their thermal value is of order $q^2$. That is, we have shown that the optical conductivity thermalizes as $e^{-2i\omega_* t}$. \section{Discussion} \label{sec:conjecture} In this paper we have analyzed the pattern of conductivity thermalization in a minimal holographic setting that includes finite charge density and a mechanism of weak momentum dissipation. We have shown that, when quenched by a laser pulse with a significant DC component, the equilibration time of the conductivity is given by half the relaxation time associated to the lowest-lying purely imaginary bulk quasinormal mode: $\tau = - 1/ (2{\rm Im ~} \omega_*)$. The appearance of the QNM governing momentum relaxation can be understood from the fact that the DC component of the electric field sets the charges of a finite-density system in motion, thus inducing finite momentum densities in the system. We have also provided a symmetry argument for the factor of two.
If the mean frequency of the wave packet is large, the pulse lacks resonance with the corresponding QNM, and the thermalization is effectively instantaneous. The latter result is surprising from several points of view and gives rise to a number of questions. The instantaneous thermalization of the conductivity is surely not intuitive from the boundary perspective, but is somewhat natural from the bulk point of view once we are given a background dynamics of the Vaidya form. However, when going to finite density, a priori we would not have expected the bulk dynamics to remain (even approximately) of the instantaneously thermalizing Vaidya form as in the zero density case \cite{Horowitz:2013mia,Bardoux:2012aw}. It is then interesting to ask to what extent this result is generic and, especially, whether it still holds in non-relativistic systems. A natural option would be to consider Einstein-Maxwell-Dilaton holographic models that explicitly exhibit Lifshitz scaling and hyperscaling violation, which have also been generalized to include momentum dissipation \cite{Cremonini:2016avj}. However, a closer look reveals that they do not allow for a dynamical electric field, as this would generically result in a modification of the dilaton profile and affect the scaling exponents. On the other hand, models that are Lorentz-invariant in the UV and where the relevant scaling properties only emerge in the IR (see, e.g., \cite{Davison:2013txa,Gouteraux:2014hca,Andrade:2016tbr}) seem to be less constraining in this sense, and could represent an interesting context in which to explore this question further. From a broader perspective, our observation poses a question about the underlying principles governing such ultrafast equilibration.
One might ask to what extent this surprising prediction depends on the precise details of the UV description of the field theory, and more specifically what is the role played by the large $N$ limit, on which our classical gravity computation relies. Recently it has been shown that the causal behavior of certain observables associated with the Vaidya geometry, in the zero density case, can be explicitly recovered from conformal field theory structures in the limit of infinite central charge \cite{Anous:2016kss}. It has also been suggested that $1/N$-corrections deflect the system from this regime \cite{Anous:2016kss}. This might suggest that the Vaidya-like response of the holographic strange metal to the laser pulse is simply an artifact of the regime of classical gravity. The eigenstate thermalization hypothesis provides us with another way to think of this instantaneous equilibration, relating rapid thermalization of local observables to the dense entanglement within the full many-body quantum system \cite{Srednicki,He:2017vyf}. From this perspective, it is possible that the seemingly universal behavior of holographic systems is not an artifact of specific limits, but rather a generic manifestation of the entangled nature of those states that have holographic duals. In the same way that the holographic predictions have been shown to enjoy some UV independence through the minimal viscosity \cite{Kovtun:2004de} and the early onset of hydrodynamic behavior \cite{Chesler:2008hg} that are reflected in heavy-ion collision experiments \cite{Schafer:2009dj}, it would be interesting to look for signatures of this instantaneous thermalization in condensed matter systems. This is the perspective we highlighted in the companion paper \cite{Bagrov:2017tqn}. \section*{Acknowledgements} We thank T.~Andrade, C.~Ecker, A.~Ficnar, B.~Gouteraux, M.P.~Heller, D.H.~Lee, L.~Rademaker, A.O.~Starinets, S.A.~Stricker and D.~Thompson for helpful discussions.
This research has been supported in part by BELSPO (IAP P7/37), FWO-Vlaanderen (projects G020714N, G044016N and G006918N), Academy of Finland (grant no 1297472), and the National Science Foundation (grant no NSF PHY-1125915). Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation.
\section{Introduction} Since their introduction in the seventies, supercompact cardinals have played a central role in set theory. They have been a fundamental assumption in obtaining many of the most interesting breakthroughs: Solovay's original proof that the singular cardinal hypothesis \ensuremath{\text{{\sf SCH}}}\ holds eventually above a large cardinal, Silver's first proof of $\Con(\neg\ensuremath{\text{{\sf SCH}}})$, Baumgartner's proof of the consistency of the proper forcing axiom \ensuremath{\text{{\sf PFA}}}~\cite{devlin} and Foreman, Magidor, and Shelah's proof of the consistency of Martin's maximum \ensuremath{\text{{\sf MM}}}~\cite{foreman_magidor_shelah} all relied on the assumption of the existence of a supercompact cardinal. While some of these results have been shown to have considerably weaker consistency strength, the exact large cardinal strength of the forcing axioms \ensuremath{\text{{\sf PFA}}}\ and \ensuremath{\text{{\sf MM}}}\ is one of the major open problems in set theory. This is what we want to address in this paper. Forcing axioms play an important role in contemporary set theory. Historically they evolved from Martin's axiom, which was commonly used as the axiomatic counterpart to ``$V = L$.'' The most prominent forcing axioms today are \ensuremath{\text{{\sf PFA}}}\ as well as the stronger \ensuremath{\text{{\sf MM}}}. Not only do they serve as a natural extension of \ensuremath{\text{{\sf ZFC}}}, they also answer a plethora of questions undecidable in \ensuremath{\text{{\sf ZFC}}}\ alone, from elementary questions like the size of the continuum to combinatorially complicated ones like the basis problem for uncountable linear orders~\cite{moore.basis}. Even problems originating from other fields of mathematics and apparently unrelated to set theory have been settled by appealing to \ensuremath{\text{{\sf PFA}}}.
For example, Farah~\cite{farah} recently proved the nonexistence of outer automorphisms of the Calkin algebra assuming \ensuremath{\text{{\sf PFA}}}. The consistency proofs of \ensuremath{\text{{\sf PFA}}}\ and \ensuremath{\text{{\sf MM}}}\ both start in a set theoretic universe in which there is a supercompact cardinal $\kappa$. They then collapse $\kappa$ to $\omega_2$ in such a way that in the resulting model \ensuremath{\text{{\sf PFA}}}\ or \ensuremath{\text{{\sf MM}}}\ holds, thus showing the consistency strength of these axioms is at most that of the existence of a supercompact cardinal. An early result on \ensuremath{\text{{\sf PFA}}}\ by Baumgartner~\cite{baumgartner.PFA} was that \ensuremath{\text{{\sf PFA}}}\ implies the tree property on $\omega_2$, that is, \ensuremath{\text{{\sf PFA}}}\ implies there are no $\omega_2$-Aronszajn trees. As a cardinal $\kappa$ is weakly compact if and only if it is inaccessible and the tree property holds on $\kappa$, this can be seen as \ensuremath{\text{{\sf PFA}}}\ showing the ``weak compactness'' of $\omega_2$, apart from its missing inaccessibility. This is an affirmation of the idea that collapsing a large cardinal to $\omega_2$ is necessary to produce a model of \ensuremath{\text{{\sf PFA}}}, and it actually implies the consistency strength of \ensuremath{\text{{\sf PFA}}}\ is at least the existence of a weakly compact cardinal, for if the tree property holds on $\omega_2$, then $\omega_2$ is weakly compact in $L$ by~\cite{mitchell}. This was the first insight that showed \ensuremath{\text{{\sf PFA}}}\ possesses large cardinal strength, and many heuristic results indicate that supercompactness actually is the correct consistency strength of \ensuremath{\text{{\sf PFA}}}\ and thus in particular also of \ensuremath{\text{{\sf MM}}}. Still, giving lower bounds for the consistency strength of \ensuremath{\text{{\sf PFA}}}\ or \ensuremath{\text{{\sf MM}}}\ remains a major open problem today.
While inner model theoretic methods were refined and enhanced tremendously over the last three decades, the best lower bounds they can establish today are still far below supercompactness~\cite{jensen.schimmerling.schindler.steel}. In~\cite{weiss} the second author introduced combinatorial principles which do for strong compactness and supercompactness what the tree property does for weak compactness: A cardinal $\kappa$ is strongly compact (supercompact) if and only if $\kappa$ is inaccessible and $\ensuremath{\text{{\sf TP}}}(\kappa)$ or, equivalently, $\ensuremath{\text{{\sf SP}}}(\kappa)$ ($\ensuremath{\text{{\sf ITP}}}(\kappa)$ or, equivalently, $\ensuremath{\text{{\sf ISP}}}(\kappa)$) holds. We will show \ensuremath{\text{{\sf PFA}}}\ implies $\ensuremath{\text{{\sf ISP}}}(\omega_2)$, the strongest of the four principles. This, in the line of thought from above, says \ensuremath{\text{{\sf PFA}}}\ shows $\omega_2$ is, modulo inaccessibility, ``supercompact.'' Apart from the strong heuristic evidence this gives, by using arguments for pulling back these principles from generic extensions these characterizations actually allow us to show the following theorems: If one forces a model of \ensuremath{\text{{\sf PFA}}}\ using a forcing that collapses a large cardinal $\kappa$ to $\omega_2$ and satisfies the $\kappa$-covering and $\kappa$-approximation properties,\footnote{See Definition~\ref{def.covering_approximation}.} then $\kappa$ has to be strongly compact; if the forcing is also proper, then $\kappa$ is supercompact. We will show that all known forcings for producing models of \ensuremath{\text{{\sf PFA}}}\ by collapsing an inaccessible cardinal $\kappa$ to $\omega_2$ satisfy these properties. Results of this kind have first been obtained by Neeman~\cite{Nee08}. 
He showed that if one starts with a ground model that satisfies certain fine structural properties and forces \ensuremath{\text{{\sf PFA}}}\ by means of a proper forcing, then $\omega_2$ of the generic extension has to be a cardinal $\kappa$ which is close to being $\kappa^+$-supercompact in the ground model. (More precisely, in the ground model $[\kappa,\kappa^+]$ is a $\Sigma^2_1$-indescribable gap.) Our results, which approach the issue from a different perspective, are substantially stronger in that they reach full supercompactness. \subsection*{Notation} The notation used is mostly standard. For a regular cardinal $\delta$, $\cof \delta$ denotes the class of all ordinals of cofinality $\delta$. The phrases \emph{for large enough $\theta$} and \emph{for sufficiently large $\theta$} will be used for saying that there exists a $\theta'$ such that the sentence's proposition holds for all $\theta \geq \theta'$. For an ordinal $\kappa$ and a set $X$ we let $P_\kappa X \coloneqq \{ x \subset X\ |\ |x| < \kappa \}$ and, if $\kappa \subset X$, \begin{equation*} P_\kappa' X \coloneqq \{ x \in P_\kappa X\ |\ \kappa \cap x \in \ensuremath{\text{{\rm Ord}}},\ \langle x, \in \rangle \prec \langle X, \in \rangle \}. \end{equation*} For $x \in P_\kappa X$ we set $\kappa_x \coloneqq \kappa \cap x$. For $f: P_\omega X \to P_\kappa X$ let $\ensuremath{\text{{\rm Cl}}}_f \coloneqq \{ x \in P_\kappa X\ |\ \forall z \in P_\omega x\ f(z) \subset x \}$. $\ensuremath{\text{{\rm Cl}}}_f$ is club, and it is well known that for any club $C \subset P_\kappa X$ there is an $f: P_\omega X \to P_\kappa X$ such that $\ensuremath{\text{{\rm Cl}}}_f \subset C$. For sections~\ref{sect.principles} and~\ref{sect.guessing}, $\kappa$ and $\lambda$ are assumed to be cardinals, $\kappa \leq \lambda$, and $\kappa$ is regular and uncountable. 
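To illustrate the notation above, here is a sketch of the standard argument that $\ensuremath{\text{{\rm Cl}}}_f$ is club for any $f: P_\omega X \to P_\kappa X$: it is clearly closed under unions of $\subset$-increasing chains whose union lies in $P_\kappa X$, and it is unbounded, for given $y \in P_\kappa X$ one may recursively set
\begin{equation*}
y_0 \coloneqq y, \qquad y_{n+1} \coloneqq y_n \cup \ensuremath{{\textstyle\bigcup}} \{ f(z)\ |\ z \in P_\omega y_n \},
\end{equation*}
where $|y_{n+1}| < \kappa$ by the regularity of $\kappa$, so that, as $\kappa$ is uncountable, $\ensuremath{{\textstyle\bigcup}}_{n < \omega} y_n$ is an element of $\ensuremath{\text{{\rm Cl}}}_f$ containing $y$.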
\subsection*{Acknowledgments} The authors wish to express their gratitude to David Asper\'{o}, Sean Cox, Dieter Donder, Hiroshi Sakai, Ralf Schindler, and Boban Veli\v{c}kovi\'c for valuable comments and feedback on this research. They are indebted to Menachem Magidor for supplying them with the idea of the proof of Theorem~\ref{theorem.pull_back_ISP}, that is, Claim~\ref{claim.magidor}. They furthermore want to thank Mauro Di Nasso for an invitation to discuss this material at a one week workshop in Pisa. \section{The principles {\sffamily TP}, {\sffamily SP}, {\sffamily ITP}, and {\sffamily ISP}}\label{sect.principles} We recall the necessary definitions from~\cite{weiss}. Let us call a sequence $\langle d_a\ |\ a \in P_\kappa \lambda \rangle$ a \emph{$P_\kappa \lambda$-list} if $d_a \subset a$ for all $a \in P_\kappa \lambda$. \begin{definition}\label{def.P_kappa_lambda.thin} Let $D = \langle d_a\ |\ a \in P_\kappa \lambda \rangle$ be a $P_\kappa \lambda$-list. \begin{itemize} \item $D$ is called \emph{thin} if there is a club $C \subset P_\kappa \lambda$ such that $| \{ d_a \cap c\ |\ c \subset a \in P_\kappa \lambda \} | < \kappa$ for every $c \in C$. \item $D$ is called \emph{slender} if for every sufficiently large $\theta$ there is a club $C \subset P_\kappa H_\theta$ such that $d_{M \cap \lambda} \cap b \in M$ for all $M \in C$ and all $b \in M \cap P_{\omega_1} \lambda$. \end{itemize} \end{definition} Note that if $D$ is a thin list, then $D$ is slender. \begin{definition}\label{def.ineffable_branch} Let $D = \langle d_a\ |\ a \in P_\kappa \lambda \rangle$ be a $P_\kappa \lambda$-list and $d \subset \lambda$. \begin{itemize} \item $d$ is called a \emph{cofinal branch of $D$} if for all $a \in P_\kappa \lambda$ there is $z_a \in P_\kappa \lambda$ such that $a \subset z_a$ and $d \cap a = d_{z_a} \cap a$. 
\item $d$ is called an \emph{ineffable branch of $D$} if there is a stationary set $S \subset P_\kappa \lambda$ such that $d \cap a = d_a$ for all $a \in S$. \end{itemize} \end{definition} \begin{definition} \begin{itemize} \item $\ensuremath{\text{{\sf TP}}}(\kappa, \lambda)$ holds if every thin $P_\kappa \lambda$-list has a cofinal branch. \item $\ensuremath{\text{{\sf SP}}}(\kappa, \lambda)$ holds if every slender $P_\kappa \lambda$-list has a cofinal branch. \item $\ensuremath{\text{{\sf ITP}}}(\kappa, \lambda)$ holds if every thin $P_\kappa \lambda$-list has an ineffable branch. \item $\ensuremath{\text{{\sf ISP}}}(\kappa, \lambda)$ holds if every slender $P_\kappa \lambda$-list has an ineffable branch. \end{itemize} We let $\ensuremath{\text{{\sf TP}}}(\kappa)$ abbreviate the statement that $\ensuremath{\text{{\sf TP}}}(\kappa, \lambda)$ holds for all $\lambda \geq \kappa$, and similarly for the other principles. \end{definition} These definitions admit different ways of defining strong compactness and supercompactness. \begin{theorem}\label{theorem.TP<->stronglycompact} Suppose $\kappa$ is inaccessible. Then $\kappa$ is strongly compact if and only if\/ $\ensuremath{\text{{\sf TP}}}(\kappa)$ holds. \end{theorem} \begin{theorem}\label{theorem.ITP<->supercompact} Suppose $\kappa$ is inaccessible. Then $\kappa$ is supercompact if and only if\/ $\ensuremath{\text{{\sf ITP}}}(\kappa)$ holds. \end{theorem} Unlike other characterizations however, by~\cite{weiss} the principles \ensuremath{\text{{\sf ITP}}}\ and \ensuremath{\text{{\sf ISP}}}\ also make sense for small cardinals. There exist ideals and filters naturally associated to the principles \ensuremath{\text{{\sf ITP}}}\ and \ensuremath{\text{{\sf ISP}}}. \begin{definition} Let $A \subset P_\kappa \lambda$ and let $D = \langle d_a\ |\ a \in P_\kappa \lambda \rangle$ be a $P_\kappa \lambda$-list. 
$D$ is called \emph{$A$-effable} if for every $S \subset A$ that is stationary in $P_\kappa \lambda$ there are $a, b \in S$ such that $a \subset b$ and $d_a \neq d_b \cap a$. $D$ is called \emph{effable} if it is $P_\kappa \lambda$-effable. \end{definition} \begin{definition} We let \begin{align*} I_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda] & \coloneqq \{ A \subset P_\kappa \lambda\ |\ \text{there exists a thin $A$-effable $P_\kappa \lambda$-list} \},\\ I_\ensuremath{\text{{\rm IS}}}[\kappa, \lambda] & \coloneqq \{ A \subset P_\kappa \lambda\ |\ \text{there exists a slender $A$-effable $P_\kappa \lambda$-list} \}. \end{align*} By $F_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$ and $F_\ensuremath{\text{{\rm IS}}}[\kappa, \lambda]$ we denote the filters associated to $I_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$ and $I_\ensuremath{\text{{\rm IS}}}[\kappa, \lambda]$ respectively. \end{definition} The ideals $I_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$ and $I_\ensuremath{\text{{\rm IS}}}[\kappa, \lambda]$ are normal ideals on $P_\kappa \lambda$ by~\cite{weiss}. \section{Guessing models}\label{sect.guessing} We now introduce the concept of a \emph{guessing model} which gives an alternative presentation of the principle \ensuremath{\text{{\sf ISP}}}. \begin{definition} Let $M \prec H_\theta$ for some large enough $\theta$. \begin{itemize} \item A set $d$ is called $M$-\emph{approximated} if $d \cap b \in M$ for all $b \in M \cap P_{\omega_1} M$. \item A set $d$ is called $M$-\emph{guessed} if there is an $e \in M$ such that $d \cap M = e \cap M$. \end{itemize} $M$ is called \emph{$z$-guessing} if every $M$-approximated $d \subset z$ is $M$-guessed. $M$ is called \emph{guessing} if for all $z \in M$, $M$ is $z$-guessing. \end{definition} Note that since for every $z \in M$ there is a bijection $f: z \to \rho$ in $M$ for some ordinal $\rho$, it holds that $M$ is guessing if and only if $M$ is $\rho$-guessing for all $\rho \in M$. 
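A sketch of the nontrivial direction of this reduction: suppose $M$ is $\rho$-guessing, $f: z \to \rho$ is a bijection in $M$, and $d \subset z$ is $M$-approximated. Then $f[d \cap M]$ is an $M$-approximated subset of $\rho$, since for every $b \in M \cap P_{\omega_1} M$ we have $b' \coloneqq f^{-1}[b \cap \rho] \in M \cap P_{\omega_1} M$ and
\begin{equation*}
f[d \cap M] \cap b = f[d \cap b'] \in M.
\end{equation*}
If now $e' \in M$ is such that $e' \cap M = f[d \cap M]$, then $e \coloneqq f^{-1}[e'] \in M$ satisfies $e \cap M = d \cap M$, so $d$ is $M$-guessed.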
Also note that since $M$ cannot be $\sup (M \cap \ensuremath{\text{{\rm Ord}}})$-guessing (the set $M \cap \ensuremath{\text{{\rm Ord}}}$ is $M$-approximated but clearly not $M$-guessed), any ordinal $\rho$ such that $M$ is $\rho$-guessing has to be bounded by $\sup(M\cap \ensuremath{\text{{\rm Ord}}})$. Define \begin{align*} \mathcal{G}^z_\kappa X & \coloneqq \{ M \in P'_\kappa X\ |\ \text{$M$ is $z$-guessing} \},\\ \mathcal{G}_\kappa X & \coloneqq \{ M \in P'_\kappa X\ |\ \text{$M$ is guessing} \}. \end{align*} \begin{proposition}\label{prop.guessingmodels} If\/ $\ensuremath{\text{{\sf ISP}}}(\kappa, |H_\theta|)$ holds, then $\mathcal{G}_\kappa H_\theta$ is stationary. \end{proposition} \begin{proof} By working with a bijection $f: |H_\theta| \to H_\theta$, it is obvious that we can apply $\ensuremath{\text{{\sf ISP}}}(\kappa, |H_\theta|)$ to the set $P_\kappa H_\theta$ directly. Suppose to the contrary that there is a club $C \subset P'_\kappa H_\theta$ such that every $M \in C$ is not guessing, that is, there are $z_M \in M$ and $d_M \subset z_M$ such that $d_M$ is $M$-approximated but not $M$-guessed. Then also $d_M \cap M$ is $M$-approximated but not $M$-guessed, so we may assume $d_M \subset M$. Consider the list $D \coloneqq \langle d_M\ |\ M \in C \rangle$. Then $D$ is slender, for let $\theta'$ be large enough and let $C' \coloneqq \{ M' \in P_\kappa H_{\theta'}\ |\ M' \cap H_\theta \in C \}$. $C'$ is club in $P_\kappa H_{\theta'}$, and if $M' \in C'$ and $b \in P_{\omega_1} H_\theta \cap M'$, then $b \in M' \cap H_\theta$, so $d_{M' \cap H_\theta} \cap b \in M' \cap H_\theta \subset M'$. By $\ensuremath{\text{{\sf ISP}}}(\kappa, |H_\theta|)$, there is an ineffable branch $d$ for the list $D$. Let $S \coloneqq \{ M \in C\ |\ d_M = d \cap M \}$. $S$ is stationary, and we may assume $z_M = z$ for some fixed $z$ and all $M \in S$. This means $d \subset z$. As $Pz \subset H_\theta$, there is an $M \in S$ such that $d \in M$. But then $d_M$ is $M$-guessed, a contradiction.
\end{proof} \begin{proposition}\label{prop.guessing->ISP} Let $\theta$ be sufficiently large and $M \in P'_\kappa H_\theta$ be a $\lambda$-guessing model such that $\lambda^+ \in M$. Then $\ensuremath{\text{{\sf ISP}}}(\kappa, \lambda)$ holds. \end{proposition} \begin{proof} Since $M\prec H_\theta$ it is enough to show that $M \models \ensuremath{\text{{\sf ISP}}}(\kappa, \lambda)$. So pick a slender list $D = \langle d_a\ |\ a \in P_\kappa \lambda \rangle \in M$. Notice that the slenderness of $D$ is witnessed by a club $C' \subset P_\kappa H_{\lambda^+}$ which is in $M$. Then $M \cap H_{\lambda^+} \in C'$, so $d_{M \cap \lambda} \cap b \in M$ for all $b \in M \cap P_{\omega_1} \lambda$. This means $d_{M \cap \lambda}$ is an $M$-approximated subset of $M$. So since $M$ is a $\lambda$-guessing model, there is an $e \in M$ such that $e \cap M = d_{M \cap \lambda}$. Let $S \coloneqq \{ a \in P_\kappa \lambda\ |\ d_a = e \cap a \}$. Then $S \in M$. To see $S$ is stationary, let $C \in M$ be a club in $P_\kappa \lambda$. Then $M \cap \lambda \in C \cap S$, so $H_\theta \models C \cap S \neq \emptyset$, so it also holds in $M$. \end{proof} Notice that we cannot literally say that $F_\ensuremath{\text{{\rm IS}}}[\kappa, H_\theta]$ is the club filter restricted to $\mathcal{G}_\kappa H_\theta$: There might be a slender list $\langle d_M\ |\ M\in S\rangle$ indexed by some stationary set $S \subset \mathcal{G}_\kappa H_\theta$ that does not have an ineffable branch. For such a list we necessarily have that $d_M \not\subset z$ for all $z \in M$ and all $M \in S$. Still the following holds. \begin{proposition} $I_\ensuremath{\text{{\rm IS}}}[\kappa, X]$ is contained in the projection of the nonstationary ideal restricted to $\mathcal{G}_\kappa^X H_\theta$ onto $X$ for any regular $\theta$ such that $X \in H_\theta$. 
\end{proposition} \begin{proof} Assume to the contrary that there is an $S \in I_\ensuremath{\text{{\rm IS}}}[\kappa, X]$ such that $S^* \coloneqq \{ M \in \mathcal{G}_\kappa^X H_\theta\ |\ M\cap X\in S\}$ is stationary. Pick a slender list $D = \langle d_a\ |\ a\in S\rangle$ witnessing that $S \in I_\ensuremath{\text{{\rm IS}}}[\kappa, X]$. Let $C$ be a club subset of $P_\kappa H_\theta$ witnessing that $D$ is slender. Pick $M \in S^*\cap C$ such that $D\in M$. Then $d_{M\cap X}$ is an $M$-approximated subset of $X$ as $M\in C$. Thus $d_{M\cap X} = e \cap M$ for some $e \in M$ since $M$ is $X$-guessing. As in the proof of Proposition~\ref{prop.guessing->ISP} it follows that $e$ is an ineffable branch for $D$, contradicting the fact that $D$ witnesses $S \in I_\ensuremath{\text{{\rm IS}}}[\kappa, X]$. \end{proof} \section{Implications under {\sffamily PFA}}\label{sect.PFA} In this section, we are going to show \ensuremath{\text{{\sf PFA}}}\ implies $\ensuremath{\text{{\sf ISP}}}(\omega_2)$. The following lemma is due to Woodin~\cite[Proof of Theorem~2.53]{woodin}. Recall that $G \subset \mathbb{P}$ is said to be \emph{$M$-generic} if $G$ is a filter on $\mathbb{P}$ and $G \cap D \cap M \neq \emptyset$ for all $D \in M$ that are dense in $\mathbb{P}$. \begin{lemma}\label{lemma.PFA_stationarily_often} Let $\mathbb{P}$ be a proper forcing, and let $\theta$ be sufficiently large. Then \ensuremath{\text{{\sf PFA}}}\ implies \begin{equation*} \{ M \in P_{\omega_2} H_\theta\ |\ \exists G \subset \mathbb{P}\ \text{$G$ is $M$-generic} \} \end{equation*} is stationary in $P_{\omega_2} H_\theta$. \end{lemma} \begin{definition} Let $T$ be a tree and $B$ be a set of cofinal branches of $T$. A function $g: B \to T$ is called a \emph{Baumgartner function} if $g$ is injective and for all $b, b' \in B$ it holds that \begin{enumerate} \item $g(b) \in b$, \item $g(b) < g(b') \rightarrow g(b') \notin b$.
\end{enumerate} \end{definition} The following lemma is due to Baumgartner, see~\cite{baumgartner.PFA}. \begin{lemma}\label{lemma.baumgartner} Let $T$ be a tree and $B$ be a set of cofinal branches of $T$. Suppose $\kappa \coloneqq \height(T)$ is regular and $|B| \leq \kappa$. Then there is a Baumgartner function $g: B \to T$. \end{lemma} \begin{proof} Let $\langle b_\alpha:\alpha < \mu \rangle$ enumerate $B$, with $\mu \leq \kappa$. Recursively define $g$ by $g(b_\alpha) \coloneqq \min ( b_\alpha - \ensuremath{{\textstyle\bigcup}} \{ b_\beta:\beta < \alpha \} )$. This can be done since $\kappa$ is regular. Suppose $g(b_\alpha) < g(b_{\alpha'})$ for some $\alpha, \alpha' < \mu$. Then $g(b_{\alpha'}) \in b_{\alpha'}$, so $g(b_\alpha) \in b_{\alpha'}$, so $\alpha < \alpha'$ and thus $g(b_{\alpha'}) \notin b_\alpha$. \end{proof} Recall that a tree $T$ is said to \emph{not split at limit levels} if for all $t, t' \in T$ such that $\height t = \height t'$ is a limit ordinal and $\{ s \in T:s < t \} = \{ s \in T:s < t' \}$ it follows that $t = t'$. \begin{lemma}\label{lemma.like_regressive} Let $T$ be a tree that does not split at limit levels and suppose $B$ is a set of cofinal branches of $T$. Suppose $g: B \to T$ is a Baumgartner function. Suppose $\langle \alpha_\nu:\nu < \omega_1 \rangle$ is continuous and increasing. Let $\alpha \coloneqq \sup_{\nu < \omega_1} \alpha_\nu$ and $t \in T_\alpha$. Suppose that for all $\nu < \omega_1$ there is $b_\nu \in B$ such that $g(b_\nu) < t \restriction \alpha_\nu \in b_\nu$. Then there is a stationary $S \subset \omega_1$ such that $b_\nu = b_{\nu'}$ for all $\nu, \nu' \in S$. In particular there is an $s < t$ such that $t \in g^{-1}(s)$. \end{lemma} \begin{proof} For $\nu < \omega_1$ let $r(\nu) \coloneqq \min \{ \rho < \nu\ |\ \height g(b_\nu) < \alpha_\rho \}$. Then $r$ is regressive and thus constant on a stationary set $S \subset \omega_1$.
As $g$ is a Baumgartner function, this implies $g$ is constant on the set $\{ b_\nu\ |\ \nu \in S \}$. But $g$ is injective, so $b_\nu = b_{\nu'}$ for $\nu, \nu' \in S$. \end{proof} \begin{definition}\label{def.covering_approximation} Let $V\subset W$ be a pair of transitive models of\/ \ensuremath{\text{{\sf ZFC}}}. \begin{itemize} \item $(V,W)$ satisfies the $\mu$-covering property if the class $P_\mu^V V$ is cofinal in $P_\mu^W V$, that is, for every $x \in W$ with $x \subset V$ and $|x| < \mu$ there is $z \in P_\mu^V V$ such that $x \subset z$. \item $(V,W)$ satisfies the $\mu$-approximation property if for all $x \in W$, $x \subset V$, it holds that if $x \cap z \in V$ for all $z \in P_\mu^V V$, then $x \in V$. \end{itemize} A forcing $\mathbb{P}$ is said to satisfy the $\mu$-covering property or the $\mu$-approximation property if for every $V$-generic $G \subset \mathbb{P}$ the pair $(V, V[G])$ satisfies the $\mu$-covering property or the $\mu$-approximation property respectively. \end{definition} These properties have been introduced and extensively studied by Hamkins, see for example~\cite{hamkins}. The following lemma is the essential argument in the proof of Theorem~\ref{theorem.PFA->ISP}. Extracting it has the advantage that it can be applied to a wider class of different forcings, so that it can yield more information about the nature of the guessing models and $I_\ensuremath{\text{{\rm IS}}}[\omega_2, \lambda]$. \begin{lemma}\label{lemma.PFA->ISP} Let $\theta$ be sufficiently large. Assume $\mathbb{P}$ satisfies the $\omega_1$-covering and the $\omega_1$-approximation properties and collapses $2^\lambda$ to $\omega_1$. 
Then in $V^{\mathbb{P}}$ there is a ccc forcing $\dot{\mathbb{Q}}$ and some $w \in H_\theta$ such that \begin{equation*} \{ M \in P'_{\omega_2} H_\theta\ |\ w \in M,\ \exists G \subset \mathbb{P} * \dot{\mathbb{Q}}\ \text{$G$ is $M$-generic} \} \subset \mathcal{G}_\kappa^\lambda H_\theta, \end{equation*} and every such $M$ is internally unbounded, that is, $M \cap P_{\omega_1} M$ is cofinal in $P_{\omega_1} M$. \end{lemma} \begin{proof} Let $B \coloneqq \vphantom{2}^\lambda 2$. Work in $V^{\mathbb{P}}$. Let $\dot{c}: \omega_1 \to P_{\omega_1} \lambda$ be continuous and cofinal. As $\mathbb{P}$ satisfies the $\omega_1$-covering property, we may assume that $\dot{c}(\alpha + 1) \in V$ for all $\alpha < \omega_1$. Define \begin{equation*} \dot{T} \coloneqq \{ h \restriction \dot{c}(\alpha)\ |\ h \in B,\ \alpha < \omega_1 \} \end{equation*} As $\mathbb{P}$ satisfies the $\omega_1$-approximation property, we have that $B$ is the set of cofinal branches through $\dot{T}$. Since $|B|= \omega_1$, we can apply Lemma~\ref{lemma.baumgartner} and get a Baumgartner function $\dot{g}: B \to \dot{T}$. Let $\dot{l}: \omega_1 \to B$ be a bijection. Let \begin{align*} \dot{T}^0 & \coloneqq \{ t \in \dot{T}:\exists b \in B\ \dot{g}(b) < t \in b \},\\ \dot{T}^1 & \coloneqq \dot{T} - \dot{T}^0. \end{align*} Note that $\dot{T}^1$ does not have cofinal branches. Thus there is a ccc forcing $\dot{\mathbb{Q}}$ that specializes $\dot{T}^1$ with a specialization map $\dot{f}$. Now work in $V$. Let $w \in H_\theta$ contain all the relevant information, and let $M \in P'_{\omega_2} H_\theta$ be such that $w \in M$ and there is an $M$-generic $G_0 * G_1 \subset \mathbb{P} * \dot{\mathbb{Q}}$. By the usual density arguments, $c \coloneqq \dot{c}^{G_0}: \omega_1 \to P_{\omega_1} (M \cap \lambda)$ is continuous and cofinal and $c(\alpha + 1) \in M$ for all $\alpha < \omega_1$. Therefore $M$ is internally unbounded. 
We let $g \coloneqq \dot{g}^{G_0}$, $T \coloneqq \dot{T}^{G_0}$, $T^0 \coloneqq (\dot{T}^0)^{G_0}$, $T^1 \coloneqq (\dot{T}^1)^{G_0}$, $l \coloneqq \dot{l}^{G_0}$, and $f \coloneqq \dot{f}^{G_0 * G_1}$. Define $B \restriction M \coloneqq \{ h \restriction M\ |\ h \in B \cap M\}$. Then we can use the facts that $G_0 * G_1$ is an $M$-generic filter and that $V^{\mathbb{P}} \models \rng \dot{l} = B$ to argue that \begin{itemize} \item $l: \omega_1 \to B \cap M$ is bijective, \item $T = \ensuremath{{\textstyle\bigcup}} \{h \restriction c(\alpha)\ |\ h \in B \cap M ,\ \alpha < \omega_1 \}$, \item $g: B \restriction M \to T$ is a Baumgartner function,\footnote{Here we naturally identify $\dom g = B \cap M$ with $B \restriction M$, which is a set of uncountable branches of $T$.} \item $T = T^0 \cup T^1$, \item $f: T^1 \to \omega$ is a specialization map. \end{itemize} \begin{claim}\label{claim.keyclaim1} $B \restriction M$ is the set of uncountable branches of $T$. \end{claim} \begin{claimproof} It is clear that $B \restriction M$ is included in the set of uncountable branches of $T$. For the other inclusion, observe that if $h$ is a branch through $T$, then $h$ must be a branch through $T^0$ since the specialization map $f$ witnesses that $T^1$ cannot have uncountable branches. This means that $h \restriction c(\alpha) \in T^0$ for eventually all $\alpha$. So for each such $\alpha$ there is a unique $b_\alpha \in B \restriction M$ such that $g(b_\alpha) \subset h \restriction c(\alpha) \subset b_\alpha$. Thus for eventually all $\alpha < \omega_1$ we have $\dom g(b_\alpha) = c(\beta_\alpha)$ for some $\beta_\alpha < \alpha$, and we may assume that there is a $\beta < \omega_1$ such that $\beta_\alpha = \beta$ for stationarily many $\alpha < \omega_1$. Hence if $\alpha$ is such that $\beta_\alpha = \beta$, then $h = b_\alpha \in B \restriction M$.
\end{claimproof} \begin{claim}\label{claim.keyclaim2} $t \in B \restriction M$ if and only if $t$ is the characteristic function of $d \cap M$ for some $M$-approximated $d \subset \lambda$. \end{claim} \begin{claimproof} If $t \in B \restriction M$, then $t = h \restriction M$ for some $h \in B \cap M$, and $h$ is the characteristic function of some $d \in M \cap P\lambda$. For the other direction pick an $M$-approximated $d \subset \lambda$, and let $t$ be the characteristic function of $d\cap M$. We claim that $t$ is a branch through $T$ and thus in $B \restriction M$ by Claim~\ref{claim.keyclaim1}. To see this observe that $c(\alpha + 1) \in M$ for all $\alpha < \omega_1$, so that $t \restriction c(\alpha + 1)$ is the characteristic function of $d \cap c(\alpha + 1)$, which is in $M$ since $d$ is $M$-approximated. Thus $t \restriction c(\alpha + 1) \in T$. \end{claimproof} To see $M$ is $\lambda$-guessing, let $d \subset \lambda$ be $M$-approximated. Then by Claim~\ref{claim.keyclaim2} the characteristic function $t$ of $d \cap M$ is in $B \restriction M$. So there is $h \in B \cap M$ such that $t = h \restriction M$. Let $e \in M$ be such that $h$ is its characteristic function. Then $e \cap M = d \cap M$, and we are done. \end{proof} To apply Lemma~\ref{lemma.PFA->ISP}, we need an appropriate forcing. The simplest and earliest example comes from~\cite{mitchell}. We let $\mathbb{C}$ denote the forcing for adding a Cohen real. See~\cite{krueger} for a proof of the following theorem. \begin{theorem}\label{theorem.club_through_E} Let $\gamma \geq \omega_1$. Then the forcing $\mathbb{C} * \Coll(\omega_1, \gamma)$ is proper and satisfies the $\omega_1$-approximation property. \end{theorem} \begin{theorem}\label{theorem.PFA->ISP} \ensuremath{\text{{\sf PFA}}}\ implies $\ensuremath{\text{{\sf ISP}}}(\omega_2)$ holds. \end{theorem} \begin{proof} Let $\theta$ be large enough, $\lambda \geq \omega_2$, and $\mathbb{P} \coloneqq \mathbb{C} * \Coll(\omega_1, 2^\lambda)$. 
Then $\mathbb{P}$ is proper and satisfies the $\omega_1$-approximation property by Theorem~\ref{theorem.club_through_E}; being proper, it also satisfies the $\omega_1$-covering property. Thus by Lemmas~\ref{lemma.PFA_stationarily_often} and~\ref{lemma.PFA->ISP} the set $\mathcal{G}_{\omega_2}^\lambda H_\theta$ is stationary in $P_{\omega_2} H_\theta$. Therefore by Proposition~\ref{prop.guessing->ISP} we can conclude that $\ensuremath{\text{{\sf ISP}}}(\omega_2, \lambda)$ holds. \end{proof} Krueger \cite{krueger.IC,krueger.IA} has shown that there is a great variety of forcings $\dot{\mathbb{P}}$ living in $V^{\mathbb{C}}$ such that $\mathbb{C}*\dot{\mathbb{P}}$ has the $\omega_1$-approximation and the $\omega_1$-covering properties. These forcings can be used to show that under \ensuremath{\text{{\sf PFA}}}, there are stationarily many guessing models that are internally club. As guessing models are not internally approachable, this gives another separation of the properties internally club and internally approachable. Under \ensuremath{\text{{\sf MM}}}, one can use these forcings to show there are stationarily many guessing models that are internally unbounded but not internally stationary and also stationarily many that are internally stationary but not internally club, see also~\cite{VIA10}. Strullu~\cite{STR10} has shown that the principle $\ensuremath{\text{{\sf ITP}}}(\omega_2)$ follows from $\ensuremath{\text{{\sf MRP}}} + \ensuremath{\text{{\sf MA}}}$, where \ensuremath{\text{{\sf MRP}}}\ is the mapping reflection principle introduced by Moore~\cite{moore.MRP}. It is furthermore worth noting that unlike $\ensuremath{\text{{\sf ISP}}}(\omega_2)$, the principle $\ensuremath{\text{{\sf ITP}}}(\omega_2)$ can already be proved by applying \ensuremath{\text{{\sf PFA}}}\ to a forcing of the form $\sigma$-closed $*$ ccc, see~\cite{diss}. The next corollary is originally independently due to Foreman and Todor\v{c}evi\'c, see~\cite{koenig}.
\begin{corollary}\label{cor.PFA->nonAP} \ensuremath{\text{{\sf PFA}}}\ implies the approachability property fails for $\omega_1$, that is, $\omega_2 \notin I[\omega_2]$, where $I[\omega_2]$ denotes the approachability ideal on $\omega_2$. \end{corollary} \begin{proof} It is not hard to see that $I[\omega_2] \subset I_\ensuremath{\text{{\rm IS}}}[\omega_2, \omega_2]$. \end{proof} The failure of various square principles under \ensuremath{\text{{\sf PFA}}}\ is originally due to Todor\-\v{c}evi\'c and Magidor, see~\cite{todorcevic.note_on_PFA} and~\cite[Theorem~6.3]{schimmerling}. See~\cite{weiss} for the notation used in Corollary~\ref{cor.PFA->non_square}. \begin{corollary}\label{cor.PFA->non_square} Suppose \ensuremath{\text{{\sf PFA}}}\ holds and $\cf \lambda \geq \omega_2$. Then $\ensuremath{\lnot} \square_{\cof(\omega_1)}(\omega_2, \lambda)$. \end{corollary} \begin{proof} This follows from Theorem~\ref{theorem.PFA->ISP} and~\cite[Theorem~4.2]{weiss}. \end{proof} \section{An interlude on forcing} \begin{definition} Let $\mathbb{P}$ be a forcing. We say $\mathbb{P}$ is a \emph{standard iteration of length $\kappa$} if \begin{enumerate}[label=(\roman*)] \item $\mathbb{P}$ is the direct limit of an iteration $\langle \mathbb{P}_\alpha\ |\ \alpha < \kappa \rangle$ that takes direct limits stationarily often, \item $\mathbb{P}_\alpha$ has size less than $\kappa$ for all $\alpha<\kappa$. \end{enumerate} \end{definition} It is a classical result that the $\mu$-cc is preserved by iterations of length $\mu$ of posets of size less than $\mu$ that take direct limits stationarily often. So the following lemma does not come as a surprise but nonetheless has not been observed so far. \begin{lemma}\label{keylem1} Let $\mathbb{P}$ be a standard iteration of length $\kappa$. Then $\mathbb{P}$ is $\kappa$-cc and satisfies the $\kappa$-approximation property. \end{lemma} \begin{proof} Let $\mathbb{P}$ be the direct limit of $\langle \mathbb{P}_\alpha\ |\ \alpha < \kappa \rangle$.
It suffices to verify the $\kappa$-approximation property for subsets of ordinals. The proof is by induction on $\lambda\geq\kappa$. We start with the proof of the base case $\lambda = \kappa$. We need to show that if $p \in \mathbb{P}$ and $\dot{h} \in V^{\mathbb{P}}$ are such that $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \in \vphantom{2}^\kappa 2$ and $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \forall \alpha < \kappa\ \dot{h} \restriction \alpha \in V$, then $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \in V$. So assume to the contrary that there is $\bar{p} \leq p$ such that $\bar{p} \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \notin V$. Fix an enumeration $\mathbb{P}=\{p_\xi\ |\ \xi<\kappa\}$ and let $C_0$ be the club of all $\alpha < \kappa$ such that $\ensuremath{{\textstyle\bigcup}} \{ \mathbb{P}_\xi\ |\ \xi < \alpha \} = \{p_\xi\ |\ \xi < \alpha \}$. Define $S \coloneqq \{ \alpha < \kappa\ |\ \mathbb{P}_\alpha\ \text{is a direct limit} \}$. $S$ is stationary by assumption, and if $\alpha \in S \cap C_0$, then $\mathbb{P}_\alpha = \{ p_\xi\ |\ \xi < \alpha \}$. For $\xi < \kappa$ let $A_\xi \subset \mathbb{P}$ be a maximal antichain below $\bar{p}$ of conditions deciding the value of $\dot{h}(\xi)$. Then $C \coloneqq \{ \alpha \in C_0\ |\ \forall \xi < \alpha\ A_\xi \subset \mathbb{P}_\alpha \}$ is club. For $\alpha \in C$ let \begin{equation*} \dot{h}_\alpha \coloneqq \{ \langle (\xi,i) , p \rangle\ |\ \xi < \alpha,\ p \in \mathbb{P}_\alpha,\ p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}(\xi)=i \}. \end{equation*} Then $\dot{h}_\alpha \in V^{\mathbb{P}_\alpha}$ and $\bar{p} \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}_\alpha \in \vphantom{2}^\alpha 2$. \begin{claim}\label{keyfa0} $\bar{p} \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \restriction \alpha = \dot{h}_\alpha$ for all $\alpha \in C$.
\end{claim} \begin{claimproof} Suppose to the contrary that for some $\alpha \in C$ there are $q \leq \bar{p}$ and $\xi < \alpha$ such that $q \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}(\xi) \neq \dot{h}_\alpha(\xi)$. Let $r \in A_\xi$ be compatible with $q$. Then $r \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}(\xi) = i$ for some $i < 2$. But as $A_\xi \subset \mathbb{P}_\alpha$, this also means $r \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}_\alpha(\xi) = i$, contradicting its compatibility with $q$. \end{claimproof} \begin{claim}\label{keyfa1} $\bar{p} \mathrel\|\joinrel\relbar_{\mathbb{P}_\alpha} \dot{h}_\alpha \in V$ for all $\alpha \in C$. \end{claim} \begin{claimproof} Assume towards a contradiction that for some $q \leq \bar{p}$ and $\alpha \in C$ we have $ q \mathrel\|\joinrel\relbar_{\mathbb{P}_\alpha} \dot{h}_\alpha \notin V$. Then for each $g\in \vphantom{2}^\alpha 2$ there is a maximal antichain $A_g$ among the conditions in $\mathbb{P}_\alpha$ below $q$ such that for any element $r \in A_g$, there is $\xi_r < \alpha$ such that $r \mathrel\|\joinrel\relbar_{\mathbb{P}_\alpha} \dot{h}_\alpha(\xi_r) \neq g(\xi_r)$. This means that any $\langle(\xi_r,i),p\rangle\in\dot{h}_\alpha$ such that $p$ is compatible with $r$ is such that $g(\xi_r)\neq i$. This in turn means that $r\mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}_\alpha(\xi_r) \neq g(\xi_r)$ for any $r\in A_g$ and for any $g\in \vphantom{2}^\alpha 2$. Since a maximal antichain in $\mathbb{P}_\alpha$ is also a maximal antichain in $\mathbb{P}$, this implies that $q \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h}_\alpha \notin V$, which is impossible by Claim~\ref{keyfa0}.
\end{claimproof} For $\alpha \in S \cap C_0$, by Claim~\ref{keyfa1} we have $\bar{p} \mathrel\|\joinrel\relbar_{\mathbb{P}_\alpha} \dot{h}_\alpha \in V$, so there are $\xi_\alpha < \kappa$ and $g_\alpha \in \vphantom{2}^\alpha 2$ such that $p_{\xi_\alpha} \in \mathbb{P}_\alpha$, $p_{\xi_\alpha} \leq \bar{p}$, and $p_{\xi_\alpha} \mathrel\|\joinrel\relbar_{\mathbb{P}_\alpha} \dot{h}_\alpha = g_\alpha$. Since $\alpha \in S \cap C_0$, we have $\xi_\alpha < \alpha$, so by Fodor's lemma there are a stationary $S_0 \subset S \cap C_0$ and a fixed $\xi$ such that $\xi_\alpha = \xi$ for all $\alpha \in S_0$. But then $p_\xi \mathrel\|\joinrel\relbar_{\mathbb{P}_\alpha} \dot{h} \restriction \alpha = \dot{h}_\alpha = g_\alpha$ for all $\alpha \in S_0$, so that $p_\xi \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} = \ensuremath{{\textstyle\bigcup}}_{\alpha \in S_0} \dot{h}_\alpha = \ensuremath{{\textstyle\bigcup}}_{\alpha \in S_0} g_\alpha \in V$, contradicting $p_\xi \leq \bar{p}$. Now we prove the lemma for $\lambda > \kappa$, assuming it has been shown for all $\gamma < \lambda$. Let $p \in \mathbb{P}$ and $\dot{h} \in V^{\mathbb{P}}$ be such that $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \in \vphantom{2}^\lambda 2$ and $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \forall z \in P^V_\kappa V\ \dot{h} \restriction z \in V$. First suppose $\cf \lambda > \kappa$. By the induction hypothesis we know that $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \forall \gamma < \lambda\ \dot{h} \restriction \gamma \in V$. For every $\gamma < \lambda$ there are $\alpha_\gamma < \kappa$ and $g_\gamma \in \vphantom{2}^\gamma 2$ such that $p_{\alpha_\gamma} < p$ and $p_{\alpha_\gamma} \mathrel\|\joinrel\relbar \dot{h} \restriction \gamma = g_\gamma$. Thus there is an unbounded $U \subset \lambda$ such that $\alpha_\gamma = \alpha_{\gamma'}$ for all $\gamma, \gamma' \in U$, so that for $\gamma \in U$ we have $p_{\alpha_\gamma} \mathrel\|\joinrel\relbar \dot{h} = \ensuremath{{\textstyle\bigcup}}_{\gamma \in U} g_\gamma \in V$.
If $\cf \lambda \leq \kappa$, let $U \subset \lambda$ be cofinal of order type $\cf \lambda$, and set \begin{equation*} T \coloneqq \{ g \in \vphantom{2}^{<\lambda} 2\ |\ \exists q \leq p\ \exists \gamma \in U\ q \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \restriction \gamma = g \}. \end{equation*} Then $T$, ordered by end extension, is a tree of height $\cf \lambda$. As $\mathbb{P}$ is $\kappa$-cc, all levels of $T$ have size less than $\kappa$. Let $X$ be a set of size at most $\kappa$ such that for every pair of incompatible elements $g, g' \in T$ there is $\alpha \in X$ such that $g(\alpha) \neq g'(\alpha)$. By the induction hypothesis we have $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \restriction X \in V$. But $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} = \ensuremath{{\textstyle\bigcup}} \{ g \in T\ |\ g \restriction X = \dot{h} \restriction X \}$, so that $p \mathrel\|\joinrel\relbar_{\mathbb{P}} \dot{h} \in V$. \end{proof} \section{The principles {\sffamily TP} and {\sffamily ITP} in generic extensions} \begin{lemma}\label{lemma.thin_from_groundmodel} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering property, and suppose $\kappa$ is inaccessible in $V$. Suppose $D = \langle d_a\ |\ a \in P^W_\kappa \lambda \rangle$ is a $P^W_\kappa \lambda$-list such that for every $a \in P^W_\kappa \lambda$ there is $z_a \in V$ such that $d_a = z_a \cap a$. Then $D$ is thin. \end{lemma} \begin{proof} Work in $W$. Let $c \in P_\kappa \lambda$. By the $\kappa$-covering property there is $\bar{c} \in P^V_\kappa \lambda$ such that $c \subset \bar{c}$. Also we have $\{ d_a \cap c\ |\ c \subset a \in P^W_\kappa \lambda \} = \{ z_a \cap \bar{c} \cap c\ |\ c \subset a \in P^W_\kappa \lambda \} \subset \{ z \cap c\ |\ z \in P^V \bar{c} \}$. But the latter set has cardinality less than $\kappa$ since $\kappa$ is inaccessible in $V$.
\end{proof} \begin{proposition}\label{prop.old_I_subset_of_new_I} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering and the $\kappa$-approximation properties, and suppose $\kappa$ is inaccessible in $V$. Then \begin{equation*} I_\ensuremath{\text{{\rm IT}}}^V[\kappa, \lambda] \subset I^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]. \end{equation*} \end{proposition} \begin{proof} Work in $W$. For $A \in I_\ensuremath{\text{{\rm IT}}}^V[\kappa, \lambda]$ let $\langle d_a\ |\ a \in P^V_\kappa \lambda \rangle \in V$ be $A$-effable in $V$. Then by Lemma~\ref{lemma.thin_from_groundmodel} $\langle d_a\ |\ a \in P_\kappa \lambda \rangle$ is thin, where $d_a \coloneqq \emptyset$ for $a \notin V$. Suppose $\langle d_a\ |\ a \in P_\kappa \lambda \rangle$ were not $A$-effable. Let $S \subset A$ be stationary and $d \subset \lambda$ such that $d_x = d \cap x$ for all $x \in S$. Suppose $d \notin V$. Then, by the $\kappa$-approximation property, there is a $z \in P^V_\kappa \lambda$ such that $d \cap z \notin V$. But for $x \in S$ with $z \subset x$ we have $d \cap z = d \cap x \cap z = d_x \cap z \in V$, a contradiction. Therefore $d \in V$, and $S \subset \bar{S} \coloneqq \{ x \in P^V_\kappa \lambda\ |\ d_x = d \cap x \} \in V$. Since $\langle d_a\ |\ a \in P^V_\kappa \lambda \rangle \in V$ is $A$-effable in $V$, $\bar{S}$ is not stationary in $V$. So there exists $C \in V$, $C \subset P^V_\kappa \lambda$ club in $V$ such that $C \cap \bar{S} = \emptyset$. Let $f: P_\omega \lambda \to P_\kappa \lambda$ be in $V$ such that $\ensuremath{\text{{\rm Cl}}}_f^V \subset C$. But then, by the stationarity of $S$, there is an $x \in S$ such that $x \in \ensuremath{\text{{\rm Cl}}}_f$, so that $x \in C \cap \bar{S}$, a contradiction.
\end{proof} \begin{theorem}\label{theorem.old_filter_subset_of_new_filter} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering property and the $\tau$-approximation property for some $\tau < \kappa$, and suppose $\kappa$ is inaccessible in $V$. Then \begin{equation*} P^W_\kappa \lambda - P^V_\kappa \lambda \in I^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda], \end{equation*} which furthermore implies \begin{equation*} F_\ensuremath{\text{{\rm IT}}}^V[\kappa, \lambda] \subset F^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]. \end{equation*} So in particular, if $W \models \ensuremath{\text{{\sf ITP}}}(\kappa, \lambda)$, then $V \models \ensuremath{\text{{\sf ITP}}}(\kappa, \lambda)$. \end{theorem} \begin{proof} Work in $W$. Let $B \coloneqq P_\kappa \lambda - P^V_\kappa \lambda$. For $x \in B$ let $a_x \in P^V_\tau \lambda$ be such that $x \cap a_x \notin V$, which exists by the $\tau$-approximation property. Put $d_x \coloneqq a_x \cap x$. For $x \in P_\kappa \lambda - B$, let $d_x \coloneqq \emptyset$. Then $\langle d_x\ |\ x \in P_\kappa \lambda \rangle$ is thin by Lemma~\ref{lemma.thin_from_groundmodel}. Suppose $\langle d_x\ |\ x \in P_\kappa \lambda \rangle$ were not $B$-effable. Then there are $d \subset \lambda$ and $U \subset B$ such that $U$ is cofinal and $d_x = d \cap x$ for all $x \in U$. Define a $\subset$-increasing sequence $\langle x_\alpha\ |\ \alpha < \tau^+ \rangle$ with $x_\alpha \in U$ for all $\alpha < \tau^+$ and a sequence $\langle e_\alpha\ |\ \alpha < \tau^+ \rangle$ such that $x_\alpha \subset e_\alpha$ and $e_\alpha \in P^V_\kappa \lambda$ for all $\alpha < \tau^+$ as follows. Let $\beta < \tau^+$ and suppose $\langle x_\alpha\ |\ \alpha < \beta \rangle$ and $\langle e_\alpha\ |\ \alpha < \beta \rangle$ have been defined.
Let $x_\beta \in U$ be such that $\ensuremath{{\textstyle\bigcup}}_{\alpha < \beta} ( x_\alpha \cup a_{x_\alpha} \cup e_\alpha ) \subset x_\beta$, and let $e_\beta \in P^V_\kappa \lambda$ be such that $x_\beta \subset e_\beta$, which exists by the $\kappa$-covering property. Then $\langle d_{x_\alpha}\ |\ \alpha < \tau^+ \rangle$ is $\subset$-increasing as $d_{x_\alpha} = d \cap x_\alpha$ for all $\alpha < \tau^+$, and since $| d_{x_\alpha} | < \tau$ for all $\alpha < \tau^+$, there is $\gamma < \tau^+$ such that $d_{x_\alpha} = d_{x_{\alpha'}}$ for all $\alpha, \alpha' \in [\gamma, \tau^+)$. But then $a_{x_{\gamma+1}} \cap e_\gamma \subset a_{x_{\gamma+1}} \cap x_{\gamma+1} = d_{x_{\gamma+1}} = d_{x_\gamma} \subset e_{\gamma}$ and $d_{x_{\gamma + 1}} \subset a_{x_{\gamma + 1}}$, so that $d_{x_\gamma} = a_{x_{\gamma+1}} \cap e_\gamma \in V$, a contradiction. To see $F_\ensuremath{\text{{\rm IT}}}^V[\kappa, \lambda] \subset F^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$, let $A \in F^V_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$. Then $P^V_\kappa \lambda - A \in I^V_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$, so by Proposition~\ref{prop.old_I_subset_of_new_I} $P^V_\kappa \lambda - A \in I^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$. Thus $P^W_\kappa \lambda - A = ( P^W_\kappa \lambda - P^V_\kappa \lambda ) \cup ( P^V_\kappa \lambda - A ) \in I^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$, which means $A \in F^W_\ensuremath{\text{{\rm IT}}}[\kappa, \lambda]$. \end{proof} Note that by~\cite[Theorem~1.1]{gitik.nonsplitting} the set $P^W_\kappa \lambda - P^V_\kappa \lambda$ in Theorem~\ref{theorem.old_filter_subset_of_new_filter} is stationary for $\lambda \geq \kappa^+$ if there is a real in $W - V$. We will now weaken the assumption that $(V, W)$ satisfies the $\tau$-approximation property for some $\tau < \kappa$ to the $\kappa$-approximation property, so that this kind of argument can be exploited for a wider range of forcing constructions.
\begin{theorem}\label{theorem.pull_back_TP} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering and the $\kappa$-approximation properties, and suppose $\kappa$ is inaccessible in $V$. If $W \models \ensuremath{\text{{\sf TP}}}(\kappa, \lambda)$, then $V \models \ensuremath{\text{{\sf TP}}}(\kappa, \lambda)$. \end{theorem} \begin{proof} In $V$, let $D = \langle d_a\ |\ a \in P_\kappa \lambda \rangle$ be a $P_\kappa \lambda$-list. Now work in $W$. For every $a \in P_\kappa \lambda$ let, by the $\kappa$-covering property, $z_a \in P^V_\kappa \lambda$ be such that $a \subset z_a$. Define a $P_\kappa \lambda$-list $E = \langle e_a\ |\ a \in P_\kappa \lambda \rangle$ by $e_a \coloneqq d_{z_a} \cap a$. Then $E$ is thin by Lemma~\ref{lemma.thin_from_groundmodel}. Thus by $\ensuremath{\text{{\sf TP}}}(\kappa,\lambda)$ there is a cofinal branch $d$ for $E$. So for all $y \in P_\kappa \lambda$ there is $a \in P_\kappa \lambda$, $y \subset a$, such that $e_a \cap y = d\cap y$. In particular \begin{equation*} d \cap y = e_a \cap y = d_{z_a} \cap a \cap y = d_{z_a} \cap y. \end{equation*} Thus if $y \in P^V_\kappa \lambda$, then $d \cap y \in V$, so that $d \in V$ by the $\kappa$-approximation property. But $d$ is also a cofinal branch for $D$ in $V$. \end{proof} \begin{corollary}\label{cor.strongly_compact_in_groundmodel} Let $\mathbb{P}$ be a standard iteration of length $\kappa$ and suppose $\kappa$ is inaccessible. If\/ $\mathbb{P}$ forces $\ensuremath{\text{{\sf TP}}}(\kappa)$, then $\kappa$ is strongly compact. \end{corollary} \begin{proof} This follows directly from Lemma~\ref{keylem1} and Theorem~\ref{theorem.pull_back_TP}. \end{proof} Notice that, together with Theorem~\ref{theorem.PFA->ISP}, Corollary~\ref{cor.strongly_compact_in_groundmodel} implies the following remarkable corollary.
\begin{corollary}\label{cor.PFA_requires_strongly_compact} Suppose $\kappa$ is inaccessible and \ensuremath{\text{{\sf PFA}}}\ is forced by a standard iteration of length $\kappa$ that collapses $\kappa$ to $\omega_2$. Then $\kappa$ is strongly compact. \end{corollary} Corollary~\ref{cor.PFA_requires_strongly_compact} says that any of the known methods for producing a model of \ensuremath{\text{{\sf PFA}}}\ from a large cardinal assumption requires at least a strongly compact cardinal. This can be improved to the optimal result if we require the iteration for forcing \ensuremath{\text{{\sf PFA}}}\ to be proper. For this purpose we introduce an ad-hoc definition. \begin{definition} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering and the $\kappa$-approximation properties, and suppose $\kappa$ is inaccessible in $V$. We say $M \in (P_\kappa' H_\theta^V)^W$ is \emph{$V$-guessing} if for all $z \in M$ and all $d \in P^V z$ there is an $e \in M$ such that $d \cap M = e \cap M$. \end{definition} The following two propositions should be seen as analogs of Propositions~\ref{prop.guessingmodels} and~\ref{prop.guessing->ISP}. \begin{proposition}\label{prop.V-guessing} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering and the $\kappa$-approximation properties, and suppose $\kappa$ is inaccessible in $V$. Assume $W \models \ensuremath{\text{{\sf ITP}}}(\kappa, |H_\theta^V|)$ for some large enough $\theta$. Then in $W$ the set \begin{equation*} \{ M \in P'_\kappa H_\theta^V\ |\ \text{$M$ is $V$-guessing and closed under countable suprema} \} \end{equation*} is stationary.\footnote{However, it need not be a subset of $V$.} \end{proposition} \begin{proof} Work in $W$. By~\cite[Theorem~3.5]{weiss}, we have that the set of all $M \in P_\kappa' H_\theta^V$ that are closed under countable suprema belongs to $F_\ensuremath{\text{{\rm IT}}}[\kappa, H_\theta^V]$. 
Assume that there were a set $A \notin I_\ensuremath{\text{{\rm IT}}}[\kappa, H_\theta^V]$ such that for all $M \in A$ there is $z_M \in M$ and $d_M \in P^V z_M$ such that $d_M \cap M \neq e \cap M$ for all $e \in M$. Then $D \coloneqq \langle d_M \cap M\ |\ M \in A \rangle$ is thin by Lemma~\ref{lemma.thin_from_groundmodel}. Thus by $\ensuremath{\text{{\sf ITP}}}(\kappa, |H_\theta^V|)$ there is an ineffable branch $d$ for $D$, and by the $\kappa$-approximation property we have $d \in V$. Let $S \coloneqq \{ M \in A\ |\ d_M \cap M = d \cap M \}$. Then $S$ is stationary, and we may assume $z_M = z$ for some $z \in H_\theta^V$ and all $M \in S$. As $P^V z \subset H_\theta^V$ and $d \subset z$, there is an $M \in S$ such that $d \in M$, a contradiction. \end{proof} \begin{theorem}\label{theorem.pull_back_ISP} Let $V \subset W$ be a pair of models of\/ \ensuremath{\text{{\sf ZFC}}}\ that satisfies the $\kappa$-covering and the $\kappa$-approximation properties. Let $\kappa$ be inaccessible in $V$ and $\lambda$ be regular in $W$. Suppose that for all $\gamma < \kappa$ and every $S \subset \cof(\omega) \cap \gamma$ in $V$ it holds that $V \models$ ``$S$ is stationary in $\gamma$'' if and only if $W \models$ ``$S$ is stationary in $\gamma$.'' Let $\theta$ be large enough. Suppose $M \in (P_\kappa' H_\theta^V)^W$ is a $V$-guessing model closed under countable suprema such that $\lambda \in M$. Then $M \cap \lambda \in V$ and $V \models \ensuremath{\text{{\sf ITP}}}(\kappa,\lambda)$. \end{theorem} \begin{proof} Let $\langle S_\alpha\ |\ \alpha < \lambda \rangle \in M$ be a partition of $\cof(\omega) \cap \lambda$ into sets stationary in $V$. Let $\lambda_M \coloneqq \sup (M \cap \lambda)$. \begin{claim}\label{claim.magidor} It holds that \begin{equation*} M \cap \lambda = \{ \delta < \lambda\ |\ V \models \text{$S_\delta$ is stationary in $\lambda_M$} \} \in V.
\end{equation*} \end{claim} \begin{claimproof} For one direction, let $\delta$ be such that $V \models$ ``$S_\delta$ is stationary in $\lambda_M$.'' Notice that $\cf^V \lambda_M < \kappa$, so $W \models$ ``$S_\delta$ is stationary in $\lambda_M$.'' As $M$ is closed under countable suprema, we get that $S_\delta \cap M \neq \emptyset$. Thus if $\beta \in S_\delta \cap M$, then $\delta$ is definable in $M$ as the $\alpha$ for which $\beta \in S_\alpha$, so that $\delta \in M$. For the other direction, let $\delta \in M \cap \lambda$ and let $C \in V$ be club in $\lambda_M$. As $C \subset \lambda \in M$ and $M$ is $V$-guessing, $C \cap M = e \cap M$ for some $e \in M$. Since $C \cap M$ is closed under countable suprema, $M\models$ ``$e$ is closed under countable suprema.'' Thus $M \models e \cap S_\delta \neq \emptyset$, which proves $C \cap S_\delta \neq \emptyset$ as $e \cap S_\delta \cap M \subset C \cap S_\delta$. \end{claimproof} Now to argue that $V \models \ensuremath{\text{{\sf ITP}}}(\kappa, \lambda)$, it is enough to check that $H_\theta^V \models \ensuremath{\text{{\sf ITP}}}(\kappa, \lambda)$. Since $M \prec H_\theta^V$, it in turn suffices to verify $M \models \ensuremath{\text{{\sf ITP}}}(\kappa, \lambda)$. So let $D\in M$ be a $P^V_\kappa\lambda$-list. Since $M$ is $V$-guessing, $d_{M \cap \lambda} \in V$, and $d_{M \cap \lambda} \subset \lambda \in M$, we get that $d_{M \cap \lambda} = e \cap M$ for some $e \in M$. Then $M \models$ ``$e$ is an ineffable branch for $D$.'' \end{proof} \begin{corollary}\label{cor.supercompact_in_groundmodel} Let $\mathbb{P}$ be a proper standard iteration of length $\kappa$ and suppose $\kappa$ is inaccessible. If\/ $\mathbb{P}$ forces $\ensuremath{\text{{\sf ITP}}}(\kappa)$, then $\kappa$ is supercompact. \end{corollary} \begin{proof} This follows from Lemma~\ref{keylem1}, Proposition~\ref{prop.V-guessing}, and Theorem~\ref{theorem.pull_back_ISP}. 
\end{proof} Under the additional premise of properness, Corollary~\ref{cor.supercompact_in_groundmodel} implies the following strongest possible version of Corollary~\ref{cor.PFA_requires_strongly_compact}. \begin{corollary}\label{cor.proper_requires_supercompact} Suppose $\kappa$ is inaccessible and \ensuremath{\text{{\sf PFA}}}\ is forced by a proper standard iteration of length $\kappa$ that collapses $\kappa$ to $\omega_2$. Then $\kappa$ is supercompact. \end{corollary} It should be noted that Sakai has pointed out a serious obstruction to removing the assumption of $\mathbb{P}$ being proper in Corollary~\ref{cor.proper_requires_supercompact}. \begin{theorem}[Sakai, 2010]\label{theorem.sakai} Let $\kappa$ be a supercompact cardinal, $\theta > \kappa$ be sufficiently large, and suppose there is a Woodin cardinal $\mu > \theta$. Suppose $W$ is the standard semiproper forcing extension such that $W \models \ensuremath{\text{{\sf MM}}} + \kappa = \omega_2$. Then in $W$ it holds that for every stationary preserving forcing $\mathbb{P}$ the set \begin{equation*} \{ M \in P_{\omega_2} H_\theta\ |\ \exists G \subset \mathbb{P}\ \text{$G$ is $M$-generic},\ M \cap \omega_3 \notin V \} \end{equation*} is stationary in $P_{\omega_2} H_\theta$. \end{theorem} In the setting of Theorem~\ref{theorem.sakai}, if one carries out the proof of Theorem~\ref{theorem.PFA->ISP} in $W$, one gets that $P_{\kappa}^W \lambda - P_{\kappa}^V \lambda \notin I_\ensuremath{\text{{\rm IT}}}^W[\kappa, \lambda]$ for $\lambda$ such that $\kappa < \lambda$ and $2^\lambda < \theta$. This should be contrasted with Theorem~\ref{theorem.old_filter_subset_of_new_filter}. \section{Conclusion} There are several open problems which the results presented suggest. The most appealing one deals with the construction of an inner model in which $\omega_2$ has an arbitrary degree of supercompactness starting from a universe of sets in which $\ensuremath{\text{{\sf MM}}}$ holds.
It seems plausible to conjecture that if $\ensuremath{\text{{\sf ISP}}}(\kappa)$ holds, then for each $\lambda$ there is a simply definable transitive class in which $\kappa$ is $\lambda$-supercompact. Such a line of thought has already been pursued by Foreman~\cite{FOR10}, where he proved that a certain strong form of Chang's conjecture for a small cardinal $\kappa$ implies that there is an $X$ such that $\kappa$ is huge in $L[X]$. It has yet to be understood to what extent Foreman's ideas can be applied to the results of this paper; a key issue in this context appears to be a thorough study of the properties of guessing models and of the ideals $I_\ensuremath{\text{{\rm IS}}}[\omega_2,\lambda]$ in models of \ensuremath{\text{{\sf MM}}}. We also expect that many of the known consequences of \ensuremath{\text{{\sf PFA}}}\ and supercompactness might be obtained directly from the principle \ensuremath{\text{{\sf ISP}}}. Examples are given in \cite{weiss}, where it is shown that $\ensuremath{\text{{\sf ITP}}}(\omega_2)$ implies the failure of some of the weakest forms of square incompatible with $\ensuremath{\text{{\sf PFA}}}$, and in \cite{VIA10}, where, using properties of guessing models, a new proof that \ensuremath{\text{{\sf PFA}}}\ implies \ensuremath{\text{{\sf SCH}}}\ is provided. On the other hand we conjecture that $\ensuremath{\text{{\sf ISP}}}(\omega_2)$ does not decide the size of the continuum. \ifthenelse{\boolean{usemicrotype}}{\microtypesetup{spacing=false}}{} \bibliographystyle{amsplain}
\section{Introduction} From a methodological viewpoint, testing a null hypothesis $H_0:~x\sim f_0(x|\omega_0)$ versus the alternative $H_a:~x\sim f_1(x|\omega_1)$ in a Bayesian framework requires the introduction of two prior distributions, $\pi_0(\omega_0)$ and $\pi_1(\omega_1)$, that are defined on the respective parameter spaces. In functional terms, the core object of the Bayesian approach to testing and model choice, the Bayes factor \citep{jeffreys:1939, robert:2001,ohagan:forster:2004}, is indeed a ratio of two marginal densities taken at the same observation $x$, $$ B_{01}(x) = \dfrac{\int \pi_0(\omega_0) f_0(x|\omega_0)\,\text{d}\omega_0} {\int \pi_1(\omega_1) f_1(x|\omega_1)\,\text{d}\omega_1} = \dfrac{m_0(x)}{m_1(x)}\,. $$ (This quantity $B_{01}(x)$ is then compared to $1$ in order to decide about the strength of the support of the data in favour of $H_0$ or $H_a$.) It is thus mathematically clearly and uniquely defined, provided both integrals exist and differ from both $0$ and $\infty$. The practical computation of the Bayes factor has generated a large literature on approximation techniques \citep[see, e.g.][]{chib:1995,gelman:meng:1998,chen:shao:ibrahim:2000,chopin:robert:2010}, seeking improvements in numerical precision. The Savage--Dickey \citep{dickey:1971} representation of the Bayes factor is primarily known as a special identity that relates the Bayes factor to the posterior distribution which corresponds to the more complex hypothesis. As described in \cite{verdinelli:wasserman:1995} and \citeauthor{chen:shao:ibrahim:2000} (2000, pages 164-165), this representation has practical implications as a basis for simulation methods. However, as stressed in \cite{dickey:1971} and \cite{ohagan:forster:2004}, the foundation of the Savage--Dickey representation is clearly theoretical.
More specifically, when considering a testing problem with an embedded model, $H_0:\theta=\theta_0$, and a nuisance parameter $\psi$, i.e.~when $\omega_1$ can be decomposed as $\omega_1=(\theta,\psi)$ and when $\omega_0=(\theta_0,\psi)$, for a sampling distribution $f(x|\theta, \psi)$, the plug-in representation \begin{equation}\label{eq:dickey} B_{01}(x) = \dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,, \end{equation} with the obvious notations for the marginal distributions $$ \pi_1(\theta) = \int \pi_1(\theta,\psi)\text{d}\psi \quad\text{and}\quad \pi_1(\theta|x) = \int \pi_1(\theta,\psi|x)\text{d}\psi\,, $$ holds under Dickey's (1971) assumption that the conditional prior density of $\psi$ under the alternative model, given $\theta=\theta_0$, $\pi_1(\psi|\theta_0)$, is equal to the prior density under the null hypothesis, $\pi_0(\psi)$, \begin{equation}\label{eq:savage} \pi_1(\psi|\theta_0)=\pi_0(\psi)\,. \end{equation} Therefore, Dickey's (1971) identity \eqref{eq:dickey} reduces the Bayes factor to the ratio of the posterior over the prior marginal densities of $\theta$ under the alternative model, taken at the tested value $\theta_0$. The Bayes factor is thus expressed as an amount of information brought by the data and this helps in its justification as a model choice tool. (See also \citealp{consonni:veronese:2008}.) In order to illustrate the Savage--Dickey representation, consider the artificial example of computing the Bayes factor between the models $$ \mathfrak{M}_0:\quad x|\psi\sim\mathcal{N}(\psi,1),\quad \psi\sim\mathcal{N}(0,1)\,, $$ and $$ \mathfrak{M}_1:\quad x|\theta,\psi\sim\mathcal{N}(\psi,\theta),\quad \psi|\theta\sim\mathcal{N}(0,\theta),\quad \theta\sim I\mathcal{G}(1,1)\,, $$ which is equivalent to testing the null hypothesis $H_0:\theta=\theta_0=1$ against the alternative $H_1:\theta\neq 1$ when $x|\theta,\psi\sim\mathcal{N}(\psi,\theta)$. In that case, model $\mathfrak{M}_0$ clearly is embedded in model $\mathfrak{M}_1$. 
We have $$ m_0(x)=\exp\left(-x^2/4\right)\big/ (\sqrt{2}\sqrt{2\pi})\quad\mbox{and}\quad m_1(x)=\left(1+x^2/4\right)^{-3/2}\Gamma(3/2)\big/ (\sqrt{2}\sqrt{2\pi})\,, $$ and therefore $$ B_{01}(x)=\Gamma(3/2)^{-1}\left(1+x^2/4\right)^{3/2}\exp\left(-x^2/4\right)\,. $$ Dickey's assumption \eqref{eq:savage} on the prior densities is satisfied, since $$ \pi_1(\psi|\theta_0)=\frac{1}{\sqrt{2\pi}}\exp\left(-\psi^2/2\right)=\pi_0(\psi)\,. $$ Therefore, since $$ \pi_1(\theta)=\theta^{-2}\exp\left(-\theta^{-1}\right)\,,\quad\pi_1(\theta_0)=\exp(-1)\,, $$ and \begin{align*} \pi_1(\theta|x)&=\Gamma(3/2)^{-1}\left(1+x^2/4\right)^{3/2}\theta^{-5/2} \exp\left(-\theta^{-1}\left(1+x^2/4\right)\right)\mathbb{I}_{\theta>0}\,,\\ \pi_1(\theta_0|x)&=\Gamma(3/2)^{-1}\left(1+x^2/4\right)^{3/2}\exp\left(-\left(1+x^2/4\right)\right)\,, \end{align*} we clearly recover the Savage--Dickey representation $$ B_{01}(x)=\Gamma(3/2)^{-1}\left(1+x^2/4\right)^{3/2}\exp\left(-x^2/4\right)=\pi_1(\theta_0|x)/\pi_1(\theta_0)\,. $$ While the difficulty with the representation \eqref{eq:dickey} is usually addressed in terms of computational aspects, given that $\pi_1(\theta|x)$ is rarely available in closed form, we argue in the current paper that the Savage--Dickey representation faces challenges of a deeper nature that led us to consider it a `paradox'. First, by considering both prior and posterior marginal distributions of $\theta$ uniquely {\em under the alternative model,} \eqref{eq:dickey} seems to indicate that the posterior probability of the null hypothesis $H_0:\theta=\theta_0$ is contained within the alternative hypothesis posterior distribution, even though the set of $(\theta,\psi)$'s such that $\theta=\theta_0$ has a zero probability under this alternative distribution. Second, as explained in Section \ref{sec:measure}, an even more fundamental difficulty with assumption \eqref{eq:savage} is that it is meaningless when examined (as it should) within the mathematical axioms of measure theory. 
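Both sides of this representation can be checked numerically. The following sketch (in Python; the function names are ours) evaluates the closed-form Bayes factor and the ratio $\pi_1(\theta_0|x)/\pi_1(\theta_0)$, with the versions of the prior and posterior marginal densities of $\theta$ displayed above, at several observed values.

```python
import math

def bayes_factor(x):
    # Closed-form B_01(x) for the artificial example, as derived above.
    return (1 + x**2 / 4) ** 1.5 * math.exp(-x**2 / 4) / math.gamma(1.5)

def savage_dickey_ratio(x, theta0=1.0):
    # pi_1(theta_0 | x) / pi_1(theta_0), using the displayed versions:
    # theta ~ IG(1, 1) a priori, theta | x ~ IG(3/2, 1 + x^2/4) a posteriori.
    prior = theta0**-2 * math.exp(-1 / theta0)
    s = 1 + x**2 / 4
    posterior = s**1.5 / math.gamma(1.5) * theta0**-2.5 * math.exp(-s / theta0)
    return posterior / prior

for x in (0.0, 0.5, 1.7, 3.0):
    assert abs(bayes_factor(x) - savage_dickey_ratio(x)) < 1e-12
```

For these particular versions of $\pi_1(\theta)$ and $\pi_1(\theta|x)$ the agreement is exact; as discussed in Section~\ref{sec:measure}, other versions of the very same distributions break the identity.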
Having stated those mathematical difficulties with the Savage--Dickey representation, we proceed to show in Section \ref{sec:montecarl} that similar identities hold under no constraint on the prior distributions. In Section \ref{sec:montecarl}, we derive computational algorithms that exploit these representations to approximate the Bayes factor, in an approach that differs from the earlier solution of \cite{verdinelli:wasserman:1995}. The paper concludes with an illustration in the setting of variable selection within a probit model. \section{A measure-theoretic paradox}\label{sec:measure} When considering a standard probabilistic setting where the dominating measure on the parameter space is the Lebesgue measure, rather than a counting measure, the conditional density $\pi_1(\psi|\theta)$ is rigorously \citep{billingsley:1986} defined as the density of the conditional probability distribution or, equivalently, by the condition that $$ \mathbb{P}((\theta,\psi)\in A_1\times A_2) = \int_{A_1} \int_{A_2} \pi_1(\psi|\theta) \,\text{d}\psi\,\pi_1(\theta)\,\text{d}\theta = \int_{A_1\times A_2} \pi_1(\theta,\psi) \text{d}\psi\,\text{d}\theta\,, $$ for all measurable sets $A_1\times A_2$, when $\pi_1(\theta)$ is the associated marginal density of $\theta$. Therefore, this identity points out the well-known fact that the conditional density function $\pi_1(\psi|\theta)$ is defined up to a set of measure zero both in $\psi$ for {\em every} value of $\theta$ {\em and} in $\theta$. This implies that arbitrarily changing the value of the {\em function} $\pi_1(\cdot|\theta)$ on a negligible collection of values of $\theta$ does not affect the properties of the conditional distribution. In the setting where the Savage--Dickey representation is advocated, the value $\theta_0$ to be tested is not determined from the observations but is instead given in advance, since this is a testing problem.
Therefore the density function $$ \pi_1(\psi|\theta_0) $$ may be chosen in a {\em completely arbitrary} manner and there is no possible reason for a unique representation of $\pi_1(\psi|\theta_0)$ that can be found within measure theory. This implies that there always is a version of the conditional density $\pi_1(\psi|\theta_0)$ such that Dickey's (1971) condition \eqref{eq:savage} is satisfied---as well as, conversely, there are infinitely many versions for which it is {\em not} satisfied---. As a result, from a mathematical perspective, condition \eqref{eq:savage} cannot be seen as an {\em assumption} on the prior $\pi_1$ without further conditions, contrary to what is stated in the original \cite{dickey:1971} and later in \cite{ohagan:forster:2004}, \cite{consonni:veronese:2008} and \cite{wetzels:grasman:wagenmakers:2010}. This difficulty is the first part of what we call the {\em Savage--Dickey paradox}, namely that, as stated, the representation \eqref{eq:dickey} relies on a mathematically void constraint on the prior distribution. In the specific case of the artificial example introduced above, the choice of the conditional density $\pi_1(\psi|\theta_0)$ is therefore arbitrary: if we pick for this density the density of the $\mathcal{N}(0,1)$ distribution, there is agreement between $\pi_1(\psi|\theta_0)$ and $\pi_0(\psi)$, while, if we select instead the function $\exp(+\psi^2/2)$, which is not a density, there is no agreement in the sense of condition \eqref{eq:savage}. The paradox is that this disagreement has no consequence whatsoever in the Savage--Dickey representation. The second part of the Savage--Dickey paradox is that the representation \eqref{eq:dickey} is solely valid for a specific and unique choice of a version of the density for both the conditional density $\pi_1(\psi|\theta_0)$ and the joint density $\pi_1(\theta_0,\psi)$.
When looking at the derivation of \eqref{eq:dickey}, the choices of some specific versions of those densities are indeed noteworthy: in the following development, \begin{alignat*}{2} B_{01}(x) & = \dfrac{\int \pi_0(\psi) f(x|\theta_0,\psi)\,\text{d}\psi}{\int \pi_1(\theta,\psi)f(x|\theta,\psi)\, \text{d}\psi\text{d}\theta} & \qquad & \text{[by definition]} \\ & = \dfrac{\int \pi_1(\psi|\theta_0) f(x|\theta_0,\psi)\,\text{d}\psi\,\pi_1(\theta_0)} {\int \pi_1(\theta,\psi)f(x|\theta,\psi)\,\text{d}\psi\text{d}\theta\,\pi_1(\theta_0)} && \text{[using a specific version of $\pi_1(\psi|\theta_0)$]}\\ & = \dfrac{\int \pi_1(\theta_0,\psi) f(x|\theta_0,\psi)\,\text{d}\psi}{m_1(x)\pi_1(\theta_0)} && \text{[using a specific version of $\pi_1(\theta_0,\psi)$]}\\ & = \dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,,&&\text{[using a specific version of $\pi_1(\theta_0|x)$]} \end{alignat*} the second equality depends on a specific choice of the version of $\pi_1(\psi|\theta_0)$ but not on the choice of the version of $\pi_1(\theta_0)$, while the third equality depends on a specific choice of the version of $\pi_1(\psi,\theta_0)$ as equal to $\pi_0(\psi) \pi_1(\theta_0)$, thus related to the choice of the version of $\pi_1(\theta_0)$. The last equality leading to the Savage--Dickey representation relies on the choice of a specific version of $\pi_1(\theta_0|x)$ as well, namely that the constraint $$ \dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)} = \dfrac{\int \pi_0(\psi) f(x|\theta_0,\psi)\,\text{d}\psi}{m_1(x)} $$ holds, where the right hand side is equal to the Bayes factor $B_{01}(x)$ and is therefore independent from the version. This rigorous analysis implies that the Savage--Dickey representation is tautological, due to the availability of a version of the posterior density that makes it hold. As an illustration, consider once again the artificial example above. As already stressed, the value to be tested $\theta_0=1$ is set prior to the experiment. 
Thus, without modifying either the prior distribution under model $\mathfrak{M}_1$ or the marginal posterior distribution of the parameter $\theta$ under model $\mathfrak{M}_1$, and in a completely rigorous measure-theoretic framework, we can select $$ \pi_1(\theta_0)=100=\pi_1(\theta_0|x)\,. $$ For that choice, we obtain $$ \pi_1(\theta_0|x)/\pi_1(\theta_0) = 1 \neq B_{01}(x)= \Gamma(3/2)^{-1}\left(1+x^2/4\right)^{3/2}\exp\left(-x^2/4\right)\,. $$ Hence, for this specific choice of the densities, the Savage--Dickey representation does not hold. \vspace{0.5cm} \cite{verdinelli:wasserman:1995} have proposed a generalisation of the Savage--Dickey density ratio when the constraint (\ref{eq:savage}) on the prior densities is not verified (we stress again that this is a mathematically void constraint on the respective prior distributions). \cite{verdinelli:wasserman:1995} state that \begin{alignat*}{2} B_{01}(x) & = \dfrac{\int \pi_0(\psi) f(x|\theta_0,\psi)\,\text{d}\psi}{m_1(x)} & \qquad & \text{[by definition]}\\ & = \pi_1(\theta_0|x)\dfrac{\int \pi_0(\psi) f(x|\theta_0,\psi)\,\text{d}\psi}{m_1(x)\pi_1(\theta_0|x)} & \qquad & \text{[for any version of $\pi_1(\theta_0|x)$]}\\ & = \pi_1(\theta_0|x)\int\dfrac{\pi_0(\psi) f(x|\theta_0,\psi)}{m_1(x)\pi_1(\theta_0|x)} \dfrac{\pi_1(\psi|\theta_0)}{\pi_1(\psi|\theta_0)}\,\text{d}\psi & \qquad & \text{[for any version of $\pi_1(\psi|\theta_0)$]}\\ & = \pi_1(\theta_0|x)\int\dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta_0)}\,\dfrac{f(x|\theta_0,\psi) \pi_1(\psi|\theta_0)\,\text{d}\psi}{m_1(x)\pi_1(\theta_0|x)}\,\dfrac{\pi_1(\theta_0)}{\pi_1(\theta_0)} & \qquad & \text{[for any version of $\pi_1(\theta_0)$]}\\ & = \dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,\int\dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta_0)}\, \pi_1(\psi|\theta_0,x)\,\text{d}\psi & \qquad & \text{[for a specific version of $\pi_1(\psi|\theta_0,x)$]}\\ & = \dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,\mathbb{E}^{\pi_1(\psi|x,\theta_0)} 
\left[\dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta_0)}\right]\,. \end{alignat*} This representation of \cite{verdinelli:wasserman:1995} therefore remains valid for any choice of versions for $\pi_1(\theta_0|x)$, $\pi_1(\theta_0)$, $\pi_1(\psi|\theta_0)$, provided the conditional density $\pi_1(\psi|\theta_0,x)$ is defined by $ \pi_1(\psi|\theta_0,x) = \dfrac{f(x|\theta_0,\psi) \pi_1(\psi|\theta_0)\pi_1(\theta_0)}{m_1(x)\pi_1(\theta_0|x)}\,, $ which obviously means that the Verdinelli--Wasserman representation \begin{equation} B_{01}(x) = \dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,\mathbb{E}^{\pi_1(\psi|x,\theta_0)} \left[\dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta_0)}\right] \end{equation} is dependent on the choice of a version of $\pi_1(\theta_0)$. \vspace{0.5cm} We now establish that an alternative representation of the Bayes factor is available and can be exploited towards approximation purposes. When considering the Bayes factor $$ B_{01}(x) = \dfrac{\int \pi_0(\psi) f(x|\theta_0,\psi)\,\text{d}\psi} {\int \pi_1(\theta,\psi)f(x|\theta,\psi)\, \text{d}\psi\text{d}\theta}\, \dfrac{\pi_1(\theta_0)}{\pi_1(\theta_0)}\,, $$ where the right hand side obviously is independent of the choice of the version of $\pi_1(\theta_0)$, the numerator can be seen as involving a specific version in $\theta=\theta_0$ of the marginal posterior density $$ \tilde\pi_1(\theta|x) \propto \int \pi_0(\psi) f(x|\theta,\psi) \,\text{d}\psi\,\pi_1(\theta)\,, $$ which is associated with the alternative prior $\tilde\pi_1(\theta,\psi)=\pi_1(\theta)\pi_0(\psi)$. Indeed, this density $\tilde\pi_1(\theta|x)$ appears as the marginal posterior density of the posterior distribution defined by the density $$ \tilde\pi_1(\theta,\psi|x) = \dfrac{ \pi_0(\psi) \pi_1(\theta) f(x|\theta,\psi) }{\tilde{m}_1(x)}\,, $$ where $\tilde m_1(x)$ is the proper normalising constant of the joint posterior density. 
In order to guarantee a Savage--Dickey-like representation of the Bayes factor, the appropriate version of the marginal posterior density in $\theta=\theta_0$, $\tilde\pi_1(\theta_0|x)$, is obtained by imposing \begin{equation}\label{eq:psudopost} \dfrac{\tilde\pi_1(\theta_0|x)}{\pi_0(\theta_0)} = \dfrac{\int \pi_0(\psi) f(x|\theta_0,\psi) \,\text{d}\psi}{\tilde{m}_1(x)}\,, \end{equation} where, once again, the right hand side of the equation is uniquely defined. This constraint amounts to imposing that Bayes' theorem holds in $\theta=\theta_0$ instead of almost everywhere (and thus not necessarily in $\theta=\theta_0$). It then leads to the alternative representation $$ B_{01}(x) = \dfrac{\tilde\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,\dfrac{\tilde{m}_1(x)}{m_1(x)}\,, $$ which holds for any value chosen for $\pi_1(\theta_0)$ provided condition \eqref{eq:psudopost} applies. This new representation may seem to be only formal, since both $m_1(x)$ and $\tilde m_1(x)$ are usually unavailable in closed form, but we can take advantage of the fact that the bridge sampling identity of \cite{torrie:valleau:1977} (see also \citealp{gelman:meng:1998}) gives an unbiased estimator of $\tilde m_1(x)/{m}_1(x)$ since $$ \mathbb{E}^{\pi_1(\theta,\psi|x)} \left[ \dfrac{\pi_0(\psi)\pi_1(\theta) f(x|\theta,\psi) } {\pi_1(\theta,\psi)f(x|\theta,\psi) } \right] = \mathbb{E}^{\pi_1(\theta,\psi|x)} \left[ \dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta)} \right] = \dfrac{\tilde m_1(x)}{{m}_1(x)}\,. $$ In conclusion, we obtain the representation \begin{equation} B_{01}(x) = \dfrac{\tilde\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\, \mathbb{E}^{\pi_1(\theta,\psi|x)} \left[ \dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta)}\right]\,, \label{eq:mr09} \end{equation} whose expectation part is uniquely defined (in that it does not depend on the choice of a version of the densities involved therein), while the first ratio must satisfy condition \eqref{eq:psudopost}. 
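As a sanity check on a model where everything is available in closed form, consider a purely illustrative conjugate Gaussian toy (not taken from the paper): $x|\theta,\psi\sim\mathcal{N}(\theta+\psi,1)$, independent $\mathcal{N}(0,1)$ priors on $\theta$ and $\psi$ under $\mathfrak{M}_1$, $\pi_0(\psi)=\pi_1(\psi|\theta_0)=\mathcal{N}(0,1)$, and $\theta_0=0$. The Savage--Dickey ratio computed with the almost-everywhere version of the marginal posterior density then agrees with $B_{01}(x)$, as a quick sympy computation confirms:

```python
import sympy as sp

x = sp.symbols('x', real=True)

def N(v, var):
    # density of a centred normal with variance `var`, evaluated at v
    return sp.exp(-v**2 / (2 * var)) / sp.sqrt(2 * sp.pi * var)

# B01 = int N(x; psi, 1) N(psi; 0, 1) dpsi / m1(x) = N(x; 0, 2) / N(x; 0, 3)
B01 = N(x, 2) / N(x, 3)

# Savage-Dickey ratio pi_1(theta_0|x)/pi_1(theta_0) at theta_0 = 0:
# the marginal posterior of theta is N(x/3, 2/3), the prior is N(0, 1)
sd_ratio = N(x / sp.Integer(3), sp.Rational(2, 3)) / N(0, 1)
```

Both sides reduce to $\sqrt{3/2}\,e^{-x^2/12}$, so the representation holds for this (well-behaved) choice of versions.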
We further note that this representation clearly differs from Verdinelli and Wasserman's (\citeyear{verdinelli:wasserman:1995}) representation: \begin{equation} B_{01}(x)=\dfrac{\pi_1(\theta_0|x)}{\pi_1(\theta_0)}\,\mathbb{E}^{\pi_1(\psi|x,\theta_0)}\left[\dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta_0)}\right]\,, \label{eq:vw05} \end{equation} since \eqref{eq:vw05} uses a specific version of the marginal posterior density on $\theta$ in $\theta_0$, as well as a specific version of the full conditional posterior density of $\psi$ given $\theta_0$. \section{Computational solutions}\label{sec:montecarl} In this Section, we consider the computational implications of the above representation in the specific case of latent variable models, namely under the practical possibility of a data completion by a latent variable $z$ such that $$ f(x|\theta,\psi) = \int f(x|\theta,\psi,z)f(z|\theta,\psi)\,\text{d}z $$ when $\pi_1(\theta|x,\psi,z) \propto \pi_1(\theta) f(x|\theta,\psi,z)$ is available in closed form, including the normalising constant. \vspace{0.5cm} We first consider a computational solution that approximates the Bayes factor based on our novel representation (\ref{eq:mr09}). Given a sample $(\bar \theta^{(1)},\bar \psi^{(1)},\bar z^{(1)}),\allowbreak \ldots, \allowbreak(\bar \theta^{(T)},\bar \psi^{(T)},\bar z^{(T)})$ simulated from (or converging to) the augmented posterior distribution $\tilde\pi_1(\theta,\psi,z|x)$, the sequence $$ \dfrac{1}{T}\,\sum_{t=1}^T \tilde\pi_1(\theta_0|x,\bar z^{(t)},\bar \psi^{(t)}) $$ converges to $\tilde\pi_1(\theta_0|x)$ as $T\to\infty$ under the following constraint on the selected version of $\tilde\pi_1(\theta_0|x,z,\psi)$ used therein: $$ \dfrac{\tilde\pi_1(\theta_0|x,z,\psi)}{\pi_1(\theta_0)}= \dfrac{f(x,z|\theta_0,\psi)}{\int f(x,z|\theta,\psi) \pi_1(\theta)\,\text{d}\theta}\,, $$ which again amounts to imposing that Bayes' theorem holds in $\theta=\theta_0$ for $\tilde\pi_1(\theta|x,z,\psi)$ rather than almost everywhere.
(Note once more that the right hand side is uniquely defined, i.e.~that it does not depend on a specific version.) Therefore, provided iid or MCMC simulations from the joint target $\tilde\pi_1(\theta,\psi,z|x)$ are available, the converging approximation to the Bayes factor $B_{01}(x)$ is then $$ \dfrac{1}{T}\sum_{t=1}^T \dfrac{\tilde\pi_1(\theta_0|x,\bar z^{(t)},\bar \psi^{(t)})}{\pi_1(\theta_0)}\, \dfrac{\tilde{m}_1(x)}{m_1(x)}\,. $$ (We stress that the simulated sample is produced for the artificial target $\tilde\pi_1(\theta,\psi,z|x)$ rather than the true posterior $\pi_1(\theta,\psi,z|x)$ if $\tilde\pi_1(\theta,\psi)\ne\pi_1(\theta,\psi)$.) Moreover, if $(\theta^{(1)},\psi^{(1)}), \allowbreak \ldots, \allowbreak(\theta^{(T)},\psi^{(T)})$ is a sample independently simulated from (or converging to) $\pi_1(\theta,\psi|x)$, then $$ \dfrac{1}{T}\,\sum_{t=1}^T \dfrac{\pi_0(\psi^{(t)})}{\pi_1(\psi^{(t)}|\theta^{(t)})} $$ is a convergent and unbiased estimator of $\tilde{m}_1(x)/m_1(x)$. Therefore, the computational solution associated with our representation \eqref{eq:mr09} of $B_{01}(x)$ leads to the following unbiased estimator of the Bayes factor: \begin{equation}\label{eq:arrox-mr09} \widehat{B_{01}}^{\text{MR}}(x) = \dfrac{1}{T}\, \sum_{t=1}^T \dfrac{\tilde\pi_1(\theta_0|x,\bar z^{(t)},\bar\psi^{(t)})}{\pi_1(\theta_0)}\, \dfrac{1}{T}\,\sum_{t=1}^T \dfrac{\pi_0(\psi^{(t)})}{\pi_1(\psi^{(t)}|\theta^{(t)})}\,. \end{equation} Note that $$ \mathbb{E}^{\tilde\pi_1(\theta,\psi|x)} \left[ \dfrac{\pi_1(\theta,\psi)f(x|\theta,\psi) } {\pi_0(\psi)\pi_1(\theta) f(x|\theta,\psi) } \right] = \mathbb{E}^{\tilde\pi_1(\theta,\psi|x)} \left[ \dfrac{\pi_1(\psi|\theta)} {\pi_0(\psi)} \right] = \dfrac{m_1(x)}{\tilde{m}_1(x)} $$ implies that $$ T \bigg/ \sum_{t=1}^T \dfrac{\pi_1(\bar \psi^{(t)}|\bar \theta^{(t)})}{\pi_0(\bar \psi^{(t)})} $$ is another convergent (if biased) estimator of $\tilde{m}_1(x)/m_1(x)$.
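The two ratio estimators can be checked against each other on a purely illustrative conjugate Gaussian toy model (no latent variable, not taken from the paper): $x|\theta,\psi\sim\mathcal{N}(\theta+\psi,1)$, $\pi_1(\theta,\psi)=\mathcal{N}(0,1)\times\mathcal{N}(0,1)$ and $\pi_0(\psi)=\mathcal{N}(0,2)$, for which $m_1(x)=\mathcal{N}(x;0,3)$, $\tilde m_1(x)=\mathcal{N}(x;0,4)$ and both posteriors are Gaussian with known parameters:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x, T = 1.0, 200_000

# closed-form marginal likelihoods: m_1(x) = N(x; 0, 3), m~_1(x) = N(x; 0, 4)
true_ratio = norm.pdf(x, 0, 2.0) / norm.pdf(x, 0, np.sqrt(3.0))

# exact posterior pi_1(theta, psi | x): Gaussian, mean (x/3, x/3)
mean1 = np.array([x / 3, x / 3])
cov1 = np.array([[2 / 3, -1 / 3], [-1 / 3, 2 / 3]])
th, ps = rng.multivariate_normal(mean1, cov1, size=T).T

# unbiased bridge estimator of m~_1/m_1; here pi_1(psi|theta) = N(0, 1)
est_unbiased = np.mean(norm.pdf(ps, 0, np.sqrt(2.0)) / norm.pdf(ps, 0, 1.0))

# exact posterior pi~_1(theta, psi | x) under the prior pi_1(theta) pi_0(psi)
mean2 = np.array([x / 4, x / 2])
cov2 = np.array([[0.75, -0.5], [-0.5, 1.0]])
thb, psb = rng.multivariate_normal(mean2, cov2, size=T).T

# reciprocal (biased but convergent) estimator of the same ratio
est_recip = T / np.sum(norm.pdf(psb, 0, 1.0) / norm.pdf(psb, 0, np.sqrt(2.0)))
```

Both estimates fall within Monte Carlo error of the exact ratio, and comparing them is exactly the kind of coherence check discussed next.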
The availability of two estimates of the ratio $\tilde{m}_1(x)/m_1(x)$ is a major bonus from a computational point of view since the comparison of both estimators may allow for the detection of infinite variance estimators, as well as for coherence of the approximations. The first approach requires two simulation sequences, one from $\tilde\pi_1(\theta,\psi|x)$ and one from $\pi_1(\theta,\psi|x)$, but this is a void constraint in that, if $H_0$ is rejected, a sample from the alternative hypothesis posterior will be required no matter what. Although we do not pursue this possibility in the current paper, note that a comparison of the different representations (including Verdinelli and Wasserman's, 1995, as exposed below) could be conducted by expressing them in the bridge sampling formalism \citep{gelman:meng:1998}. \vspace{0.5cm} We now consider a computational solution that approximates the Bayes factor and is based on \cite{verdinelli:wasserman:1995}'s representation (\ref{eq:vw05}). Given a sample $(\theta^{(1)}, \psi^{(1)},z^{(1)}),\allowbreak \ldots, \allowbreak(\theta^{(T)},\psi^{(T)},z^{(T)})$ simulated from (or converging to) $\pi_1(\theta,\psi,z|x)$, the sequence $$ \dfrac{1}{T}\,\sum_{t=1}^T \pi_1(\theta_0|x,z^{(t)},\psi^{(t)}) $$ converges to $\pi_1(\theta_0|x)$ under the following constraint on the selected version of $\pi_1(\theta_0|x,z,\psi)$ used there: $$ \dfrac{\pi_1(\theta_0|x,z,\psi)}{\pi_1(\theta_0)}= \dfrac{f(x,z|\theta_0,\psi)}{\int f(x,z|\theta,\psi) \pi_1(\theta)\,\text{d}\theta}\,. 
$$ Moreover, if $\left(\tilde \psi^{(1)},\tilde z^{(1)}\right),\ldots,\left(\tilde \psi^{(T)},\tilde z^{(T)}\right)$ is a sample generated from (or converging to) $\pi_1(\psi,z|x,\theta_0)$, the sequence $$ \frac{1}{T}\,\sum_{t=1}^T\frac{\pi_0(\tilde\psi^{(t)})}{\pi_1(\tilde\psi^{(t)}|\theta_0)} $$ converges to $$ \mathbb{E}^{\pi_1(\psi|x,\theta_0)}\left[\dfrac{\pi_0(\psi)}{\pi_1(\psi|\theta_0)}\right] $$ under the constraint $ \pi_1(\psi,z|\theta_0,x) \propto f(x,z|\theta_0,\psi) \pi_1(\psi|\theta_0)\,. $ Therefore, the computational solution associated with the representation (\ref{eq:vw05}) of \cite{verdinelli:wasserman:1995} leads to the following unbiased estimator of the Bayes factor: \begin{equation}\label{eq:arrox-vw05} \widehat{B_{01}}^{\text{VW}}(x) = \dfrac{1}{T}\,\sum_{t=1}^T \dfrac{\pi_1(\theta_0|x,z^{(t)},\psi^{(t)})}{\pi_1(\theta_0)}\, \dfrac{1}{T}\,\sum_{t=1}^T \dfrac{\pi_0(\tilde\psi^{(t)})}{\pi_1(\tilde\psi^{(t)}|\theta_0)}\,. \end{equation} Although, at first sight, the approximations \eqref{eq:arrox-mr09} and \eqref{eq:arrox-vw05} may look very similar, the simulated sequences used in both approximations differ: the former involves simulations from $\tilde\pi_1(\theta,\psi,z|x)$ and from $\pi_1(\theta,\psi,z|x)$, respectively, while the latter relies on simulations from $\pi_1(\theta,\psi,z|x)$ and from $\pi_1(\psi,z|x,\theta_0)$, respectively. \section{An illustration} Although our purpose in this note is far from advancing the superiority of the Savage--Dickey type representations for Bayes factor approximation, given the wealth of available solutions for embedded models \citep{chen:shao:ibrahim:2000, marin:robert:2010}, we briefly consider an example where both Verdinelli and Wasserman's (1995) proposal and our own apply.
The model is the Bayesian posterior distribution of the regression coefficients of a probit model, following the prior modelling adopted in \cite{marin:robert:2007} that extends \citeauthor{zellner:1986}'s (\citeyear{zellner:1986}) $g$-prior to generalised linear models. We take as data the Pima Indian diabetes study, available as an R \citep{rmanual} dataset with 332 women registered, and build a probit model predicting the presence of diabetes from three predictors, the glucose concentration, the diastolic blood pressure and the diabetes pedigree function, assessing the impact of the diabetes pedigree function, i.e.~testing the nullity of the coefficient $\theta$ associated with this variable. For more details on the statistical and computational issues, see \cite{marin:robert:2010}, since this paper relies on the Pima Indian probit model as a benchmark. This probit model is a natural setting for completion by a truncated normal latent variable \citep{albert:chib:1993b}. We can thus easily implement a Gibbs sampler to produce output from all the posterior distributions considered in the previous Section. Besides, in that case, the conditional distribution $\pi_1(\theta|x,\psi,z)$ is a normal distribution with closed form parameters. It is therefore straightforward to compute the unbiased estimators \eqref{eq:arrox-mr09} and \eqref{eq:arrox-vw05}. Figure \ref{fig:bfbsmrvwchiis} compares the variation of this approximation with other standard solutions covered in \cite{marin:robert:2010} for the same example, namely the regular importance sampling approximation based on the MLE asymptotic distribution, Chib's version based on the same completion, and a bridge sampling \citep{gelman:meng:1998} solution completing $\pi_0(\cdot)$ with the full conditional being derived from the conditional MLE asymptotic distribution. The boxplots are all based on 100 replicates of $T=20,000$ simulations.
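The Albert--Chib data augmentation underlying these Gibbs samplers can be sketched as follows. This is a minimal illustration on synthetic data: the plain Gaussian prior, design, sample size and coefficients are illustrative assumptions and do not reproduce the $g$-prior of \cite{marin:robert:2007} or the actual Pima Indian dataset.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# synthetic probit data (illustrative; not the Pima Indian study)
n = 300
beta_true = np.array([0.5, 1.5, -1.0])
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

tau2 = 100.0                                   # vague N(0, tau2 I) prior on beta
V = np.linalg.inv(X.T @ X + np.eye(3) / tau2)  # posterior covariance given z
L = np.linalg.cholesky(V)

beta, draws = np.zeros(3), []
for t in range(600):
    # z_i | beta, y_i: N(x_i' beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
    m = X @ beta
    a = np.where(y == 1, -m, -np.inf)          # standardized lower bounds
    b = np.where(y == 1, np.inf, -m)           # standardized upper bounds
    z = truncnorm.rvs(a, b, loc=m, scale=1.0, random_state=rng)
    # beta | z: Gaussian with closed-form parameters (the tractable conditional)
    beta = V @ (X.T @ z) + L @ rng.standard_normal(3)
    if t >= 100:                               # discard burn-in
        draws.append(beta)

post_mean = np.mean(draws, axis=0)
```

The same two conditional moves, run with the appropriate priors and with $\theta$ fixed at $\theta_0$ where required, produce all the simulated sequences entering \eqref{eq:arrox-mr09} and \eqref{eq:arrox-vw05}.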
While the estimators \eqref{eq:arrox-mr09} and \eqref{eq:arrox-vw05} are not as accurate as Chib's version and the importance sampler in this specific case, their variabilities remain of a reasonable order and are very comparable. The R code and the reformatted datasets used in this Section are available at the following address: \verb+http://www.math.univ-montp2.fr/~marin/savage/dickey.html+. \begin{figure} \includegraphics[width=.6\textwidth]{bfbsmrvwchiis.pdf} \caption{\label{fig:bfbsmrvwchiis} Comparison of the variabilities of five approximations of the Bayes factor evaluating the impact of the diabetes pedigree covariate upon the occurrence of diabetes in the Pima Indian population, based on a probit modelling. The boxplots are based on $100$ replicates and the Savage--Dickey representation proposed in the current paper is denoted by MR, while Verdinelli and Wasserman's (1995) version is denoted by VW.} \end{figure} \section*{Acknowledgements} The authors are grateful to H.~Doss and J.~Rousseau for helpful discussions, as well as to M.~Kilbinger for bringing the problem to their attention. Comments from the editorial team were also most useful to improve our exposition of the Savage--Dickey paradox. The second author also thanks Geoff Nicholls for pointing out the bridge sampling connection at the CRiSM workshop at the University of Warwick, May 31, 2010. This work has been supported by the Agence Nationale de la Recherche (ANR, 212, rue de Bercy 75012 Paris) through the 2009-2012 project {\sf Big'MC}. \input MR10.bbl \end{document}
\section{Introduction} The semiclassical or WKB approximation is usually discussed in textbooks on nonrelativistic quantum mechanics in the context of stationary states, i.e., determination of the energy eigenvalues and eigenfunctions, \cite{qm}. This approximation can also be used to obtain approximate and in some cases exact solutions of the dynamical problem, i.e., full Schr\"odinger equation. To the best of my knowledge, however, the utility of the semiclassical approximation in obtaining exact solutions of the Schr\"odinger equation has not been fully explored. The same seems to be the case for the relativistic quantum mechanics. The importance of the semiclassical approximation in the relativistic case is probably best appreciated in quantum cosmology, \cite{page,wiltshire}, specifically, in the analysis of the Wheeler-DeWitt equation which is essentially a Klein-Gordon equation on a superspace, \cite{dewitt}. In more general terms, the semiclassical approximation is usually viewed as an approximation scheme in which one neglects all but the first term in an asymptotic perturbation expansion of the solution of a linear differential equation. Typical examples of such an asymptotic expansion are the loop expansions of quantum mechanics and field theory where the perturbation parameter is the Planck constant\footnote{Here I have assumed that the kinetic term in the Lagrangian does not involve a coupling constant. For example in nonrelativistic quantum mechanics in a Euclidean space, this corresponds to the case where the mass $m$ of the particle is set to unity. Otherwise, the perturbation parameter for the loop expansion is $\hbar/\sqrt{m}$.} $\hbar$, \cite{qm}. In the context of quantum cosmology the relevant perturbation parameter is the gravitational coupling constant (or inverse of the Planck mass $M_p$), \cite{singh,keifer,kim}. Usually, these perturbation expansions are singular and it is difficult, if not impossible, to obtain their precise structure. 
There is a more universal alternative for defining the semiclassical approximation where the validity of the approximation is not linked with the values of the physical constants but determined by the properties of the wave function. In this approach one uses the polar representation of the wave function \begin{equation} \psi({\bf x};t)=R({\bf x};t)\:e^{iS({\bf x};t)/\hbar}\;, \label{polar} \end{equation} and obtains two coupled nonlinear differential equations for the amplitude $R$ and the phase (angle) $S$ of $\psi$ by substituting (\ref{polar}) in the dynamical equation. As it is demonstrated for the Schr\"odinger and Klein-Gordon equations in sections~2 and~3 below, there emerges a quantity called the quantum potential $Q$ which controls the coupling of these two equations. In other words, if $Q$ which depends only on $R$ happens to be negligible, then one of the equations decouples from the other. The decoupled equation which only involves $S$ turns out to satisfy a Hamilton-Jacobi equation. Thus, for $Q=0$, $S$ can be identified with the classical action function of the corresponding classical theory. This observation is originally due to Bohm \cite{d-bohm}. It provides the basis for the de~Broglie-Bohm causal or ontological interpretation of quantum mechanics, \cite{causal}. The latter has recently been applied to problems of quantum cosmology by several authors, \cite{db-qc}. The idea of the quantum potential leads to a precise criterion for the validity of the semiclassical approximation, namely the condition $Q\approx 0$. More precisely, one has the following \begin{itemize} \item[]{\bf Definition}:~ {\em A wave function is said to be semiclassical if the corresponding quantum potential vanishes identically.} \end{itemize} Note that the quantum potential $Q$ is determined by the amplitude $R$ of the wave function. Thus, the validity of the semiclassical approximation has nothing to do with the value of the physical constants which are fixed by nature. 
It is solely decided on the basis of the particular form of the wave function. This in turn depends on the interaction potential and the boundary conditions of the problem. The purpose of this article is to derive the necessary and sufficient conditions on the interaction potential and the boundary conditions under which the dynamical equations, namely the Schr\"odinger equation in the nonrelativistic case and Klein-Gordon equation in the relativistic case, are exactly solved by a semiclassical wave function. This is done in sections~2 and~3. Here the problem of the classification of all potentials which allow for exact semiclassical wave functions is solved. Section~4 includes a detailed analysis of the $(1+1)$-dimensional Klein-Gordon equation. The results are then applied in section~5 for the study of solutions of the Wheeler-DeWitt equation for FRW scalar field minisuperspace models. Here several exact semiclassical solutions are constructed. In section~6, the ideas and the results of the preceding sections are used to develop a novel semiclassical perturbation theory. The latter yields the semiclassical approximation in the zero-th order of the perturbation theory. The higher order corrections are shown to satisfy linear differential equations with vanishing boundary conditions. In this way the information about the boundary conditions of the original problem is included in the zero-th (semiclassical) terms and the definition of the perturbation potential. The proposed semiclassical perturbation expansion is quite different from the traditional $\hbar$ and $M_p^{-1}$ expansions used in quantum mechanics and quantum cosmology. 
\section{Nonrelativistic QM: Schr\"odinger Equation} Consider the Schr\"odinger equation \begin{eqnarray} &&i\hbar\frac{d}{d t}\,\psi(t)=\hat H\psi(t)\;,~~~~\psi(0)=\psi_0 \label{sch-eq}\\ &&\hat H=\frac{1}{2m}\,\left[\hat{\bf p}-{\bf A}(\hat{\bf x};t)\right]^2+ V(\hat{\bf x};t)\;,\nonumber \end{eqnarray} where $\psi$ is a state vector represented in the position representation by the complex scalar wave function $\langle {\bf x}|\psi(t)\rangle=\psi({\bf x};t)$, ${\bf A}$ is an electromagnetic vector potential, and $V$ is a scalar interaction potential. Inserting Eq.~(\ref{polar}) in the Schr\"odinger equation (\ref{sch-eq}) and making use of $\langle {\bf x}|\hat {\bf p}=-i\hbar{\bf\nabla}\langle {\bf x}|$, one obtains \begin{eqnarray} &&\partial_t S({\bf x};t)+H({\bf x},{\bf p}_*;t)+Q({\bf x};t)=0\;, \label{q-hj-eq}\\ && \partial_t\rho({\bf x};t)+{\bf \nabla}\cdot{\bf J}({\bf x};t)=0\;, \label{conti} \end{eqnarray} where $H=H({\bf x},{\bf p};t)$ is the classical Hamiltonian, $\rho:=R^2$, ${\bf p}_*:={\bf\nabla} S$, $Q:=-\hbar^2{\bf\nabla}^2 R/(2 m R)$ is the quantum potential, and ${\bf J}:=\rho {\bf v}_*$, with ${\bf v}_*:= ({\bf p}_*-{\bf A})/m$, is the probability current. Eq.~(\ref{q-hj-eq}) is the quantum analog of the Hamilton-Jacobi equation \begin{equation} \partial_t S({\bf x};t)+H({\bf x},{\bf p}_*;t)=0\;, \label{hj-eq} \end{equation} of classical mechanics, \cite{goldstein}. It differs from the latter because of the presence of the quantum potential $Q$. Eq.~(\ref{conti}) is the continuity equation signifying the conservation of the probabilities. According to the above definition, the semiclassical or WKB approximation provides the exact solution of the Schr\"odinger equation if and only if, in addition to Eqs.~(\ref{q-hj-eq}) and (\ref{conti}), one has \begin{equation} Q:=\frac{-\hbar^2}{2m}\:\frac{{\bf\nabla}^2 R}{R}=0~~~ \Longleftrightarrow~~~{\bf\nabla}^2 R=0\;, \label{condi} \end{equation} i.e., $R$ is a solution of the Laplace equation.
In this case, Eq.~(\ref{q-hj-eq}) reduces to the Hamilton-Jacobi equation (\ref{hj-eq}). Therefore, the necessary and sufficient conditions for the exactness of the semiclassical approximation are (\ref{hj-eq}), (\ref{conti}), and (\ref{condi}). These equations may be viewed as three partial differential equations for the three unknown functions $R$, $S$ and $V$. Eq.~(\ref{condi}) does not involve time-derivatives. It is really a constraint equation which can be independently solved. Solving Eq.~(\ref{condi}) and substituting the result in (\ref{conti}), one finds a first order equation for ${\bf v}_*$ which in turn yields ${\bf p}_*$. This leads to another first order differential equation for $S$. The potential $V$ is then obtained by solving the latter equation and substituting the result in Eq.~(\ref{hj-eq}). There is an alternative way of solving the continuity equation (\ref{conti}) which involves writing it explicitly in terms of $S$, namely, considering the solution of \begin{equation} {\bf \nabla}\cdot (R^2{\bf \nabla} S)=R f\;, \label{conti-s0} \end{equation} where $f:=-2m\partial_t R+2{\bf\nabla}R\cdot{\bf A}+ R {\bf\nabla} \cdot{\bf A}$. Now, let us define $\tilde S:=RS$. Then, it is an easy exercise to show that $\tilde S$ is the solution of the following Poisson equation \begin{equation} {\bf \nabla}^2\tilde S=f\;. \label{conti-s1} \end{equation} Here I have used in addition to Eq.~(\ref{conti-s0}) the constraint equation (\ref{condi}). Hence, $S$ is given by a solution of the Poisson equation (\ref{conti-s1}) divided by a (non-zero) solution of the Laplace equation (\ref{condi}). Note that both of these equations are second order elliptic differential equations with well-posed boundary value problems. Thus, $R$, $S$ and consequently $V$ are uniquely determined by the boundary conditions. 
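The reduction from Eq.~(\ref{conti-s0}) to the Poisson equation (\ref{conti-s1}) rests on the identity ${\bf\nabla}\cdot(R^2{\bf\nabla}S)=R\,{\bf\nabla}^2(RS)$, valid whenever $R$ is harmonic. A quick sympy computation confirms this on an arbitrary example (the particular $R$ and $S$ below are illustrative choices, in two dimensions):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

R = x * y                  # any harmonic amplitude: laplacian(R) = 0
S = sp.sin(x) * sp.exp(y)  # an arbitrary smooth phase

def lap(f):
    # two-dimensional Laplacian
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

# div(R^2 grad S) versus R * laplacian(R * S)
lhs = sp.diff(R**2 * sp.diff(S, x), x) + sp.diff(R**2 * sp.diff(S, y), y)
rhs = R * lap(R * S)
```

Indeed, ${\bf\nabla}\cdot(R^2{\bf\nabla}S)=R^2{\bf\nabla}^2S+2R\,{\bf\nabla}R\cdot{\bf\nabla}S$ while ${\bf\nabla}^2(RS)=R{\bf\nabla}^2S+2{\bf\nabla}R\cdot{\bf\nabla}S+S{\bf\nabla}^2R$, so the two sides differ only by the term $RS\,{\bf\nabla}^2R$, which vanishes under the constraint (\ref{condi}).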
This solves the problem of the classification of all nonrelativistic (scalar) quantum systems with an exact semiclassical solution for the Schr\"odinger equation by relating the latter to the boundary conditions of the Laplace and Poisson equations. It is also important to note that these boundary conditions may in general depend on time, which appears in the corresponding equations as a parameter. In order to demonstrate the utility of these findings in concrete terms, I shall next consider the case where the classical configuration space is one-dimensional. Here one can proceed according to the former approach, integrating the continuity equation (\ref{conti}) by first solving for ${\bf v}_*$. \subsection{One-Dimensional Configuration Spaces} Consider a quantum system whose configuration space is the interval $[x_1,x_2]\subset\relax{\rm I\kern-.18em R}$, and let the boundary conditions on the solution of the Schr\"odinger equation (\ref{sch-eq}) be given by $\psi(x_1,t)=\psi_1(t)=:R_1(t)\exp[iS_1(t)/\hbar],~\psi(x_2,t)=\psi_2(t) =:R_2(t)\exp[iS_2(t)/\hbar]$. In this case, Eq.~(\ref{condi}) becomes $\partial_x^2 R=0$ which yields \begin{equation} R=a(t)x+b(t)\;, \label{condi-1} \end{equation} where \begin{equation} a(t)=\frac{R_1(t)-R_2(t)}{x_1-x_2}\:,~~~~~b(t)= \frac{x_1R_2(t)-x_2R_1(t)}{x_1-x_2}\;.
\label{a-b} \end{equation} Substituting (\ref{condi-1}) in Eq.~(\ref{conti}) and integrating the resulting differential equation, one obtains \begin{eqnarray} v_*&=&\frac{1}{(ax+b)^2}\left[-\frac{2}{3}a\partial_t a\: x^3- (a\partial_t b+b\partial_t a)x^2-2b\partial_t b\: x+c\right]\;, \label{v=}\\ S&=&d +m\left[-(\frac{2a\partial_t a}{3})I_3 -(a\partial_t b+b\partial_t a)I_2-2b\partial_t b\: I_1+c\: I_0\right] +\int A dx\;, \label{s=} \end{eqnarray} where $c=c(t)$ and $d=d(t)$ are functions of time determined by equating the right hand side of (\ref{s=}) with $S_1$ and $S_2$ at $x=x_1$ and $x=x_2$, respectively, and \[I_k:=\int \frac{x^k dx}{(ax+b)^2}\;,~~~~k=0,1,2,3\;.\] More explicitly, one has \begin{eqnarray} I_0&=&\frac{-1}{a(ax+b)}\:,~~~I_1\:=\:\frac{1}{a^2}\left[\ln|ax+b|+ \frac{b}{ax+b}\right]\;,~~~ I_2\:=\:\frac{1}{a^3}\left[ax+b-2b\ln|ax+b|-\frac{b^2}{ax+b}\right]\;,\nonumber\\ I_3&=&\frac{1}{a^4}\left[\frac{1}{2}(ax+b)^2-3b(ax+b)+3b^2\ln|ax+b|+ \frac{b^3}{ax+b}\right]\;.\nonumber \end{eqnarray} The potential is then obtained using Eq.~(\ref{hj-eq}), namely \begin{equation} V=-\partial_t S-\frac{m}{2}v_*^2\;. \label{V=} \end{equation} It has the following general form: \[V=C_0(t)\ln[a(t)x+b(t)]+ \frac{\sum_{\ell=0}^6 C_\ell(t)x^\ell}{[a(t)x+b(t)]^4} -\int \partial_t A(x;t) dx\;,\] where $C_\ell$, with $\ell=0,1,\cdots,6$, depend on $a$, $b$, $c$, and $d$. In view of the above analysis, one can reach the following conclusions: \begin{itemize} \item[---] The condition of the exactness of the semiclassical approximation together with the boundary conditions determine both the semiclassical wave function and the potential uniquely; \item[---] For $x_1\to-\infty$ and $x_2\to\infty$, i.e., for a particle in $\relax{\rm I\kern-.18em R}$, a smooth semiclassical wave function is not normalizable. It corresponds to a scattering state; \item[---] More general exact semiclassical wave functions may be obtained by allowing a countable number of discontinuities.
The effect of these discontinuities is to divide the interval $[x_1,x_2]$ into a collection of subintervals in the interior of which the wave function and potential are given by the above expressions. The boundary conditions corresponding to each subinterval can be chosen freely. They determine the global structure of the wave function and the potential which can now be more complicated. This observation can also be used to devise an approximation scheme for the solution of the Schr\"odinger equation, by approximating the solution by a locally semiclassical one. \end{itemize} Next, let us consider the following special cases: \begin{itemize} \item[1)] { {\em Constant Boundary Conditions}: $\partial_t\psi_1= \partial_t\psi_2=0$ with $R_1\neq R_2$}\\ In this case, $a$ and $b$ do not depend on time. This simplifies the above formulae considerably. One has: \begin{eqnarray} v_*&=&\frac{c(t)}{(a x+b)^2}\;,~~~S\:=\:d(t)-\frac{m c(t)}{a(ax+b)} +\int A(x)dx\;, \label{S1=}\\ V&=&-\partial_t d(t)+\frac{m \partial_tc(t)}{a(ax+b)}- \frac{mc(t)^2}{2(ax+b)^4}-\int \partial_t A(x)dx\;, \label{V1=} \end{eqnarray} where \begin{eqnarray} c&=&\frac{(S_2-S_1)+(\gamma_1-\gamma_2)}{\zeta_1-\zeta_2}\:,~~~ d\:=\:\frac{(\zeta_1S_2-\zeta_2 S_1)+(\zeta_2\gamma_1-\zeta_1\gamma_2)}{ \zeta_1-\zeta_2}\:,\nonumber\\ \gamma_i&:=&\gamma(x_i)\;,~~~\gamma(x)\::=\: \int A(x)dx\;,~~~ \zeta_i\::=\:\frac{m}{a(ax_i+b)}\:,~~~~~{\rm with}~~~i=1,2.\nonumber \end{eqnarray} In particular, for $A=0$, $\gamma_i$ vanish and $c$ and $d$ are constant. This leads to a further simplification of Eqs.~(\ref{S1=}) and (\ref{V1=}) and yields \[S=d-\frac{m c}{a(ax+b)}\;,~~~~ V=-\frac{mv_*^2}{2}=-\frac{mc^2}{2(a x+b)^4}\;.\] In this case both the potential and the action function turn out to be time-independent. This corresponds to a semiclassical zero energy eigenfunction. \item[2)] {{\em Amplitude-Periodic Boundary Conditions}, i.e., $R_1(t)=R_2(t)$} \\ In this case, $a=0$ and $R=b(t)$. 
Then, a similar analysis leads to: \begin{eqnarray} v_*&=&\frac{-2b\partial_t b\:x+c}{b^2}\;,~~~ S\:=\:d+m\left[-(\partial_t \ln b)\:x^2+\frac{c}{b^2}\:x\right]+\int A\: dx\;,\nonumber\\ V&=&-\partial_t d-m\left[-(\partial_t^2\ln b) x^2+ \partial_t(c/b^2)x\right]-\frac{m}{2}\left[\frac{-2b\partial_t b\:x+c}{b^2}\right]^2 -\int\partial_t A\: dx \;.\nonumber \end{eqnarray} Here, $c$ and $d$ depend on the phases $S_i$ of $\psi_i$ according to: \[c=\frac{\Sigma_2(t)-\Sigma_1(t)}{\alpha_2(t)-\alpha_1(t)}\;,~~~ d=\frac{\Sigma_1(t)\alpha_2(t)-\Sigma_2(t)\alpha_1(t)}{ \alpha_2(t)-\alpha_1(t)}\;,\] where \[\alpha_i:=\frac{mx_i}{b^2}\;,~~~\Sigma_i:=S_i-\gamma_i+ m(\partial_t\ln b) \:x_i^2\;.\] Note also that in this case, for $A=$ constant, the potential $V$ is at most a quadratic polynomial in $x$. For $b=R_1=R_2=$ constant, $V$ is either a first order or a zero-th order polynomial in $x$. For example, for $b=1$, $S_i=\omega_it$, with $\omega_i$ being real constants, and $A=0$, one has: \begin{eqnarray} c&=&\frac{(\omega_2-\omega_1)t}{m(x_2-x_1)}\;,~~~ d\:=\:\frac{(\omega_1x_2-\omega_2x_1)t}{x_2-x_1}\;,\nonumber\\ S&=&(\frac{t}{x_2-x_1})[(\omega_1x_2-\omega_2x_1)+ (\omega_2-\omega_1)x]\;,\nonumber\\ V&=&(\frac{-1}{x_2-x_1})[(\omega_1x_2-\omega_2x_1)+ (\omega_2-\omega_1)x]-\frac{1}{2m} \left[\frac{(\omega_2-\omega_1)t}{(x_2-x_1)}\right]^2 \;.\nonumber \end{eqnarray} Another interesting case is when $b=e^{\omega t/2}$, for some constant $\omega$. Then, the potential is a quadratic polynomial in $x$ with the coefficient of the quadratic term being a constant, namely, $-m\omega^2/2$. In the latter case, if the phases $S_1$ and $S_2$ and the vector potential $A$ are also time-independent, then $d$ and the combination $\kappa:=c/b^2=c\,e^{-\omega t}$ are time-independent (although $c$ itself is not). Therefore, one has a time-independent quadratic potential, namely \[V=-\frac{m}{2}(\omega x-\kappa)^2\;.\] \end{itemize} \subsection{Multi-Dimensional Configuration Spaces} For an $n$-dimensional configuration space, with $n>1$, the classification of the exact semiclassical wave functions and the corresponding potentials is more involved. This is mainly because in this case the constraint equation (\ref{condi}) is the $n$-dimensional Laplace equation ${\bf \nabla}^2R=0$. In Cartesian coordinates, one can use the method of separation of variables to express $R$ as a sum of the basic solutions \begin{equation} \prod_{i=1}^{n}\left\{a_{i}(t){\cal S} [\kappa_i(t)x^i]+b_{i}(t){\cal C}[\kappa_i(t)x^i]\right\}\;, \label{R-mult} \end{equation} where $(x^1,\cdots,x^n):={\bf x}$, $\kappa_i$, $a_{i}$, and $b_{i}$ are real functions of time which are determined by the boundary conditions on the wave function; the $\kappa_i$ satisfy \begin{equation} \sum_{i=1}^n \eta_i\:\kappa_i^2(t)=0\;, \label{k-condi} \end{equation} with $\eta_i=\pm1$, and ${\cal S}$ (resp.\ ${\cal C}$) stands for either of $\sin$ or $\sinh$ (resp.\ $\cos$ or $\cosh$) depending on whether $\eta_i=-1$ or $+1$, respectively. Clearly, one can choose one of the $\eta_i$'s positive and the others negative. Having found the expression for $R$, one then proceeds by integrating the continuity equation (\ref{conti}) which yields $S$. In view of the above analysis, $S=\tilde S/R$, where $\tilde S$ is a solution of the Poisson equation (\ref{conti-s1}). Using the well-known Green's function methods \cite{jackson} of solving the Poisson equation, one may express $S$ in an integral form.
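The closed-form antiderivatives $I_k$ entering the one-dimensional solution of section~2.1 can be verified by differentiation. A minimal sympy sketch follows (symbols taken positive so that $|ax+b|=ax+b$; note that the $b^2/(ax+b)$ term in $I_2$ must carry a minus sign for $\partial_x I_2=x^2/(ax+b)^2$ to hold):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
u = a * x + b

I = [
    -1 / (a * u),                                                     # I_0
    (sp.log(u) + b / u) / a**2,                                       # I_1
    (u - 2 * b * sp.log(u) - b**2 / u) / a**3,                        # I_2
    (u**2 / 2 - 3 * b * u + 3 * b**2 * sp.log(u) + b**3 / u) / a**4,  # I_3
]

# each derivative should reduce to x^k / (a x + b)^2
checks = [sp.simplify(sp.diff(Ik, x) - x**k / u**2) for k, Ik in enumerate(I)]
```

All four checks reduce to zero, confirming the expressions for $v_*$, $S$ and hence the general form of $V$ quoted above.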
For example, if the configuration space is $\relax{\rm I\kern-.18em R}^3$, then \begin{equation} S({\bf x};t)=\frac{1}{ R({\bf x};t)}\left\{ \tilde S_0({\bf x};t)+ \frac{1}{2\pi}\int d^3x'~\left[\frac{ m\partial_t R({\bf x'};t)-{\bf\nabla}R({\bf x'};t)\cdot{\bf A}({\bf x'};t)- \frac{R({\bf x'};t)}{2} {\bf\nabla} \cdot{\bf A}({\bf x'};t)}{|{\bf x}-{\bf x'}|}\right]\right\}\;, \label{S=int} \end{equation} where $\tilde S_0$ is a solution of the Laplace equation determined by the boundary conditions. As in the one-dimensional case, when the configuration space is $\relax{\rm I\kern-.18em R}^n$, a smooth semiclassical wave function cannot be normalized. More generally, it cannot vanish at infinity, nor can it be localized. This is a direct consequence of Eq.~(\ref{k-condi}). The main difference with the one-dimensional case is that here one has a much richer structure as far as the general form of the wave function and the potential is concerned. Unfortunately, since without knowing the specific form of the boundary conditions one cannot express $R$ and $S$ in a closed form, an explicit classification of the semiclassical wave functions and the corresponding potentials for $n>1$ is not available. Nevertheless, it is evident that by choosing the boundary conditions appropriately one can obtain a large variety of potentials. In order to demonstrate the validity of this claim, I shall next concentrate on the special cases where the amplitude of the semiclassical wave function is independent of ${\bf x}$. A simple example of this is a particle in a cubical cavity of side length $L$ with the boundary conditions: \[\left.\psi\right|_\partial=N(t)\:e^{iS_\partial({\bf x};t)/\hbar}\;,\] where the symbol $\partial$ stands for the boundary of the cavity. In this case, $R=N(t)$ is the unique solution of the constraint equation (\ref{condi}), and ${\bf\nabla}R=0$.
This reduces Eq.~(\ref{conti-s0}) to the simple Poisson equation \begin{equation} {\bf \nabla}^2 S=-2m\partial_t\ln N\;, \label{po-eq} \end{equation} where I have chosen the Coulomb gauge so that ${\bf\nabla}\cdot{\bf A}=0$. Note that the source term on the right-hand side of Eq.~(\ref{po-eq}) only depends on time. Hence, in view of ${\bf\nabla}^2|{\bf x}|^2=2n$, one can define $\check S:=S+\frac{m}{n}\,[\partial_t\ln N]\: |{\bf x}|^2$ and reduce this equation to the Laplace equation \begin{equation} {\bf \nabla}^2 \check S=0\:. \label{laplace} \end{equation} Since the set of solutions of the Laplace equation is in one-to-one correspondence with the set of boundary conditions, which is a very large function space, one obtains a large class of potentials. Next, consider the following simple subcases. \begin{itemize} \item[1)] $\check S_\partial=\left.{\bf {\cal K}}(t)\cdot{\bf x}\right|_\partial$, where ${\bf {\cal K}}$ is an ${\bf x}$-independent vector-valued function of time. Then, $\check S={\bf{\cal K}}(t)\cdot{\bf x}$ clearly satisfies the Laplace equation (\ref{laplace}) and one has: \begin{eqnarray} S&=&{\bf{\cal K}}(t)\cdot{\bf x}-\frac{m}{n}\,[\partial_t\ln N(t)]\: |{\bf x}|^2\nonumber\\ V&=&\frac{m}{n}\,[\partial_t^2\ln N(t)]|{\bf x}|^2-\partial_t{\bf {\cal K}}(t)\cdot{\bf x}- \frac{1}{2m}\left|\frac{2m}{n}\,[\partial_t\ln N(t)]{\bf x}+{\bf A}({\bf x};t)-{\bf {\cal K}}(t)\right|^2\;. \nonumber \end{eqnarray} This case is the multi-dimensional analog of example~2 of section~2.1. \item[2)] Consider the case $n=2$, i.e., a square with boundaries $x^1=:x=0,L$ and $x^2=:y=0,L$, and boundary conditions: $\check S=0$ for $x=0,L,~y=0$, and $\check S=\alpha(t)\sin(\pi x/L)$ for $y=L$.
Then, one can easily show that the solution of the Laplace equation (\ref{laplace}) is given by \[\check S=\frac{\alpha(t)\sin(\pi x/L)\sinh(\pi y/L)}{\sinh\pi}\:.\] Hence, one has \begin{eqnarray} S&=&\frac{\alpha(t)\sin(\pi x/L)\sinh(\pi y/L)}{\sinh\pi}- \frac{m}{2}\,[\partial_t\ln N(t)]\: |{\bf x}|^2\nonumber\\ V&=&\frac{m}{2}\,[\partial_t^2\ln N(t)] |{\bf x}|^2-\frac{\partial_t\alpha(t) \sin(\pi x/L)\sinh(\pi y/L)}{\sinh\pi}-\nonumber\\ &&\frac{\alpha^2(t)}{2m\sinh^2\pi}\left\{ \left[ \frac{\pi}{L}\cos(\pi x/L)\sinh(\pi y/L)- \frac{\sinh(\pi)\left(m[\partial_t\ln N(t)]x+A_x\right)}{\alpha(t)}\right]^2+\right.\nonumber\\ &&\left.\left[ \frac{\pi}{L}\sin(\pi x/L)\cosh(\pi y/L)- \frac{\sinh(\pi)\left(m[\partial_t\ln N(t)]y+A_y\right)}{\alpha(t)}\right]^2\right\}\;.\nonumber \end{eqnarray} These relations show that unlike the one-dimensional case, here the wave function and the potential can be quite complicated. \end{itemize} So far, I have only considered cases for which ${\bf\nabla}R=0$. This is precisely the condition demanded in the traditional semiclassical approximation. It is usually argued that the semiclassical approximation is valid if the amplitude $R$ of the wave function is a slowly varying function of ${\bf x}$. As it is clear from the above analysis, this is only a sufficient condition, not a necessary one. There is an infinite number of examples where $R$ is a rapidly changing function of ${\bf x}$ but the semiclassical approximation is not only valid, but yields the exact result. Specific examples can be constructed by simply choosing $R$ to be a rapidly changing solution of the Laplace equation. For instance, consider a quantum system with the geometry of the preceding example, but with boundary conditions which lead to \begin{itemize} \item[3)] $R(x,y;t)=R(x,y)=N_0\sin(\ell \pi x/L)\sinh(\ell\pi y/L)$ and $S(x,y;t)=S(x,y)={\bf{\cal K}}\cdot{\bf x}/R$, where $N_0$ and $\ell$ are real and integer constants, respectively, and ${\bf{\cal K}}$ is a constant vector.
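The defining properties of this example can be confirmed symbolically; the following sympy sketch (with $n=2$ and ${\bf{\cal K}}=(k_1,k_2)$; the variable names are my own) checks that $R$ is harmonic and that the continuity equation ${\bf\nabla}\cdot(R^2{\bf\nabla}S)=0$ holds:

```python
import sympy as sp

x, y, L, N0, k1, k2 = sp.symbols('x y L N_0 k_1 k_2', positive=True)
ell = sp.Symbol('ell', integer=True, positive=True)

R = N0*sp.sin(ell*sp.pi*x/L)*sp.sinh(ell*sp.pi*y/L)
S = (k1*x + k2*y)/R                       # S = K.x / R with K = (k1, k2)

# constraint equation: Laplace equation for R
laplace_R = sp.diff(R, x, 2) + sp.diff(R, y, 2)
# continuity equation: div(R^2 grad S) = 0 (time-independent R and S, A = 0)
continuity = sp.diff(R**2*sp.diff(S, x), x) + sp.diff(R**2*sp.diff(S, y), y)
print(sp.simplify(laplace_R), sp.simplify(continuity))   # 0 0
```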
One can easily check that $R$ and $S$ satisfy Eqs.~(\ref{condi}) and (\ref{conti-s0}). However, \[ |{\bf\nabla} R|=\left|\frac{\ell N_0\pi}{L}\cos(\frac{\ell \pi x}{L})\cosh(\frac{\ell \pi y}{L})\right| \sqrt{\tan^2(\frac{\ell \pi x}{L})+\tanh^2(\frac{\ell \pi y}{L})}\] can be made arbitrarily large by choosing large values for $\ell$, i.e., $R$ is not a slowly varying function of $x$ and $y$. Note also that in this case both $R$ and $S$ are time-independent. Thus, the wave function describes a zero-energy eigenfunction of the time-independent potential \[V=-\frac{({\bf\nabla}S)^2}{2m}= \frac{-1}{2mR^2}\left[{\bf{\cal K}}^2+ \frac{({\bf{\cal K}}\cdot{\bf x})^2({\bf\nabla}R)^2}{ R^2}-\frac{2({\bf{\cal K}}\cdot{\bf x})({\bf{\cal K}} \cdot{\bf \nabla}R)}{R}\right]\;.\] \end{itemize} This example clearly shows how the present analysis generalizes the results of the traditional semiclassical approach to quantum mechanics. \section{Relativistic QM: Klein-Gordon Equation} Consider the Klein-Gordon equation \begin{equation} \left[(\partial^\mu-A^\mu)(\partial_\mu-A_\mu)-V(x)\right]\psi(x)=0\;, \label{kg-eq} \end{equation} where $A_\mu$ are components of an electromagnetic gauge field, $V$ is a scalar interaction potential (including the mass term in the massive case), and $x$ stands for the four vector $(x^\mu)$. In the following, I shall follow the relativists' convention for the Minkowski metric, namely, $(\eta_{\mu\nu}) ={\rm diag}(-1,1,\cdots,1)$, and set $c=\hbar=1$. In the polar representation (\ref{polar}), the Klein-Gordon equation is written as \begin{eqnarray} &&(\partial^\mu S-A^\mu)(\partial_\mu S-A_\mu)+V+Q=0\;, \label{q-hj-eq/kg}\\ &&\partial_\mu J^\mu=0\;, \label{conti/kg} \end{eqnarray} where $Q:=-\partial^\mu\partial_\mu R/R$ is the quantum potential and $J^\mu:=\rho(\partial^\mu S-A^\mu)$, with $\rho:=R^2$, is the conserved current.
Again, Eq.~(\ref{q-hj-eq/kg}) is the quantum analog of the Hamilton-Jacobi equation for a classical relativistic particle \begin{equation} (\partial^\mu S-A^\mu)(\partial_\mu S-A_\mu)+V=0\;, \label{hj-eq/kg} \end{equation} and Eq.~(\ref{conti/kg}) is the continuity equation associated with charge conservation. A semiclassical Klein-Gordon field is defined by the condition \begin{equation} Q:=-\frac{\partial^\mu\partial_\mu R}{R}=0~~~~ \Longleftrightarrow~~~~\partial^\mu\partial_\mu R=0\;. \label{condi/kg} \end{equation} Therefore, the semiclassical or WKB approximation is exact if and only if the relations (\ref{condi/kg}), (\ref{hj-eq/kg}), and (\ref{conti/kg}) are satisfied. As in the nonrelativistic case, these three equations may be used to determine the three unknown real functions $R$, $S$, and $V$. This is done by first solving Eq.~(\ref{condi/kg}), which is already decoupled from the other two. This is a wave equation for $R$. Its general solution is given by a linear combination of the functions of the form \begin{equation} W_{\hat{\bf k}}=W_{\hat{\bf k}}(x^0-{\bf x}\cdot\hat{\bf k})\;, \label{packet} \end{equation} where $x=(x^0,{\bf x})$ belongs to the $(n+1)$-dimensional Minkowski space ${\cal M}^{n+1}$ or a subset of ${\cal M}^{n+1}$, and $\hat{\bf k}$ is a unit $n$-vector defining the null wave $(n+1)$-vector $k=k_0(1,\hat{\bf k})$. A simple choice for the $W_{\hat{\bf k}}$, which is essentially motivated by the Fourier analysis of the wave equation, is given by the plane waves $\exp[i(k_0x^0-{\bf k}\cdot{\bf x})]$. Once the $W_{\hat{\bf k}}$ are chosen, the solution of Eq.~(\ref{condi/kg}) reduces to the determination of the coefficients of $W_{\hat{\bf k}}$ in the expansion of $R$. Next, one substitutes the expression for $R$ in the continuity equation (\ref{conti/kg}) and integrates the resulting equation. The basic strategy is similar to the nonrelativistic case.
In terms of $S$ the continuity equation (\ref{conti/kg}) takes the form \begin{equation} \partial^\mu(R^2\partial_\mu S)=RF\;, \label{conti/kg-S} \end{equation} where $F:=2\partial_\mu R A^\mu+R\partial_\mu A^\mu$. Eq.~(\ref{conti/kg-S}) is the analog of Eq.~(\ref{conti-s0}). It can further be simplified by defining $\tilde S:=RS$. This together with Eqs.~(\ref{condi/kg}) and (\ref{conti/kg-S}) leads to \begin{equation} \partial^\mu\partial_\mu\tilde S=F\;, \label{conti/kg-1} \end{equation} i.e., $S=\tilde S/R$, where $\tilde S$ is a solution of the inhomogeneous wave equation (\ref{conti/kg-1}). Having found $R$ and $S$, one can use the Hamilton-Jacobi equation (\ref{hj-eq/kg}) to determine the form of the potential. There is a very important difference between the relativistic and nonrelativistic cases. Here the condition of the exactness of the semiclassical approximation leads to two second order hyperbolic equations, namely (\ref{condi/kg}) and (\ref{conti/kg-1}), whereas in the nonrelativistic case one has two elliptic equations. One knows from the theory of hyperbolic differential equations that the boundary-value problem for such equations is not generally well-posed, i.e., for arbitrary boundary conditions, a solution may or may not exist and if it does, it may not be unique. The well-posed problem for a hyperbolic equation such as the wave equations (\ref{condi/kg}) and (\ref{conti/kg-1}) is the initial-value problem. In general for given initial data on a Cauchy hypersurface, one can solve these equations and determine $R$, $S$, and $V$ uniquely. One of the consequences of the hyperbolicity of (\ref{condi/kg}) is that unlike the nonrelativistic case, here $R$ and therefore the semiclassical wave function can be localized. In particular, one can form a coherent wave packet which approximates the behavior of a classical particle. 
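The reduction $S=\tilde S/R$ used above rests on the identity $\partial^\mu(R^2\partial_\mu S)=R\,\partial^\mu\partial_\mu(RS)$, which holds whenever $\partial^\mu\partial_\mu R=0$. A quick symbolic check in $1+1$ dimensions (a sympy sketch; the sample functions are arbitrary choices of my own) confirms it:

```python
import sympy as sp

x, t = sp.symbols('x t')

def box(f):
    # d'Alembertian for the signature (-,+): box f = -f_tt + f_xx
    return sp.diff(f, x, 2) - sp.diff(f, t, 2)

R = (x - t)**3 + sp.exp(x + t)     # an arbitrary solution of box R = 0
S = sp.sin(x)*sp.exp(t)            # an arbitrary phase function

# d^mu(R^2 d_mu S) with one index raised by the Minkowski metric
lhs = sp.diff(R**2*sp.diff(S, x), x) - sp.diff(R**2*sp.diff(S, t), t)
print(sp.simplify(box(R)))             # 0
print(sp.simplify(lhs - R*box(R*S)))   # 0
```

Since the difference equals $-RS\,\partial^\mu\partial_\mu R$, the identity fails precisely when $R$ is not harmonic, which is why the constraint (\ref{condi/kg}) is needed.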
Restricting to the case where $R$ is a constant and adopting the Lorentz gauge $\partial^\mu A_\mu=0$, one can reduce Eq.~(\ref{conti/kg-S}) to a (homogeneous) wave equation for $S$: \begin{equation} \partial^\mu \partial_\mu S=0\;. \label{wave-s} \end{equation} The general solution of this equation is also given as a linear combination of functions of the form (\ref{packet}). This is sufficient to conclude that even for this special case $S$ and consequently $V$ can be quite complicated. This shows that there is a large class of potentials which allow exact semiclassical solutions of the Klein-Gordon equation. These potentials and the corresponding semiclassical Klein-Gordon fields depend in a crucial way on the boundary conditions\footnote{Here and in what follows, ``boundary conditions'' means ``initial'', ``boundary'', or ``mixed initial-boundary conditions'' for which there exists at least one solution.}. This is especially important in quantum cosmology where there is an ongoing controversy regarding the choice of the boundary conditions for the wave function of the universe and also the form of the potential in the Wheeler-DeWitt equation. In particular, for the FRW scalar field minisuperspace models \cite{page,wiltshire}, the Wheeler-DeWitt equation is precisely a $(1+1)$-dimensional Klein-Gordon equation of the form (\ref{kg-eq}). For these models the form of the potential is directly linked with the phenomenon of inflation \cite{inflation,page,wiltshire}. On the other hand, most if not all the physical predictions which one hopes to derive from such models are relevant to the regions of the minisuperspace where the wave function is semiclassical. The results of this paper indicate that at least one can rule out the cases where the existence of a semiclassical solution (for some region of the minisuperspace) is inconsistent with the form of the potential (in that region).
This together with the requirements imposed by inflation may be helpful in improving our understanding of quantum cosmology. This motivates a closer analysis of the Klein-Gordon equation in the $(1+1)$-dimensional Minkowski space. \section{Klein-Gordon Equation in $(1+1)$-dimensions} If the configuration space is the $(1+1)$-dimensional Minkowski space ${\cal M}^2$, then the general solution of the wave equations (\ref{condi/kg}) and (\ref{conti/kg-1}) can be written in terms of four real-valued functions $R_\pm$ and $\tilde S_\pm$, \cite{m-f}, \begin{eqnarray} R(x,t)&=&R_+(x+t)+R_-(x-t)\;, \label{1-1-R}\\ \tilde S(x,t)&=& \tilde S_+(x+t)+ \tilde S_-(x-t)+\int dx' dt'G(x,t;x',t')F(x',t')\;,\nonumber \end{eqnarray} where $F$ is the same as the one appearing in Eq.~(\ref{conti/kg-S}) and $G$ is the appropriate Green's function for the one-dimensional wave equation, \cite{m-f}. The latter can be constructed out of the advanced and retarded Green's functions given by $G^\pm(x,t;x',t'):=[\theta(|x-x'|\pm(t-t'))-1]/2$, where $+$ and $-$ label the advanced and retarded Green's functions, respectively, and $\theta$ is the step function, $\theta(z)=1$, if $z>0$; $\theta(z)=0$, if $z<0$. The usual choice in typical physical applications is the retarded Green's function $G^-$, which marks a particular direction of time. Note that if the electromagnetic potential $A_\mu$ is absent, $F=0$, the last term in Eq.~(\ref{1-1-S}) drops, and there is no need for a Green's function or, in particular, for singling out a direction of time. Finally, in view of $S=\tilde S/R$, one has \begin{equation} S(x,t)=\frac{1}{R_+(x+t)+R_-(x-t)}\left[ \tilde S_+(x+t)+ \tilde S_-(x-t)+\int dx' dt'G(x,t;x',t')F(x',t')\right]\;.
\label{1-1-S} \end{equation} Eqs.~(\ref{1-1-R}) and (\ref{1-1-S}) together with Eq.~(\ref{hj-eq/kg}) show that in general the exact semiclassical wave functions and the corresponding potentials are classified by the set ${\cal C}^4:= \{(R_\pm,\tilde S_\pm)\}$, where ${\cal C}$ is the set of real-valued functions of a single real variable. Let us next concentrate on the case where there is no electromagnetic interaction. Then, in view of Eq.~(\ref{hj-eq/kg}) the potential has the general form \begin{eqnarray} V&=&\frac{-4[\tilde S'_+\tilde S'_-+ S^2R'_+R'_- -S(R'_+\tilde S'_-+R'_-\tilde S'_+)]}{(R_++R_-)^2}\;,\nonumber\\ &=&-4\left(\frac{\tilde S'_+}{\tilde S_++\tilde S_-}- \frac{R'_+}{R_++R_-}\right) \left(\frac{\tilde S'_-}{\tilde S_++\tilde S_-}- \frac{R'_-}{R_++R_-}\right)S^2 \:=:\:{\cal F}[R_\pm,\tilde S_\pm]\;, \label{1-1-V} \end{eqnarray} where $R_\pm=R_\pm(x\pm t)$, $\tilde S_\pm=\tilde S_\pm(x\pm t)$, a prime denotes the derivative of the corresponding function with respect to its argument, \begin{equation} S=\frac{\tilde S_++\tilde S_-}{R_++R_-}\;, \label{1-1-S-0} \end{equation} and the function(al) ${\cal F}$ is defined for future use. Eq.~(\ref{1-1-V}) is obtained by substituting (\ref{1-1-R}) and (\ref{1-1-S-0}) in the Hamilton-Jacobi equation~(\ref{hj-eq/kg}). Next, consider the simple case $R_\pm=1/2$. Then, $S=\tilde S_+ +\tilde S_-$ and $V=-4\tilde S'_+\tilde S'_-$. In particular, one has the following interesting examples: \begin{itemize} \item[1)] { $S$ is linear in $x$ and $t$: $\tilde S_\pm=\omega_\pm(x\pm t)/2$ for some constants $\omega_\pm$}\\ In this case, $V=-\omega_+\omega_-=$ constant. This includes the case of a free Klein-Gordon field of mass $\mu=\sqrt{-\omega_+\omega_-}$ (which requires $\omega_+\omega_-<0$), since in this case $V=\mu^2$. \item[2)] { $S$ is quadratic in $t$ and $x$: $\tilde S_\pm=\nu_\pm (x\pm t)^2/2$ for some constants $\nu_\pm$}\\ In this case, one obtains a quadratic potential of the form $V=4\nu_+\nu_-(t^2-x^2)$.
This corresponds to a Klein-Gordon field with the time-dependent mass $\mu=2\sqrt{\nu_+\nu_-}~t$ and quadratic interaction potential. The nonrelativistic limit of this case is a time-dependent harmonic oscillator with imaginary frequency. \item[3)] { $S$ is a linear combination of exponential functions: $\tilde S_\pm=\nu_\pm e^{\omega_\pm(x\pm t)}$}\\ In this case, one has an exponential potential, $V=V_0 e^{(\omega_++\omega_-)x+(\omega_+-\omega_-)t}$, where $V_0:=-4\omega_+\omega_-\nu_+\nu_-$. Clearly, by choosing $\omega_-=\pm\omega_+=:\pm\omega$, one obtains exponential potentials which depend only on $t$ or $x$, namely $V=V_0e^{2\omega x}$ and $V=V_0e^{2\omega t}$, respectively. \end{itemize} These examples can also be described in the framework of the traditional semiclassical approach, since $R$ is chosen to be unity. Similarly to the nonrelativistic case, in order to demonstrate the generality of the present analysis, one must consider the cases where $R$ is a rapidly changing function of $t$ and $x$, but the semiclassical approximation is nevertheless exact. Again typical examples can be constructed starting from a rapidly changing solution of the wave equation (\ref{condi/kg}). For instance, consider the case \begin{itemize} \item[4)] { $R=R_0 e^{\omega(x-t)}$ and $S=S_0e^{-2\omega x}$}\\ Then one can check that Eqs.~(\ref{condi/kg}) and (\ref{conti/kg-S}) are satisfied and the corresponding semiclassical wave function is an exact solution of the Klein-Gordon equation defined by the time-independent potential $V=-4\omega^2S_0^2e^{-4\omega x}$. \end{itemize} Eq.~(\ref{1-1-V}) provides a classification of the potentials for which an exact semiclassical solution of the $(1+1)$-dimensional Klein-Gordon equation, with $A_\mu=0$, exists. In practice, however, it is the potential which is given, not the wave function. 
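Claims of this kind are mechanically checkable. For instance, for example~4 the three defining conditions, the wave equation (\ref{condi/kg}) for $R$, the continuity equation, and the Hamilton-Jacobi equation (\ref{hj-eq/kg}), can be verified in a few lines (a sympy sketch; symbols are kept free):

```python
import sympy as sp

x, t = sp.symbols('x t')
w, R0, S0 = sp.symbols('omega R_0 S_0', positive=True)

# example 4: R = R0 exp(w(x-t)), S = S0 exp(-2wx), V = -4 w^2 S0^2 exp(-4wx)
R = R0*sp.exp(w*(x - t))
S = S0*sp.exp(-2*w*x)
V = -4*w**2*S0**2*sp.exp(-4*w*x)

box_R = sp.diff(R, x, 2) - sp.diff(R, t, 2)                              # constraint
cont  = sp.diff(R**2*sp.diff(S, x), x) - sp.diff(R**2*sp.diff(S, t), t)  # continuity
hj    = sp.diff(S, x)**2 - sp.diff(S, t)**2 + V                          # Hamilton-Jacobi
print(sp.simplify(box_R), sp.simplify(cont), sp.simplify(hj))            # 0 0 0
```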
Hence, a more interesting question is {\em whether for a given potential $V=V(x,t)$ there is a set of boundary conditions which yields an exact solution of the Klein-Gordon equation.} One can alternatively ask whether a given potential belongs to the image of the function ${\cal F}$ defined in Eq.~(\ref{1-1-V}). In order to answer these questions, I shall first consider a special class of boundary conditions, namely $R_+=0$. For this class one can show that ${\cal F}$ is not onto and the set of potentials of the form (\ref{1-1-V}), with $R_+=0$, forms a small subset of all possible potentials. To see this, let us first substitute $R_+=0$ in (\ref{1-1-V}). The resulting equation may then be viewed as a differential equation for $\tilde S_-$, while $R_-$ and $\tilde S_+$ are treated as undetermined functions. This leads to \begin{equation} \tilde S'_--\frac{R'_-}{R_-}\,\tilde S_-+\frac{R_-^2}{4} \left[\frac{V}{\tilde S'_+}-\frac{4R'_-\tilde S_+}{R_-^3}\right]=0\;, \label{s-=} \end{equation} which is a consistent first-order linear ordinary differential equation for $\tilde S_-$ provided that the bracket on its left-hand side depends only on $x-t$. This puts a strong restriction on the form of the allowed potentials. Namely, the potential must be of the form: \begin{equation} V(x,t)={\cal X}(x-t)\tilde S_+'(x+t)+{\cal Y}(x-t) \tilde S_+(x+t)\tilde S_+'(x+t)\;, \label{V-s-=} \end{equation} where ${\cal X}$ is an arbitrary real-valued function and ${\cal Y}:= 4R'_-/R^3_-$. \begin{itemize} \item[] {\bf Proposition~1:} {\em Let $u:=x-t$ and $v:=x+t$ be null coordinates in ${\cal M}^2$ and $V:{\cal M}^2\to\relax{\rm I\kern-.18em R}$ be an analytic function at $(0,0)\in{\cal M}^2$.
Then, a necessary condition for $V$ to satisfy Eq.~(\ref{V-s-=}), for some functions ${\cal X}:\relax{\rm I\kern-.18em R}\to\relax{\rm I\kern-.18em R}$, ${\cal Y}:\relax{\rm I\kern-.18em R}\to\relax{\rm I\kern-.18em R}$, and $\tilde S_+:\relax{\rm I\kern-.18em R}\to\relax{\rm I\kern-.18em R}$, is that the coefficients $V_{jn}$ in the power series expansion $\sum_{j,n=0}^\infty V_{jn}u^jv^n$ of $V$ must satisfy one of the following two relations} \begin{eqnarray} V_{jn}&=&\frac{ (V_{k_2m_2}V_{jm_1}-V_{k_2m_1}V_{jm_2})V_{k_1n}+ (V_{k_1m_1}V_{jm_2}-V_{k_1m_2}V_{jm_1})V_{k_2n}}{ V_{k_1m_1}V_{k_2m_2}-V_{k_1m_2}V_{k_2m_1}}\;, \label{vjn-condi-1}\\ V_{jn}&=&\frac{V_{jm_1}V_{k_1n}}{V_{k_1m_1}}\;, \label{vjn-condi-2} \end{eqnarray} with $(j,k_1,k_2)$ and $(n,m_1,m_2)$ being triplets of mutually different arbitrary non-negative integers. \item[] {\bf Proof:} Substitute the power series expansions \begin{eqnarray} V&=&V(u,v)\:=:\:\sum_{j,n=0}^\infty V_{jn}u^jv^n\;,~~~~~~{\cal X}\:= \:{\cal X}(u)\:=:\:\sum_{j=0}^\infty {\cal X}_j u^j\;, \label{v-exp}\\ {\cal Y}&=&{\cal Y}(u)\:=:\: \sum_{j=0}^\infty {\cal Y}_j u^j\;,~~~~~~ \tilde S_+\:=\:\tilde S_+(v)\:=:\: \sum_{n=0}^\infty S_nv^n\;, \label{s-exp} \end{eqnarray} in Eq.~(\ref{V-s-=}). This leads to \begin{equation} V_{jn}=(n+1)S_{n+1}{\cal X}_j+{\cal Y}_j\sum_{m=0}^n (n-m+1)S_mS_{n-m+1}=(n+1)S_{n+1}{\cal Z}_j+ {\cal Y}_j\sum_{m=0}^{n-1}(n-m)S_{m+1}S_{n-m}\;, \label{Vjn} \end{equation} where ${\cal Z}_j:={\cal X}_j+S_0{\cal Y}_j$. Next, solve for the sum on the right-hand side of (\ref{Vjn}). The result is \[\sum_{m=0}^{n-1}(n-m)S_{m+1}S_{n-m}=\frac{1}{{\cal Y}_j}\left[ V_{jn}-(n+1)S_{n+1}{\cal Z}_j\right]\;.\] Clearly, the left-hand side of this equation is independent of $j$. Hence, its right-hand side must also be independent of $j$.
Writing the right-hand side for two different values of $j$ and equating the results, one has \begin{equation} S_{n}=\frac{1}{n}\left( \frac{{\cal X}_j}{{\cal Y}_j}- \frac{{\cal X}_k}{{\cal Y}_k}\right)^{-1}\left( \frac{V_{j,n-1}}{{\cal Y}_j}-\frac{V_{k,n-1}}{{\cal Y}_k}\right)\;,~~~~ \forall j\neq k\;. \label{Sn=} \end{equation} Next let us express $S_n$ using two different values of $k$, say $k_1$ and $k_2$. Equating the two expressions and simplifying the result, one obtains \begin{equation} {\cal X}_j=\frac{{\cal Y}_j}{{\cal Y}_{k_2}}\left( {\cal X}_{k_2}+ ({\cal Y}_{k_1}{\cal X}_{k_2}-{\cal Y}_{k_2}{\cal X}_{k_1}) \left[\frac{ {\cal Y}_{k_2}V_{jn}-{\cal Y}_jV_{k_2n}}{ {\cal Y}_{k_1}{\cal Y}_j V_{k_2n}-{\cal Y}_{k_2}{\cal Y}_jV_{k_1n} }\right]\right)\;, \label{Zj} \end{equation} Since $n$ does not appear in this equation except in the square bracket, the content of the square bracket must be independent of $n$. This argument may be used to determine ${\cal Y}_j$ by equating the square bracket on the right hand side of (\ref{Zj}) with its value for $n=m_1$. This leads to \begin{equation} {\cal Y}_j={\cal Y}_{k_2} {\cal W}_{jn}\;, \label{Yj} \end{equation} where \begin{equation} {\cal W}_{jn}:=\frac{ ( c V_{k_2m_1}-V_{k_1m_1})V_{jn}- ( c V_{k_2n}-V_{k_1n})V_{jm_1}}{ V_{k_2m_1}V_{k_1n}-V_{k_1m_1}V_{k_2n}}\;, \label{Wj} \end{equation} and $c:={\cal Y}_{k_1}/{\cal Y}_{k_2}$. Once again, ${\cal W}_{jn}={\cal Y}_j/{\cal Y}_{k_2}$ must be independent of $n$, i.e., for all $m$ and $n$, ${\cal W}_{jn} ={\cal W}_{jm}$. In particular ${\cal W}_{jn}={\cal W}_{jm_2}$, where $m_2$ is some arbitrarily chosen fixed non-negative integer. 
This equation may be used to express $V_{jn}$ in terms of $V_{jm_1},~V_{jm_2},~V_{k_1n},~V_{k_2n}$, and $c$, namely \begin{eqnarray} V_{jn}&=&\left[ \frac{cV_{k_2n}-V_{k_1n}}{cV_{k_2m_1}-V_{k_1m_1}}- \left(\frac{cV_{k_2m_2}-V_{k_1m_2}}{cV_{k_2m_1}-V_{k_1m_1}}\right) \left(\frac{V_{k_2m_1}V_{k_1n}-V_{k_1m_1}V_{k_2n}}{V_{k_2m_1} V_{k_1m_2}-V_{k_1m_1}V_{k_2m_2}}\right)\right]V_{jm_1}+\nonumber\\ &&\left[\frac{V_{k_2m_1}V_{k_1n}-V_{k_1m_1}V_{k_2n}}{V_{k_2m_1} V_{k_1m_2}-V_{k_1m_1}V_{k_2m_2}}\right] V_{jm_2}\;. \label{vjn-condi-0} \end{eqnarray} Next, consider the following two possibilities: \begin{itemize} \item[I)] $c\neq V_{k_1m_1}/V_{k_2m_1}$:\\ Then, the right-hand side of (\ref{vjn-condi-0}) does not actually depend on $c$. In this case, one finds \[ V_{jn}=\frac{ (V_{k_2m_2}V_{jm_1}-V_{k_2m_1}V_{jm_2})V_{k_1n}+ (V_{k_1m_1}V_{jm_2}-V_{k_1m_2}V_{jm_1})V_{k_2n}}{ V_{k_1m_1}V_{k_2m_2}-V_{k_1m_2}V_{k_2m_1}}\;,\] which is just Eq.~(\ref{vjn-condi-1}). \item[II)] $c=V_{k_1m_1}/V_{k_2m_1}$:\\ Then, according to the definition of $c$, i.e., $c:={\cal Y}_{k_1}/{\cal Y}_{k_2}$ and Eq.~(\ref{Sn=}) either all $S_n$ vanish --- this corresponds to the trivial case $V_{jn}=0$ --- or ${\cal X}_j/{\cal Y}_j={\cal X}_k/{\cal Y}_k$, i.e., the ratio ${\cal X}_j/{\cal Y}_j$ does not depend on $j$. The latter implies ${\cal X}= \eta {\cal Y}$ for some constant $\eta$. Moreover, in this case $V_{jn}$ must satisfy \[V_{jn}=\frac{V_{jm_1}V_{k_1n}}{V_{k_1m_1}}\;,\] which is just Eq.~(\ref{vjn-condi-2}). {\,\lower0.9pt\vbox{\hrule \hbox{\vrule height 0.2 cm \hskip 0.2 cm \vrule height 0.2 cm}\hrule}\,} \end{itemize} \end{itemize} Note that in the latter case, Eq.~(\ref{vjn-condi-2}) together with ${\cal X}=\eta {\cal Y}$ and Eqs.~(\ref{Vjn}) and (\ref{V-s-=}) lead to \begin{eqnarray} V_{jn}&=&v_n {\cal Y}_j\;,~~~~~v_n:=\eta(n+1)S_{n+1}+ \sum_{m=0}^n(n-m+1)S_mS_{n-m+1}\;, \label{vjn-condi-3}\\ V(x,t)&=&[\eta +\tilde S_+(x+t)]\tilde S'_+(x+t){\cal Y}(x-t)\;.
\label{vjn-condi-4} \end{eqnarray} Now, substituting Eq.~(\ref{vjn-condi-4}) in Eq.~(\ref{s-=}), one can easily show that indeed $\tilde S_+$ drops out of this equation and one obtains $\tilde S_-=\eta$ (up to an additive solution of the homogeneous equation proportional to $R_-$, which merely shifts $S$ by a constant). Therefore the exact semiclassical wave function $\psi=Re^{iS}$ is given by $R=R_-$ and $S=(\eta+\tilde S_+) /R_-$. For example, consider choosing \begin{itemize} \item[5)] ${\cal Y}=\mu_-e^{\omega_-(x-t)}$ and $(\eta+\tilde S_+)\tilde S_+'=\mu_+e^{\omega_+(x+t)}$ for some constants $\mu_\pm$ and $\omega_\pm$:\\ Then, one has $V=\mu_+\mu_-e^{(\omega_++\omega_-)x} e^{(\omega_+-\omega_-)t}$. In particular for $\omega_+=-\omega_-=: \omega$, the potential depends only on $t$. In this case, integrating ${\cal Y}=4R'_-/R_-^3$ and the equation for $\eta+\tilde S_+$, one has \begin{equation} V=\mu_+\mu_-e^{2\omega t}\;,~~~ R=\sqrt{\frac{2\omega}{\nu_-+\mu_-e^{-\omega(x-t)}}}\;,~~~ S=\pm\frac{e^{\omega t}}{\omega}\sqrt{ (\mu_+e^{\omega x}+\nu_+e^{-\omega t})(\mu_-e^{-\omega x}+\nu_-e^{-\omega t})}\;, \label{v-r-s} \end{equation} where $\nu_\pm$ are also constants. The appearance of the square roots in these equations is an indication that for certain choices of $\mu_\pm$ and $\nu_\pm$ either $R$ or $S$ can become imaginary in some regions of the Minkowski space. Since $R$ and $S$ are assumed to be real, such a semiclassical solution does not exist in these regions. \end{itemize} This is another example of a case where the amplitude of an exact semiclassical solution is not a slowly varying function of its arguments.
A simple consequence of Proposition~1 is: \begin{itemize} \item[] {\bf Corollary:} {\em The potentials of the form (\ref{V-s-=}) which are analytic at $(x=0,t=0)$ form a proper subset of the set of all potentials, i.e., for an arbitrary potential which is analytic at $(x=0,t=0)$, an exact semiclassical solution of the $(1+1)$-dimensional Klein-Gordon equation (\ref{kg-eq}), with $R_+=0$ and $A_\mu=0$, may not exist.} \end{itemize} Eqs.~(\ref{vjn-condi-1}) and (\ref{vjn-condi-2}) provide a useful criterion for finding out whether a given potential allows for an exact semiclassical solution with $R_+=0$ or not. If the result is positive, then Eqs.~(\ref{Sn=}), (\ref{Zj}), and (\ref{Yj}) may be used to determine $R_-$ and $\tilde S_+$. These are then used to integrate Eq.~(\ref{s-=}), which yields $\tilde S_-$ and consequently the wave function $\psi=R_-\exp[i(\tilde S_-+\tilde S_+)/R_-]$. Proposition~1 only applies to the cases where $R_+=0$. One might try to employ a similar method to treat the more general case, where $R_+$ is also an undetermined function. For this purpose, one must first write Eq.~(\ref{1-1-V}) as a polynomial equation in $V$, $R_\pm$, $\tilde S_\pm$ and their derivatives and substitute the power series expansions of these functions in the resulting expression. This leads to an infinite system of very complicated coupled nonlinear algebraic equations for the coefficients of $R_\pm$ and $\tilde S_\pm$, which I have not been able to solve analytically. Although a similar proof is lacking for the most general case, further inspection of Eq.~(\ref{1-1-V}) suggests that this equation also restricts the form of the potential. Hence, in general for an arbitrary potential an exact semiclassical solution of the Klein-Gordon equation does not exist.
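The criterion of Proposition~1 is also easy to apply with a computer algebra system. By Eq.~(\ref{Vjn}), the coefficient matrix $(V_{jn})$ of a potential of the form (\ref{V-s-=}) is a sum of two outer products, so it has rank at most two, and this can be tested directly on the Taylor coefficients. A sympy sketch (the sample choices ${\cal X}=e^{-u}$, ${\cal Y}=e^{-2u}$, $\tilde S_+=e^v$ are my own, purely illustrative):

```python
import sympy as sp

u, v = sp.symbols('u v')

def taylor_matrix(V, N=5):
    # V_jn = coefficient of u**j * v**n in the Taylor expansion of V about (0, 0)
    rows, du = [], V
    for j in range(N):
        row, dv = [], du
        for n in range(N):
            row.append(dv.subs({u: 0, v: 0})/(sp.factorial(j)*sp.factorial(n)))
            dv = sp.diff(dv, v)
        rows.append(row)
        du = sp.diff(du, u)
    return sp.Matrix(rows)

# an admissible potential V = X(u) S'(v) + Y(u) S(v) S'(v)
S = sp.exp(v)
V = sp.exp(-u)*sp.diff(S, v) + sp.exp(-2*u)*S*sp.diff(S, v)
print(taylor_matrix(V).rank())   # 2
```

A generic analytic potential gives a coefficient matrix of higher rank, in which case no exact semiclassical solution with $R_+=0$ exists.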
\section{Exact Semiclassical Wave Functions of the Universe} Consider the Wheeler-DeWitt equation for the closed FRW cosmological model coupled to a scalar field $\phi$, \cite{page,wiltshire}, \begin{equation} \left[-\partial_\alpha^2+\partial_\phi^2+ e^{4\alpha}-e^{6\alpha}{\cal V}(\phi)\right]\psi=0\;, \label{wdw} \end{equation} where $\alpha:=\ln a$, $a$ is the scale factor of the FRW model, and ${\cal V}$ is the matter field potential. Here, the cosmological constant is assumed to vanish, the usual factor ordering prescription \cite{page,wiltshire} is adopted, and the natural units in which the Planck mass is set to unity are used. Clearly, this is a Klein-Gordon equation in the $(1+1)$-dimensional Minkowski (minisuper)space with potential \begin{equation} V=-e^{4\alpha}+e^{6\alpha}{\cal V}(\phi)\;, \label{wdw-v} \end{equation} where $(\alpha,\phi)$ play the role of $(t,x)$. The solutions of Eq.~(\ref{wdw}) have been studied mostly for the massless (${\cal V}=0$) and massive (${\cal V}=m^2\phi^2$) scalar fields in the literature \cite{approx,kiefer-88,h-p-90,page,kim-92,wiltshire,p-wdw}. This is done by making use of different approximation schemes, except for the rather trivial and much less interesting massless case, for which the exact solution is known \cite{massless,h-p-90}. The approximate solutions of Eq.~(\ref{wdw}) are usually developed by making particular assumptions for the boundary conditions, semiclassicality of the solution, or restricting to particular regions of the minisuperspace in which the Wheeler-DeWitt equation (\ref{wdw}) simplifies. In view of the developments reported in the preceding sections, the assumption of the exactness of the semiclassical approximation provides a direct link between the choice of the boundary conditions and the form of the potential. This is done through the function ${\cal F}$ defined by (\ref{1-1-V}), which can be viewed as a function from the set of boundary conditions to the set of potentials $V$.
In section~4, I have shown for the case $R_+=0$ that this function is not onto, i.e., there are potentials which do not admit exact semiclassical solutions. Here, however, one is interested in a special class of potentials, namely those of the form (\ref{wdw-v}). Hence, the relevant problem is to find the intersection ${\cal U}$ of the image of ${\cal F}$ and the set of potentials of the form (\ref{wdw-v}). One can easily show that ${\cal U}$ is not empty. For example the massless case, where $V$ is an exponential function of $\alpha$, can be easily put in the form (\ref{1-1-V}), i.e., it belongs to ${\cal U}$. In fact, two possible choices for $R_\pm$ and $\tilde S_\pm$ which lead to this potential were already given in examples~3 and~5 of section~4 (with $\omega=2$). The following are nontrivial examples of the potentials of the form (\ref{wdw-v}) which also belong to ${\cal U}$. They are obtained by setting $R_+=0$ and making simple choices for the functions ${\cal X}$, ${\cal Y}$, and $\tilde S_+$ of Eq.~(\ref{V-s-=}) so that Eq.~(\ref{wdw-v}) is also satisfied. This together with Eq.~(\ref{s-=}) yields $\tilde S_-$. \begin{itemize} \item[1)] {${\cal V}=\lambda e^{2\phi}$ with $\lambda>0$}\\ In this case, the choices ${\cal X}=-e^{-2(\phi-\alpha)}$, $R_-=\sqrt{2/\lambda}\;e^{\phi-\alpha}$, and $\tilde S_+=e^{2(\phi+\alpha)}/2$ satisfy Eq.~(\ref{V-s-=}). The solution of (\ref{s-=}) then leads to $\tilde S_-= ce^{\phi-\alpha}-1/(2\lambda)$, where $c$ is a constant.
The exact semiclassical solution is given by \[ R=\frac{2e^{\phi-\alpha}}{\sqrt{\lambda}}\;,~~~~ S=\sqrt{\lambda}(e^{\phi+3\alpha}+\frac{2e^{\alpha-\phi}}{ \lambda}+2c)\;;\] \item[2)] {${\cal V}=\lambda e^{-4\phi}$ with arbitrary $\lambda$}\\ In this case, one has ${\cal X}=\lambda e^{-5(\phi-\alpha)}/c_1$, $R_-=2c_1[c_2-e^{-2(\phi-\alpha)}]^{-1/2},~\tilde S_+=c_1e^{\phi+\alpha}$, and \[ \tilde S_-=-\frac{\lambda c_1}{8}\left[2e^{-3(\phi-\alpha)}+ 3c_2 e^{-(\phi-\alpha)}+3c_2^2e^{\phi-\alpha}\left( \frac{\tan^{-1}\sqrt{c_2 e^{2(\phi-\alpha)}-1}+c_3}{ \sqrt{c_2 e^{2(\phi-\alpha)}-1}}\right)\right]\;,\] where $c_1, ~c_2$ and $c_3$ are constants. The exact semiclassical solution is therefore given by \begin{eqnarray} R&=&\frac{2c_1}{\sqrt{c_2-e^{-2(\phi-\alpha)}}}\;,\nonumber\\ S&=&\left( \frac{e^{\phi+\alpha}}{2}-\frac{\lambda \left[2e^{-3(\phi-\alpha)}+3c_2 e^{-(\phi-\alpha)}\right]}{16} \right) \sqrt{c_2-e^{-2(\phi-\alpha)}} -\frac{3\lambda c_2^2}{16}\left( \tan^{-1}\sqrt{c_2 e^{2(\phi-\alpha)}-1}+c_3 \right) \;.\nonumber \label{sol-2} \end{eqnarray} Note that in this case, there is always a region of the minisuperspace defined by $e^{\phi-\alpha}<c_2^{-1/2}$ where a semiclassical solution does not exist. \end{itemize} The next logical step is to explore the existence of exact semiclassical solutions of the Wheeler-DeWitt equation with matter potentials of the form ${\cal V}=\lambda \phi^{2p}$. These are among the potentials which lead to inflationary classical solutions, \cite{inflation}. The simplest case is that of a massive scalar field, i.e., $\lambda=m^2,~p=1$. In the remainder of this section, I shall restrict to the semiclassical solutions with $R_+=0$. The existence of this type of solutions can be easily decided using Eqs.~(\ref{vjn-condi-1}) and (\ref{vjn-condi-2}). One simply needs to compute the coefficients $V_{jn}$ of (\ref{v-exp}) and check whether they satisfy one of these equations. 
A simple calculation shows that for $p=1,~2,~3$, none of these equations is satisfied. Hence, the matter potentials $\lambda\phi^2$, $\lambda\phi^4$, and $\lambda\phi^6$ do not admit exact `right-going' ($R_+=0$) semiclassical solutions. This is demonstrated by choosing the integers $j,~n,~k_1,~k_2,~m_1,$ and $m_2$ in such a way that both Eqs.~(\ref{vjn-condi-1}) and (\ref{vjn-condi-2}) fail. In order to see this explicitly, let us denote the right-hand sides of Eqs.~(\ref{vjn-condi-1}) and (\ref{vjn-condi-2}) by $V^{(1)}_{jn}$ and $V^{(2)}_{jn}$, respectively. Then, \begin{itemize} \item[---] for $p=1$ and $k_1=m_1=0,~k_2=m_2=1$, one finds \[V_{22}=-4+\frac{9\lambda}{4}\;,~~~ V^{(1)}_{22}=-4+\frac{13\lambda}{8}-\frac{\lambda^2}{16}\;,~~~ V^{(2)}_{22}=-(2-\frac{\lambda}{4})^2\;,\] \[V_{32}=\frac{8}{3}+\frac{9\lambda}{4}\;,~~~ V^{(1)}_{32}=\frac{8}{3}-\frac{23\lambda}{12}-\frac{3\lambda^2}{16}\;,~~~ V^{(2)}_{32}=\frac{8}{3}-\frac{11\lambda}{6}+\frac{3\lambda^2}{16}\;.\] \item[---] for $p=2$ and $k_1=m_1=0,~k_2=m_2=2$, one finds \[V_{31}=\frac{8}{3}+\frac{\lambda}{4}\;,~~~ V^{(1)}_{31}=V^{(2)}_{31}=\frac{8}{3}\;,\] \[V_{33}=\frac{16}{9}-\frac{9\lambda}{8}\;,~~~ V^{(1)}_{33}=\frac{16}{9}+\frac{3\lambda}{8}\;,~~~ V^{(2)}_{33}=\frac{16}{9}\;.\] \item[---] for $p=3$ and $k_1=m_1=0,~k_2=m_2=3$, one finds \[V_{24}=-\frac{4}{3}+\frac{15\lambda}{64}\;,~~~V^{(1)}_{24}= V^{(2)}_{24}=-\frac{4}{3}\;.\] \end{itemize} Note that for $\lambda=0$, i.e., the massless case, the values of $V_{jn}$ in the above list match the values of $V^{(1)}_{jn}$ and $V^{(2)}_{jn}$. This is in agreement with the fact that for the massless case one does in fact have right-going exact semiclassical solutions. Eqs.~(\ref{v-r-s}) with $(t,x)\to(\alpha,\phi)$, $\omega=2$, and $\mu_-=-1/\mu_+$ provide a concrete example. 
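Both of the computations above can be checked by machine. The following sympy snippet (a consistency check added here, not part of the original derivation) first verifies, for example~1 of this section, the two convention-independent requirements obtained by inserting $\psi = Re^{iS}$ into Eq.~(\ref{wdw}), namely harmonicity of the amplitude, $(-\partial_\alpha^2+\partial_\phi^2)R=0$, and the continuity equation; it then confirms that the quoted values of $V_{jn}$, $V^{(1)}_{jn}$, and $V^{(2)}_{jn}$ differ for generic $\lambda$ but coincide in the massless limit $\lambda=0$.

```python
import sympy as sp

alpha, phi, lam, c = sp.symbols('alpha phi lambda c', positive=True)

# Example 1: exact semiclassical solution for the matter potential V = lambda*exp(2*phi)
R = 2*sp.exp(phi - alpha)/sp.sqrt(lam)
S = sp.sqrt(lam)*(sp.exp(phi + 3*alpha) + 2*sp.exp(alpha - phi)/lam + 2*c)

# amplitude condition: the flat d'Alembertian (-d_alpha^2 + d_phi^2) annihilates R
box_R = sp.simplify(-sp.diff(R, alpha, 2) + sp.diff(R, phi, 2))
# continuity equation: d_phi(R^2 d_phi S) - d_alpha(R^2 d_alpha S) = 0
cont = sp.simplify(sp.diff(R**2*sp.diff(S, phi), phi)
                   - sp.diff(R**2*sp.diff(S, alpha), alpha))
assert box_R == 0 and cont == 0

# The quoted coefficients (V_jn, V^(1)_jn, V^(2)_jn) for p = 1, 2, 3:
triples = [
    (-4 + sp.Rational(9, 4)*lam,
     -4 + sp.Rational(13, 8)*lam - lam**2/16,
     -(2 - lam/4)**2),
    (sp.Rational(8, 3) + sp.Rational(9, 4)*lam,
     sp.Rational(8, 3) - sp.Rational(23, 12)*lam - sp.Rational(3, 16)*lam**2,
     sp.Rational(8, 3) - sp.Rational(11, 6)*lam + sp.Rational(3, 16)*lam**2),
    (sp.Rational(8, 3) + lam/4, sp.Rational(8, 3), sp.Rational(8, 3)),
    (sp.Rational(16, 9) - sp.Rational(9, 8)*lam,
     sp.Rational(16, 9) + sp.Rational(3, 8)*lam, sp.Rational(16, 9)),
    (-sp.Rational(4, 3) + sp.Rational(15, 64)*lam,
     -sp.Rational(4, 3), -sp.Rational(4, 3)),
]
for v, v1, v2 in triples:
    # generic coupling: neither V_jn = V^(1)_jn nor V_jn = V^(2)_jn holds
    assert sp.simplify((v - v1).subs(lam, 1)) != 0
    assert sp.simplify((v - v2).subs(lam, 1)) != 0
    # massless limit lambda = 0: all three values coincide
    assert v.subs(lam, 0) == v1.subs(lam, 0) == v2.subs(lam, 0)
```

The Hamilton-Jacobi part of the exactness conditions is deliberately omitted here, since its overall normalization depends on conventions fixed in the earlier sections.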
Furthermore, one knows from the studies of inflationary cosmological models that for the polynomial matter potentials $\lambda\phi^{2p}$ the coupling constant $\lambda$ must be a very small number. For example, for the massive case, where $p=1$ and $\lambda=m^2$, these theories predict $m\approx10^{-6}$, i.e., $\lambda\approx10^{-12}$, \cite{inflation}. Thus, although there are no right-going semiclassical solutions for $p=1,2,3$, at least for small values of $u=\phi-\alpha$ and $v=\phi+\alpha$, where one can neglect fourth and higher order terms in the power series expansion of $V$, the semiclassical approximation seems to be reliable. The phrase {\em semiclassical approximation} is used in quantum cosmology in a very crude way. One usually makes additional assumptions, such as the adiabaticity of the evolution \cite{kiefer-88,conradi}, to reduce the situation to the one-dimensional quantum mechanical case. In this way, one is able to express the condition of the validity of the semiclassical approximation as a simple limitation on the range of values of the matter potential, \cite{wiltshire}. The consistency of these assumptions with the validity of the semiclassical approximation is either left unchecked, or a set of conditions is imposed which renders the scheme consistent. These conditions are usually sufficient, not necessary; hence, in general, they may be too restrictive. The situation is very similar to restricting the exact semiclassical wave functions to those with slowly varying amplitudes. As shown in the preceding sections, this is an absolutely unnecessary restriction. The approach pursued in this article also allows for a precise definition of a more general semiclassical approximation, where the solution of the dynamical equations is approximated by the general semiclassical wave functions introduced in section~1. This will be discussed next. 
\section{Semiclassical Perturbation Theory} Let us first define a {\em semiclassical potential} $V_0$ to be a potential which corresponds to an exact semiclassical solution of the dynamical equation. In view of the results of sections~2 and 3, the set of semiclassical potentials is in fact much larger than one usually expects. This suggests a generalized notion of {\em semiclassical expansion} and, in particular, of {\em semiclassical approximation}, corresponding to a perturbation theory around the semiclassical potentials. Let $V$ be an arbitrary potential, $\psi=Re^{iS/\hbar}$ be the solution of the dynamical equation, and $\epsilon\in\relax{\rm I\kern-.18em R}$ be a perturbation parameter. Then, the semiclassical perturbation theory corresponds to \begin{eqnarray} V&=&V_0+\delta V\;,~~~~\delta V=\epsilon~V_{\rm p} \label{pert-v}\\ R&=&R_0+\delta R\;,~~~~\delta R=\sum_{\ell=1}^\infty \epsilon^\ell R_\ell\;, \label{pert-r}\\ S&=&S_0+\delta S\;,~~~~\delta S=\sum_{\ell=1}^\infty \epsilon^\ell S_\ell\;, \label{pert-s} \end{eqnarray} where $V_0$ and $\psi_0=R_0 e^{iS_0/\hbar}$ are the semiclassical potential and wave function defined by the boundary conditions of the problem. Substituting Eqs.~(\ref{pert-v}) -- (\ref{pert-s}) in the dynamical equations and treating $\epsilon$ as an independent parameter, one obtains an infinite family of equations which can be solved iteratively to yield $R_\ell$ and $S_\ell$. In particular, the equations obtained in the $\ell$-th order are two coupled linear differential equations in $R_\ell$ and $S_\ell$ with vanishing boundary conditions. This is because the original boundary conditions are already imposed in the determination of $R_0$ and $S_0$. Eqs.~(\ref{pert-r}) and (\ref{pert-s}) yield what one might call a {\em semiclassical perturbation expansion}. Note that they are not power series in $\hbar$ but in the perturbation parameter $\epsilon$. The zeroth order terms correspond to the semiclassical wave function. 
Thus, the {\em semiclassical approximation} is defined by $R\approx R_0$ and $S\approx S_0$. Note also that the semiclassical wave function $\psi_0$ and potential $V_0$ are uniquely determined by the boundary conditions. The choice of the perturbation parameter is, however, made by the physics of the problem. A typical example of a perturbation parameter is the coupling constant $\lambda$ of the preceding section. Let us next list the equations governing the first and second order terms in the semiclassical perturbation expansion. \begin{itemize} \item[---] {\em Nonrelativistic QM: Schr\"odinger Equation} \begin{itemize} \item[] First order (post-semiclassical) corrections, i.e., equations determining $R_1$ and $S_1$: \begin{eqnarray} \partial_t S_1+\frac{1}{m}({\bf\nabla}S_0-{\bf A})\cdot {\bf\nabla}S_1+V_{\rm p}-\frac{\hbar^2{\bf\nabla}^2R_1}{2mR_0} &=&0\;,\nonumber \\ 2\partial_t (R_0R_1)+\frac{1}{m}{\bf\nabla}\cdot\left[ R_0^2{\bf\nabla }S_1+2R_0R_1({\bf\nabla}S_0-{\bf A})\right] &=&0\;.\nonumber \end{eqnarray} \item[] Second order corrections, i.e., equations determining $R_2$ and $S_2$: \begin{eqnarray} \partial_t S_2+\frac{1}{2m}\left[({\bf\nabla}S_1)^2+ 2({\bf\nabla}S_0-{\bf A})\cdot {\bf\nabla}S_2\right]- \frac{1}{2m}\left(\frac{{\bf\nabla}^2R_2}{R_0}- \frac{R_1{\bf\nabla}^2R_1}{R_0^2}\right)&=&0\;, \nonumber\\ 2\partial_t (R_0R_2)+\partial_t R^2_1+\frac{1}{m}{\bf\nabla} \cdot\left[(R_1^2+2R_0R_2)({\bf\nabla}S_0-{\bf A})+ 2R_0R_1{\bf\nabla}S_1+R_0^2{\bf\nabla}S_2\right]&=&0\;.\nonumber \end{eqnarray} \end{itemize} These equations are obtained by substituting Eqs.~(\ref{pert-v}) -- (\ref{pert-s}) in Eqs.~(\ref{q-hj-eq}) and (\ref{conti}). 
\item[---] {\em Relativistic QM: Klein-Gordon Equation} ($c=\hbar=1$) \begin{itemize} \item[] First order (post-semiclassical) corrections, i.e., equations determining $R_1$ and $S_1$: \begin{eqnarray} 2(\partial^\mu S_0-A^\mu)\partial_\mu S_1+V_{\rm p}+ \frac{\partial^\mu\partial_\mu R_1}{R_0}&=&0\;, \label{l=1-1/kg}\\ \partial_\mu\left[2R_0R_1(\partial^\mu S_0-A^\mu)+ R_0^2\partial^\mu S_1\right]&=&0\;. \label{l=1-2/kg} \end{eqnarray} \item[] Second order corrections, i.e., equations determining $R_2$ and $S_2$: \begin{eqnarray} 2(\partial^\mu S_0-A^\mu)\partial_\mu S_2+\partial^\mu S_1\partial_\mu S_1+ \frac{\partial^\mu\partial_\mu R_2}{R_0}- \frac{R_1\partial^\mu\partial_\mu R_1}{R_0^2}&=&0\;,\nonumber\\ \partial_\mu\left[R_0^2\partial^\mu S_2+2R_0R_1\partial^\mu S_1+ (R_1^2+2R_0R_2)(\partial^\mu S_0-A^\mu)\right]&=&0\;.\nonumber \end{eqnarray} \end{itemize} These equations are obtained by substituting Eqs.~(\ref{pert-v}) -- (\ref{pert-s}) in Eqs.~(\ref{q-hj-eq/kg}) and (\ref{conti/kg}). They can be further simplified. For example, consider the first order equations~(\ref{l=1-1/kg}) and (\ref{l=1-2/kg}), and define $p_0^\mu:=\partial^\mu S_0-A^\mu$ and $T_{\pm1}:=R_1\pm R_0 S_1$. Then using the fact that $R_0$ and $S_0$ define a semiclassical wave function, i.e., $\partial_\mu\partial^\mu R_0=0$ and $\partial_\mu(R_0^2 p_0^\mu)=0$, one can show that Eqs.~(\ref{l=1-1/kg}) and (\ref{l=1-2/kg}) are equivalent to \begin{equation} \left[\partial_\mu\partial^\mu \pm 2p_0^\mu(\partial_\mu- \partial_\mu\ln R_0)\right] T_{\pm 1}=-R_0V_{\rm P}\;. \label{T} \end{equation} These are two separate equations for $T_{\pm 1}$ whose solution yields $R_1$ and $S_1$. \end{itemize} These equations appear to be even more difficult to solve than the original dynamical equations. Note, however, that they are to be solved with vanishing boundary conditions. 
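The order-by-order bookkeeping above can also be verified mechanically. The sympy snippet below (an added check, not part of the paper's derivation) takes the quantum Hamilton-Jacobi equation in its standard one-dimensional Madelung form with ${\bf A}=0$, enforces the semiclassical condition ${\bf\nabla}^2R_0=0$ by choosing $R_0$ linear in $x$, expands in $\epsilon$, and confirms that the coefficient of $\epsilon$ reproduces the first displayed equation for $R_1$ and $S_1$.

```python
import sympy as sp

t, x, eps, m, hbar = sp.symbols('t x epsilon m hbar', positive=True)

# impose the semiclassical condition nabla^2 R0 = 0 by taking R0 linear in x
a, b = sp.Function('a')(t), sp.Function('b')(t)
R0 = a + b*x
R1, S0, S1 = (sp.Function(name)(t, x) for name in ('R1', 'S0', 'S1'))
V0, Vp = (sp.Function(name)(t, x) for name in ('V0', 'Vp'))

R = R0 + eps*R1
S = S0 + eps*S1
V = V0 + eps*Vp

# quantum Hamilton-Jacobi equation for psi = R exp(iS/hbar), one dimension, A = 0
QHJ = sp.diff(S, t) + sp.diff(S, x)**2/(2*m) + V - hbar**2*sp.diff(R, x, 2)/(2*m*R)

# coefficient of epsilon in the expansion of QHJ around epsilon = 0
order1 = QHJ.diff(eps).subs(eps, 0)

# the displayed first-order (post-semiclassical) equation for R1 and S1
claimed = sp.diff(S1, t) + sp.diff(S0, x)*sp.diff(S1, x)/m + Vp \
          - hbar**2*sp.diff(R1, x, 2)/(2*m*R0)

assert sp.simplify(order1 - claimed) == 0
```

The same expansion carried to second order in $\epsilon$ reproduces the second pair of equations.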
The advantage of this scheme is that the information on the boundary conditions of the original dynamical equation is restored in the definition of $R_0$ and $S_0$. The higher order corrections are affected by these boundary conditions only through $R_0$, $S_0$ and the perturbation potential $V_{\rm p}$ which appears in the first order of perturbation. In this sense, the semiclassical perturbation theory has a universal character. As a concrete example consider the Wheeler-DeWitt equation of section~5 with a matter potential of the form ${\cal V}=\lambda f(\phi)$, where $f$ is some real-valued function. Let the boundary conditions be such that one recovers example~5 of section~4 with $\omega=2$, $\mu_\pm=\mp1$, $\nu_-=0$, and $\nu_+=\nu$. Then replacing $(t,x)$ with $(\alpha,\phi)$, $(V,R,S)$ with $(V_0,R_0,S_0)$, and adopting the positive sign in Eqs.~(\ref{v-r-s}), one has \begin{equation} V_0=-e^{4\alpha}\;,~~~~R_0=\frac{1}{2} e^{-(\phi-\alpha)}\:,~~~~ S_0=2\sqrt{\nu e^{2(\phi-\alpha)}-e^{4\phi}}\;. \label{v-r-s-0} \end{equation} These correspond to an approximate semiclassical solution which is an exact solution of the massless case. Clearly, the perturbation potential $V_{\rm P}$ is given by $e^{6\alpha}f(\phi)$ and the perturbation parameter is $\lambda$. 
The first order (post-semiclassical) correction $(R_1,S_1)$ to $(R_0,S_0)$ is obtained by solving Eqs.~(\ref{T}), which take the form \begin{equation} \left\{ -\partial_\alpha^2+\partial_\phi^2\pm[\xi (\partial_\alpha-1) +\zeta(\partial_\phi+1)]\right\}T_{\pm 1}=-\frac{1}{2} e^{-\phi+7\alpha}f(\phi)\;, \label{T'} \end{equation} with \[ \xi:=-2\partial_\alpha S_0=\frac{4\nu e^{2(\phi-\alpha)}}{ \sqrt{\nu e^{2(\phi-\alpha)}-e^{4\phi}}}\:,~~~~ \zeta:=2\partial_\phi S_0=\frac{4(\nu e^{2(\phi-\alpha)}- 2e^{4\phi})}{\sqrt{\nu e^{2(\phi-\alpha)}-e^{4\phi}}}\:.\] Although solving Eqs.~(\ref{T'}) seems much more difficult than solving the original Wheeler-DeWitt equation (\ref{wdw}), one must recall that these equations are to be solved with vanishing boundary conditions. In this way, one can at least devise an efficient numerical scheme which can treat these equations, as well as those for the higher order corrections, for arbitrary matter potentials. In general, such a scheme should first compute the semiclassical potential $V_0$ and wave function $R_0e^{iS_0}$ using the boundary conditions. This would yield the perturbation potential $V_{\rm p}$ (after the perturbation parameter is identified according to the physical characteristics of the problem). Then, it should numerically integrate the equations satisfied by $R_\ell$ and $S_\ell$ with vanishing boundary conditions. Finally, let me emphasize that for the Wheeler-DeWitt equation with the massive scalar field, the perturbation parameter is already extremely small, $\lambda=m^2={\cal O}(10^{-12})$; therefore, the first order corrections provide solutions which are valid up to order $\lambda^2={\cal O}(10^{-24})$. This suggests that the domain of validity of the first order perturbation theory is indeed quite large. 
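The numerical strategy suggested above can be illustrated on a toy version of Eqs.~(\ref{T'}). The snippet below is an illustration only, not from the paper: it keeps just the flat wave-operator part $-\partial_\alpha^2+\partial_\phi^2$, dropping the first-order terms with coefficients $\xi$ and $\zeta$, and uses a manufactured source for which the exact solution is known, so that the accuracy of the scheme can be measured. It integrates the equation forward in the timelike variable from vanishing initial data, with vanishing boundary conditions.

```python
import numpy as np

def solve_correction(nx=201, t_final=1.0):
    """Leapfrog integration of  T_tt = T_xx + g  from rest (T = T_t = 0 at t = 0)
    with vanishing boundary conditions, mimicking how a correction such as T_{+-1}
    would be integrated forward in alpha.  The source g is manufactured so that
    T = t^2 sin(x) is the exact solution."""
    x = np.linspace(0.0, np.pi, nx)
    dx = x[1] - x[0]
    dt = 0.5*dx                              # CFL-stable time step
    nt = int(round(t_final/dt))
    g = lambda t: (2.0 + t**2)*np.sin(x)     # manufactured source term
    T_prev = np.zeros(nx)                    # T at t = 0
    T = 0.5*dt**2*g(0.0)                     # first step from rest (Taylor expansion)
    T[0] = T[-1] = 0.0
    for n in range(1, nt):
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2.0*T[1:-1] + T[:-2])/dx**2
        T_next = 2.0*T - T_prev + dt**2*(lap + g(n*dt))
        T_next[0] = T_next[-1] = 0.0         # vanishing boundary conditions
        T_prev, T = T, T_next
    return x, nt*dt, T

x, t_end, T = solve_correction()
err = np.max(np.abs(T - t_end**2*np.sin(x)))
assert err < 1e-2
```

A scheme for the full equations~(\ref{T'}) would additionally discretize the first-order terms, but the treatment of the vanishing initial and boundary data is the same.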
\section{Conclusion} In this article, I have tried to demonstrate how the simple observation that the traditional condition of validity of the semiclassical approximation is only a sufficient condition can be used to introduce the notions of exact semiclassical wave function and potential. I have shown that the semiclassical wave function and potential are both uniquely determined by the boundary conditions of the problem. I have also given the full classification of exact semiclassical wave functions and potentials for the Schr\"odinger and Klein-Gordon equations in arbitrary dimensions. The analysis of the one-dimensional Schr\"odinger equation is much simpler than that of the multi-dimensional case. For the Klein-Gordon equation in $(1+1)$ dimensions, which is directly relevant to the solution of the Wheeler-DeWitt equation for FRW scalar field minisuperspace models, I have explicitly constructed semiclassical wave functions and potentials. For this case, I have also developed a practical criterion for checking whether a given potential allows for a right-going exact semiclassical solution. I have then used this criterion to study the semiclassical solutions of the minisuperspace Wheeler-DeWitt equation. For the polynomial matter potentials of the form $\lambda\phi^{2p}$ with $p=1,2,3$, I have shown that a right-going exact semiclassical solution of the Wheeler-DeWitt equation does not exist. However, the non-existence proof relies on the non-zero value of the coupling constant $\lambda$, which is expected to be an extremely small number. This motivated the development of a semiclassical perturbation theory which yields the semiclassical approximation as the zeroth order term in the semiclassical perturbation expansion. The higher order terms satisfy coupled linear differential equations with vanishing boundary conditions. The resulting semiclassical approximation and the domain of its reliability are different from those of the traditional semiclassical approximation. 
They coincide for the cases where the semiclassical approximation is exact, i.e., the potential and wave function are semiclassical. The semiclassical expansion developed in this paper is a perturbation expansion about the exact semiclassical potential defined by the boundary conditions. This is in contrast with the traditional semiclassical expansion, which is an $\hbar$- or $M_p^{-1}$-expansion of the solution of the dynamical equation. \section*{Acknowledgements} I would like to thank B.~Darian and M.~Razavi for helpful discussions. I would also like to acknowledge the financial support of the Killam Foundation of Canada.
\section{Introduction} \label{sec:intro} \medskip \subsection{} \label{sub:physics} Let $X$ be a (non-compact) toric Calabi-Yau threefold. To $X$ one can associate a 2d quantum field theory with four supercharges, and we will be interested in two features of this theory: its vector space of BPS states, and, more importantly for us, the BPS algebra which acts on said vector space. The latter algebra has been dubbed the quiver quantum toroidal algebra (\cite{GLY1,GLY2,NW1,NW2}, following \cite{LY}). \medskip \noindent Before we dive into the definition of the quiver quantum toroidal algebra ${\widetilde{\mathbf{U}}}$, let us recall certain objects associated to the Calabi-Yau threefold $X$ $$ X \leadsto \text{toric diagram} \leadsto \text{brane tiling} \leadsto \text{quiver} $$ We refer the reader to \cite[Appendix C]{NW2} for a review of the procedures $\leadsto$ listed above, and we simply content ourselves with reviewing the objects involved. \medskip \begin{itemize}[leftmargin=*] \item The toric diagram associated to $X$ is a particular collection of points in ${\mathbb{Z}}^2$ and line segments between them. \medskip \item The normals to the aforementioned line segments can be drawn on the torus ${\mathbb{T}}^2$, and they define a brane tiling, i.e. a decomposition of the torus into polygonal regions called faces. Very importantly, the faces can be colored in blue and red such that any two faces which share an edge have different colors. \footnote{As just described, the brane tiling is a graph $G$ drawn on the torus. In the literature, the term ``brane tiling" is sometimes applied to the dual graph of $G$, which is bipartite.} \medskip \item The vertices and edges of the aforementioned faces determine a quiver $Q$ on ${\mathbb{T}}^2$. The bicolorability property of the brane tiling implies that the edges of $Q$ can be oriented so that they go clockwise around the blue faces. 
\end{itemize} \medskip \subsection{} \label{sub:consistent} As the definition of the quiver quantum toroidal algebra ${\widetilde{\mathbf{U}}^+}$ only takes the quiver as input, one can state the construction in greater generality than for the quivers which arise from toric Calabi-Yau threefolds via the procedure above. \medskip \begin{definition} \label{def:quiver intro} Let $Q$ be a quiver drawn on a torus (with vertex set $I$ and edge set $E$), whose faces are colored in blue and red such that the two faces incident to a given edge have different colors. We assume that the edges of the quiver are oriented so as to go clockwise around the blue faces. \end{definition} \medskip \noindent We will write $\widetilde{Q}$ for the lift of $Q$ to the universal cover ${\mathbb{R}}^2$ of ${\mathbb{T}}^2$, and note that $\widetilde{Q}$ inherits the blue/red colored faces of $Q$. In the present paper, ``paths" and ``cycles" in a quiver will refer to the oriented notions. \medskip \begin{definition} \label{def:quasi} A \textbf{broken wheel} refers to a path obtained by removing a single edge $e$ from the boundary of any face $F$ of $\widetilde{Q}$. The \textbf{mirror image} of the aforementioned broken wheel is the path obtained by removing $e$ from the boundary of the other face $F' \neq F$ incident to $e$. The edge $e$ will be called the \textbf{interface} of the broken wheel (and of its mirror image). \begin{figure}[h] \includegraphics[scale=0.45]{Broken.png} \caption{A broken wheel (the path in red) and its mirror image (the path in blue). The black arrow is the interface.} \end{figure} \end{definition} \medskip \begin{definition} \label{def:consistent intro} The quiver $Q$ is called \textbf{shrubby} if, given any paths $p \neq p'$ in $\widetilde{Q}$ with the same start and end points, at least one of $p$ and $p'$ contains a broken wheel whose interface lies in the closed region between the two paths. 
\end{definition} \medskip \noindent When one of $p$ and $p'$ is trivial, Definition \ref{def:consistent intro} states that any cycle in $\widetilde{Q}$ must contain a broken wheel in the closure of its interior. We will see in Lemma \ref{lem:non-degenerate is shrubby} that the shrubbiness condition above follows from more traditional notions of consistency of brane tilings and dimer models, such as the existence of a non-degenerate $R$-charge. \medskip \subsection{} \label{sub:zeta} Let ${\mathbb{K}}$ be a field of characteristic 0. To every edge $e$ of the quiver $Q$, we associate a \textbf{parameter} $t_e \in {\mathbb{K}}^\times$ such that \begin{equation} \label{eqn:loop constraint} \prod_{e \text{ edge around }F} t_e = 1 \end{equation} for every face $F$ of $Q$. Moreover, we assume that the parameters are generic subject to the condition that \eqref{eqn:loop constraint} holds. In other words, any relation of the form \begin{equation} \label{eqn:generic} \prod_{e \in E} t_e^{\text{various integers}} = 1 \end{equation} arises by taking appropriate products of relations \eqref{eqn:loop constraint} and their inverses. The edge parameters can be assembled into the following rational functions \begin{equation} \label{eqn:def zeta} \zeta_{ij}(x) = \frac {\alpha_{ij} x^{s_{ij}}}{(1-x)^{\delta_{ij}}} \prod_{e \text{ arrow from }i \text{ to }j} (1-xt_e) \in {\mathbb{K}}(x) \end{equation} for all $i,j \in I$, where $\alpha_{ij} \in {\mathbb{K}}^\times$ and $s_{ij} \in {\mathbb{Z}}$ are suitably chosen (but will not play an important role in the present paper, so we will not specify them explicitly). \medskip \begin{remark} \label{rem:intro} Moreover, different authors use different conventions on $\alpha_{ij}$ and $s_{ij}$. 
For example, \cite{GLY1} requires $s_{ij}$ to be minus half the number of arrows from $i$ to $j$; this situation can also be accommodated by the present paper, at the cost of replacing polynomials built out of integer powers by polynomials built out of half-integer powers. We will avoid this setup in order to not overburden our notation. \end{remark} \medskip \subsection{} \label{sub:qqta} Using the data in Subsection \ref{sub:zeta}, we will now review the definition of the quantum toroidal algebra associated to the quiver $Q$ and parameters $\{t_e\}_{e \in E}$, which was introduced in \cite{GLY1,NW1} as a trigonometric version of the quiver Yangian of \cite{LY} (see also \cite{RSYZ} for a closely related mathematical construction). \medskip \begin{definition} \label{def:quad intro} The (half) \textbf{quiver quantum toroidal algebra} ${\widetilde{\mathbf{U}}^+}$ is \begin{equation} \label{eqn:def positive} {\widetilde{\mathbf{U}}^+} = {\mathbb{K}} \Big\langle e_{i,d} \Big \rangle_{i \in I, d \in {\mathbb{Z}}} \Big / \text{relation \eqref{eqn:rel quad}} \end{equation} where if we write $$ e_i(z) = \sum_{d \in {\mathbb{Z}}} \frac {e_{i,d}}{z^d} $$ then the defining relations are given by the formula \begin{equation} \label{eqn:rel quad} e_i(z) e_j(w) \zeta_{ji} \left(\frac wz\right) = e_j(w) e_i(z) \zeta_{ij} \left( \frac zw \right) \end{equation} for all $i,j \in I$ \footnote{Relation \eqref{eqn:rel quad} is interpreted as an infinite collection of relations obtained by equating the coefficients of all $\{z^aw^b\}_{a,b\in {\mathbb{Z}}}$ in the left and right-hand sides (if $i = j$, one clears the denominators $z-w$ from \eqref{eqn:rel quad} before equating coefficients).}. \end{definition} \medskip \noindent Define ${\widetilde{\mathbf{U}}^-} = {\widetilde{\mathbf{U}}}^{+,\text{op}}$, and denote its generators by $f_{i,d}$ instead of $e_{i,d}$. 
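As an illustration of the coefficient-equating procedure described in the footnote, consider a toy example (a hypothetical quiver fragment, not one of the geometries above): two vertices $i \neq j$ joined by a single arrow $i \to j$ with parameter $t$, no arrows from $j$ to $i$, and $\alpha_{ij} = \alpha_{ji} = 1$, $s_{ij} = s_{ji} = 0$ in \eqref{eqn:def zeta}, so that $\zeta_{ij}(x) = 1 - tx$ and $\zeta_{ji}(x) = 1$. Extracting the coefficient of $z^0w^0$ from \eqref{eqn:rel quad} then gives the mode relation $e_{i,0}e_{j,0} - e_{j,0}e_{i,0} + t\, e_{j,-1}e_{i,1} = 0$, as the following sympy computation with truncated currents confirms.

```python
import sympy as sp

z, w, t = sp.symbols('z w t')

# truncated currents e_i(z) = sum_d e_{i,d} z^{-d}, with noncommutative mode symbols
Ei = {d: sp.Symbol('Ei%d' % d, commutative=False) for d in (-1, 0, 1)}
Ej = {d: sp.Symbol('Ej%d' % d, commutative=False) for d in (-1, 0, 1)}
ei = sum(Ei[d]*z**(-d) for d in Ei)
ej = sum(Ej[d]*w**(-d) for d in Ej)

# toy data: zeta_{ji}(x) = 1 (no arrows j -> i) and zeta_{ij}(x) = 1 - t*x
lhs = sp.expand(ei*ej)                  # e_i(z) e_j(w) zeta_{ji}(w/z)
rhs = sp.expand(ej*ei*(1 - t*z/w))      # e_j(w) e_i(z) zeta_{ij}(z/w)

# equating the coefficients of z^0 w^0 yields one of the quadratic relations
rel = (lhs - rhs).expand().coeff(z, 0).coeff(w, 0)
expected = Ei[0]*Ej[0] - Ej[0]*Ei[0] + t*Ej[-1]*Ei[1]
assert sp.expand(rel - expected) == 0
```

The truncation to modes $d \in \{-1,0,1\}$ is harmless here, since only these modes contribute to the $z^0w^0$ coefficient.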
Finally, let us consider the commutative algebra $$ \mathbf{U}^0 = {\mathbb{K}}\left[h_{i,d}, h'_{i,d'}\right]_{i \in I, d,d' \geq \text{appropriately chosen integers}} $$ Then the (full) \textbf{quiver quantum toroidal algebra} is defined as \begin{equation} \label{eqn:full} {\widetilde{\mathbf{U}}} = {\widetilde{\mathbf{U}}^+} \otimes \mathbf{U}^0 \otimes {\widetilde{\mathbf{U}}^-} \end{equation} with certain commutation relations imposed between elements in the three tensor factors above. We refer the reader to \cite{GLY1, NW1} for the explicit commutation relations, as they will not be used in the present paper; instead, we will only focus on ${\widetilde{\mathbf{U}}^+}$. \medskip \subsection{} \label{sub:action} The main motivation for defining the algebra ${\widetilde{\mathbf{U}}}$ is that it acts on the vector space of so-called BPS crystal configurations \begin{equation} \label{eqn:action intro} {\widetilde{\mathbf{U}}} \curvearrowright M = \bigoplus_{\Lambda \text{ 3d crystal configuration}} {\mathbb{K}} \cdot |\Lambda\rangle \end{equation} (see \cite[Section 5]{NW1} for a review of 3d crystal configurations, which are generalizations of plane partitions). The main goal of the present paper is to describe the kernel of the action \eqref{eqn:action intro}, i.e. to define the smallest possible quotient \begin{equation} \label{eqn:quotient intro} {\widetilde{\mathbf{U}}} \twoheadrightarrow \mathbf{U} \end{equation} such that the action \eqref{eqn:action intro} factors through an action of $\mathbf{U}$. 
To this end, we will consider the \textbf{shuffle algebra} realization of quiver quantum toroidal algebras $$ {\widetilde{\mathbf{U}}^\pm} \xrightarrow{\widetilde{\Upsilon}^\pm} {\mathcal{V}}^\pm = \bigoplus_{\boldsymbol{n} \in {\mathbb{N}}^I} {\mathbb{K}}[z_{i1}^{\pm 1},\dots,z_{in_i}^{\pm 1}]_{i \in I}^{\textrm{sym}} $$ (we refer the reader to Subsection \ref{sub:def shuf} for a description of the shuffle product on ${\mathcal{V}}^\pm$, and to Subsection \ref{sub:upsilon} for the definition of the homomorphism $\widetilde{\Upsilon}^\pm$). Set \begin{equation} \label{eqn:reduced halves intro} \mathbf{U}^\pm = {\widetilde{\mathbf{U}}^\pm} \Big / \text{Ker }\widetilde{\Upsilon}^{\pm} \end{equation} As noted in \cite[Section 5]{GLY1}, the action \eqref{eqn:action intro} factors through the shuffle algebra. Therefore, the \textbf{reduced} quiver quantum toroidal algebra \begin{equation} \label{eqn:reduced intro} \mathbf{U} = \mathbf{U}^+ \otimes \mathbf{U}^0 \otimes \mathbf{U}^- \end{equation} (with the same commutation relations between factors as in \eqref{eqn:full}) acts on $M$. \medskip \subsection{} \noindent The main purpose of the present paper is to describe $\mathbf{U}^\pm$ by explicitly presenting the quotient \eqref{eqn:reduced halves intro}. Specifically, we will describe a collection of generators for the two-sided ideal $\text{Ker } \widetilde{\Upsilon}^\pm$. For every face $F = \{i_0,i_1,\dots,i_{k-1},i_k=i_0\}$ of the quiver $Q$, consider the following parameters corresponding to the edges of $F$ \begin{equation} \label{eqn:cycle intro} t_a = t_{\overrightarrow{i_{a-1}i_a}} \end{equation} Note that $t_1\dots t_k = 1$ due to \eqref{eqn:loop constraint}. Let $\widetilde{\zeta}_{ij}(x) = \zeta_{ij}(x) (1-x)^{\delta_{ij}}$ for all $i,j \in I$. 
Then we may define formal series \begin{equation} \label{eqn:series intro} e_F(x_1,\dots,x_k) \in {\widetilde{\mathbf{U}}^+}[[x_1^{\pm 1}, \dots, x_k^{\pm 1}]] \end{equation} by the following formula \begin{multline} \sum_{a=1}^k \frac {x_1t_2\dots t_a}{x_a} \cdot \frac { \prod_{b \succ c} \widetilde{\zeta}_{i_ci_b} \left(\frac {x_c}{x_b} \right)\left( - \frac {x_b}{x_c} \right)^{\delta_{i_bi_c} \delta_{b<c}} }{\prod_{b \sim c + 1} \left(1 - \frac {x_ct_b}{x_b} \right)} \cdot \\ \cdot e_{i_{a}}(x_{a}) \dots e_{i_1}(x_1) e_{i_k}(x_k) \dots e_{i_{a+1}}(x_{a+1}) \label{eqn:formula series intro} \end{multline} In \eqref{eqn:formula series intro}, the notation $b \succ c$ (respectively $b \sim c+1$) means that $b$ precedes (respectively immediately precedes) $c$ in the sequence $(a,\dots,1,k,\dots,a+1)$. The symbols $\delta_{b<c}$ and $\delta_{i_bi_c}$ are defined as in Subsection \ref{sub:toric cy}. The following is our main result. \bigskip \begin{theorem} \label{thm:main} If $Q$ is shrubby (as in Definition \ref{def:consistent intro}), then the coefficients of the series \eqref{eqn:series intro} generate $\emph{Ker } \widetilde{\Upsilon}^+$ as a two-sided ideal. In other words, we have \begin{equation} \label{eqn:relations intro} \mathbf{U}^+ = {\widetilde{\mathbf{U}}^+} \Big / \Big(\text{series coefficients of }e_F(x_1,\dots,x_k) \Big)_{F \text{ face of }Q} \end{equation} Similar results hold for $\mathbf{U}^-$ (by reversing the product on the second line of \eqref{eqn:formula series intro}). \end{theorem} \bigskip \noindent Thus, the relations which we factor in \eqref{eqn:relations intro} are the sought-for ``Serre relations" of \cite{GLY1}. The terminology of these relations is historically motivated by the analogous situation of quantum loop groups associated to finite type Dynkin diagrams, in which the role of relations \eqref{eqn:relations intro} is played by the Drinfeld-Serre relations. 
Note, however, that the Drinfeld-Serre relations are not enough to characterize quantum loop groups associated to general type Dynkin diagrams (see \cite{Loop}). \medskip \begin{remark} \label{rem:intro 1} If $Q$ is not shrubby, then we expect that one needs additional relations besides \eqref{eqn:series intro}. In this situation, the ideal $\emph{Ker }\widetilde{\Upsilon}^+$ can be studied according to the general principles of \cite{Arbitrary}, but we do not have explicit generators of this ideal. \end{remark} \medskip \begin{remark} \label{rem:intro 2} It is straightforward to write down rational/elliptic versions of the relations \eqref{eqn:series intro}, which would give necessary relations that hold in the rational/elliptic counterparts of the reduced algebra $\mathbf{U}^+$ (see \cite{GLY1} for an overview). However, in the rational/elliptic settings, we do not know whether these relations are also sufficient, i.e. if they generate the analogue of the two-sided ideal $\emph{Ker }\widetilde{\Upsilon}^+$. \end{remark} \medskip \subsection{} \label{sub:coha} Quiver quantum toroidal algebras are related to the $K$-theoretic Hall algebras (defined in \cite{Pad}, by analogy with the cohomological Hall algebras of \cite{KS}) $$ K(Q,W) $$ defined with respect to the following potential $$ W = \sum_{F \text{ face of }Q} (-1)^F \prod_{e \text{ edge around }F} \phi_e \in {\mathbb{C}}[Q] $$ where $(-1)^F$ is $+1$ or $-1$ depending on whether the face $F$ is blue or red, and the symbols $\phi_e$ denote generators of the path algebra ${\mathbb{C}}[Q]$. We consider $K(Q,W)$ as an algebra over the ring ${\mathbb{L}}$ of polynomials in the edge parameters (modulo \eqref{eqn:loop constraint}), and let our ground field be ${\mathbb{K}} = \text{Frac}({\mathbb{L}})$. 
Then the localized $K$-theoretic Hall algebra $$ K(Q,W)_{\text{loc}} = K(Q,W) \bigotimes_{{\mathbb{L}}} {\mathbb{K}} $$ is endowed with an algebra homomorphism $$ K(Q,W)_{\text{loc}} \xrightarrow{\iota} {\mathcal{V}}^+ $$ By combining Theorem \ref{thm:equal}, Definition \ref{def:shuffle wheel} and Proposition \ref{prop:main}, the image of $\widetilde{\Upsilon}^+$ can be described as the subspace ${\mathcal{S}}^+ \subset {\mathcal{V}}^+$ of Laurent polynomials \footnote{For any face $F = \{i_1,\dots,i_{k-1},i_k\}$, we use the notation $z_1,\dots,z_k$ to represent variables of $R$ in accordance with \eqref{eqn:relabeling}, i.e. one should interpret $z_a = z_{i_a\bullet_a}$ for certain $\bullet_a \in {\mathbb{N}}$, $\forall a \in \{1,\dots,k\}$.} $R(z_1,\dots,z_k,\dots)$ which vanish whenever their variables are specialized according to the rule \begin{equation} \label{eqn:specialization intro} \Big\{ z_a = z_{a-1} t_a \Big\}_{a \in \{1,\dots,k\}} \end{equation} (in the notation of \eqref{eqn:cycle intro}) for any face $F$ of $Q$. This yields the following result. \medskip \begin{corollary} \label{cor:main} If $Q$ is shrubby, the images of $\iota$ and $\widetilde{\Upsilon}^+$ coincide, i.e. the localized $K$-theoretic Hall algebra surjects onto the subspace ${\mathcal{S}}^+ \subset {\mathcal{V}}^+$ of Laurent polynomials which vanish when their variables are specialized to \eqref{eqn:specialization intro}, for any face $F$. 
\end{corollary} \medskip \begin{proof} The fact that the image of $\widetilde{\Upsilon}^+$ is (tautologically) generated by $\{z_{i1}^d\}_{i \in I, d\in {\mathbb{Z}}}$, which all lie in the image of $\iota$, implies that \begin{equation} \label{eqn:inclusion intro 1} \text{Im } \widetilde{\Upsilon}^+ \subseteq \text{Im }\iota \end{equation} To prove the opposite inclusion, one needs to show that the image of $\iota$ is contained in the subspace of Laurent polynomials which vanish when their variables are specialized to \eqref{eqn:specialization intro} for every face $F$. This is achieved by noting that the specialization in question can be realized as restriction to the locally closed subset $Z$ of quiver representations $(\phi_{e} : {\mathbb{C}}^{n_i} \rightarrow {\mathbb{C}}^{n_j})_{e = \overrightarrow{ij}}$ whose only non-zero elements are $$ \phi_{\overrightarrow{i_1i_2}} \in {\mathbb{C}}^* E_{\bullet_1\bullet_2}, \dots, \phi_{\overrightarrow{i_{k-1}i_k}} \in {\mathbb{C}}^* E_{\bullet_{k-1}\bullet_k} $$ (where $E_{ab}$ denote the matrix units with respect to the standard basis of $\{{\mathbb{C}}^{n_i}\}_{i \in I}$, and the natural numbers $\bullet_1,\dots,\bullet_k$ are chosen as in \eqref{eqn:relabeling}). Since the locally closed subset $Z$ does not intersect the critical locus of $W$ (on which $K(Q,W)$ is supported), this implies the opposite inclusion to \eqref{eqn:inclusion intro 1} \begin{equation} \label{eqn:inclusion intro 2} \text{Im } \widetilde{\Upsilon}^+ \supseteq \text{Im }\iota \end{equation} \end{proof} \subsection{} The structure of the present paper is the following. \medskip \begin{itemize}[leftmargin=*] \item In Section \ref{sec:shuffle}, we discuss ${\widetilde{\mathbf{U}}^+}$ and its shuffle algebra interpretation for general quivers $Q$. \medskip \item In Section \ref{sec:consistent}, we study ${\widetilde{\mathbf{U}}^+}$ for a quiver $Q$ as in Definition \ref{def:quiver intro}, and prove Theorem \ref{thm:main}. 
\medskip \item In Section \ref{sec:gardening}, we provide some key results on shrubs (which are certain subgraphs of the universal cover of $Q$ that we use in the proof of Theorem \ref{thm:main}). \end{itemize} \medskip \subsection{} I would like to thank Ben Davison, Richard Kenyon and Masahito Yamazaki for very useful conversations about the topics in the present paper. I gratefully acknowledge NSF grant DMS-$1845034$, as well as support from the MIT Research Support Committee. \bigskip \section{Shuffle algebras in general} \label{sec:shuffle} \medskip \noindent We will now recall the basic theory of trigonometric shuffle algebras, in the generality of \cite{Arbitrary}. Thus, throughout the present Section, $Q$ will denote an arbitrary quiver (whose vertex and edge sets will be denoted by $I$ and $E$, respectively), ${\mathbb{K}}$ will denote an arbitrary field of characteristic zero, and $\zeta_{ij}(x)(1-x)^{\delta_{ij}}$ will denote arbitrary Laurent polynomials with coefficients in ${\mathbb{K}}$ for all $i,j \in I$. Throughout the present paper, the set ${\mathbb{N}}$ will be understood to contain 0. \medskip \subsection{} \label{sub:def shuf} Let us consider an infinite collection of variables $z_{i1},z_{i2},\dots$ for all $i \in I$. For any $\boldsymbol{n} = (n_i)_{i \in I} \in {{\BN}}^I$, we will write $\boldsymbol{n}! = \prod_{i \in I} n_i!$. The following construction is a straightforward generalization of the trigonometric quantum loop groups of \cite{E, FO}.
\medskip \begin{definition} \label{def:big shuf} The \textbf{big shuffle algebra} associated to the datum $\{\zeta_{ij}(x)\}_{i,j \in I}$ is $$ {\mathcal{V}}^+ = \bigoplus_{\boldsymbol{n} \in {\mathbb{N}}^I} {\mathbb{K}}[z_{i1}^{\pm 1},\dots,z_{in_i}^{\pm 1}]_{i \in I}^{\emph{sym}} $$ endowed with the multiplication \begin{equation} \label{eqn:shuf prod} R(\dots,z_{i1},\dots,z_{in_i},\dots) * R'(\dots,z_{i1},\dots,z_{in_i'},\dots ) = \end{equation} $$ \emph{Sym} \left[ \frac {R(\dots,z_{i1},\dots,z_{in_i},\dots) R'(\dots,z_{i,n_i+1},\dots,z_{i,n_i+n_i'},\dots)}{\boldsymbol{n}! \boldsymbol{n}'!} \mathop{\prod^{i,j \in I}_{1\leq a\leq n_i}}_{n_j < b \leq n_j+n_j'} \zeta_{ij} \left(\frac {z_{ia}}{z_{jb}} \right) \right] $$ Above and henceforth, ``\emph{sym}" (resp. ``\emph{Sym}") denotes symmetric functions (resp. symmetrization) with respect to the variables $z_{i1},z_{i2},\dots$ for each $i \in I$ separately \footnote{Although the $\zeta$ functions might seem to contribute simple poles at $z_{ia}-z_{ib}$ for $a \neq b$ to the right-hand side of \eqref{eqn:shuf prod}, these poles disappear when taking the symmetrization (the poles in question can only have even order in any symmetric rational function).}. 
\end{definition} \medskip \noindent By defining the subspace ${\mathcal{V}}_{\boldsymbol{n}} \subset {\mathcal{V}}^+$ to consist of rational functions in $\boldsymbol{n} = (n_i)_{i \in I}$ variables, we obtain a decomposition \begin{equation} \label{eqn:graded pieces} {\mathcal{V}}^+ = \bigoplus_{\boldsymbol{n} \in {{\BN}}^I} {\mathcal{V}}_{\boldsymbol{n}} \end{equation} For example, the Laurent polynomial in a single variable $z_{i1}^d$ lies in ${\mathcal{V}}_{{\boldsymbol{\vs}}^i}$, where $$ {\boldsymbol{\vs}}^i = (\underbrace{0,\dots,0,1,0,\dots,0}_{1\text{ on }i\text{-th position}}) \in {{\BN}}^I $$ We will also consider the opposite big shuffle algebra ${\mathcal{V}}^- = {\mathcal{V}}^{+,\text{op}}$, whose graded components analogous to \eqref{eqn:graded pieces} will be denoted by ${\mathcal{V}}_{-\boldsymbol{n}}$, for all $\boldsymbol{n} \in {{\BN}}^I$. \medskip \subsection{} \label{sub:upsilon} Recall that ${\widetilde{\mathbf{U}}^+}$ is the quiver quantum toroidal algebra of Definition \ref{def:quad intro}, and ${\widetilde{\mathbf{U}}^-}$ denotes its opposite. There exist ${\mathbb{K}}$-algebra homomorphisms \begin{equation} \label{eqn:tilde upsilon} {\widetilde{\mathbf{U}}^\pm} \xrightarrow{\widetilde{\Upsilon}^\pm} {\mathcal{V}}^{\pm}, \qquad e_{i,d}, f_{i,d} \mapsto z_{i1}^d \end{equation} which can be easily established by checking the fact that relations \eqref{eqn:rel quad} are respected by the shuffle product \eqref{eqn:shuf prod}. Let us consider the kernel and image of the map \eqref{eqn:tilde upsilon} \begin{align} &K^\pm = \text{Ker } \widetilde{\Upsilon}^\pm \subset {\widetilde{\mathbf{U}}^\pm} \label{eqn:kernel} \\ &\mathring{\CS}^\pm = \ \text{Im } \widetilde{\Upsilon}^\pm \ \subset {\mathcal{V}}^\pm \label{eqn:image} \end{align} The subalgebra $\mathring{\CS}^+$ will be called the \textbf{shuffle algebra}, to differentiate it from the big shuffle algebra of Definition \ref{def:big shuf}. 
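\medskip \noindent To illustrate \eqref{eqn:shuf prod} in the simplest cases: for the degree one elements $z_{i1}^d$ and $z_{j1}^{d'}$ with $i \neq j$, the symmetrization is trivial and $$ z_{i1}^d * z_{j1}^{d'} = z_{i1}^{d} z_{j1}^{d'} \zeta_{ij} \left( \frac {z_{i1}}{z_{j1}} \right) $$ while for $i = j$ (in which case the second factor is relabeled in terms of the variable $z_{i2}$, and one symmetrizes over the two variables of color $i$) $$ z_{i1}^d * z_{i1}^{d'} = z_{i1}^{d} z_{i2}^{d'} \zeta_{ii} \left( \frac {z_{i1}}{z_{i2}} \right) + z_{i2}^{d} z_{i1}^{d'} \zeta_{ii} \left( \frac {z_{i2}}{z_{i1}} \right) $$ Since $\widetilde{\Upsilon}^\pm$ are algebra homomorphisms, the left-hand sides above compute the images under \eqref{eqn:tilde upsilon} of the products $e_{i,d} e_{j,d'}$ and $e_{i,d} e_{i,d'}$, respectively.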
\medskip \subsection{} \label{sub:pairing} An important role in the present paper will be played by a certain integral pairing, which we will now describe. Let us consider the following notation for all rational functions $f(z_1,\dots,z_n)$. If $Dz_a = \frac {dz_a}{2\pi i z_a}$, then we will write \begin{equation} \label{eqn:contour integral} \int_{|z_1| \gg \dots \gg |z_n|} f(z_1,\dots,z_n) \prod_{a=1}^n Dz_a \end{equation} for the constant term in the expansion of $f$ as a power series in $$ \frac {z_2}{z_1}, \dots, \frac {z_n}{z_{n-1}} $$ The notation in \eqref{eqn:contour integral} is motivated by the fact that if ${\mathbb{K}} = {\mathbb{C}}$, one could compute this constant term as a contour integral (with the contours being concentric circles, situated very far from each other compared to the absolute values of the coefficients of $f$). \medskip \begin{definition} \label{def:pair} There exists a non-degenerate bilinear pairing \footnote{The reason we employ the notation ${\mathcal{V}}^-$ and ${\mathcal{V}}^+$ in \eqref{eqn:pair}, despite the fact that the two notations represent identical ${\mathbb{K}}$-vector spaces, is the fact that under certain assumptions, \eqref{eqn:pair} can be upgraded to a bialgebra pairing (as in \cite{Arbitrary}).} \begin{equation} \label{eqn:pair} {\widetilde{\mathbf{U}}^+} \otimes {\mathcal{V}}^- \xrightarrow{\langle \cdot, \cdot \rangle} {\mathbb{K}} \end{equation} given for all $R \in {\mathcal{V}}_{-\boldsymbol{n}}$ and all $i_1,\dots,i_n \in I$, $d_1,\dots,d_n \in {\mathbb{Z}}$ by \begin{equation} \label{eqn:pair formula} \Big \langle e_{i_1,d_1} \cdots e_{i_n,d_n}, R \Big \rangle = \int_{|z_1| \gg \dots \gg |z_n|} \frac {z_1^{d_1}\dots z_n^{d_n} R(z_1,\dots,z_n)}{\prod_{1\leq a < b \leq n} \zeta_{i_bi_a} \left(\frac {z_b}{z_a} \right)} \prod_{a=1}^n Dz_a \end{equation} if ${\boldsymbol{\vs}}^{i_1}+\dots +{\boldsymbol{\vs}}^{i_n} = \boldsymbol{n}$, and 0 otherwise. 
In the right-hand side of \eqref{eqn:pair formula}, we identify \begin{equation} \label{eqn:relabeling} z_a \quad \text{with} \quad z_{i_a\bullet_a}, \quad \forall a \in \{1,\dots, n\} \end{equation} where $\bullet_a \in \{1,2,\dots,n_{i_a}\}$ may be chosen arbitrarily due to the symmetry of $R$ (however, we require $\bullet_a \neq \bullet_b$ if $a\neq b$ and $i_a = i_b$). We will call \eqref{eqn:relabeling} a \textbf{relabeling}. \end{definition} \medskip \noindent There is also an analogous pairing \begin{equation} \label{eqn:pair opposite} {\mathcal{V}}^+ \otimes {\widetilde{\mathbf{U}}^-} \xrightarrow{\langle \cdot, \cdot \rangle} {\mathbb{K}} \end{equation} whose formula the interested reader may find in \cite[Definition 2.8]{Arbitrary}. \medskip \subsection{} Let ${\mathcal{S}}^\mp \subset {\mathcal{V}}^\mp$ denote the dual of $K^\pm = \text{Ker }\widetilde{\Upsilon}^\pm$ under the pairings \eqref{eqn:pair} and \eqref{eqn:pair opposite}, respectively, i.e. \begin{align} &R^- \in {\mathcal{S}}^- \quad \Leftrightarrow \quad \Big \langle K^+, R^- \Big \rangle = 0 \label{eqn:shuf 1} \\ &R^+ \in {\mathcal{S}}^+ \quad \Leftrightarrow \quad \Big \langle R^+, K^- \Big \rangle = 0 \label{eqn:shuf 2} \end{align} It is easy to check that ${\mathcal{S}}^\pm$ are subalgebras of ${\mathcal{V}}^{\pm}$ (in fact, this also follows from the fact that \eqref{eqn:pair} and \eqref{eqn:pair opposite} yield bialgebra pairings). Thus, we have $$ \mathring{\CS}^\pm \subseteq {\mathcal{S}}^\pm \qquad \qquad $$ because the generators $\{z_{i1}^d\}_{i \in I, d\in {\mathbb{Z}}}$ of the algebras on the left lie in the algebras on the right. 
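\medskip \noindent For example, when $n = 1$, formula \eqref{eqn:pair formula} contains no $\zeta$ factors, and simply reads $$ \Big \langle e_{i,d}, R \Big \rangle = \int z_1^{d} R(z_1) Dz_1 = \text{coefficient of } z_1^{-d} \text{ in } R(z_1) $$ for all $R \in {\mathcal{V}}_{-{\boldsymbol{\vs}}^i}$ (in particular, the pairing is non-degenerate in degree one). When $n = 2$, one expands $\frac 1{\zeta_{i_2i_1}(z_2/z_1)}$ as a series in $\frac {z_2}{z_1}$ before extracting the constant term, as prescribed by \eqref{eqn:contour integral}.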
Moreover, if we consider the \textbf{reduced} quiver quantum toroidal algebra $$ \mathbf{U}^\pm = {\widetilde{\mathbf{U}}^\pm} \Big / K^\pm $$ then the pairings \eqref{eqn:pair} and \eqref{eqn:pair opposite} descend to non-degenerate pairings \begin{align} &\mathbf{U}^+ \otimes {\mathcal{S}}^- \xrightarrow{\langle \cdot, \cdot \rangle} {\mathbb{K}} \label{eqn:descended pairing plus} \\ &{\mathcal{S}}^+ \otimes \mathbf{U}^- \xrightarrow{\langle \cdot, \cdot \rangle} {\mathbb{K}} \label{eqn:descended pairing minus} \end{align} One of the main results of \cite{Arbitrary} (specifically, Theorem 1.5 therein) is the following. \medskip \begin{theorem} \label{thm:equal} We have ${\mathcal{S}}^\pm = \mathring{\CS}^\pm$, and hence $\widetilde{\Upsilon}^\pm$ induce isomorphisms \begin{equation} \label{eqn:iso equal} \mathbf{U}^\pm \xrightarrow{\Upsilon^\pm} {\mathcal{S}}^\pm \end{equation} Moreover, the pairings \eqref{eqn:descended pairing plus} and \eqref{eqn:descended pairing minus} match under these isomorphisms, thus yielding a non-degenerate pairing \begin{equation} \label{eqn:descended pairing} {\mathcal{S}}^+ \otimes {\mathcal{S}}^- \xrightarrow{\langle \cdot, \cdot \rangle} {\mathbb{K}} \end{equation} \end{theorem} \medskip \noindent We wish to describe $\mathbf{U}^\pm$ explicitly, i.e. to give formulas for a system of generators of the kernel $K^\pm$ of the map ${\widetilde{\mathbf{U}}^\pm} \twoheadrightarrow \mathbf{U}^\pm$. By formulas \eqref{eqn:shuf 1}--\eqref{eqn:shuf 2}, these kernels are precisely dual to the linear conditions describing the inclusions ${\mathcal{S}}^\mp \subset {\mathcal{V}}^\mp$. We will exploit this duality in the following Section. \bigskip \section{Shuffle algebras for shrubby quivers} \label{sec:consistent} \medskip \noindent From now on, we will consider the special case when $Q$ is a quiver drawn on the torus, as in Definition \ref{def:quiver intro}.
Moreover, we assume the edges of $Q$ are endowed with parameters $t_e$ as in Subsection \ref{sub:zeta}, and we define the rational functions $\zeta_{ij}(x)$ by formula \eqref{eqn:def zeta}. Our goal is to obtain explicit generators of the ideal $K^\pm$, so that we may realize the reduced quiver quantum toroidal algebras $\mathbf{U}^\pm$ as being determined by finitely many explicit relations. In what follows, we will only focus on the case $\pm = +$, as the opposite case $\pm = -$ can be obtained by reversing all products. \medskip \subsection{} \label{sub:toric cy} In Definition \ref{def:relations}, we will construct formal series $e_F$ of elements of $K^+$ associated to the faces of the quiver $Q$. When the quiver $Q$ is shrubby (in the sense of Definition \ref{def:consistent intro}), we will show that the series $e_F$ generate $K^+$, thus concluding the proof of Theorem \ref{thm:main}. For every face $F = \{i_0,i_1,\dots,i_{k-1},i_k=i_0\}$ of $Q$, consider \begin{equation} \label{eqn:cycle} t_a = t_{\overrightarrow{i_{a-1}i_a}} \end{equation} and note that $t_1\dots t_k = 1$ due to \eqref{eqn:loop constraint}. The arrows in \eqref{eqn:cycle} are the boundary edges of the face $F$ (these edges are uniquely defined, even though it is possible that $Q$ has multiple edges between $i_a$ and $i_b$ for various $ a \neq b$). We will write \begin{equation} \label{eqn:tzeta} \widetilde{\zeta}_{ij}(x) = \zeta_{ij}(x) (1-x)^{\delta_{ij}} \in {\mathbb{K}}[x^{\pm 1}] \end{equation} for all $i,j \in I$. 
For any $1 \leq b \neq c \leq k$, we will write \begin{align*} &\delta_{b<c} = \begin{cases} 1 &\text{if } b < c \\ 0 &\text{if } b > c \end{cases} \\ &\delta_{i_bi_c} = \begin{cases} 1 &\text{if } i_b=i_c \in I \\ 0 &\text{otherwise} \end{cases} \end{align*} \medskip \begin{definition} \label{def:relations} For any face $F$ as above, consider the formal series \begin{multline} \label{eqn:series} e_F(x_1,\dots,x_k) = \sum_{a=1}^k \frac {x_1t_2\dots t_a}{x_a} \cdot \frac { \prod_{b \succ c} \widetilde{\zeta}_{i_ci_b} \left(\frac {x_c}{x_b} \right)\left( - \frac {x_b}{x_c} \right)^{\delta_{i_bi_c} \delta_{b<c}} }{\prod_{b \sim c + 1} \left(1 - \frac {x_ct_b}{x_b} \right)} \cdot \\ \cdot e_{i_{a}}(x_{a}) \dots e_{i_1}(x_1) e_{i_k}(x_k) \dots e_{i_{a+1}}(x_{a+1}) \quad \in \quad {\widetilde{\mathbf{U}}^+}[[x_1^{\pm 1}, \dots, x_k^{\pm 1}]] \end{multline} In expression \eqref{eqn:series}, the notation $b \succ c$ (respectively $b \sim c+1$) means that $b$ precedes (respectively immediately precedes) $c$ in the sequence $(a,\dots,1,k,\dots,a+1)$. \end{definition} \medskip \begin{proposition} \label{prop:kernel} The coefficients of the series \eqref{eqn:series} all lie in $K^+$. \end{proposition} \medskip \begin{proof} Let us consider the formal delta series $$ \delta\left(z \right) = \sum_{d \in {\mathbb{Z}}} z^d $$ which has the following property for all Laurent polynomials $f(x)$ \begin{equation} \label{eqn:property delta} \delta \left( \frac zx \right) f(z) = \delta \left( \frac zx \right) f(x) \end{equation} To prove Proposition \ref{prop:kernel}, we must apply the map $\widetilde{\Upsilon}^+$ to the right-hand side of \eqref{eqn:series} and show that the result is 0. With \eqref{eqn:property delta} in mind, we have \begin{multline*} \widetilde{\Upsilon}^+ \left( e_F(x_1,\dots,x_k) \right) = \textrm{Sym} \left[ \sum_{a=1}^k \frac {x_1t_2\dots t_a}{x_a} \cdot \right. \\ \left.
\frac { \prod_{b \succ c} \widetilde{\zeta}_{i_ci_b} \left(\frac {x_c}{x_b} \right) \prod_{b < c, b \succ c} \left(- \frac {x_b}{x_c} \right)^{\delta_{i_bi_c}} \prod_{c \succ b} \zeta_{i_ci_b} \left(\frac {x_c}{x_b} \right)}{\prod_{b \sim c + 1} \left(1 - \frac {x_ct_b}{x_b} \right)} \cdot \delta\left(\frac {z_1}{x_1} \right) \dots \delta\left(\frac {z_k}{x_k} \right) \right] = \end{multline*} $$ = \textrm{Sym} \left[ \sum_{a=1}^k \frac {x_1t_2\dots t_a}{x_a} \cdot \frac { \prod_{1 \leq b \neq c \leq k} \zeta_{i_ci_b} \left(\frac {x_c}{x_b} \right) \prod_{b > c, i_b = i_c} \left(1 - \frac {x_c}{x_b} \right)}{\prod_{b \sim c + 1} \left(1 - \frac {x_ct_b}{x_b} \right)} \cdot \delta\left(\frac {z_1}{x_1} \right) \dots \delta\left(\frac {z_k}{x_k} \right) \right] $$ where we let $z_a = z_{i_a\bullet_a}$ as in the relabeling \eqref{eqn:relabeling}, and ``Sym" refers to symmetrization with respect to all $z_a$ and $z_b$ such that $i_a = i_b$. Therefore, $\widetilde{\Upsilon}^+ (e_F)$ equals \begin{multline} \textrm{Sym} \left[ \frac { \prod_{1 \leq b \neq c \leq k} \zeta_{i_ci_b} \left(\frac {x_c}{x_b} \right) \prod_{b > c, i_b = i_c} \left(1 - \frac {x_c}{x_b} \right)}{\left(1 - \frac {x_kt_1}{x_1} \right) \dots \left(1 - \frac {x_1t_2}{x_2} \right)\dots \left(1 - \frac {x_{k-1}t_k}{x_k} \right)} \cdot \right. \\ \left. \delta\left(\frac {z_1}{x_1} \right) \dots \delta\left(\frac {z_k}{x_k} \right) \sum_{a=1}^k \frac {x_1t_2\dots t_a}{x_a} \left(1 - \frac {x_a t_{a+1}}{x_{a+1}} \right) \right] \label{eqn:upsilon computation} \end{multline} where $x_{k+1}=x_1$. As $t_1\dots t_k = 1$, the sum in \eqref{eqn:upsilon computation} vanishes, hence so does $\widetilde{\Upsilon}^+ (e_F)$. \end{proof} \medskip \subsection{} We will now consider the dual to the series $e_F(x_1,\dots,x_k) \in {\widetilde{\mathbf{U}}^+}[[x_1^{\pm 1}, \dots, x_k^{\pm 1}]]$ under the pairing \eqref{eqn:pair}. We still write $F$ for an arbitrary face of $Q$. 
\medskip \begin{proposition} \label{prop:vanish} For any \footnote{The variables of $R$ are relabeled in accordance with \eqref{eqn:relabeling}.} $R(z_1,\dots,z_k) \in {\mathcal{V}}_{-{\boldsymbol{\vs}}^{i_1}-\dots -{\boldsymbol{\vs}}^{i_k}}$, we have \begin{equation} \label{eqn:vanish} \Big\langle e_F(x_1,\dots,x_k), R \Big \rangle = 0 \qquad \Leftrightarrow \qquad R\Big|_{z_a = z_{a-1} t_a, \forall a \in \{1,\dots,k\}} = 0 \end{equation} \end{proposition} \medskip \begin{proof} As a consequence of \eqref{eqn:pair formula}, we have \begin{equation} \label{eqn:ev} \Big\langle e_F(x_1,\dots,x_k), R \Big \rangle = \end{equation} $$ = \sum_{a=1}^k \textrm{ev}_{|x_a| \gg \dots \gg |x_1| \gg |x_k| \gg \dots \gg |x_{a+1}|} \left[ \frac {x_1t_2\dots t_a}{x_a} \cdot \frac {R(x_1,\dots,x_k) \prod_{b>c}^{i_b = i_c} \left(1-\frac {x_c}{x_b}\right) }{\prod_{b \sim c + 1} \left(1 - \frac {x_ct_b}{x_b} \right)} \right] $$ where $\textrm{ev}_{\star}[f]$ denotes the expansion of a rational function $f$ in the region prescribed by the inequalities $\star$. 
It is elementary to prove that $t_1\dots t_k=1$ implies the following identity of formal series $$ \sum_{a=1}^k \textrm{ev}_{|x_a| \gg \dots \gg |x_1| \gg |x_k| \gg \dots \gg |x_{a+1}|} \left[ \frac {\frac {x_1t_2\dots t_a}{x_a}}{\prod_{b \sim c + 1} \left(1 - \frac {x_ct_b}{x_b} \right)} \right] = \delta\left(\frac {x_1t_2}{x_2} \right) \dots \delta\left( \frac {x_{k-1}t_k}{x_k} \right) $$ Therefore, the right-hand side of \eqref{eqn:ev} is equal to $$ \delta\left(\frac {x_1t_2}{x_2} \right) \dots \delta\left( \frac {x_{k-1}t_k}{x_k} \right) R(x_1,\dots,x_k) \prod_{b>c}^{i_b = i_c} \left(1-\frac {x_c}{x_b}\right) $$ and vanishes if and only if \begin{equation} \label{eqn:vanishes} R\Big|_{z_a = z_{a-1}t_a, \forall a\in \{1,\dots,k\}} \prod_{b>c}^{i_b = i_c} \left(1-\frac 1{t_{c+1} \dots t_{b}}\right) = 0 \end{equation} Because of \eqref{eqn:generic}, we cannot have $t_{c+1}\dots t_b = 1$ for any $b \neq c$, and therefore \eqref{eqn:vanishes} only holds if $R|_{z_a = z_{a-1}t_a, \forall a\in \{1,\dots,k\}}= 0$, as we needed to show. \end{proof} \medskip \noindent More generally, if $R \in {\mathcal{V}}^-$ is arbitrary, then \begin{equation} \label{eqn:vanish general} \Big\langle {\widetilde{\mathbf{U}}^+} e_F(x_1,\dots,x_k) {\widetilde{\mathbf{U}}^+}, R \Big \rangle = 0 \qquad \Leftrightarrow \qquad R\Big|_{z_a = z_{a-1}t_a, \forall a \in \{1,\dots,k\}} = 0 \end{equation} where $z_a$ denotes any variable of $R$ of the form $z_{i_a\bullet_a}$, for all $a \in \{1,\dots,k\}$ (the choice of $\bullet_a$ does not matter due to the symmetry of $R$). Property \eqref{eqn:vanish general} is proved like \cite[Proposition 3.13]{Loop}; we leave the details as an exercise to the reader. \medskip \subsection{} Motivated by Proposition \ref{prop:vanish} and equation \eqref{eqn:vanish general}, we consider the following. 
\medskip \begin{definition} \label{def:shuffle wheel} Let $\boldsymbol{\dot}{\CS}^\pm \subset {\mathcal{V}}^\pm$ denote the subspace consisting of Laurent polynomials $R(z_1,\dots,z_k,\dots)$ such that \begin{equation} \label{eqn:wheel} R\Big|_{z_a = z_{a-1}t_a, \forall a \in \{1,\dots,k\}} = 0 \end{equation} for any face $F = \{i_0,i_1,\dots,i_{k-1},i_k=i_0\}$ of $Q$ (the notation $t_a$ is that of \eqref{eqn:cycle}). \end{definition} \medskip \noindent We call \eqref{eqn:wheel} a \textbf{wheel condition}, by analogy with the constructions of \cite{E, FO}. It is straightforward to show that $\boldsymbol{\dot}{\CS}^\pm$ are closed under the shuffle product, although this will also follow from Proposition \ref{prop:main}. Thus, if we consider the two-sided ideal $$ J^+ = \Big(\text{series coefficients of }e_F(x_1,\dots,x_k) \Big)_{F \text{ face of }Q} $$ then property \eqref{eqn:vanish general} reads \begin{equation} \label{eqn:iff} \Big \langle J^+, R \Big \rangle = 0 \qquad \Leftrightarrow \qquad R \in \boldsymbol{\dot}{\CS}^- \end{equation} \medskip \begin{remark} Property \eqref{eqn:iff} would still hold if we defined $J^+$ as the ideal generated by a single coefficient of the series $e_F(x_1,\dots,x_k)$ of every given homogeneous degree in $x_1,\dots,x_k$, for all faces $F$ of the quiver $Q$. In other words, including all the coefficients of all the series $e_F$ as generators of $J^+$ is superfluous; a single coefficient of each homogeneous degree for all faces $F$ would suffice. \end{remark} \medskip \subsection{} Proposition \ref{prop:kernel} implies that $J^+ \subseteq K^+$, and therefore \begin{equation} \label{eqn:one way} \boldsymbol{\dot}{\CS}^- \supseteq {\mathcal{S}}^- \end{equation} Our main goal for the remainder of the paper is to prove the opposite inclusion. 
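\medskip \noindent Before doing so, let us spell out the smallest instance of the wheel condition \eqref{eqn:wheel}. For a triangular face $F = \{i_0,i_1,i_2,i_3=i_0\}$, the parameters \eqref{eqn:cycle} satisfy $t_1t_2t_3 = 1$, and the index $a-1$ in \eqref{eqn:wheel} is interpreted cyclically (i.e. $z_0 = z_3$, which is consistent precisely because $t_1t_2t_3 = 1$). The specialization therefore leaves one free variable $\zeta = z_3$, and condition \eqref{eqn:wheel} requires $$ R \Big( \zeta t_1, \zeta t_1 t_2, \zeta, \dots \Big) = 0 $$ identically in $\zeta$ and in the remaining variables of $R$ (with $z_1,z_2,z_3$ relabeled as in \eqref{eqn:relabeling}).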
\medskip \begin{proposition} \label{prop:main} If $Q$ is shrubby (as in Definition \ref{def:consistent intro}), then we have \begin{equation} \label{eqn:other way} \boldsymbol{\dot}{\CS}^- \subseteq {\mathcal{S}}^- \end{equation} and therefore $\boldsymbol{\dot}{\CS}^- = {\mathcal{S}}^-$. \end{proposition} \medskip \noindent We also have $\boldsymbol{\dot}{\CS}^+ = {\mathcal{S}}^+$; the proof is analogous and we will not repeat it. \medskip \begin{proof} \emph{of Theorem \ref{thm:main}:} With \eqref{eqn:shuf 1} and \eqref{eqn:iff} in mind, the fact that $\boldsymbol{\dot}{\CS}^- = {\mathcal{S}}^-$ implies that $$ \Big\langle K^+, R \Big \rangle = 0 \quad \Leftrightarrow \quad \Big\langle J^+, R \Big \rangle = 0 $$ for any $R \in {\mathcal{V}}^-$. If \eqref{eqn:pair} were a pairing of finite-dimensional vector spaces over ${\mathbb{K}}$, this would imply that $J^+ = K^+$ and we would be done. In the infinite-dimensional setting at hand, one needs to emulate the proof of \cite[Theorem 1.8]{Arbitrary} to conclude that $J^+ = K^+$. The details are straightforward, and we leave them to the reader. \end{proof} \medskip \subsection{} \label{sub:shrub} Assume that $Q$ is shrubby, according to Definition \ref{def:consistent intro}, and let $\widetilde{Q}$ be its universal cover. The following notion will be key to our proof of Proposition \ref{prop:main}. \medskip \begin{definition} \label{def:pre-shrub} A \textbf{pre-shrub} $S$ is a subgraph of $\widetilde{Q}$ which does not contain the entire boundary of any face, and such that if $S$ contains a broken wheel, then it also contains its mirror image. \end{definition} \medskip \begin{proposition} \label{prop:no cycles} A pre-shrub cannot contain any cycles. \end{proposition} \medskip \noindent The Proposition above will be proved in the Appendix. Although a pre-shrub cannot contain any oriented cycles, it can contain unoriented ones (for example, a broken wheel together with its mirror image).
The \textbf{interior} of a pre-shrub $S$ is the region completely surrounded by the unoriented cycles belonging to $S$. \medskip \noindent Recall that any oriented graph with no cycles yields a partial order on the set of its vertices, with $i>j$ if there exists a path in the graph from $i$ to $j$. Having established that pre-shrubs do not contain any cycles in Proposition \ref{prop:no cycles}, we may consider the corresponding partial order on the set of vertices. With respect to this order, a \textbf{root} of a pre-shrub will refer to a maximal vertex. \medskip \begin{definition} \label{def:shrub} A \textbf{shrub} $S$ is a pre-shrub with a single root, which contains all the vertices in its interior. \end{definition} \medskip \noindent The following Proposition will also be proved in the Appendix. \medskip \begin{proposition} \label{prop:no edges} If $i,i'$ are vertices of a shrub $S$, and $i \xrightarrow{e} i'$ is an edge not contained in $S$, then $e$ must be the interface of a broken wheel contained in $S$. \end{proposition} \medskip \subsection{} Consider a shrub $S \subset \widetilde{Q}$ and a vertex $i \notin S$. Assume that there are $k>0$ edges from vertices of $S$ to $i$, labeled $e_1,\dots,e_k$ in counterclockwise order around $i$, as in Figures 2 and 3. The difference between these figures will be explained in Definition \ref{def:addable}, when we discuss the notion of $i$ being addable or non-addable to $S$. \begin{figure}[H] \includegraphics[scale=0.45]{Addable.png} \caption{An addable vertex $i$ (in black) to a shrub $S$ (in red).} \end{figure} \medskip \begin{figure}[H] \includegraphics[scale=0.45]{Non-addable.png} \caption{Two situations of non-addable vertices $i$ (in black) to a shrub $S$ (in red).} \end{figure} \medskip \noindent In the situation above, consider any two consecutive edges $e_s$ and $e_{s+1}$ (we make the convention that $e_{k+1} = e_1$).
Because $S$ is a shrub, we may continue these edges in $S$ until they meet, thus yielding paths \begin{align} &p : j \rightarrow \dots \xrightarrow{e_s} i \label{eqn:path p} \\ &p' : j \rightarrow \dots \xrightarrow{e_{s+1}} i \label{eqn:path p'} \end{align} We may assume the paths $p$ and $p'$ are simple, non-intersecting (except for the endpoints) and that the region $r_s$ between $p$ and $p'$ is minimal. Because the vertex $i$ does not belong to the interior of the shrub, exactly one of the regions $r_s$ does not contain the counterclockwise angle at $i$ between $e_s$ and $e_{s+1}$. By relabeling the edges if necessary, we assume that the aforementioned region is $r_k$. With this in mind, an index $s\in \{1,\dots,k-1\}$ is called \medskip \begin{itemize}[leftmargin=*] \item \textbf{good} if $p$ and $p'$ are broken wheels, which are mirror images of each other \medskip \item \textbf{bad} if there exist edges $i \xrightarrow{e} v \in p$ and $i \xrightarrow{e'} v' \in p'$ with $v,v' \neq j$, such that the sub-regions of $r_s$ between $e$ and $p$ (respectively between $e'$ and $p'$) are faces \end{itemize} \medskip \noindent For example, both $s \in \{1,2\}$ in Figure 2 are good. However, in the picture on the left of Figure 3, $s = 1$ is bad and $s = 2$ is good. Meanwhile, we call the index $s = k$ \medskip \begin{itemize}[leftmargin=*] \item \textbf{good} if there are no edges from $i$ to $S$ in the counterclockwise region from $e_k$ to $e_1$ (i.e. the region ${\mathbb{R}}^2 \backslash r_k$); this is the case in Figure 2. \medskip \item \textbf{bad} if there exists an edge from $i$ to $S$ in the region ${\mathbb{R}}^2 \backslash r_k$, which determines a face together with $e_1,e_k$ and the other edges in $S$; this is the case in the picture on the right of Figure 3. \medskip \end{itemize} \noindent The following result will be proved in the Appendix.
\medskip \begin{proposition} \label{prop:good or bad} For $i \notin S$ as above, every $s \in \{1,\dots,k\}$ is either good or bad. \end{proposition} \medskip \subsection{} \label{sub:broken wheels} If $S$ is a shrub and $i \notin S$, let $S+i$ denote the subgraph obtained from $S$ by adding the vertex $i$ and the edges from $S$ to $i$ (we assume such edges exist). \medskip \begin{definition} \label{def:addable} In the situation above, we call $i$ \textbf{addable} to $S$ if all $s \in \{1,\dots,k\}$ are good, and \textbf{non-addable} to $S$ otherwise. \end{definition} \medskip \noindent The terminology above is motivated by the following result, which will be proved in the Appendix. \medskip \begin{proposition} \label{prop:key} Assume $S \subset \widetilde{Q}$ is a shrub and $i \notin S$ is a vertex. Then $S+i$ is a shrub if and only if $i$ is an addable vertex to $S$. \end{proposition} \medskip \noindent For our purposes, the main distinction between addable and non-addable vertices is the following result, which will also be proved in the Appendix. \medskip \begin{proposition} \label{prop:count} Assume $S \subset \widetilde{Q}$ is a shrub and $i \notin S$ is a vertex with $k > 0$ edges from $S$ to $i$. The maximal number of broken wheels in $S+i$ that all pass through $i$ and do not pairwise intersect at any other vertex is $$ \begin{cases} k-1 &\text{if }i\text{ is addable to }S \\ \geq k &\text{otherwise} \end{cases} $$ \end{proposition} \medskip \subsection{} We are now ready to give the proof of Proposition \ref{prop:main}. We will assume that the edge parameters $t_e$ of Subsection \ref{sub:zeta} are generic complex numbers satisfying the loop constraint \eqref{eqn:loop constraint}. This assumption is merely a convenient tool for us to keep track of all the residues involved in the subsequent argument; since all formulas are rational functions in the parameters $t_e$, they are well-defined for arbitrary $\{t_e\}_{e\in E}$.
\medskip \begin{proof} \emph{of Proposition \ref{prop:main}:} Let us consider any $$ \phi = \mathop{\sum_{i_1,\dots,i_n \in I}}_{d_1,\dots,d_n \in {\mathbb{Z}}} \text{coefficient} \cdot e_{i_1,d_1} \dots e_{i_n,d_n} \in K^+ $$ and any $R \in \boldsymbol{\dot}{\CS}^-$. Our goal is to show that \begin{equation} \label{eqn:goal} \Big \langle \phi, R \Big \rangle = 0 \end{equation} as this would imply the required $R \in {\mathcal{S}}^-$. Recall from formula \eqref{eqn:pair formula} that \begin{equation} \label{eqn:pairing final} \Big \langle e_{i_1,d_1} \cdots e_{i_n,d_n}, R \Big \rangle = \int_{|z_1| \gg \dots \gg |z_n|} f(z_1,\dots,z_n) \prod_{a=1}^n Dz_a \end{equation} where \begin{equation} \label{eqn:function} f(z_1,\dots,z_n) = \frac {z_1^{d_1}\dots z_n^{d_n} R(z_1,\dots,z_n)}{\prod_{1\leq a < b \leq n} \zeta_{i_bi_a} \left(\frac {z_b}{z_a} \right)} \end{equation} A \textbf{labeling} of a shrub $S \subset \widetilde{Q}$ will refer to a labeling of the $s$ vertices of $S$ by one of the variables $z_{a_1},\dots,z_{a_s}$ (for certain distinct $a_1,\dots,a_s \in {\mathbb{N}}$) such that the increasing order of the indices of the variables refines the partial order on the vertices given by the shrub, i.e. $a_x < a_{x'}$ if the corresponding vertices $i_x, i_{x'} \in S$ are connected in $S$ by a path going from $i_{x'}$ to $i_x$. In particular, the root of $S$ must be labeled by the variable $z_{a_s}$. For every $x \in \{1,\dots,s-1\}$, choose a path from the root $i_s$ to $i_x$ $$ i_s \xrightarrow{\alpha} i_{s'} \xrightarrow{\beta} \dots \xrightarrow{\omega} i_x $$ and define $q_x = t_\alpha t_{\beta}\dots t_{\omega}$. Because such paths are unique up to removing cycles or replacing a broken wheel by its mirror image (according to Definition \ref{def:consistent intro}), and because such removals/replacements do not change the product of parameters along the path, the quantity $q_x$ does not depend on any choices made. 
An \textbf{acceptable} labeled shrub is one for which $|q_x| > 1$ for all $x \in \{1,\dots,s-1\}$. \medskip \begin{proposition} \label{prop:residue} For any labeled shrub $S$ and function $f$ as in \eqref{eqn:function}, define \begin{equation} \label{eqn:residue} \mathop{\emph{Res}}_S f \end{equation} as a function in $\{z_a\}_{a \notin \{a_1,\dots,a_{s-1}\}}$ by the following iterated residue procedure. \medskip \noindent At step number $x \in \{1,\dots,s-1\}$, the variables $z_{a_{s-x+1}},\dots,z_{a_{s-1}}$ have all been specialized to $z_{a_s}$ times $q_{s-x+1},\dots,q_{s-1}$, respectively. Upon this specialization, we claim that the rational function $f$ has at most a simple pole at \begin{equation} \label{eqn:pole} z_{a_{s-x}} = z_{a_s} q_{s-x} \end{equation} Replace $f$ by its residue at the pole \eqref{eqn:pole}, and move on to step number $x+1$. \end{proposition} \medskip \noindent Because one only encounters simple poles in the algorithm above, the value of \eqref{eqn:residue} would not change if we replaced (in the recursive procedure of Proposition \ref{prop:residue}) the total order $a_1 < \dots < a_s$ by any other total order refining the partial order on the vertices of the shrub. \medskip \begin{proof} Consider the induced subgraph $S' \subset S$ consisting of all vertices $> i := i_{s-x}$. It is easy to see that $S'$ is a shrub and that $i$ is an addable vertex to $S'$. Therefore, we may assume that there are $k > 0$ edges $$ i_{b_1} \xrightarrow{e_1} i_{s-x}, \dots, i_{b_k} \xrightarrow{e_k} i_{s-x} $$ from the shrub $S'$ to the vertex $i$, for certain $b_1,\dots,b_k > s-x$.
As these edges must be distributed as in Figure 2, the denominator of \eqref{eqn:function} includes the $k$ factors $$ 1 - \frac {z_{a_{b_1}} t_{e_1}}{z_{a_{s-x}}}, \dots, 1 - \frac {z_{a_{b_k}} t_{e_k}}{z_{a_{s-x}}} $$ Once the variables $z_{a_{b_1}},\dots,z_{a_{b_k}}$ are specialized to $z_{a_s}$ times $q_{b_1},\dots, q_{b_k}$, respectively, the fact that $q_{s-x} = q_{b_1}t_{e_1} = \dots = q_{b_k}t_{e_k}$ implies that the denominator of \eqref{eqn:function} will feature the factor $$ \left(1 - \frac {z_{a_s} q_{s-x}}{z_{a_{s-x}}} \right)^k $$ Thus, to prove that the pole invoked in the statement of the Proposition is at most simple, we need to show that the numerator of \eqref{eqn:function} vanishes to order at least $k-1$ at the specialization \eqref{eqn:pole}. However, the numerator of $f$ vanishes whenever any subset of its variables are specialized according to \eqref{eqn:wheel} for any face $F$. As there exist $k-1$ broken wheels whose only common vertex is $i = i_{s-x}$ (according to Proposition \ref{prop:count}), property \eqref{eqn:wheel} for the $k-1$ faces enclosed by said broken wheels implies that the numerator of $f$ vanishes to order $\geq k-1$ at the specialization \eqref{eqn:pole} \footnote{In claiming the vanishing of the numerator of $f$ to order at least $k-1$, we are invoking the fact that for any $k, \ell_1,\dots, \ell_{k-1} \in {\mathbb{N}}$, we have $$ \bigcap_{c=1}^{k-1} \left(x_c^{(1)},\dots,x_c^{(\ell_c)}\right) = \left(x_1^{(\alpha_1)} x_2^{(\alpha_2)}\dots x_{k-1}^{(\alpha_{k-1})} \right)_{\alpha_1 \in \{1,\dots,\ell_1\},\dots, \alpha_{k-1} \in \{1,\dots,\ell_{k-1}\}} $$ in the ring of polynomials over \underline{distinct} variables $\{x_c^{(1)},\dots,x_c^{(\ell_c)}\}_{c \in \{1,\dots,k-1\}}$.}. 
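\medskip \noindent To illustrate the monomial ideal identity invoked in the footnote (a sanity check only, and not needed elsewhere in the argument), take the smallest nontrivial case $k = 3$ and $\ell_1 = \ell_2 = 2$: in the polynomial ring over the distinct variables $x_1^{(1)}, x_1^{(2)}, x_2^{(1)}, x_2^{(2)}$, the identity reads $$ \left(x_1^{(1)}, x_1^{(2)}\right) \cap \left(x_2^{(1)}, x_2^{(2)}\right) = \left(x_1^{(1)}x_2^{(1)},\, x_1^{(1)}x_2^{(2)},\, x_1^{(2)}x_2^{(1)},\, x_1^{(2)}x_2^{(2)}\right) $$ which follows from the fact that the intersection of two monomial ideals is generated by the least common multiples of pairs of generators, and generators in distinct variables are coprime.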
\end{proof} \medskip \noindent An $m$-\textbf{labeled shrubbery} $\mathscr{S}$ is a disjoint union of labeled shrubs in $\widetilde{Q}$ (whose $n-m+1$ vertices are endowed with distinct labels among $z_m,\dots,z_n$) such that the order of the indices of the variables refines the partial order on the vertices given by each constituent shrub of $\mathscr{S}$. An $m$-labeled shrubbery is called \textbf{acceptable} if all of its constituent shrubs are acceptable. \medskip \begin{claim} \label{claim:final} For any $m \in \{1,\dots,n\}$, consider \begin{multline} X_m = \sum^{m\text{-labeled acceptable}}_{\text{shrubberies }\mathscr{S} = S_1 \sqcup \dots \sqcup S_t} \int_{|z_1| \gg \dots \gg |z_{m-1}|\gg |z_{r_1}| = \dots = |z_{r_t}|} \\ \mathop{\emph{Res}}_{S_1} \dots \mathop{\emph{Res}}_{S_t} f \prod_{a=1}^{m-1} Dz_a \prod_{u = 1}^t Dz_{r_u} \label{eqn:multline} \end{multline} where $z_{r_1},\dots,z_{r_t}$ are the labels of the roots of the shrubs $S_1,\dots,S_t$. Then we have \begin{equation} \label{eqn:recursion} X_{m-1} = X_m \end{equation} for all $m \in \{2,\dots,n\}$. \end{claim} \medskip \begin{proof} To prove \eqref{eqn:recursion}, one needs to move the contour of the variable $z_{m-1}$ toward the contours $|z_{r_1}| = \dots = |z_{r_t}|$. If the former contour reaches the latter contours, this corresponds to adding the one-vertex shrub $\{i_{m-1}\}$ to the shrubbery $\mathscr{S}$. Otherwise, the variable $z_{m-1}$ must be ``caught" in one of the poles of the form \begin{equation} \label{eqn:factor} 1 - \frac {z_b t_{\overrightarrow{i_b i_{m-1}}}}{z_{m-1}} \end{equation} for some $b > m-1$. Assume $i_b$ belongs to one of the constituent shrubs $S_u \subset \mathscr{S}$, and suppose there is a number $k > 0$ of edges from the shrub $S_u$ to $i = i_{m-1}$. Then we have one of the following three possibilities. 
\medskip \begin{itemize}[leftmargin=*] \item If the vertex $i$ is addable to $S_u$ as in Definition \ref{def:addable}, then Proposition \ref{prop:key} implies that $S_u' = S_u+i$ is a shrub. Thus, the operation \begin{multline*} m\text{-labeled shrubbery } \mathscr{S} = S_1 \sqcup \dots \sqcup S_u \sqcup \dots \sqcup S_t \leadsto \\ \leadsto (m-1)\text{-labeled shrubbery } \mathscr{S}' = S_1 \sqcup \dots \sqcup S'_u \sqcup \dots \sqcup S_t \end{multline*} shows how to obtain $X_{m-1}$ by applying the contour moving procedure to $X_m$ (the fact that we only encounter acceptable shrubs is due to the fact that we move the contour of $z_{m-1}$ from infinity down to the contour of $z_{r_u}$, but no further). \medskip \item If the vertex $i$ is non-addable to $S_u$, then Proposition \ref{prop:count} states that there exist $k$ broken wheels completely contained in $S_u + i$ that only intersect pairwise at the vertex $i$. As we have seen at the end of the proof of Proposition \ref{prop:residue}, this means that the numerator of $f$ has enough factors to cancel the $k$ copies of the factor \eqref{eqn:factor} from the denominator of $f$. We conclude that non-addable vertices do not correspond to actual poles. \medskip \item If the vertex $i$ is already in $S_u$ (say with label $z_c$ for some $c > m-1$), then the linear factor of $z_{m-1}-z_c$ in the denominator of $$ \zeta_{i_ci_{m-1}} \left(\frac {z_c}{z_{m-1}}\right) $$ allows the numerator of $f$ to annihilate the pole of the form \eqref{eqn:factor}. \end{itemize} \end{proof} \medskip \noindent Repeated applications of Claim \ref{claim:final} imply the fact that $X_1 = X_n$. 
Since $X_n$ is the right-hand side of \eqref{eqn:pairing final}, we conclude that \begin{multline} \Big \langle e_{i_1,d_1} \cdots e_{i_n,d_n}, R \Big \rangle = \sum^{1\text{-labeled acceptable}}_{\text{shrubberies }\mathscr{S} = S_1 \sqcup \dots \sqcup S_t} \\ \int_{|z_{r_1}| = \dots = |z_{r_t}|} \mathop{\text{Res}}_{S_1} \dots \mathop{\text{Res}}_{S_t} \frac {z_1^{d_1}\dots z_n^{d_n} R(z_1,\dots,z_n)}{\prod_{1\leq a < b \leq n} \zeta_{i_bi_a} \left(\frac {z_b}{z_a} \right)} \prod_{u = 1}^t Dz_{r_u} \label{eqn:pairing shrubberies} \end{multline} The fact that all the contours coincide means that we can symmetrize the integrand (with respect to all variables $z_1,\dots,z_n$) without changing the value of the integral \begin{multline*} \Big \langle e_{i_1,d_1} \cdots e_{i_n,d_n}, R \Big \rangle = \sum^{\text{fixed }1\text{-labeled acceptable}}_{\text{shrubberies } \bar{\mathscr{S}} = \bar{S}_1 \sqcup \dots \sqcup \bar{S}_t} \\ \int_{|z_{r_1}| = \dots = |z_{r_t}|} \mathop{\text{Res}}_{\bar{S}_1} \dots \mathop{\text{Res}}_{\bar{S}_t} \text{ Sym} \left[ \frac {z_1^{d_1}\dots z_n^{d_n} R(z_1,\dots,z_n)}{\prod_{1\leq a < b \leq n} \zeta_{i_bi_a} \left(\frac {z_b}{z_a} \right)} \right] \prod_{u = 1}^t Dz_{r_u} \end{multline*} where the adjective ``fixed" means that we are summing over a given 1-labeled acceptable shrubbery in every equivalence class given by permuting the labels on the vertices. 
We conclude that \begin{equation} \label{eqn:pairing sym shrubberies} \Big \langle e_{i_1,d_1} \cdots e_{i_n,d_n}, R \Big \rangle = \sum^{\text{fixed }1\text{-labeled acceptable}}_{\text{shrubberies } \bar{\mathscr{S}} = \bar{S}_1 \sqcup \dots \sqcup \bar{S}_t} \end{equation} $$ \int_{|z_{r_1}| = \dots = |z_{r_t}|} \mathop{\text{Res}}_{\bar{S}_1} \dots \mathop{\text{Res}}_{\bar{S}_t} \text{ Sym} \left[ \frac {\widetilde{\Upsilon}^+(e_{i_1,d_1} \cdots e_{i_n,d_n}) R(z_1,\dots,z_n)}{\prod_{1\leq a \neq b \leq n} \zeta_{i_bi_a} \left(\frac {z_b}{z_a} \right)} \right] \prod_{u = 1}^t Dz_{r_u} $$ and therefore $\langle \phi, R \rangle$ is a linear functional of $\widetilde{\Upsilon}^+(\phi)$. Since the latter expression is 0 due to the fact that $\phi \in K^+$, we conclude the required formula \eqref{eqn:goal}. \end{proof} \medskip \noindent Note that \eqref{eqn:pairing sym shrubberies} implies the following formula for the descended pairing \eqref{eqn:descended pairing}, under the assumption that $Q$ is shrubby \begin{equation} \label{eqn:formula pairing shrubberies} \Big \langle R^+, R^- \Big \rangle = \sum^{\text{fixed }1\text{-labeled acceptable}}_{\text{shrubberies } \bar{\mathscr{S}} = \bar{S}_1 \sqcup \dots \sqcup \bar{S}_t} \end{equation} $$ \int_{|z_{r_1}| = \dots = |z_{r_t}|} \mathop{\text{Res}}_{\bar{S}_1} \dots \mathop{\text{Res}}_{\bar{S}_t} \text{ Sym} \left[ \frac {R^+(z_1,\dots,z_n)R^-(z_1,\dots,z_n)}{\prod_{1\leq a \neq b \leq n} \zeta_{i_bi_a} \left(\frac {z_b}{z_a} \right)} \right] \prod_{u = 1}^t Dz_{r_u} $$ for any $R^\pm \in {\mathcal{S}}^\pm$ of opposite degrees. Formula \eqref{eqn:formula pairing shrubberies} shows that shrubberies are not just technical tools used in the proof of Proposition \ref{prop:main}, but natural combinatorial objects which parameterize the summands in the formula for the pairing \eqref{eqn:descended pairing}. 
\bigskip \section{Appendix: the joys of gardening} \label{sec:gardening} \medskip \noindent In the present Section, we will motivate our notion of shrubby quivers (by relating it with more traditional consistency conditions in the theory of brane tilings and dimer models) and prove several technical results from Section \ref{sec:consistent}. \medskip \subsection{} \noindent Let $Q$ denote a quiver in ${\mathbb{T}}^2$, as in Definition \ref{def:quiver intro}, i.e. the faces of $Q$ are colored in blue/red such that any two faces which share an edge have different colors. \medskip \begin{definition} \label{def:r charge} A non-degenerate $R$-charge (see for instance \cite{HHV, HV}) is a function $$ R : E \rightarrow (0,1) $$ such that for any vertex $i$ and any face $F$ of the quiver $Q$, we have \begin{align*} &\sum_{e \text{ edge around }F} R(e) = 2 \\ &\sum_{e \text{ edge incident to }i} (1-R(e)) = 2 \end{align*} Geometrically, the properties above imply that the quiver $Q$ can be drawn on the torus so that all faces are polygons inscribed in circles of the same radius, and the centers of these circles lie strictly inside the faces (the number $\pi R(e)$ is the central angle subtended by the chord $e$ in the aforementioned circles). \end{definition} \medskip \noindent The existence of a non-degenerate $R$-charge allows one to define a rhombus tiling of the torus, as follows. Draw the centers of the (circles circumscribing the) blue/red polygonal faces as blue/red bullets. Then the condition that the segments between the vertices and the bullets all have the same length means that ${\mathbb{T}}^2$ is tiled by rhombi. To recover the arrows in the quiver $Q$ from the rhombus tiling, one need only draw the diagonals between non-bullet vertices of the rhombi, and orient them so that they keep the blue/red bullets on the right/left (see Figure 4). \medskip \begin{figure}[h] \includegraphics[scale=0.55]{Rhombus.png} \caption{A rhombus. 
The blue/red bullets represent the centers of the blue/red faces, while the other two vertices of the rhombus are vertices of $Q$ (with an arrow between them).} \end{figure} \medskip \noindent Recall the notion of shrubby quivers from Definition \ref{def:consistent intro}. Lemma \ref{lem:non-degenerate is shrubby} below is proved just like \cite[Lemma 5.3.1]{HHV} (note that the topology of shrubby quivers underlies the notion of $F$-term equivalent paths, see \cite[Definition 2.5]{D} and \cite[Condition 4.12]{MR}). \medskip \begin{lemma} \label{lem:non-degenerate is shrubby} If there exists a non-degenerate $R$-charge, then $Q$ is shrubby. \end{lemma} \medskip \subsection{} \label{sub:broken} In the remainder of the paper, we provide proofs of some technical results about shrubs and pre-shrubs, specifically Propositions \ref{prop:no cycles}, \ref{prop:no edges}, \ref{prop:good or bad}, \ref{prop:key} and \ref{prop:count}. Throughout the present Section, we assume $Q$ to be a shrubby quiver, with universal cover $\widetilde{Q}$. All paths and cycles in a quiver are understood to be oriented. \medskip \begin{definition} \label{def:region and area} Given two paths $p$ and $p'$ in $\widetilde{Q}$ with the same endpoints, we will write $r(p,p')$ for the closed \textbf{region} inside ${\mathbb{R}}^2$ contained between $p$ and $p'$. The \textbf{area} of this region, denoted by $a(p,p') \in {\mathbb{N}}$, will refer to the number of faces contained inside $r(p,p')$. In particular, if $C$ is a cycle, we will write $r(C)$ and $a(C)$ for the closed region and area (respectively) contained inside $C$. \end{definition} \medskip \begin{proof} \emph{of Proposition \ref{prop:no cycles}:} Assume for the purpose of contradiction that a pre-shrub $S$ contains a cycle, and let us fix such a cycle $C$ of minimal area (as in Definition \ref{def:region and area}). 
We must have $a(C) > 2$, since otherwise $C$ would be the boundary of a face, or the union of boundaries of two faces which meet at a single point, both situations being forbidden for pre-shrubs. Definition \ref{def:consistent intro} for $p=C$ and $p' = \text{trivial}$ implies that there exist two adjacent faces (as in Figure 1) for which e.g. the red path is completely contained in $C$, and the red and blue regions are contained inside $r(C)$. By the defining property of a pre-shrub, $S$ also contains the blue path. Thus, the cycle $$ C' = C - \{\text{red path}\} + \{\text{blue path}\} $$ is contained in $S$, and moreover $a(C') = a(C) - 2$. This contradicts the minimality of the area of $C$. \end{proof} \medskip \begin{proof} \emph{of Proposition \ref{prop:no edges}:} Assume that $e$ is an edge from vertex $i$ to vertex $i'$, where $i,i' \in S$ but $e \not\subset S$. By the very definition of the root $r$ of a shrub, there are paths from $r$ to $i$ and $i'$, respectively. Following the aforementioned paths until they first intersect, we conclude that there exist simple paths \begin{align*} &p : j \rightarrow \dots \rightarrow i \\ &p' : j \rightarrow \dots \rightarrow i' \end{align*} with no vertices in common other than the source $j$. We have three scenarios. \medskip \noindent (1) If $j=i$, then $e$ and $p'$ are both paths from $i$ to $i'$. We may assume that $p'$ is chosen such that $a(e,p')$ is minimal. Definition \ref{def:consistent intro} implies that $p'$ contains a broken wheel $B$ (since $e$ consists of a single edge, it cannot contain a broken wheel). Since $S$ is a shrub, it therefore contains the mirror image $B'$ of $B$. Thus, if we modify $p'$ by replacing its sub-path $B$ with $B'$, then we contradict the minimality of $a(e,p')$. We conclude that this scenario is impossible. \medskip \noindent (2) If $j = i'$, then $C = p \cup e$ is a cycle, and we assume that $p$ is chosen so that $a(C)$ is minimal. 
If $a (C) = 1$ then we are done (since $r(C)$ would be precisely the face that realizes $e$ as the interface of a broken wheel contained in $S$), so let us assume for the purpose of contradiction that $a(C) > 1$. Definition \ref{def:consistent intro} implies that $C$ contains a broken wheel $B$. There are two sub-cases. \begin{itemize}[leftmargin=*] \medskip \item If $B \subseteq p$, then $S$ must also contain the mirror image $B'$ of $B$. If we modify $p$ by replacing its sub-path $B$ with $B'$, then we contradict the minimality of $a(C)$. \medskip \item If $e \subset B$, then the interface $e'$ of the broken wheel $B$ is an edge between two vertices of the shrub $S$. If $e' \subset S$, then we contradict the minimality of $a(C)$ and the fact that $a(C) > 1$. If $e' \not\subset S$, then there is a sub-path of $p$ from the source to the tail of $e'$, and we are thus in the self-contradictory situation of item (1). \end{itemize} \medskip \noindent (3) If $j \notin \{i,i'\}$, then let us choose $p,p',e$ such that $a(p \cup e, p')$ is minimal. In this case, Definition \ref{def:consistent intro} implies that one of $p \cup e$ or $p'$ contains a broken wheel $B$ whose interface is contained in $r(p \cup e, p')$. If $B \subseteq p$ or $B \subseteq p'$, then we may modify the path $p$ or $p'$ by replacing its sub-path $B$ with its mirror image, and contradict the minimality of $a(p \cup e, p')$. The only other possibility is that $e \subset B$, in which case the interface of $B$ must be an edge $e' : i' \rightarrow v$ for some vertex $v \in p$, as in Figure 5. \begin{figure}[H] \includegraphics[scale=0.45]{Short.png} \caption{The situation in item (3).} \end{figure} \noindent If $e' \subset S$, then concatenating $e'$ with the sub-path of $p$ that goes from $v$ to $i$ puts us in the situation of item (2) above. 
Meanwhile, if $e' \not\subset S$ and $v = j$, the cycle formed by $p'$ and $e'$ also puts us in the situation of item (2); since $e'$ must therefore be the interface of a broken wheel $B$ contained in $S$, replacing $p'$ by the mirror image $B'$ of $B$ would contradict the minimality of $a(p \cup e, p')$. Finally, if $e' \not\subset S$ and $v \neq j$, then we note that $$ a(p' \cup e', p'') < a(p \cup e, p') $$ (where $p''$ is the sub-path of $p$ that goes from $j$ to $v$) contradicts the minimality of $a(p \cup e, p')$. \end{proof} \medskip \begin{proof} \emph{of Proposition \ref{prop:good or bad}:} We will treat the case $s \in \{1,\dots,k-1\}$, and leave the analogous case $s = k$ as an exercise to the reader. Consider the paths $p$ and $p'$ of \eqref{eqn:path p}--\eqref{eqn:path p'}. Definition \ref{def:consistent intro} states that one of these paths must contain a broken wheel $B$; without loss of generality, let us assume that $B \subseteq p$. If $e_s$ were not part of $B$, then we would be able to modify $p$ by replacing its sub-path $B$ with its mirror image $B'$, and thus contradict the minimality of $a(p,p')$. Therefore, we may assume that $e_s$ is part of $B$, and thus there exists $v \in p$ and an edge $$ i \xrightarrow{e} v $$ such that the region bounded by $e$ and $p$ is a face. If $v = j$, then the index $s$ is good (since the whole of $p$ is the sought-for broken wheel, and its mirror image must coincide with $p'$ by minimality). Otherwise $v \neq j$ and let us consider the paths \begin{align*} &\tilde{p} : j \rightarrow \dots \rightarrow v \\ &\tilde{p}' : j \rightarrow \dots \xrightarrow{e_{s+1}} i \xrightarrow{e} v \end{align*} as in Figure 6. \begin{figure}[H] \includegraphics[scale=0.45]{Bad.png} \caption{A bad case.} \end{figure} \noindent Definition \ref{def:consistent intro} implies that one of the paths $\tilde{p}$ and $\tilde{p'}$ must contain a broken wheel $\tilde{B}$. 
If $\tilde{B}$ did not involve the edges $e_{s+1}$ or $e$, then we could contradict the minimality of $a(p,p')$ by replacing $\tilde{B}$ with its mirror image $\tilde{B}'$. We are left only with the possibility of $\tilde{B}$ involving the edges $e_{s+1}$ or $e$, and we have two cases: \medskip \begin{itemize}[leftmargin=*] \item If the interface $e'$ of $\tilde{B}$ is an edge from $i$ to some $v' \in p'$, then we assume $v' \neq j$ (as the case $v' = j$ can be treated like the case $v=j$ was treated above). We are thus in the situation of Figure 6 and the index $s$ is bad. \medskip \item If the interface $e'$ of $\tilde{B}$ is an edge from $v$ to some vertex $v' \in p' \backslash \{i\}$, then we are in the situation of Figure 7. We have two sub-cases. If $e' \subset S$, then we contradict the minimality of $a(p,p')$. On the other hand, if $e' \not\subset S$, then Proposition \ref{prop:no edges} forces $e'$ to be the interface of a broken wheel $\bar{B} \subset S$. The paths $$ v' \xrightarrow{\bar{B}} v \rightarrow \dots \xrightarrow{e_s} i $$ and $v' \rightarrow \dots \xrightarrow{e_{s+1}} i$ contradict the minimality of $a(p,p')$. \medskip \begin{figure} \includegraphics[scale=0.45]{Worse.png} \caption{An impossible case.} \end{figure} \end{itemize} \end{proof} \medskip \begin{proof} \emph{of Proposition \ref{prop:key}:} If $i$ is not an addable vertex, there must exist a bad index $s \in \{1,\dots,k\}$, i.e. either the situation of $s=1$ in the picture on the left of Figure 3 or the situation of $s=3$ in the picture on the right of Figure 3. In both of these cases, one can see a broken wheel in $S+i$ whose mirror image is not contained in $S+i$, thus precluding $S+i$ from being a shrub. \medskip \noindent Conversely, suppose that $i$ is an addable vertex, and let us show that $S+i$ is a shrub. It is clear that $i$ can be reached via a path from the root, and that there are no vertices $\notin S+i$ inside the polygonal regions incident to $i$ in Figure 2. 
\medskip \noindent Assume for the purpose of contradiction that $S+i$ contains the entire boundary of a face. Since $S$ cannot contain the entire boundary of a face (as $S$ is a shrub), then the boundary in question must involve the vertex $i$. However, this would require an edge from $i$ to a vertex of $S$, which is not in $S+i$ by assumption. \medskip \noindent Now let us assume that $S+i$ contains a broken wheel $B$, and let us show that it also contains its mirror image. Since $S$ is already a shrub, we may assume that the broken wheel $B$ involves the vertex $i$. By the definition of an addable vertex, all possible edges between $i$ and $S$ are as in Figure 2. Thus, the interface of the broken wheel $B$ must be one of the dotted edges in Figure 2, and it is clear that the mirror image of $B$ is also contained in $S+i$. \end{proof} \medskip \begin{proof} \emph{of Proposition \ref{prop:count}:} If $i$ is addable to $S$, then all $s \in \{1,\dots,k\}$ are good. Therefore, there exist only $k-1$ outgoing edges from $i$ to $S$, and they are arrayed as in Figure 2. Among any family of faces passing through $i$ and without other pairwise intersections, no two faces can pass through the same outgoing edge, so the cardinality of the family is at most $k-1$. It is also easy to see that this maximum can be achieved, by taking for instance the collection of faces incident to $e_1,\dots,e_{k-1}$ in counterclockwise order around $i$. \medskip \noindent If $i$ is non-addable to $S$, then there exists a bad index $s$. Assume first that $s \in \{1,\dots,k-1\}$, e.g. we are in the situation of $s=1$ in the picture on the left of Figure 3. The two faces contained in the region $r_s$, together with the faces incident to $e_1,\dots,e_{s-1}$ in counterclockwise order around $i$, and the faces incident to $e_{s+2},\dots,e_k$ in clockwise order around $i$, yield altogether a family of $k$ faces which only pairwise intersect at $i$. 
\medskip \noindent If $s = k$ is a bad index, then we are in the situation in the picture on the right of Figure 3. Without loss of generality, let us assume that there is a face incident to $e_k$ in counterclockwise order around $i$. Then this face together with the faces incident to $e_1,\dots,e_{k-1}$ in counterclockwise order around $i$, yield the required family of $k$ faces which only pairwise intersect at $i$. \end{proof} \bigskip
\section{Introduction} In this paper we continue the study of the problem of finding an integral representation for limits of oscillating integral energies $$u_{\ep}\mapsto \iO f\Big(x,\frac{x}{\ep^{\alpha}},u_{\ep}(x)\Big)\,dx,$$ where $\Omega\subset \R^N$ is a bounded open set, $\ep\to 0$, and the fields $u_{\ep}$ are subjected to $x-$dependent differential constraints of the type \be{eq:constrain-no-divergence} \sum_{i=1}^N A^i\Big(\frac{x}{\ep^{\beta}}\Big)\frac{\partial u_{\ep}(x)}{\partial {x_i}}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l),\,1<p<+\infty, \ee or in divergence form \be{eq:constrain-divergence} \sum_{i=1}^N \frac{\partial}{\partial {x_i}} \Big(A^i\Big(\frac{x}{\ep^{\beta}}\Big)u_{\ep}(x)\Big)\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l),\,1<p<+\infty, \ee with $A^i(x)\in Lin(\R^d;\R^l)$ for every $x\in\R^N$, $i=1,\cdots,N$, $d,l\geq 1$, and where $\alpha,\beta$ are two nonnegative parameters. Different regimes are expected to arise, depending on the relation between $\alpha$ and $\beta$. We recently analyzed in \cite{davoli.fonseca} the limit case in which $\alpha=0,\, \beta>0$, the energy density is independent of the first two variables, and the fields $\{u_{\ep}\}$ are subjected to \eqref{eq:constrain-divergence}. We will consider here the case in which $\alpha>0, \beta=0$ and \eqref{eq:constrain-no-divergence}, i.e., the energy density is oscillating but the differential constraint is fixed and in ``nondivergence" form. The situation in which there is an interplay between $\alpha$ and $\beta$ will be the subject of a forthcoming paper.\\ The key tool for our study is the notion of $\pdeor-$quasiconvexity with variable coefficients, characterized in \cite{santos}. $\pdeor-$quasiconvexity was first investigated by Dacorogna in \cite{dacorogna} and then studied by Fonseca and M\"uller in \cite{fonseca.muller} in the case of constant coefficients (see also \cite{fonseca.dacorogna}). 
More recently, in \cite{santos} Santos extended the analysis of \cite{fonseca.muller} to the case in which the coefficients of the differential operator $\pdeor$ depend on the space variable. In order to illustrate the main ideas of $\pdeor$-quasiconvexity, we need to introduce some notation. For $i=1,\cdots,N$, consider matrix-valued maps $A^i\in C^{\infty}(\R^N;\M^{l\times d})$, where for $l,d\in\N$, $\M^{l\times d}$ stands for the linear space of matrices with $l$ rows and $d$ columns, and for every $x\in \R^N$ define $\pdeor$ as the differential operator such that \be{eq:intro-def-op}\pdeor u:=\sum_{i=1}^N A^i(x)\frac{\partial u(x)}{\partial {x_i}},\,x\in\Omega\ee for $u\in L^1_{\rm loc}(\Omega; \R^d)$, where $\frac{\partial u}{\partial {x_i}}$ is to be interpreted in the sense of distributions. We require that the operator $\pdeor$ satisfies a uniform constant-rank assumption (see \cite{murat}), i.e., there exists $r\in \N$ such that \begin{equation} \label{cr} \text{rank }\sum_{i=1}^N A^i(x)w_i=r\quad\text{for every }w\in\mathbb{S}^{N-1}, \end{equation} uniformly with respect to $x$, where $\mathbb{S}^{N-1}$ is the unit sphere in $\mathbb{R}^N$. The definitions of $\pdeor$-quasiconvex function and $\pdeor$-quasiconvex envelope in the case of variable coefficients read as follows: \begin{definition} Let $f:\Omega\times\R^d \to \R$ be a Carath\'eodory function, let $Q$ be the unit cube in $\R^N$ centered at the origin, $$Q=\Big(-\frac{1}{2},\frac{1}{2}\Big)^N,$$ and denote by $C^{\infty}_{\rm per}(\R^N;\R^d)$ the set of smooth maps which are $Q$-periodic in $\R^N$. Consider the set $$\cx:=\Big\{w\in C^{\infty}_{\rm per}(\R^N;\R^d):\,\int_Q{w(y)\,dy}=0,\quad \sum_{i=1}^N A^i(x)\frac{\partial w(y)}{\partial {y_i}}=0 \Big\}.$$ For a.e. 
$x\in\Omega$ and $\xi\in\R^d$, the \emph{$\pdeor-$quasiconvex envelope} of $f$ in $x\in\Omega$ is defined as $$\qa f(x,\xi):=\inf\Big\{\iq f(x,\xi+w(y))\,dy:\,w\in\cx\Big\}.$$ $f$ is said to be \emph{$\pdeor$-quasiconvex} if $f(x,\xi)=\qa f(x,\xi)$ for a.e. $x\in\Omega$ and $\xi\in\R^d$. \end{definition} Denote by $\pdeor^c$ a generic differential operator, defined as in \eqref{eq:intro-def-op} and with constant coefficients, i.e. such that $$A^i(x)\equiv A^i_c\quad\text{for every }x\in\R^N,$$ with $A^i_c\in\M^{l\times d}$, $i=1,\cdots,N$. We remark that when $\pdeor = \pdeor^c= \rm curl$, i.e., when $v=\nabla\phi$ for some $\phi\in W^{ 1,1}_{\rm loc} (\Omega; \R^m )$, then $d=m\times N$, and $\pdeor$-quasiconvexity reduces to Morrey's notion of quasiconvexity (see \cite{acerbi.fusco, ball, marcellini, morrey}). The first identification of the effective energy associated to periodic integrands evaluated along $\pdeor^c$-free fields was provided in \cite{braides.fonseca.leoni}, by Braides, Fonseca and Leoni. Their homogenization results were later generalized in \cite{fonseca.kromer}, where Fonseca and Kr\"omer worked under weaker assumptions on the energy density $f$.\\ This paper is devoted to extending the results in \cite{fonseca.kromer} to the framework of $\pdeor-$quasiconvexity with variable coefficients. To be precise, in \cite{fonseca.kromer} the authors studied the homogenized energy associated to a family of functionals of the type $$F_{\ep}(u_{\ep}):=\int_{\Omega}f\big(x,\tfrac{x}{\ep},u_{\ep}(x)\big)\,dx,$$ where $\Omega$ is a bounded, open subset of $\R^N$, $u_{\ep}\wk u$ weakly in $L^p(\Omega;\R^d)$ and the sequence $\{u_{\ep}\}$ satisfies a differential constraint of the form $\pdeor^c u_{\ep}=0$ for every $\ep$. 
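For orientation, we record a second standard constant-coefficient example satisfying the constant rank condition \eqref{cr} (included purely for illustration): the divergence operator. Taking $d=N$, $l=1$ and $A^i(x)\equiv e_i^{T}$ for $i=1,\cdots,N$, so that $\pdeor^c u=\operatorname{div}u$ for $u\in L^1_{\rm loc}(\Omega;\R^N)$, one has $$\sum_{i=1}^N A^i_c\,\lambda_i=\lambda^{T}\in\M^{1\times N},$$ which has rank $1$ for every $\lambda\in\R^N\setminus\{0\}$; hence \eqref{cr} holds with $r=1$, and the admissible fields are the divergence-free fields.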
We analyze the analogous problem in the case in which $\pdeor$ depends on the space variable and the differential constraint is replaced by the condition $$\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l).$$ Our analysis leads to a limit homogenized energy of the form: $$\mathscr{E}_{\rm hom}(u):=\begin{cases}\int_{\Omega} f_{\rm hom}(x,u(x))\,dx&\text{if }\pdeor u=0,\\ +\infty&\text{otherwise in }L^p(\Omega;\R^d),\end{cases}$$ where $\mathscr{W}$ is the class of maps $w\in L^p(\Omega;L^p_{\rm per}(\R^N;\R^d))$ such that $$\int_Q w(x,y)\,dy=0\quad\text{for a.e. }x\in\Omega,$$ and \be{eq:condition-w} \sum_{i=1}^N A^i(x)\frac{\partial w(x,y)}{\partial {y_i}}=0\quad\text{in }W^{-1,p}(Q;\R^l)\,\text{ for a.e. }x\in\Omega,\ee and $f_{\rm hom}:\Omega\times \R^d\to [0,+\infty)$ is defined as $$f_{\rm hom}(x,\xi):=\liminfn\inf_{v\in\mathcal{C}_x}\int_Q f(x,ny,\xi+v(y))\,dy.$$ Our main result is the following. \begin{theorem} \label{thm:main} Let $1<p<+\infty$. Let $A^i\in C^{\infty}_{\rm per}(\R^N;\M^{l\times d})$, $i=1,\cdots, N$, and assume that $\pdeor$ satisfies the constant rank condition \eqref{cr}. Let $f:\Omega\times \Rn\times \rd\to[0,+\infty)$ be a function satisfying \ba{eq:hp-f-1} & f(x,\cdot,\xi)\quad\text{ is measurable},\\ &\label{eq:hp-f-2} f(\cdot,y,\cdot)\quad\text{ is continuous},\\ &\label{eq:hp-f-3} f(x,\cdot,\xi)\quad\text{ is }Q-\text{periodic},\\ &\label{eq:growth-p-f-3}0\leq f(x,y,\xi)\leq C(1+|\xi|^p)\quad\text{for all }(x,\xi)\in\Omega\times\rd,\text{ and for a.e. }y\in\Rn. 
\end{align} Then for every $u\in L^p(\Omega;\rd)$ there holds \begin{multline*} \inf\Big\{\liminf_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx:u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd)\\ \text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl)\Big\}\\ =\inf\Big\{\limsup_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx:u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd)\\ \text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl)\Big\}=\mathscr{E}_{\rm hom}(u). \end{multline*} \end{theorem} As in \cite{davoli.fonseca} and \cite{fonseca.kromer}, the proof of this result is based on the \emph{unfolding operator}, introduced in \cite{cioranescu.damlamian.griso08,cioranescu.damlamian.griso} (see also \cite{visintin, visintin1}). In contrast with \cite[Theorem 1.1]{fonseca.kromer} (i.e. the case in which $\pdeor=\pdeor^c$), here we are unable to work with exact solutions of the system $\pdeor u_{\ep}=0$, but instead we consider sequences of asymptotically $\pdeor-$vanishing fields. This is due to the fact that for $\pdeor$-quasiconvexity with variable coefficients we do not project directly on the kernel of the differential constraint, but construct an ``approximate" projection operator $P$ such that for every field $v\in L^p$, the $W^{-1,p}$ norm of $\pdeor Pv$ is controlled by the $W^{-1,p}$ norm of $v$ itself (for a detailed explanation we refer to \cite[Subsection 2.1]{santos}). In \cite{davoli.fonseca} the issue of defining a projection operator was tackled by imposing an additional invertibility assumption on $\pdeor$ and by exploiting the divergence form of the differential constraint. We do not add this invertibility requirement here, instead we use the fact that in our framework the differential operator depends on the ``macro" variable $x$ but acts on the ``micro" variable $y$ (see \eqref{eq:condition-w}). 
Hence it is possible to define a pointwise projection operator $\Pi(x)$ along the argument of \cite[Lemma 2.14]{fonseca.kromer} (see Lemma \ref{lemma:proj-operator}). As a corollary of our main result we recover an alternative proof of the relaxation theorem \cite[Theorem 1.1]{braides.fonseca.leoni} in the framework of $\pdeor-$quasiconvexity with variable coefficients, that is we obtain the identification (see Corollary \ref{thm:relax}) $$\int_D \qa f(x,u(x))\,dx=\mathcal{I}(u,D)$$ for every open subset $D$ of $\Omega$, and for every $u\in L^p(\Omega;\rd)$ satisfying $\pdeor u=0$, where the functional $\mathcal{I}$ is defined as \ba{eq:def-i-intro} &\mathcal{I}(u,D):=\inf\Big\{\liminf_{\ep\to 0}\int_{D}f(x,u_{\ep}(x))\,dx:\, \\ \nn&\quad u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd)\,\text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l)\Big\}. \end{align} We point out here that a proof of this relaxation theorem follows directly by combining \cite[Proof of Theorem 1.1]{braides.fonseca.leoni} with the arguments in \cite{santos}. The interest in Corollary \ref{thm:relax} lies in the fact that it is obtained as a by-product of our homogenization result, and thus by adopting a completely different proof strategy.\\ In analogy to \cite{davoli.fonseca} one might expect to be able to apply an approximation argument and extend the results in Theorem \ref{thm:main-result-A-free} to the situation in which $A^i\in W^{1,\infty}(\R^N;\M^{l\times d})$, $i=1,\cdots,N$, which is the least regularity assumption in order for $\pdeor$ to be well defined as a differential operator from $L^p$ to $W^{-1,p}$. We were unable to achieve this generalization, mainly because the projection operator here plays a key role in the proof of both the liminf and the limsup inequalities. 
In order to work with approximant operators $\pdeor^k$ having smooth coefficients, we would ultimately need to project onto the kernel of $\pdeor$, whereas the projection argument provided in \cite{santos} applies only to the case of smooth differential constraints.\\ The article is organized as follows. In Section \ref{section:prel} we establish the main assumptions on the differential operator $\pdeor$ and we recall some preliminary results on two-scale convergence. In Section \ref{section:pt-quas} we recall the definition of the $\pdeor$-quasiconvex envelope and we construct some examples of $\pdeor-$quasiconvex functions. Section \ref{section:A-free} is devoted to the proof of our main result.\\ \noindent\textbf{Notation}\\ Throughout this paper, $\Omega\subset \R^N$ is a bounded open set, $\mathcal{O}(\Omega)$ is the set of open subsets of $\Omega$, $Q$ denotes the unit cube in $\R^N$ centered at the origin and with normals to its faces parallel to the vectors in the standard orthonormal basis of $\R^N$, $\{e_1,\cdots,e_N\}$, i.e., $$Q=\Big(-\frac{1}{2},\frac{1}{2}\Big)^N.$$ Given $1<p<+\infty$, we denote by $p'$ its conjugate exponent, that is $$\frac{1}{p}+\frac{1}{p'}=1.$$ Whenever a map $u\in L^p, C^{\infty},\cdots$, is $Q-$periodic, that is $$u(x+e_i)=u(x),\quad i=1,\cdots, N,$$ for a.e. $x\in \R^N$, we write $u\in L^p_{\rm per}, C^{\infty}_{\rm per},\cdots$, respectively. We will implicitly identify the spaces $L^p(Q)$ and $L^p_{\rm per}(\R^N)$. We will designate by $\scal{\cdot}{\cdot}$ the duality product between $W^{-1,p}$ and $W^{1,p'}_0$. We adopt the convention that $C$ denotes a generic constant, whose value may change from expression to expression within the same formula. \section{Preliminary results} \label{section:prel} In this section we introduce the main assumptions on the differential operator $\pdeor$ and we recall some preliminary results about $\pdeor-$quasiconvexity and two-scale convergence.
\subsection{Preliminaries} \label{subsection:prel} For $i=1,\cdots, N$, consider the matrix-valued functions $A^i\in C^{\infty}(\R^N;\M^{l\times d})$. For $1<p<+\infty$ and $u\in L^p(\Omega;\R^d)$, we set $$\pdeor u:=\sum_{i=1}^N A^i(x)\frac{\partial u(x)}{\partial x_i} \in W^{-1,p}(\Omega;\R^l).$$ For every $x_0\in\Omega$ and $u\in L^p(\Omega;\rd)$ we define $$\pdeor(x_0) u:=\sum_{i=1}^N A^i(x_0)\frac{\partial u(x)}{\partial x_i} \in W^{-1,p}(\Omega;\R^l).$$ We will also consider the operators $$\pdeor_x w:=\sum_{i=1}^N A^i(x)\frac{\partial w(x,y)}{\partial {x_i}}$$ and $$\pdeor_y w:=\sum_{i=1}^N A^i(x)\frac{\partial w(x,y)}{\partial {y_i}}$$ for every $w\in L^p(\Omega\times Q;\R^d)$. Finally, for every $x_0\in\Omega$ and for $w\in L^p(\Omega\times Q;\R^d)$, we set $$\pdeor_x (x_0) w:=\sum_{i=1}^N A^i(x_0)\frac{\partial w(x,y)}{\partial {x_i}}$$ and $$\pdeor_y (x_0)w:=\sum_{i=1}^N A^i(x_0)\frac{\partial w(x,y)}{\partial {y_i}}.$$ For every $x\in \R^N$, $\lambda\in \R^N\setminus \{0\}$, let $\mathbb{A}(x,\lambda)$ be the linear operator $$\mathbb{A}(x,\lambda):=\sum_{i=1}^N A^i(x)\lambda_i\in \M^{l\times d}.$$ We assume that $\pdeor$ satisfies the following \emph{constant rank condition}: \be{eq:constant-rank-condition} \text{rank }\Big(\sum_{i=1}^N A^i(x)\lambda_i\Big)=r\quad\text{for some }r\in\mathbb{N}\text{ and for all }x\in\R^N,\lambda\in \Rn \setminus \{0\}. \ee For every $x\in\Rn$, $\lambda\in \Rn\setminus\{0\}$, let $\PP(x,\lambda):\rd\to\rd$ be the linear projection on Ker $\mathbb{A}(x,\lambda)$, and let $\QQ(x,\lambda):\rl\to\rd$ be the linear operator given by \begin{eqnarray*} &&\QQ(x,\lambda)\mathbb{A}(x,\lambda)\xi:=\xi-\PP(x,\lambda)\xi\quad\text{for all }\xi\in\rd,\\ &&\QQ(x,\lambda)\xi=0\quad\text{if }\xi\notin \text{Range }\mathbb{A}(x,\lambda). \end{eqnarray*} The main properties of $\PP(\cdot,\cdot)$ and $\QQ(\cdot,\cdot)$ are stated in the following proposition (see \cite[Subsection 2.1]{santos}).
\begin{proposition} \label{prop:properties-P-Q} Under the constant rank condition \eqref{eq:constant-rank-condition}, for every $x\in\Rn$ the operators $\PP(x,\cdot)$ and $\QQ(x,\cdot)$ are, respectively, $0-$homogeneous and $(-1)-$homogeneous. Moreover, $\PP\in C^{\infty}(\Rn\times \Rn\setminus\{0\};\M^{d\times d})$ and $\QQ\in C^{\infty}(\Rn\times\Rn\setminus\{0\};\M^{d\times l})$. \end{proposition} \subsection{Two-scale convergence} We recall here the definition and some properties of two-scale convergence. For a detailed treatment of the topic we refer to, e.g., \cite{allaire, lukkassen.nguetseng.wall, nguetseng}. Throughout this subsection $1<p<+\infty$. \begin{definition} If $v\in L^p(\Omega\times Q;\rd)$ and $\{u_{\ep}\}\subset L^p(\Omega;\rd)$, we say that $\{u_{\ep}\}$ \emph{weakly two-scale converges to} $v$ in $L^p(\Omega\times Q;\rd)$, $u_{\ep}\wkts v$, if $$\iO u_{\ep}(x)\cdot \varphi\Big(x,\frac{x}{\ep}\Big)\,dx\to \int_{\Omega}\int_Q v(x,y)\cdot \varphi(x,y)\,dy\,dx$$ for every $\varphi\in L^{p'}(\Omega;C^{\infty}_{\rm per}(\Rn;\rd))$.\\ We say that $\{u_{\ep}\}$ \emph{strongly two-scale converges to $v$} in $L^p(\Omega\times Q;\rd)$, $u_{\ep}\sts v$, if $u_{\ep}\wkts v$ and $$\lim_{\ep\to 0}\|u_{\ep}\|_{L^p(\Omega;\rd)}=\|v\|_{L^p(\Omega\times Q;\rd)}.$$ \end{definition} Bounded sequences in $L^p(\Omega;\rd)$ are pre-compact with respect to weak two-scale convergence. To be precise (see \cite[Theorem 1.2]{allaire}), \begin{proposition} \label{prop:2-scale-compactness} Let $\{u_{\ep}\}\subset L^p(\Omega;\rd)$ be bounded. Then, there exists $v\in L^p(\Omega\times Q;\rd)$ such that, up to the extraction of a (not relabeled) subsequence, $u_{\ep}\wkts v$ weakly two-scale in $L^p(\Omega\times Q;\R^d)$, and, in particular $$u_{\ep}\wk \iQ v(x,y)\,dy\quad\text{weakly in }L^p(\Omega;\rd).$$ \end{proposition} The following result will play a key role in the proof of the limsup inequality (see \cite[Proposition 2.4, Lemma 2.5 and Remark 2.6]{fonseca.kromer}).
\begin{proposition} \label{prop:simple-2-scale} Let $v\in L^p(\Omega;C_{\rm per}(\Rn;\rd))$ or $v\in L^p_{\rm per}(\Rn;C(\overline{\Omega};\rd))$. Then, the sequence $\{v_{\ep}\}$ defined as $$v_{\ep}(x):=v\Big(x,\frac{x}{\ep}\Big)$$ is $p-$equiintegrable, and $$v_{\ep}\sts v\quad\text{strongly two-scale in }L^p(\Omega\times Q;\rd).$$ \end{proposition} \subsection{The unfolding operator} \label{subsection:unfolding} We collect here the definition and some properties of the \emph{unfolding operator} (see e.g. \cite{cioranescu.damlamian.griso, cioranescu.damlamian.griso08, visintin, visintin1}). \begin{definition} Let $u\in L^p(\Omega;\rd)$. For every $\ep>0$, the unfolding operator $T_{\e}:L^p(\Omega;\rd)\to L^p(\R^N;L^p_{\rm per}(\Rn;\rd))$ is defined componentwise as \be{eq:unfolding-operator} T_{\e}(u)(x,y):=u\Big(\e\floor[\Big]{\frac{x}{\e}}+\e(y-\floor{y})\Big)\quad\text{for a.e. }x\in\Omega\text{ and }y\in\Rn,\ee where $u$ is extended by zero outside $\Omega$ and $\floor{\cdot}$ denotes the integer part. \end{definition} The next proposition and the subsequent theorem allow us to express the notion of two-scale convergence in terms of $L^p$ convergence of the unfolded sequences. \begin{proposition} \label{prop:isometry} (see \cite{cioranescu.damlamian.griso, visintin1}) $T_{\ep}$ is a nonsurjective linear isometry from $L^p(\Omega;\rd)$ to $L^p(\R^N\times Q;\rd)$. \end{proposition} The following theorem provides an equivalent characterization of two-scale convergence in our framework (see \cite[Proposition 2.5 and Proposition 2.7]{visintin1},\cite[Theorem 10]{lukkassen.nguetseng.wall}). \begin{theorem} \label{thm:equivalent-two-scale} Let $\Omega$ be an open bounded domain and let $v\in L^p(\Omega\times Q;\R^d)$. Assume that $v$ is extended to be $0$ outside $\Omega$.
Then the following conditions are equivalent: \begin{enumerate} \item[(i)]$u_{\ep}\wkts v\quad\text{weakly two-scale in }L^p(\Omega\times Q;\rd),$ \item[(ii)] $T_{\ep}u_{\ep}\wk v$ weakly in $L^p(\R^N\times Q;\rd)$. \end{enumerate} Moreover, $$u_{\ep}\sts v\quad\text{strongly two-scale in }L^p(\Omega\times Q;\rd)$$ if and only if $$T_{\ep}u_{\ep}\to v\quad\text{strongly in }L^p(\R^N\times Q;\rd).$$ \end{theorem} The following proposition is proved in \cite[Proposition A.1]{fonseca.kromer}. \begin{proposition} \label{prop:conv-unf-op} For every $u\in L^p(\Omega;\R^d)$ (extended by $0$ outside $\Omega$), $$\|u-T_{\ep}u\|_{L^p(\R^N\times Q;\R^d)}\to 0$$ as $\ep\to 0$. \end{proposition} \section{$\pdeor$-quasiconvex functions} \label{section:pt-quas} In this section we recall the notions of $\pdeor$-quasiconvexity and of the $\pdeor$-quasiconvex envelope, and we provide some examples of $\pdeor$-quasiconvex functions in the case in which $\pdeor$ has variable coefficients. We start by recalling the main definitions when $\pdeor=\pdeor^c$, where $\pdeor^c$ is a first order differential operator with constant coefficients, that is, for every $u\in L^p(\Omega;\R^d)$, $$\pdeor^c u(x):=\sum_{i=1}^N A^i_c\frac{\partial u(x)}{\partial x_i}\in W^{-1,p}(\Omega;\R^l),$$ with $A^i_c\in \M^{l\times d}$ for $i=1,\cdots, N$. \begin{definition} Let $f:\Omega\times \rd\to [0,+\infty)$ be a Carath\'eodory function, let $\pdeor^c$ be a first order differential operator with constant coefficients, and consider the set $$\mathcal{C}_{\rm const}:=\Big\{w\in C^{\infty}_{\rm per}(\R^N;\R^d):\,\int_Q{w(y)\,dy}=0\quad\text{and}\quad \sum_{i=1}^N A^i_c\frac{\partial w(y)}{\partial y_i}=0\Big\}.$$ The \emph{$\pdeor^c$-quasiconvex envelope} of $f$ is the function $Q^{\pdeor^c}f:\Omega\times\R^d\to [0,+\infty)$, given by \be{eq:def-A-quasiconvex-envelope-const} Q^{\pdeor^c}f(x,\xi):=\inf\Big\{\iq f(x,\xi+w(y))\,dy:\quad w\in\mathcal{C}_{\rm const}\Big\}, \ee for a.e.
$x\in\Omega$ and for all $\xi\in \R^d$. We say that $f$ is \emph{$\pdeor^c$-quasiconvex} if $$f(x,\xi)=Q^{\pdeor^c} f(x,\xi)\quad\text{for a.e. }x\in\Omega\text{ and for all } \xi\in\rd.$$ \end{definition} Similarly, in the case in which $\pdeor$ depends on the space variable, the definitions of $\pdeor$-quasiconvex envelope and $\pdeor$-quasiconvex function read as follows. \begin{definition} Let $f:\Omega\times \rd\to [0,+\infty)$ be a Carath\'eodory map, let $\pdeor$ be a first order differential operator with variable coefficients, and for every $x\in\Omega$ consider the set \be{eq:def-c-x} \cx:=\Big\{w\in C^{\infty}_{\rm per}(\R^N;\R^d):\,\int_Q{w(y)\,dy}=0\quad\text{and}\quad \sum_{i=1}^N A^i(x)\frac{\partial w(y)}{\partial y_i}=0 \Big\}. \ee The \emph{$\pdeor$-quasiconvex envelope} of $f$ is the function $\qa f:\Omega\times\R^d\to[0,+\infty)$, given by \be{eq:def-A-quasiconvex-envelope} \qa f(x,\xi):=\inf\Big\{\iq f(x,\xi+w(y))\,dy:\quad w\in \cx\Big\}, \ee for a.e. $x\in\Omega$ and for all $\xi\in \R^d$. We say that $f$ is \emph{$\pdeor$-quasiconvex} if $$f(x,\xi)=\qa f(x,\xi)\quad\text{for a.e. }x\in\Omega\text{ and for all } \xi\in\rd.$$ \end{definition} We finally introduce, for every $x_0\in\Omega$, the notion of pointwise $\pdeor(x_0)$-quasiconvex envelope and pointwise $\pdeor(x_0)$-quasiconvexity. \begin{definition} Let $x_0\in\Omega$. Let $f:\Omega\times \rd\to [0,+\infty)$ be a Carath\'eodory map and let $\pdeor$ be a first order differential operator with variable coefficients. The \emph{pointwise $\pdeor(x_0)$-quasiconvex envelope} of $f$ is the function $Q^{\pdeor(x_0)}f:\Omega\times\R^d\to[0,+\infty)$ given by $$Q^{\pdeor(x_0)}f(x,\xi):=\inf\Big\{\iq f(x,\xi+w(y))\,dy:\quad w\in \mathcal{C}_{x_0}\Big\}$$ for a.e. $x\in\Omega$ and for all $\xi\in\rd$. We say that $f$ is pointwise $\pdeor(x_0)$-quasiconvex in $x_0\in\Omega$, if $$f(x,\xi)=Q^{\pdeor(x_0)}f(x,\xi)\quad\text{for a.e. 
}x\in\Omega\text{ and for all }\xi\in\rd.$$ \end{definition} We stress that $\pdeor$-quasiconvexity and pointwise $\pdeor(x)$-quasiconvexity are related by the following ``fixed point'' relation: \be{eq:fixed-point-id}\qa f(x,\xi)=Q^{\pdeor(x)}f(x,\xi)\quad\text{for a.e. }x\in\Omega\text{ and for all }\xi\in\R^d.\ee The remaining part of this section is devoted to illustrating these concepts with some explicit examples of $\pdeor$-quasiconvex functions. We first exhibit an example where $\pdeor$-quasiconvexity reduces to ${\pdeor^c}$-quasiconvexity for a suitable operator ${\pdeor^c}$ with constant coefficients. \begin{example} \label{ex:basic} Let $1<p<+\infty$ and define $$f(x,\xi):=a(x)b(\xi)\quad\text{for a.e. }x\in\Omega\text{ and every }\xi\in\R^d,$$ with $a\in L^p(\Omega)$ and $b\in C(\R^d)\cap L^p(\R^d)$. In order for $f$ to be $\pdeor$-quasiconvex, the function $b$ must satisfy $$b(\xi)=\inf\Big\{\iq b(\xi+w(y))\,dy:\quad w\in \cup_{x\in\Omega}\,\cx\Big\}.$$ Consider the case in which $\cx$ is the same for every $x$, for example when the differential constraint is provided by the operator $$\pdeor w(x):=\sum_{i=1}^NM(x)A^i_c\frac{\partial w(x)}{\partial x_i},$$ where $M:\Omega\to \M^{l\times l}$ and $\det M(x)>0$ for every $x\in\Omega$. In this case, $$\cx=\Big\{w\in C^{\infty}_{\rm per}(\Rn;\rd):\iq w(y)\,dy=0\text{ and }{\pdeor^c}w=0\Big\}$$ for every $x$, where $$\pdeor^c w(x):=\sum_{i=1}^NA^i_c\frac{\partial w(x)}{\partial x_i}.$$ Hence, $f$ is $\pdeor$-quasiconvex if and only if $b$ is $\pdeor^c$-quasiconvex. \end{example} In the previous example, $\pdeor$-quasiconvexity could be reduced to $\pdeor^c$-quasiconvexity owing to the fact that the class $\cx$ was constant in $x$. We now provide an example where an analogous phenomenon occurs, despite the fact that $\cx$ varies with respect to $x$. To be precise, we consider the case in which $\pdeor$ is a smooth perturbation of the divergence operator.
In this situation, the $\pdeor$-quasiconvex envelope of $f$ coincides with its convex envelope. \begin{example} \label{ex:traditional} We consider a smooth perturbation of the divergence operator in a set $\Omega\subset \R^2$, that is $$\pdeor u:=\Big(\begin{array}{cc}a(x)& 0\\0&1\end{array}\Big)\nabla u\quad\text{for every }u\in L^p(\Omega;\R^2),\quad 1<p<+\infty,$$ with $a\in C(\overline{\Omega})$ and $$\tfrac 12\leq a(x)<1\quad\text{for every }x\in\Omega.$$ We notice that $$\ker \mathbb{A}(x,\lambda)=\Big\{\xi\in\R^2:a(x)\lambda_1\xi_1+\lambda_2\xi_2=0\Big\},$$ and therefore $${\rm rank }\,\mathbb{A}(x,\lambda)=1\quad\text{for every }x\in\Omega\text{ and }\lambda\in \R^2\setminus\{0\}.$$ In this situation, the class $\cx$ depends on $x$, since we have $$\cx=\Big\{w\in C^{\infty}_{\rm per}(\R^2;\R^2):\quad\int_Q w(y)\,dy=0\quad\text{and}\quad a(x)\frac{\partial w_1(y)}{\partial y_1}+\frac{\partial w_2(y)}{\partial y_2}=0\Big\},$$ although $$\cup_{\lambda\in\mathbb{S}^1}\ker \mathbb{A}(x,\lambda)=\R^2.$$ Let now $f:\rd\to [0,+\infty)$ be a continuous map. By \cite[Proposition 3.4]{fonseca.muller}, it follows that $\qa f(\xi)$ coincides with the convex envelope of $f$ evaluated at $\xi$, exactly as in the case of the divergence operator (see \cite[Remark 3.5 (iv)]{fonseca.muller}). \end{example} We conclude this section with an example in which the notion of $\pdeor$-quasiconvexity cannot be reduced to $\pdeor^c$-quasiconvexity with respect to a constant operator $\pdeor^c$. \begin{example} \label{ex:nontraditional} Here we consider a smooth perturbation of the {\rm curl} operator in a set $\Omega\subset \R^2$.
Let $\pdeor :L^p(\Omega;\R^2)\to W^{-1,p}(\Omega;\R^4)$ be given by $$\pdeor u=\sum_{i=1,2}A^i(x)\frac{\partial u}{\partial x_i}\quad\text{for }u\in L^p(\Omega;\R^2),\quad1<p<+\infty,$$ where $$A^i_{(j,k),(q)}(x):=a_i(x)\delta_{ij}\delta_{qk}-a_i(x)\delta_{ik}\delta_{qj}\,\text{ for every }x\in\Omega,\quad1\leq j,k,q\leq 2,$$ with $a_2(x)\equiv 1$, and $a_1\in C(\overline{\Omega})$ satisfies $$\tfrac 12\leq a_1(x)\leq 1\quad\text{for every }x\in{\Omega}.$$ We first notice that \begin{eqnarray*} \ker\mathbb{A}(x,\lambda)&=&\Big\{\xi\in \R^2:\, \lambda_1 a_1(x)\xi_2=\lambda_2 \xi_1\Big\}\\ &=&\Big\{\xi\in \R^2:\,\xi_2=\alpha \lambda_2\text{ and } \xi_1=\alpha a_1(x)\lambda_1,\,\alpha\in\R\Big\}, \end{eqnarray*} hence $${\rm rank }\,\mathbb{A}(x,\lambda)=1\quad\text{for every }x\in\Omega,\,\lambda\in\mathbb{S}^1.$$ The class $\cx$ depends on $x$ and there holds \ba{eq:cx-example} &\cx=\Big\{w\in C^{\infty}_{\rm per }(\R^2;\R^2):\,\int_Q w(y)\,dy=0\quad\text{and}\quad a_1(x)\frac{\partial w_{2}(y)}{\partial y_1}=\frac{\partial w_{1}(y)}{\partial y_2}\Big\}\\ \nn&\quad=\Big\{w\in C^{\infty}_{\rm per }(\R^2;\R^2):\,\int_Q w(y)\,dy=0,\,w_1(y)=a_1(x)\frac{\partial \varphi(y)}{\partial y_1},\\ \nn&\qquad\text{and}\quad w_2(y)=\frac{\partial \varphi(y)}{\partial y_2},\,\text{where }\varphi\in C^{\infty}_{\rm per}(\R^2)\Big\}. \end{align} Let now $g:\Omega\times \R^2\to [0,+\infty)$ be a quasiconvex function and let \mbox{$f:\Omega\times \R^2\to [0,+\infty)$} be defined as $$f(x,\xi):=g\Big(x,\Big(\frac{\xi_1}{a_1(x)},\xi_2\Big)\Big) \text{ for a.e. }x\in\Omega\text{ and for every }\xi\in\R^2.$$ We claim that $$\qa f(x,\xi)=f(x,\xi)\quad\text{for a.e. 
}x\in\Omega\text{ and for every }\xi\in\R^2.$$ Indeed, by \eqref{eq:cx-example} there holds \bas &\inf_{w \in \cx} \int_Q f(x,\xi+w(y))\,dy\\ &\quad=\inf\Big\{\int_Q f\Big(x,\Big(\xi_1+a_1(x)\frac{\partial \varphi(y)}{\partial y_1},\xi_2+\frac{\partial \varphi(y)}{\partial y_2}\Big)\Big)\,dy:\,\varphi\in C^{\infty}_{\rm per}(\R^2)\Big\}\\ &\quad=\inf\Big\{\int_Q g\Big(x,\Big(\frac{\xi_1}{a_1(x)}+\frac{\partial\varphi(y)}{\partial y_1},\xi_2+\frac{\partial\varphi(y)}{\partial y_2}\Big)\Big)\,dy:\,\varphi\in C^{\infty}_{\rm per}(\R^2)\Big\}\\ &\quad=Qg\Big(x,\Big(\frac{\xi_1}{a_1(x)},\xi_2\Big)\Big) \end{align*} for a.e. $x\in\Omega$ and for every $\xi\in\R^2$, where $Qg$ denotes the quasiconvex envelope of the function $g$. The claim follows by the definition of $f$ and the quasiconvexity of $g$. \end{example} \section{A homogenization result for $\pdeor$-free fields} \label{section:A-free} In this section we prove a homogenization result for oscillating integral energies under weak $L^p$ convergence of $\pdeor-$vanishing maps. Fix $1<p<+\infty$ and consider a function $f:\Omega\times\Rn\times\rd\to[0,+\infty)$ satisfying \eqref{eq:hp-f-1}--\eqref{eq:growth-p-f-3}. In analogy with the case of constant coefficients (see \cite[Definition 2.9]{fonseca.kromer}), we define the class of $\pdeor$-free fields as the set \be{eq:def-A-f}\mathscr{F}:=\Big\{v\in L^p(\Omega\times Q;\R^d):\,\pdeor_y v=0\quad\text{and }\pdeor_x \iQ v(x,y)\,dy=0\Big\},\ee where both differential conditions are understood in the sense of $W^{-1,p}$. We aim at obtaining a characterization of the homogenized energy \ba{eq:liminf-to-charact} &\inf\Big\{\liminf_{\ep\to 0} \iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx:\,\,u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\R^d)\\ \nn&\quad\text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl)\Big\}. \end{align} We start with a preliminary lemma, which will allow us to define a pointwise projection operator.
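To fix ideas, it may help to see the projection of the forthcoming Lemma \ref{lemma:proj-operator} in a concrete case; the following explicit formula is given only for illustration and is not used in the sequel. Consider the perturbed divergence operator of Example \ref{ex:traditional}, where $l=1$ and $d=N=2$. Denoting by $\hat{\psi}(\lambda)$ the Fourier coefficients of a mean-zero field $\psi\in L^p(Q;\R^2)$ and setting $v(x,\lambda):=(a(x)\lambda_1,\lambda_2)$, so that $\mathbb{A}(x,\lambda)\xi=v(x,\lambda)\cdot\xi$, one admissible choice is the mode-by-mode orthogonal projection onto $\ker\mathbb{A}(x,\lambda)$, namely $$\Pi(x)\psi(y)=\sum_{\lambda\in\Z^2\setminus\{0\}}\Big(\hat{\psi}(\lambda)-\frac{v(x,\lambda)\cdot\hat{\psi}(\lambda)}{|v(x,\lambda)|^2}\,v(x,\lambda)\Big)e^{2\pi i y\cdot\lambda}.$$ Each Fourier mode of $\Pi(x)\psi$ then satisfies $a(x)\lambda_1\xi_1+\lambda_2\xi_2=0$, so that $\pdeor_y(x)(\Pi(x)\psi)=0$, while the discrepancy $\psi-\Pi(x)\psi$ is controlled, mode by mode, by $\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)$.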
We will be using the notation introduced in Sections \ref{section:prel} and \ref{section:pt-quas}. \begin{lemma} \label{lemma:proj-operator} Let $1<p<+\infty$. Let $A^i\in C^{\infty}_{\rm per}(\R^N;\M^{l\times d})$, $i=1,\cdots, N$, and assume that the associated first order differential operator $\pdeor$ satisfies \eqref{eq:constant-rank-condition}. Then, for every $x\in\Omega$ there exists a projection operator $$\Pi(x): L^p(Q;\rd)\to L^p(Q;\rd)$$ such that \begin{enumerate} \item[(P1)]$\Pi(x)$ is linear and bounded, and vanishes on constant maps, \item[(P2)]$\Pi(x)\circ \Pi(x)\psi(y)=\Pi(x)\psi(y)\quad\text{and}\quad\pdeor_y(x) (\Pi(x)\psi(y))=0\quad\text{in }W^{-1,p}(Q;\rl)$ for a.e. $x\in\Omega$ and for every $\psi\in L^p(Q;\rd)$, \item[(P3)]there exists a constant $C=C(p)>0$, independent of $x$, such that $$\|\psi(y)-\Pi(x)\psi(y)\|_{L^p(Q;\rd)}\leq C\|\pdeor_y(x) \psi(y)\|_{W^{-1,p}(Q;\rl)}$$ for a.e. $x\in\Omega$ and for every $\psi\in L^p(Q;\rd)$ with $\iQ\psi(y)\,dy=0$, \item[(P4)]if $\{\psi_n\}$ is a bounded $p$-equiintegrable sequence in $L^p(Q;\R^d)$, then $\{\Pi(x)\psi_n(y)\}$ is a $p$-equiintegrable sequence in $L^p(\Omega\times Q;\R^d)$, \item[(P5)] if $\varphi\in C^1(\Omega;C^{\infty}_{\rm per}(\Rn;\rd))$ then the map $\varphi_{\Pi}$, defined by $$\varphi_{\Pi}(x,y):=\Pi(x)\varphi(x,y)\quad\text{for every }x\in\Omega\text{ and }y\in\R^N,$$ satisfies $\varphi_{\Pi}\in C^1(\Omega; C^{\infty}_{\rm per}(\R^N;\rd))$. \end{enumerate} \end{lemma} \begin{proof} For every $x\in\Omega$, let $\Pi(x)$ be the projection operator provided by \cite[Lemma 2.14]{fonseca.muller}. Properties (P1) and (P2) follow from \cite[Lemma 2.14]{fonseca.muller}. In order to prove (P3), fix $x\in\Omega$ and $\psi\in C^{\infty}_{\rm per}(\Rn;\rd)$. Let $\mathbb{A}$, $\PP$ and $\QQ$ be the operators defined in Subsection \ref{subsection:prel}.
Writing the operator $\Pi(x)$ explicitly, we have $$\Pi(x)\psi(y):=\sum_{\lambda\in\mathbb{Z}^N\setminus\{0\}}\PP(x,\lambda)\hat{\psi}(\lambda)e^{2\pi i y\cdot\lambda},$$ where $$\hat{\psi}(\lambda):=\iq \psi(y)e^{-2\pi i y\cdot\lambda}\,dy,\quad\text{for every }\lambda\in\Z^N\setminus\{0\},$$ are the Fourier coefficients associated with $\psi$. By the $(-1)$-homogeneity of the operator $\mathbb{Q}$ (see Proposition \ref{prop:properties-P-Q}) we deduce \ba{eq:proj-ast1} &|\psi(y)-\Pi(x)\psi(y)|^p=\Big|\sum_{\lambda\in\Z^N\setminus\{0\}}\mathbb{Q}(x,\lambda)\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)e^{2\pi iy\cdot\lambda}\Big|^p\\ \nn&\quad=\Big|\sum_{\lambda\in\Z^N\setminus\{0\}}\frac{1}{|\lambda|}\mathbb{Q}\Big(x,\frac{\lambda}{|\lambda|}\Big)\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)e^{2\pi iy\cdot\lambda}\Big|^p. \end{align} For $1<p<2$, by the smoothness of $\mathbb{Q}$ and by applying first H\"older's inequality and then the Hausdorff-Young inequality, we obtain the estimate \ba{eq:proj-ast2} & |\psi(y)-\Pi(x)\psi(y)|^p\\ \nn&\quad\leq \Big(\max_{\lambda\in\R^N,\,|\lambda|=1}\|\mathbb{Q}(x,\lambda)\|\Big)^p\Big(\sum_{\lambda\in\Z^N\setminus\{0\}}\frac{1}{|\lambda|^p}\Big)\Big(\sum_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{p'}\Big)^{\frac{p}{p'}}\\ &\nn\quad\leq C\Big(\sum_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{p'}\Big)^{\frac{p}{p'}}\leq C\|\pdeor_y(x)\psi(y)\|^p_{L^p(Q;\R^l)}, \end{align} where we used the fact that \be{eq:proj-ast3} |\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|\leq C|\widehat{\pdeor_y(x)\psi(y)}(\lambda)| \ee for every $x\in\Omega$ and $\lambda\in\Z^N\setminus\{0\}$, by the definition of the Fourier coefficients, and where both constants in \eqref{eq:proj-ast2} and \eqref{eq:proj-ast3} are independent of $\lambda$ and $x$. Consider now the case in which $p\geq 2$.
By \eqref{eq:proj-ast1} we have \ba{eq:proj-star1} & |\psi(y)-\Pi(x)\psi(y)|^p\\ \nn&\quad\leq \Big(\max_{\lambda\in\R^N,\,|\lambda|=1}\|\mathbb{Q}(x,\lambda)\|\Big)^p\Big(\sum_{\lambda\in\Z^N\setminus\{0\}}\frac{1}{|\lambda|^{p'}}\Big)^{\frac{p}{p'}}\Big(\sum_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{p}\Big)\\ &\nn\quad\leq C\Big(\sum_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{p}\Big)\\ &\nn\quad \leq C\Big(\sup_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{p-2}\Big)\sum_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{2}. \end{align} By the definition of Fourier coefficients and by H\"older's inequality we have $$|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|\leq C\|\pdeor_y(x)\psi(y)\|_{L^p(Q;\R^l)}$$ for every $x\in\Omega$ and $\lambda\in\Z^N\setminus\{0\}$. In view of \cite[Theorem 2.9]{fonseca.muller}, $$4\pi^2\sum_{\lambda\in\Z^N\setminus\{0\}}|\mathbb{A}(x,\lambda)\hat{\psi}(\lambda)|^{2}=\|\pdeor_y(x)\psi(y)\|^2_{L^2(Q;\R^l)}.$$ Therefore by \eqref{eq:proj-star1}, applying again H\"older's inequality, \ba{eq:proj-point1} & |\psi(y)-\Pi(x)\psi(y)|^p\\ &\nn\quad\leq C\|\pdeor_y(x)\psi(y)\|_{L^p(Q;\R^l)}^{p-2}\|\pdeor_y(x)\psi(y)\|^2_{L^2(Q;\R^l)}\leq C\|\pdeor_y(x)\psi(y)\|_{L^p(Q;\R^l)}^{p} \end{align} where the constant $C$ is independent of $x$ and $y$. Property (P3) follows by \eqref{eq:proj-ast2} and \eqref{eq:proj-point1} via a density argument. (P4) follows directly from (P3), arguing as in the proof of \cite[Lemma 2.14 (iv)]{fonseca.muller}. Let now $\varphi\in C^1(\Omega;C^{\infty}_{\rm per}(\Rn;\rd))$. The regularity of the map $\varphi_{\Pi}$ is a direct consequence of Proposition \ref{prop:properties-P-Q}, the definition of $\Pi$ and the regularity of $\pdeor$.
Indeed, \be{eq:direct-def}\varphi_{\Pi}(x,y):=\sum_{\lambda\in\Z^N\setminus\{0\}}\PP(x,\lambda)\hat{\varphi}(x,\lambda)e^{2\pi i y\cdot\lambda},\ee for every $x\in\Omega$ and $y\in \R^N$, where $$\hat{\varphi}(x,\lambda):=\iq \varphi(x,\xi)e^{-2\pi i\xi\cdot\lambda}\,d\xi$$ for every $x\in\Omega$ and $\lambda\in\Z^N\setminus\{0\}$. By the regularity of $\varphi$ and by \cite[Theorem 2.9]{fonseca.muller} we obtain the estimate $$\Big(4\pi^2\sum_{\lambda\in\Z^N\setminus\{0\}}|\lambda|^2|\hat{\varphi}(x,\lambda)|^2\Big)^{\tfrac12}\leq\big\|\nabla_y\varphi(x,\cdot)\big\|_{L^2(Q;\M^{d\times N})}\leq C$$ for every $x\in\Omega$, hence by Proposition \ref{prop:properties-P-Q} and the Cauchy-Schwarz inequality there holds \ba{eq:proj-point2} &\sum_{\lambda\in\Z^N\setminus\{0\},\,|\lambda|\geq n}|\mathbb{P}(x,\lambda)\hat{\varphi}(x,\lambda)e^{2\pi i y\cdot\lambda}|\leq C\sum_{\lambda\in\Z^N\setminus\{0\},\,|\lambda|\geq n}|\hat{\varphi}(x,\lambda)|\\ &\nn\quad\leq C\Big(\sum_{\lambda\in\Z^N\setminus\{0\},\,|\lambda|\geq n}|\hat{\varphi}(x,\lambda)|^2|\lambda|^2\Big)^{\tfrac12}\Big(\sum_{\lambda\in\Z^N\setminus\{0\},\,|\lambda|\geq n}\frac{1}{|\lambda|^2}\Big)^{\tfrac12}\\ &\nn\quad \leq C\Big(\sum_{\lambda\in\Z^N\setminus\{0\},\,|\lambda|\geq n}\frac{1}{|\lambda|^2}\Big)^{\tfrac12}. \end{align} By \eqref{eq:proj-point2} the series in \eqref{eq:direct-def} is uniformly convergent, and hence $\varphi_{\Pi}$ is continuous. The differentiability of $\varphi_{\Pi}$ follows from an analogous argument. \end{proof} For every $v\in L^p(\Omega\times Q;\rd)$, let \ba{eq:def-s-v} \mathcal{S}_v:=&\Big\{\{u_{\ep}\}\subset L^p(\Omega;\rd): u_{\ep}\wkts v\quad\text{weakly two-scale in }L^p(\Omega\times Q;\R^d)\\ \nn&\quad\text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl) \Big\}. \end{align} Let also \be{eq:def-s} \mathcal{S}:=\bigcup_{v\in L^p(\Omega\times Q;\rd)}\mathcal{S}_v.\ee We provide a characterization of the set $\mathcal{S}$.
\begin{lemma} \label{lemma:A-free-fields} Let $v\in L^p(\Omega\times Q;\R^d)$. Let $\pdeor$ be a first order differential operator with variable coefficients, satisfying \eqref{eq:constant-rank-condition}. The following conditions are equivalent: \begin{enumerate} \item[(C1)] $v\in \mathscr{F}$ (see \eqref{eq:def-A-f}); \item[(C2)] $\mathcal{S}_v$ is nonempty. \end{enumerate} \end{lemma} \begin{proof} We first show that (C2) implies (C1). Let $v\in L^p(\Omega\times Q;\rd)$ and let $\{u_{\ep}\}\subset \mathcal{S}_v$. Consider a test function $\psi\in W^{1,p'}_0(\Omega;\rl)$. Then $$\scal{\pdeor u_{\ep}}{\psi}\to 0\quad\text{as }\ep\to 0.$$ On the other hand, \be{eq:free-field-eq} \scal{\pdeor u_{\ep}}{\psi}:=-\sum_{i=1}^N\iO\Big(\frac{\partial A^i(x)}{\partial x_i}u_{\ep}(x)\cdot\psi(x)+A^i(x)u_{\ep}(x)\cdot\frac{\partial \psi(x)}{\partial x_i}\Big)\,dx \ee and by Proposition \ref{prop:2-scale-compactness}, up to the extraction of a (not relabeled) subsequence, \be{eq:free-field-wk} u_{\ep}\wk \iQ v(x,y)\,dy\quad\text{weakly in }L^p(\Omega;\rd).\ee Passing to the limit in \eqref{eq:free-field-eq} yields $$\pdeor_x \iQ v(x,y)\,dy=0\quad\text{in }W^{-1,p}(\Omega;\rl).$$ In order to deduce the second condition in the definition of $\mathscr{F}$, we consider a sequence of test functions $\big\{\ep\varphi\big(\frac{x}{\ep}\big)\psi(x)\big\}$, where $\varphi\in W^{1,p'}_0(Q;\rl)$ and $\psi\in C^{\infty}_c(\Omega)$.
Since this sequence is uniformly bounded in $W^{1,p'}_0(\Omega;\rl)$, we have $$\scal{\pdeor u_{\ep}}{\ep\varphi\Big(\frac{\cdot}{\ep}\Big)\psi}\to 0$$ as $\ep\to 0$, where \bas &\scal{\pdeor u_{\ep}}{\ep\varphi\Big(\frac{\cdot}{\ep}\Big)\psi}\\ &\quad=-\ep\sum_{i=1}^N\iO\Big(\frac{\partial A^i(x)}{\partial x_i}u_{\ep}(x)\cdot \varphi\Big(\frac{x}{\ep}\Big)\psi(x)+A^i(x)u_{\ep}(x)\cdot\varphi\Big(\frac{x}{\ep}\Big)\frac{\partial\psi(x)}{\partial x_i}\Big)\,dx\\ &\qquad - \sum_{i=1}^N\iO A^i(x)u_{\ep}(x)\cdot\frac{\partial\varphi}{\partial y_i}\Big(\frac{x}{\ep}\Big)\psi(x)\,dx. \end{align*} Passing to the subsequence of $\{u_{\ep}\}$ extracted in \eqref{eq:free-field-wk}, the first line of the previous expression converges to zero. By the definition of two-scale convergence, the second line converges to $$- \sum_{i=1}^N\int_{\Omega}\int_Q A^i(x)v(x,y)\cdot\frac{\partial \varphi(y)}{\partial y_i}\psi(x)\,dy\,dx,$$ and thus $$\scal{\sum_{i=1}^N A^i(x)\frac{\partial v(x,\cdot)}{\partial y_i}}{\varphi}_{W^{-1,p}(Q;\rl),W^{1,p'}_0(Q;\rl)}=0$$ for a.e. $x\in\Omega$, that is $$\pdeor_y v=0\quad\text{in }W^{-1,p}(Q;\rl)\quad\text{for a.e. }x\in\Omega.$$ This completes the proof of (C1).\\ Assume now that (C1) holds true, i.e., $v\in \mathscr{F}$. In order to construct the sequence $\{u_{\ep}\}$, set $$v_1(x,y)=v(x,y)-\iQ v(x,z)\,dz.$$ We first assume that $v_1\in C^1(\Omega;C^1_{\rm per}(\Rn;\rd))$. Defining $$u_{\ep}(x):=\iQ v(x,y)\,dy+v_1\Big(x,\frac{x}{\ep}\Big)\quad\text{for a.e. }x\in\Omega,$$ by Proposition \ref{prop:simple-2-scale} we have $u_{\ep}\sts v$ strongly two-scale in $L^p(\Omega\times Q;\R^d)$.
Moreover, by the definition of $\mathscr{F}$ and Propositions \ref{prop:2-scale-compactness} and \ref{prop:simple-2-scale}, \bas \sum_{i=1}^N A^i(x)\frac{\partial u_{\ep}(x)}{\partial x_i}&=\sum_{i=1}^N A^i(x)\frac{\partial}{\partial x_i}\Big(\iQ v(x,y)\,dy+v_1\Big(x,\frac{x}{\ep}\Big)\Big)\\ &=\sum_{i=1}^N A^i(x)\frac{\partial v}{\partial x_i}\Big(x,\frac{x}{\ep}\Big)\wk \sum_{i=1}^N A^i(x)\frac{\partial}{\partial {x_i}}\iQ v(x,y)\,dy=0 \end{align*} weakly in $L^p(\Omega;\rl)$, and hence strongly in $W^{-1,p}(\Omega;\rl)$ by the compact embedding of $L^p(\Omega;\rl)$ into $W^{-1,p}(\Omega;\rl)$. Hence $v$ satisfies $(C2)$. In the general case in which $v_1\in L^p(\Omega\times Q;\rd)$, we first need to approximate $v_1$ in order to keep the periodicity condition during the subsequent regularization. To this end, we extend $v_1$ to $0$ outside $\Omega\times Q$, we consider a sequence $\{\varphi_j\}\subset C^{\infty}_c(Q)$ such that $0\leq \varphi_j\leq 1$ and $\varphi_j\to 1$ pointwise, and we define the maps $$v_1^j(x,y):=\varphi_j(y)v_1(x,y)\quad\text{for a.e. }x\in\Omega\quad\text{and }y\in Q.$$ Extend these maps to $\Omega\times \R^N$ by periodicity. It is straightforward to see that \be{eq:double-lp-conv} v_1^j\to v_1\quad\text{strongly in }L^p(\Omega;L^p_{\rm per}(\R^N;\rd)) \ee by the dominated convergence theorem. Moreover \be{eq:double-right-conv} \|\pdeor_y v_1^j\|_{W^{-1,p}(Q;\rl)}\to 0\quad\text{strongly in }L^p(\Omega). \ee Indeed, by \eqref{eq:double-lp-conv}, and since $\pdeor_y v=0$, $$\|\pdeor_y(x) v_1^j(x,\cdot)\|_{W^{-1,p}(Q;\rl)}\to 0\quad\text{for a.e. }x\in\Omega,$$ and $$\|\pdeor_y(x) v_1^j(x,\cdot)\|^p_{W^{-1,p}(Q;\rl)}\leq C\|\varphi_j\|^p_{L^{\infty}(Q)}\|v_1(x,\cdot)\|^p_{L^p(Q;\rd)}\leq C\|v_1(x,\cdot)\|^p_{L^p(Q;\rd)}$$ for a.e. $x\in\Omega$.
Thus \eqref{eq:double-right-conv} follows by the dominated convergence theorem.\\ Convolving first with respect to $y$ and then with respect to $x$ we construct a sequence $\{v_1^{\delta,j}\}\subset C^{\infty}(\Omega; C^{\infty}_{\rm per}(\Rn;\rd))$ such that \be{eq:v-uno-delta-wk} v_1^{\delta,j}\to v_1^j\quad\text{strongly in }L^p(\Omega;L^p_{\rm per}(\Rn;\rd)), \ee and \be{eq:approx-double-right} \|\pdeor_y v_1^{\delta,j}\|_{W^{-1,p}(Q;\rl)}\to 0\quad\text{strongly in }L^{p}(\Omega),\ee as $\delta\to 0$. In view of \eqref{eq:double-lp-conv}--\eqref{eq:approx-double-right}, a diagonal argument provides a subsequence $\{\delta(j)\}$ such that $\{v_1^{\delta(j),j}\}$ satisfies \be{eq:limit-set1} v_1^{\delta(j),j}\to v_1\quad\text{strongly in }L^p(\Omega;\,L^p_{\rm per}(\R^N;\R^d)) \ee and \be{eq:limit-set2} \|\pdeor_yv_1^{\delta(j),j}\|_{W^{-1,p}(Q;\R^l)}\to 0\quad\text{strongly in }L^p(\Omega). \ee Set $$w^j(x,y):=\Pi(x)\Big(v_1^{\delta(j),j}(x,y)-\iQ v_1^{\delta(j),j}(x,y)\,dy\Big)$$ for a.e. $x\in\Omega$ and $y\in Q$. By Lemma \ref{lemma:proj-operator} we have $w^j\in C^{\infty}(\Omega;\,C^{\infty}_{\rm per}(\R^N;\R^d))$, \be{eq:limit-set1p} \pdeor_y w^j=0\quad\text{in }W^{-1,p}(Q;\R^l)\quad\text{for a.e. }x\in\Omega, \ee and \bas &\|w^j-v_1\|^p_{L^p(\Omega\times Q;\R^d)}\leq C\Big(\Big\|w^j-v_1^{\delta(j),j}+\iQ v_1^{\delta(j),j}(x,y)\,dy\Big\|^p_{L^p(\Omega\times Q;\R^d)}\\ &\nn\qquad+\Big\|v_1^{\delta(j),j}-v_1-\iQ v_1^{\delta(j),j}(x,y)\,dy\Big\|^p_{L^p(\Omega\times Q;\R^d)}\Big)\\ &\nn\quad\leq C\Big(\int_{\Omega}\|\pdeor_y(x) v_1^{\delta(j),j}(x,\cdot)\|^p_{W^{-1,p}(Q;\R^l)}\,dx\\ &\nn\qquad+\Big\|v_1^{\delta(j),j}-v_1-\iQ v_1^{\delta(j),j}(x,y)\,dy\Big\|^p_{L^p(\Omega\times Q;\R^d)}\Big). \end{align*} Therefore, in view of \eqref{eq:limit-set1} and \eqref{eq:limit-set2}, \be{eq:limit-set2p}w^j\to v_1\quad\text{strongly in }L^p(\Omega\times Q;\R^d).\ee We set $$u_{\ep}^{j}(x):=\iQ v(x,y)\,dy+w^j\Big(x,\frac{x}{\ep}\Big)\quad\text{for a.e. 
}x\in\Omega.$$ By Proposition \ref{prop:simple-2-scale} and \eqref{eq:limit-set2p}, \be{eq:limit-set-point1} u_{\ep}^{j}\sts v\quad\text{strongly two-scale in }L^p(\Omega\times Q;\R^d)\ee as $\ep\to 0$ and $j\to +\infty$, in this order. Moreover, by \eqref{eq:limit-set1p} and since $v\in \mathscr{F}$, $$\pdeor u_{\ep}^{j}=\sum_{i=1}^N A^i(x)\frac{\partial w^j}{\partial x_i}\Big(x,\frac{x}{\ep}\Big).$$ By \eqref{eq:limit-set2p}, Proposition \ref{prop:simple-2-scale}, and the compact embedding of $L^p$ into $W^{-1,p}$, we conclude that \be{eq:limit-set-point2} \pdeor u_{\ep}^{j}\to \pdeor \iQ v_1(x,y)\,dy=0\ee strongly in $W^{-1,p}(\Omega;\R^l)$, as $\ep\to 0$ and $j\to +\infty$, in this order. By \eqref{eq:limit-set-point1}, \eqref{eq:limit-set-point2}, and Theorem \ref{thm:equivalent-two-scale} it follows in particular that $$\lim_{j\to +\infty}\lim_{\ep\to 0}\Big(\|T_{\ep}u_{\ep}^{j}-v\|_{L^p(\Omega\times Q;\R^d)}+\|\pdeor u_{\ep}^{j}\|_{W^{-1,p}(\Omega;\R^l)}\Big)=0.$$ Attouch's diagonalization lemma \cite[Lemma 1.15 and Corollary 1.16]{attouch} provides us with a subsequence $\{j(\ep)\}$ such that, setting $u_{\ep}:=u_{\ep}^{j(\ep)}$, there holds $$T_{\ep}u_{\ep}\sts v\quad\text{strongly two-scale in }L^p(\Omega\times Q;\R^d)$$ and $$\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l).$$ The thesis follows by applying again Theorem \ref{thm:equivalent-two-scale}.
\end{proof} In order to state the main result of this section we introduce the classes \be{eq:def-u-f}\mathscr{U}:=\{u\in L^p(\Omega;\rd):\,\pdeor u=0\}\ee and \be{eq:def-w-f}\mathscr{W}:=\Big\{w\in L^p(\Omega\times Q;\rd): \iQ w(x,y)\,dy=0\quad\text{and }\pdeor_y w=0\Big\}.\ee It is clear that $v\in \mathscr{F}$ if and only if $$\iQ v(x,y)\,dy\in \mathscr{U}\quad\text{and}\quad v-\iQ v(x,y)\,dy\in \mathscr{W}.$$ Let ${\mathscr{E}}_{\rm hom}:L^p(\Omega;\rd)\to [0,+\infty]$ be the functional \be{eq:def-E-hom}{\mathscr{E}}_{\rm hom}(u):= \begin{cases}\liminf_{n\to +\infty}\inf_{w\in\mathscr{W}}\int_{\Omega}\int_Q f(x,ny, u(x)+w(x,y))\,dy\,dx&\text{if }u\in\mathscr{U},\\ +\infty&\text{otherwise in }L^p(\Omega;\rd).\end{cases}\ee We now provide a first characterization of \eqref{eq:liminf-to-charact}. \begin{theorem} \label{thm:main-result-A-free} Under the assumptions of Theorem \ref{thm:main}, for every $u\in L^p(\Omega;\R^d)$ there holds \begin{multline} \label{eq:main-A-free} \inf\Big\{\liminf_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx:u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd)\\ \text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl)\Big\}\\ =\inf\Big\{\limsup_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx:u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd)\\ \text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl)\Big\}={\mathscr{E}}_{\rm hom}(u). \end{multline} \end{theorem} We subdivide the proof of Theorem \ref{thm:main-result-A-free} into the proof of a limsup inequality (Corollary \ref{thm:limsup-A-free}) and a liminf inequality (Propositions \ref{thm:liminf-A-free-1} and \ref{thm:liminf-A-free-2}). We first show how an adaptation of the construction in Lemma \ref{lemma:A-free-fields} yields an outline for proving the limsup inequality in \eqref{eq:main-A-free}.
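The equivalence above can also be checked directly from \eqref{eq:def-u-f} and \eqref{eq:def-w-f}; the following elementary computation is only a sketch, with $u$ and $w$ serving as shorthand for the two components of $v$:

```latex
% Split v into its Q-average and its oscillating part:
\[
  v(x,y)=\underbrace{\int_Q v(x,\xi)\,d\xi}_{=:u(x)}
        +\underbrace{\Big(v(x,y)-\int_Q v(x,\xi)\,d\xi\Big)}_{=:w(x,y)},
  \qquad \int_Q w(x,y)\,dy=0.
\]
% Since u does not depend on y, the y-derivatives of v and w coincide:
%   \pdeor_y w = \pdeor_y v  in  W^{-1,p}(Q;\R^l)  for a.e.  x in \Omega.
% Hence w belongs to \mathscr{W} exactly when \pdeor_y v = 0, while u
% belongs to \mathscr{U} exactly when \pdeor \int_Q v(x,y)\,dy = 0.
```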
\begin{proposition} \label{thm:limsup-basic-A-free} Under the assumptions of Theorem \ref{thm:main}, for every $n\in \N$, $u\in\mathscr{U}$ and $w\in\mathscr{W}$ there exists a sequence $\{u_{\ep}\}\in \mathcal{S}_{u+w}$ (see \eqref{eq:def-s-v}) such that \ba{eq:1-thm45} &u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd),\\ &\label{eq:2-thm45}\limsup_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx\leq \int_{\Omega}\int_Q f\big(x,ny,u(x)+w(x,y)\big)\,dy\,dx. \end{align} \end{proposition} \begin{proof} \emph{Step 1}: We first assume that $u\in C(\Omega;\R^d)$ and $w\in C^1({\Omega}; C^1_{\rm per}(\R^N;\rd))$. \\ Arguing as in \cite[Proof of Proposition 2.7]{fonseca.kromer} we introduce the auxiliary function $$g(x,y):=f(x,ny,u(x)+w(x,y))$$ for every $x\in {\Omega}$ and for a.e. $y\in\R^N$. By definition, $g\in C(\Omega;L^p_{\rm per}(\R^N))$. Hence, setting $$g_{\ep}(x):=g\Big(x,\frac{x}{n\ep}\Big)\quad\text{for a.e. }x\in\Omega,$$ Proposition \ref{prop:simple-2-scale} yields \bas &\lim_{\ep \to 0}\iO f\Big(x,\frac{x}{\ep}, u(x)+w\Big(x,\frac{x}{n\ep}\Big)\Big)\,dx=\lim_{\ep \to 0}\iO g_{\ep}(x)\,dx\\ &\quad= \int_{\Omega}\int_Q g(x,y)\,dy\,dx=\int_{\Omega}\int_Q f(x,ny,u(x)+w(x,y))\,dy\,dx. \end{align*} Define $$u_{\ep}(x):=u(x)+w\Big(x,\frac{x}{n\ep}\Big)\quad\text{for a.e. }x\in{\Omega}.$$ By the periodicity of $w$ in the second variable and by the definition of $\mathscr{W}$, $$u_{\ep}\wk u+\iQ w(x,y)\,dy=u\quad\text{weakly in }L^p(\Omega;\rd).$$ By Proposition \ref{prop:simple-2-scale}, $u_{\ep}\sts u+w$ strongly two-scale in $L^p(\Omega\times Q;\R^d)$.
Finally (recalling the definitions of the classes $\mathscr{U}$ and $\mathscr{W}$) by the regularity of $w$ and by Proposition \ref{prop:simple-2-scale}, $$ \pdeor u_{\ep}=\sum_{i=1}^N A^i(x)\frac{\partial w}{\partial x_i}\Big(x,\frac{x}{n\ep}\Big)\wk \sum_{i=1}^N A^i(x)\frac{\partial}{\partial x_i}\iQ w(x,y)\,dy=0$$ weakly in $L^p(\Omega;\rl)$ and hence strongly in $W^{-1,p}(\Omega;\rl)$, due to the compact embedding of $L^p$ into $W^{-1,p}$.\\ \emph{Step 2}: Consider the general case in which $u\in \mathscr{U}$ and $w\in \mathscr{W}$. Arguing as in the second part of the proof of Lemma \ref{lemma:A-free-fields} (up to \eqref{eq:limit-set2p}), we construct a sequence $\{w^{j}\}\in C^1(\Omega; C^1_{\rm per}(\Rn;\rd))$ such that \be{eq:prop-strong} w^{j}\to u+w\quad\text{strongly in }L^p(\Omega;L^p_{\rm per}(\Rn;\rd)), \ee and $$\pdeor_y w^{j}= 0\quad\text{in }W^{-1,p}(Q;\R^l)\quad\text{for a.e. }x\in\Omega,\quad\text{for every }j.$$ Set $$u_{\ep}^{j}(x):=w^{j}\Big(x,\frac{x}{n\ep}\Big)\quad\text{for a.e. }x\in\Omega.$$ By Proposition \ref{prop:simple-2-scale}, there holds \be{eq:limsup-A}u_{\ep}^{j}\sts u+w\quad\text{strongly two-scale in }L^p(\Omega\times Q;\R^d),\ee as $\ep\to 0$ and $j\to +\infty$, in this order. In addition, arguing as in the proof of \eqref{eq:limit-set-point2}, we have \be{eq:limsup-B}\pdeor u_{\ep}^{j}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l)\ee as $\ep\to 0$ and $j\to +\infty$, in this order. To conclude, it remains to study the asymptotic behavior of the energies associated to the sequence $\{u_{\ep}^j\}$.
Consider the functions $$g^{j}(x,y):=f(x,ny,w^{j}(x,y))$$ and $$g^{j}_{\ep}(x):=g^{j}\Big(x,\frac{x}{n\ep}\Big).$$ Arguing as in Step 1, we obtain \ba{eq:limsup-C} &\lim_{j\to +\infty}\lim_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep}, u_{\ep}^{j}(x)\Big)\,dx= \lim_{j\to +\infty}\lim_{\ep\to 0}\iO g_{\ep}^{j}(x)\,dx\\ &\nn\quad=\lim_{j\to +\infty}\int_{\Omega}\int_Q g^{j}(x,y)\,dy\,dx= \int_{\Omega}\int_Q f(x,ny, u(x)+w(x,y))\,dy\,dx \end{align} where we used the periodicity of $g_{\ep}^{j}$, together with \eqref{eq:growth-p-f-3} and \eqref{eq:prop-strong}. In view of \eqref{eq:limsup-A}--\eqref{eq:limsup-C}, Attouch's diagonalization lemma \cite[Lemma 1.15 and Corollary 1.16]{attouch}, and Theorem \ref{thm:equivalent-two-scale}, we obtain a subsequence $\{j(\ep)\}$ such that $u_{\ep}:=u_{\ep}^{j(\ep)}$ satisfies both \eqref{eq:1-thm45} and \eqref{eq:2-thm45}. \end{proof} Proposition \ref{thm:limsup-basic-A-free} yields the following limsup inequality. \begin{corollary} \label{thm:limsup-A-free} Under the assumptions of Theorem \ref{thm:main}, for every $u\in L^p(\Omega;\R^d)$ \bas &\inf\Big\{\limsup_{\ep\to 0}\iO f\Big(x,\frac{x}{\ep},u_{\ep}(x)\Big)\,dx:u_{\ep}\wk u\quad\text{weakly in }L^p(\Omega;\rd)\\ &\quad\text{and }\pdeor u_{\ep}\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\rl)\Big\}\leq{\mathscr{E}}_{\rm hom}(u). \end{align*} \end{corollary} We now turn to the proof of the liminf inequality in Theorem \ref{thm:main-result-A-free}. For simplicity, we subdivide it into two intermediate results. 
\begin{proposition} \label{thm:liminf-A-free-1} Under the assumptions of Theorem \ref{thm:main}, for every sequence $\ep_n\to 0^+$, $u\in \mathscr{U}$ and $\{u_{n}\}\in L^p(\Omega;\rd)$ with $u_n\wk 0$ weakly in $L^p(\Omega;\rd)$ and $\pdeor u_n\to 0$ strongly in $W^{-1,p}(\Omega;\rl)$, there exists a $p$-equiintegrable family of functions $$\mathscr{V}:=\{v_{\nu,n}:\nu,n\in\N\}$$ such that $\mathscr{V}$ is a bounded subset of $L^p(\Omega;\R^d)$ and, for every $\nu\in\N$, as $n\to +\infty$ \bas &v_{\nu,n}\wk 0\quad\text{weakly in }L^p(\Omega;\rd),\\ &\pdeor v_{\nu,n}\to 0\quad\text{strongly in }W^{-1,q}(\Omega;\rl)\quad\text{for every }1<q<p. \end{align*} Furthermore, \bas &\liminfn \iO f\Big(x,\frac{x}{\en}, u(x)+u_n(x)\Big)\,dx\geq \sup_{\nu\in\N}\Big\{\liminfn \iO f(x,\nu n x,u(x)+v_{\nu,n}(x))\,dx\Big\}. \end{align*} \end{proposition} \begin{proof} The proof follows the argument of \cite[Proof of Proposition 3.8]{fonseca.kromer}. We sketch the main steps for the convenience of the reader.\\ \emph{Step 1}:\\ We first truncate our sequence in order to achieve $p$-equiintegrability.
Arguing as in \cite[Proof of Lemma 2.15]{fonseca.muller}, we construct a $p$-equiintegrable sequence $\{\tilde{u}_n\}\in L^p(\Omega;\rd)$ such that \bas &\tilde{u}_n-u_n\to 0\quad\text{strongly in }L^q(\Omega;\rd)\quad\text{for every }1<q<p,\\ &\tilde{u}_n\wk 0\quad\text{weakly in }L^p(\Omega;\rd),\\ &\pdeor \tilde{u}_n\to 0\quad\text{strongly in }W^{-1,q}(\Omega;\rl)\quad\text{for every }1<q<p, \end{align*} and $$\liminfn \iO f\Big(x,\frac{x}{\ep_n}, u(x)+u_n(x)\Big)\,dx\geq \liminfn \iO f\Big(x,\frac{x}{\ep_n}, u(x)+\tilde{u}_n(x)\Big)\,dx.$$ \emph{Step 2}: We consider the sequence $$\tilde{k}_{\nu,n}:=\frac{1}{\nu \ep_n}.$$ If $\{\tilde{k}_{\nu,n}\}$ is a sequence of integers (without loss of generality we can assume that it is increasing as $n$ increases), then there is nothing to prove and we simply set $$v_{\nu,k}:=\begin{cases} \tilde{u}_n&\text{if }k=\tilde{k}_{\nu,n},\\ 0&\text{otherwise}.\end{cases}$$ In the case in which $\{\tilde{k}_{\nu,n}\}$ is not a sequence of integers, we define $$k_{\nu,n}:=\frac{\theta_{\nu,n}}{\nu\ep_n},$$ where $$\theta_{\nu,n}:=\nu \ep_n\floor[\Big]{\frac{1}{\nu \ep_n}}.$$ In particular, since $1-\nu\ep_n\leq \theta_{\nu,n}\leq 1$, \be{eq:theta-nu-n} \theta_{\nu,n}\to 1\quad\text{as }n\to +\infty. \ee An adaptation of \cite[Lemma 2.8]{fonseca.kromer} applied to $\{\tilde{u}_n\}$ yields a $p$-equiintegrable sequence $\{\bar{u}_n\}\in L^p(Q;\rd)$ such that \begin{eqnarray} \nonumber&& \tilde{u}_n-\bar{u}_n\to 0\quad\text{strongly in }L^p(\Omega;\rd),\\ \nonumber&&\bar{u}_n\wk 0\quad\text{weakly in }L^p(Q\setminus \Omega;\rd),\\ \label{eq:pdeor-bar-un}&&\pdeor \bar{u}_n\to 0\quad\text{strongly in }W^{-1,q}(Q;\rl)\quad\text{for every }1<q<p. \end{eqnarray} Arguing as in \cite[Proof of Proposition 3.8]{fonseca.kromer} we obtain $$\liminfn \iO f\Big(x,\frac{x}{\ep_n}, u(x)+u_n(x)\Big)\,dx\geq \liminfn \iO f(x, \nu k_{\nu,n}x,u(x)+v_{\nu, k_{\nu,n}}(x))\,dx,$$ where $$v_{\nu, k_{\nu,n}}(x):=\bar{u}_n(\theta_{\nu,n}x)\quad\text{for a.e.
}x\in\Omega,$$ $\nu\in \N$ and $n\in\N$ are large enough so that $\theta_{\nu,n}\Omega\subset Q$. Setting $$v_{\nu,n}:=\begin{cases}v_{\nu,k_{\nu,n}}&\text{if }n=k_{\nu,n},\\0&\text{otherwise},\end{cases}$$ the sequence $\{v_{\nu, n}\}$ is uniformly bounded in $L^p(\Omega;\R^d)$, $p$-equiintegrable, and satisfies $$v_{\nu, n}\wk 0\quad\text{weakly in }L^p(\Omega;\rd)$$ as $n\to +\infty$. To conclude, it remains only to show that \be{eq:thesis-pde-liminf}\pdeor v_{\nu,n}\to 0\quad\text{strongly in }W^{-1,q}(\Omega;\rl)\quad\text{for every }1<q<p \ee as $n\to +\infty$. Let $q$ be fixed as above, and let $\varphi\in W^{1,q'}_0(\Omega;\rl)$. A change of variables yields \bas &|\scal{\pdeor v_{\nu,k_{\nu,n}}}{\varphi}|\\ &\quad=\Big|\iO \Big(\sum_{i=1}^N A^i(x)\bar{u}_n(\theta_{\nu,n}x)\cdot\frac{\partial \varphi(x)}{\partial x_i}+\sum_{i=1}^N\frac{\partial A^i(x)}{\partial x_i}\bar{u}_n(\theta_{\nu,n}x)\cdot\varphi(x)\Big)\,dx\Big|\\ &\quad=\frac{1}{\theta_{\nu,n}^{N}}\Big|\int_{\theta_{\nu,n}\Omega} \Big(\sum_{i=1}^N A^i\Big(\frac{y}{\theta_{\nu,n}}\Big)\bar{u}_n(y)\cdot\frac{\partial\varphi}{\partial x_i}\Big(\frac{y}{\theta_{\nu,n}}\Big)\\ &\qquad+\sum_{i=1}^N\frac{\partial A^i}{\partial x_i}\Big(\frac{y}{\theta_{\nu,n}}\Big)\bar{u}_n(y)\cdot\varphi\Big(\frac{y}{\theta_{\nu,n}}\Big)\Big)\,dy\Big|. \end{align*} For $n$ large enough, $\theta_{\nu,n}\Omega\subset Q$.
Hence, by \eqref{eq:theta-nu-n}, adding and subtracting the quantity $$\frac{1}{\theta_{\nu,n}^{N}}\int_{\theta_{\nu,n}\Omega} \Big(\sum_{i=1}^N A^i(y)\bar{u}_n(y)\cdot\frac{\partial\varphi}{\partial x_i}\Big(\frac{y}{\theta_{\nu,n}}\Big)+\sum_{i=1}^N\frac{\partial A^i}{\partial x_i}(y)\bar{u}_n(y)\cdot\varphi\Big(\frac{y}{\theta_{\nu,n}}\Big)\Big)\,dy,$$ we deduce the upper bound \bas & |\scal{\pdeor v_{\nu,n}}{\varphi}|\\ &\quad \leq C\sum_{i=1}^N\Big\|A^i(y)-A^i\Big(\frac{y}{\theta_{\nu,n}}\Big)\Big\|_{C^0(\bar{Q};\M^{l\times d})}\|\bar{u}_n\|_{L^q(Q;\rd)}\|\varphi\|_{W^{1,q'}_0(\Omega;\rl)}\\ &\qquad +C\sum_{i=1}^N\Big\|\frac{\partial A^i}{\partial x_i}(y)-\frac{\partial A^i}{\partial x_i}\Big(\frac{y}{\theta_{\nu,n}}\Big)\Big\|_{C^0(\bar{Q};\M^{l\times d})}\|\bar{u}_n\|_{L^q(Q;\rd)}\|\varphi\|_{W^{1,q'}_0(\Omega;\rl)}\\ &\qquad +C\|\pdeor \bar{u}_n\|_{W^{-1,q}(Q;\rl)}\|\varphi\|_{W^{1,q'}_0(\Omega;\rl)}. \end{align*} Property \eqref{eq:thesis-pde-liminf} follows now by \eqref{eq:theta-nu-n} and \eqref{eq:pdeor-bar-un}. \end{proof} To complete the proof of the liminf inequality in \eqref{eq:main-A-free} we apply the \emph{unfolding operator} (see Subsection \ref{subsection:unfolding}) to the set $\mathscr{V}$ constructed in Proposition \ref{thm:liminf-A-free-1}.
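Before entering the proof, it may help to recall the action of the unfolding operator at scale $\frac{1}{\nu}$; the numerical values below ($N=1$, $\nu=2$, $x=0.7$, with unit cell $Q=(0,1)^N$) are purely illustrative:

```latex
% Unfolding at scale 1/\nu (functions are extended by zero outside \Omega):
\[
  T_{\frac{1}{\nu}}v(x,y)
  = v\Big(\frac{1}{\nu}\lfloor \nu x\rfloor+\frac{1}{\nu}\,y\Big),
  \qquad x\in\Omega,\ y\in Q.
\]
% For N=1, \nu=2 and x=0.7 one has \lfloor\nu x\rfloor=\lfloor 1.4\rfloor=1,
% so T_{1/2}v(0.7,y)=v(1/2+y/2): as y ranges over Q=(0,1), the argument
% sweeps exactly the cell (1/2,1) of side 1/\nu containing x.
```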
\begin{proposition} \label{thm:liminf-A-free-2} Under the assumptions of Theorem \ref{thm:main}, for every $u\in\mathscr{U}$ and every family $\mathscr{V}=\{v_{\nu,n}:\,\nu, n\in \N\}$ as in Proposition \ref{thm:liminf-A-free-1} there holds $$\liminf_{\nu\to +\infty}\liminfn \iO f(x,\nu n x, u(x)+v_{\nu,n}(x))\,dx\geq {\mathscr{E}}_{\rm hom}(u).$$ \end{proposition} \begin{proof} Fix $u\in \mathscr{U}$ and let $\{v_{\nu,n}:\,\nu, n\in \N\}$ be $p$-equiintegrable and bounded in $L^p(\Omega;\R^d)$, with \be{eq:vnun-wk} v_{\nu,n}\wk 0\quad\text{weakly in }L^p(\Omega;\rd) \ee and \be{eq:vnun-pde} \pdeor v_{\nu,n}\to 0\quad\text{strongly in }W^{-1,q}(\Omega;\rl)\quad\text{for every }1<q<p, \ee as $n\to +\infty$, for every $\nu\in\N$. Fix $\Omega'\subset\subset \Omega$ and for $z\in\mathbb{Z}^N$ and $\nu\in\N$, define $$Q_{\nu,z}:=\frac{z}{\nu}+\frac{1}{\nu}Q,$$ and $$Z^{\nu}:=\{z\in \mathbb{Z}^N: Q_{\nu,z}\cap \Omega'\neq\emptyset\}.$$ We consider the maps $$T_{\frac{1}{\nu}}v_{\nu,n}(x,y):=v_{\nu,n}\Big(\frac{1}{\nu}\floor{\nu x}+\frac{1}{\nu}y\Big)\quad\text{for a.e. }x\in\Omega, y\in Q,$$ where we have extended the sequence $\{v_{\nu,n}\}$ to zero outside $\Omega$. A change of variables yields \bas &\int_{\Omega}f(x,\nu n x,u(x)+v_{\nu,n}(x))\,dx\geq \sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}}f(x,\nu n x,u(x)+v_{\nu,n}(x))\,dx\\ &\quad=\frac{1}{\nu^N} \sum_{z\in Z^{\nu}}\iQ f\Big(\frac{z}{\nu}+\frac{y}{\nu}, ny, u\Big(\frac{z}{\nu}+\frac{y}{\nu}\Big)+v_{\nu,n}\Big(\frac{z}{\nu}+\frac{y}{\nu}\Big)\Big)\,dy\\ &\quad=\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}}\iQ f\Big(\frac{\floor{\nu x}}{\nu}+\frac{y}{\nu}, ny, T_{\frac{1}{\nu}}u(x,y)+T_{\frac{1}{\nu}}v_{\nu,n}(x,y)\Big)\,dy\,dx\\ &\quad\geq \sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap\Omega'}\iQ f\Big(\frac{\floor{\nu x}}{\nu}+\frac{y}{\nu}, ny, T_{\frac{1}{\nu}}u(x,y)+T_{\frac{1}{\nu}}v_{\nu,n}(x,y)\Big)\,dy\,dx, \end{align*} where the last inequality is due to \eqref{eq:growth-p-f-3}.
By \cite[Proposition 3.6 (i)]{fonseca.kromer} and Proposition \ref{prop:conv-unf-op} we conclude that \ba{eq:almost-final-liminf} &\iO f(x,\nu n x, u(x)+v_{\nu,n}(x))\,dx\\ \nn&\quad\geq \sigma_{\nu}+\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap \Omega'}\iQ f(x,ny,u(x)+\hat{v}_{\nu,z,n}(y))\,dy\,dx, \end{align} where $$\hat{v}_{\nu,z,n}(y):=T_{\frac{1}{\nu}}v_{\nu,n}\Big(\frac{z}{\nu},y\Big)$$ for a.e. $y\in Q$, and $\sigma_{\nu}\to 0$ as $\nu\to +\infty$. The sequence $\{\hat{v}_{\nu,z,n}\}$ is $p$-equiintegrable by \cite[Proposition A.2]{fonseca.kromer}, and is uniformly bounded by \eqref{eq:vnun-wk} and Proposition \ref{prop:isometry}, since $$\iQ|\hat{v}_{\nu,z,n}(y)|^p\,dy=\nu^N\int_{Q_{\nu,z}}|v_{\nu,n}(x)|^p\,dx.$$ By the boundedness of $\{v_{\nu,n}:\,\nu,n\in\N\}$ in $L^p(\Omega;\R^d)$, and by \eqref{eq:vnun-wk} there holds \be{eq:vznun-wk}\hat{v}_{\nu,z,n}\wk 0\quad\text{weakly in }L^p(Q;\rd)\ee as $n\to +\infty$, for every $z\in Z^{\nu}$, $\nu\in\N$. Denoting by $\chi_{Q_{\nu,z}\cap\Omega'}$ the characteristic functions of the sets ${Q_{\nu,z}\cap\Omega'}$, we claim that \be{eq:most-difficult-estimate} \limsup_{\nu\to +\infty}\limsup_{n\to+\infty}\Big\|\Big\|\pdeor_y(x)\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\hat{v}_{\nu,z,n}(y)\Big\|_{W^{-1,q}(Q;\rl)}\Big\|_{L^q(\Omega)}=0\ee for every $1< q<p$. Indeed, fix $1<q<p$, and let $\psi\in W^{1,q'}_0(Q;\rl)$. Then \bas \Big|\scal{\pdeor_y\Big(\frac{z}{\nu}\Big)\hat{v}_{\nu,z,n}}{\psi}\Big|&= \Big|\iQ\sum_{i=1}^N A^i\Big(\frac{z}{\nu}\Big)v_{\nu,n}\Big(\frac{z}{\nu}+\frac{y}{\nu}\Big)\cdot\frac{\partial \psi(y)}{\partial y_i}\,dy\Big|\\ &=\nu^N\Big|\int_{Q_{\nu,z}}\sum_{i=1}^N A^i\Big(\frac{z}{\nu}\Big)v_{\nu,n}(x)\cdot\frac{\partial\psi}{\partial y_i}(\nu x-z)\,dx\Big|.
\end{align*} Adding and subtracting to the previous expression the quantity $$\nu^N\int_{Q_{\nu,z}}\sum_{i=1}^N A^i(x)v_{\nu,n}(x)\cdot\frac{\partial\psi}{\partial y_i}(\nu x-z)\,dx,$$ and setting $\phi^{\nu}_z(x):=\psi(\nu x-z)$ for a.e. $x\in\Omega$, we obtain the estimate \begin{align*} & \Big|\scal{\pdeor_y\Big(\frac{z}{\nu}\Big)\hat{v}_{\nu,z,n}}{\psi}\Big|\\ &\quad\leq \nu^N\Big\|\sum_{i=1}^N \Big(A^i\Big(\frac{z}{\nu}\Big)-A^i(x)\Big)v_{\nu,n}(x)\Big\|_{L^q(Q_{\nu,z};\rl)}\Big\|\frac{\partial\psi}{\partial y_i}(\nu x-z)\Big\|_{L^{q'}(Q_{\nu,z};\rl)}\\ &\qquad +\nu^{N-1}\Big\|\frac{\partial}{\partial x_i}(A^i(x)v_{\nu,n}(x))\Big\|_{W^{-1,q}(Q_{\nu,z};\rl)}\|\phi^{\nu}_z\|_{W^{1,q'}_0(Q_{\nu,z};\rl)}. \end{align*} A change of variables yields the upper bound $$\Big\|\frac{\partial\psi}{\partial y_i}(\nu x-z)\Big\|_{L^{q'}(Q_{\nu,z};\rl)}+\frac{\|\phi^\nu_z\|_{W^{1,q'}_0(Q_{\nu,z};\rl)}}{\nu} \leq \frac{C}{\nu^{\frac{N}{q'}}}\|\psi\|_{W^{1,q'}_0(Q;\rl)}.$$ Thus, by the regularity of the operators $A^i$, \ba{eq:compl1} \Big|\scal{\pdeor_y\Big(\frac{z}{\nu}\Big)\hat{v}_{\nu,z,n}}{\psi}\Big|&\leq C\nu^{\frac{N}{q}-1}\Big\|\sum_{i=1}^N\frac{\partial A^i}{\partial x_i}\Big\|_{L^{\infty}(Q;\M^{l\times d})}\|v_{\nu,n}\|_{L^q(Q_{\nu,z};\rd)}\|\psi\|_{W^{1,q'}_0(Q;\rl)}\\ \nn&\quad+C\nu^{\frac{N}{q}}\Big\|\frac{\partial}{\partial x_i}(A^i(x)v_{\nu,n}(x))\Big\|_{W^{-1,q}(Q_{\nu,z};\rl)}\|\psi\|_{W^{1,q'}_0(Q;\rl)}. 
\end{align} Using again the Lipschitz regularity of the operators $A^i$, $i=1,\cdots,N$, we deduce \ba{eq:compl2} & \|\pdeor_y(x)\hat{v}_{\nu,z,n}(y)\|_{W^{-1,q}(Q;\rl)}\leq \sum_{i=1}^N \Big\|A^i(x)-A^i\Big(\frac{z}{\nu}\Big)\Big\|_{L^{\infty}(Q;\M^{l\times d})}\|\hat{v}_{\nu,z,n}\|_{L^q(Q;\rd)}\\ \nn&\qquad+\Big\|\pdeor_y\Big(\frac{z}{\nu}\Big)\hat{v}_{\nu,z,n}(y)\Big\|_{W^{-1,q}(Q;\rl)}\\ \nn&\quad \leq \frac{C}{\nu}\Big\|\sum_{i=1}^N\frac{\partial A^i}{\partial x_i}\Big\|_{L^{\infty}(Q;\M^{l\times d})}\|\hat{v}_{\nu,z,n}\|_{L^q(Q;\rd)}+\Big\|\pdeor_y\Big(\frac{z}{\nu}\Big)\hat{v}_{\nu,z,n}(y)\Big\|_{W^{-1,q}(Q;\rl)} \end{align} for a.e. $x\in Q_{\nu,z}$. Hence, by \eqref{eq:compl1} and \eqref{eq:compl2}, we obtain \bas &\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap\Omega'}\|\pdeor_y(x)\hat{v}_{\nu,z,n}(y)\|_{W^{-1,q}(Q;\R^l)}^q\,dx\\ &\quad \leq \sum_{z\in Z^{\nu}}\frac{C}{\nu^q}\int_{Q_{\nu,z}\cap\Omega'} \Big\|\sum_{i=1}^N\frac{\partial A^i}{\partial x_i}\Big\|^q_{L^{\infty}(Q;\M^{l\times d})}\|\hat{v}_{\nu,z,n}\|^q_{L^q(Q;\R^d)}\,dx\\ &\qquad+C\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap\Omega'}\Big\|\pdeor_y\Big(\frac{z}{\nu}\Big)\hat{v}_{\nu,z,n}(y)\Big\|_{W^{-1,q}(Q;\R^l)}^q\,dx\\ &\quad\leq \frac{C}{\nu^q}\Big\|\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\hat{v}_{\nu,z,n}(y)\Big\|_{L^q(\Omega\times Q;\R^d)}^q+\frac{C}{\nu^q}\|v_{\nu,n}\|^q_{L^q(\Omega;\R^d)}\\ &\qquad+C\sum_{z\in Z^{\nu}}\Big\|\sum_{i=1}^N\frac{\partial (A^i v_{\nu,n})}{\partial x_i}\Big\|^q_{W^{-1,q}(Q_{\nu,z};\rl)}\\ &\quad\leq \frac{C}{\nu^q}\Big\|\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\hat{v}_{\nu,z,n}(y)\Big\|_{L^q(\Omega\times Q;\R^d)}^q+\frac{C}{\nu^q}\|v_{\nu,n}\|^q_{L^q(\Omega;\R^d)}\\ &\qquad+C\nu^{N}\Big\|\sum_{i=1}^N\frac{\partial (A^i v_{\nu,n})}{\partial x_i}\Big\|^q_{W^{-1,q}(\Omega;\rl)}. \end{align*} Property \eqref{eq:most-difficult-estimate} follows now by \eqref{eq:vnun-wk} and \eqref{eq:vnun-pde}, and by the
compact embedding of $L^p$ into $W^{-1,p}$. Consider the maps $$w_{\nu,n}(x,y):=\begin{cases}\Pi(x)\Big(\hat{v}_{\nu,z,n}(y)-\iQ \hat{v}_{\nu,z,n}(\xi)\,d\xi\Big)-\iQ\Pi(x)\Big(\hat{v}_{\nu,z,n}(y)-\iQ \hat{v}_{\nu,z,n}(\xi)\,d\xi\Big)\,dy\\ \qquad\qquad\text{for }x\in Q_{\nu,z}\cap\Omega',\,z\in Z^{\nu},\,y\in Q,\\ 0\quad\qquad\phantom{ii}\text{otherwise in }\Omega.\end{cases}$$ By Lemma \ref{lemma:proj-operator} the sequence $\{w_{\nu,n}\}$ is $p$-equiintegrable, and $$\pdeor_y w_{\nu,n}=0\quad\text{in }W^{-1,p}(Q;\R^l)\quad\text{for a.e. }x\in\Omega,$$ for all $\nu,n\in \N$. In particular, $\{w_{\nu,n}\}\subset \mathscr{W}$. We claim that \be{eq:replace-vznun} \Big\|w_{\nu,n}(x,y)-\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\hat{v}_{\nu,z,n}(y)\Big\|_{L^q(\Omega\times Q;\rd)}\to 0 \ee as $n\to +\infty$ and $\nu\to +\infty$, in this order, for every $1<q<p$. In fact, by Lemma \ref{lemma:proj-operator} there holds \bas &\Big\|\Pi(x)\Big(\hat{v}_{\nu,z,n}(y)-\iQ\hat{v}_{\nu,z,n}(\xi)\,d\xi\Big)-\hat{v}_{\nu,z,n}(y)\Big\|^q_{L^q(Q;\R^d)}\\ &\quad\leq C\Big(\|\pdeor_y(x)\hat{v}_{\nu,z,n}(y)\|^q_{W^{-1,q}(Q;\R^l)}+\Big|\iQ\hat{v}_{\nu,z,n}(y)\,dy\Big|^q\Big). \end{align*} Therefore \ba{eq:liminf-star} &\Big\|\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\Big(\Pi(x)\Big(\hat{v}_{\nu,z,n}(y)-\iQ\hat{v}_{\nu,z,n}(\xi)\,d\xi\Big)-\hat{v}_{\nu,z,n}(y)\Big)\Big\|^q_{L^q(\Omega\times Q;\R^d)}\\ \nn&\leq C\Big(\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap\Omega'}\|\pdeor_y(x)\hat{v}_{\nu,z,n}(y)\|^q_{W^{-1,q}(Q;\R^l)}\,dx\\ \nn&\quad+\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap\Omega'}\Big|\iQ\hat{v}_{\nu,z,n}(y)\,dy\Big|^q\,dx\Big). \end{align} The first term in the right-hand side of \eqref{eq:liminf-star} converges to zero as $n\to +\infty$ and $\nu\to +\infty$, in this order, owing to \eqref{eq:most-difficult-estimate}.
The second term in the right-hand side of \eqref{eq:liminf-star} converges to zero as $n\to +\infty$ and $\nu\to +\infty$, in this order, by the dominated convergence theorem, owing to \eqref{eq:vznun-wk} and the uniform boundedness in $L^p$ of $\{\hat{v}_{\nu,z,n}\}$. Hence, both the left-hand side of \eqref{eq:liminf-star} and the quantity $$\iQ\Big\{\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\Pi(x)\Big(\hat{v}_{\nu,z,n}(y)-\iQ\hat{v}_{\nu,z,n}(\xi)\,d\xi\Big)\Big\}\,dy$$ converge to zero as $n\to +\infty$ and $\nu\to +\infty$, in this order, and we obtain \eqref{eq:replace-vznun}. Up to the extraction of a (not relabeled) subsequence, we can assume that \ba{eq:en-is-lim} & \liminf_{\nu\to +\infty}\liminfn \sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap \Omega'}\iQ f(x,ny,u(x)+\hat{v}_{\nu,z,n}(y))\,dy\,dx\\ \nn&\quad =\lim_{\nu\to +\infty}\liminfn \sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap \Omega'}\iQ f(x,ny,u(x)+\hat{v}_{\nu,z,n}(y))\,dy\,dx. \end{align} Hence, in view of \eqref{eq:replace-vznun} and \eqref{eq:en-is-lim} we can extract a subsequence $\{n(\nu)\}$ such that \ba{eq:point-en} &\lim_{\nu\to +\infty}\liminf_{n\to +\infty} \sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap \Omega'}\iQ f(x,ny,u(x)+\hat{v}_{\nu,z,n}(y))\,dy\,dx\\ \nn&\quad=\lim_{\nu\to +\infty}\sum_{z\in Z^{\nu}}\int_{Q_{\nu,z}\cap \Omega'}\iQ f(x,n(\nu)y,u(x)+\hat{v}_{\nu,z,n(\nu)}(y))\,dy\,dx, \end{align} and \be{eq:point2-en} w_{\nu,n(\nu)}(x,y)-\sum_{z\in Z^{\nu}}\chi_{Q_{\nu,z}\cap\Omega'}(x)\hat{v}_{\nu,z,n(\nu)}(y)\to 0\quad\text{strongly in }L^q(\Omega\times Q;\rd), \ee for every $1<q<p$. Going back to \eqref{eq:almost-final-liminf}, by \cite[Proposition 3.5 (ii)]{fonseca.kromer}, \eqref{eq:point-en} and \eqref{eq:point2-en}, \bas &\liminf_{\nu\to +\infty}\liminf_{n\to+\infty}\iO f(x,\nu n x, u(x)+v_{\nu,n}(x))\,dx\\ &\quad\geq \liminf_{\nu\to +\infty}\int_{\Omega'}\iQ f(x, n(\nu) y, u(x)+w_{\nu,n(\nu)}(x,y))\,dy\,dx.
\end{align*} By the $p$-equiintegrability of $\{w_{\nu,n(\nu)}\}$ and by \eqref{eq:growth-p-f-3}, letting $|\Omega\setminus\Omega'|$ tend to zero, we conclude \bas &\liminf_{\nu\to +\infty}\liminf_{n\to+\infty}\iO f(x,\nu n x, u(x)+v_{\nu,n}(x))\,dx\\ &\quad\geq \liminf_{\nu\to +\infty}\int_{\Omega}\iQ f(x, n(\nu) y, u(x)+w_{\nu,n(\nu)}(x,y))\,dy\,dx\\ &\quad\geq \liminf_{\nu\to +\infty} \inf_{w\in \mathscr{W}}\int_{\Omega}\int_Q f(x,n(\nu)y,u(x)+w(x,y))\,dy\,dx\\ &\quad\geq \liminf_{n\to +\infty}\inf_{w\in \mathscr{W}}\int_{\Omega}\int_Q f(x,ny,u(x)+w(x,y))\,dy\,dx={\mathscr{E}}_{\rm hom}(u). \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main-result-A-free}] The proof follows by combining Corollary \ref{thm:limsup-A-free} with Propositions \ref{thm:liminf-A-free-1} and \ref{thm:liminf-A-free-2}. \end{proof} \begin{corollary} \label{cor:local} Under the assumptions of Theorem \ref{thm:main-result-A-free}, for every $u\in\mathscr{U}$, $${\mathscr{E}}_{\rm hom}(u)=\iO f_{\rm hom}(x,u(x))\,dx,$$ where $$f_{\rm hom}(x,u(x))=\liminfn \inf_{v\in\mathcal{C}_x}\iQ f(x,ny,u(x)+v(y))\,dy,$$ and $\mathcal{C}_x$ is the class defined in \eqref{eq:def-c-x}. \end{corollary} \begin{proof} We omit the proof of this corollary as it follows from \cite[Remark 3.3 (ii)]{fonseca.kromer} and by adapting the arguments in \cite[Corollary 3.2]{fonseca.kromer} and Lemma \ref{lemma:measurability} below. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] The thesis results from Theorem \ref{thm:main-result-A-free} and Corollary \ref{cor:local}. \end{proof} We conclude this section by showing that Theorem \ref{thm:main-result-A-free} yields a relaxation result in the framework of $\pdeor$-quasiconvexity with variable coefficients. Before stating the corollary, we prove a preliminary lemma which guarantees the measurability of the function $x\mapsto \qa f(x,u(x))$ for every $u\in L^p(\Omega;\rd)$.
\begin{lemma} \label{lemma:measurability} Let $1<p<+\infty$, $u\in L^p(\Omega;\rd)$, let $\pdeor$ be as in Theorem \ref{thm:main}, and let $f:\Omega\times \rd\to [0,+\infty)$ be a Carath\'eodory function satisfying $$0\leq f(x,\xi)\leq C(1+|\xi|^p)\quad\text{for a.e. }x\in\Omega\text{ and for all }\xi\in\R^d.$$ Then the map $$x\mapsto \qa f(x,u(x))$$ is measurable in $\Omega$. \end{lemma} \begin{proof} We first remark that \be{eq:bounded-test-fc} \qa f(x,u(x))=\inf_{r\in\N} \qa^r f(x,u(x))\quad\text{for a.e. }x\in\Omega, \ee where \bas \qa^r f(x,u(x)):=\inf &\Big\{ \iQ f(x,u(x)+w(y))\,dy:\, w\in\mathcal{C}_x\text{ and }\|w\|_{L^p(Q;\rd)}\leq r\Big\}, \end{align*} and $\cx$ is the class defined in \eqref{eq:def-c-x}. Clearly $$\qa^{r} f(x,u(x))\geq \qa f(x,u(x))$$ for a.e. $x\in\Omega$, for every $r\in\N$. Moreover, for every $\ep>0$ there exists $w_{\ep}\in \mathcal{C}_x$ such that \bas \qa f(x,u(x))&\geq \iQ f(x,u(x)+w_{\ep}(y))\,dy-\ep\\ &\quad\geq \qa^{\lceil\|w_{\ep}\|_{L^p(Q;\rd)}\rceil}f(x,u(x))-\ep\geq \inf_{r\in\N}\qa^r f(x,u(x))-\ep, \end{align*} which in turn implies the second inequality in \eqref{eq:bounded-test-fc}. By \eqref{eq:bounded-test-fc} it is enough to show that $x\mapsto \qa^r f(x,u(x))$ is measurable for every $r\in\N$. We claim that \be{eq:same-set} \qa^r f(x,u(x))=\sup_{n\in\N} \qa^{r,n} f(x,u(x))\quad\text{for a.e. }x\in\Omega, \ee where \bas \qa^{r,n} f(x,u(x))=\inf&\Big\{\iQ f(x,u(x)+w(y))\,dy+n\|\pdeor_y(x) w\|_{W^{-1,p}(Q;\rl)}:\\ &\quad w\in L^p(Q;\rd),\,\iQ w(y)\,dy=0\text{ and }\|w\|_{L^p(Q;\rd)}\leq r\Big\}. \end{align*} Clearly, $$\qa^{r,n}f(x,u(x))\leq \qa^r f(x,u(x))$$ for a.e. $x\in\Omega$, for all $n\in\N$.
To prove the opposite inequality, fix $x\in\Omega$, and for every $n\in\N$, let $w_n\in L^p(Q;\rd)$, with $\iQ w_n(y)\,dy=0$ and $\|w_n\|_{L^p(Q;\rd)}\leq r$, be such that \ba{eq:wn-pde} &\iQ f(x,u(x)+w_n(y))\,dy+n\|\pdeor_y(x) w_n\|_{W^{-1,p}(Q;\rl)}\\ &\nonumber\quad\leq \qa^{r,n} f(x,u(x))+\frac{1}{n}\leq f(x,u(x))+\frac{1}{n} \end{align} (the last inequality holds because $0\in\mathcal{C}_x$ for every $x\in\Omega$). Since $\{w_n\}$ is uniformly bounded in $L^p(Q;\rd)$ and by \eqref{eq:wn-pde} $$\|\pdeor_y(x) w_n\|_{W^{-1,p}(Q;\rl)}\to 0$$ as $n\to +\infty$, there exists a map $w\in L^p(Q;\rd)$, with $\iQ w(y)\,dy=0$, $\|w\|_{L^p(Q;\rd)}\leq r$ and $\pdeor_y(x) w=0$ such that, up to the extraction of a (not relabeled) subsequence, $$w_n\wk w\quad\text{weakly in }L^p(Q;\rd).$$ By \cite[Lemma 3.1]{braides.fonseca.leoni} we can construct a sequence $\{\tilde{w}_n\}$ such that $\iQ \tilde{w}_n(y)\,dy=0$, $\pdeor_y(x) \tilde{w}_n=0$ for every $n\in\N$, and \bas \liminfn \iQ f(x,u(x)+\tilde{w}_n(y))\,dy&\leq \liminfn \iQ f(x,u(x)+w_n(y))\,dy\\ &\leq\sup_{n\in\N}\qa^{r,n} f(x,u(x)). \end{align*} Since $\tilde{w}_n\in\mathcal{C}_x$ for every $n\in\N$, we have $$\qa f(x,u(x))\leq \iQ f(x,u(x)+\tilde{w}_n(y))\,dy\quad\text{for every }n\in\N,$$ and, in view of \eqref{eq:bounded-test-fc}, we obtain the second inequality in \eqref{eq:same-set}. By the measurability of $$x\mapsto \qa^{r,n}f(x,u(x))$$ for every $r,n\in\N$ (we can reduce it to a countable pointwise infimum of measurable functions), we deduce the measurability of $$x\mapsto \qa^r f(x,u(x))$$ for every $r\in\N$, which in turn implies the thesis. \end{proof} For every $D\in\mathcal{O}(\Omega)$ and $u\in L^p(\Omega;\R^d)$, define \begin{align} \label{eq:def-I} \mathcal{I}(u,D):=\inf\Big\{&\liminf_{n\to +\infty}\int_{D}f(x,u_n(x))\,dx:\,u_n\wk u\quad\text{weakly in }L^p(\Omega;\R^d)\, \\ &\nonumber\text{and }\pdeor u_n\to 0\quad\text{strongly in }W^{-1,p}(\Omega;\R^l)\Big\}. \end{align} Corollary \ref{cor:local} provides us with the following integral representation of $\mathcal{I}$.
\begin{corollary} \label{thm:relax} Let $1<p<+\infty$ and let $\pdeor$ be as in Theorem \ref{thm:main}. Let $f:\Omega\times \rd\to [0,+\infty)$ be a Carath\'eodory function satisfying $$0\leq f(x,\xi)\leq C(1+|\xi|^p)\quad\text{for a.e. }x\in\Omega\text{ and for all }\xi\in\R^d.$$ Then $$\int_D \qa f(x,u(x))\,dx=\mathcal{I}(u,D)$$ for all $D\in\Oo(\Omega)$ and $u\in L^p(\Omega;\rd)$ with $\pdeor u=0$. \end{corollary} \section*{Acknowledgements} The authors thank the Center for Nonlinear Analysis (NSF Grant No. DMS-0635983), where this research was carried out, and also acknowledge support of the National Science Foundation under the PIRE Grant No. OISE-0967140. The research of I. Fonseca and E. Davoli was funded by the National Science Foundation under Grant No. DMS-0905778. The research of E. Davoli was also supported by the Austrian FWF project ``Global variational methods for nonlinear evolution equations''. E. Davoli is a member of the INdAM-GNAMPA Project 2015 ``Critical phenomena in the mechanics of materials: a variational approach''. The research of I. Fonseca was further partially supported by the National Science Foundation under Grant No. DMS-1411646. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Conditional transition systems (CTS) have been introduced in \cite{ABHKMS12} as a model for systems whose behaviour is guarded by different conditions. Before an execution, a condition is chosen by the environment from a pre-defined set of conditions and, accordingly, the CTS is instantiated to a classical labelled transition system (LTS). In this work, we consider \emph{ordered} sets of conditions which allow for a change of conditions during runtime. It is allowed to replace a condition by a smaller condition, called an upgrade. An upgrade activates additional transitions compared to the previous instantiation of the system. Our focus lies on formulating a notion of behavioural equivalence, called \emph{conditional bisimilarity}, that is insensitive to changes in behaviour that may occur due to upgrades. Given two states, we want to determine under which conditions they are behaviourally equivalent. To compute this, we adopt a dual, but equivalent, view from lattice theory due to Birkhoff to represent a CTS by a lattice transition system (LaTS). In general, LaTSs are more compact in nature than their CTS counterparts. Moreover, we also develop an efficient procedure based on matrix multiplication to compute conditional bisimilarity. Such questions are relevant when we compare a system with its specification or when we want to modify a system in such a way that its observable behaviour is invariant. Furthermore, one requires minimisation procedures for transition systems that are potentially very large and need to be made more compact to be effectively used in analysis. An application of CTSs with upgrades is to model systems that deteriorate over time. Consider a system that is dependent on components that break over time or require calibration, in particular sensor components.
In such systems, due to inconsistent sensory data from a sensor losing its calibration, additional behaviour in a system may be enabled (which can be modelled as an upgrade) and chosen nondeterministically. Another field of interest, which will be explored in more detail, is software product lines (SPLs). SPLs refer to a software engineering method for managing and developing a collection of similar software systems with common features. To ensure correctness of such systems in an efficient way, it is common to specify the behaviour of many products in a single transition system and provide suitable analysis methods based on model-checking or behavioural equivalences (see \short{\cite{DBLP:conf/icse/CordyCPSHL12,terBeek2016:MTS,Classen:2013:FTS,Atlee:2015:MBI:2820126.2820133,Classen:2010:MCL:1806799.1806850,Classen:2011:symbolic,Dubslaff:2014:PMC,Chrszon2016:profeat}}\full{\cite{DBLP:conf/icse/CordyCPSHL12,terBeek2016:MTS,Classen:2013:FTS,Atlee:2015:MBI:2820126.2820133,Classen:2010:MCL:1806799.1806850,Classen:2011:symbolic,Dubslaff:2014:PMC,Chrszon2016:profeat,Gruler:2008:PL-ccs}}). Featured transition systems (FTS) -- a recent extension of conventional transition systems proposed by Classen et al.~\cite{Classen:2013:FTS} -- have become the standard formalism to model an SPL. An important issue usually missing in the theory of FTSs is the notion of self-adaptivity \cite{Cordy2013:adaptivefts}, i.e., the view that features or products are not fixed a priori, but may change during runtime. We will show that FTSs can be considered as CTSs without upgrades where the conditions are the powerset of the features. Additionally, we propose to incorporate into software product lines a notion of upgrades, which cannot be captured by FTSs. 
Furthermore, we consider deactivation of transitions in \short{\cite{bkks:cts-upgrades-arxiv}}\full{Appendix~\ref{sec:deactivating-transitions}}, to which our techniques can easily be adapted, though some mathematical elegance is lost in the process. Our contributions are as follows. First, we make the different levels of granularity -- features, products and sets of products -- in the specification of SPLs explicit and give a theoretical foundation in terms of Boolean algebras and lattices. Second, we present a theory of behavioural equivalences with corresponding games and algorithms and applications to conventional and adaptive SPLs. Third, we present our implementation based on binary decision diagrams (BDDs), which provide a compact encoding of propositional formulae, and also show how they can be employed in a lattice-based setting. Lastly, we show how a BDD-based matrix multiplication algorithm provides us with an efficient way to check bisimilarity relative to the naive approach of checking all products separately. This paper is organised as follows. Section~\ref{sec:preliminaries} recalls the fundamentals of lattice theory relevant to this paper. Then, in Section~\ref{sec:cts} we formally introduce CTSs and conditional bisimilarity. In Section~\ref{sec:lats}, using the Birkhoff duality, it is shown that CTSs can be represented as lattice transition systems (LaTSs) whose transitions are labelled with the elements from a distributive lattice. Moreover, the bisimilarity introduced on LaTSs is shown to coincide with the conditional bisimilarity on the corresponding CTSs. In Section~\ref{sec:matrix-mult}, we show how bisimilarity can be computed using a form of matrix multiplication. Section~\ref{sec:spl} focusses on the translation between an FTS and a CTS, and moreover, a BDD-based implementation of checking bisimilarity is laid out. Lastly, we conclude with a discussion on related work and future work in Section~\ref{sec:conclusion}. 
All the proofs can be found in \short{\cite{bkks:cts-upgrades-arxiv}}\full{Appendix~\ref{sec:proofs}}. \section{Preliminaries} \label{sec:preliminaries} We now recall some basic definitions concerning lattices, including the well-known Birkhoff's duality result from \cite{dp:lattices-order}. \begin{defn}[\rm Lattice, Heyting Algebra, Boolean Algebra] \label{def:lattice} Let $(\mathbb{L},\sqsubseteq)$ be a partially ordered set. If for each pair of elements $\ell,m\in\mathbb{L}$ there exists a supremum $\ell\sqcup m$ and an infimum $\ell\sqcap m$, we call $(\mathbb L,\sqcup,\sqcap)$ a \emph{lattice}. A \emph{bounded lattice} has a top element $1$ and a bottom element $0$. A lattice is \emph{complete} if every subset of $\mathbb{L}$ has an infimum and a supremum. It is \emph{distributive} if $(\ell \sqcup m)\sqcap n=(\ell\sqcap n)\sqcup (m\sqcap n)$ holds for all $\ell,m,n\in\mathbb{L}$. A bounded lattice $\mathbb L$ is a \emph{Heyting algebra} if for any $\ell,m\in\mathbb L$, there is a greatest element $\ell'$ such that $\ell\sqcap \ell'\sqsubseteq m$. The residuum and negation are defined as $\ell\rightarrow m=\bigsqcup\{\ell'\mid \ell\sqcap \ell'\sqsubseteq m\}$ and $\neg \ell=\ell \rightarrow 0$. A \emph{Boolean algebra} $\mathbb L$ is a Heyting algebra satisfying $\neg \neg \ell = \ell$ for all $\ell\in\mathbb L$. \end{defn} \begin{example} Given a set of atomic propositions $N$, consider $\mathbb{B}(N)$, the set of all Boolean expressions over $N$, i.e., the set of all formulae of propositional logic. We equate every subset $C\subseteq N$ with the evaluation that assigns $\mathit{true}$ to all $f\in C$ and $\mathit{false}$ to all $f\in N\backslash C$. For $b\in\mathbb{B}(N)$, we write $C\models b$ whenever $C$ satisfies $b$. Furthermore we define $\llbracket b \rrbracket = \{C\subseteq N \mid C\models b\} \in \mathcal{P}(\mathcal{P}(N))$. Two Boolean expressions $b_1,b_2$ are called equivalent whenever $\llbracket b_1 \rrbracket = \llbracket b_2 \rrbracket$. 
Furthermore, $b_1$ implies $b_2$ ($b_1\models b_2$), whenever $\llbracket b_1\rrbracket \subseteq \llbracket b_2\rrbracket$. The set $\mathbb{B}(N)$, quotiented by equivalence, is a Boolean algebra, isomorphic to $\mathcal{P}(\mathcal{P}(N))$, where $\llbracket b_1 \rrbracket \sqcup \llbracket b_2 \rrbracket = \llbracket b_1 \rrbracket \cup \llbracket b_2 \rrbracket = \llbracket b_1 \lor b_2 \rrbracket$, analogously for $\sqcap,\cap,\land$, $\lnot \llbracket b \rrbracket = \mathcal{P}(N)\backslash \llbracket b \rrbracket = \llbracket \lnot b \rrbracket$, and $\llbracket b_1 \rrbracket \to \llbracket b_2 \rrbracket = \mathcal{P}(N)\backslash \llbracket b_1 \rrbracket \cup \llbracket b_2 \rrbracket = \llbracket \lnot b_1 \lor b_2 \rrbracket$. \end{example} Distributive lattices and Boolean algebras give rise to an interesting duality result, which was first stated for finite lattices by Birkhoff and extended to the infinite case by Priestley \cite{dp:lattices-order}. In the sequel we will focus on finite distributive lattices (which are Heyting algebras). We first need the following concepts. \begin{defn} Let $\mathbb{L}$ be a lattice. An element $n\in\mathbb L\setminus\{0\}$ is said to be \emph{(join-)irreducible} if whenever $n=\ell\sqcup m$ for elements $\ell,m\in\mathbb L$, it always holds that $n=\ell$ or $n=m$. We write $\mathcal J(\mathbb L)$ for the set of all irreducible elements of $\mathbb L$. Let $(S,\le)$ be a partially ordered set. A subset $S'\subseteq S$ is \emph{downward-closed} whenever $s'\in S'$ and $s\le s'$ implies $s\in S'$. We write $\mathcal{O}(S)$ for the set of all downward-closed subsets of $S$ and $\history s = \{s' \mid s' \leq s\}$ for the downward-closure of $s\in S$. \end{defn} \begin{example} For our example of a Boolean algebra $\mathbb{B}(N)$, quotiented by equivalence, the irreducibles are the complete conjunctions of literals, or, alternatively, all sets $C\subseteq N$. 
\end{example} We can now state Birkhoff's representation theorem for finite distributive lattices~\cite{dp:lattices-order}. \begin{theorem} \label{th:birkhoff} If $\mathbb L$ is a finite distributive lattice, then $(\mathbb L,\sqcup,\sqcap)\cong(\mathcal O(\mathcal J(\mathbb L)),\cup,\cap)$ via the isomorphism $\eta:\mathbb L\rightarrow\mathcal O(\mathcal J(\mathbb L))$, defined as $\eta(\ell)=\{\ell'\in\mathcal J(\mathbb L)\mid \ell'\sqsubseteq \ell\}$. Furthermore, given a finite partially ordered set $(S,\leq)$, the downward-closed subsets of $S$, $(\mathcal O(S),\cup,\cap)$ form a distributive lattice, with inclusion ($\subseteq$) as the partial order. The irreducibles of this lattice are all downward-closed sets of the form $\history{s}$ for $s\in S$. \end{theorem} \begin{example}\label{ex:Lattice} Consider the lattice $\mathbb L=\{0,a,b,c,d,e,f,1\}$ with the order depicted in Figure~\ref{fig:ex:mot-Birkhoff}. \begin{figure} \centering \scalebox{0.8}{ \begin{tikzpicture}[on grid, node distance=.8cm] \node (top) {$1$} ; \node[below=of top] (anker) {} ; \node[right= of anker] (ac) {$d$} ; \node[left= of anker] (d) {$f$} ; \node[below= of anker] (ab) {$c$} ; \node[below=of d] (anker2) {} ; \node[below= of anker2] (a) {$a$} ; \node[right= of a] (anker3) {} ; \node[right= of anker3] (b) {$b$} ; \node[right= of ab] (anker4) {} ; \node[right= of anker4] (c) {$e$} ; \node[below=of a] (anker5) {} ; \node[right= of anker5] (bot) {$0$} ; \begin{scope}[-] \draw (top) edge (d) ; \draw (top) edge (ac) ; \draw (d) edge (ab) ; \draw (ac) edge (c) ; \draw (ac) edge (ab) ; \draw (ab) edge (a) ; \draw (ab) edge (b) ; \draw (c) edge (b) ; \draw (a) edge (bot) ; \draw (b) edge (bot) ; \end{scope} \end{tikzpicture} \quad \begin{tikzpicture}[on grid, node distance=.8cm] \node (top) {$\{a,b,e,f\}$} ; \node[below=of top] (anker) {} ; \node[right= of anker] (ac) {$\{a,b,e\}$} ; \node[left= of anker] (d) {$\{a,b,f\}$} ; \node[below= of anker] (ab) {$\{a,b\}$} ; \node[below=of d] 
(anker2) {} ; \node[below= of anker2] (a) {$\{a\}$} ; \node[right= of a] (anker3) {} ; \node[right= of anker3] (b) {$\{b\}$} ; \node[right= of ab] (anker4) {} ; \node[right= of anker4] (c) {$\{b,e\}$} ; \node[below=of a] (anker5) {} ; \node[right= of anker5] (bot) {$\emptyset$} ; \begin{scope}[-] \draw (top) edge (d) ; \draw (top) edge (ac) ; \draw (d) edge (ab) ; \draw (ac) edge (c) ; \draw (ac) edge (ab) ; \draw (ab) edge (a) ; \draw (ab) edge (b) ; \draw (c) edge (b) ; \draw (a) edge (bot) ; \draw (b) edge (bot) ; \end{scope} \end{tikzpicture}} \caption{An example motivating Birkhoff's representation theorem.}\label{fig:ex:mot-Birkhoff} \vspace{-0.6cm} \end{figure} \noindent The irreducible elements are $a,b,e,f$, i.e.\ exactly those elements that have a unique direct predecessor. On the right we depict the dual representation of the lattice in terms of downward-closed sets of irreducibles, ordered by inclusion. This example suggests an embedding of a distributive lattice $\mathbb{L}$ into a Boolean algebra, obtained by taking the powerset of irreducibles. \end{example} \begin{proposition}[\rm Embedding] \label{prop:emb-lat-ba} A finite distributive lattice $\mathbb L$ embeds into the Boolean algebra $\mathbb B = \mathcal{P}(\mathcal J(\mathbb L))$ via the mapping $\eta:\mathbb L\rightarrow\mathbb B$ given by $\eta(\ell)=\{\ell'\in\mathcal J(\mathbb L)\mid \ell'\sqsubseteq \ell\}$. \end{proposition} We will simply assume that $\mathbb{L}\subseteq \mathbb{B}$. Since an embedding is a lattice homomorphism, supremum and infimum coincide in $\mathbb L$ and $\mathbb B$ and we write $\sqcup,\sqcap$ for both versions. Negation and residuum may however differ and we distinguish them via a subscript, writing $\neg_{\mathbb L}, \neg_{\mathbb B}$ and $\to_{\mathbb L},\to_{\mathbb B}$. Given such an embedding, we can approximate elements of a Boolean algebra in the embedded lattice. 
\begin{defn} \label{def:approx} Let a complete distributive lattice $\mathbb L$ that embeds into a Boolean algebra $\mathbb B$ be given. Then, the \emph{approximation} of $\ell\in\mathbb B$ is given by: $\app{\ell}_\mathbb L=\bigsqcup\{\ell'\in\mathbb L\mid \ell'\sqsubseteq \ell\}.$ \end{defn} If the lattice is clear from the context, we will in the sequel drop the subscript $\mathbb{L}$ and simply write $\app{\ell}$. For instance, in the previous example, the set of irreducibles $\{a,e,f\}$, which is not downward-closed, is approximated by $\app{\{a,e,f\}} = \{a\}$. \begin{lemma} \label{lem:approximation} Let $\mathbb L$ be a complete distributive lattice that embeds into a Boolean algebra~$\mathbb B$. For $\ell$, $m\in\mathbb{B}$, we have $\app{\ell\sqcap m}=\app{\ell}\sqcap\app{m}$ and furthermore that $\ell\sqsubseteq m$ implies $\app{\ell}\sqsubseteq \app{m}$. If $\ell,m\in\mathbb L$, then $\app{\ell\sqcup\neg m}=m\rightarrow_\mathbb L \ell$. \end{lemma} Note that in general it does not hold that $\app{\ell\sqcup m}=\app{\ell}\sqcup\app{m}$ and $\app{\ell\sqcup\neg m}=\app{m}\rightarrow_\mathbb L \app{\ell}$ for arbitrary $\ell,m\in\mathbb{B}$. To witness why these equations fail to hold, take $\ell=\{a,e\}$ and $m=\{b,f\}$ in the previous example as a counterexample. \section{Conditional Transition Systems} \label{sec:cts} In this section we introduce conditional transition systems together with a notion of behavioural equivalence based on bisimulation. In \cite{ABHKMS12}, such transition systems were already investigated in a coalgebraic setting, where the set of conditions was trivially ordered. 
In the sequel, we will always use CTS for the variant with upgrades defined as follows: \begin{defn} \label{CTS} A \emph{conditional transition system} (CTS) over an alphabet $A$ and a finite ordered set of conditions $(\Phi,\leq)$ is a triple $(X,A,f)$, where $X$ is a set of states and $f: X \times A \rightarrow (\Phi\rightarrow \mathcal{P}(X))$ is a function mapping every ordered pair in $X \times A$ to a monotone function of type $(\Phi,\leq) \rightarrow (\mathcal{P}(X),\supseteq)$. As usual, we write $x\xrightarrow{a,\phi} y$ whenever $y\in f(x,a)(\phi)$. \end{defn} Intuitively, a CTS evolves as follows. Before the system starts acting, it is assumed that a condition $\phi\in \Phi$ is chosen arbitrarily, which may represent a selection of a valid product of the system. Now all the transitions that have a condition greater than or equal to $\phi$ are activated, while the remaining transitions are inactive. Henceforth, the system behaves like a standard transition system until, at any point in the computation, the condition is changed to a smaller one (say, $\phi'$), signifying the selection of a valid, upgraded product. This change takes effect immediately, in the sense that the (de)activation of transitions now depends on the new condition $\phi'$, rather than on the old condition $\phi$. Note that due to the monotonicity restriction we have that $x\xrightarrow{a,\phi} y$ and $\phi'\le \phi$ imply $x\xrightarrow{a,\phi'} y$. That is, active transitions remain active during an upgrade, but new transitions may become active. In \short{\cite{bkks:cts-upgrades-arxiv}}\full{Appendix~\ref{sec:deactivating-transitions}}, we weaken this requirement by discussing a mechanism for deactivating transitions via priorities on the alphabet. 
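To make Definition~\ref{CTS} concrete, the following minimal sketch encodes a small CTS over two conditions $\mathbf a \le \mathbf b$ and instantiates it to a classical LTS. All state, action, and condition names here are our own illustrative choices, not artifacts of the paper:

```python
# Illustrative sketch (assumed encoding): a CTS over the ordered condition set
# {a, b} with a <= b, i.e. replacing b by a is an upgrade.
CONDITIONS = ['a', 'b']
LEQ = {('a', 'a'), ('a', 'b'), ('b', 'b')}  # pairs (lo, hi) with lo <= hi

# f(x, act) : condition -> set of successors, stored as a nested dict.
# Monotonicity: successors available under 'b' must also be available under 'a'.
F = {
    ('ready', 'receive'):  {'a': {'received'}, 'b': {'received'}},
    ('received', 'check'): {'a': {'safe', 'unsafe'}, 'b': {'safe', 'unsafe'}},
    ('unsafe', 'e'):       {'a': {'ready'}, 'b': set()},  # enabled only after an upgrade
}

def instantiate(phi):
    """Project the CTS to the classical LTS f_phi for a fixed condition phi."""
    return {key: succ[phi] for key, succ in F.items()}

def is_monotone():
    """Check that lo <= hi implies f(x,act)(hi) is a subset of f(x,act)(lo)."""
    return all(succ[hi] <= succ[lo] for succ in F.values() for lo, hi in LEQ)
```

Instantiating at $\mathbf b$ disables the encryption transition, while the upgrade to $\mathbf a$ enables it, matching the intuition that upgrades only add behaviour.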
\begin{figure} \centering \scalebox{0.6}{ \begin{tikzpicture}[state/.style={rounded rectangle,draw}] \node (i) {}; \node[state] (r) at ($(i.center)+(1,0)$) {ready}; \node[state] (rec) at ($(r.center)+(4,0)$) {received}; \node[state] (rsafe) at ($(rec.center)+(1.5,2)$) {safe}; \node[state] (runsafe) at ($(rec.center)+(1.5,-2)$) {unsafe}; \path[->] (i) edge (r) (r) edge node[above] {receive,$\mathbf b$} (rec) (rec) edge node[below right] {check,$\mathbf b$} (rsafe) (rec) edge node[above right] {check,$\mathbf b$} (runsafe) (rsafe) edge node [above] {u,$\mathbf b$} (r) (runsafe) edge node [above right] {u,$\mathbf b$} (r) (runsafe) edge[bend left] node [below left] {e,$\mathbf a$} (r); \end{tikzpicture} } \caption{Adaptive routing protocol with the alphabet $A=\{\text{receive},\text{check},\text{u},\text{e}\}$.}\label{fig:protocol} \vspace{-0.6cm} \end{figure} \begin{example} \label{ex:cts-bisim} Consider an example (simplified from \cite{Cordy2013:adaptivefts}) of an adaptive routing protocol modelled as a CTS in Figure~\ref{fig:protocol}. The system has two products: the \emph{basic} system, denoted $\mathbf{b}$, with no encryption feature and the \emph{advanced} system, denoted $\mathbf{a}$, with an encryption feature. The ordering on the products is $\mathbf{a} < \mathbf{b}$. Transitions that are present due to monotonicity are omitted. Initially, the system is in state 'ready' and is waiting to receive a message. Once a message is received there is a check whether the system's environment is safe or unsafe, leading to non-deterministic branching. If the encryption feature is present, then the system can send an encrypted message (e) from the unsafe state only; otherwise, the system sends an unencrypted message (u) regardless of the state being 'safe' or 'unsafe'. Note that such a behaviour description can easily be encoded by a transition function. 
E.g., $f(\text{received},\text{check})(\mathbf b)=\{\text{safe},\text{unsafe}\}$ and $f(\text{received},a)(x)=\emptyset$ (for $x\in\{\mathbf a,\mathbf b\}$ and $a\in A\setminus\{\text{check}\}$) specifies the transitions that can be fired from the received state to the (un)safe states. \end{example} Next, we turn our attention towards (strong) bisimulation relations for CTSs which consider the ordering of conditions in their transfer properties. \begin{defn}\label{def:cts-bisim} Let $(X,A,f)$, $(Y,A,g)$ be two CTSs over the same set of conditions $(\Phi,\leq)$. For a condition $\phi\in\Phi$, we define $f_\phi(x,a)=f(x,a)(\phi)$ to denote the traditional \emph{($A$-)labelled transition system} induced by a CTS $(X,A,f)$. Two states $x\in X,y\in Y$ are \emph{conditionally bisimilar} under a condition $\phi\in \Phi$, denoted $x \sim_\phi y$, if there is a family of relations $R_{\phi'}\subseteq X\times Y$ (for every $\phi'\leq \phi$) such that \begin{enumerate}[label=(\roman*)] \item each relation $R_{\phi'}$ is a traditional bisimulation relation between $f_{\phi'}$ and $g_{\phi'}$, \item whenever $\phi'\leq \phi''$, we have $R_{\phi'}\supseteq R_{\phi''}$, and \item $R_\phi$ relates $x$ and $y$, i.e., $(x,y)\in R_\phi$. \end{enumerate} \end{defn} \begin{example} Consider the CTS illustrated in Figure~\ref{fig:protocol} where the condition $\mathbf b$ of the transition `$\text{received} \xrightarrow{\text{check}, \mathbf b} \text{unsafe}$' is replaced by $\mathbf a$. Let $\text{ready}_1$ and $\text{ready}_2$ denote the initial states of the system before and after the above modification, respectively. Then, we find $\text{ready}_1 \sim_{\mathbf a} \text{ready}_2$; however, $\text{ready}_1 \not\sim_{\mathbf b} \text{ready}_2$. To see why the latter fails to hold, let $R_{\mathbf b}$ be the bisimulation relation in the traditional sense between the states $\text{ready}_1,\text{ready}_2$ under condition $\mathbf b$. 
Then, one finds that the states $\text{unsafe}_1,\text{safe}_2$ are bisimilar in the traditional sense, i.e., $(\text{unsafe}_1,\text{safe}_2)\in R_{\mathbf b}$. However, the two states cannot be related by any traditional bisimulation relation under condition $\mathbf a$; thus violating condition (ii) of Definition~\ref{def:cts-bisim}. Indeed, the two systems behave differently. In the first, it is possible to perform actions $\text{receive}$, $\text{check}$ (arrive in state $\text{unsafe}$), do an upgrade, and send an encrypted message ($\text{e}$), which is not feasible in the second system because the $\text{check}$ transition forces the system to be in the safe state before doing the upgrade. However, without upgrades, the above systems would be bisimilar for both products. \end{example} We end this section by adapting the classical bisimulation game to conditional transition systems; thus, incorporating our intuitive explanation of upgrades into the notion of bisimilarity. \begin{defn}[\rm Bisimulation Game] Given two CTSs $(X,A,f)$ and $(Y,A,g)$ over a poset $(\Phi, \leq)$, a state $x\in X$, a state $y\in Y$, and a condition $\phi\in\Phi$, the bisimulation game is a round-based two-player game that uses both CTSs as game boards. Let $(x,y,\phi)$ be a game instance indicating that $x,y$ are marked and the current condition is $\phi$. The game progresses to the next game instance as follows: \begin{itemize} \item Player~1 is the first one to move. Player 1 can decide to make an upgrade, i.e., replace the condition $\phi$ by a smaller one (say $\phi'\leq\phi$, for some $\phi'\in\Phi$). \item Player~1 can choose the marked state $x\in X$ (or $y\in Y$) and perform a transition $x\xrightarrow{a,\phi'}x'$ ($y\xrightarrow{a,\phi'}y'$). \item Player~2 then has to simulate the last step, i.e., if Player~1 made a step $x\xrightarrow{a,\phi'}x'$, Player~2 is required to make a step $y\xrightarrow{a,\phi'}y'$ and vice-versa. 
\item In turn, the new game instance is $(x',y',\phi')$. \end{itemize} Player~1 wins if Player~2 cannot simulate the last step performed by Player~1. Player~2 wins if the game never terminates or Player~1 cannot make another step. \end{defn} So bisimulation is characterised as follows: Player~2 has a winning strategy for a game instance $(x,y,\phi)$ if and only if $x\sim_\phi y$. The proof and the computation of the winning strategies for both players are given in \short{\cite{bkks:cts-upgrades-arxiv}}\full{Appendix~\ref{sec:proofs-strategies}}. \section{Lattice Transition Systems} \label{sec:lats} Recall from Section~\ref{sec:preliminaries} that there is a duality between partial orders and distributive lattices. In fact, as we will show below, this result can be lifted to the level of transition systems as follows: a conditional transition system over a poset is equivalent to a transition system whose transitions are labelled by the downward-closed subsets of the poset. \begin{defn} \label{def:LatticeCTS} A \emph{lattice transition system} (LaTS) over a finite distributive lattice $\mathbb L$ and an alphabet $A$ is a triple $(X,A,\alpha)$ with a set of states $X$ and a transition function $\alpha:X \times A \times X \rightarrow \mathbb L$. A LaTS $(X,A,\alpha)$ is \emph{finite} if the sets $X,A$ are finite. \end{defn} Note that superficially, lattice transition systems resemble weighted automata \cite{hwaDKV}. However, while in weighted automata the lattice annotations are seen as weights that are accumulated, in CTSs they play the role of guards that control which transitions can be taken. Furthermore, the notions of behavioural equivalence are quite different. Given a CTS $(X,A,f)$ over $(\Phi,\le)$, we can easily construct a LaTS over $\mathcal{O}(\Phi)$ by defining $\alpha(x,a,x') = \{\phi\in\Phi \mid x'\in f(x,a)(\phi)\}$ for $x,x'\in X$, $a\in A$. Due to monotonicity, $\alpha(x,a,x')$ is always downward-closed. 
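The construction just given, $\alpha(x,a,x') = \{\phi\in\Phi \mid x'\in f(x,a)(\phi)\}$, can be sketched in a few lines; the tiny CTS below (states, action, and conditions) is an illustrative assumption of ours, not taken from the paper:

```python
# Sketch of the CTS -> LaTS construction: every transition is labelled with the
# (necessarily downward-closed) set of conditions under which it is active.
CONDITIONS = ['a', 'b']
LEQ = {('a', 'a'), ('a', 'b'), ('b', 'b')}  # a <= b (an upgrade goes from b to a)
STATES, ACTIONS = ['p', 'q'], ['act']

def f(x, act, phi):
    """Toy CTS transition function: p --act--> q is active only after the upgrade."""
    if (x, act) == ('p', 'act') and phi == 'a':
        return {'q'}
    return set()

def to_lats():
    """alpha(x, act, x2) = set of conditions phi with x2 in f(x, act)(phi)."""
    return {(x, act, x2): frozenset(phi for phi in CONDITIONS if x2 in f(x, act, phi))
            for x in STATES for act in ACTIONS for x2 in STATES}

def downward_closed(label):
    """label is downward-closed if phi in label and phi' <= phi imply phi' in label."""
    return all(lo in label for hi in label for lo, hi2 in LEQ if hi2 == hi)
```

Monotonicity of $f$ is exactly what makes every label in the resulting LaTS downward-closed, i.e., an element of $\mathcal{O}(\Phi)$.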
Similarly, a LaTS can be converted into a CTS by using the Birkhoff duality and by taking the irreducibles as conditions. \begin{theorem} The set of all CTSs over a set of conditions $\Phi$ is isomorphic to the set of all LaTSs over the lattice whose elements are the downward-closed subsets of $\Phi$. \label{thm:CTSequiv} \end{theorem} So every LaTS over a finite distributive lattice gives rise to a CTS in our sense (cf. Definition~\ref{CTS}) and since finite Boolean algebras are finite distributive lattices, conditional transition systems in the sense of \cite{ABHKMS12} are CTSs in our sense as well. We chose the definition of a CTS using posets instead of the dual view using lattices, because this view yields a natural description which models transitions in terms of conditions (product versions), though when computing with CTSs we often choose the lattice view. By adopting this view, conditional bisimulations can be computed symbolically and hence more efficiently (cf. Section~\ref{sec:bdd-based-repr}). \begin{defn} \label{def:lattice-bisimulation} Let $(X,A,\alpha)$ and $(Y,A,\beta)$ be any two LaTSs over a lattice $\mathbb L$. A conditional relation $R$, i.e., a function of type $R: X\times Y \rightarrow \mathbb L$ is a \emph{lattice bisimulation} for $\alpha,\beta$ if and only if the following transfer properties are satisfied. \begin{enumerate}[label=(\roman*)] \item For all $x,x'\in X$, $y\in Y$, $a\in A$, $\ell\in \mathcal J(\mathbb L)$ whenever $x \xrightarrow{a,\ell} x'$ and $\ell \sqsubseteq R(x,y)$, there exists $y'\in Y$ such that $y\xrightarrow{a,\ell} y'$ and $\ell \sqsubseteq R(x',y')$. \item Symmetric to (i) with the roles of $x$ and $y$ interchanged. \end{enumerate} In the above, we write $x \xrightarrow {a,\ell} x'$, whenever $\ell \sqsubseteq \alpha(x,a,x')$. 
\end{defn} For $\phi\in\Phi$, a transition $x \xrightarrow {a,\phi} x'$ exists in the CTS if and only if there is a transition $x \xrightarrow {a,\history \phi} x'$ in the corresponding LaTS. Hence they are denoted by the same symbol. \begin{theorem} \label{thm:bisim-correspondence} Let $(X,A,f)$ and $(Y,A,g)$ be any two CTSs over $\Phi$. Two states $x\in X,y\in Y$ are conditionally bisimilar under a condition $\phi$ if and only if there is a lattice bisimulation $R$ between the corresponding LaTSs such that $\phi \in R(x,y)$. \end{theorem} Incidentally, the order in $\mathbb L$ gives rise to a natural order on lattice bisimulations. For any two lattice bisimulations $R_1, R_2: X\times Y\rightarrow\mathbb L$, we write $R_1\sqsubseteq R_2$ if and only if $R_1(x,y)\sqsubseteq R_2(x,y)$ for all $x\in X,y\in Y$. As a result, taking the element-wise supremum of a family of lattice bisimulations is again a lattice bisimulation. Therefore, the greatest lattice bisimulation for a LaTS always exists, just like in the traditional case. \begin{lemma} \label{lem:unionisbisim} Let $R_i\in X\times Y\rightarrow\mathbb L, i\in I$ be lattice bisimulations for a pair of LaTSs $(X,A, \alpha)$ and $(Y,A,\beta)$. Then $\bigsqcup\{R_i\mid i\in I\}$ is a lattice bisimulation. \end{lemma} \section{Computation of Lattice Bisimulation} \label{sec:matrix-mult} The goal of this section is to present an algorithm that computes the greatest lattice bisimulation between a given pair of LaTSs. In particular, we first characterise lattice bisimulation as a post-fixpoint of an operator $F$ on the set of all conditional relations. Then, we show that this operator $F$ is monotone with respect to the ordering relation $\sqsubseteq$; thereby, ensuring that the greatest bisimulation always exists by applying the well-known Knaster-Tarski fixpoint theorem. 
Moreover, on finite lattices and finite sets of states, the usual fixpoint iteration starting with the trivial conditional relation (i.e., the constant $1$-matrix over $\mathbb L$) can be used to compute the greatest lattice bisimulation. Lastly, we give a translation of $F$ in terms of matrices using a form of matrix multiplication found in the literature of residuated lattices \cite{Belohlavek:2012:matrix-multi} and database design \cite{Kohout:1985:matrix-mult}. \subsection{A Fixpoint Approach} Throughout this section, we let $\alpha: X\times A\times X\rightarrow\mathbb L$, $\beta: Y\times A\times Y\rightarrow\mathbb L$ denote any two LaTSs, $\mathbb L$ denote a finite distributive lattice, and $\mathbb B$ denote the Boolean algebra that this lattice embeds into. \begin{defn} \label{def:F-operator} Recall the residuum operator $\to$ on a lattice and define three operators $F_1,F_2,F:(X\times Y\rightarrow\mathbb L)\rightarrow(X\times Y\rightarrow\mathbb L)$ in the following way (for $R\in X\times Y\rightarrow\mathbb L$, $x\in X$, $y\in Y$): { \allowdisplaybreaks \begin{align*} &F_1(R)(x,y) = \\ &\bigsqcap_{a\in A,x'\in X}\bigg(\alpha(x,a,x') \rightarrow \big(\bigsqcup_{y'\in Y}(\beta(y,a,y')\sqcap R(x',y'))\big) \bigg),\\ \full{ &F_2(R)(x,y) = \\ &\bigsqcap_{a\in A,y'\in Y}\bigg(\beta(y,a,y') \rightarrow \big(\bigsqcup_{x'\in X}(\alpha(x,a,x')\sqcap R(x',y'))\big)\bigg),\\ } &F(R)(x,y) = F_1(R)(x,y) \sqcap F_2(R)(x,y). \end{align*} }\short{The operator $F_2$ is defined analogously to $F_1$ where the roles of $x,y$ as well as $x',y'$ and $\alpha,\beta$ are interchanged.} \end{defn} Note that the above definition is given for a distributive lattice; viewing it in the classical two-valued Boolean algebra yields the well-known transfer properties of a bisimulation. \begin{theorem} \label{thm:fixpoint-lattice-bisim} A conditional relation $R$ is a lattice bisimulation if and only if $R\sqsubseteq F(R)$. 
\end{theorem} Next, it is easy to see that $F$ is a monotone operator with respect to the ordering $\sqsubseteq$ on $\mathbb L$ since the infimum and supremum are both monotonic, and moreover, the residuum operation is monotonic in the second component. As a result, we can use the following fixpoint iteration to compute the greatest bisimulation while working with finite lattices and finite sets of states. \begin{myalgorithm}\label{algo:partref} Let $(X,A,\alpha)$ and $(Y,A,\beta)$ be two finite LaTSs. Fix $R_0$ as $R_0(x,y)=1$ for all $x\in X,y\in Y$. Then, compute $R_{i+1}=F(R_i)$ for all $i\in\mathbb N_0$ until $R_{i}\sqsubseteq R_{i+1}$. Lastly, return $R_i$ as the greatest bisimulation. \end{myalgorithm} If $\alpha=\beta$, it is not hard to see that the fixpoint iteration must stabilise after at most $|X|$ steps, since each $R_i$ induces equivalence relations for all conditions $\phi$ and refinements regarding $\phi$ are immediately propagated to every $\phi'\ge \phi$. An equivalence relation can be refined at most $|X|$ times, limiting the number of iterations. \subsection{Lattice Bisimilarity is Finer than Boolean Bisimilarity} We now show the close relationship between the notions of bisimilarity for a LaTS defined over a finite distributive lattice $\mathbb L$ and a Boolean algebra $\mathbb B$. As usual, let $(X,A,\alpha)$ and $(Y,A,\beta)$ be any two LaTSs together with the restriction that the lattice $\mathbb L$ embeds into the Boolean algebra $\mathbb B$. Moreover, let $F_{\mathbb L}$ and $F_{\mathbb B}$ be the monotonic operators as defined in Definition~\ref{def:F-operator} over the lattice $\mathbb L$ and the Boolean algebra $\mathbb B$, respectively. We say that $R$ is an $\mathbb{L}$-bisimulation (resp. $\mathbb B$-bisimulation) whenever $R\sqsubseteq F_{\mathbb L}(R)$ (resp. $R \sqsubseteq F_{\mathbb B}(R)$). 
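As an illustration of the fixpoint iteration (Algorithm~\ref{algo:partref}) in the special case $\mathbb L = \mathbb B = \mathcal P(\Phi)$ (a discretely ordered condition set, so suprema and infima are unions and intersections and the residuum simplifies to $\ell\to m=\neg\ell\sqcup m$), the following sketch computes the greatest $\mathbb B$-bisimulation of a one-transition toy system. All concrete names are our own illustrative assumptions:

```python
from itertools import product

# Sketch of Algorithm 1 over the Boolean algebra B = P(PHI) (no upgrades,
# as in the FTS setting). The tiny LaTS below is purely illustrative.
PHI = frozenset({'p1', 'p2'})                     # two products
STATES, ACTIONS = ['s', 't'], ['act']
ALPHA = {('s', 'act', 's'): frozenset({'p1'})}    # the only transition, guarded by p1

def al(x, a, y):
    return ALPHA.get((x, a, y), frozenset())

def residuum(l, m):
    # in a Boolean algebra: l -> m = (not l) join m
    return (PHI - l) | m

def F(R):
    new = {}
    for x, y in product(STATES, repeat=2):
        val = PHI
        for a in ACTIONS:
            for x2 in STATES:   # transfer property, left to right
                step = frozenset().union(*(al(y, a, y2) & R[(x2, y2)] for y2 in STATES))
                val &= residuum(al(x, a, x2), step)
            for y2 in STATES:   # and symmetrically, right to left
                step = frozenset().union(*(al(x, a, x2) & R[(x2, y2)] for x2 in STATES))
                val &= residuum(al(y, a, y2), step)
        new[(x, y)] = val
    return new

def greatest_bisimulation():
    # R_0 is the constant top matrix; iterate F until the fixpoint is reached.
    R = {(x, y): PHI for x, y in product(STATES, repeat=2)}
    while True:
        R2 = F(R)
        if R2 == R:
            return R
        R = R2
```

The iteration separates the two states exactly under product $p_1$, where the guarded transition is enabled; under $p_2$ both states are deadlocked and hence bisimilar.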
\begin{proposition}\label{prop:lat-bisim&bool-bisim} \mbox{} \begin{enumerate}[label=(\roman*)] \item If $R:X\times Y\rightarrow\mathbb L$, then $\app{F_{\mathbb B}(R)} = F_\mathbb L(R)$. \item Every $\mathbb L$-bisimulation is also a $\mathbb B$-bisimulation. \item A $\mathbb B$-bisimulation $R:X\times Y\rightarrow\mathbb B$ is an $\mathbb L$-bisimulation whenever all the entries of $R$ are in $\mathbb L$. \end{enumerate} \end{proposition} However, even though the two notions of bisimilarity are closely related, they are not identical, i.e., it is not true that whenever a state $x$ is bisimilar to a state $y$ in $\mathbb B$, it is also bisimilar in $\mathbb L$ (see Example~\ref{ex:cts-bisim} where we encounter a $\mathbb{B}$-bisimulation, which is not an $\mathbb{L}$-bisimulation). \subsection{Matrix Multiplication} An alternative way to represent a LaTS $(X,A,\alpha)$ is to view the transition function $\alpha$ as a family of matrices $\alpha_a: X\times X \rightarrow \mathbb L$ (one for each action $a\in A$) with $\alpha_a(x,x')=\alpha(x,a,x')$, for every $x,x'\in X$. We use standard matrix multiplication (where $\sqcup$ is used for addition and $\sqcap$ for multiplication), as well as a special form of matrix multiplication \cite{Belohlavek:2012:matrix-multi,Kohout:1985:matrix-mult}. \begin{defn}[\rm $\otimes$-multiplication] Given an $X\times Y$-matrix $U\colon X\times Y\to\mathbb{L}$ and a $Y\times Z$-matrix $V\colon Y\times Z\to \mathbb{L}$, we define the $\otimes$-multiplication of $U$ and $V$ as follows: \full{\[ U\otimes V\colon X\times Z \to \mathbb{L} \]} \[(U\otimes V)(x,z)=\bigsqcap_{y\in Y} \big(U(x,y)\rightarrow_{\mathbb L} V(y,z)\big) \enspace.\] \end{defn} \begin{theorem}\label{thm:bisim-imp} Let $R:X\times Y \rightarrow \mathbb L$ be a conditional relation between a pair of LaTSs $(X,A,\alpha)$ and $(Y,A,\beta)$. 
Then, $F(R)= \bigsqcap_{a\in A}((\alpha_a\otimes(R\cdot{\beta_a}^T))\sqcap(\beta_a\otimes(\alpha_a\cdot R)^T)^T)$, where $M^T$ denotes the transpose of a matrix $M$. \end{theorem} We end this section by making an observation on LaTSs over a Boolean algebra. In a Boolean algebra, it is well-known that the residuum operator can be replaced by the negation and join operators. Thus, in this case, using only the standard matrix multiplication and (componentwise) negation we get $U\otimes V = \lnot (U\cdot (\lnot V))$, which further simplifies $F(R)$ to: \[F(R)=\bigsqcap_{a\in A}\bigl(\neg(\alpha_a \cdot \neg(R \cdot \beta_a^T)) \sqcap \neg(\neg(\alpha_a \cdot R) \cdot \beta_a^T)\bigr) \enspace.\] This reduction is especially relevant to software product lines with no upgrade features. \section{Application and Implementation} \label{sec:spl} \subsection{Featured Transition Systems} A Software Product Line (SPL) is commonly described as ``a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets [artifacts] in a prescribed way'' \cite{2001:SPL:501065}. The idea of designing a set of software systems that share common functionalities in a collective way is becoming prominent in the field of software engineering (cf. \cite{Metzger:2014:SPL:2593882.2593888}). In this section we show that a featured transition system (FTS) \cite{DBLP:conf/icse/CordyCPSHL12} -- a well-known formal model that is expressive enough to specify an SPL -- is a special instance of a CTS. \begin{defn} A \emph{featured transition system} (FTS) over a finite set of \emph{features} $N$ is a tuple $\mathcal F = (X,A, T, \gamma)$, where $X$ is a finite set of states, $A$ is a finite set of actions and $T\subseteq X\times A \times X$ is the set of transitions.
Finally, $\gamma: T\rightarrow \mathbb{B}(N)$ assigns a Boolean expression over $N$ to each transition. \end{defn} FTSs are often accompanied by a so-called \emph{feature diagram} \cite{Cordy2013:adaptivefts,Classen:2010:MCL:1806799.1806850,Classen:2013:FTS}, a Boolean expression $d\in \mathbb{B}(N)$ that specifies admissible feature combinations. Given a subset of features $C\subseteq N$ (called \emph{configuration} or \emph{product}) such that $C\models d$ and an FTS $\mathcal F=(X,A,T,\gamma)$, a state $x\in X$ can perform an $a$-transition to a state $y\in X$ in the configuration $C$, whenever $(x,a,y)\in T$ and $C\models \gamma(x,a,y)$. It is easy to see that an FTS is a CTS, where the conditions are subsets of $N$ satisfying $d$ with the discrete order. Moreover, an FTS is a special case of a LaTS due to Theorem~\ref{thm:CTSequiv} and $\mathcal O(\llbracket d \rrbracket,=) = \mathcal P(\llbracket d \rrbracket)$. Given an FTS $\mathcal F = (X, A, T, \gamma)$ and a feature diagram $d$, the corresponding LaTS is $(X, A, \alpha)$ with $\alpha(x,a,y)=\llbracket \gamma(x,a,y)\land d\rrbracket$, if $(x,a,y)\in T$; $\alpha(x,a,y)=\emptyset$, if $(x,a,y)\not\in T$. Furthermore, we can extend the notion of FTSs by fixing a subset of upgrade features $U\subseteq N$ that induces the following ordering on configurations $C,C'\in \llbracket d\rrbracket$: \begin{align*} C\le C' \iff \ & \forall f\in U(f\in C'\Rightarrow f\in C) \ \land \\ & \forall f\in (N\backslash U)\, (f\in C' \iff f\in C). \end{align*} Intuitively, the configuration $C$ can be obtained from $C'$ by ``switching'' on one or several upgrade features $f\in U$. Notice that it is this upgrade ordering on configurations which gives rise to the partially ordered set of conditions in the definition of a CTS. Hence, in the following we will consider the lattice $\mathcal O(\llbracket d \rrbracket, \le)$ (i.e., the set of all downward-closed subsets of $\llbracket d \rrbracket$).
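For concreteness, the upgrade ordering and the resulting downward closure can be sketched extensionally as follows (a minimal illustration with two hypothetical features, one of them an upgrade feature; the actual implementation represents such sets symbolically by ROBDDs):

```python
from itertools import chain, combinations

# Sketch of the upgrade ordering on configurations: C <= C' iff C and C'
# agree on all non-upgrade features and every upgrade feature of C' also
# holds in C ("C is obtained from C' by switching on upgrades").
# Feature names are illustrative, not taken from the paper.
N = frozenset({"f0", "f1"})     # all features
U = frozenset({"f1"})           # upgrade features

def powerset(S):
    """All subsets of S, as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(S), r) for r in range(len(S) + 1))]

def leq(C, C2):
    """C <= C2 in the upgrade ordering."""
    return (C & (N - U)) == (C2 & (N - U)) and (C2 & U) <= (C & U)

def downward_closure(F_set):
    """All configurations below some member of F_set, i.e. obtainable from
    a member by switching on additional upgrade features."""
    return frozenset(C for C in powerset(N)
                     if any(leq(C, C2) for C2 in F_set))
```

With $U=\{f_1\}$, the closure of $\{\emptyset\}$ also contains $\{f_1\}$: switching on the upgrade feature must preserve membership, which is exactly the downward-closedness required of lattice elements in $\mathcal O(\llbracket d\rrbracket,\le)$.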
\subsection{BDD-Based Representation} \label{sec:bdd-based-repr} In this section, we discuss our implementation of lattice bisimulation using a special form of binary decision diagrams (BDDs) called \emph{reduced and ordered binary decision diagrams} (ROBDDs). Our implementation can handle adaptive SPLs that allow upgrade features, using finite distributive lattices. Note that non-adaptive SPLs based on Boolean algebras are a special case. BDD-based implementations of FTSs without upgrades have already been mentioned in \cite{DBLP:conf/icse/CordyCPSHL12,Classen:2011:symbolic}. A \emph{binary decision diagram} (BDD) is a rooted, directed, and acyclic graph which serves as a representation of a Boolean function. Every BDD has two distinguished terminal nodes $1$ and $0$, representing the logical constants \emph{true} and \emph{false}. The inner nodes are labelled by the atomic propositions of a Boolean expression $b\in\mathbb{B}(N)$ represented by the BDD, such that on each path from the root to the terminal nodes, every variable of the Boolean formula occurs at most once. Each inner node has exactly two distinguished outgoing edges called \emph{high} and \emph{low} representing the case that the atomic proposition of the inner node has been set to \emph{true} or \emph{false}. Given a BDD for a Boolean expression $b\in\mathbb{B}(N)$ and a configuration $C\subseteq N$ (representing an evaluation of the atomic propositions), we can check whether $C\models b$ by following the path from the root node to a terminal node. At a node labelled $f\in N$ we go to the \textit{high}-successor if $f\in C$ and to the \emph{low}-successor if $f\not\in C$. If we arrive at the terminal node labelled $1$ we have established that $C\models b$, otherwise $C\not\models b$. We use a special class of BDDs called ROBDDs (see \cite{And97} for more details) in which the order of the variables occurring in the BDD is fixed and redundancy is avoided. 
If both the child nodes of a parent node are identical, the parent node is dropped from the BDD and isomorphic parts of the BDD are merged. The advantage of ROBDDs is that two equivalent Boolean formulae are represented by exactly the same ROBDD (if the order of the variables is fixed). Furthermore, there are polynomial-time implementations for the basic operations -- negation, conjunction, and disjunction. These are however sensitive to the ordering of atomic propositions and an exponential blowup cannot be ruled out, but often it can be avoided. \begin{wrapfigure}{R}{0.17\textwidth} \centering \vspace{-0.2cm}% \scalebox{0.7}{\input{BDD.tex}} \caption{BDD for $b$. } \label{fig:BDD} \vspace{-0.4cm} \end{wrapfigure} Consider a Boolean expression $b$ with $\llbracket b\rrbracket = \{\emptyset, \{f_2,f_3\},$ $ \{f_0,f_1\}, \{f_0,f_1, f_2, f_3\}\}$ and the ordering on the atomic propositions as $f_0,f_1,f_2,f_3$. Figure~\ref{fig:BDD} shows the corresponding ROBDD representation for $b$, where the inner nodes, terminal nodes, and high (low) edges are depicted as circles, rectangles, and solid (dashed) lines, respectively. Formally, an ROBDD $b$ over a set of features $N$ is an expression in one of the following forms: $ 0$, or $1$, or $(f,b_1,b_0)$. Here, $0,1$ denote the two terminal nodes and the triple $(f,b_1,b_0)$ denotes an inner node with variable $f\in N$ and $b_0,b_1$ as the \emph{low}- and \emph{high}-successors, respectively. If $b=(f,b_1,b_0)$, we write $\mathit{root}(b) = f$, $\mathit{high}(b) = b_1$, and $\mathit{low}(b) = b_0$. Note that the elements of the Boolean algebra $\mathcal{P}(\mathcal{P}(N))$ {\it correspond exactly} to ROBDDs over $N$. We now discuss how ROBDDs can be used to specify and manipulate elements of the lattice $\mathcal{O}(\llbracket d\rrbracket,\le)$. 
In particular, computing the infimum (conjunction) and the supremum (disjunction) in the lattice $\mathcal{O}(\llbracket d\rrbracket,\le)$ is standard, since this lattice can be embedded into $\mathcal{P}(\mathcal{P}(N))$ and the infimum and supremum operations coincide in both structures. Thus, it remains to characterize the lattice elements and the residuum operation. We say that an ROBDD $b$ is {\em downward-closed} w.r.t. $\le$ (or simply, downward-closed) if the set of configurations $\llbracket b \rrbracket$ is downward-closed w.r.t. $\le$. The following lemma characterises when an ROBDD $b$ is downward-closed. It follows from the fact that $F\in \mathcal{P}(\mathcal{P}(N))$ is downward-closed if and only if for all $C\in F, f\in U$ we have $C\cup \{f\}\in F$. \begin{lemma} \label{lem:highsmallerlow} An ROBDD is downward-closed if and only if for each node labelled with an upgrade feature, the \emph{low}-successor implies the \emph{high}-successor. \end{lemma} Next, we compute the residuum in $\mathcal{O}(\llbracket d\rrbracket,\le)$ by using the residuum operation of the Boolean algebra $\mathcal{P}(\mathcal{P}(N))$. For this, we first describe how to approximate an element of the Boolean algebra (represented as an ROBDD) in the lattice $\mathcal{O}(\mathcal{P}(N),\le)$. \begin{algorithm}\caption{Approximation $\lfloor\!\!\lfloor b \rfloor\!\!\rfloor$ of an ROBDD $b$ in the lattice $\mathcal{O}(\mathcal{P}(N),\le)$} \label{alg:approxROBDD} \mbox{} \textbf{Input: } An ROBDD $b$ over a set of features $N$ and a set of upgrade features $U\subseteq N$. \textbf{Output: } An ROBDD $\approxL b$, which is the best approximation of $b$ in the lattice.
\begin{algorithmic}[1] \Procedure{$\approxL b$}{} \If {$b$ is a leaf} \Return {$b$} \ElsIf {$root(b) \in U$} \Return \\ {\qquad\qquad $\textit{build}(\mathit{root}(b), \approxL{\mathit{high}(b)}, \approxL{\mathit{high}(b)}\wedge\approxL{\mathit{low}(b)})$} \Else { \Return{$\mathit{build}(\mathit{root}(b), \approxL{\mathit{high}(b)}, \approxL{\mathit{low}(b)})$}} \EndIf \EndProcedure \end{algorithmic} \end{algorithm} In the above algorithm, for each non-terminal node that carries a label in $U$ (line $3$), we replace the \emph{low}-successor with the conjunction of the recursively approximated \emph{low}- and \emph{high}-successors, while the \emph{high}-successor is merely approximated; this enforces the condition of Lemma~\ref{lem:highsmallerlow}. Since this might result in a BDD that is not reduced, we apply the $\mathit{build}$ procedure appropriately, which simply transforms a given ordered BDD into an ROBDD. The result of the algorithm $\approxL b$ coincides with the approximation $\app{b}$ of the ROBDD $b$ seen as an element of the Boolean algebra $\mathcal{P}(\mathcal{P}(N))$ (Definition~\ref{def:approx}). \begin{lemma} \label{lem:approximationROBDD} For an ROBDD $b$, $\approxL b$ is downward-closed. Furthermore, $\approxL b\models b$ and there is no other downward-closed ROBDD $b'$ such that $\approxL b\models b' \models b$. Hence $\approxL{b} = \app{b}$. \end{lemma} For each node in the BDD we compute at most one conjunction, which takes quadratic time. Hence the entire runtime of the approximation procedure is at most cubic. Finally, we discuss how to compute the residuum in $\mathcal{O}(\llbracket d\rrbracket,\le)$. \begin{proposition} \label{prop:residuuminOd} Let $b_1,b_2$ be two ROBDDs that represent elements of $\mathcal{O}(\llbracket d\rrbracket,\le)$, i.e., $b_1,b_2$ are both downward-closed and $b_1\models d$, $b_2\models d$. (i) $\app{\lnot b_1\lor b_2\lor \lnot d}\land d$ is the residuum $b_1\to b_2$ in the lattice $\mathcal{O}(\llbracket d\rrbracket,\le)$. (ii) If $d$ is downward-closed, then this simplifies to $b_1\to b_2 = \app{\lnot b_1\lor b_2}\land d$.
Here, $\lnot$ is the negation in the Boolean algebra $\mathcal{P}(\mathcal{P}(N))$. \end{proposition} \subsection{Implementation and Runtime Results} \label{sec:impl-runtime} We have implemented an algorithm that computes the lattice bisimulation relation based on the matrix multiplication (see Theorem~\ref{thm:bisim-imp}) in a generic way. Specifically, this implementation is independent of how the irreducible elements are encoded, ensuring that no implementation details of operations such as matrix multiplication can interfere with the runtime results. For our experiments we instantiated it in two possible ways: with bit vectors representing feature combinations and with ROBDDs as outlined above. Our results show a significant advantage when we use BDDs to compute lattice bisimilarity. The implementation is written in C\# and uses the CUDD package by Fabio Somenzi via the interface PAT.BDD \cite{NguyenSLDL12}. \begin{figure} \centering \scalebox{0.7}{ \raisebox{100pt}{$\alpha:$}\hspace{-0.5cm} \begin{tikzpicture}[x={(1.2cm,-.7cm)},y={(0,2cm)},z={(1.8cm,.7cm)}] \node[state] (q1) {$0$} ; \node[right=of q1] (anker) {} ; \node[state,above= of anker] (q3) {$2$} ; \node[state,below= of anker] (q2) {$1$} ; \begin{scope}[->] \draw (q1) edge node [above right]{$b, \llbracket f \rrbracket$} (q2) ; \draw[loop left] (q1) edge node [above]{$b, \llbracket f \rrbracket$} (q1) ; \draw[bend right] (q1) edge node [right] {$b,\llbracket\mathit{true}\rrbracket$} (q3) ; \draw[bend right] (q3) edge node [left] {$c, \llbracket f \rrbracket$} (q1) ; \end{scope} \end{tikzpicture} \quad \raisebox{100pt}{$\beta:$}\hspace{-0.5cm} \begin{tikzpicture}[x={(1.2cm,-.7cm)},y={(0,2cm)},z={(1.8cm,.7cm)}] \node[state] (q1) {$0$} ; \node[right=of q1] (anker) {} ; \node[state,above= of anker] (q3) {$2$} ; \node[state,below= of anker] (q2) {$1$} ; \begin{scope}[->] \draw (q1) edge node [above right]{$b, \llbracket f \rrbracket$} (q2) ; \draw[loop left] (q1) edge node [above]{$b, \llbracket f 
\rrbracket$} (q1) ; \draw[bend right] (q1) edge node [right] {$b,\llbracket f \rrbracket$} (q3) ; \draw[bend right] (q3) edge node [left] {$c, \llbracket f \rrbracket$} (q1) ; \end{scope} \end{tikzpicture}} \caption{Components for $\alpha$ and $\beta$, where $f$ is viewed as a Boolean expression indicating the presence of feature $f$.} \vspace{-0.6cm} \label{ex:ctscomp} \end{figure} To show that the use of BDDs can potentially lead to an exponential gain in speed when compared to the naive bit-vector implementation, we executed the algorithm on a family of increasingly larger LaTSs over an increasingly larger number of features, where all features are upgrade features. Let $F$ be a set of features. Our example contains, for each feature $f\in F$, one disconnected component in both LaTSs that is depicted in Figure~\ref{ex:ctscomp}: the component for $\alpha$ is shown on the left and the one for $\beta$ on the right. The only difference between the two is in the guard of the transition from state $0$ to state $2$. The ratio of the runtimes without BDDs to those with BDDs grows exponentially, increasing by a factor of about~$2$ with each additional feature \short{(see the runtime results in \cite{bkks:cts-upgrades-arxiv})}\full{(see the table in Appendix~\ref{sec:runtime-results})}. Due to fluctuations, an exact rate cannot be given. By the eighteenth iteration (i.e. 18~features and copies of the basic component), the implementation using BDDs needed 17~seconds, whereas the version without BDDs took more than 96~hours. The nineteenth iteration exceeded the memory for the implementation without BDDs, but terminated within 22~seconds with BDDs. \section{Conclusion, Related Work, and Future Work} \label{sec:conclusion} In this paper, we endowed CTSs with an order on conditions to model systems whose behaviour can be upgraded by replacing the current condition by a smaller one.
Corresponding verification techniques based on behavioural equivalences can be important for SPLs where an upgrade to a more advanced version of the same software should occur without unexpected behaviour. To this end, we proposed an algorithm, based on matrix multiplication, that allows us to compute the greatest bisimulation of two given CTSs. Interestingly, the duality between lattices and downward-closed sets of posets, as well as the embedding into a Boolean algebra proved to be fruitful when developing it and proving its correctness. There are two ways in which one can extend CTSs as a specification language: first, in some cases it makes sense to specify that an advanced version offers improved transitions with respect to a basic version. For instance, in our running example, allowing the router to send unencrypted messages in an unsafe environment is superfluous because the advanced version always has the encryption feature. Such a situation can be modelled in a CTS by adding a precedence relation over the set of actions, leading to the deactivation of transitions, which is worked out in \short{\cite{bkks:cts-upgrades-arxiv}}\full{Appendix~\ref{sec:deactivating-transitions}}. The second question is how to incorporate downgrades: one solution could be to work with a pre-order on conditions, instead of an order. This simply means that two conditions $\phi\neq\psi$ with $\phi\le \psi$, $\psi\le\phi$ can be merged since they can be exchanged arbitrarily. Naturally, one could study more sophisticated notions of upgrade and downgrade in the context of adaptivity. As for the related work on adaptive SPLs, literature can be grouped into either empirical or formal approaches; however, given the nature of our work, below we concentrate only on the formal ones \cite{Cordy2013:adaptivefts,Chrszon2016:profeat,Dubslaff:2014:PMC,terBeek:2015:statistical}. Cordy et al.
\cite{Cordy2013:adaptivefts} model an adaptive SPL using an FTS which encodes not only a product's transitions, but also how some of the features may change via the execution of a transition. In contrast, we encode adaptivity by requiring a partial order on the products of an SPL and its effect on behaviour evolution by the monotonicity requirement on the transition function. Moreover, instead of studying the model checking problem as in \cite{Cordy2013:adaptivefts}, our focus was on bisimilarity between adaptive SPLs. In \cite{Dubslaff:2014:PMC,Chrszon2016:profeat,Lochau2017}, alternative ways to model adaptive SPLs by using the synchronous parallel composition of two separate computational models are presented. Intuitively, one models the static aspect of an SPL, while the other focuses on adaptivity by specifying the dynamic (de)selection of features. For instance, Dubslaff et al. \cite{Dubslaff:2014:PMC} used two separate Markov decision processes (MDPs) to model an adaptive SPL. They modelled the core behaviour in an MDP called \emph{feature module}, while dynamic (de)activation of features is modelled separately in an MDP called \emph{feature controller}. In retrospect, our work shows that for monotonic upgrades it is possible to \emph{compactly} represent an adaptive SPL over one computational model (CTSs in our case) rather than a parallel composition of two. In \cite{terBeek:2015:statistical}, a process calculus QFLan motivated by concurrent constraint programming was developed. Thanks to an in-built notion of a store, various aspects of an adaptive SPL such as (un)installing a feature and replacing a feature by another feature can be modelled at run-time by operational rules. Although QFLan has constructs to specify quantitative constraints in the spirit of \cite{Dubslaff:2014:PMC}, their aim is to obtain statistical evidence by performing simulations.
Behavioural equivalences such as (bi)simulation relations have already been studied in the literature of traditional SPLs. In \cite{DBLP:conf/icse/CordyCPSHL12}, the authors proposed a definition of simulation relation between any two FTSs (without upgrades) to combat the state explosion problem by establishing a simulation relation between a system and its refined version. In contrast, the authors in \cite{Atlee:2015:MBI:2820126.2820133} used simulation relations to measure the discrepancy in behaviour caused by feature interaction, i.e., whether a feature that is correctly designed in isolation works correctly when combined with the other features or not. (Bi)simulation relations on lattice Kripke structures were also studied in \cite{doi:10.1142/S0129054110007192}, but in a very different context (in model-checking rather than in the analysis of adaptive SPLs). Disregarding the differences between transition systems and Kripke structures (i.e., forgetting the role of atomic propositions), the definition of bisimulation in \cite{doi:10.1142/S0129054110007192} is quite similar to our Definition~\ref{def:F-operator} (another similar formula occurs in \cite{DBLP:conf/icse/CordyCPSHL12}). However, in \cite{doi:10.1142/S0129054110007192} the stronger assumption of finite distributive de Morgan algebras is used, the results are quite different, and symbolic representations via BDDs are not taken into account. Moreover, representing the lattice elements and computing residuum over them using the BDDs is novel in comparison with \cite{DBLP:conf/icse/CordyCPSHL12,doi:10.1142/S0129054110007192}. Lastly, Fitting \cite{DBLP:conf/aiml/Fitting02} studied bisimulation relations in the setting of unlabelled transition systems and gave an elegant characterisation of bisimulation when transition systems and the relations over states are viewed as matrices.
By restricting ourselves to LaTSs over Boolean algebras and fixing our alphabet to be a singleton set, we can establish the following correspondence between Fitting's formulation of bisimulation and lattice bisimulation (see \short{\cite{bkks:cts-upgrades-arxiv}}\full{Appendix~\ref{sec:proofs-lattices}} for the proof). \begin{theorem}\label{thm:fitting} Let $(X,\alpha)$ be a LaTS over an atomic Boolean algebra $\mathbb B$. Then, a conditional relation $R:X\times X \rightarrow \mathbb B$ is a lattice bisimulation for $\alpha$ if and only if $R\cdot\alpha \sqsubseteq \alpha \cdot R$ and $R^T \cdot \alpha \sqsubseteq \alpha \cdot R^T$. Here we interpret $\alpha$ as a matrix of type $X \times X \rightarrow \mathbb B$ by dropping the occurrence of action labels. \end{theorem} In contrast, we treat general distributive lattices, which allow us to conveniently model and reason about upgrades. \smallskip \noindent \textbf{Current and future work:} In the future we plan to obtain runtime results for systems of varying sizes. In particular, we are interested in real-world applications in the field of SPLs, together with other applications, such as modelling transition systems with access rights or deterioration. On the more theoretical side of things, we have worked out the coalgebraic concepts for CTSs \cite{bkks:cts-upgrades-coalgebraic} and compared the matrix multiplication algorithm to the final chain algorithm presented in \cite{ABHKMS12}, when applied to CTSs. \smallskip \noindent\textbf{Acknowledgements:} We thank Filippo Bonchi and Mathias H\"ulsbusch for interesting discussions on earlier drafts. \bibliographystyle{plainurl}
\section{Introduction} Comprehensive and precise measurements of hadronic $D$ meson decays provide important inputs for the experimental studies of both charm and beauty decays~\cite{2009besphys}. One category of decay modes, $D\to\phi{P}$ ($P$ represents a pseudoscalar particle), has simple Feynman diagrams as depicted in Fig.~\ref{figure:feynman}. This facilitates theoretical predictions and their comparisons~\cite{2014qinqin,2010chy} with experimental measurements. However, the experimental measurements of $D\to\phi{P}$ are still limited~\cite{pdg} due to the relatively low branching fractions (BF), which are suppressed by phase space due to the $\phi$ meson mass. The singly Cabibbo-suppressed (SCS) decays of $D^{+}\to \phi\pi^{+}$~\cite{2008phipip}, $D^{0}\to \phi\pi^{0}$~\cite{2007phipi0}, and $D^{0}\to\phi\eta$~\cite{2004phieta} have been studied by CLEO, BaBar and Belle, respectively. The BF of the doubly Cabibbo-suppressed (DCS) decay $D^{+}\to\phi K^{+}$ is derived from two measurements at LHCb~\cite{2019phik,2019kkk}: the total BF for $D^{+}\to{K^-K^+K^+}$ and the fraction of the intermediate $\phi K^{+}$ component. \begin{figure*}[tbp] \centering \subfigure[Color suppressed $D^+\to\phi\pi^+$]{ \includegraphics[width=4.8cm]{Dtophipi+} } \quad \subfigure[Color suppressed $D^0\to\phi\pi^{0}$]{ \includegraphics[width=4.8cm]{Dtophipi0} } \quad \subfigure[W-annihilation $D^+\to\phi{K^+}$]{ \includegraphics[width=4.8cm]{DtophiK+} } \quad \subfigure[Color suppressed $D^0\to\phi\eta$]{ \includegraphics[width=4.8cm]{Dtophieta1} } \quad \subfigure[W-exchange $D^0\to\phi\eta$]{ \includegraphics[width=4.8cm]{Dtophieta2} } \quad \subfigure[W-exchange $D^0\to\phi\eta$]{ \includegraphics[width=4.8cm]{Dtophieta3} } \caption{Feynman diagrams of four $D\to\phi{P}$ decay modes.
} \label{figure:feynman} \end{figure*} According to isospin symmetry between $u$ and $d$ quarks, the BFs for $D^{0}\to\phi\pi^{0}$ and $D^{+}\to\phi\pi^{+}$ are connected~\cite{2014qinqin,2010chy} as follows: \begin{equation} \frac{{\cal B}(D^{0}\to\phi\pi^{0})}{{\cal B}(D^{+}\to\phi\pi^{+})}=\frac{1}{2}\frac{\Gamma_{D^{+}}}{\Gamma_{D^{0}}}=\frac{1}{2}\frac{\tau_{D^{0}}}{\tau_{D^{+}}}. \label{eq:isospin} \end{equation} However, the current experimental result for the BF ratio deviates from the value predicted by Eq.~\eqref{eq:isospin} by $2.7\sigma$, as shown in Table~\ref{tab:isospin}. Therefore, an improved measurement is necessary to further test this relation and to help understand the strong interaction in $D$ meson hadronic decays. In this analysis, we study four two-body decay modes of $D\to\phi{P}$, which are $D^{+}\to\phi K^{+}$, $D^{+}\to\phi\pi^{+}$, $D^{0}\to\phi\pi^{0}$, and $D^{0}\to\phi\eta$, based on a data set of 2.93 fb$^{-1}$~\cite{luminosity} taken at $\sqrt{s} = 3.773$~GeV with the BESIII detector. Due to energy conservation, the $D$ and $\bar{D}$ mesons from $e^+e^- \to \psi(3770) \to D\bar{D}$ are always produced in a pair without any other accompanying hadrons. Throughout this paper, charge-conjugate modes are implied. \begin{tablehere} \caption{Current result of the ratio of ${\cal B}(D^0\to\phi\pi^0)$ to ${\cal B}(D^+\to\phi\pi^+)$.} \label{tab:isospin} \centering \begin{footnotesize} \begin{tabular}{lcc} \toprule Ratio & Experimental result~(\%) & Prediction~(\%) \\ \midrule $\frac{{\cal B}(D^{0}\to\phi\pi^{0})}{{\cal B}(D^{+}\to\phi\pi^{+})}$ & $24.6\pm1.8$~\cite{2008phipip,2007phipi0} & $19.7\pm0.2$~\cite{lifetime}\\ \bottomrule \end{tabular} \end{footnotesize} \end{tablehere} \section{BESIII Detector and Monte Carlo Simulation} The BESIII detector is a magnetic spectrometer~\cite{Ablikim:2009aa} located at the Beijing Electron Positron Collider (BEPCII)~\cite{Yu:IPAC2016-TUYA01}.
The cylindrical core of the BESIII detector consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0~T magnetic field. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identifier modules interleaved with steel. The acceptance of charged particles and photons is 93\% of the $4\pi$ solid angle. The charged-particle momentum resolution at $1~{\rm GeV}/c$ is $0.5\%$, and the specific energy loss ($dE/dx$) resolution is $6\%$ for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of $2.5\%$ ($5\%$) at $1$~GeV in the barrel (end cap) region. The time resolution of the TOF barrel section is 68~ps, while that of the end cap is 110~ps. Simulated samples produced with the {\sc geant4}-based~\cite{geant4} Monte Carlo (MC) package, which includes the geometric description~\cite{geo1,geo2} of the BESIII detector and the detector response, are used to determine the detection efficiency and to estimate the backgrounds. The simulation includes the beam energy spread and initial state radiation (ISR) in the $e^+e^-$ annihilations modelled with the generator {\sc kkmc}~\cite{ref:kkmc}. The inclusive MC samples consist of the production of $D\bar{D}$ pairs, the non-$D\bar{D}$ decays of the $\psi(3770)$, the ISR production of the $J/\psi$ and $\psi(3686)$ states, and the continuum processes incorporated in {\sc kkmc}~\cite{ref:kkmc}. The equivalent luminosity of the inclusive MC samples is about 10 times that of the data. The known decay modes are modelled with {\sc evtgen}~\cite{ref:evtgen} using branching fractions taken from the Particle Data Group~\cite{pdg}, and the remaining unknown decays from the charmonium states with {\sc lundcharm}~\cite{ref:lundcharm}.
Final state radiation (FSR) from charged final state particles is incorporated with the {\sc photos} (version 2.02) package~\cite{photos,photos2}. The signal processes are generated separately, taking the spin-matrix elements into account in {\sc evtgen}. For each signal channel, 200 000 events are simulated. \section{Event Selection} Candidates of the decay modes $D\to\phi P$ are reconstructed by combining the final states of $K^{\pm}$, $\pi^{\pm}$, $\pi^0$, and $\eta$ particles with the BESIII offline software system~\cite{2009besphys,boss}, where $\phi$ mesons are detected via decays to $K^+K^-$. Candidates for $\pi^0$ and $\eta$ are identified from $\pi^0\to\gamma\gamma$ and $\eta\to\gamma\gamma$, respectively. Selected charged tracks must satisfy $|\cos\theta| < 0.93$, where $\theta$ is the polar angle with respect to the beam axis. The distance of closest approach of the track to the interaction point is required to be less than 10 cm in the beam direction and less than 1 cm in the plane perpendicular to the beam. Separation of charged kaons from charged pions is implemented by combining the energy loss ($dE/dx$) in the MDC and the time-of-flight information from the TOF. We calculate the probabilities $P(K)$ and $P(\pi)$ with the hypotheses of $K$ and $\pi$, respectively, and require that $K$ candidates have $P(K)>P(\pi)$, while $\pi$ candidates have $P(\pi)>P(K)$. Photon candidates are selected from neutral showers deposited in the EMC crystals, with energies larger than 25 MeV in the barrel ($|\cos{\theta}|<0.8$) and 50 MeV in the end cap ($0.86<|\cos{\theta}|<0.92$). To reduce fake photons due to beam background or electronic noise, the shower clusters are required to be within [0, 700] ns from the event start time. Furthermore, the photon candidates are required to be at least $10^{\circ}$ away from any charged tracks to remove fake photons caused by the interactions of hadrons in the EMC.
The~$\pi^{0}\,(\eta)$~candidates are formed with pairs of photon candidates, whose invariant mass,~$M_{\gamma\gamma}$,~is required to be within [0.115, 0.150] ([0.500, 0.560])~GeV/$c^{2}$. To improve momentum resolution, a 1C kinematic fit constraining the reconstructed $\pi^0(\eta)$ mass to the nominal mass~\cite{pdg} is performed and the fitted four-momentum of the $\pi^{0}(\eta)$ is used in further analysis. \section{Data Analysis} In the rest frame of the initial $e^+e^-$ system, the total collision energy is shared equally by the $D\bar{D}$ pair. Hence, in this frame two variables, the energy difference $\Delta{E}$ and the beam-constrained mass $M_{\rm BC}$, related to energy and momentum conservation, respectively, are defined as \begin{align*} & \Delta{E}\equiv E_{D}-\sqrt{s}/2, \\ &M_{\rm BC}\equiv \sqrt{s/(4c^{4})-|\vec{p}_{\rm D}|^{2}/c^{2}}, \end{align*} where $\vec{p}_{\rm D}$ is the momentum of the $D$ candidate. Signals for the four $\phi P$ decay modes are expected to peak around zero in $\Delta{E}$ distributions and the $D$ nominal mass in $M_{\rm BC}$ distributions. To suppress combinatorial background, the $\Delta{E}$ of the $D$ candidates is required to be within the regions listed in Table~\ref{tab:fitresult} for the different signal modes, which correspond to about $3\sigma$ coverage. The asymmetric boundaries of the $\Delta{E}$ region for the $\phi\pi^0$ and $\phi\eta$ modes are due to energy leakage in the EMC when reconstructing the photon energy. If there is more than one $D$ candidate left for one signal decay mode in an event, the candidate with the smallest $|\Delta{E}|$ is chosen for further analysis. More than $60\%$ of events have multiple candidates for $D^0$ decay modes and $20\%$ for $D^+$ decay modes. According to studies of MC samples, the probability of selecting the correct candidate by choosing the minimum $|\Delta{E}|$ is more than $90\%$.
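In a simplified form (natural units with $c=1$; the numbers below are illustrative and not BESIII data), the two kinematic variables and the best-candidate choice can be sketched as:

```python
import math

# Sketch of Delta E, M_BC and min-|Delta E| candidate selection for a D
# candidate at sqrt(s) = 3.773 GeV. Four-momenta are (E, px, py, pz) in GeV.
SQRT_S = 3.773
E_BEAM = SQRT_S / 2.0

def delta_e(p4):
    """Energy difference between the candidate and the beam energy."""
    return p4[0] - E_BEAM

def m_bc(p4):
    """Beam-constrained mass: the candidate energy is replaced by E_beam."""
    _, px, py, pz = p4
    return math.sqrt(E_BEAM**2 - (px**2 + py**2 + pz**2))

def best_candidate(candidates):
    """Among several D candidates in one event, keep the smallest |Delta E|."""
    return min(candidates, key=lambda p4: abs(delta_e(p4)))
```

For a correctly reconstructed $D$, $\Delta E$ peaks at zero and $M_{\rm BC}$ at the nominal $D$ mass, as stated above.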
In addition, the robustness of this method is verified with the high-statistics inclusive MC samples. \begin{table*}[tp] \centering \caption{For each signal mode, the requirement on $\Delta{E}$, the signal yield $N_{\rm sig}^i$, the MC-determined detection efficiency $\varepsilon^i$, the branching fraction ${\cal B}^i$ measured in this work, and the corresponding world-average value ${\cal B}_{\rm ext}$. } \label{tab:fitresult} \begin{footnotesize} \begin{tabular}{lccccc} \toprule Decay mode &$\Delta{E}$ (GeV) & $N_{\rm sig}^i$ & $\varepsilon^i$(\%) & ${\cal B}^i(\times10^{-4})$ & ${\cal B}_{\rm ext}(\times10^{-4})$ \\ \hline $D^{+}\to\phi\pi^{+}$ &$[-0.020,0.019]$ & $17527\pm152$ & $37.7\pm0.1$ & $57.0\pm0.5\pm1.3$ & $53.7\pm2.3$~\cite{pdg}\\ \multirow{2}{*}{$D^{+}\to\phi{K^{+}}$} & \multirow{2}{*}{$[-0.019,0.018]$ } & \multirow{2}{*}{$12^{+28}_{-12}$ } & \multirow{2}{*}{$23.7\pm0.1$ } & $0.062^{+0.144}_{-0.062}\pm0.002$ & \multirow{2}{*}{$0.085\pm0.011$~\cite{pdg,2019phik,2019kkk}} \\ ~ & ~ & ~ & ~ & $<0.21$ at $90\%$ CL & ~\\ $D^{0}\to\phi\pi^{0}$ &$[-0.077,0.035]$ & $3333\pm76$ & $27.7\pm0.1$ & $11.68\pm0.28\pm0.28$ & $13.2\pm0.8$~\cite{pdg} \\ $D^{0}\to\phi\eta$ &$[-0.040,0.038]$ & $102\pm26$ & $13.7\pm0.1$ & $1.81\pm0.46\pm0.06$ & $1.4\pm0.5$~\cite{pdg}\\ \bottomrule \end{tabular} \end{footnotesize} \end{table*} \begin{figure*}[tp] \centering \subfigure[$D^+\to\phi\pi^+$]{ \includegraphics[width=3.7cm]{dalitz_phipi} } \subfigure[$D^+\to\phi{K^+}$]{ \includegraphics[width=3.7cm]{dalitz_phik} } \quad \subfigure[$D^0\to\phi\pi^0$]{ \includegraphics[width=3.7cm]{dalitz_phipi0} } \subfigure[$D^0\to\phi\eta$]{ \includegraphics[width=3.7cm]{dalitz_phieta} } \caption{ Two-dimensional distributions of $M_{\rm BC}$ and $M_{\rm KK}$ in data for the four signal modes.
} \label{fig:colz} \end{figure*} \begin{figure*}[tp] \centering \subfigure[$D^+\to\phi\pi^+$]{ \includegraphics[width=7.5cm]{2dfitphipi_d} } \quad \subfigure[$D^+\to\phi{K^+}$]{ \includegraphics[width=7.5cm]{2dfitphik_d} } \quad \subfigure[$D^0\to\phi\pi^0$]{ \includegraphics[width=7.5cm]{2dfitpi0_d} } \quad \subfigure[$D^0\to\phi\eta$]{ \includegraphics[width=7.5cm]{2dfit_d} } \caption{ (Color online) Two-dimensional unbinned maximum likelihood fits to the distributions of $M_{\rm BC}$ and $M_{\rm KK}$ in data for the four signal modes. The points with error bars are data, the (red) thick curves are the total fits, the (blue) long-dashed curves describe the signals, the (violet) dotted curves represent backgrounds with true $\phi$ mesons not from $D\to\phi P$ decays, the (black) dashed curves describe backgrounds from $D\to K^+K^-P$ without a $\phi$ meson, and the shaded areas show the combinatorial backgrounds. } \label{fig:fitresult} \end{figure*} As shown in Fig.~\ref{fig:colz} and Fig.~\ref{fig:fitresult}, clear peaks are seen in the $M_{\rm BC}$ and $M_{\rm KK}$ distributions for the four signal modes, corresponding to the $D\to K^+K^- P$ and $\phi\to K^+K^-$ signals, respectively. According to studies based on the inclusive MC samples, three types of background events survive the above selection criteria. The first is a true $D$ meson decaying to $K^+K^-P$ final states without an intermediate $\phi$ meson ($D\to{K^+K^-}P$); the second is a true $\phi$ meson not originating from the corresponding signal mode (Cont. $\phi{PX}$); and the third is combinatorial background from neither of the previous two sources (Comb. bkg). Two-dimensional unbinned extended maximum likelihood fits to the obtained distributions of $M_{\rm BC}$ and $M_{\rm KK}$ are performed to extract the signal yields, as shown in Fig.~\ref{fig:fitresult}.
The $M_{\rm KK}$ variable is employed here to discriminate the $\phi$ meson signal from the non-resonant $K^+K^-$ final state. The probability density functions of the $D$ meson and $\phi$ meson signals are modeled by the MC-simulated signal shapes convolved with Gaussian functions that describe the resolution differences between MC simulation and data. The combinatorial backgrounds in $M_{\rm BC}$ ($M_{\rm KK}$) are described with (inverted) ARGUS~\cite{Albrecht:1990am} functions based on studies of the inclusive MC sample. Since the correlation between $M_{\rm KK}$ and $M_{\rm BC}$ is negligible, these two variables are treated as uncorrelated in the fit. The parameters of the (inverted) ARGUS and Gaussian functions in the two-dimensional fits are fixed according to one-dimensional fits to the corresponding $M_{\rm BC}$ and $M_{\rm KK}$ distributions. The obtained signal yields are given in Table~\ref{tab:fitresult}. \section{Branching Fraction} The branching fractions for the $D\to\phi P$ decays are calculated by \begin{eqnarray} {\cal B}^i=\frac{N_{\rm sig}^i}{2\cdot{N_{D\bar{D}}}\cdot\varepsilon^i\cdot{\cal B}^i_{\rm sub}}, \label{eq:bf} \end{eqnarray} where $i$ denotes a signal mode of $D\to\phi P$, $N_{\rm sig}^i$ is the signal yield extracted from data, $N_{D\bar{D}}$ is the number of $D\bar{D}$ events in data, which is $(8296\pm31\pm64)\times10^{3}$ for $D^{+}D^{-}$ and $(10597\pm28\pm89)\times10^{3}$ for $D^{0}\bar{D}^{0}$~\cite{2014ndd} in the analyzed data set, $\varepsilon^i$ is the reconstruction efficiency determined from MC simulation of the signal mode, and ${\cal B}^i_{\rm sub}$ is the product of the branching fractions of the intermediate decays $\phi\to{K^+K^-}$ and $\pi^0/\eta\to\gamma\gamma$, quoted from the PDG~\cite{pdg}. The resulting branching fractions are listed in Table~\ref{tab:fitresult}.
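As a rough cross-check of Eq.~\eqref{eq:bf}, plugging in the $D^{+}\to\phi\pi^{+}$ numbers from Table~\ref{tab:fitresult} together with ${\cal B}(\phi\to K^+K^-)\approx 0.492$ from the PDG reproduces the quoted branching fraction (a sketch, not the analysis code):

```python
def branching_fraction(n_sig, n_dd, eff, b_sub):
    """Eq. (bf): B = N_sig / (2 * N_DDbar * eps * B_sub)."""
    return n_sig / (2.0 * n_dd * eff * b_sub)

# D+ -> phi pi+: B_sub = B(phi -> K+K-) only, since no pi0/eta is involved
bf_phipi = branching_fraction(n_sig=17527, n_dd=8.296e6, eff=0.377, b_sub=0.492)
```

The result is about $57\times10^{-4}$, matching the table entry within rounding.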
\begin{figurehere} \centering \includegraphics[width=6cm]{draft_phik_smear} \caption{Likelihood curve as a function of the assumed ${\cal B}(D^{+}\to\phi{K^{+}})$. The arrow indicates the position of the upper limit at the $90\%$ CL.} \label{fig:smear} \end{figurehere} The statistical significance of the $D^0\to\phi\eta$ signal, evaluated as $\sqrt{-2\ln(\mathcal{L}_{0}^{\rm stat}/\mathcal{L}_{\rm max}^{\rm stat})}$, where $\mathcal{L}_{\rm max}^{\rm stat}$ and $\mathcal{L}_0^{\rm stat}$ are the maximum likelihood values with and without the signal component, respectively, is $4.2\sigma$. Since the significance of the observed $D^+\to\phi K^+$ signal is only $0.8\sigma$, the upper limit on ${\cal B}(D^+\to\phi K^+)$ is estimated with a likelihood scan method, which takes the systematic uncertainties into account as follows: \begin{eqnarray} \mathcal{L}_i({\cal B}^i)=\int^{1}_{-1}\mathcal{L}^{\rm stat}[(1+\Delta){\cal B}^i]\exp\left(-\frac{\Delta^2}{2\sigma^{2}_{i,{\rm syst}}}\right)\,d\Delta. \label{eq:likelihood} \end{eqnarray} Here, $\Delta$ is the relative deviation of the estimated branching fraction from the nominal value and $\sigma_{i,{\rm syst}}$ is the total systematic uncertainty given in Table~\ref{tab:sys_err}. The likelihood curve calculated according to Eq.~\eqref{eq:likelihood} is shown in Fig.~\ref{fig:smear}. The upper limit on ${\cal B}(D^+\to\phi{K^+})$ at the $90\%$ confidence level (CL), obtained by integrating the likelihood curve over the physical region ${\cal B}^i>0$, is $2.1\times10^{-5}$.
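The scan of Eq.~\eqref{eq:likelihood} can be sketched numerically. With a toy Gaussian statistical likelihood centred at zero and a small systematic smearing, the $90\%$ CL limit over ${\cal B}>0$ approaches the half-Gaussian quantile of about $1.64\sigma$ (toy numbers, not the values used in the analysis):

```python
import math

def smeared_likelihood(b, l_stat, sigma_syst, steps=201):
    """Eq. (likelihood): convolve L_stat((1+Delta)*B) with a Gaussian in Delta."""
    total = 0.0
    for k in range(steps):
        d = -1.0 + 2.0 * k / (steps - 1)          # Delta in [-1, 1]
        w = math.exp(-d * d / (2.0 * sigma_syst**2))
        total += w * l_stat((1.0 + d) * b)
    return total

def upper_limit_90(l_stat, sigma_syst, b_max, n=2000):
    """Integrate the smeared likelihood over B > 0 up to 90% of its area."""
    bs = [b_max * i / n for i in range(n + 1)]
    ls = [smeared_likelihood(b, l_stat, sigma_syst) for b in bs]
    target = 0.9 * sum(ls)
    acc = 0.0
    for b, l in zip(bs, ls):
        acc += l
        if acc >= target:
            return b
    return b_max

# Toy: Gaussian statistical likelihood centred at zero with width sigma
sigma = 1.0e-5
l_stat = lambda b: math.exp(-b * b / (2.0 * sigma**2))
ul = upper_limit_90(l_stat, sigma_syst=0.033, b_max=5.0e-5)
```

With a 3.3\% systematic smearing the limit barely moves from the purely statistical value, which is why the published limit is dominated by the shape of the statistical likelihood.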
\begin{table*}[tp] \caption{Summary of systematic uncertainties in percent.} \centering \footnotesize \begin{tabular}{lccccc} \toprule Source & $D^+\to\phi\pi^+$ & $D^+\to\phi K^{+}$ & $D^{0}\to\phi\pi^{0}$ & $D^{0}\to\phi\eta$ & $\frac{{\cal B}(D^{0}\to\phi\pi^{0})}{{\cal B}(D^{+}\to\phi\pi^{+})}$ \\ \hline Tracking &$1.0$ &$1.1$ &$0.8$ &$1.0$ & $0.3$\\ PID &$1.2$ &$1.0$ &$0.6$ &$0.6$ & $0.4$\\ $\pi^{0}$ reconstruction &$-$ &$-$ &$1.2$ &$-$ & $1.2$\\ $\eta$ reconstruction &$-$ &$-$ &$-$ &$1.8$ & $-$\\ $\Delta E$ requirement &$0.2$ &$0.2$ &$0.2$ &$0.2$ & $0.3$\\ 2D fit &$0.4$ &$2.5$ &$0.4$ &$2.0$ & $0.6$\\ $N_{D\overline{D}}$ uncertainty &$0.9$ &$0.9$ &$0.9$ &$0.9$ & $1.3$\\ ${\cal B}(\phi \to K^{+} K^{-})$ &$1.0$ &$1.0$ &$1.0$ &$1.0$ & $-$\\ ${\cal B}(\pi^{0}, \eta \to \gamma\gamma)$ &$-$ &$-$ &$0.1$ &$0.5$ & $0.1$\\ QC effect &$-$ &$-$ &$1.0$ &$1.0$ & $1.0$\\ \hline Total &$2.2$ &$3.3$ &$2.4$ &$3.5$ & $2.2$ \\ \bottomrule \end{tabular} \label{tab:sys_err} \end{table*} \section{Systematic Uncertainties}\label{section:sys} The following sources of systematic uncertainties, summarized in Table~\ref{tab:sys_err}, are considered. The total systematic uncertainty is determined by adding all contributions in quadrature. The uncertainties of the tracking and particle identification (PID) efficiencies for charged kaons and pions, as well as of the $\pi^0(\eta)$ reconstruction, have been studied in previous works using control samples of $D$ hadronic events~\cite{2017trackpid}. The uncertainties are weighted according to the kinematics of the candidates. Furthermore, to estimate the systematic uncertainty caused by the selected $\pi^0(\eta)$ signal regions, the requirements on $M_{\gamma\gamma}$ are varied, and the resulting changes in the BFs are $0.7\%$ ($1.1\%$). This uncertainty is combined in quadrature with that of the $\pi^0(\eta)$ reconstruction, giving a total of $1.2\%$ ($1.8\%$).
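As a cross-check, the quoted totals in Table~\ref{tab:sys_err} follow from summing the column entries in quadrature; the small residual differences are consistent with the rounding of the individual entries:

```python
import math

def quad_sum(entries):
    """Total systematic uncertainty: quadrature sum of independent sources (in %)."""
    return math.sqrt(sum(e * e for e in entries))

# Applicable column entries from the table (dashes omitted)
total_phipi  = quad_sum([1.0, 1.2, 0.2, 0.4, 0.9, 1.0])                  # quoted total: 2.2
total_phipi0 = quad_sum([0.8, 0.6, 1.2, 0.2, 0.4, 0.9, 1.0, 0.1, 1.0])  # quoted total: 2.4
```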
Requirements on $\Delta E$ are studied by smearing the corresponding $\Delta E$ distribution in the inclusive MC samples with Gaussian functions and re-calculating the detection efficiencies. The changes in the efficiencies are assigned as the corresponding uncertainties. The systematic uncertainty related to the two-dimensional fit includes the parameters of the Gaussian and ARGUS functions, the fit range, and the background models. For the fixed parameters in the Gaussian and ARGUS functions, their values are varied by $\pm1\sigma$ from the one-dimensional fit results and the largest resulting change is assigned as the systematic uncertainty. The uncertainty due to the fit range is estimated by repeating the fits with a series of varied ranges, and the corresponding changes are found to be negligible. For the background models, the potential background from $D\to f_0(980) P$ is included in the fit and the change in the number of signal events is assigned as the uncertainty. This uncertainty is larger for ${\cal B}(D^+\to\phi{K^+})$ and ${\cal B}(D^0\to\phi\eta)$ due to the smaller signal yields. The uncertainties of the quoted $N_{D\bar{D}}$ from Ref.~\cite{2014ndd}, and of ${\cal B}(\phi\to K^{+}K^{-})$ and ${\cal B}(\pi^0/\eta \to \gamma \gamma)$ from the PDG~\cite{pdg}, are taken into account for the relevant signal modes. Since $D^0$ and $\overline D^0$ are coherently produced in the process $e^{+}e^{-}\to \psi(3770)\to D^0\overline D^0$, quantum coherence (QC)~\cite{2006qc1} should be considered according to the relation \begin{displaymath} \Delta N^{\rm obs}_{CP} = y_{CP}\cdot N^{\rm obs}_{CP}. \end{displaymath} The uncertainty depends on the $D^0-{\overline D^0}$ mixing parameter $y_{CP}$ and is conservatively taken to be $1.0\%$~\cite{y}. For the systematic uncertainty of $\frac{{\cal B}(D^{0}\to\phi\pi^{0})}{{\cal B}(D^{+}\to\phi\pi^{+})}$, the effects related to $K^\pm$ tracking and PID largely cancel, owing to the same kinematic phase space.
The remaining systematic uncertainties in Table~\ref{tab:sys_err} are considered independent and summed in quadrature. \section{Summary} The decays $D^{+}\to\phi\pi^{+}$, $D^{0}\to\phi\pi^{0}$, $D^{0}\to\phi\eta$, and $D^{+}\to\phi K^{+}$ are studied by analyzing the $2.93~\rm fb^{-1}$ data sample taken at $\sqrt{s} = 3.773$~GeV with the BESIII detector. The obtained BFs, listed in Table~\ref{tab:fitresult}, are consistent with previous results, while the precision for the first three modes is improved. In addition, an upper limit on ${\cal B}(D^{+}\to\phi K^{+})$ of $2.1\times10^{-5}$ at the $90\%$ CL is reported. Furthermore, the ratio of ${\cal B}(D^0\to\phi\pi^0)$ to ${\cal B}(D^+\to\phi\pi^+)$ is calculated to be $(20.49\pm0.50\pm0.45)\%$, which is smaller than the previous result of $(24.6\pm1.8)\%$~\cite{2008phipip,2007phipi0}. Meanwhile, the deviation from the value of $(19.7\pm0.2)\%$ predicted in Eq.~\eqref{eq:isospin} is reduced from $2.7\sigma$ to $1.2\sigma$, showing better agreement than the previous measurement. Hence, our results support the isospin symmetry between these two $D$ meson decay modes. \section{Acknowledgments} The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11625523, 11635010, 11735014, 11822506, 11835012; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1532257, U1532258, U1732263, U1832207; CAS Key Research Program of Frontier Sciences under Contracts Nos.
QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; ERC under Contract No. 758462; German Research Foundation DFG under Contracts Nos. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; STFC (United Kingdom); The Knut and Alice Wallenberg Foundation (Sweden) under Contract No. 2016.0157; The Royal Society, UK under Contracts Nos. DH140054, DH160214; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0012069; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt
\section{Exact Algorithm for cd-Chromatic Number} Let $G$ denote the input graph on $n$ vertices. Given a coloring of $V(G)$, we can check in polynomial time whether it is a cd-coloring or not. Therefore, to compute $\chi_{cd}(G)$, we can iterate over all possible colorings of $V(G)$ with at most $n$ colors and return a valid cd-coloring that uses the minimum number of colors. This brute-force algorithm runs in $2^{\mathcal{O}(n \log n)}$ time. In this section we present an algorithm which runs in $\mathcal{O}(2^{n}n^4 \log n)$ time. The idea for this algorithm is inspired by an exact algorithm for \textsc{$b$-Chromatic Number} presented in \cite{panolan2015b}. We first list some preliminaries on polynomials and the Fast Fourier Transform following the framework of \cite{panolan2015b}. A binary vector $\phi$ is a finite sequence of bits and $val(\phi)$ denotes the integer $d$ of which $\phi$ is the binary representation. All vectors considered here are binary vectors and are synonymous with binary numbers. Further, they are the binary representations of integers less than $2^n$ and are assumed to consist of $n$ bits. $\phi_1 + \phi_2$ denotes the vector obtained by adding the binary numbers (vectors) $\phi_1$ and $\phi_2$. Let $U= \{ u_1 , u_2,\dots , u_n \}$ denote a universe with a fixed ordering on its elements. The {\em characteristic vector} of a set $S \subseteq U$, denoted by $\psi(S)$, is the vector of length $|U|$ whose $j^{\text{th}}$ bit is $1$ if $u_j \in S$ and $0$ otherwise. The {\em Hamming weight} of a vector $\phi$ is the number of $1$s in $\phi$ and it is denoted by $\mathcal{H}(\phi)$. Observe that $\mathcal{H}(\psi(S)) = |S|$. The Hamming weight of an integer is defined as the Hamming weight of its binary representation. To obtain the claimed running time bound for our exponential-time algorithm, we make use of the algorithm for multiplying polynomials based on the Fast Fourier Transform.
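The encoding above maps sets to integers via bitmasks; a short Python sketch of $\psi$ and the Hamming weight $\mathcal{H}$ (elements are given by their indices $1,\dots,n$; the least-significant-bit-first convention is a choice made here):

```python
def psi(subset):
    """Characteristic bitmask of S over {u_1, ..., u_n}: bit j-1 is set iff u_j in S."""
    mask = 0
    for j in subset:
        mask |= 1 << (j - 1)
    return mask

def hamming(mask):
    """Hamming weight H: number of 1 bits, so H(psi(S)) == |S|."""
    return bin(mask).count("1")
```

Note that for disjoint sets the characteristic vectors have no common $1$ bit, so adding the corresponding integers never produces a carry.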
\begin{lemma}[\cite{schonhage1971schnelle}] \label{lemma:fft} Two polynomials of degree at most $d$ over any commutative ring $\mathcal{R}$ can be multiplied using $\mathcal{O}(d \cdot \log d \cdot \log \log d)$ additions and multiplications in $\mathcal{R}$. \end{lemma} \noindent Let $z$ denote an indeterminate variable. We use the monomial $z^{val(\psi(S))}$ to represent the set $S \subseteq U$ and, as a natural extension, we use univariate polynomials to represent a family of sets. \begin{definition}[Characteristic Polynomial of a Family of Sets] For a family $\mathcal{F} = \{S_1, S_2, \dots , S_q\}$ of subsets of $U$, the characteristic polynomial of $\mathcal{F}$ is defined as $p_\psi(\mathcal{F}) = \sum_{i = 1}^{q} z ^{val(\psi (S_i))}$. \end{definition} \begin{definition}[Representative Polynomial] For a polynomial $p(z) = \sum_{i = 1}^{q} a_i \cdot z ^{i} $, we define its representative polynomial as $ \sum_{i = 1}^{q} b_i \cdot z ^{i} $ where $b_i = 1$ if $a_i \neq 0$ and $b_i = 0$ if $a_i = 0$. \end{definition} \begin{definition} [Hamming Projection]The Hamming projection of the polynomial $p(z)= \sum_{i = 1}^{q} a_i \cdot z ^{i}$ to the integer $h$ is defined as $\mathcal{H}_{h}(p(z)) := \sum_{i = 1}^{q} b_i \cdot z ^{i} $ where $b_i = a_i$ if $\mathcal{H}(i) = h$ and $b_i = 0$ otherwise. \end{definition} \noindent Next, for two sets $S_1,S_2 \subseteq U$, we define a modified multiplication operation $(\star)$ of the monomials $z^{\psi(S_1)}$ and $z^{\psi(S_2)}$ in the following way. \[ z^{val(\psi(S_1))} \star z^{val(\psi(S_2))} = \begin{cases} z^{val(\psi(S_1)) + val(\psi(S_2))} & \text{if } S_1 \cap S_2 = \emptyset \\ 0 & \text{otherwise} \end{cases} \] \noindent For a polynomial function $p(z)$ of $z$ and a positive integer $\ell \ge 2$, we inductively define the polynomial $p(z)^{\ell}$ as $p(z)^{\ell} := p(z)^{\ell - 1} \star p(z)$, with $p(z)^{1} := p(z)$. Here, the coefficients of the monomials follow the addition and multiplication operations of the underlying field.
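Representing a characteristic polynomial by the set of exponents with nonzero coefficients, the $\star$ product and its powers can be sketched directly in Python (the disjointness test here is a plain bitmask check; the algorithm described next obtains the same result via Hamming projections and FFT multiplication):

```python
def star(p, q):
    """The (star) product on exponent sets: keep a + b only for disjoint bitmasks."""
    return {a + b for a in p for b in q if a & b == 0}

def star_power(p, ell):
    """p^ell under (star): exponents encoding unions of ell pairwise-disjoint sets."""
    result = p
    for _ in range(ell - 1):
        result = star(result, p)
    return result
```

For example, squaring the polynomial of the family $\{\{u_1\},\{u_2\}\}$ keeps only the union $\{u_1,u_2\}$, since the non-disjoint products vanish.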
We now describe an algorithm for implementing the $\star$ operation using the standard multiplication operation and the notion of Hamming weights of bit strings associated with exponents. \begin{algorithm}[H] \KwIn{Two polynomials $q(z), r(z)$ of degree at most $2^n$} \KwOut{$q(z) \star r(z)$ } Initialize polynomials $t(z)$ and $t'(z)$ to 0\\ \For{ each ordered pair $(i, j) \text{ such that } i + j \le n$}{ Compute $s_i(z) = \mathcal{H}_i(q(z))$ and $s_j(z) = \mathcal{H}_j(r(z))$\\ Compute $s_{ij}(z) = s_i(z) * s_j(z)$ using Lemma~\ref{lemma:fft} \label{step:multiply}\\ $t'(z) = t(z) + \mathcal{H}_{i + j}(s_{ij}(z))$\\ Set $t(z)$ as the representative polynomial of $t'(z)$ \label{step:rep-poly} } \Return $t(z)$ \caption{Compute ($\star$) product of two polynomials} \label{alg:compute-star} \end{algorithm} \begin{lemma} \label{lemma:star} Let $\mathcal{F}_1$ and $\mathcal{F}_2$ be two families of subsets of $U$. Let $\mathcal{F}$ denote the collection $\{S_1 \cup S_2 |\ S_1 \in \mathcal{F}_1, S_2 \in \mathcal{F}_2 \text{ and } S_1 \cap S_2 = \emptyset\}$. Then, $p_\psi(\mathcal{F}_1) \star p_\psi(\mathcal{F}_2)$ computed by Algorithm~\ref{alg:compute-star} is $p_\psi(\mathcal{F})$. \end{lemma} \begin{proof} Define $q(z)=p_\psi(\mathcal{F}_1)$, $r(z)= p_\psi(\mathcal{F}_2)$ and $t(z)=q(z) \star r(z)$. Let $S_1 \in \mathcal{F}_1$ and $S_2 \in \mathcal{F}_2$ be sets such that $S_1 \cap S_2 = \emptyset$. Define $S=S_1 \cup S_2$ and let $\phi_1, \phi_2$ and $\phi$ be the characteristic vectors of $S_1, S_2$, and $S$ respectively. We claim that the term $z^{val(\phi)}$ is present in $t(z)$. For a vector $\phi$ and an integer $i \in [n]$, let $\phi[i]$ denote the $i^{th}$ bit in $\phi$. As $\phi[i]$ is 1 if and only if exactly one of the two bits $\phi_1[i]$, $\phi_2[i]$ is 1, it follows that there is no carry at any position (and hence no overflow) while adding $\phi_1$ and $\phi_2$. 
Therefore, $\phi=\phi_1 + \phi_2$ is a binary string of $n$ bits and $\mathcal{H}(\phi) = \mathcal{H}(\phi_1) + \mathcal{H}(\phi_2)$. Now, as $q(z)$ contains $z^{val(\phi_1)}$ and $r(z)$ contains $z^{val(\phi_2)}$, in the execution of Algorithm \ref{alg:compute-star}, for $i = |S_1|$ and $j = |S_2|$, the polynomials $s_i(z)$ and $s_j(z)$ contain $z^{val(\phi_1)}$ and $z^{val(\phi_2)}$ respectively. Step~\ref{step:multiply} multiplies $s_i(z)$ and $s_j(z)$ using the Fast Fourier Transform to obtain $s_{ij}(z)$. As $\mathcal{H}(\phi_1)=i$, $\mathcal{H}(\phi_2)=j$ and $\mathcal{H}(\phi_1) + \mathcal{H}(\phi_2) = i + j$, $s_{ij}(z)$ contains the term $z^{val(\phi)}=z^{val(\phi_1) + val(\phi_2)}$. Moreover, $z^{val(\phi)}$ is present in $\mathcal{H}_{i + j}(s_{ij}(z))$ and hence it is a monomial in $t(z)$, as Step~\ref{step:rep-poly} ensures that every monomial in $t(z)$ is of the form $z^d$ for some integer $d$. Next, we show that for every monomial $z^d$ in $t(z)$, there is a set $S \in \mathcal{F}$ such that $d=val(\psi(S))$. Let $i$ and $j$ be integers such that $\mathcal{H}_{i + j}(s_{ij}(z))$ contains the term $z^d$. As $t(z)$ was initialized to $0$, $z^d$ was obtained as the product of two terms $z^{d_1}, z^{d_2}$ in $s_i(z)$ and $s_j(z)$ respectively, such that $d_1 + d_2 = d$. Let $S_1 \in \mathcal{F}_1$ be the set such that $\psi(S_1)$ is the binary representation of $d_1$. Similarly, let $S_2 \in \mathcal{F}_2$ be the set such that $\psi(S_2)$ is the binary representation of $d_2$. Let $\phi_1$ and $\phi_2$ be the characteristic vectors of $S_1$ and $S_2$ respectively. Then, $|S_1|=i$, $|S_2|=j$ and there is no integer $k$ between $1$ and $n$ such that $\phi_1[k]=\phi_2[k]=1$. Therefore, $S_1 \cap S_2 =\emptyset$ and $z^d=z^{val(\psi(S_1 \cup S_2))}$. Hence, the claimed set $S$ is $S_1 \cup S_2$, which is in $\mathcal{F}$ as $S_1 \cap S_2 =\emptyset$.
\end{proof} \begin{corollary} \label{cor:running-time} Given a polynomial $p(z)$ of degree at most $2^n$, there is an algorithm that computes $p(z)^{\ell}$ in $\mathcal{O}(2^n n^3 \log n \cdot \ell)$ time. \end{corollary} \begin{proof} By Lemma~\ref{lemma:fft}, an execution of the Fast Fourier multiplication algorithm takes $\mathcal{O}(2^n n \log n)$ time. As the \textbf{for} loop of Algorithm \ref{alg:compute-star} is executed $\mathcal{O}(n^2)$ times, a single $\star$ product takes $\mathcal{O}(2^n n^3 \log n)$ time. Computing $p(z)^{\ell}$ requires $\ell - 1$ such products, which gives the claimed bound. \end{proof} \noindent We now prove a result which correlates the existence of a partition of a set with the presence of a monomial in a polynomial associated with it. \begin{lemma} \label{lemma:generalized-partition} Consider a universe $U$ and a family $\mathcal{F}$ of its subsets with characteristic polynomial $p(z)$. For any $W \subseteq U$, $W$ is the disjoint union of $\ell$ sets from $\mathcal{F}$ if and only if there exists a monomial $z^{val(\psi(W))}$ in $p(z)^{\ell}$. \end{lemma} \begin{proof} Let $W$ be the disjoint union of $S_1, S_2, \dots, S_{\ell}$ such that $S_i \in \mathcal{F}$ for all $i \in [\ell]$. For any $j \in [n]$, the $j^{\text{th}}$ bit of $\psi(W)$ is 1 if and only if there is exactly one $S_i$ such that the $j^{\text{th}}$ bit of $\psi(S_i)$ is 1. Thus, $val(\psi(W)) = val(\psi(S_1)) + val(\psi(S_2)) + \dots + val(\psi(S_{\ell}))$. Now, for every $S_i$ there is a term $z^{val(\psi(S_i))}$ in $p(z)$. Further, as the $S_i$'s are pairwise disjoint, the monomial $z^{val(\psi(S_1))} \star z^{val(\psi(S_2))} \star \cdots \star z^{val(\psi(S_{\ell}))}$, which is equal to $z^{val(\psi(W))}$, is present in $p(z)^{\ell}$. We prove the converse by induction on $\ell$. For $\ell=1$, the statement is trivially true and for $\ell=2$, the claim holds from the proof of Lemma \ref{lemma:star}.
Assume that the claim holds for all integers smaller than $\ell$, that is, if there exists a monomial $z^{val(\psi(W))}$ in $p(z)^{\ell - 1}$ then $W$ can be partitioned into $\ell - 1$ disjoint sets from $\mathcal{F}$. If there exists a monomial $z^{val(\psi(W))} $ in $p(z)^{\ell} = p(z)^{\ell - 1} \star p(z)$ then it is the product of two monomials, say $z^{val(\psi(W_1))}$ in $p(z)^{\ell - 1}$ and $z^{val(\psi(W_2))}$ in $p(z)$ respectively, with $W_1 \cap W_2 = \emptyset$. By the induction hypothesis, $W_1$ is the disjoint union of $S_1, S_2, \dots, S_{\ell - 1}$ such that $S_i \in \mathcal{F}$ for all $i \in [\ell - 1]$. Also, $W_2$ is in $\mathcal{F}$ and since $W_1 \cap W_2 = \emptyset$, $S_i \cap W_2 = \emptyset$ for each $i$. Therefore, $W$ can be partitioned into the sets $S_1, S_2, \dots, S_{\ell - 1}, W_2$, each of which belongs to $\mathcal{F}$. \end{proof} \noindent Now we are in a position to prove the main theorem of this section. \begin{theorem} Given a graph $G$ on $n$ vertices, there is an algorithm which finds its cd-chromatic number in $\mathcal{O}(2^n n^4 \log n)$ time. \end{theorem} \begin{proof} Fix an arbitrary ordering on $V(G)$. With $V(G)$ as the universe, we define the family $\mathcal{F}$ of its subsets as follows. $$\mathcal{F} := \{X \subseteq V(G) |\ X \text{ is an independent set and } \exists \ y \in V(G) \text{ s.t. } X \subseteq N(y)\}$$ Note that every set in $\mathcal{F}$ is an independent set and there exists a vertex which dominates it. That is, $\mathcal{F}$ is the collection of the possible color classes in any cd-coloring of $G$. Let $p(z)$ be the characteristic polynomial of $\mathcal{F}$. By Lemma~\ref{lemma:generalized-partition}, if there exists a monomial $z^{val(\psi(V(G)))}$ in $p(z)^{\ell}$ then $V(G)$ can be partitioned into $\ell$ sets each belonging to $\mathcal{F}$. Hence the smallest integer $\ell$ for which there exists a monomial $z^{val(\psi(V(G)))}$ in $p(z)^{\ell}$ is $\chi_{cd}(G)$.
By Corollary~\ref{cor:running-time}, $p(z)^{\ell}$ can be computed in $\mathcal{O}(2^n n^3 \log n \cdot \ell)$ time. As the cd-chromatic number of a graph is upper bounded by $n$, the claimed running time bound for the algorithm follows. \end{proof} \section{FPT Algorithms for cd-Chromatic Number} Determining whether a graph $G$ has cd-chromatic number at most $q$ is \NP-hard on general graphs for $q \ge 4$. This implies that the \textsc{cd-Coloring} problem parameterized by the number of colors is para-\NP-hard on general graphs, which necessitates the search for special classes of graphs where \textsc{cd-Coloring} is \FPT. In this section we give \FPT\ algorithms for \textsc{cd-Coloring} on chordal graphs and graphs of girth at least $5$. We start by proving that \textsc{cd-Coloring} parameterized by the number of colors and the treewidth of the graph is \FPT.\ Towards this, we will use Courcelle's powerful theorem, which links the fixed-parameter tractability of a graph property with its expressibility as an MSO formula. Many graph-theoretic properties can be written as MSO formulas. The following are three examples which we will use in writing an MSO formula to check whether a graph has cd-chromatic number at most $q$. \begin{itemize} \item To check whether $V_1, V_2, \dots ,V_q$ is a partition of $V(G)$. $${\sf Part}(V_1, V_2, \dots ,V_q) \equiv \forall u \in V(G)[\exists i \in [q] [(u \in V_i) \land (\forall j \in [q][i \neq j \Rightarrow u \not\in V_j])]]$$ \item To check whether a given vertex set $V_i$ is an independent set. $${\sf IndSet}(V_i) \equiv \forall u \in V_i [\forall v \in V_i [\lnot adj(u, v)]]$$ \item To check whether a given vertex set $V_i$ is dominated by some vertex. $${\sf Dom}(V_i) \equiv \exists u \in V(G)[\forall v \in V_i[adj(u, v)]]$$ \end{itemize} We use $\phi(G, q)$ to denote the MSO formula which states that $G$ has cd-chromatic number at most $q$. We use the formulas defined above as macros in $\phi(G, q)$.
\vspace{0.25cm} \begin{tabular}{ccl} $\phi(G, q)$ & $\equiv$ & $\exists V_1, V_2, \dots, V_q \subseteq V(G)[{\sf Part}(V_1, V_2, \dots, V_q) \land$\\ & & ${\sf IndSet}(V_1) \land \dots \land {\sf IndSet}(V_q) \land {\sf Dom}(V_1) \land \cdots \land {\sf Dom}(V_q)$] \end{tabular} It is easy to see that the length of $\phi(G, q)$ is upper bounded by a linear function of $q$. By applying Theorem~\ref{thm:courcelle} we obtain the following result. \begin{theorem} \label{thm:general-graph-fpt} \textsc{cd-Coloring} parameterized by the number of colors and the treewidth of the input graph is \FPT. \end{theorem} \subsection{Chordal Graphs} As the graph gets more structured, we expect many \NP-hard problems to become easier in some sense on the restricted class of graphs having that structure. For example, \textsc{Chromatic-Coloring} is \NP-hard on general graphs but polynomial time solvable on chordal graphs. However, \textsc{cd-Coloring} is \NP-hard even on chordal graphs, and we show that it is \FPT\ when parameterized by the number of colors on chordal graphs. \begin{theorem} \textsc{cd-Coloring} parameterized by the number of colors is \FPT\ on chordal graphs. \end{theorem} \begin{proof} For a chordal graph $G$, ${\mathbf{tw}}(G) = \omega(G) - 1$ where $\omega(G)$ is the size of a maximum clique in $G$ \cite{golumbic2004algorithmic}. Since a cd-coloring is also a proper coloring, no two vertices in a clique can be in the same color class. Thus, if $\omega(G) > k$ then we can conclude that $(G, k)$ is a NO-instance of \textsc{cd-Coloring}. Otherwise, $\omega(G) \le k$, which implies that ${\mathbf{tw}}(G) \le k - 1$. This bound and Theorem~\ref{thm:general-graph-fpt} imply that \textsc{cd-Coloring} parameterized by the number of colors is \FPT\ on chordal graphs.
\end{proof} \subsection{Graphs with girth at least $5$} In this section, we show that \textsc{cd-Coloring} on graphs of girth at least five is \FPT\ with the solution size as the parameter. By Observation~\ref{obs:graph-connected}, we can assume that the input graph $G$ is connected. The cd-coloring of a connected graph can equivalently be defined as a proper coloring such that every color class is contained in the open neighbourhood of some vertex. In other words, we do not allow a vertex to dominate itself. One can verify that the two definitions of cd-coloring coincide on connected graphs. We now define the notion of a {\em total dominating set} of a graph $G$. A set $S\subseteq V(G)$ is called a {\em total dominating set} if $V(G)=\bigcup_{v\in S} N(v)$. That is, for every vertex $v\in V(G)$, there exists a vertex $u\in S$, $u\neq v$, such that $v\in N(u)$. Our interest in total dominating sets stems from their relation to cd-coloring in graphs that do not contain triangles, that is, graphs of girth at least 4. In particular, we need the following lemma, first proved in \cite{merouane2015dominated}. For the sake of completeness, we present a proof here. \begin{lemma}[Theorem~$4$ in \cite{merouane2015dominated}]\label{lemma:mim-TDS-CDcol} If $g(G) \ge 4$, then the size of a minimum total dominating set is equal to $\chi_{cd}(G)$. \end{lemma} \begin{proof} Let $\phi$ be a cd-coloring of $G$ that uses $\chi_{cd}(G)$ colors and let $V_1, \dots ,V_q$ be the color classes in this coloring. Then, for every color class $V_i$, there is a vertex $v_i$ such that $V_i \subseteq N(v_i)$. Let $X$ denote the set of these vertices. Then, $X$ has at most $q$ vertices and, by definition, it is a total dominating set of $G$. Hence, the size of a minimum total dominating set of a graph is at most the cd-chromatic number of the graph. Suppose $X = \{v_1, v_2, \dots, v_k\}$ is a minimum total dominating set of $G$.
We construct a cd-coloring of $G$ using at most $k$ colors. We define the color classes in the following way. Let $V_1 = N(v_1)$ and for $i = 2, \dots, k$, define $V_i = N(v_i) \setminus (V_1 \cup V_2 \cup \dots \cup V_{i-1})$. Note that $V_1, \dots , V_k$ form a partition of $V(G)$. Since $g(G)\geq 4$, it follows that each $V_i$ is an independent set. Furthermore, since $X$ is a total dominating set, for each $i \in [k]$, we have a vertex $v_i\in X$ such that $V_i\subseteq N(v_i)$. Hence, this gives a cd-coloring of $G$. Therefore, the cd-chromatic number of a graph is at most the cardinality of a minimum total dominating set. Now the lemma follows by combining the above two inequalities. \end{proof} Lemma~\ref{lemma:mim-TDS-CDcol} shows that to prove that \textsc{cd-Coloring} is \FPT\ on graphs of girth at least four, it suffices to show that finding a total dominating set of size at most $k$ is \FPT\ on these graphs. This leads to the \textsc{Total Dominating Set} problem. Given a graph $G$ and an integer $k$, the \textsc{Total Dominating Set} problem asks whether there exists a total dominating set of size at most $k$. Observe that we can test whether $G$ has a total dominating set of size at most $k$ by enumerating all subsets $S$ of $V(G)$ of size at most $k$ and checking whether any of them forms a total dominating set. This immediately gives an algorithm with running time $n^{\mathcal{O}(k)}$ for \textsc{cd-Coloring} on graphs with girth at least $4$, as the checking part can be done in polynomial time. It is not hard to modify the reduction given in~\cite{raman2008short} to show that \textsc{Total Dominating Set} is $W[2]$-hard on bipartite graphs. Thus, Lemma~\ref{lemma:mim-TDS-CDcol} implies that even \textsc{cd-Coloring} is $W[2]$-hard on bipartite graphs. Hence, to show that \textsc{cd-Coloring} is \FPT, we must assume that the girth of the input graph is at least $5$.
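The constructive direction of Lemma~\ref{lemma:mim-TDS-CDcol} is a simple greedy pass over the total dominating set; a Python sketch for graphs given as adjacency sets (assuming girth at least $4$, so every neighbourhood is independent):

```python
def cd_coloring_from_tds(adj, tds):
    """Greedy colour classes V_i = N(v_i) minus earlier classes, for v_i in the TDS."""
    classes, used = [], set()
    for v in tds:
        cls = adj[v] - used
        if cls:                  # empty classes are simply skipped
            classes.append(cls)
            used |= cls
    return classes

# C4 (girth 4): vertices 0-1-2-3 in a cycle; {0, 1} is a total dominating set
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
classes = cd_coloring_from_tds(adj, [0, 1])
```

On $C_4$ this yields the two classes $\{1,3\}$ and $\{0,2\}$, so at most $|X|$ colours are used, exactly as in the proof.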
In the rest of this section, we show that \textsc{cd-Coloring} is \FPT\ on graphs with girth at least $5$ by showing that \textsc{Total Dominating Set} is \FPT\ on those graphs. Before proceeding further, we note some simple properties of graphs with girth at least $5$. \begin{observation} \label{obs:nbd-ind} For a graph $G$, if $g(G) \ge 5$ then for any $v$ in $V(G)$, $N(v)$ is an independent set and for any two distinct $u, v$ in $V(G)$, $|N(v) \cap N(u)| \le 1$. \end{observation} Raman and Saurabh \cite{raman2008short} defined a variation of the \textsc{Set Cover} problem, namely, \textsc{Bounded Intersection Set Cover}. An input to the problem consists of a universe $\mathcal{U}$, a collection $\mathcal{F}$ of subsets of $\mathcal{U}$ and a positive integer $k$ with the property that for any two $S_i, S_j$ in $\mathcal{F}$, $|S_i \cap S_j| \leq c$ for some constant $c$, and the objective is to check whether there exists a sub-collection $\mathcal{F}_0$ of $\mathcal{F}$ of size at most $k$ such that $\bigcup_{S \in \mathcal{F}_0} S = \mathcal{U}$. In the same paper, the authors proved that \textsc{Bounded Intersection Set Cover} is \FPT\ when parameterized by the solution size. \textsc{Total Dominating Set} on $(G, k)$ where $G$ has girth at least $5$ can be reduced to \textsc{Bounded Intersection Set Cover} with $\mathcal{U} = V(G)$ and $\mathcal{F} = \{N(v) \mid v \in V(G)\}$. By Observation~\ref{obs:nbd-ind}, we can fix the constant $c$ to be $1$. Hence we have the following lemma. \begin{lemma} \label{lemma:tds-fpt} On graphs with girth at least $5$, \textsc{Total Dominating Set} is \FPT\ when parameterized by the solution size. \end{lemma} We now prove that the problem has a polynomial kernel and use it to design another \FPT\ algorithm. \begin{lemma} \label{lemma:tds-kernel} \textsc{Total Dominating Set} admits a kernel on $\mathcal{O}(k^3)$ vertices on the class of graphs with girth at least 5.
\end{lemma} \begin{proof} We start the proof with the following claim, which says that every high-degree vertex must be included in every total dominating set of size at most $k$. \begin{claim} In a graph $G$ with $g(G) \ge 5$, if there is a vertex $u$ with degree at least $k + 1$, then any total dominating set of size at most $k$ contains $u$. \end{claim} \begin{proof} Suppose there exists a total dominating set $X$ of $G$ of size at most $k$ which does not contain $u$. Since $N(u)$ (having size at least $k + 1$) is dominated by $X$ and no vertex can dominate itself, by the pigeonhole principle, there exists a vertex, say $w$, in $X$ which is adjacent to at least two vertices, say, $v_1, v_2$ in $N(u)$. This implies that $w,v_1,v_2,u$ form a cycle of length $4$, contradicting the fact that the girth of $G$ is at least $5$. \end{proof} Suppose $G$ has a total dominating set of size at most $k$. Construct a tri-partition of $V(G)$ as follows: \begin{eqnarray*} H & = & \{u \in V(G) ~|~ |N(u)| \ge k + 1\}; \\ J & = & \{v \in V(G) ~|~ v\notin H,~\exists u \in H \text{ such that } (u,v) \in E(G)\}; \\ R & = & V(G) \setminus (H \cup J) \end{eqnarray*} By the above claim, $H$ is contained in every total dominating set of size at most $k$. Hence, the size of $H$ is upper bounded by $k$. Note that there is no edge between a vertex in $H$ and a vertex in $R$. Thus, $R$ has to be dominated by at most $k$ vertices from $J \cup R$. However, the degree of vertices in $J \cup R$ is at most $k$ and hence $|R| \leq k^2$ and $|J \cap N(R)|$ is upper bounded by $k^3$. We will now bound the size of $J^\star=J \setminus N(R)$. For that, we first apply the following reduction rule on the vertices in $J^\star$. \begin{Reduction Rule} \label{rr:nbd-subset} For $u, v \in J^\star$, if $N(u) \cap H \subseteq N(v) \cap H$ then delete $u$.
\end{Reduction Rule} The correctness of this reduction follows from the observation that all the vertices in $J$ are dominated by the vertices in $H$. The only reason for a vertex in $J^\star$ to be part of a total dominating set is to dominate some vertex in $H$. If this is the case, then the vertex $u$ in the solution can be replaced by the vertex $v$. In the reverse direction, if $X$ is a total dominating set of $G - \{u\}$ and $|X| \leq k$, then $H \subseteq X$. Hence $u$ is dominated by $x \in X \cap H$ in $G$ too. That is, $X$ is a total dominating set of $G$. All that remains is to bound the size of $J^\star$. We partition $J^\star$ into two sets, namely $J_1$ and $J_2$. The set $J_1$ is the set of vertices which are adjacent to exactly one vertex in $H$, whereas each vertex in $J_2$ is adjacent to at least two vertices in $H$. After exhaustive application of Reduction Rule~\ref{rr:nbd-subset}, no two vertices in $J_1$ can be adjacent to the same vertex in $H$ and hence $|J_1| \le |H| \le k$. Any vertex in $J_2$ is adjacent to at least two vertices in $H$. For every vertex $u$ in $J_2$, we assign a pair of vertices in $H$ to which $u$ is adjacent. By Observation~\ref{obs:nbd-ind}, no two vertices in $J_2$ can be assigned the same pair and hence the size of $J_2$ is upper bounded by $\binom{k}{2} \le k^2$. Combining all the bounds, we get a kernel with $\mathcal{O}(k^3)$ vertices. \end{proof} Combining Lemmas~\ref{lemma:mim-TDS-CDcol} and \ref{lemma:tds-kernel}, we obtain the following theorem. \begin{theorem} On graphs with girth at least $5$, \textsc{cd-Coloring} admits an algorithm running in $\mathcal{O}(2^{\mathcal{O}(q^{3})} q^{12} \log q^3)$ time and an $\mathcal{O}(q^3)$-sized vertex kernel, where $q$ is the number of colors. \end{theorem} \section{Preliminaries} The set of integers $\{1,2,\ldots,k\}$ is denoted by $[k]$. All graphs considered in this paper are finite, undirected and simple.
For the terms which are not explicitly defined here, we use standard notations from \cite{diestel2000graph}. For a graph $G$, its vertex set is denoted by $V(G)$ and its edge set is denoted by $E(G)$. For a vertex $v \in V(G)$, its (open) {\em neighbourhood} $N_G(v)$ is the set of all vertices adjacent to it and its {\em closed neighborhood} is the set $N_G(v) \cup \{v\}$. We omit the subscript in the notation for neighbourhood if the graph under consideration is clear from the context. The degree of a vertex $v$ is the size of its open neighborhood. For a set $S\subseteq V(G)$, the {\it subgraph of $G$ induced by $S$}, denoted by $G[S]$, is defined as the subgraph of $G$ with vertex set $S$ and edge set $\{(u,v) \in E(G) :u,v\in S\}$. The subgraph of $G$ obtained after deleting $S$ (and the edges incident on it) is denoted as $G- S$. The {\em girth} of a graph is the length of a smallest cycle. A set $D \subseteq V(G)$ is said to be a {\em dominating set} of $G$ if every vertex in $V(G) \setminus D$ is adjacent to some vertex in $D$. A {\em proper coloring} of $G$ with $q$ colors is a function $f: V(G) \rightarrow [q]$ such that for all $(u,v) \in E(G)$, $f(u) \neq f(v)$. For a proper coloring $f$ of $G$ with $q$ colors and $i \in [q]$, $f^{-1} (i) \subseteq V(G)$ is called a {\em color class} in the coloring $f$. The {\em chromatic number} $\chi(G)$ of $G$ is the minimum number of colors required in a proper coloring of $G$. A {\em clique} is a graph which has an edge between every pair of vertices. The {\em clique number} $\omega(G)$ of $G$ is the size of a largest clique which is a subgraph of $G$. A {\em vertex cover} is a set of vertices that contains at least one endpoint of every edge in the graph. An {\em independent set} is a set of pairwise nonadjacent vertices. A graph is said to be a {\em bipartite graph} if its vertex set can be partitioned into two independent sets. 
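The girth defined above plays a central role in this paper. For unweighted graphs it can be computed by one BFS per start vertex, a standard $\mathcal{O}(nm)$ routine: every non-tree edge $(u,w)$ encountered closes a walk of length $d(u)+d(w)+1$, which never underestimates the girth, and a BFS started on a shortest cycle attains it. A minimal Python sketch (adjacency-dictionary representation; all names are ours):

```python
from collections import deque

def girth(adj):
    """Length of a shortest cycle, or None for acyclic graphs.
    `adj` maps each vertex to its set of neighbours."""
    best = None
    for s in adj:                       # one BFS per start vertex
        dist = {s: 0}
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:       # BFS-tree edge
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    queue.append(w)
                elif w != parent[u]:    # non-tree edge closes a walk
                    cand = dist[u] + dist[w] + 1
                    if best is None or cand < best:
                        best = cand
    return best
```

For instance, a $5$-cycle has girth $5$, a triangle has girth $3$, and a path has no cycle at all.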
An {\em odd cycle transversal} is a set of vertices whose deletion from the graph results in a bipartite graph. A {\em tree-decomposition} of a graph $G$ is a pair $(\mathbb{T},\mathcal{ X}=\{X_{t}\}_{t\in V({\mathbb T})})$ such that \begin{itemize} \item $\bigcup_{t\in V(\mathbb{T})}{X_t}=V(G)$, \item for every edge $(x,y)\in E(G)$ there is a $t\in V(\mathbb{T})$ such that $\{x,y\}\subseteq X_{t}$, and \item for every vertex $v\in V(G)$ the subgraph of $\mathbb{T}$ induced by the set $\{t\mid v\in X_{t}\}$ is connected. \end{itemize} \noindent The {\em width} of a tree decomposition is $\max_{t\in V(\mathbb{T})} |X_t| -1$ and the {\em treewidth} of $G$, denoted by ${\mathbf{tw}}(G)$, is the minimum width over all tree decompositions of $G$. The syntax of {\em Monadic Second Order Logic (MSO)} of graphs includes the logical connectives $\vee$, $\wedge$, $\neg$, $\Rightarrow$, $\Leftrightarrow$, variables for vertices, edges, sets of vertices, sets of edges, the quantifiers $\forall$, $\exists$ that can be applied to these variables and the following five binary relations. \begin{itemize} \item $u \in U$ where $u$ is a vertex variable and $U$ is a vertex set variable; \item $e \in F$ where $e$ is an edge variable and $F$ is an edge set variable; \item ${\mathbf{inc}}(e,u)$, where $e$ is an edge variable, $u$ is a vertex variable, and the interpretation is that the edge $e$ is incident with the vertex $u$; \item ${\mathbf{adj}}(u,v)$, where $u$ and $v$ are vertex variables and the interpretation is that $u$ and $v$ are adjacent; \item equality of variables representing vertices, edges, sets of vertices, and sets of edges. \end{itemize} \noindent For an MSO formula $\phi$, $||\phi||$ denotes the length of its encoding as a string. \begin{theorem}[Courcelle's theorem, \cite{Courcelle90,Courcelle92}] \label{thm:courcelle} Let $\phi$ be a graph property that is expressible in MSO. 
Suppose $G$ is a graph on $n$ vertices with treewidth $tw$ equipped with the evaluation of all the free variables of $\phi$. Then, there is an algorithm that verifies whether $\phi$ is satisfied in $G$ in $f(||\phi||, tw) \cdot n$ time for some computable function $f$. \end{theorem} \noindent We end the preliminaries section with the following simple observations. \begin{observation} \label{obs:graph-connected} If $G_1,\ldots,G_l$ are the connected components of $G$, then $\chi_{cd}(G) = \sum_{i=1}^{l} \chi_{cd}(G_i)$. \end{observation} \begin{observation} \label{obs:color-dom-set} If $G$ is $q$-cd-colorable, then $G$ has a dominating set of size at most $q$. \end{observation} \section{Complexity of CD-Partization} In this section, we study the complexity of \textsc{cd-Partization}. As recognizing graphs with cd-chromatic number at most $q$ is \NP-hard on general graphs for $q \geq 4$, the deletion problem is also \NP-hard on general graphs for such values of $q$. For $q=1$, the problem is trivial as $\chi_{cd}(G)=1$ if and only if $G$ is the graph on one vertex. In this section, we show \NP-hardness for $q \in \{2,3\}$. We remark that $\mathcal{G}=\{G \mid \chi_{cd}(G) \leq q\}$ is not a hereditary graph class and so the generic result of Lewis and Yannakakis \cite{node-del} does not imply the claimed \NP-hardness. \subsection{Para-\NP-hardness in General Graphs} Consider the following problem. \defdecproblem{Partization}{Graph $G$, integers $k$ and $q$}{Does there exist $S \subseteq V(G)$, $|S| \leq k$, such that $\chi(G-S)\leq q$?} Once again, if $q$ is fixed, we refer to the problem as \textsc{$q$-Partization}. Observe that the classical \NP-complete problems \textsc{Vertex Cover} \cite{garey} and \textsc{Odd Cycle Transversal} \cite{garey} are \textsc{1-Partization} and \textsc{2-Partization}, respectively. Now, we proceed to show the claimed hardness. \begin{theorem} \textsc{$q$-\textsc{cd-Partization}} is \NP-complete for $q \in \{2,3\}$.
\end{theorem} \begin{proof} The problem is in \NP\ as determining if the cd-chromatic number of a graph is at most $q \in \{1,2,3\}$ is polynomial-time solvable. Given an instance $(G,k)$ of \textsc{$q$-Partization} where $q \in \{1,2\}$, we construct the instance $(G',k)$ of \textsc{$(q+1)$-\textsc{cd-Partization}} as follows: $G'$ is obtained from $G$ by adding a new vertex $v$ adjacent to every vertex in $V(G)$ and adding $k+q+2$ new vertices $v_1,\cdots,v_{k+q+2}$ adjacent to $v$. We claim that $G$ has a set of $k$ vertices whose deletion results in a $q$-colorable graph if and only if $G'$ has a set of $k$ vertices whose deletion results in a $(q+1)$-cd-colorable graph. Consider a set $S$ of $k$ vertices such that $\chi(G-S) \leq q$. Then, $G'-S$ is $(q+1)$-cd-colorable as a new color can be assigned to $v$ and any of the $q$ colors of $G-S$ can be assigned to $v_1,\cdots,v_{k+q+2}$. The color class containing $v$ is a singleton set. This class is dominated by any neighbour of $v$ in $G'-S$, for instance $v_1$. Further, $v$ dominates each of the other $q$ color classes as $v$ is a universal vertex in $G'$. Conversely, let $S' \subseteq V(G')$ be a minimal set of at most $k$ vertices such that $\chi_{cd}(G'-S') \leq q+1$. Now, if $v \in S'$, then the vertices $v_1,\cdots,v_{k+q+2}$ are isolated in $G'-\{v\}$, implying that either $|\{v_1,\cdots,v_{k+q+2}\} \cap S'| \geq k+1$ or $\chi_{cd}(G'-S') > q+1$. So, we can assume that $v \notin S'$. Further, as $S'$ is minimal, it follows that $\{v_1,\cdots,v_{k+q+2}\} \cap S'=\emptyset$. Also, as $v$ is adjacent to every other vertex of $G'-S'$, its color class in any cd-coloring of $G'-S'$ is the singleton $\{v\}$, and hence $\chi(G-S') \leq q$. So, $S'$ is a subset of $V(G)$ of size at most $k$ such that $G-S'$ is $q$-colorable. \end{proof} \subsection{\NP-hardness and Fixed-Parameter Tractability in Split Graphs} A graph is a {\em split graph} if its vertex set can be partitioned into a clique and an independent set.
As split graphs are perfect (clique number is equal to the chromatic number for every induced subgraph), we have the following observation. \begin{obs} \label{split-r-col} A split graph $G$ is $r$-colorable if and only if $\omega(G) \leq r$. \end{obs} The following result is known for the corresponding deletion problem. \begin{theorem}[\cite{cor,yan}] \textsc{Partization on Split Graphs} is \NP-complete. \end{theorem} This hardness was shown by a reduction from \textsc{Set Cover} \cite{garey}. We modify this reduction to show that \textsc{cd-Partization}\ is \NP-complete on split graphs. The problem is in \NP\ as a minimum cd-coloring of a split graph can be computed in polynomial time due to the following result. \begin{theorem} [\cite{caldam16}] \label{split-cd} If $G$ is a connected split graph, then $\omega(G)=\chi_{cd}(G)$. Furthermore, there is an $\mathcal{O}({|V(G)|}^2)$ time algorithm that returns a minimum cd-coloring of $G$. \end{theorem} \begin{theorem} \label{split-hard} \textsc{\textsc{cd-Partization}} on split graphs is \NP-hard. \end{theorem} \begin{proof} Consider a \textsc{Set Cover} instance $(U, \mathcal{F}, k)$ where $U=\{x_1,\cdots,x_n\}$ is a finite set and $\mathcal{F}$ is a family $\{S_1,\cdots,S_m\}$ of subsets of $U$. The problem is to determine if there is a collection of at most $k$ sets in $\mathcal{F}$ such that each element of $U$ is in at least one set of the collection. The corresponding instance of \textsc{cd-Partization} is $(G, k'=m-k, q=k+1)$ where $G$ is a split graph on the vertex set $C \cup I \cup \{w_0,w_1,\cdots,w_{k+k'+2}\}$ where $C=\{u_i \mid S_i \in \mathcal{F}\}$ and $I=\{v_i \mid x_i \in U\}$. Also, $(v_i,u_j) \in E(G)$ if and only if $x_i \notin S_j$ and $w_0$ is adjacent to every vertex in $C \cup I \cup \{w_1,\cdots,w_{k+k'+2}\}$. Further, $I \cup \{w_1,\cdots,w_{k+k'+2}\}$ and $C$ induce an independent set and a clique, respectively, in $G$.
We claim that a set $\mathcal{F}' \subseteq \mathcal{F}$ of size $k$ is a set cover if and only if $G-S'$ is $q$-cd-colorable where $S'=\{u_i \in C \mid S_i \in \mathcal{F} \setminus \mathcal{F}'\}$ and $|S'|=k'$. Consider a set cover $\mathcal{F}' \subseteq \mathcal{F}$ of size $k$. If there is a clique $Q$ (without loss of generality assume $w_0 \in Q$) of size $k+2$ in $G -S'$, then $Q$ must contain an element $v_i \in I$ that is adjacent to all $k$ vertices in $C\setminus S'$. However, since $\mathcal{F}'$ is a set cover, $v_i$ is non-adjacent to at least one $u_j$ in $C\setminus S'$, leading to a contradiction. Thus, $S'$ has a non-empty intersection with every $(k+2)$-clique in $G$. As $G-S'$ is a split graph with clique number at most $k+1$, it is $(k+1)$-colorable due to Observation \ref{split-r-col}. Further, $G-S'$ is $(k+1)$-cd-colorable as the color class containing $w_0$ is the singleton $\{w_0\}$ (since $w_0$ is a universal vertex), which is dominated by any of its neighbours, and the other color classes are dominated by $w_0$. Conversely, consider a minimal subset $S'$ of $k'$ vertices such that $G - S'$ is $(k+1)$-cd-colorable. Now, if $w_0 \in S'$, then the vertices $w_1,\cdots,w_{k+k'+2}$ are isolated in $G-\{w_0\}$, implying that either $|\{w_1,\cdots,w_{k+k'+2}\} \cap S'| \geq k'+1$ or $\chi_{cd}(G-S') > k+1$. So, we can assume that $w_0 \notin S'$. Further, as $S'$ is minimal, it follows that $\{w_1,\cdots,w_{k+k'+2}\} \cap S'=\emptyset$. Now, all vertices in $S'$ must belong to $C$: if there exists $v_i \in S' \cap I$, then at most $k'-1$ vertices of the clique $C$ (which has $k+k'$ vertices) are deleted, and the remaining clique together with $w_0$ yields a clique of size at least $k+2$ in $G - S'$. Also, no vertex in $I$ is adjacent to all vertices in $C \setminus S'$, as such a vertex $v_i$ together with $w_0$ and $C \setminus S'$ would form a $(k+2)$-clique in $G - S'$. Thus, every vertex in $I$ is nonadjacent to at least one element in $C \setminus S'$, implying that $\{S_i \in \mathcal{F} \mid u_i \in C \setminus S'\}$ is a set cover of $(U, \mathcal{F})$ of size at most $k$.
\end{proof} As \textsc{Set Cover} parameterized by solution size is $\W[2]$-hard \cite{fpt-book}, we have the following result. \begin{corollary} \textsc{\textsc{cd-Partization}} on split graphs parameterized by $q$ is $\W[2]$-hard. \end{corollary} Now, we show that the problem is $\FPT$ with respect to $q$ and $k$. \begin{theorem} \textsc{cd-Partization}\ on split graphs is $\FPT$ with respect to parameters $q$ and $k$. Furthermore, the problem does not admit a polynomial kernel unless \NP~$\subseteq$~\coNP/poly\xspace. \end{theorem} \begin{proof} Compute a maximum clique $Q$ of $G$ in polynomial time. If $|Q| \leq q$, then the input instance is a YES instance as $\chi_{cd}(G) \leq q$ from Theorem \ref{split-cd}. Otherwise, choose an arbitrary subset of size $q+1$ from $Q$. Since any solution contains at least one of these $q+1$ vertices, a straightforward branching algorithm runs in $\mathcal{O}^*((q+1)^k)$ time. Now, we move on to the kernelization hardness. \textsc{Set Cover} is known not to admit a polynomial kernel when parameterized by the solution size $k'$ and family size $m$ as combined parameters unless \NP~$\subseteq$~\coNP/poly\xspace~\cite{fpt-book}. The reduction in Theorem \ref{split-hard} produces instances of \textsc{cd-Partization}\ where the solution size $k$ is $m-k'$ and $q$ is $k'+1$, implying that $q+k$ is $m+1$. Therefore, a $(q+k)^{\mathcal{O}(1)}$ kernel for \textsc{cd-Partization}\ implies an $m^{\mathcal{O}(1)}$ kernel for \textsc{Set Cover}, which is unlikely. \end{proof} \section{Concluding Remarks} In this work, we described exact and \FPT\ algorithms for problems associated with cd-coloring. We also explored the complexity of finding the cd-chromatic number in graphs of girth at least $5$ and chordal graphs. On the former graph class, we described a polynomial kernel. The kernelization complexity on other graph classes and whether the problem is \FPT\ parameterized by only treewidth are natural directions for further study.
It is also interesting to determine the exact dependence of the running time on treewidth and the number of colors. \section{Deletion to 3-cd-Colorable Graphs} In a graph $G$, an edge $e=(u,v)$ is said to be a dominating edge if $N(u) \cup N(v)=V(G)$. Let $\overline{N[v]}$ denote the set $V(G) \setminus N[v]$. The following characterization of 3-cd-colorable graphs is known from \cite{caldam16}. \begin{theorem}[\cite{caldam16}] \label{3-char} A connected graph $G$ satisfies $\chi_{cd}(G) \leq 3$ if and only if $G$ is one of the following types.\\ (Type 0) $G$ is a graph on at most 3 vertices. \\ (Type 1) $G$ is a bipartite graph with a dominating edge.\\ (Type 2) $G$ has a vertex $v$ such that $G-v$ is a bipartite graph with a dominating edge.\\ (Type 3) $G$ has an ordered pair $(x,y)$ of adjacent vertices such that, \vspace{-.15cm} \begin{itemize} \item $V(G)=\{x,y\} \uplus X \uplus Y$, \item $G[X \cup \{y\}]$ is a bipartite graph with at least one edge, \item $Y \cup \{x\}$ is an independent set, $Y \cup \{x\} \subseteq N(y)$ and $X \cup \{y\} \subseteq N(x)$. \end{itemize} \vspace{-.15cm} (Type 4) $G$ has an ordered set $(x,y,z)$ of vertices inducing a triangle such that, \vspace{-.15cm} \begin{itemize} \item $V(G)=\{x,y,z\} \uplus X \uplus Y \uplus Z$, \item $X \subseteq N(x)$, $Y \subseteq N(y)$ and $Z \subseteq N(z)$, \item $X \cup \{y\}$, $Y \cup \{z\}$ and $Z \cup \{x\}$ are independent sets. \end{itemize} \vspace{-.15cm} (Type 5) $G$ has an ordered triple $(x,y,z)$ of vertices such that, \vspace{-.15cm} \begin{itemize} \item $V(G)=\{x,y\} \uplus X \uplus Y \uplus Z$, \item $z \in X\cup Y$, $(x,y) \notin E(G)$ and $(x,z), (y,z) \in E(G)$, \item $X \subseteq N(x)$, $Y \subseteq N(y)$ and $Z \subseteq N(z)$, \item $X$, $Y$, $Z \cup \{x\}$ and $Z \cup \{y\}$ are independent sets. \end{itemize} \end{theorem} We refer to the ordered sets in Types 3, 4 and 5 as dominators. In \cite{caldam16}, they are called d-pair, cd-triangle and NB-triplet, respectively.
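For intuition, membership in the simplest nontrivial type can be tested directly: a connected graph is of Type 1 precisely when it is bipartite and some edge $(u,v)$ satisfies $N(u) \cup N(v) = V(G)$. A minimal Python sketch (adjacency-dictionary representation; all names are ours, not from the paper):

```python
from collections import deque

def is_type1(adj):
    """Test whether a connected graph is bipartite and has a
    dominating edge (u, v), i.e. N(u) | N(v) = V(G)."""
    # BFS 2-coloring to test bipartiteness
    start = next(iter(adj))
    color = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in color:
                color[w] = 1 - color[u]
                queue.append(w)
            elif color[w] == color[u]:
                return False            # odd cycle: not bipartite
    vertices = set(adj)
    # search for a dominating edge among all edges
    return any(adj[u] | adj[v] == vertices for u in adj for v in adj[u])
```

For example, the $4$-cycle is of Type 1 (every edge is dominating), while the path on five vertices is bipartite but has no dominating edge.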
Now, we proceed to solve \textsc{$3$-\textsc{cd-Partization}}. Let $G$ be the input graph on $n$ vertices and $m$ edges, and let $k$ be a positive integer. Consider a set $S \subseteq V(G)$ such that $H=G-S$ is $3$-cd-colorable. Then, $H$ is of one of the types listed in Theorem \ref{3-char}. Before we proceed to describe algorithms for each of these types, we list the following well-known results on \textsc{Vertex Cover} and \textsc{Odd Cycle Transversal} that we use in our algorithms. Here, the $\mathcal{O}^*$ notation suppresses factors that are polynomial in the input size. \begin{theorem}[\cite{vc-fpt-best}] \label{vc-best} Given a graph $G$ and a positive integer $k$, there is an algorithm running in $\mathcal{O}^*(1.2738^k)$ time that determines if $G$ has a vertex cover of size at most $k$ or not. \end{theorem} \begin{theorem}[\cite{oct-fpt-best}] \label{oct-best} Given a graph $G$ and a positive integer $k$, there is an algorithm running in $\mathcal{O}^*(2.3146^k)$ time that determines if $G$ has an odd cycle transversal of size at most $k$ or not. \end{theorem} As we subsequently show, our algorithms reduce the problem of finding an optimum deletion set to that of finding appropriate vertex covers and constrained odd cycle transversals. The current best parameterized algorithm for finding a vertex cover can be used directly as a subroutine in our algorithm, while the one for finding an odd cycle transversal requires the following results. Consider a graph $G$ and let $v$ be a vertex in $G$. Define the graph $G'$ to be the graph obtained from $G$ by deleting $v$ and adding a new vertex $v_{ij}$ for each pair $v_i$, $v_j$ of neighbors of $v$; further, $v_{ij}$ is adjacent to $v_i$ and $v_j$. \begin{lemma} \label{constrained-oct1} $G$ has a minimal odd cycle transversal of size at most $k$ that excludes vertex $v$ if and only if $G'$ has a minimal odd cycle transversal of size at most $k$.
\end{lemma} \begin{proof} Consider an odd cycle transversal $O$ of $G$ excluding $v$ and let $(X,Y)$ be a bipartition of $G-O$. Without loss of generality, let $v \in X$. Then, every vertex in $N(v)$ is either in $O$ or in $Y$. Thus, $X'=(X \setminus \{v\}) \cup (V(G') \setminus V(G))$ is an independent set in $G'$. Consequently, $(X',Y)$ is a bipartition of $G'-O$ implying that $O$ is an odd cycle transversal of $G'$. Conversely, any odd cycle transversal $O'$ of $G'$ can be modified to one that excludes each vertex in $\{v_{ij} \in V(G') \mid v_i,v_j \in N(v)\}$ without increasing the size, since any induced odd cycle through $v_{ij}$ is also an induced odd cycle through $v_i$ and $v_j$. Then, it follows that $O'$ is an odd cycle transversal of $G$ that excludes $v$. \end{proof} Let $P,Q \subseteq V(G)$ be two disjoint sets. Let $G''$ be the graph constructed from $G$ by adding an independent set $I_P$ of $k+1$ new vertices each of which is adjacent to every vertex in $P$ and an independent set $I_Q$ of $k+1$ new vertices each of which is adjacent to every vertex in $Q$. Further, every vertex in $I_P$ is adjacent to every vertex in $I_Q$. \begin{lemma} \label{constrained-oct2} $G$ has a minimal odd cycle transversal $O$ of size at most $k$ such that $G-O$ has a bipartition $(X,Y)$ with $P \subseteq X$ and $Q \subseteq Y$ if and only if $G''$ has a minimal odd cycle transversal of size at most $k$. \end{lemma} \begin{proof} Suppose $G-O$ has a bipartition $(X,Y)$ such that $P \subseteq X$ and $Q \subseteq Y$. Then, $G''-O$ has a bipartition $(X',Y')$ where $X'=X \cup I_Q$ and $Y'=Y \cup I_P$. Thus, $O$ is an odd cycle transversal of $G''$ too. Conversely, consider a minimal odd cycle transversal $O'$ of size at most $k$ of $G''$. Since $|I_P| = |I_Q| = k+1 > |O'|$, $O'$ excludes at least one vertex $a$ from $I_P$ and at least one vertex $b$ from $I_Q$. Consider an arbitrary bipartition $(A,B)$ of $G''-O'$; as $a$ and $b$ are adjacent, we may let $a \in A$ and $b\in B$.
Then, as $O'$ is minimal, $O' \cap (I_P \cup I_Q) = \emptyset$. Further, as any two vertices $p \in I_P$ and $q \in I_Q$ are adjacent, $I_P \subseteq A$ and $I_Q \subseteq B$. Since every vertex in $P$ is adjacent to all of $I_P$ and every vertex in $Q$ is adjacent to all of $I_Q$, it follows that $P \subseteq B$ and $Q \subseteq A$. \end{proof} \subsection{Deletion to Types 0, 1 and 2} It is trivial to check if $G$ has a solution whose deletion results in a graph $H$ with at most 3 vertices. So, deletion to Type 0 is easy. Now, suppose $H$ is of Type 1. Then, we need to identify an edge of $G$ that is a dominating edge for $H$. We describe an algorithm based on this observation. \begin{algorithm}[H] \DontPrintSemicolon \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A graph $G$ and a positive integer $k$.} \Output{$S \subseteq V(G)$, $|S| \leq k$ such that $G-S$ is of Type 1 (if one exists).} \BlankLine \nl \For{each edge $(x,y)$ in $G$} { Let $X'=N(x)\cap \overline{N[y]}$ and $Y'=N(y)\cap \overline{N[x]}$.\; Let $S'$ be $V(G) \setminus (X' \cup Y' \cup \{x,y\})$ and decrease $k$ by $|S'|$.\; \nl \For{each $k_1$ and $k_2$ such that $k_1+k_2 \leq k$} { \nl Compute a vertex cover $S_1$ of $G[X']$ with $|S_1| \leq k_1$ (if one exists).\; /* $(X' \setminus S_1) \cup \{y\}$ is an independent set */\; \nl Compute a vertex cover $S_2$ of $G[Y']$ with $|S_2| \leq k_2$ (if one exists).\; /* $(Y' \setminus S_2) \cup \{x\}$ is an independent set */\; \If{$S_1$ and $S_2$ exist} {\Return{$S' \cup S_1 \cup S_2$ } } } } \caption{Deletion-to-Type1$(G,k)$} \label{type1} \end{algorithm} \begin{lemma} \label{type1-thm} Algorithm \ref{type1} runs in $\mathcal{O}^*(1.2738^k)$ time. \end{lemma} \begin{proof} The outer loop (step 1) is executed at most $m$ times (once for each edge) and the inner loop (step 2) is executed at most $k^2$ times. Let $(x,y)$ be an edge in $G$.
We need to extend $\{x\}$ and $\{y\}$ into independent sets $Y$ and $X$ respectively, such that $X$ is dominated by $x$ and $Y$ is dominated by $y$. Clearly, neighbors of $x$ and non-neighbors of $y$ cannot be in $Y$. Similarly, neighbors of $y$ and non-neighbors of $x$ cannot be in $X$. No common neighbor of $x$ and $y$ can be in either $X$ or $Y$. Thus, the candidates for $X$ and $Y$ are $X'=N(x)\cap \overline{N[y]}$ and $Y'=N(y)\cap \overline{N[x]}$ respectively. All vertices in $V(G) \setminus (X' \cup Y' \cup \{x,y\})$ are in any solution. Let $k'=k-|V(G) \setminus (X' \cup Y' \cup \{x,y\})|$. Then, $G$ has a $3$-cd-partization solution $S$ of size at most $k$ such that $G-S$ is of Type 1 with $(x,y)$ as a dominating edge if and only if there exist integers $k_1$, $k_2$ with $k_1+k_2 \leq k'$ such that $G[X']$ has a vertex cover of size at most $k_1$ and $G[Y']$ has a vertex cover of size at most $k_2$. Now, Steps 3 and 4 take $\mathcal{O}^*(1.2738^k)$ time from Theorem \ref{vc-best}. Thus, the overall running time is $\mathcal{O}^*(1.2738^k)$. \end{proof} Suppose $H$ is of Type 2. Then, for each vertex $x$ of $G$, we simply run Algorithm \ref{type1} on $G-\{x\}$ with parameter $k$. \begin{algorithm}[H] \DontPrintSemicolon \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A graph $G$ and a positive integer $k$.} \Output{$S \subseteq V(G)$, $|S| \leq k$ such that $G-S$ is of Type 2 (if one exists).} \BlankLine \nl \For{each vertex $x$ in $G$} { Deletion-to-Type1$(G-\{x\},k)$.\; } \caption{Deletion-to-Type2$(G,k)$} \label{type2} \end{algorithm} \begin{lemma} \label{type2-thm} Algorithm \ref{type2} runs in $\mathcal{O}^*(1.2738^k)$ time. \end{lemma} \begin{proof} As Algorithm \ref{type2} calls Algorithm \ref{type1} at most $n$ times, its running time is $\mathcal{O}^*(1.2738^k)$. \end{proof} \subsection{Deletion to Type 3} Suppose $H$ is of Type 3 with dominator $(x,y)$. Then, the following holds.
\begin{obs}[\cite{caldam16}] $\overline{N_H[x]}$ is an independent set and $\overline{N_H[x]} \subseteq N_H(y)$. Further, $N_H(x)$ induces a bipartite graph with at least one edge. \end{obs} This observation leads to the following algorithm. \begin{algorithm}[H] \DontPrintSemicolon \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A graph $G$ and a positive integer $k$.} \Output{$S \subseteq V(G)$, $|S| \leq k$ such that $G-S$ is of Type 3 (if one exists).} \BlankLine \nl \For{each ordered pair $(x,y)$ of adjacent vertices in $G$} { Let $Y'= N(y) \cap \overline{N[x]}$ and $X'=N(x)$.\; Let $S'$ be $V(G) \setminus (X' \cup Y' \cup \{x\})$ and decrease $k$ by $|S'|$.\; \nl \For{each $k_1$ and $k_2$ such that $k_1+k_2 \leq k$} { \nl Compute a vertex cover $S_1$ of $G[Y']$ with $|S_1| \leq k_1$ (if one exists).\; /* $(Y' \setminus S_1) \cup \{x\}$ is an independent set */\; \nl Compute a minimal odd cycle transversal $S_2$ of at most $k_2$ vertices (if one exists) in $G[X']$ such that $y \notin S_2$.\; \If{$S_1$ and $S_2$ exist} {\Return{$S' \cup S_1 \cup S_2$ } } } } \caption{Deletion-to-Type3$(G,k)$} \label{type3} \end{algorithm} \begin{lemma} \label{type3-thm} Algorithm \ref{type3} runs in $\mathcal{O}^*(2.3146^k)$ time. \end{lemma} \begin{proof} The outer loop (step 1) is executed at most $2m$ times (as there are two ordered pairs for each edge) and the inner loop (step 2) is executed at most $k^2$ times. Consider an edge $(x,y)$ in $G$. If $(x,y)$ is a dominator in $H$, then we need to extend $\{x\}$ into an independent set $Y$ that is dominated by $y$ and extend $\{y\}$ into an induced bipartite graph $B$ (with at least one edge) such that $V(B)$ is dominated by $x$. Observe that $Y$ contains only neighbors of $y$ and $V(B)$ contains only neighbors of $x$.
Further, a neighbor of $y$ that is not adjacent to $x$ cannot be in $V(B)$ and a neighbor of $y$ that is adjacent to $x$ cannot be in $Y$. Thus, the candidates for $V(B)$ and $Y$ are $X'=N(x)$ and $Y'=N(y) \cap \overline{N[x]}$ respectively. All vertices in $V(G) \setminus (X' \cup Y' \cup \{x\})$ are in any solution. Let $k'=k-|V(G) \setminus (X' \cup Y' \cup \{x\})|$. Now, $G$ has a $3$-cd-partization solution $S$ of size at most $k$ such that $G-S$ is of Type 3 with $(x,y)$ as a dominator if and only if there exist integers $k_1$ and $k_2$ with $k_1+k_2 \leq k'$ such that $G[Y']$ has a vertex cover of size at most $k_1$ and $G[X']$ has an odd cycle transversal of size at most $k_2$ not containing $y$ such that the resulting bipartite graph has at least one edge. Clearly, step 3 takes $\mathcal{O}^*(1.2738^k)$ time. For step 4, we need to find a minimal odd cycle transversal that excludes vertex $y$. We construct a graph $G'$ obtained from $G[X']$ by deleting $y$ and adding a new vertex $y_{ij}$ for each pair $y_i$, $y_j$ of neighbors of $y$; further, $y_{ij}$ is adjacent to $y_i$ and $y_j$. From Lemma \ref{constrained-oct1}, we have that $G[X']$ has a minimal odd cycle transversal of size at most $k_2$ not containing $y$ if and only if $G'$ has a minimal odd cycle transversal of size at most $k_2$. Now, by using Theorem \ref{oct-best}, it follows that step 4 takes $\mathcal{O}^*(2.3146^k)$ time and this gives us the claimed running time of the algorithm. \end{proof} \subsection{Deletion to Type 4} Suppose $H$ is of Type 4 and has $(x,y,z)$ as a dominator. Then, we have the following observation. \begin{obs}[\cite{caldam16}] $N_H(x) \cap N_H(y) \cap N_H(z)=\emptyset$ and $\overline{N_H[x]} \cap \overline{N_H[y]} \cap \overline{N_H[z]}=\emptyset$. Further, $X=N_H(x) \cap \overline{N_H[y]}$, $Y=N_H(y) \cap \overline{N_H[z]}$ and $Z=N_H(z) \cap \overline{N_H[x]}$. \end{obs} Now, we have the following algorithm.
\begin{algorithm}[H] \DontPrintSemicolon \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A graph $G$ and a positive integer $k$} \Output{$S \subseteq V(G)$, $|S| \leq k$ such that $G-S$ is of Type 4 (if one exists)} \BlankLine \nl \For{each ordered triple $(x,y,z)$ of pairwise adjacent vertices of $G$} { Let $X'=N(x) \cap \overline{N[y]}$, $Y'=N(y) \cap \overline{N[z]}$ and $Z'=N(z) \cap \overline{N[x]}$.\; Let $S'$ be $V(G) \setminus (X' \cup Y' \cup Z' \cup \{x,y,z\})$ and decrease $k$ by $|S'|$.\; \nl \For{each $k_1$, $k_2$ and $k_3$ such that $k_1+k_2+k_3 \leq k$} { \nl Compute a vertex cover $S_1$ of $G[X']$ with $|S_1| \leq k_1$ (if one exists).\; /* $(X' \setminus S_1) \cup \{y\}$ is an independent set */\; \nl Compute a vertex cover $S_2$ of $G[Y']$ with $|S_2| \leq k_2$ (if one exists).\; /* $(Y' \setminus S_2) \cup \{z\}$ is an independent set */\; \nl Compute a vertex cover $S_3$ of $G[Z']$ with $|S_3| \leq k_3$ (if one exists).\; /* $(Z' \setminus S_3) \cup \{x\}$ is an independent set */\; \If{$S_1$, $S_2$ and $S_3$ exist} {\Return{$S' \cup S_1 \cup S_2 \cup S_3$ } } } } \caption{Deletion-to-Type4$(G,k)$} \label{type4} \end{algorithm} \begin{lemma} \label{type4-thm} Algorithm \ref{type4} runs in $\mathcal{O}^*(1.2738^k)$ time. \end{lemma} \begin{proof} The outer loop (step 1) is executed at most $n^3$ times and the inner loop (step 2) is executed at most $k^3$ times. Consider a triangle $\{x,y,z\}$ in $G$. If $(x,y,z)$ is a dominator in $H$, then we need to extend $\{x\}$, $\{y\}$, $\{z\}$ into independent sets $Z$, $X$, $Y$ dominated by $z$, $x$ and $y$ respectively. Thus, the candidates for $X$, $Y$ and $Z$ are the sets $X'=N(x) \cap \overline{N[y]}$, $Y'=N(y) \cap \overline{N[z]}$ and $Z'=N(z) \cap \overline{N[x]}$. All vertices in $S'=V(G) \setminus (X' \cup Y' \cup Z' \cup \{x,y,z\})$ are in any solution. Let $k'=k-|S'|$.
Then, $G$ has a $3$-cd-partization solution $S$ of size at most $k$ such that $G-S$ is of Type 4 with $(x,y,z)$ as a dominator if and only if there exist integers $k_1$, $k_2$ and $k_3$ with $k_1+k_2+k_3 \leq k'$ such that $G[X']$ has a vertex cover of size at most $k_1$, $G[Y']$ has a vertex cover of size at most $k_2$ and $G[Z']$ has a vertex cover of size at most $k_3$. Steps 3, 4 and 5 each take $\mathcal{O}^*(1.2738^k)$ time by Theorem \ref{vc-best} and the overall running time is $\mathcal{O}^*(1.2738^k)$. \end{proof} \subsection{Deletion to Type 5} Suppose $H$ is of Type 5 and has $(x,y,z)$ as a dominator. Then, we have the following observation. \begin{obs}[\cite{caldam16}] $\overline{N_H[x]} \cap \overline{N_H[y]}$ is an independent set. Further, $z \in N_H(x) \cup N_H(y)$ and $\overline{N_H[x]} \cap \overline{N_H[y]} \subseteq N_H(z)$. Moreover, in $H-Z$, $N[x] \cup N[y] = V(H-Z)$, $N(x) \setminus N(y) \subseteq X$ and $N(y) \setminus N(x) \subseteq Y$. \end{obs} Now, we have the following algorithm.
\begin{algorithm}[t] \DontPrintSemicolon \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{A graph $G$ and a positive integer $k$} \Output{$S \subseteq V(G)$, $|S| \leq k$ such that $G-S$ is of Type 5 (if one exists)} \BlankLine \nl \For{each ordered triple $(x,y,z)$ of vertices of $G$ such that $(x,y) \notin E(G)$ and $(x,z), (y,z) \in E(G)$} { Let $Z'$ be the set $N(z) \cap (\overline{N[y]} \cap \overline{N[x]})$.\; Let $S'$ be $(N(y) \cap N(z)) \setminus N(x)$.\; Let $B'$ be the set $\{z\} \cup ((N(x) \cup N(y)) \setminus S')$.\; Let $S''$ be $V(G) \setminus (Z' \cup B' \cup \{x,y\})$ and decrease $k$ by $|S''|$.\; \nl \For{each $k_1$ and $k_2$ such that $k_1+k_2 \leq k$} { \nl Compute a vertex cover $S_1$ of $G[Z']$ with $|S_1| \leq k_1$ (if one exists).\; /* $(Z' \setminus S_1) \cup \{x,y\}$ is an independent set */\; \nl Compute a minimal odd cycle transversal $S_2$ of $G[B']$ with $|S_2| \leq k_2$ not containing $z$ (if one exists) such that the resultant bipartite graph has a bipartition $(X,Y)$ with $X \subseteq N(x)$, $Y \subseteq N(y)$ and $z \in Y$.\; \If{both $S_1$ and $S_2$ exist} {\Return{$S'' \cup S_1 \cup S_2$ } } } } \caption{Deletion-to-Type5$(G,k)$} \label{type5} \end{algorithm} \begin{lemma} \label{type5-thm} Algorithm \ref{type5} runs in $\mathcal{O}^*(2.3146^k)$ time. \end{lemma} \begin{proof} Consider an ordered triple $(x,y,z)$ of vertices in $G$. If $(x,y,z)$ is a dominator in $H$, then we need to extend $\{x,y\}$ into an independent set $Z$ that is dominated by $z$ and extend $\{z\}$ into a bipartite graph $B$ with bipartition $(X,Y)$ such that $X$ is dominated by $x$ and $Y$ is dominated by $y$. Thus, the candidates for $Z$ and $V(B)$ are $Z'=N(z) \cap (\overline{N[y]} \cap \overline{N[x]})$ and $B'=\{z\} \cup ((N(x) \cup N(y)) \setminus ((N(y)\cap N(z)) \setminus N(x)))$ respectively. All vertices in $V(G) \setminus (Z' \cup B' \cup \{x,y\})$ are in any solution.
Let $k'=k-|V(G) \setminus (Z' \cup B' \cup \{x,y\})|$. Then, $G$ has a $3$-cd-partization solution $S$ of size at most $k$ such that $H=G-S$ is of Type 5 with $(x,y,z)$ as a dominator if and only if there exist integers $k_1$ and $k_2$ with $k_1+k_2 \leq k'$ such that $G[Z']$ has a vertex cover of size at most $k_1$ and $G[B']$ has an odd cycle transversal of size at most $k_2$ not containing $z$ such that the resultant bipartite graph has a bipartition $(X,Y)$ with $X \subseteq N(x)$, $Y \subseteq N(y)$ and $z \in Y$. Step 3 takes $\mathcal{O}^*(1.2738^k)$ time. For step 4, we use Lemmas \ref{constrained-oct1} and \ref{constrained-oct2}. Let $G'$ be the graph obtained from $G[B']$ by deleting $z$ and adding a new vertex $z_{ij}$ for each pair $z_i$, $z_j$ of neighbors of $z$, with each such $z_{ij}$ adjacent to $z_i$ and $z_j$. Now, a minimal odd cycle transversal of $G'$ corresponds to a minimal odd cycle transversal of $G[B']$ not containing $z$. However, we also need the additional constraint that such an odd cycle transversal results in a bipartite graph $B$ which has a bipartition $(X,Y)$ such that $X \subseteq N(x)$ and $Y \subseteq N(y)$. The possible vertices in $B$ are from the set $\{z\} \cup ((N(x) \cup N(y)) \setminus ((N(y) \cap N(z)) \setminus N(x)))$. The following observations on vertices from this set are easy to verify. \begin{itemize} \item $N_x=N(x) \setminus (N(y) \cup N(z))$ cannot be dominated by $y$ and $N_y=N(y) \setminus (N(x) \cup N(z))$ cannot be dominated by $x$. \item $N_{zx}=(N(x) \cap N(z)) \setminus N(y)$ and $N_{xyz}=N(x) \cap N(y) \cap N(z)$ cannot be in a part of the bipartition that contains $z$. \end{itemize} It follows that we need an odd cycle transversal (of size at most $k_2$) of $G[B']$ after deleting which the resultant bipartite graph has a 2-coloring in which any vertex from $P=\{z\} \cup N_y$ receives color 1 and any vertex from $Q=N_x \cup N_{zx} \cup N_{xyz}$ receives color 2.
This is achieved by constructing graph $G''$ from $G'$ by adding an independent set $I_P$ of $k_2+1$ new vertices each of which is adjacent to every vertex in $P$ and an independent set $I_Q$ of $k_2+1$ new vertices each of which is adjacent to every vertex in $Q$. Further, every vertex in $I_P$ is adjacent to every vertex in $I_Q$. Now, $G[B']$ has a minimal odd cycle transversal of size at most $k_2$ not containing $z$ such that the resultant bipartite graph has a bipartition $(X,Y)$ with $X \subseteq N(x)$, $Y \subseteq N(y)$ and $z \in Y$ if and only if $G''$ has a minimal odd cycle transversal of size at most $k_2$. Now, using Theorem \ref{oct-best}, it follows that step 4 takes $\mathcal{O}^*(2.3146^k)$ time and the overall running time is dominated (up to polynomial factors) by this computation. \end{proof} From Lemmas \ref{type1-thm}, \ref{type2-thm}, \ref{type3-thm}, \ref{type4-thm} and \ref{type5-thm}, we have the following result. \begin{theorem} Given a graph $G$ and an integer $k$, there is an algorithm that determines if there is a set $S$ of size at most $k$ whose deletion results in a graph $H$ with $\chi_{cd}(H) \leq 3$ in $\mathcal{O}^*(2.3146^k)$ time. \end{theorem} \section{Introduction} Graph coloring is a classical problem in the fields of combinatorics and algorithm design. A {\em proper coloring} of a graph is an assignment of colors to its vertices such that no two adjacent vertices receive the same color. Equivalently, a proper coloring is a partition of the vertex set into independent sets. In this context, these independent sets are also called {\em color classes}. A proper coloring of a graph $G$ using $q$ colors is called a {\em $q$-coloring} of $G$ and the minimum number of colors required in a proper coloring is called the {\em chromatic number} of $G$. Determining the chromatic number of a graph is a classical \NP-hard problem.
This problem has been widely investigated in the areas of exact algorithms~\cite{BjorklundHK09,GaspersKLT09,GaspersL12,Kratsch08,LAWLER197666,RooijB11}, approximation algorithms~\cite{BlumK97,GuhaK99,Kim10,LenzenW10}, and parameterized algorithms~\cite{AlberBFKN02,AlonG09,Cai03a,DowneyFMR08}. Further, variants of graph coloring such as \textsc{Edge-Chromatic Number}, \textsc{Achromatic Number}, \textsc{$b$-Chromatic Number}, \textsc{Total Chromatic Number}, \textsc{Dominator Coloring} and \textsc{Class Domination Coloring} have also been well studied~\cite{Gera2006,Gera2007,dom-col-3}. In this work, we initiate the study of \textsc{Class Domination Coloring} (also called \textsc{cd-Coloring} or \textsc{Dominated Coloring}) in the realm of parameterized complexity and exact exponential time algorithms. A {\em cd-coloring} is a proper coloring of the graph in which every color class is contained in the neighbourhood of some vertex. See Figure~\ref{fig:cd-col} for an example. The minimum number of colors needed in any cd-coloring of $G$ is called the {\em class domination chromatic number} or {\em cd-chromatic number} of $G$ and is denoted by $\chi_{cd}(G)$. Also, $G$ is said to be $q$-cd-colorable if $\chi_{cd}(G) \leq q$. The \textsc{cd-Coloring} problem is formally defined as follows. \defdecproblem{cd-Coloring}{A graph $G$ and a positive integer $q$.}{Is $\chi_{cd}(G) \leq q$?} \textsc{cd-Coloring} is \NP-complete for $q \geq 4$ and polynomial-time solvable for $q \leq 3$ \cite{caldam16}. A characterization of graphs that admit 3-cd-colorings is also known \cite{caldam16}. \textsc{cd-Coloring} has also been studied on many restricted graph classes such as split graphs, $P_4$-free graphs~\cite{caldam16} and the middle and central graphs of $K_{1,n}$, $C_n$ and $P_n$~\cite{vs10}. See also \cite{abid2018dominated, merouane2015dominated, shalu2017lower, shalu2020complexity, choopani2018dominated}.
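To make the two-part definition concrete (a proper coloring in which, additionally, every color class lies in the open neighborhood of some vertex), the following is a small brute-force sketch of ours, for illustration only; it runs in exponential time and is not one of the algorithms developed in this paper.

```python
from itertools import product

def is_cd_coloring(adj, coloring):
    """adj: dict mapping each vertex to its set of neighbors.
    coloring: dict mapping each vertex to a color.
    Checks both requirements: the coloring is proper, and every
    color class is contained in the open neighborhood of some vertex."""
    for u in adj:
        for v in adj[u]:
            if coloring[u] == coloring[v]:  # an improperly colored edge
                return False
    for c in set(coloring.values()):
        color_class = {v for v in adj if coloring[v] == c}
        # the class must be dominated by (contained in N(w) of) some vertex w
        if not any(color_class <= adj[w] for w in adj):
            return False
    return True

def cd_chromatic_number(adj):
    """Smallest q admitting a q-cd-coloring, found by exhaustive search."""
    vertices = sorted(adj)
    for q in range(1, len(vertices) + 1):
        for colors in product(range(q), repeat=len(vertices)):
            if is_cd_coloring(adj, dict(zip(vertices, colors))):
                return q
    return None  # e.g. a graph with an isolated vertex has no cd-coloring

# The 4-cycle: classes {0,2} and {1,3} are contained in N(1) and N(0).
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(cd_chromatic_number(c4))  # -> 2
```

For $C_4$ the ordinary chromatic number and the cd-chromatic number coincide; they need not in general, since domination is an extra constraint on every class.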
We study this problem in the context of exact exponential-time algorithms and parameterized complexity. The field of exact algorithms typically deals with designing algorithms for \NP-hard problems that are faster than brute-force search while the goal in parameterized complexity is to provide efficient algorithms for \NP-complete problems by switching from the classical view of single-variate measure of the running time to a multi-variate one. In parameterized complexity, we consider instances $(I,k)$ of a parameterized problem $\Pi \subseteq \Sigma^* \times \mathbb{N}$, where $\Sigma$ is a finite alphabet. Algorithms in this area have running times of the form $f(k)|I|^{\mathcal{O}(1)}$, where $k$ is an integer measuring some part of the instance. This integer $k$ is called the {\em parameter}, and a problem that admits such an algorithm is said to be {\em fixed-parameter tractable} (\FPT). In most of the cases, the solution size is taken to be the parameter, which means that this approach results in efficient (polynomial-time) algorithms when the solution is of small size. A \emph{kernelization} algorithm for a parameterized problem $\Pi$ is a polynomial time procedure which takes as input an instance $(x,k)$ of $\Pi$ and returns an instance $(x',k')$ such that $(x,k) \in \Pi$ if and only if $(x',k')\in \Pi$ and $|x'| \leq h(k)$ and $k' \leq g(k)$, for some computable functions $h,g$. The returned instance is called a {\it kernel} and $h(k)+g(k)$ is its {\it size}. We say that $\Pi$ admits a {\em polynomial kernel} if $h$ and $g$ are polynomials. For more background on parameterized complexity, we refer the reader to the monographs \cite{fpt-book,ParameterizedComplexityBook,FlumGroheBook,RN}. 
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{cd-col1.png} \caption{An example of a cd-Coloring of a graph} \label{fig:cd-col} \end{figure} We first observe that parameterizing \textsc{cd-Coloring} by the solution size (which is the number of colors) does not help in designing efficient algorithms as the problem is para-\NP-hard (\NP-hard even when the parameter is a constant). Hence, this problem is unlikely to be \FPT\ when parameterized by the solution size. Then, we describe an $\mathcal{O}(2^n n^4 \log n)$-time algorithm for finding the cd-chromatic number of a graph using the polynomial method. Next, we show that \textsc{cd-Coloring} is \FPT\ when parameterized by the number of colors and the treewidth of the input graph. Further, we show that the problem is \FPT\ when parameterized by the number of colors on chordal graphs. Kaminski and Lozin \cite{lozin2007coloring} showed that determining whether a graph of girth at least $g$ admits a proper coloring with at most $q$ colors is \NP-complete for any fixed $q \ge 3$ and $g \ge 3$. In particular, \textsc{Chromatic Number} is para-\NP-hard for graphs of girth at least 5. In contrast, we show that \textsc{cd-Coloring} is \FPT\ on this graph class and admits a kernel with $\mathcal{O}(q^3)$ vertices. On a graph $G$ that is not $q$-cd-colorable, a natural optimization question is to check if we can delete at most $k$ vertices from $G$ such that the cd-chromatic number of the resultant graph is at most $q$. We define this problem as follows. \defdecproblem{\textsc{cd-Partization}}{Graph $G$, integers $k$ and $q$}{Does there exist $S \subseteq V(G)$, $|S| \leq k$, such that $\chi_{cd}(G-S)\leq q$?} If $q$ is fixed, then we refer to the problem as \textsc{$q$-\textsc{cd-Partization}}.
Once again, from the parameterized complexity point of view, this question is not interesting on general graphs for values of $q$ greater than three, as in those cases an \FPT\ algorithm with the deletion set (solution) size as the parameter would yield a polynomial-time recognition algorithm for $q$-cd-colorable graphs. Hence, the deletion question is interesting only on graphs where the recognition problem is polynomial-time solvable. We show that \textsc{$q$-\textsc{cd-Partization}} is \NP-complete for each $q \geq 2$, and that for $q \in \{2,3\}$, the problem is \FPT\ with respect to the solution size as the parameter. Our algorithms use the known parameterized algorithms for finding a vertex cover and an odd cycle transversal of a graph as subroutines. We also show that \textsc{cd-Partization}\ remains \NP-complete on split graphs and is \FPT\ when parameterized by the number of colors and solution size.
\section{Introduction} Lorentz covariance in local inertial frames is a well established symmetry at the energies of present day experiments. However, its validity at high energies is subject to test. Possible Lorentz invariance violations may arise from dynamical modifications induced by quantum gravity (QG). The effects of such violations in the range well below the Planck energy $(E_{P}\sim 10^{19}\,GeV)$ have been recently the object of intense scrutiny \cite{GAC,GPED,URRU,analysis,JACOBSON}. This issue is closely linked to theoretical and experimental research, based on the Standard Model Extension \cite{KOSTELECKY}% , concerning Lorentz and CPT violations \cite{CPTMEETS}. Heuristic loop QG derivations of such effects \cite{GPED,URRU} make it clear that a better understanding of the corresponding semiclassical limit is required \cite{THIEMANN}. String theory has also provided models for explaining such QG induced corrections \cite{QFMODEL}. Moreover, effective field theory models have been constructed that include higher dimension Lorentz invariance violating (LIV) operators \cite{MP}. Synchrotron radiation arising from the model in \cite{MP} has been extensively analyzed in Ref. \cite{MU}. These effective theories use a reduced number of degrees of freedom to describe the physics at a low energy scale, ignoring the detailed dynamics inherent to Planck energies. In other words, if QG dominates at a scale $E_{QG}$, usually considered of the order of $E_{P}$, a corresponding low energy effective theory can be visualized as an expansion in powers of $\,\;\tilde{\xi}\simeq E_{QG}^{-1}$, truncated at a given finite order. In this way it will be a good description, hopefully simpler than the original one, for energies $E\ll E_{QG}$. This restricted validity relaxes some of the constraints usually required for physical theories, such as renormalizability. 
Stability and causality, perhaps of more essential status than the Lorentz symmetry itself, are assumed to remain valid in the low energy regime \cite{LEHNERT}. Nevertheless, fine tuning problems arise when considering radiative corrections \cite{CPSUV}, which can be circumvented by extending the notion of dimensional regularization \cite{ALFARO}. In fact, one of the possible manifestations of QG at low energy is the appearance of correction terms related to the scale $E_{QG}$ in the standard particle propagation and interaction properties. The most direct interpretation of such corrections, though not the only one \cite{ADDINT}, is in terms of a spontaneous breaking of Lorentz covariance at high energies. If this is so, the effective theory will be covariant under Lorentz transformations between inertial frames (passive or observer transformations), and the observable Lorentz symmetry violations will be associated with rotations and boosts of the fields in a given inertial frame (active or particle transformations). In this case the space-time coordinates at low energy remain commutative. We focus here on a general analysis of QG effects in electrodynamics in these models, where we can introduce the full usual mathematical framework of field theory, especially Fourier transformations. QG may also modify the space-time itself so that the coordinates become noncommutative, as in the case of Doubly Special Relativity models, for example. In this situation, which will not be discussed here, ordering ambiguities preclude a direct use of such transformations. Considering now the electromagnetic field, the models proposed to describe low energy effects of QG can usually be expressed in terms of modified dispersion relations, with a polynomial dependence on energy and momentum. Such modifications include standard Lorentz invariance violations as well as possible extensions of Lorentz covariance \cite{REVIEWS}.
Most of these approaches can be unified in the description \begin{align} & i\mathbf{k}\cdot\mathbf{D}=4\pi\rho,\;\;\;\;\;\;\mathbf{k}\cdot\mathbf{B}=0,\label{GENMAXW1}\\ & \mathbf{k}\times\mathbf{E}-\omega\mathbf{B}=0,\;\;i\mathbf{k}\times\mathbf{H}+i\omega\mathbf{D}=4\pi\mathbf{j},\label{GENMAXW2} \end{align} where the auxiliary fields \begin{equation} D^{i}=\alpha^{ij}E_{j}+\rho^{ij}B_{j},\quad H^{i}=\beta^{ij}B_{j}+\sigma^{ij}E_{j},\label{CRH} \end{equation} are such that the coefficients $\alpha^{ij}$, $\beta^{ij}$, $\rho^{ij}$ and $\sigma^{ij}$ depend on the energy $\omega$ and the momentum $\mathbf{k}$ of the electromagnetic field. These equations correspond to a higher order linear dynamics. Equations (\ref{GENMAXW1}) and (\ref{GENMAXW2}) strongly resemble the usual description of an electromagnetic field in a medium \cite{LANDAU}, where the fields $\mathbf{D}$ and $\mathbf{H}$ are characterized by constitutive relations of the form (\ref{CRH}). In terms of electrodynamics in media, we can interpret the low energy QG corrections as those of a dispersive bianisotropic medium. From a heuristic point of view, as shown below, these effective media are non-local in space and time, which can be interpreted as a footprint of the granularity induced by QG. Strictly speaking, the effective models are characterized by Eqs. (\ref{GENMAXW1}-\ref{GENMAXW2}), but under certain restrictions it is also possible to postulate an action from which they can be derived. Although not essential, this is a useful approach to visualize general features of the dynamics. Let us recall the Lagrangian for the electromagnetic field in a local medium \begin{equation} L=-\frac{1}{4}F_{\mu\nu}\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }F_{\alpha\beta}-4\pi j_{\mu}A^{\mu},\label{L} \end{equation} where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$, and $\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }$ contains the information about the medium.
This structure warrants gauge invariance and hence charge conservation. The dynamics is given by the equations of motion% \begin{equation} \partial_{\mu}H^{\mu\nu}=4\pi j^{\nu},\label{cem}% \end{equation} together with the constitutive relations% \begin{equation} H^{\mu\nu}=\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }% F_{\alpha\beta}.\label{ccr}% \end{equation} Defining the electric and magnetic fields as $F_{0i}=E_{i}\;$and $F_{ij}=-\epsilon_{ijk}B_{k}$ respectively, and the corresponding components of $H^{\mu\nu}$, $H^{0i}=D^{i}$ and $H^{ij}=-\epsilon^{ijk}H^{k}$, the constitutive relations become% \begin{equation} D^{i}=2\chi^{\left[ 0i\right] \left[ 0j\right] }E_{j}-\chi^{\left[ 0i\right] \left[ mn\right] }\epsilon_{mnj}B_{j},\quad H^{i}=\epsilon _{ilk}\chi^{\left[ lk\right] \left[ 0j\right] }E_{j}-\frac{1}{2}% \epsilon_{ilk}\chi^{\left[ lk\right] \left[ mn\right] }\epsilon_{mnj}% B_{j}.\label{CR2}% \end{equation} In our notation the components of any three-dimensional vector $\mathbf{V\;}% $are given by those with subindices $V_{i}$. The equations of motion incorporating QG corrections acquire a similar form, but with one important difference arising from the nonlocal character of the effective medium. The $\chi^{\left[ \mu\nu\right] \left[ \alpha \beta\right] }$ is now a non-local tensor, such that% \begin{equation} L=-\frac{1}{4}\int\;d^{4}\tilde{x}\;F_{\mu\nu}(x^{\sigma})\chi^{\left[ \mu \nu\right] \left[ \alpha\beta\right] }(x^{\sigma}-\tilde{x}^{\sigma })\;F_{\alpha\beta}(\tilde{x}^{\sigma})-4\pi j_{\mu}(x^{\sigma})A^{\mu }(x^{\sigma}),\label{lt}% \end{equation} instead of (\ref{L}). As usual if $F_{\mu\nu}$ and $L$ are real, the reality of $\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(x^{\sigma }-\tilde{x}^{\sigma})$ and subsequently of \begin{equation} H^{\mu\nu}(x)=\int\;d^{4}\tilde{x}\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(x^{\sigma}-\tilde{x}^{\sigma})\;F_{\alpha\beta}% (\tilde{x}^{\sigma}). 
\end{equation} is implied. Writing $\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(x^{\sigma}-\tilde{x}^{\sigma})$ in terms of its Fourier transform \begin{equation} \chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(x^{\sigma}-\tilde{x}^{\sigma})=\int d^{4}k\;e^{-ik\cdot(x-\tilde{x})}\;\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(k^{\sigma}), \end{equation} we can easily demonstrate that (\ref{lt}) can also be written as \begin{equation} L=-\frac{1}{4}F_{\mu\nu}(x^{\sigma})\;\left[ {\hat{\chi}}^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(i\partial_{\sigma})\;F_{\alpha\beta}(x^{\sigma})\right] ,\label{LNL} \end{equation} with $\hat{\chi}^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }$ being a derivative operator. In terms of the Fourier transform, the reality of $\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(x^{\sigma}-\tilde{x}^{\sigma})$ is stated as \begin{equation} \left[ \chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(k^{\sigma})\right] ^{\ast}=\chi^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }(-k^{\sigma}), \end{equation} which holds similarly for the transformed fields $F_{\alpha\beta}(k^{\sigma})$ and $H_{\alpha\beta}(k^{\sigma})$. If $\hat{\chi}^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }$ is symmetric, in the sense that for each set of index values ($\mu$,$\nu$,$\alpha$,$\beta$) (no sum with respect to repeated indices) \begin{equation} \int d^{4}x\;F_{\mu\nu}\left( {\hat{\chi}}^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }F_{\alpha\beta}\right) =\int d^{4}x\;F_{\alpha\beta}\left( {\hat{\chi}}^{\left[ \alpha\beta\right] \left[ \mu\nu\right] }F_{\mu\nu}\right) \label{sym} \end{equation} is satisfied, then it is possible to perform integrations by parts making the equations of motion of the same form as in the usual non-operator case.
Thus the Fourier transforms of the equations of motion and the constitutive relations acquire the structure of (\ref{GENMAXW1}-\ref{GENMAXW2}) and (\ref{CRH}) respectively. In the following we assume that this property is satisfied. When the components of $\hat{\chi}^{\left[ \mu\nu\right] \left[ \alpha\beta\right] }$ do not correspond to a standard Lorentz tensor, this Lagrangian describes a model where the Lorentz symmetry is broken by the medium. We use this approach, where the QG modifications are described phenomenologically by constitutive relations, to discuss the main properties of QG induced effects in electrodynamics. A low energy expansion is developed in terms of the parameter ${\tilde{\xi}}\simeq E_{QG}^{-1}$. Working to order ${\tilde{\xi}}^{2}$ allows us to present $\hat{\chi}^{[\mu\nu][\alpha\beta]}$ in the form \begin{equation} \hat{\chi}^{[\mu\nu][\alpha\beta]}=\chi_{0}^{[\mu\nu][\alpha\beta]}+\chi_{1}^{[\mu\nu]\theta\lbrack\alpha\beta]}\partial_{\theta}+\chi_{2}^{[\mu\nu]\{\theta\psi\}[\alpha\beta]}\partial_{\theta}\partial_{\psi},\label{GENSUS} \end{equation} where the constant coefficients $\chi_{1}^{[\mu\nu]\theta\lbrack\alpha\beta]}$ and $\chi_{2}^{[\mu\nu]\{\theta\psi\}[\alpha\beta]}$ are proportional to $\tilde{\xi}$ and $\tilde{\xi}^{2}$ respectively. They are antisymmetric in the indices inside square brackets and symmetric in the indices inside curly brackets. In this way we are considering a Lagrangian containing up to third derivatives of the basic electromagnetic potential $A_{\mu}$. Moreover, the integration conditions (\ref{sym}) require the following symmetry properties \begin{equation} \chi_{0}^{[\mu\nu][\alpha\beta]}=\chi_{0}^{[\alpha\beta][\mu\nu]},\;\chi_{1}^{[\mu\nu]\theta\lbrack\alpha\beta]}=-\;\chi_{1}^{[\alpha\beta]\theta\lbrack\mu\nu]}\;,\;\;\chi_{2}^{[\mu\nu]\{\theta\psi\}[\alpha\beta]}=\chi_{2}^{[\alpha\beta]\{\theta\psi\}[\mu\nu]}.
\end{equation} Once the coefficients of the constitutive relations have been promoted to derivative operators, we obtain the relations \begin{align} {\hat{\chi}}^{\left[ 0i\right] \left[ 0j\right] } & =\frac{1}{2}\hat{\alpha}^{ij},\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\hat{\chi}}^{\left[ 0i\right] \left[ mn\right] }=-\frac{1}{2}\epsilon_{mnj}\hat{\rho}^{ij},\\ {\hat{\chi}}^{\left[ mn\right] \left[ 0j\right] } & =+\frac{1}{2}\epsilon_{mni}\hat{\sigma}^{ij},\ \ \ \ \ \ \ \ \ {\hat{\chi}}^{\left[ lk\right] \left[ mn\right] }=+\frac{1}{2}\epsilon_{kli}\hat{\beta}^{ij}\epsilon_{jmn}, \end{align} by comparing Eqs. (\ref{CR2}) and (\ref{CRH}). The expansion (\ref{GENSUS}) induces the corresponding form in the coefficients of the constitutive relations \begin{equation} \hat{\alpha}^{ij}=\alpha_{0}^{\left( ij\right) }+\tilde{\xi}\alpha_{1}^{\left( ij\right) \psi}\partial_{\psi}+\tilde{\xi}^{2}\alpha_{2}^{\left( ij\right) \psi\theta}\partial_{\psi}\partial_{\theta},\label{ALPHA} \end{equation} and similarly for $\hat{\rho}^{ij}$, $\hat{\sigma}^{ij}$ and $\hat{\beta}^{ij}$, in an obvious notation. Here $\alpha_{0}^{\left( ij\right) }$, $\alpha_{1}^{\left( ij\right) \psi}$ and $\alpha_{2}^{\left( ij\right) \psi\theta}$ are constant coefficients. To analyze the propagation of the fields and to define the corresponding refraction index, the dependence of the constitutive relations on $\omega$ and $\mathbf{k}$ has to be made explicit. To achieve this we first expand the coefficients of the constitutive relations in space derivatives, maintaining covariance under rotations.
Considering that these models can be understood as perturbative descriptions in terms of the parameter ${\tilde{\xi}}$, we have, up to order ${\tilde{\xi}}^{2}$, \begin{equation} \hat{\alpha}^{ij}=\alpha_{0}(\partial_{t})\eta^{ij}+\alpha_{1}(\partial_{t}){\tilde{\xi}}\epsilon^{ijr}\partial_{r}+\alpha_{2}(\partial_{t}){\tilde{\xi}}^{2}\partial^{i}\partial^{j},\label{alpha} \end{equation} with analogous expansions for $\hat{\beta}^{ij},\hat{\sigma}^{ij}$ and $\hat{\rho}^{ij}$. Here $\alpha_{A}$, $\beta_{A}$, $\sigma_{A}$, and $\rho_{A}$, with $A=0,1,2$, are $SO(3)$ scalar operators. This approach can be generalized to models with preferred spatial directions. The symmetry of ${\hat{\chi}}$ in Eq. (\ref{sym}) implies that the terms in $\hat{\alpha}^{ij}$, $\hat{\rho}^{ij}$ and $\hat{\beta}^{ij}$ with an even number of derivatives are symmetric under $i\leftrightarrow j$, while the terms with an odd number of derivatives are antisymmetric. In the case of $\rho^{ij}$ and $\sigma^{ij}$, Eq. (\ref{sym}) leads to \begin{equation} \rho_{A}=-\sigma_{A}. \end{equation} Furthermore, we can also consistently expand the coefficients $\alpha_{A}$, $\beta_{A}$, $\sigma_{A}$, and $\rho_{A}$ in powers of ${\tilde{\xi}}$, according to \begin{equation} \zeta_{A}\simeq\zeta_{A0}+\zeta_{A1}{\tilde{\xi}}\partial_{t}+\zeta_{A2}{\tilde{\xi}}^{2}\partial_{t}^{2},\;\;\;\zeta_{A}=\{\alpha_{A},\beta_{A},\rho_{A},\sigma_{A}\}, \end{equation} with $\zeta_{A0}$, $\zeta_{A1}$, $\zeta_{A2}$ being constant coefficients.
This expansion leads to the identification of the following coefficients \begin{align} \alpha_{0}^{\left( ij\right) } & =\beta_{0}^{\left( ij\right) }=\eta^{ij},\;\;\rho_{0}^{\left( ij\right) }=0,\;\;\alpha_{1}^{\left( ij\right) 0}=\beta_{1}^{\left( ij\right) 0}=\rho_{1}^{\left( ij\right) 0}=0,\nonumber\\ \alpha_{1}^{\left( ij\right) r} & =\epsilon^{ijr}\alpha_{10},\;\beta_{1}^{\left( ij\right) r}=\epsilon^{ijr}\beta_{10},\;\rho_{1}^{\left( ij\right) r}=\epsilon^{ijr}\rho_{10},\nonumber\\ \alpha_{2}^{\left( ij\right) 00} & =\eta^{ij}\alpha_{02},\;\;\beta_{2}^{\left( ij\right) 00}=\eta^{ij}\beta_{02},\;\rho_{2}^{\left( ij\right) 00}=\eta^{ij}\rho_{02},\;\;\alpha_{2}^{\left( ij\right) 0p}=\beta_{2}^{\left( ij\right) 0p}=\rho_{2}^{\left( ij\right) 0p}=0,\nonumber\\ \alpha_{2}^{\left( ij\right) mn} & =\frac{1}{2}\left( \delta^{im}\delta^{jn}+\delta^{in}\delta^{jm}\right) \alpha_{20},\;\;\beta_{2}^{\left( ij\right) mn}=\frac{1}{2}\left( \delta^{im}\delta^{jn}+\delta^{in}\delta^{jm}\right) \beta_{20},\ \ \rho_{2}^{\left( ij\right) mn}=\frac{1}{2}\left( \delta^{im}\delta^{jn}+\delta^{in}\delta^{jm}\right) \rho_{20}.\label{CONSTCOEF} \end{align} The above identification is consistent with the requirement of covariance under rotations. We have taken $\alpha_{00}=\beta_{00}=1$ and $\rho_{00}=\sigma_{00}=0$ to recover the usual vacuum as the background for ${\tilde{\xi}=0}$. The fact that this theory does not hold at high energies will be encoded by a cutoff $\Omega\ll E_{QG}$. We provide a general description of such modified electrodynamics, including expressions for the equations of motion, the energy-momentum tensor and the Green functions, as well as the corresponding refraction indices, up to second order in $\tilde{\xi}$.
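As a short consistency check on these conventions (ours; it uses the plane-wave replacements $\partial_{t}\rightarrow-i\omega$ and $\partial_{r}\rightarrow ik_{r}$ implicit in the momentum-space Maxwell equations (\ref{GENMAXW1}-\ref{GENMAXW2})), consider the first-order term of the expansion (\ref{alpha}) acting on the electric field:
\begin{equation}
\alpha_{1}{\tilde{\xi}}\,\epsilon^{ijr}\partial_{r}E_{j}\;\rightarrow\;
i\alpha_{1}{\tilde{\xi}}\,\epsilon^{ijr}k_{r}E_{j}
=-i\alpha_{1}{\tilde{\xi}}\left( \mathbf{k}\times\mathbf{E}\right)^{i}
=-i\alpha_{1}{\tilde{\xi}}\,\omega B^{i},
\end{equation}
where Faraday's law $\mathbf{k}\times\mathbf{E}=\omega\mathbf{B}$ was used in the last step. This is how the $\alpha_{1}$ piece of $\hat{\alpha}^{ij}$ ends up multiplying $\mathbf{B}$ rather than $\mathbf{E}$ in the momentum-space constitutive relations of the next section.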
\section{Equations of motion} Equations (\ref{alpha}), together with the corresponding ones for the remaining coefficients of the constitutive relations give% \begin{align} \mathbf{D} & =\left( \alpha_{0}+\alpha_{2}{\tilde{\xi}}^{2}\mathbf{k}% ^{2}\right) \mathbf{E}-\left( \sigma_{0}+i\alpha_{1}{\tilde{\xi}\omega }\right) \mathbf{B}+\left( i\sigma_{1}{\tilde{\xi}}+\omega\alpha_{2}% {\tilde{\xi}}^{2}\right) \left( \mathbf{k\times B}\right) ,\label{D1}\\ \mathbf{H} & =\left( \beta_{0}-i{\omega}\sigma_{1}{\tilde{\xi}}\right) \mathbf{B}-i\beta_{1}{\tilde{\xi}}\left( \mathbf{k\times B}\right) +\sigma_{0}\mathbf{E}.\label{H1}% \end{align} In the approximation to order ${\tilde{\xi}}^{2}$ here considered, we have ${\tilde{\xi}}^{2}k^{2}\simeq{\tilde{\xi}}^{2}\omega^{2}$ and we can write% \begin{align} \mathbf{D} & =d_{1}(\omega)\mathbf{E}+id_{2}(\omega)\mathbf{B}+d_{3}% (\omega){\tilde{\xi}}\left( \mathbf{k\times B}\right) ,\label{D2}\\ \mathbf{H} & =h_{1}(\omega)\mathbf{B}+i{h}_{2}(\omega)\mathbf{E}% +ih_{3}(\omega){\tilde{\xi}}\left( \mathbf{k\times B}\right) ,\label{H2}% \end{align} where the functions $d_{i}(\omega)$ and $h_{i}(\omega)$ depend only on $\omega$ and admit a series expansion in powers of ${\tilde{\xi}}\omega$, characterizing each specific model. From Eqs. (\ref{GENMAXW1}-\ref{GENMAXW2}) we get the equations for $\mathbf{E}$ and $\mathbf{B}$% \begin{align} id_{1}\left( \mathbf{k}\cdot\mathbf{E}\right) & =4\pi\rho\left( \omega,\mathbf{k}\right) ,\label{INHOM1}\\ i\omega d_{1}\mathbf{E}+\left( h_{3}k^{2}-g(\omega)\right) {\tilde{\xi}% }\mathbf{B}+\left( \omega d_{3}{\tilde{\xi}+}h_{1}\right) \left( i\mathbf{k}\times\mathbf{B}\right) & =4\pi\mathbf{j}\left( \omega ,\mathbf{k}\right) ,\label{INHOM2}% \end{align} where we denote% \begin{equation} \left( d_{2}+h_{2}\right) \omega=g(\omega){\tilde{\xi}}.\label{INHOM0}% \end{equation} The expressions (\ref{D1}-\ref{H1}) indeed indicate that the above combination\ is of order ${\tilde{\xi}}$. 
We thus see that in fact there are only three independent functions of $\omega$ and $k$ which determine the dynamics% \begin{equation} P=d_{1},\quad Q=h_{1}+\omega d_{3}{\tilde{\xi}},\quad R=\left( h_{3}% k^{2}-g(\omega)\right) {\tilde{\xi}}.\label{R}% \end{equation} Using the homogeneous equation $\omega\mathbf{B}=\mathbf{k}\times\mathbf{E}$ that yields $\omega\left( \mathbf{k}\times\mathbf{B}\right) =\left( \mathbf{k\cdot E}\right) \mathbf{k-}k^{2}\mathbf{E}$, and charge conservation $\omega\rho-\mathbf{k\cdot J}=0$, we decouple the equations for the fields $\mathbf{E}$ and $\mathbf{B}$. Finally, we introduce the standard potentials $\Phi$ and $\mathbf{A}$ \begin{equation} \mathbf{B}=i\mathbf{k}\times\mathbf{A},\qquad\mathbf{E}=i\omega\mathbf{A-}% i\mathbf{k}\Phi, \end{equation} in the radiation gauge, $\mathbf{k}\cdot\mathbf{A=}0$, in which case we have \begin{align} \Phi & =4\pi\left( k^{2}P\right) ^{-1}\rho,\label{PHIRG}\\ \left( k^{2}Q-\omega^{2}P\right) \mathbf{A}+iR\left( \mathbf{k}% \times\mathbf{A}\right) & =4\pi\left[ \mathbf{j}-\left( \mathbf{j}% \cdot\mathbf{\hat{k}}\right) \mathbf{\hat{k}}\right] =4\pi\mathbf{j}% _{T},\label{EQARG}% \end{align} from Eqs. (\ref{INHOM1}-\ref{INHOM2}). The presence of birefringence depends on the parity-violating term proportional to $R$. It is clear that a diagonalization is obtained in a circular polarization basis.
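This diagonalization can be made explicit in a small numerical sketch (parameter values are arbitrary): for $\mathbf{k}$ along $z$, the circular basis vectors $(\hat{x}\pm i\hat{y})/\sqrt{2}$ are eigenvectors of $\mathbf{k}\times{}\cdot$, so the operator acting on $\mathbf{A}$ in Eq. (\ref{EQARG}) becomes diagonal with eigenvalues $k^{2}Q-\omega^{2}P+\lambda kR$:

```python
import numpy as np

# Circular polarization basis diagonalizes (k^2 Q - w^2 P) I + i R (k x .),
# Eq. (EQARG). Arbitrary illustrative values of k, Q, P, R, w.
k, Q, P, R_, w = 1.7, 0.9, 1.1, 0.3, 1.0
khat = np.array([0.0, 0.0, 1.0])

def cross_op(v):                       # matrix form of (v x .)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

M = (k**2*Q - w**2*P)*np.eye(3) + 1j*R_*k*cross_op(khat)
for lam, e in [(+1, np.array([1, 1j, 0])/np.sqrt(2)),
               (-1, np.array([1, -1j, 0])/np.sqrt(2))]:
    assert np.allclose(cross_op(khat) @ e, -1j*lam*e)   # khat x A = -i lambda A
    assert np.allclose(M @ e, (k**2*Q - w**2*P + lam*k*R_)*e)
print("eigenvalues: k^2 Q - w^2 P + lambda k R")
```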
Decomposing the vector potential and the current in such a basis% \begin{equation} \mathbf{A}=\mathbf{A}^{+}+\mathbf{A}^{-},\;\;\;\;\mathbf{j}_{T}=\mathbf{j}% _{T}^{+}+\mathbf{j}_{T}^{-}, \end{equation} and recalling the basic properties $\mathbf{\hat{k}}\times\mathbf{A}% ^{+}=-i\mathbf{A}^{+}$, $\mathbf{\hat{k}}\times\mathbf{A}^{-}=i\mathbf{A}^{-}% $, we separate (\ref{EQARG}) into the decoupled equations \cite{PDELC} \begin{equation} \left[ k^{2}Q-\omega^{2}P+\lambda k\,R\right] \mathbf{A}^{\lambda}% =4\pi\mathbf{j}_{T}^{\lambda},\;\;\;\;\lambda=\pm1.\label{af}% \end{equation} In terms of the basic functions introduced in the constitutive relations (\ref{D2}-\ref{H2}), the factor in (\ref{af}) is rewritten as \begin{equation} k^{2}Q-\omega^{2}P+\lambda k\,R=\lambda h_{3}k^{3}{\tilde{\xi}+}\left( h_{1}+d_{3}{\omega\tilde{\xi}}\right) k^{2}-\lambda g{\tilde{\xi}}% k-d_{1}\omega^{2}. \end{equation} This is the key expression to obtain the Green functions and the refraction indices. \section{Generalized Energy-Momentum tensor} Any application of this modified electrodynamics related to radiation and its properties requires the construction of the corresponding energy-momentum tensor. This section is devoted to such a construction. The theories under consideration are of higher order in the field derivatives, and thus call for an extension of the standard Noether theorem. The manipulations are highly simplified by proceeding in a covariant notation, the point being that the tensorial operator $\hat{\chi}^{[\mu\nu][\alpha\beta]}$ is constructed in a given reference frame and satisfies only passive Lorentz covariance. We assume that active Lorentz invariance is violated while active translation invariance is maintained, so that there is an energy-momentum tensor given by the Noether theorem.
Before constructing this tensor in our particular case we recall the general formalism for a Lagrangian including up to three derivatives in the fields. This can also be useful when consistently including dimension five and six operators in the matter field coupled to the above electrodynamics. We start from an action of the form% \begin{equation} S=\int d^{4}x\;L(\Phi_{A},\;\Phi_{A,\mu},\;\partial_{\nu}\Phi_{A,\mu },\;\partial_{\nu}\partial_{\rho}\Phi_{A,\mu}),\label{ACTION3}% \end{equation} where we consider $\Phi_{A}$,$\;\Phi_{A,\mu}$,$\;\partial_{\nu}\Phi_{A,\mu}% $,$\;\partial_{\nu}\partial_{\rho}\Phi_{A,\mu}$, to be independent fields, i.e. at this level we take for example $\partial_{\nu}\Phi_{A,\mu}\neq\partial_{\mu}\Phi_{A,\nu}$. Applying the standard action principle one derives the Euler-Lagrange (EL)\ equations% \begin{equation} 0=\frac{\delta L}{\delta\Phi_{A}}-\partial_{\mu}\left( \frac{\delta L}% {\delta\Phi_{A,\mu}}\right) +\partial_{\mu}\partial_{\nu}\left( \frac{\delta L}{\delta\left( \partial_{\nu}\Phi_{A,\mu}\right) }\right) -\partial_{\mu }\partial_{\nu}\partial_{\rho}\left( \frac{\delta L}{\delta\left( \partial_{\nu}\partial_{\rho}\Phi_{A,\mu}\right) }\right) .\label{GENEL}% \end{equation} Assuming that translations generated by $x^{\prime\mu}=x^{\mu}+a^{\mu}$\ are a symmetry of the action (\ref{ACTION3}), Noether's theorem leads to the energy-momentum tensor% \begin{align} T_{\;\;\sigma}^{\tau} & =-\delta_{\sigma}^{\tau}\;L+\Phi_{A,\sigma}\left( \frac{\partial L}{\partial\Phi_{A,\tau}}-\partial_{\nu}\left( \frac{\partial L}{\partial\left( \partial_{\nu}\Phi_{A,\tau}\right) }\right) +\partial_{\rho}\partial_{\nu}\left( \frac{\partial L}{\partial\left( \partial_{\rho}\partial_{\nu}\Phi_{A,\tau}\right) }\right) \right) \nonumber\\ & +\left( \partial_{\sigma}\Phi_{A,\mu}\right) \left( \frac{\partial L}{\partial\left( \partial_{\tau}\Phi_{A,\mu}\right) }-\partial_{\rho }\left( \frac{\partial L}{\partial\left(
\partial_{\rho}\partial_{\tau}% \Phi_{A,\mu}\right) }\right) \right) +\left( \partial_{\sigma}% \partial_{\nu}\Phi_{A,\mu}\right) \left( \frac{\partial L}{\partial\left( \partial_{\tau}\partial_{\nu}\Phi_{A,\mu}\right) }\right) ,\label{EMTENSOR}% \end{align} whose conservation $\partial_{\tau}T_{\;\;\sigma}^{\tau}=0\;$can be directly verified via the equations of motion (\ref{GENEL}). Next we apply the above general results to our Lagrangian (\ref{LNL}) together with the realization (\ref{GENSUS}), where $\Phi_{A}=A_{\alpha}$. The corresponding derivatives are% \begin{align} \frac{\delta L}{\delta A_{\alpha}} & =0,\;\;\;\frac{\delta L}{\delta A_{\alpha,\tau}}=-F_{\mu\nu}\chi_{0}^{[\mu\nu][\tau\alpha]}-\frac{1}{2}% \chi_{1}^{[\tau\alpha]\theta[\gamma\beta]}\partial_{\theta}F_{\gamma\beta }-\frac{1}{2}\chi_{2}^{[\tau\alpha]\{\sigma\theta\}[\gamma\beta]}\partial_{\sigma }\partial_{\theta}F_{\gamma\beta},\label{DER01}\\ \frac{\delta L}{\delta\left( \partial_{\nu}A_{\alpha,\tau}\right) } & =-\frac{1}{2}F_{\theta\sigma}\chi_{1}^{[\theta\sigma]\nu[\tau\alpha] },\;\;\;\;\;\frac{\delta L}{\delta\left( \partial_{\rho}\partial_{\nu }A_{\alpha,\tau}\right) }=-\frac{1}{2}F_{\theta\sigma}\chi_{2}^{[\theta \sigma]\{\rho\nu\}[\tau\alpha]}.\label{DER23}% \end{align} The equations of motion outside the sources can be written as \begin{equation} 0=\partial_{\tau}H^{\tau\alpha},\label{EQMOUT}% \end{equation} where% \begin{equation} H^{\tau\alpha}=-\left( \frac{\delta L}{\delta A_{\alpha,\tau}}\right) +\partial_{\nu}\left( \frac{\delta L}{\delta\partial_{\nu}A_{\alpha,\tau}% }\right) -\partial_{\nu}\partial_{\rho}\left( \frac{\delta L}{\delta \partial_{\nu}\partial_{\rho}A_{\alpha,\tau}}\right) =\hat{\chi}^{[\tau \alpha][\theta\psi]}F_{\theta\psi}.\label{DEFH}% \end{equation} Now let us consider the energy-momentum tensor (\ref{EMTENSOR}). Observe that the second term in this equation is precisely $-A_{\alpha,\sigma }H^{\tau\alpha}$, which is not directly gauge invariant.
It can be rewritten as% \begin{equation} -A_{\alpha,\sigma}H^{\tau\alpha}=-F_{\sigma\alpha}H^{\tau\alpha}% -A_{\sigma,\alpha}H^{\tau\alpha}=-F_{\sigma\alpha}H^{\tau\alpha}% -\partial_{\alpha}\left( A_{\sigma}H^{\tau\alpha}\right) ,\label{GINV}% \end{equation} by using the equations of motion. The last term is identically conserved and does not contribute to the corresponding charges. The remaining contributions are% \begin{align} \frac{\partial L}{\partial\left( \partial_{\tau}A_{\alpha,\mu}\right) }-\partial_{\rho}\left( \frac{\partial L}{\partial\left( \partial_{\rho }\partial_{\tau}A_{\alpha,\mu}\right) }\right) & =-\frac{1}{2}\chi _{1}^{[\theta\psi]\tau[\mu\alpha]}F_{\theta\psi}+\frac{1}{2}\chi_{2}% ^{[\theta\psi]\{\rho\tau\}[\mu\alpha]}\partial_{\rho}F_{\theta\psi},\nonumber\\ \left( \partial_{\sigma}\partial_{\nu}\Phi_{A,\mu}\right) \left( \frac{\partial L}{\partial\left( \partial_{\tau}\partial_{\nu}\Phi_{A,\mu }\right) }\right) & =-\frac{1}{4}\left( \partial_{\sigma}\partial_{\nu }F_{\mu\alpha}\right) F_{\theta\psi}\chi_{2}^{[\theta\psi]\{\nu\tau\}[\mu\alpha ]}, \end{align} which are naturally gauge invariant. The final gauge-invariant, non-symmetric energy-momentum tensor is \begin{align} T_{\sigma}^{\tau} & =-\delta_{\sigma}^{\tau}\;L-F_{\sigma\alpha}H^{\tau \alpha}-\frac{1}{4}\left( \partial_{\sigma}F_{\mu\alpha}\right) \chi _{1}^{[\theta\psi]\tau[\mu\alpha]}F_{\theta\psi}\nonumber\\ & +\frac{1}{4}\left( \partial_{\sigma}F_{\mu\alpha}\right) \chi_{2}% ^{[\theta\psi]\{\rho\tau\}[\mu\alpha]}\left( \partial_{\rho}F_{\theta\psi }\right) -\frac{1}{4}F_{\theta\psi}\chi_{2}^{[\theta\psi]\{\nu\tau\}[\mu\alpha] }\left( \partial_{\sigma}\partial_{\nu}F_{\mu\alpha}\right) .\label{FINEMT}% \end{align} A direct but rather long calculation allows one to verify the conservation law $\partial_{\tau}T_{\sigma}^{\tau}=0$ via the equations of motion (\ref{EQMOUT}).
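The logic of the generalized Noether formula (\ref{EMTENSOR}) can be illustrated on a much simpler toy model. The sketch below (an illustration only, not the paper's electrodynamics) applies the same formula to a higher-derivative scalar Lagrangian $L=\frac{1}{2}\phi_{t}^{2}-\frac{1}{2}\phi_{x}^{2}-\frac{g}{2}\phi_{xx}^{2}$ in $1+1$ dimensions, for which it gives $T^{t}{}_{t}=-L+\phi_{t}^{2}$ and $T^{x}{}_{t}=\phi_{t}(-\phi_{x}+g\phi_{xxx})-g\phi_{tx}\phi_{xx}$, and verifies on-shell conservation on an exact plane-wave solution:

```python
import sympy as sp

# Toy check of the generalized Noether formula for a higher-derivative model.
t, x, k, g = sp.symbols('t x k g', positive=True)
w = sp.sqrt(k**2 + g*k**4)          # dispersion relation of phi_tt = phi_xx - g phi_xxxx
phi = sp.cos(k*x - w*t)             # exact plane-wave solution of the toy EOM

L = (sp.diff(phi, t)**2 - sp.diff(phi, x)**2 - g*sp.diff(phi, x, 2)**2)/2
T_tt = -L + sp.diff(phi, t)**2                                  # energy density
T_xt = sp.diff(phi, t)*(-sp.diff(phi, x) + g*sp.diff(phi, x, 3)) \
       - g*sp.diff(phi, t, x)*sp.diff(phi, x, 2)                # energy flux

assert sp.simplify(sp.diff(T_tt, t) + sp.diff(T_xt, x)) == 0
print("energy conservation holds on-shell")
```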
To express the energy-momentum components in terms of the fields $\mathbf{E}$ and $\mathbf{B}$ we use the following $\left( 3+1\right) $ splitting% \begin{align} W_{\mu\nu}\chi_{1}^{\left[ \mu\nu\right] \tau\left[ \alpha\beta\right] }U_{\alpha\beta} & =4W_{0i}\chi^{\left[ 0i\right] \tau\left[ 0m\right] }_1U_{0m}+2\chi^{\left[ 0s\right] \tau\left[mn\right]}_1\left[ W_{0s}% U_{mn}-W_{mn}U_{0s}\right] +W_{ij}\chi^{\left[ ij\right] \tau\left[ mn\right]}_1 U_{mn},\nonumber\\ W_{\mu\nu}\chi_{2}^{\left[ \mu\nu\right] \{\tau\rho\}\left[ \alpha \beta\right] }U_{\alpha\beta} & =4W_{0i}\chi^{\left[ 0i\right]\{\tau \rho\}\left[0m\right]}_2 U_{0m}+2\chi^{\left[ 0s\right]\{\tau\rho\}\left[ mn\right]}_2\left[ W_{0s}U_{mn}+W_{mn}U_{0s}\right] +W_{ij}\chi^{\left[ ij\right] \{\tau\rho\}\left[mn\right]}_2 U_{mn}, \end{align} for antisymmetric fields $W_{\mu\nu},\, U_{\alpha\beta}.$ We also recall the relation% \begin{equation} -L-F_{0\alpha}H^{0\alpha}=\frac{1}{2}\left( \mathbf{E\cdot D+B\cdot H}\right) , \end{equation} which is useful in calculating the energy density.
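The first splitting above can be checked numerically. The sketch below (indices treated as plain labels, metric signs ignored, the spectator index $\tau$ suppressed) builds a random tensor antisymmetric in each pair and odd under pair exchange, which is the symmetry underlying the relative minus sign; the $\chi_{2}$ case works the same way with the plus sign:

```python
import numpy as np

# Numerical check of the (3+1) splitting for a chi_1-type contraction.
rng = np.random.default_rng(1)

a = rng.normal(size=(4, 4, 4, 4))
a = 0.5*(a - a.transpose(1, 0, 2, 3))      # antisymmetry in [mu nu]
a = 0.5*(a - a.transpose(0, 1, 3, 2))      # antisymmetry in [alpha beta]
chi = 0.5*(a - a.transpose(2, 3, 0, 1))    # odd under exchange of the two pairs

W = rng.normal(size=(4, 4)); W = W - W.T   # antisymmetric fields W, U
U = rng.normal(size=(4, 4)); U = U - U.T

full = np.einsum('mn,mnab,ab->', W, chi, U)
s = slice(1, 4)                            # spatial indices 1..3
split = (4*np.einsum('i,im,m->', W[0, s], chi[0, s, 0, s], U[0, s])
         + 2*np.einsum('i,imn,mn->', W[0, s], chi[0, s, s, s], U[s, s])
         - 2*np.einsum('mn,imn,i->', W[s, s], chi[0, s, s, s], U[0, s])
         + np.einsum('ij,ijmn,mn->', W[s, s], chi[s, s, s, s], U[s, s]))
assert np.isclose(full, split)
print("(3+1) splitting verified")
```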
Let us illustrate the above construction by writing the energy density $u=T_{0}^{0}$ and the Poynting vector $S_{i}=T_{0}^{i}$ in terms of the fields $\mathbf{E}$ and $\mathbf{B}$ to first order in $\tilde{\xi}$:% \begin{align} \mathbf{S} & =\mathbf{E\times B+}\frac{1}{2}\tilde{\xi}\alpha_{10}% \mathbf{E}\times\partial_{t}\mathbf{E}+\tilde{\xi}\beta_{10}\left[ \frac {1}{2}\left( \partial_{t}\mathbf{B}\right) \mathbf{\times B}-\mathbf{E\times }\left( \mathbf{\nabla}\times\mathbf{B}\right) \right] \mathbf{+}% \sigma_{10}\tilde{\xi}\frac{1}{2}\partial_{t}\left[ \mathbf{E}\times \mathbf{B}\right] ,\label{POYNTXI}\\ u & =\frac{1}{2}(\mathbf{E}^{2}+\mathbf{B}^{2})-\frac{1}{2}\beta_{10}% \tilde{\xi}\mathbf{B}\cdot\mathbf{\nabla}\times\mathbf{B}-\frac{1}{2}% \alpha_{10}\tilde{\xi}\mathbf{E}\cdot\mathbf{\nabla}\times\mathbf{E}-\frac {1}{2}\sigma_{10}\tilde{\xi}\mathbf{\nabla}\cdot\left[ \mathbf{E}% \times\mathbf{B}\right] .\label{EDENSXI}% \end{align} The terms proportional to $\sigma_{10}$ correspond to the freedom of modifying the energy-momentum tensor as% \begin{equation} \tilde{T}_{\sigma}^{\tau}=T_{\sigma}^{\tau}+\partial_{\rho}V_{\sigma}% ^{[\tau\rho]},\;\;V_{\sigma}^{[\tau\rho]}=-V_{\sigma}^{[\rho\tau]},\label{emf}% \end{equation} which was previously used in Eq.\ (\ref{GINV}). In the $(3+1)$ partition the above means% \begin{equation} \tilde{u}=u-\mathbf{\nabla\cdot Q,\;\;\;\tilde{S}=S}+\partial_{t}% \mathbf{Q}+\mathbf{\nabla\times W,}% \end{equation} with $Q_{i}=V_{0}^{[i0]}$ and $W_{i}=\frac{1}{2}\epsilon_{ijk}V_{0}^{[jk]}% $. The last terms in Eqs. (\ref{POYNTXI}) and (\ref{EDENSXI}) correspond to the choice \begin{equation} \mathbf{Q}=\frac{1}{2}\sigma_{10}\tilde{\xi}\left( \mathbf{E}\times \mathbf{B}\right) ,\ \ \ \ \ \ \mathbf{W}=\mathbf{0}.
\end{equation} Thus, the contributions proportional to $\sigma_{10}$ can be eliminated and one recovers the corresponding expressions that can be obtained directly from Maxwell's equations. Using the fields $\mathbf{E}$, $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{H}$, together with the equations of motion (\ref{cem}-\ref{ccr}) in vacuum and the freedom given by the energy-momentum tensor transformation (\ref{emf}), these quantities can also be written to first order in $\tilde{\xi}$ in a much more compact form as% \begin{align} \mathbf{S} & =\frac{1}{2}\left( \mathbf{E}\times\mathbf{B}+\mathbf{D}% \times\mathbf{H}\right) -\frac{1}{2}\left( \alpha_{10}-\beta_{10}\right) {\tilde{\xi}}\left( \mathbf{B\times}\left( \mathbf{\nabla}\times \mathbf{E}\right) -\mathbf{E\times}\left( \mathbf{\nabla}\times \mathbf{B}\right) \right) ,\\ u & =\frac{1}{2}(\mathbf{E\cdot D+B\cdot H}). \end{align} \section{Green functions} The exact retarded Green function for the potential $\mathbf{A}$ in the circular polarization basis is \cite{PDELC,PLBTOBE}% \begin{equation} G_{ij}^{ret}(\omega,\mathbf{R})=\int\frac{d^{3}k}{\left( 2\pi\right) ^{3}% }e^{i\mathbf{k}\cdot\mathbf{r}}\,\tilde{G}_{ij}^{ret}(\omega,\mathbf{k}% )=\frac{1}{2}\int\frac{d^{3}k}{\left( 2\pi\right) ^{3}}e^{i\mathbf{k}% \cdot\mathbf{R}}\,\sum_{\lambda}G^{\lambda}(\omega,\mathbf{k})\left( \delta_{ik}-\frac{{k}_{i}{k}_{k}}{k^{2}}+i\lambda\epsilon_{irk}\frac{{k}_{r}% }{k}\right) ,\label{gg}% \end{equation} where $\mathbf{R}=\mathbf{r}-\mathbf{r}^{\prime},$ ${\hat{k}}_{i}% =k_{i}/|\mathbf{k}|$, $k=|\mathbf{k}|$, and $G^{\lambda}(\omega,\mathbf{k})$ is obtained from Eq. (\ref{af}),% \begin{equation} G^{\lambda}(\omega,\mathbf{k})=\frac{1}{k^{2}Q-\omega^{2}P+\lambda k\,R},\;\;\;\;\lambda=\pm1.\label{gl}% \end{equation} Taking the analytic continuation $\omega\rightarrow\omega+i\epsilon$ to obtain the causal Green functions, only the poles in the upper half plane of $k$ make a contribution to the
integration. By successive rescalings, the denominator in Eq. (\ref{gl}) can be written in a more convenient form% \begin{equation} Qk^{2}-P\omega^{2}+\lambda kR=-n_{0}^{2}\omega^{2}Qa(M^{\lambda}% -M_{0})(M^{\lambda}-M_{+})(M^{\lambda}-M_{-}),\label{gl1}% \end{equation} where we introduce the notation \begin{align} Q & =h_{1}+d_{3}{\omega\tilde{\xi}},\qquad a=h_{3}n_{0}\chi , \qquad c=\frac{g}{\omega^{2}n_{0}}\chi, \\ \chi & =\frac{\tilde{\xi}\omega}{h_{1}+d_{3}{\tilde{\xi}\omega}% },\qquad M^{\lambda}=\frac{\lambda k}{n_{0}\omega},\qquad n_{0}^{2}=\frac{d_{1}% }{h_{1}+\omega\tilde{\xi}d_{3}}.\label{xin}% \end{align} To study the modifications to the dynamics it is enough to expand each root in powers of the small parameter $\chi$% \begin{equation} M_{0}\simeq\frac{1}{\tilde{\beta}_{1}}\chi^{-1},\ \ \ M_{\pm}\simeq\pm\left[ 1+\frac{1}{2}\left( \tilde{\beta}_{1}-\tilde{\alpha}_{1}\right) \left( \lambda\chi+\frac{1}{4}\left( 5\tilde{\beta}_{1}-\,\tilde{\alpha}_{1}\right) \chi^{2}\right) \right] ,\label{P2}% \end{equation} where $\tilde{\beta}_{1}=h_{3}n_{0}$ and $\tilde{\alpha}_{1}=g/(\omega ^{2}n_{0})$. Since the parameter $\lambda$ and the momentum $k$ appear only in the combination $\lambda k$, we have the symmetry property% \begin{equation} G^{\lambda}(\omega,k)=G^{-\lambda}(\omega,-k),\label{simprop}% \end{equation} which will be useful in the final calculation of the Green functions $G^{\lambda}(\omega,\mathbf{R})$. In the radiation approximation, the integral in (\ref{gg})\ produces% \begin{equation} G_{ik}^{ret}(\omega,\mathbf{R})=-\frac{i}{\left( 2\pi\right) ^{2}}\frac {1}{R}\sum_{\lambda}\frac{1}{2}\left( \delta_{ik}-n_{i}n_{k}+i\lambda \epsilon_{ipk}n_{p}\right) \int_{-\Omega}^{\Omega}kdke^{ikR}\,G^{\lambda }(\omega,k), \end{equation} where $n_{i}=x_{i}/r$ and $R=|\mathbf{r}-\mathbf{r}^{\prime}|.$ From now on we set $R=r$\ in all denominators and understand that $R=r-n\cdot r^{\prime}\;$in the exponentials. 
We also neglect terms of order higher than $1/r$. The cutoff $\Omega\ll E_{QG}$ defines the low-energy domain of the effective theory. In this way we identify $G^{\lambda}(\omega,\mathbf{R})$ as \begin{equation} G^{\lambda}(\omega,\mathbf{R})=-\frac{i}{\left( 2\pi\right) ^{2}}\frac{1}% {r}\int_{-\Omega}^{\Omega}kdke^{ikR}\,G^{\lambda}(\omega,k).\label{GLAMBDA}% \end{equation} The factor $e^{ikR}$ forces us to close the integration contour in the upper half complex plane, choosing for example a semicircle with radius $k=\Omega$, picking up the poles in this region. Our description is valid only for momenta $k\ll\Omega$. According to Eqs. (\ref{P2}), the pole at $M_{0}^{\lambda}$ corresponds to the momentum value$\;|k_{0}|=\left| Qh_{3}^{-1}\right| \,\tilde{\xi}^{-1}.$ In the present approximation the contribution to the integral of this pole, together with the one of the semicircle in the upper half complex plane, can be neglected. The two remaining poles, which are the ones that contribute to the integral, are located at small displacements with respect to $|k_{\pm}|=n_{0}\omega\ll E_{QG}$. In this way we take% \begin{equation} G^{\lambda}(\omega,k)=\frac{1}{n_{0}^{2}\omega^{2}QaM_{0}}\frac{1}{\left( M^{\lambda}-M_{+}\right) \left( M^{\lambda}-M_{-}\right) }. \end{equation} From the leading order expressions in (\ref{P2})\ we conclude that the pole that contributes in the case $\lambda=+1$ is $\left( \omega+i\epsilon\right) n_{0}M_{+}$, while for $\lambda=-1$ it is $-\left( \omega+i\epsilon\right) n_{0}M_{-}$. The resulting integral is% \begin{equation} G^{\lambda}(\omega,\mathbf{R})=\frac{1}{4\pi Q}\frac{1}{r}\frac{2n_{\lambda}% }{n_{-}+n_{+}}e^{i\omega n_{\lambda}R},\label{glor}% \end{equation} where we have considered that the dominant term in $M_{0}$ yields $aM_{0}=1$.
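The pole structure just described can be illustrated numerically. Up to overall normalization and sign conventions, the rescaled dispersion factor of Eq. (\ref{gl1}) is a cubic in $M$ of the schematic form $aM^{3}+M^{2}-cM-1$ with $a,c=O(\chi)$; the sketch below (arbitrary illustrative values) exhibits one root pushed out to $|M|\sim\chi^{-1}$ and two roots within $O(\chi)$ of $\pm1$:

```python
import numpy as np

# Root structure of the dispersion cubic, schematic form a M^3 + M^2 - c M - 1.
chi = 1e-3
a, c = 0.7*chi, 0.4*chi            # stand-ins for O(chi) coefficients
roots = np.roots([a, 1.0, -c, -1.0])

big = max(roots, key=abs)          # the pole at |M| ~ 1/chi (dropped in GLAMBDA)
near = sorted(r.real for r in roots if abs(r) < 2)   # the two propagating poles
assert abs(big) > 0.1/chi
assert abs(near[0] + 1) < 5*chi and abs(near[1] - 1) < 5*chi
print(big, near)
```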
The refraction indices are% \begin{equation} n_{\lambda}(\omega)=\lambda n_{0}M_{\lambda}.\label{REFIND}% \end{equation} The factor $\lambda$ compensates the overall minus sign of $M_{-}$, whose expansion starts at $-1$. Up to the order considered, the refraction indices are given by the expressions \begin{equation} n_{\lambda}(\omega)=n_{0}\left[ 1+\lambda\left( \tilde{\beta}_{1}% -\tilde{\alpha}_{1}\right) \frac{\chi}{2}+\left( \tilde{\beta}_{1}% -\tilde{\alpha}_{1}\right) \left( 5\tilde{\beta}_{1}-\tilde{\alpha}% _{1}\right) \frac{\chi^{2}}{8}\right] .\label{ri}% \end{equation} From Eq. (\ref{ri}), and using Eqs. (\ref{ALPHA}), (\ref{CONSTCOEF}), (\ref{INHOM1}-\ref{INHOM0}), (\ref{xin}) and (\ref{P2}), we can obtain the second order expansion for $n_{\lambda}$% \begin{equation} n_{\lambda}(\omega)\simeq1+\lambda\left( \tilde{\xi}\omega\right) n_{1}+\left( \tilde{\xi}\omega\right) ^{2}n_{2}, \end{equation} with the real coefficients $n_{1}$ and $n_{2}$ given by \begin{equation} n_{1}=\frac{1}{2}\left( \alpha_{10}-\beta_{10}\right) ,\ \ \ \ \ \ n_{2}% =\frac{1}{8}\left[ \left( \alpha_{10}-\beta_{10}\right) \left( \alpha _{10}-5\beta_{10}\right) +4\left( \beta_{02}-\alpha_{02}\right) \right] . \end{equation} According to this, the phase velocity is not $1$. Due to the dispersive character of the background it becomes% \[ v_{ph}(\omega)\simeq1-\lambda\left( \tilde{\xi}\omega\right) n_{1}+\left( \tilde{\xi}\omega\right) ^{2}\left( \left( n_{1}\right) ^{2}-n_{2}\right) .
\] Thus, the Green function in terms of space-time coordinates is% \begin{align} G^{\lambda}(\tau,\mathbf{R}) & =\frac{1}{4\pi r}\int_{-\Omega}^{\Omega }d\omega\frac{2n_{\lambda}}{Q\left( n_{-}+n_{+}\right) }e^{i\omega n_{\lambda}R}e^{-i\omega\tau}\nonumber\\ & =\frac{1}{4\pi r}\int_{-\Omega}^{\Omega}d\omega\left[ 1+\frac{\lambda}% {2}\left( \tilde{\xi}\omega\right) \left( \alpha_{10}-\beta_{10}\right) -\left( \tilde{\xi}\omega\right) ^{2}\left( \alpha_{20}-\beta_{02}\right) \right] e^{i\omega\left[ 1+\lambda\left( \tilde{\xi}\omega\right) n_{1}+\left( \tilde{\xi}\omega\right) ^{2}n_{2}\right] R}e^{-i\omega\tau},% \end{align} where $\tau=t-t^{\prime}$. If $\Omega\rightarrow\infty$, the choice of the poles guarantees the causal behavior of the Green function. The frequency cutoff, however, could introduce some violation of causality. To investigate this possibility, we compute the Fourier transform of the Green function to obtain its time-dependent expression, by expanding the integrand in powers of ${\tilde \xi}$% \begin{align} G^{\lambda}(\tau,\mathbf{R}) & \simeq\frac{1}{4\pi r}\int_{-\Omega}^{\Omega }d\omega\left\{ 1+\lambda n_{1}\left( 1+i\omega R\right) \omega{\tilde \xi}-\left[ \alpha_{20}-\beta_{02}-i\left( n_{2}+n_{1}^{2}\right) R\omega+\frac{1}% {2}n_{1}^{2}R^{2}\omega^{2}\right] \omega^{2}{\tilde \xi}^{2}\right\} e^{i\omega \left( R-\tau\right) }\nonumber\\ & =\frac{1}{2\pi r}\left\{ 1-i\lambda n_{1}{\tilde \xi}\left( 1+R\partial_{R}\right) \partial_{R}+{\tilde \xi}^2\left[ \alpha_{20}-\beta_{02}-\left( n_{2}+n_{1}^{2}\right) R\partial_{R}-\frac{1}{2}n_{1}^{2}R^{2}\partial_{R}^{2}\right] \partial _{R}^{2}\right\} \frac{\sin\left[ \left( R-\tau\right) \Omega\right] }{R-\tau}. \end{align} This shows that the main effect of the cutoff is to spread the propagating field around the light cone, within a wedge defined by $R\simeq\tau\pm \pi/2\Omega$. Returning to $G^{\lambda}(\omega,\mathbf{R})$, Eq.
(\ref{glor}), we can characterize the effect of the cutoff in the causal behavior of the Green function using the generalized susceptibility theorem \cite{LANDAU1}, a generalization of the Kramers-Kronig relations. Its real and imaginary parts as functions of the frequency $\omega$ are% \begin{align} \text{Re}\;G^{\lambda}(\omega,R) & \simeq\frac{1}{4\pi r}\left\{ \left[ 1+ {\tilde \xi}^2\left( \alpha_{20}-\beta_{02}-\left( n_{2}+n_{1}^{2}\right) R\partial _{R}-\frac{1}{2}n_{1}^{2}R^{2}\partial_{R}^{2}\right) \partial_{R}^{2}\right] \cos\omega R+\lambda n_{1}{\tilde \xi}\left( 1+R\partial_{R}\right) \partial_{R}\sin\left( \omega R\right) \right\} ,\nonumber\label{re}\\ & \\ \text{Im}\;G^{\lambda}(\omega,R) & \simeq\frac{1}{4\pi r}\left\{ \left[ 1+{\tilde \xi}^2\left( \alpha_{20}-\beta_{02}-\left( n_{2}+n_{1}^{2}\right) R\partial _{R}-\frac{1}{2}n_{1}^{2}R^{2}\partial_{R}^{2}\right) \partial_{R}^{2}\right] \sin\omega R-\lambda n_{1}{\tilde \xi}\left( 1+R\partial_{R}\right) \partial_{R}\cos\omega R\right\} .\nonumber\label{im}\\ & \end{align} To have a causal behavior they must satisfy the Kramers-Kronig relation% \begin{equation} \left. \text{Im\ }G(\omega)\right\vert _{KK}=-\frac{1}{\pi}P\int_{-\Omega }^{\Omega}d\omega^{\prime}\frac{\text{Re}\;G(\omega^{\prime})-\text{Re}% \;G(\Omega)}{\omega^{\prime}-\omega},% \end{equation} which gives% \begin{align} \left. \text{Im}\;G^\lambda(\omega)\right\vert _{KK} & =-\frac{1}{4\pi^{2}% r}\left\{ \left[ 1+{\tilde \xi}^2\left( \alpha_{20}-\beta_{02}-\left( n_{2}+n_{1}% ^{2}\right) R\partial_{R}-\frac{1}{2}n_{1}^{2}R^{2}\partial_{R}^{2}\right) \partial_{R}^{2}\right] \ P\int_{-\Omega}^{\Omega}d\omega^{\prime }\frac{\cos\left( \omega^{\prime}R\right) -\cos\left( \Omega R\right) }{\omega^{\prime}-\omega}\right. \nonumber\\ & \left.
+\lambda n_{1}{\tilde \xi}\left( 1+R\partial_{R}\right) \partial_{R} \ P\int_{-\Omega}^{\Omega}d\omega^{\prime}\frac{\sin\left( \omega^{\prime }R\right) -\sin\left( \Omega R\right) }{\omega^{\prime}-\omega}\right\} .\label{imgkk}% \end{align} For $\omega/\Omega\ll1$ the integrals reduce to% \begin{align} P\int_{-\Omega}^{\Omega}d\omega^{\prime}\frac{\cos\left( \omega^{\prime }R\right) -\cos\left( \Omega R\right) }{\omega^{\prime}-\omega} & \simeq2\left( 1-\cos\omega R\right) \frac{\omega}{\Omega}\cos\left( \Omega R\right) -\left[ \pi+\left[ \left( \Omega R\right) \left( \cos\Omega R\right) -\sin\Omega R\right] \left( \frac{\omega}{\Omega}\right) ^{2}\right] \sin\omega R,\nonumber\\ P\int_{-\Omega}^{\Omega}d\omega^{\prime}\frac{\sin\left( \omega^{\prime }R\right) -\sin\left( \Omega R\right) }{\omega^{\prime}-\omega} & \simeq2\left( 1-\sin\omega R\right) \frac{\omega}{\Omega}\cos\left( \Omega R\right) +\left[ \pi+\left[ \left( \Omega R\right) \left( \cos\Omega R\right) -\sin\Omega R\right] \left( \frac{\omega}{\Omega}\right) ^{2}\right] \cos\omega R.\nonumber\\ & \label{R77} \end{align} Furthermore, in the case of a radiation field we have $\Omega R\gg1$, and hence the factors $\cos\Omega R$ and $\sin\Omega R$ become strongly oscillating, nullifying the contributions of the terms where they appear (which also carry a factor $\left( \omega/\Omega\right) ^{n}$, with $n\geq1$). Thus we can take% \begin{equation} P\int_{-\Omega}^{\Omega}d\omega^{\prime}\frac{\cos\left( \omega^{\prime }R\right) -\cos\left( \Omega R\right) }{\omega^{\prime}-\omega}\simeq -\pi\sin\omega R,\qquad P\int_{-\Omega}^{\Omega}d\omega^{\prime}\frac {\sin\left( \omega^{\prime}R\right) -\sin\left( \Omega R\right) }% {\omega^{\prime}-\omega}\simeq\pi\cos\omega R. \ \label{R78} \end{equation} Replacing these integrals in (\ref{imgkk}), we finally find that if Re $G^\lambda(\omega)$ is given by Eq. (\ref{re}), the imaginary part of the Green function, Im $G^\lambda(\omega)$, must be \begin{equation} \left.
\text{Im}\;G^\lambda(\omega)\right\vert _{KK}=\frac{1}{4\pi r}\left\{ \left[ 1+{\tilde \xi}^2\left( \alpha_{20}-\beta_{02}-\left( n_{2}+n_{1}^{2}\right) R\partial _{R}-\frac{1}{2}n_{1}^{2}R^{2}\partial_{R}^{2}\right) \partial_{R}^{2}\right] \sin\omega R-\lambda n_{1}{\tilde \xi}\left( 1+R\partial_{R}\right) \partial_{R}\cos\omega R\right\} , \end{equation} which coincides with Eq. (\ref{im}), obtained by direct computation. \section{Final remarks} To summarize, in the preceding sections we have presented a general description for a large class of effective models for the electromagnetic field incorporating dynamical corrections motivated by QG and leading to departures from standard physics. The main features characterizing the models to which such a description is applicable are: (1) the validity of gauge invariance and charge conservation, (2) the use of standard commuting space-time coordinates together with the corresponding Fourier transform methods (which is not the case of Double Special Relativity models, for example), (3) the assumption that effective field theories constitute an appropriate tool for describing the low energy behavior of remnant effects which could arise from quantum gravity, (4) the assumption that low energy dynamics is linear in the potential field, (5) the inclusion of non-local effects via the operator character of the generalized susceptibilities. This description makes it also possible to include anisotropic corrections in the constitutive relations, for example via additional non-dynamical tensors arising from spontaneous Lorentz symmetry breaking, a case which is not considered in this work. The proposed formalism is quite similar to the usual electrodynamics in a medium, except for the non-local character of the effective QG corrections, mirroring the granularity of the space-time induced by quantum fluctuations of the metric. 
This feature leads to an electrodynamics with non-local constitutive relations, which contains terms connecting $\mathbf{D}$ and $\mathbf{H}$ with both $\mathbf{E}$ and $\mathbf{B}$. Thus the QG modifications can be modelled by a dispersive bianisotropic medium, where the propagation of the electromagnetic field is characterized by a refraction index, whose first order term in the perturbative parameter $\tilde{\xi}$ is directly related to vacuum birefringence. We have considered the models from the point of view of active transformations, i.e. observable Lorentz symmetry violations associated with boosts in a given reference frame. The effective models correspond in fact to higher-order theories. Hence we used an adequate generalization of the Noether theorem to find the energy-momentum tensor to second order in the LIV parameter $\tilde{\xi}$. Next we determined the density of energy and momentum carried by the electromagnetic field, for which we give explicit expressions to order $\tilde{\xi}$. They acquire a simple form when written in terms of the fields $\mathbf{E}$, $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{H}$, which shows the convenience of the latter for describing the dynamics, in an analogous way to the usual electrodynamics in media. This theory is valid only for low energies. We have also studied the consequences of this fact by using an explicit cutoff $\Omega\ll E_{QG} \sim \tilde{\xi}^{-1}$. In fact, the results in Eqs.
(\ref{R77}) and (\ref{R78}) show that the introduction of the cutoff does not produce any significant causality violation in the radiation regime ($\Omega R\gg1$), because the expected modifications proportional to $\left(\omega/\Omega \right)^n$ in (\ref{R77}), which would constitute possible signals of new physics in these approaches, are further suppressed by the highly oscillating terms proportional to $\sin(\Omega R)$ and $\cos(\Omega R)$, thus nullifying the impact of causality violation upon the corresponding observational effects. The most outstanding manifestation of the cutoff is a spreading of the propagation of the electromagnetic field around the light cone. In fact there are two sources for such a spreading. One is due to the cutoff, and the other arises from the dispersive character of the effective medium, which leads to an $\omega$-dependent phase velocity. The relation between both effects depends on the relative values of $\Omega$ and $\tilde{\xi}^{-1}$. In any case, for distances large enough from the source ($\omega R\gg1$), the dispersive effect will finally dominate. It remains to discuss the causal behavior of the full theory. There are two possible sources of acausality. One is related to the dispersive character of the effective medium, while the other is related to the existence of velocities $v>1$, which leads to photons propagating to the past in highly boosted reference frames, and hence to the possibility of acausal loops. This issue is beyond the scope of the present work, and will be discussed in detail elsewhere. \section*{Acknowledgements} R.M. acknowledges partial support from CONICET-Argentina. L.F.U. is partially supported by projects CONACYT-40745F,\ CONACYT-47211F and DGAPA-UNAM-IN104503-3.
\section{Introduction} Dihadron jet-like correlations with a high transverse momentum ($\pT$) trigger particle provide a unique tool to study the hot and dense medium created in relativistic heavy-ion collisions, owing to the fact that the away-side partner jet has to traverse the entire medium due to the surface bias of the production points of high-$\pT$ particles. Dihadron correlations of intermediate $\pT$ associated particles revealed novel structures on the away side of the high-$\pT$ trigger particle. The away-side correlations in central Au+Au collisions are significantly broader than in pp and d+Au collisions and, in a restricted kinematic range, are double-peaked away from $\dphi=\pi$~\cite{jetspec,Horner,Phenix}. This observation has motivated many theoretical investigations of its physics origin~\cite{Stoecker,Casalderrey,Ruppert,Renk}. Three-particle correlations showed evidence of conical emission of away-side correlated hadrons~\cite{3part}. The conical emission angle is found to be independent of the associated particle $\pT$, suggesting that the underlying physics mechanism may be Mach-cone shock waves. The three-particle cumulant analysis in azimuth relative to the reaction plane~\cite{3part} confirms this finding. A recent study~\cite{Takahashi} suggests that fluctuations in initial conditions may also create an away-side double-hump structure in dihadron correlations. Such mechanisms, however, would not generate conical emission structures in three-particle correlations~\cite{Qian}. The conical emission angle is measured to be $\theta = 1.37 \pm 0.02 {\rm (stat.)} \pm 0.06 {\rm (syst.)}$~\cite{3part}. If Mach cones are indeed the underlying mechanism, one may obtain, ideally, the medium's speed of sound via $c_s=\cos(\theta)$. However, model studies~\cite{Renk} indicate that the Mach cone angle would be strongly affected by the medium expansion. The effects depend on the relative configurations of the Mach cones and the collision geometry.
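As a simple numerical aside, the ideal-case relation $c_s=\cos(\theta)$ quoted above, with linear error propagation ($\delta c_s=\sin\theta\,\delta\theta$) applied to the measured angle, gives the following estimate (valid only in the static-medium idealization, before the expansion effects discussed above are taken into account):

```python
import math

# Ideal Mach-cone estimate of the speed of sound from the measured cone angle.
theta, stat, syst = 1.37, 0.02, 0.06      # rad, from the three-particle analysis
cs = math.cos(theta)
print(f"c_s = {cs:.3f} +- {math.sin(theta)*stat:.3f} (stat.) "
      f"+- {math.sin(theta)*syst:.3f} (syst.)")
```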
We attempt to study these effects by exploiting dihadron correlations as a function of the trigger particle azimuth relative to the reaction plane (RP). Such correlations are sensitive to the collision geometry as well as the orientation of the possible Mach cones. \section{Dihadron correlation analysis relative to the reaction plane} We use the second Fourier harmonic to determine the event plane (EP) azimuth $\psiEP$~\cite{flowMethod}, using particles below $\pT = 2$~\gev. To avoid self-correlation, particles from the $\pT$ bin used for our correlation analysis are excluded from the EP construction. Non-flow correlations, such as dijets, can influence the EP determination. To reduce this effect, we use the modified reaction plane ({\sc mrp}) method~\cite{v2MRP}, excluding particles within $|\deta|<0.5$ of the highest $\pT$ particle in the event from the EP construction. We divide our data into six slices in $\phis$, the trigger particle azimuth relative to the EP, and analyze azimuthal correlations separately in each slice. The correlation background has a flow modulation that depends on the trigger and associated particles' second and fourth harmonic anisotropies, $\vf$ and $\vv$, and the EP resolutions~\cite{Bielcikova}. We obtain the EP resolutions by the sub-event method~\cite{flowMethod}. There are several measurements of elliptic flow anisotropies. The two-particle and the {\sc mrp} methods give similar results and significantly overestimate elliptic flow due to non-flow~\cite{v2MRP}. The major component of non-flow is the measured small-angle minijet correlation~\cite{minijet}. Since away-side pairs contain much less non-flow and the elliptic flow effect is symmetric between the near and away sides, we use \vAS $=\sqrt{\mean{\cos2\dphi}}$, computed from untriggered two-particle azimuthal correlations in inclusive events for our centrality and $\pT$ bins, as our upper limit of systematic uncertainty.
The four-particle method ($\flow{4}$)~\cite{Aihong} gives the smallest anisotropy parameter for the centrality range 20-60\% used in the present study~\cite{Kettler}. $\flow{4}$ is likely an underestimate of elliptic flow because the flow fluctuation effect is negative in $\flow{4}$. We note that $\flow{4}$ may still contain some non-flow effects; however, the agreement between $\flow{4}$ and the Lee-Yang-Zero method suggests that such non-flow effects are small. We take $\vf=($\vAS$+\flow{4})/2$ and the range between \vAS\ and $\flow{4}$ as our uncertainty. We parameterize the $\vv$ measurement~\cite{v2MRP} by $\vv=1.15\vf^2$. The background level $B$ is normalized using the Zero-Yield-At-Minimum (\zyam) method. The background levels can be different for the different $\phis$ slices because of the net effect of the variations in jet-quenching with $\phis$ and the centrality cuts on particle multiplicity in $|\eta|<0.5$. In our correlation analysis $B$ is treated independently in individual $\phis$ slices. The systematic uncertainty of $B$ due to \zyam\ itself is assessed by varying the size of the $\dphi$ normalization range between $\pi/12$ and $\pi/6$. The systematic uncertainty due to deviation of $B$ from \zyam\ is assessed by Gaussian fits to the \zyam-subtracted correlation functions, similar to those in Fig.~\ref{fig:corr_symm} but with a free pedestal, as well as by comparing $B$ to those obtained from asymmetric correlation functions (see Fig.~\ref{fig:corr_asym}). The effect of the $B$ uncertainty is an approximately constant shift to the baseline of the correlation functions, without significant change to their shapes. The systematic uncertainty on $B$ is not included in the results reported in these proceedings. When assessing the systematic uncertainty due to elliptic flow on our correlation results, we use \zyam\ to adjust the background normalization.
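The \zyam\ normalization described above can be sketched as follows. This is an illustration only, not the analysis code: the toy correlation, bin width, and parameter values are invented, and the flow-modulated background is reduced to its leading $\cos2\dphi$ term.

```python
import numpy as np

def zyam_subtract(dphi, signal, bkg_shape, norm_half_width):
    """Scale bkg_shape so the subtracted correlation has Zero Yield At Minimum.

    The minimum is located from the raw signal/background ratio, then the
    background level is fixed by averaging over +-norm_half_width around it.
    """
    ratio = signal / bkg_shape
    i_min = np.argmin(ratio)
    window = np.abs(dphi - dphi[i_min]) <= norm_half_width
    scale = np.mean(signal[window] / bkg_shape[window])
    return signal - scale * bkg_shape

# Toy correlation: flow-modulated background plus a near-side Gaussian peak.
dphi = np.linspace(-np.pi / 2, 3 * np.pi / 2, 120, endpoint=False)
v2t, v2a = 0.05, 0.08  # hypothetical trigger/associated anisotropies
bkg_shape = 1.0 + 2 * v2t * v2a * np.cos(2 * dphi)
signal = 10.0 * bkg_shape + 0.8 * np.exp(-dphi**2 / (2 * 0.4**2))

corr = zyam_subtract(dphi, signal, bkg_shape, norm_half_width=np.pi / 12)
```

Varying `norm_half_width` between $\pi/12$ and $\pi/6$ mimics the systematic check on the normalization range quoted in the text.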
Due to the interplay between flow modulation and \zyam\ normalization, the uncertainty on our correlation functions due to flow is larger for in-plane than for out-of-plane trigger particles (see Figs.~\ref{fig:corr_symm} and~\ref{fig:corr_asym}). \red{Further details of the analysis can be found in Ref.~\cite{corrRP}.} \section{Dihadron correlation results relative to the reaction plane} In this work we focus on the away-side dihadron correlations. We avoid the near-side jet-like component by analyzing azimuthal correlations at large $|\deta|>0.7$. Figure~\ref{fig:corr_symm} shows the azimuthal correlations in six slices of $|\phis|$ from in- to out-of-plane\red{~\cite{corrRP}}. The positive and negative $\phis$ slices are combined. The near-side peak is mainly due to the ridge, which decreases from in-plane to out-of-plane. The away-side structure changes dramatically, from singly peaked in-plane to distinctively double-peaked out-of-plane. It appears that whenever there is a large ridge on the near side, there is a comparable component at $\dphi=\pi$ on the away side. \begin{figure}[hbt] \begin{center} \hfill \includegraphics[width=0.98\textwidth]{fig19_ridge0_4gneq_vs_phis_Josh_C20-60_T3-4_A1-2.eps} \end{center} \vspace{-3.3cm}\hfill STAR preliminary\hspace*{5.3cm} \vspace{2.1cm} \caption{Azimuthal correlation at $|\deta|>0.7$ versus trigger azimuth relative to RP, $|\phis|$, in 20-60\% Au+Au collisions\red{~\cite{corrRP}}. The trigger and associated $\pT$ ranges are $3<\pTt<4$~\gev\ and $1<\pTa<2$~\gev. The shaded areas are systematic uncertainties due to flow. The curves are Gaussian fit results: back-to-back ridges at $\dphi=0$ and $\pi$, and away-side peaks symmetric about $\dphi=\pi$.} \label{fig:corr_symm} \end{figure} Many experimental observations suggest that the ridge and the jet-like component are unrelated~\cite{Puschke,Netrakanti,Nattrass}.
If they are indeed unrelated, then due to symmetry there ought to be an identical excess of momentum on the away side to balance the ridge. This momentum balance is different from the statistical momentum balance due to an extra, fluctuating high-$\pT$ particle. In other words, there is no criterion to determine which is near-side and which is away-side if the ridge and the jet are unrelated; the two sides have to be considered equally. It is thus tempting to fit the large $\deta$ azimuthal correlation with two away-side Gaussians symmetric about $\dphi=\pi$ and two ridge Gaussians back-to-back at $\dphi=0$ and $\pi$. We keep identical widths for the ridge Gaussians but allow their magnitudes to vary independently. The fit results are superimposed in Fig.~\ref{fig:corr_symm}. Figure~\ref{fig:corr_symm_fit}(a) shows the fit double-peak angle as a function of $\phis$ for three associated $\pTa$ bins\red{~\cite{corrRP}}. The peak angle is approximately constant for $|\phis|<45^{\circ}$. For $|\phis|>45^{\circ}$ it increases with $\phis$, and becomes different for low and high $\pTa$. The larger angle for out-of-plane trigger particles may be due to a more significant influence from medium flow. For in-plane orientation, the conical emission hadrons on the away side are likely aligned with the medium flow direction, receiving insignificant deflection to their $\pT$. The conical emission angle may even be shrunk if it forms after passing the center of the medium. Moreover, the overlap collision zone is thinner in-plane, so the away-side correlated hadrons can escape the collision zone more easily. For out-of-plane orientation, on the other hand, the conical emission hadrons move more or less perpendicularly to the medium flow direction because of the long path length they have to traverse. They receive a large side-kick from the medium flow, broadening their final emission angle.
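The four-Gaussian fit model used above (two ridge Gaussians at $\dphi=0$ and $\pi$ with a shared width, plus an away-side pair symmetric about $\pi$) can be written down compactly. The sketch below is an illustration of the model only: the parameter values are invented, and the periodic image terms that a real fit over a finite $\dphi$ window would include are omitted.

```python
import math

def gauss(x, mu, sigma):
    """Unit-amplitude Gaussian."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def four_gauss(dphi, r0, rpi, w_ridge, a_cone, d_cone, w_cone):
    """Back-to-back ridge Gaussians at 0 and pi (shared width) plus a pair of
    away-side conical-emission peaks placed symmetrically about pi."""
    ridge = r0 * gauss(dphi, 0.0, w_ridge) + rpi * gauss(dphi, math.pi, w_ridge)
    cone = a_cone * (gauss(dphi, math.pi - d_cone, w_cone)
                     + gauss(dphi, math.pi + d_cone, w_cone))
    return ridge + cone

# Hypothetical parameters: ridge magnitudes, shared ridge width,
# cone amplitude, cone angle D, and cone width.
params = dict(r0=0.5, rpi=0.4, w_ridge=0.7, a_cone=0.3, d_cone=1.2, w_cone=0.35)
```

With equal ridge magnitudes the model is symmetric about $\dphi=\pi$; letting `r0` and `rpi` differ corresponds to the independent-magnitude variant mentioned in the text.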
\begin{figure}[hbt] \begin{center} \hfill \includegraphics[width=0.45\textwidth]{fig21a_cone_vs_phis.eps} \includegraphics[width=0.45\textwidth]{fig21b_cone_vs_pt.eps} \end{center} \vspace{-1.9cm}\hfill STAR preliminary\hspace{3.9cm}STAR preliminary\hspace*{0.5cm} \vspace{0.8cm} \caption{Away-side double-peak fit position (relative to $\pi$) as a function of (a) $\phis$ and (b) $\pTa$. Error bars are statistical only. \red{The systematic uncertainties due to elliptic flow are indicated by the dashed lines.} Data are from 20-60\% Au+Au collisions, and the trigger particle $\pTt$ range is $3<\pTt<4$~\gev\red{~\cite{corrRP}}.} \label{fig:corr_symm_fit} \end{figure} Figure~\ref{fig:corr_symm_fit}(b) shows the conical emission peak angle as a function of $\pTa$ for in- and out-of-plane trigger particle orientations\red{~\cite{corrRP}}. The peak angle is relatively independent of associated $\pTa$ for in-plane trigger particles. The peak angle for the out-of-plane orientation is larger, consistent with a larger broadening effect from medium flow. However, the peak angle increases with $\pTa$, which is naively unexpected if those particles are pushed by the same medium flow velocity. It is, however, possible that the higher $\pTa$ hadrons are emitted earlier by the away-side parton while traversing the medium, thereby receiving a larger flow effect in the outer region of the bulk medium than the low $\pTa$ hadrons~\cite{Ma}. It is worth noting that the peak positions reported here are from fits to dihadron correlations. They are different from those obtained from three-particle correlations~\cite{3part}, where the conical emission angle was found to be independent of associated particle $\pTa$. The angle from the three-particle correlation fit is cleaner because the peaks are more cleanly separated in the two-dimensional azimuth space, while the fit to dihadron correlations is more affected by other physics effects.
One such effect is jet deflection, which was found to be present in three-particle correlations, where the diagonal peaks are stronger than the off-diagonal conical emission peaks~\cite{3part}. The results reported above combine the trigger particles above and below the EP. We can study the correlations of those trigger particles separately and learn more about the interplay between jet correlations and the collision geometry. Those results are shown in Fig.~\ref{fig:corr_asym}, where the correlation for a positive $\phis$ bin is flipped via $\dphi\rightarrow-\dphi$ and properly shifted before being combined with that in the symmetric negative $\phis$ bin. The background normalization is done by the \zyam\ method; the subtracted background is lower than in Fig.~\ref{fig:corr_symm} because the minimum now appears on only one side, $\dphi\approx-1$, instead of on both sides ($\dphi\approx\pm1$) in Fig.~\ref{fig:corr_symm}. This difference is used as an assessment of the \zyam\ uncertainty as mentioned above. Again we fit the correlation functions with four Gaussians: two for the back-to-back ridges and two for the away-side conical emission peaks. We fix the back-to-back ridges to be identical including their amplitudes. We also allowed their magnitudes to vary independently and obtained consistent fit results. \begin{figure}[hbt] \begin{center} \hfill \includegraphics[width=0.98\textwidth]{4GaussFittoRidgeData.eps} \end{center} \vspace{-0.7cm} \caption{Same as Fig.~\ref{fig:corr_symm}, except that the correlation in a positive $\phis$ bin is flipped and properly shifted before being combined with that in the symmetric negative $\phis$ bin. The away-side peaks are now not symmetric about $\dphi=\pi$.
The associated $\pTa$ range is $1<\pTa<1.5$~\gev.} \label{fig:corr_asym} \end{figure} \begin{figure}[hbt] \vspace*{-0.5cm} \begin{center} \hfill \includegraphics[width=0.73\textwidth]{Fig4FQSQM.eps} \includegraphics[width=0.25\textwidth]{cartoon.eps} \end{center} \vspace{-0.7cm} \caption{The away-side double-peak (a) positions, (b) areas, and (c) widths from the fits in Fig.~\ref{fig:corr_asym}. (d) Illustration of away-side conical emission angles. The triangles (circles) in (a-c) correspond to the first (second) conical emission peak in (d).} \label{fig:corr_asym_fit} \end{figure} Figure~\ref{fig:corr_asym_fit}(a) shows the away-side peak positions versus $\phis$. The cartoon aids visualization. As discussed above, the conical emission angle is not significantly affected by medium flow when the trigger particle is aligned in the RP. As the trigger particle moves away from the RP, the angle of the second peak (circles) does not seem to change, indicating an insignificant effect from medium flow. By contrast, the other conical emission peak changes its location significantly. The largest change appears at $\phis\sim45^\circ$. If the away-side partner parton is deflected by the medium flow and then loses energy, generating conical emission at a fixed angle, then the relative distance between the peaks would remain the same. The data seem to indicate otherwise, suggesting that it is the conical emission particles that are pushed by flow, not the away-side partner parton. It was suggested that asymmetric away-side peaks may arise from absorption of particles traversing different amounts of medium~\cite{Jia}. To investigate such an effect, we show in Fig.~\ref{fig:corr_asym_fit}(b,c) the Gaussian areas and widths of the away-side peaks. The peak areas are similar, without evidence of medium absorption, whereas the peak widths are broadened, consistent with broadening due to the medium flow.
\section{Summary} We have studied dihadron correlations with high $\pT$ trigger particles at $|\deta|>0.7$ as a function of the trigger azimuth ($\phis$) relative to the event plane. We combine the correlations from the two symmetric $\phis>0$ and $\phis<0$ bins straightforwardly as well as after flipping the former in $\dphi$. We argue that the near-side ridge is accompanied by a similar ridge on the away side. We fit the correlation function with four Gaussians, two for the back-to-back ridges and two for the away-side conical emission peaks. We investigate the fit parameters as a function of $\phis$. We find significant variations in the peak positions, areas, and widths with $\phis$, suggesting geometrical effects on conical emission. These effects are likely due to the medium flow. We find no evidence of medium absorption. Our study should help disentangle medium flow effects in the conical emission signal and may further our understanding of the medium created in heavy-ion collisions, such as its speed of sound and equation of state. \grey{One of the future tasks is to assess the systematic effects of elliptic flow and \zyam\ background uncertainties on our fit results.} \section*{References}
\section{Prologue} {\em `Stratification is a common technique,'} often necessary, but also attractive because it {\em `may produce a gain in precision in the estimates of characteristics of the whole population'} (Cochran (1963), \S 5.1). In fact, if all sampling is done with replacement and the sample sizes are proportional to the strata sizes, stratified random sampling is at least as precise as simple random sampling. It even approaches perfection as the homogeneity inside the strata increases, i.e., as the heterogeneity of the population is more reflected by the heterogeneity between the strata and less by the heterogeneity inside the strata. Consequently, the rule of thumb with respect to stratified sampling is that it doesn't hurt to try. In real life, however, sampling is {\em without\/} replacement (cf. Cochran (1963), \S 2.1: {\em `Sampling with replacement is entirely feasible but except in special circumstances is seldom used, since there seems little point in having the same unit twice in the sample'\/})---and without replacement, the rule of thumb is no longer valid. Even optimal stratified sampling may hurt then, in that the corresponding estimator can have a larger variance than the estimator based on a simple random sample (cf. Armitage (1947), Cochran (1963) \S5.6, Evans (1951), and Govindarajulu (1999) \S5.5; for the obscurity of this fact, please see, for instance, the same Govindarajulu (1999) \S5.5 and Wilks (1963), \S10.9). The intuition might be helped here by realizing that, as equalities (\ref{truth}) below remind us, {\em not\/} replacing yields an advantage that is zero for a sample size of~1 and increases with the sample size. Thus, the advantage is larger for one sample of size $n>1$ than for~$n$ samples of size~1; cf. the illustration of Theorem 3 in \S2. We only consider the simplest possible case, that of a dichotomous population.
For this case, the results in Armitage (1947), Cochran (1963), and Evans (1951) are extended to what looks like a quite complete picture. The simple-is-better effect shows up in more circumstances than previously thought and is provided with exact bounds for its size. \section{Lower and Upper Bounds} Consider an urn containing $N$ balls, of which $pN$ are red and $(1-p)N$ are black for a~$p\in[0,1]$. We want to estimate~$p$. One approach is to take a sample of $n$ balls from the urn and estimate~$p$ by the fraction of the red balls in the sample. A sample of $n$ balls {\em with replacement\/} is a random member $(b_1,\ldots,b_n)$ of $({\rm urn})^n$, where all outcomes are equally likely; a sample of $n$ balls {\em without replacement\/} is the same, except that the $b_1,\ldots,b_n$ are all different. Let~$X$ and~$Y$ denote the number of red balls in a sample of size~$n$ with and without replacement, respectively. Then \def\var{{\rm var}}for the fraction estimators $X/n$ and $Y/n$ for~$p$ the truth of \begin{equation}\label{truth} \var {X\over n}={p(1-p)\over n}={N-1\over N-n}\,\var {Y\over n} \label{well} \end{equation} is well known; it is better never to see the same ball twice. \def\StratEstWith {the Stratification Estimator With Replacement}% \def\StratEst {the Stratification Estimator Without Replacement}% \def\SimpleEstWith {the Simple Estimator With Replacement}% \def\SimpleEst {the Simple Estimator Without Replacement}% \def\stratestwith {the stratification estimator with replacement}% \def\stratest {the stratification estimator without replacement}% \def\simpleestwith {the simple estimator with replacement}% \def\simpleest {the simple estimator without replacement}% Now suppose the urn consists of $m\geq 2$ disjoint sub-urns, {\em strata}, each stratum${}_j$ containing~$N_j\geq2$ balls, $\sum^m_{j=1}N_j=N$, and that for each stratum${}_j$ we know $N_j/N$ but not its fraction~$p_j$ of red balls. 
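Identity (\ref{truth}) can be verified numerically by enumerating the exact binomial and hypergeometric distributions of $X$ and $Y$. A small sketch (the urn parameters below are arbitrary):

```python
from math import comb

def var_with_replacement(N, R, n):
    """Variance of the fraction estimator X/n, X ~ Binomial(n, p), p = R/N."""
    p = R / N
    return p * (1 - p) / n

def var_without_replacement(N, R, n):
    """Variance of Y/n, Y hypergeometric, by direct enumeration of the pmf."""
    ks = range(max(0, n - (N - R)), min(n, R) + 1)
    pmf = {k: comb(R, k) * comb(N - R, n - k) / comb(N, n) for k in ks}
    mean = sum((k / n) * q for k, q in pmf.items())
    second = sum((k / n) ** 2 * q for k, q in pmf.items())
    return second - mean ** 2

# Arbitrary urn: N balls of which R are red; sample size n.
N, R, n = 20, 7, 6
lhs = var_with_replacement(N, R, n)                        # p(1-p)/n
rhs = (N - 1) / (N - n) * var_without_replacement(N, R, n)
```

The two sides agree to machine precision, and the without-replacement variance is strictly smaller, as the text says: it is better never to see the same ball twice.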
Let $(n_1, \ldots, n_m)$ be an {\em allocation}, i.e., the~$n_j$ are natural numbers with $1\leq n_j\leq N_j$ for all~$j$ and $\sum_{j=1}^m n_j =n$, and let $X_j$, $Y_j$, $j=1,\ldots,m$, denote the number of red balls in a sample of size~$n_j$ from stratum$_j$ with and without replacement, respectively; then each of \begin{eqnarray*} {X\over n}&& (\hbox{\simpleestwith}),\\ \sum_{j=1}^m {N_j\over N} {X_j\over n_j}&&(\hbox{\stratestwith}),\\ {Y\over n}&&(\hbox{\simpleest}),\\ \hbox{and}\quad\sum_{j=1}^m {N_j\over N} {Y_j\over n_j}&&(\hbox{\stratest}) \end{eqnarray*} is an unbiased estimator for~$p$; for their optimality, cf. Neyman (1934). For both with and without replacement we want to compare the variance of the simple estimator to that of the stratification estimator, i.e., $\var (X/n)$ to $\var(\sum_{j=1}^m (N_j/N) X_j/n_j)$ and $\var(Y/n)$ to $\var(\sum_{j=1}^m (N_j/N) Y_j/n_j)$, under the assumption that the~$X_j$ are independent as well as the~$Y_j$. It is immediate that if the strata are homogeneous but the whole population is not, i.e., $p\in(0,1)$ and each $p_j$ is equal to~0 or~1, then the stratification estimators are perfect while, with replacement, the simple estimator is not, and, without replacement, the simple estimator is only perfect when it is exhaustive, so that $0=\var(\sum_{j=1}^m (N_j/ N) X_j/n_j)<\var(X/ n)$ and $0=\var(\sum_{j=1}^m (N_j/ N) Y_j/n_j)\leq\var(Y/ n)$. For arbitrary $p_j$, the stratification estimator $\sum_{j=1}^m (N_j/N) X_j/n_j$ is still not worse than the simple estimator $X/n$ as long as the allocation is {\em proportional}, i.e., $n_j=(N_j/N)n$ for every~$j$, because in that case \begin{equation}\label{PropWithRepl} \var \sum_{j=1}^m {N_j\over N} {X_j\over n_j}= \var {X\over n} - {1\over n}\sum_{j=1}^m {N_j\over N}(p_j-p)^2 \end{equation} holds, as one easily verifies.
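Identity (\ref{PropWithRepl}) is easy to check numerically. A sketch with arbitrary strata (the sizes and red fractions below are invented):

```python
# Strata sizes, per-stratum red fractions, and a proportional allocation.
N_j = [20, 30, 50]
p_j = [0.5, 0.2, 0.8]
N = sum(N_j)                        # 100
n = 10
n_j = [Nj * n // N for Nj in N_j]   # proportional: (2, 3, 5)

p = sum(Nj * pj for Nj, pj in zip(N_j, p_j)) / N   # overall red fraction

# Variance of the stratification estimator with replacement:
# sum over strata of (N_j/N)^2 * p_j(1-p_j)/n_j (independent binomials).
var_strat = sum((Nj / N) ** 2 * pj * (1 - pj) / nj
                for Nj, pj, nj in zip(N_j, p_j, n_j))

# Right-hand side of the identity: the simple-estimator variance minus
# the between-strata term.
var_simple = p * (1 - p) / n
between = sum((Nj / N) * (pj - p) ** 2 for Nj, pj in zip(N_j, p_j)) / n
```

The two sides agree, and the gain over simple sampling is exactly the between-strata heterogeneity term, vanishing only when all $p_j$ are equal.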
Thus, if all sampling is with replacement and the allocation is proportional, then stratified sampling is seen to reduce the variance, unless all the~$p_j$ are equal. And where it doesn't help, it doesn't harm either. However, this reassurance no longer holds as soon as we change the allocation: \begin{thm}\label{WithNotProp} If $p_1=\cdots =p_m=p\in(0,1)$ and the allocation is {\em not} proportional, then {simple is better} in that $$ \var \sum_{j=1}^m {N_j\over N} {X_j\over n_j} > \var {X\over n}, $$ i.e., the variance of \stratestwith{} is greater than the variance of \simpleestwith. \end{thm} Nor does it hold if all samples are drawn {\em without\/} replacement: \begin{thm}\label{WithoutReplWithoutMalus} If $p_1=\cdots =p_m=p\in(0,1)$ and $n<N$, then\/ {simple is better} in that \begin{equation} \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} > \var {Y\over n}, \label{Thm2} \end{equation} i.e., the variance of \stratest{} is greater than the variance of \simpleest. \end{thm} This is (vi{\em b}) in Armitage (1947) for a dichotomous population, i.e., for the case where Armitage's `variable~$x$' has only 2 different values, except that we do not need his condition that (in the dichotomous case) all the~$N_j$ are equal. It is also the dichotomous case of what is proved at the end of \S5.6 in Cochran (1963), except that our condition $p_j=p$ is replaced by the condition that all the `mean square [errors] within strata' $p_j(1-p_j)N_j/(N_j-1)$ are equal and larger than the `mean square [error] among strata' $\sum_{j=1}^m N_j(p_j-p)^2/(m-1)$. In H\'ajek (1981), the observation after (20.31) that simple is better if $p_j=p$ only refers to proportional allocation, not necessarily to all allocations. 
Under additional conditions inequality (\ref{Thm2}) may be sharpened: \begin{thm}\label{WithoutReplWithMalus} Let $${\cal B}:={N-1\over N-m}\var {Y\over n}={N-n\over N-m}\,{p(1-p)\over n}.$$ If $p_1=\cdots =p_m=p\in(0,1)$, $n<N$, and (c1) $n_j\leq {3\over 4} N_j$ for $j=1,\ldots,m$, or (c2) the allocation is proportional, or (c3) $N_1=N_2=\ldots = N_m=N/m$, then $$ \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} \geq {\cal B}, $$ i.e., the variance of \stratest{} is at least $(N-1)/(N-m)\times$ the variance of \simpleest, and $$\var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} = {\cal B} \quad\Leftrightarrow\quad n_j=n/m,\, N_j=N/m\quad\forall j. $$ \end{thm} Theorem \ref{WithoutReplWithMalus}(c3) follows from Evans (1951) (12{\em a, c}). A special case will illustrate Theorem~\ref{WithoutReplWithMalus} and bring out a weak point of the stratification estimator; the factor $(N-1)/(N-m)$ in~${\cal B}$ appearing here was met in (\ref{truth}) (take $n=m$). Imagine that all strata not only have the same composition, i.e., $p_j=p$, but also the same size, i.e., $N_j=N/m$, and that from each stratum only one ball is taken, so $n_j=1$ and $n=m$. Then $ \sum_{j=1}^m (N_j/N) Y_j/n_j= (1/n)\sum_{j=1}^n Y_j $, which is distributed as~$X/n$, so that with (\ref{well}) $$ \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} = \var {X\over n} = {N-1\over N-n} \,\var {Y\over n} ={\cal B}. $$ Splitting the sample over the strata reduces the without-replace\-ment bonus from (\ref{well}). The need for extra conditions in Theorem~\ref{WithoutReplWithMalus}, such as (c1), (c2), or (c3), and the room there is for Theorem~\ref{WithoutReplWithoutMalus}, are demonstrated by considering $m=N_1=n_1=2$, $n=N-1$, and $N>5$.
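A numerical sketch of Theorem~\ref{WithoutReplWithMalus} (the urn below is invented; all strata share $p_j=p$, and both allocations satisfy (c1)):

```python
def strat_var_without_repl(N_j, n_j, p_j):
    """Variance of the stratification estimator without replacement:
    sum over strata of (N_j/N)^2 * p_j(1-p_j)/n_j * (N_j-n_j)/(N_j-1)."""
    N = sum(N_j)
    return sum((Nj / N) ** 2 * pj * (1 - pj) / nj * (Nj - nj) / (Nj - 1)
               for Nj, nj, pj in zip(N_j, n_j, p_j))

def lower_bound_B(N, n, m, p):
    """The bound B = (N-n)/(N-m) * p(1-p)/n of Theorem 3."""
    return (N - n) / (N - m) * p * (1 - p) / n

p = 0.5
N_j = [4, 4, 4]          # equal strata sizes
N, m, n = sum(N_j), len(N_j), 6

# Equal allocation n_j = n/m with N_j = N/m: the bound B is attained ...
var_eq = strat_var_without_repl(N_j, [2, 2, 2], [p] * m)

# ... while an unequal allocation (still n_j <= 3/4 N_j) lies strictly above.
var_uneq = strat_var_without_repl(N_j, [1, 2, 3], [p] * m)

B = lower_bound_B(N, n, m, p)
```

This reproduces both halves of the theorem in a toy case: equality exactly at the balanced configuration, strict inequality otherwise.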
Further, one may ask if the condition `$p_1=\cdots =p_m$' is only the beginning: are there other distributions of the red balls among the strata, for which there are theorems similar to Theorem~\ref{WithoutReplWithMalus} but with lower bounds that are even higher than~${\cal B}$ in Theorem~\ref{WithoutReplWithMalus}? The answer is `no', as long as $N_j:=N/m$ and $n_j:=n/m$ are feasible choices, because for these choices~${\cal B}$ is an {\em upper\/} bound (over varying distributions, given $N$, $n$, $m$, and $p$) for the variance of the stratification estimator $\sum_{j=1}^m {(N_j/ N)} {Y_j/ n_j}$ (corresponding to $p_1=\cdots =p_m$; cf. Theorem~\ref{WithoutReplWithMalus}): \begin{thm}\label{Minimax} If $N_j=N/m\geq2$, $n_j=n/m$ for all~$j$, and $n<N$ with ${\cal B}$ as in Theorem~\ref{WithoutReplWithMalus}, then $$ \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} \leq {\cal B}, $$ i.e., the variance of \stratest{} is at most $(N-1)/(N-m)\times$ the variance of \simpleest, and $$ \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} = {\cal B} \quad\Leftrightarrow\quad p_1=\cdots=p_m. $$ \end{thm} Cf. `the worst result to be anticipated' on p.\thinspace 99 of Evans (1951). (Namely, `for a second' variable of interest; strata that are good, i.e., different, with respect to the first variable of interest need not be so for a second.) In the situation of Theorem~\ref{Minimax}, the worst is not that bad: in practice, $(N-1)/(N-m)$ will be close to~1, and it also follows that {\em if for every stratum the sample size is increased by~1, the new variance will {not} exceed the simple random sample variance corresponding to the old sample size.} Theorem~\ref{Minimax} shows that if we want to curb the badness of stratified sampling and proportional allocation is an option, then it works, at least for strata of the same size.
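Theorem~\ref{Minimax} can likewise be checked numerically (a sketch; the red-ball distribution below is arbitrary):

```python
def strat_var_without_repl(N_j, n_j, p_j):
    """Variance of the stratification estimator without replacement."""
    N = sum(N_j)
    return sum((Nj / N) ** 2 * pj * (1 - pj) / nj * (Nj - nj) / (Nj - 1)
               for Nj, nj, pj in zip(N_j, n_j, p_j))

m, Nj, nj = 3, 4, 2      # equal strata sizes and equal sample sizes
N, n = m * Nj, m * nj
p = 0.5                  # pN = 6 red balls in total

B = (N - n) / (N - m) * p * (1 - p) / n

# Nature's worst case: all strata identical, p_j = p, attains B ...
var_flat = strat_var_without_repl([Nj] * m, [nj] * m, [p] * m)

# ... while another admissible distribution of the 6 red balls stays below it.
var_skew = strat_var_without_repl([Nj] * m, [nj] * m, [0.25, 0.5, 0.75])
```

So, over varying distributions with this balanced design, the homogeneous case $p_1=\cdots=p_m$ is indeed the maximizer, as the theorem asserts.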
This makes us realize that it {\em always\/} works, even for {\em arbitrary\/} strata sizes, because independence, (\ref{truth}), and (\ref{PropWithRepl}) imply $$ \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} \leq \var \sum_{j=1}^m {N_j\over N} {X_j\over n_j} \leq {p(1-p)\over n} $$ under proportional allocation. Our final results essentially show how the upper bound $p(1-p)/n$ for $\var \sum_{j=1}^m {(N_j/N)} {Y_j/ n_j} $ can be improved. \begin{thm}\label{Minimax2} If $(N_j/N)n$ and $pN_j$ are integers and $0<pN_j<N_j$ for all~$j$, then \begin{eqnarray} \nonumber {\cal B}&\leq&\min_{\rm Statistician} \max_{\rm Nature} \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j}\\ &\leq&{\cal B} + {N-n \over 4(N-m)nN^2} \sum_{j=1}^{m}{N-mN_j\over N_j-1}, \label{upper bound} \end{eqnarray} where `Statistician' means an allocation $(n_1, \ldots, n_m)$ that satisfies $n_j\leq {3\over 4} N_j$ for all~$j$ or is proportional, `Nature' means a distribution $(p_1,\ldots,p_m)$ of the $pN$ red balls among the strata (so $p_j\in[0,1]$, $\sum_{j=1}^m p_jN_j = pN$, and $p_j N_j \in\{0,1,\ldots, N_j\}$), and ${\cal B}$ is as in Theorem~\ref{WithoutReplWithMalus}; in fact, $\max_{\rm Nature}$ does not exceed the upper bound in (\ref{upper bound}) if the allocation is proportional. If $n\leq (3/4)N$, then the upper bound in (\ref{upper bound}) does not exceed $p(1-p)/n$. \end{thm} For the lower bound we observe that it follows from Theorem \ref{WithoutReplWithMalus} and that, also by Theorem \ref{WithoutReplWithMalus}, if not $N_j=N/m$ for all~$j$, then `${\cal B}\leq$' may be replaced by `${\cal B}<$'. The difference between upper and lower bound is bounded by $1/4N$ because $\sum_{j=1}^{m}{(N-mN_j)/( N_j-1)}\leq \sum_{j=1}^{m}{N/( 2-1)}= mN$; it reduces to~$0$ in case all strata have the same size (cf. Theorem~\ref{Minimax}). 
The circumstances under which stratified sampling will hurt have been called {\em `very unusual'} and {\em `extreme'} (Evans, 1951), {\em `an academic curiosity'}, which will happen only {\em `mathematically'} (Cochran, 1963), as well as {\em `quite conceivable'} (Govindarajulu, 1999). \section{Justifications} \begin{pf*}{Proof of Theorem \ref{WithNotProp}} By Jensen's inequality we obtain \begin{eqnarray*} \var \sum_{j=1}^m {N_j\over N} {X_j\over n_j} &=& p(1-p) \sum_{j=1}^m {1\over n_j N/N_j} {N_j\over N} \\ &\geq & p(1-p) {1\over \sum_{j=1}^m {n_jN\over N_j} {N_j\over N }} \\ &=& {p(1-p)\over n} =\var {X\over n}, \end{eqnarray*} with `$=$' instead of `$\geq$' if and only if $n_j/N_j$ is constant, i.e., the allocation is proportional. \end{pf*} \begin{pf*}{Proof of Theorem \ref{WithoutReplWithoutMalus}} Let $m\geq2$, $1\leq n_j\leq N_j$, $N_j\geq 2$, $j=1,\ldots,m$ be integers with $N=\sum_{j=1}^m N_j$, $n=\sum_{j=1}^m n_j<N$. In order to prove \begin{equation} \label{(1)} \sum_{j=1}^m {N_j^2\over N_j -1}{N_j - n_j \over n_j} > {N^2\over N-1}{N-n\over n}, \end{equation} it suffices to prove it for $m=2$. Indeed, by splitting off one stratum from the urn at a time, applying (\ref{(1)}) with $m=2$ each time, and observing that one still has `$\geq$' instead of `$>$' if $n=N$, one obtains (\ref{(1)}) for the general case. For $1\leq k\leq K$ and $1\leq \ell \leq L$, $K,L>1$, $k+\ell<K+L $ we will prove \begin{eqnarray} \label{(2)} {(K+L)^2\over K+L-1}\left( {K+L \over k+\ell}-1\right) - { K ^2\over K -1}\left( {K \over k }-1\right) - { L ^2\over L-1}\left( { L \over \ell}-1\right) < 0.
\end{eqnarray} To this end we rewrite the LHS of (\ref{(2)}) as $S+T$ with \begin{eqnarray*} S={ K+L \over K+L-1}\left( {(K+L)^2 \over k+\ell} -{ K^2 \over k} - {L ^2\over \ell}\right) = \left( {K+L \over K+L-1}\right) U, \end{eqnarray*} \begin{eqnarray*} T = -{(K+L)^2\over K+L-1}&+&\left( {1 \over K+L-1}-{1\over K-1}\right){K^2\over k}+{K^2\over K-1}\\ & + & \left( {1 \over K+L-1}-{1\over L-1}\right){L^2\over \ell}+{L^2\over L-1}.\\ \end{eqnarray*} Note that $\partial U / \partial k = - (K+L)^2 /(k+\ell)^2 + K^2 / k^2 \geq 0 $ iff $k\leq (K/L)\ell$. Consequently, $U$ is maximal for $k = (K/L)\ell$ and \begin{eqnarray*} U\leq{ 1 \over \ell}\left( {(K+L)^2 \over K / L +1} - KL - L ^2 \right) = 0. \end{eqnarray*} Clearly, $T$ is strictly increasing in~$k$ and~$\ell$ and hence \begin{eqnarray*} T < {1 \over K+L-1} \left( -(K+L)^2 + K+L\right)+{K^2 - K\over K-1}+{L^2-L\over L -1}=0, \end{eqnarray*} which completes the proof. \end{pf*} \begin{pf*}{Proof of Theorem \ref{WithoutReplWithMalus}} The statements corresponding to (c2) and (c3) follow straightforwardly from the fact that if terms $t_j>0$ have sum $\sum_{j=1}^m t_j=t$, then for all $a_j\geq 0$, not every $a_j=0$, we have $$ \sum_{j=1}^m { a_j\over t_j}=\sum_{j=1}^m \left(\sqrt{a_j\over t_j}\right)^2 \sum_{j=1}^m \left(\sqrt{t_j\over t }\right)^2 \geq {1\over t}\left(\sum_{j=1}^m \sqrt{a_j}\right)^2, $$ with `$=$' instead of `$\geq$' if and only if $t_j=t\sqrt{a_j}/\sum_{i=1}^m\sqrt{a_i}$, by Cauchy-Schwarz.
With $a_j=N_j^2$ and $t_j=N_j-1$ we obtain \begin{eqnarray*} \sum_{j=1}^m {N_j^2\over N^2} {p(1-p)\over {N_j\over N}n} {N_j-{N_j\over N}n\over N_j-1} &=&{p(1-p)(N-n)\over nN^2} \sum_{j=1}^m {N_j^2\over N_j-1} \\ &\geq& {p(1-p)(N-n)\over n N^2 } {N^2\over N -m}, \end{eqnarray*} which proves the statements corresponding to (c2), and with $a_j=1$ and $t_j=n_j$ we obtain \begin{eqnarray*} \sum_{j=1}^m{1\over m^2}{p(1-p)\over n_j} {{N\over m}-n_j\over{N\over m}-1}& = & {p(1-p)\over m^2(N-m)}\left(N\sum_{j=1}^m{1\over n_j}-m^2\right) \\ &\geq&{p(1-p)\over m^2(N-m)}\left(N{m^2\over n} - m^2\right) \\ & = & {p(1-p)\over n}{N-n\over N-m}, \end{eqnarray*} which proves the statements corresponding to (c3). In order to prove the statements corresponding to (c1), we observe that the function $$ \psi(x,y)={{1\over y}-1\over 1-x} $$ is strictly convex on $(0,1)\times (0,{3\over 4}]$, because it is strictly convex on every segment in $(0,1)\times (0,{3\over 4}]$. Namely, on a segment $\{t(x_1,{3\over 4}) + (1-t) (x_2, {3\over 4})\}$ the strict convexity is clear, while for a segment not contained in $y={3\over 4}$ we have $$ \left\vert\matrix{ {\partial^2\over \partial x^2} \psi (x,y) & {\partial^2\over \partial x\partial y} \psi (x,y)\cr {\partial^2\over \partial y\partial x} \psi (x,y) & {\partial^2\over \partial y^2} \psi (x,y)\cr }\right\vert = \left\vert\matrix{ {2(1-y)\over y (1-x)^3} & {-1\over y^2 (1-x)^2} \cr {-1\over y^2 (1-x)^2} & {2\over y^3(1-x)} \cr }\right\vert = {3-4y \over y^4(1-x)^4}. $$ Since the determinant of the Hessian of~$\psi$ is the product of its eigenvalues and the sum of its diagonal elements is the sum of its eigenvalues, the Hessian is positive definite outside $y={3\over 4}$, so the second derivative of $t\in[0,1]\mapsto \psi \bigl( t(x_1,y_1) + (1-t)(x_2,y_2)\bigr)$ is positive on $(0,1)$.
Consequently, applying Jensen's inequality to the random 2-vector $\pmatrix{X\cr Y}\colon j\in\{1,\ldots,m\}\mapsto \pmatrix{1/N_j\cr n_j/N_j}\in\R^2$ with $P(\{j\})={N_j/ N}$, $j=1,\ldots,m$, gives \begin{eqnarray*} \sum_{j=1}^m {{1\over n_j/N_j}-1\over 1-1/N_j}\, {N_j\over N} & = & E\psi (X,Y)\geq \psi (EX,EY)\\ & = & \psi ( {m\over N},{n\over N}) = {N\over N-m}\left({N\over n}-1\right), \end{eqnarray*} with `$=$' instead of `$\geq$' if and only if $1/N_j$ and $n_j/N_j$ are constant. This proves the statements corresponding to condition (c1). \end{pf*} \begin{pf*}{Proof of Theorem \ref{Minimax}} Under $n_j=n/m$, $N_j=N/m$ the variance of the stratification estimator becomes $$ \sum_{j=1}^m{1\over m^2}{p_j(1-p_j)\over {n\over m} } {{N\over m}-{n\over m}\over{N\over m}-1} $$ and $\sum p_jN_j = pN$ becomes $\sum p_j=mp$, while by Cauchy-Schwarz $$ \sum p_j(1-p_j)= mp - {\sum p_j^2\cdot \sum 1^2\over m} \leq mp -{1\over m}\left(\sum(p_j\cdot 1)\right)^2 = mp(1-p). $$ This proves Theorem \ref{Minimax}. If, in the situation of Theorem \ref{Minimax}, for every stratum the sample size is increased by~1, we have $n_{\rm new}=n+m$ and the new variance will not exceed $$ {\cal B}_{\rm new} ={N-m-n\over N-m}\,{p(1-p)\over n+m}\leq{N-n\over N-1}\,{p(1-p)\over n}. $$ \end{pf*} \begin{pf*}{Rest of Proof of Theorem \ref{Minimax2}} For the upper bound, let $\alpha_j, \beta_j, 1 \leq j \leq m,$ be positive reals with $\sum_{j=1}^m \beta_j =1.$ By the multiplier method of Lagrange we see that $\sum_{j=1}^m \alpha_jp_j(1-p_j)$ attains its maximum over $p_j $ under the side condition $\sum_{j=1}^m\beta_j(p_j - p)=0$ at $$ p_j= {1\over 2} + {\beta_j(p-{1\over 2}) \over \alpha_j \sum_{k=1}^m \alpha_k^{-1}\beta_k^2 } $$ with maximum value equal to $$ {1\over4}\left(\sum_{j=1}^m \alpha_j - {(2p-1)^2 \over \sum_{k=1}^m \alpha_k^{-1}\beta_k^2 }\right). 
$$ With $$\alpha_j= {N_j^2(N_j-n_j) \over N^2 (N_j -1)n_j}, \quad \beta_j={N_j\over N},\quad 1\leq j \leq m, $$ this shows that under proportional allocation, $n_j=(N_j/N)n$, \begin{equation}\label{a} \max_{\rm Nature} \var \sum_{j=1}^m {N_j\over N} {Y_j\over n_j} \leq {N-n \over 4(N-m)n} \left({N-m \over N^2}\sum_{j=1}^m {N_j^2 \over N_j -1} - (2p-1)^2\right) \end{equation} holds. The right-hand side of (\ref{a}) is equal to the upper bound in (\ref{upper bound}). Finally, as $${\cal B}={N-n\over N-m}\,{p(1-p)\over n},$$ the fact that the upper bound in (\ref{upper bound}) does not exceed $p(1-p)/n$ is equivalent to $$ {N-n\over 4(N-m)nN^2}\sum_{j=1}^m {N-mN_j\over N_j -1} \leq {n-m\over N-m}{p(1-p)\over n}. $$ Suppose $N_1\leq N_2\leq\cdots\leq N_m$. As $1\leq pN_1\leq {N_1-1}$, we have $$ {n-m\over N-m}{p(1-p)\over n}\geq {n-m\over (N-m)n}{{1\over N_1}\left(1-{1\over N_1}\right)}. $$ Consequently, it is sufficient to prove $$ \sum_{j=1}^m {N-mN_j\over N_j -1} \leq {4(N-m)nN^2\over N-n}\cdot{(n-m)(N_1-1)\over (N-m)n N_1^2}, $$ whose left-hand side is equal to the left-hand side in $$ (N-m)\sum_{j=1}^m { 1 \over N_j -1}-m^2 \leq (N-m) {m\over N_1 -1}-m^2 = {m(N-mN_1)\over N_1 - 1} $$ (remember $N_j\geq N_1$), so that it is sufficient to prove $$ m(N-mN_1)\left( {N_1\over N_1 - 1}\right)^2 \leq 4(n-m) {N^2\over N-n}. $$ This is true if $$ m\left(N-m{N\over n}\right)\left( {N/n\over N/n - 1}\right)^2 \leq 4(n-m) {N^2\over N-n} $$ (as $N_1\geq N/n$), which is equivalent to $$ mN(n-m) \leq 4n(N-n)(n-m) ,$$ which is true if $n\leq (3/4)N$, because $m\leq n$. \end{pf*} \Blind {\bf Acknowledgements} Andries Lenstra was supported by The Netherlands Foundation for Scientific Research (NWO), by EURANDOM, Eindhoven, the Netherlands, by Texas Tech University, Lubbock, TX, U.S.A., and by Oberlin College, Oberlin, OH, U.S.A. \EndBlind
\section{Introduction} High-throughput spectroscopic imaging techniques are playing an increasing role in scientific discovery, in areas as diverse as astronomy, biology, materials science, and physics. They also hold promise for commercial applications in a variety of areas such as healthcare, surveillance, consumer products, music, robotics, and autonomous vehicles. As a consequence, we are witnessing an exponential growth in generation rates of spectroscopic data, dramatically outpacing humans' ability to analyze them. The grand challenge is therefore to perform high-throughput unsupervised interpretation of spectroscopic data. As a motivating example, consider high-throughput materials discovery, in which hundreds or thousands of materials are simultaneously synthesized by depositing a system comprising different chemical elements (typically three or four) onto a substrate \cite{Green2013}. This is analogous to atomic spray painting, in which mixing three or four colors produces many new colors. In order to characterize the crystal structure of the synthesized materials, different spectroscopic imaging techniques such as X-ray diffraction or Raman spectroscopy are used. A key challenge is to subtract the X-ray and Raman patterns of the background substrate material from those of the synthesized materials; this is difficult because the synthesized materials can interact with the background material, a problem compounded by noise in the spectroscopic imaging. Furthermore, the background substrate can also exhibit complex patterns, as illustrated in Figure \ref{fig_RamanIntro}. These high-throughput experiments often lead to non-negative data. For example, spectroscopic data represent count or intensity quantities, which are naturally non-negative.
To capture the non-negative nature of the data, we introduce the exponentially-modified Gaussian (EMG) mixture model, which can be applied in arbitrary contexts where residuals are expected to be contaminated by a distribution with positive support. This is in stark contrast to commonly used robust residual models, like the Huber loss or $\ell_1$, which assume a symmetric contaminating distribution and are otherwise asymptotically biased \cite{Huber1964}. \textbf{Our contributions:} \textbf{1)} We propose the exponentially-modified Gaussian mixture model, and prove two convexity results for the negative logarithm of its density. \textbf{2)} We further propose an expectation-maximization algorithm to optimize an arbitrary model with respect to the EMG mixture. \textbf{3)} We contrast the properties of the EMG mixture with commonly-used robust residual models, such as the Huber loss and quantile regression, in a linear regression task. \textbf{4)} We incorporate the EMG mixture into a probabilistic matrix factorization (PMF) framework, motivated by applications in spectroscopy. \textbf{5)} We show the effectiveness of PMF in conjunction with the EMG mixture for the inference of background signals and systematic errors in data arising in X-ray diffraction and Raman spectroscopy. \section{Preliminaries} We will denote the normal distribution by $\mathcal{N}(\mu, \sigma)$, and the normal density evaluated at $x$ as $\mathcal{N}_{\mu,\sigma}(x)$. \subsection{The Exponentially-Modified Gaussian Distribution} \label{sec_EMG} Let the random variable $r$ be defined by \begin{equation} \label{eq_EMGRV} r = r_E + r_G, \ \ \ r_E \sim \text{Exp}(\lambda), \ \ \ r_G \sim \mathcal{N}(\mu, \sigma). \end{equation} The distribution of $r$ is the \textit{exponentially-modified Gaussian distribution} and has previously found applications in biology \cite{EMGBio}, psychology \cite{EMGPsych}, and finance \cite{Carr2008SaddlepointMF}. 
Its density is the convolution of an exponential and a Gaussian density and has the form \begin{equation} \label{eq_EMG} \text{EMG}_{\mu, \sigma, \lambda}(x) = \frac{\lambda}{2} e^{\frac{\lambda}{2} (2\mu+\lambda \sigma^2 - 2x)} \ \mbox{erfc} \left( \frac{ \mu + \lambda \sigma^2 - x}{\sqrt{2}\sigma} \right), \end{equation} where $\lambda$ is the rate parameter of the exponential random variable, and $\mu$ and $\sigma$ are the location and scale parameters of the Gaussian random variable, respectively. Given $\mu = 0$ and a constant $\sigma$, decreasing $\lambda$ (and thereby increasing the exponential mean $1/\lambda$) corresponds to an increased probability of large positive values. \subsection{Quantile Regression} \label{sec_quantile} An important tool of robust statistics is quantile regression. The $q$th regression quantile is defined as a solution to \begin{equation} \label{eq_quantile} \min_{b \in \mathbb{R}^n}\left[ \sum_{y_t \geq x_t b} q | y_t - x_t b | + \sum_{y_t < x_t b} (1-q) |y_t - x_t b| \right], \end{equation} see \cite{Bassett1978}. For $q = .5$, this is equivalent to least absolute error regression. When the noise distribution is non-Gaussian, quantile regressions can be used to build powerful and reliable estimators. See \cite{koenker2001quantile} for a discussion including applications to executive compensation and human birth weights. Later, we will estimate quantities related to $\mu$ in \eqref{eq_EMG}. This is different from the mean of the EMG, which is $\mu + 1/\lambda$. If the noise were distributed according to \eqref{eq_EMGRV}, and we knew both $\sigma$ and $\lambda$, we could estimate $\mu$ by computing the quantile with which $\mu$ coincides via the cumulative distribution function and \eqref{eq_quantile}. \textit{The key limitation of this approach for our estimation tasks is that there is no automatic way of choosing the correct quantile if the distributional parameters are not known}.
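The objective \eqref{eq_quantile} is a sum of asymmetric absolute (``pinball'') losses and is simple to implement; a minimal sketch (function names are ours, not from the paper) that recovers a $q$th sample quantile by minimizing this loss over a constant model:

```python
import numpy as np

def pinball_loss(q, residuals):
    """Check loss of eq. (eq_quantile): q|r| for r >= 0, (1-q)|r| for r < 0."""
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, q * np.abs(r), (1 - q) * np.abs(r)).sum()

def sample_quantile_by_minimization(q, y):
    """A minimizer of the pinball loss over a constant model b
    is a q-th sample quantile of y."""
    y = np.asarray(y, dtype=float)
    # the objective is piecewise linear, so a minimizer lies on a data point
    losses = [pinball_loss(q, y - b) for b in y]
    return y[int(np.argmin(losses))]
```

Searching only over the data points suffices because the objective is piecewise linear with breakpoints at the data; for $q = .5$ the minimizer is a sample median.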
Note also that we can generalize this approach outside of the regression setting, by replacing $x_t b$ in \eqref{eq_quantile} with an arbitrary model. In section \ref{sec_PMFForSpectroscopy}, we will use this fact to compute a quantile matrix factorization. \subsection{Probabilistic Matrix Factorization} \label{sec_ProbMat} Classical matrix factorization is the problem of finding two matrices $U,V$ such that $A \approx UV$. The most commonly studied problem is \begin{equation} \label{eq_classicalMF} \min_{U,V} \|A-UV\|_F^2 = \min_{U,V} \sum_{ij} (A_{ij} - (U V)_{ij})^2. \end{equation} Given the inner dimension $k$ of $U$ and $V$, the solution of this problem can be computed via the singular-value decomposition (SVD) of $A$, as proven by the Eckart-Young-Mirsky theorem. \eqref{eq_classicalMF} is equivalent to maximum likelihood estimation of the factor matrices under a Gaussian error assumption. Therefore, an equivalent probabilistic formulation of \eqref{eq_classicalMF} is \begin{equation} \label{eq_probMFLikelihood} \max_{U,V} \ \prod_{i,j} \mathcal{N}_{(UV)_{ij},\sigma} (A_{ij})^{I_{ij}}, \end{equation} where $I_{ij}$ is the indicator function on the set of indices $\{ij\}$ that are observable. Importantly, if merely a single entry of $A$ cannot be observed, classical approaches such as SVD cannot be applied to \eqref{eq_probMFLikelihood}. \cite{ProbMF} introduced probabilistic matrix factorization in the context of collaborative filtering problems. In their work, $U$ and $V$ were estimated with maximum a-posteriori (MAP) optimization. 
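The Eckart--Young--Mirsky optimality of the truncated SVD mentioned above is easy to illustrate numerically; a minimal sketch (variable names are ours):

```python
import numpy as np

def best_rank_k(A, k):
    """Rank-k truncated SVD: by Eckart-Young-Mirsky, the Frobenius-optimal
    solution of min_{U,V} ||A - UV||_F^2 for a fully observed A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]  # factors U_k diag(s_k) and V_k^T

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))  # exactly rank 5
A_noisy = A + 1e-3 * rng.standard_normal(A.shape)
Uk, Vk = best_rank_k(A_noisy, 5)
err = np.linalg.norm(A_noisy - Uk @ Vk)  # bounded by the noise level
```

The same computation is unavailable as soon as entries of $A$ are missing, which is precisely what motivates the probabilistic formulation \eqref{eq_probMFLikelihood}.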
\section{Exponentially-Modified Gaussian Mixture Model} \label{sec_ResidualModeling} \subsection{Motivation} \begin{comment} \begin{subfigure}[b][.5\linewidth] \includegraphics[width=\textwidth]{EMGPlot2.pdf} \end{subfigure} \begin{subfigure}[b][.5\linewidth] \includegraphics[width=\textwidth]{EMGDensitySurface.jpg} \end{subfigure} \end{comment} Our approach is motivated by the analysis of spectroscopic data $S$, which can be formally decomposed as \begin{comment} \begin{equation} \text{Spectrograms}(S) = \text{Peaks}(P) + \text{Background Signals}(B) \end{equation} \end{comment} \begin{equation} S = P + B, \end{equation} where $P$ are spectroscopic peaks, and $B$ are background signals. Even if the background model were perfect, there would be significant differences between the background model and the observed spectrograms; namely $S-B = P$, the non-background spectroscopic peaks. As we do not know the number of peaks a priori, incorporating peaks explicitly into a model can lead to errors in the analysis whose remedy requires human intervention. Our foremost goal is to eliminate the need for human intervention in the analysis of spectroscopic data. So instead, we model the residuals caused by peak signals probabilistically. Spectroscopic data represent counts or intensities and are therefore positive. The distribution of intensities for a wide range of spectroscopic data can be modeled by the exponential distribution $\text{Exp}(\lambda)$ with rate parameter $\lambda$. For this reason, the distribution of the residual at the $i$th data point $(S-B)_i$ of a \textit{perfect background fit to noiseless data} can be modeled, again formally, as \begin{equation} \label{eq_noiselessDistribution} (S-B)_i = P_i \sim p^{\text{noiseless}}_{\lambda, z_i} := (1-z_i) \delta + z_i \text{Exp}(\lambda), \end{equation} where $\delta$ is the Dirac delta distribution and $z_i \in \{0,1\}$.
The variable $z_i$ is $0$ if there is no peak at data point $i$, and $1$ if a peak can be observed. If the entire data set did not have a single peak, the ideal background model would explain the entire data, so that the residual would be $S-B \sim \delta$. Similarly, if all data points had observable peaks, $S-B \sim \text{Exp}(\lambda)$. Data from real experiments is noisy. Here, we assume additive Gaussian noise. As a result, the residual density is the convolution of the Gaussian density with $p^{\text{noiseless}}_{\lambda, z}$: \begin{equation} \label{eq_mainDensity} \begin{split} \text{EMGM}_{\mu, \sigma, \lambda, z}(x) &:= [\mathcal{N}_{\mu,\sigma} \ast p^{\text{noiseless}}_{\lambda, z}](x) \\ =(1-&z) \mathcal{N}_{\mu, \sigma}(x) + z \ \text{EMG}_{\mu, \sigma, \lambda}(x). \end{split} \end{equation} The EMG density is defined as the convolution of an exponential and a Gaussian density (see Section \ref{sec_EMG}). Thus, for any model $M(\theta)$, with parameters $\theta$, the likelihood of our model with data $D = \{D_i |\ i = 1, \ldots, n \}$ and latent variables $z = \{z_i | \ i = 1, \ldots, n\}$ is \begin{equation} \label{eq_likelihood} \begin{split} L(\theta; \sigma, \lambda, z) &= P( D | M(\theta), \sigma, \lambda, z) \\ &= \prod_{i=1}^n \text{EMGM}_{M_{i}(\theta), \sigma, \lambda, z_{i}}( D_{i} ). \end{split} \end{equation} Equation \eqref{eq_likelihood} is the exponentially-modified Gaussian mixture model we propose for dealing with a contaminating distribution with positive support. We introduce the notation $M(\theta)$ to highlight that an arbitrary model can be optimized with respect to the EMG mixture model. In our experiments, we let $M$ be a line for linear regression and a low-rank matrix for the spectroscopic applications. \begin{remark} Even though we focus on scientific applications, note that spectroscopic data also abound in other fields such as audio source separation; see, e.g., \cite{Virtanen2007}.
\end{remark} \subsection{Theoretical Properties} \label{sec_theory} We provide theoretical insights into the mixture by studying the properties of the EMG distribution. Similar to the Gaussian distribution, the EMG distribution defines a location-scale family. We define, with a slight abuse of notation, the ``standard" EMG density EMG$_\alpha(x)$, which depends on only one parameter $\alpha$. This simplifies the proofs of the following statements, minimizes analytical clutter, and elucidates the function of the individual parameters. \begin{definition} Let the standard EMG density be \begin{equation} \label{eq_stdEMG} \text{EMG}_{\alpha}(x) = \frac{\alpha}{2} e^{\alpha (\alpha/2 - x)} \ \mbox{erfc} \left( \frac{ \alpha - x}{\sqrt{2}} \right). \end{equation} \end{definition} \begin{Theorem} Given the standard EMG density \eqref{eq_stdEMG}, we have \begin{equation} \label{eq_stdEMGRelation} \text{\emph{EMG}}_{\mu, \sigma, \lambda}(x) = \frac{1}{\sigma} \text{\emph{EMG}}_{(\lambda \sigma)} \left( \frac{x-\mu}{\sigma} \right). \end{equation} \end{Theorem} \begin{proof} See supplementary material. \end{proof} The negative logarithm of the standard EMG density is \begin{equation} \begin{split} - \log \text{EMG}_{\alpha}(x) = - \log \frac{\alpha}{2} - \alpha \left( \alpha/2 - x \right) \\ - \log \text{erfc} \left( \frac{ \alpha - x }{\sqrt{2}} \right). \end{split} \end{equation} \begin{Theorem} \label{lem_convexX} The negative logarithm of the EMG density is strictly convex in $x$ and $\mu$. \end{Theorem} \begin{proof} See supplementary material. \end{proof} \begin{Theorem} \label{lem_convexL} The negative logarithm of the EMG density is strictly convex in $\lambda$ satisfying $1/\lambda > \sigma$. \end{Theorem} \begin{proof} See supplementary material. \end{proof} \begin{remark} The assumption $1/\lambda > \sigma$ implies that the variance of the exponential component is greater than the variance of the Gaussian component.
In our applications, this is equivalent to assuming a signal-to-noise ratio greater than one. \end{remark} Finally, note that the convexity results of Theorem \ref{lem_convexX} and Theorem \ref{lem_convexL} carry over to the negative logarithm of \eqref{eq_likelihood} because sums of convex functions are convex. Therefore, any non-convexity with respect to the model parameters $\theta$ is strictly due to the model $M(\theta)$ to be optimized, and not the EMGM. \subsection{Expectation-Maximization Algorithm} \label{sec_EMAlgorithm} Maximizing the logarithm of \eqref{eq_likelihood} directly is intractable because of the discrete variables $z_i$. Instead, we optimize the log-likelihood of our model using an expectation-maximization algorithm. To this end, we need to define $\epsilon := P(z_i = 1)$ and $\gamma_i := \mathbb{E}_{z_i | \theta, \sigma, \lambda, \epsilon} [z_i]$. We can then take the expectation of the logarithm of \eqref{eq_likelihood} over all $z_i$: \begin{equation} \label{eq_exptectedLikelihood} \begin{split} \mathbb{E}_{z|\theta, \sigma, \lambda, \epsilon} ( \log L ) &= \sum_{i=1}^n (1-\gamma_i) \log \mathcal{N}_{M_i(\theta), \sigma}(D_i) \\ &+ \gamma_i \log \text{EMG}_{M_i(\theta), \sigma, \lambda}(D_i). \end{split} \end{equation} The expectation step is \begin{equation} \begin{split} \gamma_i = \frac{\epsilon \text{EMG}_{M_i(\theta), \sigma, \lambda}(D_i)}{(1-\epsilon) \mathcal{N}_{M_i(\theta),\sigma}(D_i) + \epsilon \text{EMG}_{M_i(\theta), \sigma, \lambda}(D_i)} \end{split} \end{equation} \begin{comment} \gamma_{0} &= \frac{(1-\pi) \mathcal{N}_{M_i(\theta),\sigma}(D_i)} {(1-\pi) \mathcal{N}_{0,\sigma}(r_i) + \pi \text{EMG}_{0, \sigma, \lambda}(D_i)} \\ \end{comment} The maximization step optimizes \eqref{eq_exptectedLikelihood} for the model parameters $\theta$ and the continuous mixture model parameters $\sigma$ and $\lambda$, and updates the mixture probability $\epsilon$. The maximization works in two steps.
First, $\sigma, \lambda$ are held fixed while $\theta$ is optimized. Then, $\theta$ is held fixed while $\sigma, \lambda$ are optimized. Recall that Theorem \ref{lem_convexL} gives a condition for when the optimization of $\lambda$ is a strictly convex problem. If the assumption of the theorem does not hold, we can only guarantee finding a local optimum of this optimization problem. We use a gradient descent on \eqref{eq_exptectedLikelihood} for both subproblems of the maximization step. This is not guaranteed to find the true maximum with respect to all unobserved variables. However, it is guaranteed to improve the likelihood if the parameters do not already constitute a stationary point. This is a type of generalized EM (GEM) algorithm and is guaranteed to improve the true likelihood at each iteration until a stationary point is found \cite{ Dempster19977, Neal:1999:VEA:308574.308679}. The step size of the gradient descent algorithm is chosen using a backtracking line search to ensure descent at every iteration of all optimization procedures in the M-step. We re-scale all gradients by the absolute value of their second derivatives, which makes the descent algorithm scale invariant and accelerates it in practice \cite{Bertsekas2008}. All gradient evaluations are linear in the number of parameters. As the EMGM is twice differentiable and strongly convex in $x$ and $\mu$ (see Theorem \ref{lem_convexX}), the number of gradient evaluations scales logarithmically with the required precision. \section{Related Work} Herein we describe prior work in robust statistics, highlighting commonalities and important differences with respect to our work. Optimization of a model with the likelihood given by \eqref{eq_likelihood} can be viewed as a location estimation problem of an asymmetrically-contaminated Gaussian distribution. 
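To make the algorithm of Section \ref{sec_EMAlgorithm} concrete for exactly this location-estimation problem, here is a sketch of one generalized-EM iteration for a constant model $M_i(\theta) = \theta$. This is a simplified illustration, not the authors' implementation: $\sigma$, $\lambda$, and the mixture probability $\epsilon$ are held fixed, and the backtracking line search is replaced by a fixed step size.

```python
import math

def log_normal_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_emg_pdf(x, mu, sigma, lam):
    # logarithm of the EMG density, eq. (eq_EMG)
    z = (mu + lam * sigma ** 2 - x) / (math.sqrt(2) * sigma)
    return (math.log(lam / 2) + 0.5 * lam * (2 * mu + lam * sigma ** 2 - 2 * x)
            + math.log(math.erfc(z)))

def e_step(data, theta, sigma, lam, eps):
    """gamma_i: posterior probability that point i is contaminated (z_i = 1)."""
    gam = []
    for d in data:
        g = (1 - eps) * math.exp(log_normal_pdf(d, theta, sigma))
        e = eps * math.exp(log_emg_pdf(d, theta, sigma, lam))
        gam.append(e / (g + e))
    return gam

def expected_nll(data, gam, theta, sigma, lam):
    """Negative expected complete-data log-likelihood, eq. (eq_exptectedLikelihood)."""
    return -sum((1 - g) * log_normal_pdf(d, theta, sigma)
                + g * log_emg_pdf(d, theta, sigma, lam)
                for d, g in zip(data, gam))

def m_step_theta(data, gam, theta, sigma, lam, iters=100):
    """Gradient descent in theta alone (numeric gradient); the objective is
    convex in theta, with curvature at most len(data) / sigma**2, which
    motivates the fixed step size below."""
    h, step = 1e-6, sigma ** 2 / len(data)
    for _ in range(iters):
        grad = (expected_nll(data, gam, theta + h, sigma, lam)
                - expected_nll(data, gam, theta - h, sigma, lam)) / (2 * h)
        theta -= step * grad
    return theta
```

Alternating `e_step` and `m_step_theta` monotonically improves the expected objective, mirroring the GEM guarantee cited above.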
In particular, we want to estimate $\mu$ for the distribution $F := (1-\epsilon) \mathcal{N}(\mu,\sigma) + \epsilon C$, where $C$ is a contaminating distribution. This is related to work in robust statistics starting with Huber's seminal paper \cite{Huber1964}, in which he introduced the function \begin{equation} \label{eq_Huber} \rho_\delta(x) = \begin{cases} \frac{1}{2}x^2 & \text{if } |x| \leq \delta \\ \delta (|x| - \frac{1}{2} \delta) & \text{if } |x| > \delta \end{cases}. \end{equation} Huber proved that the minimizer of $\sum_k \rho_\delta(x_k - \xi)$ over $\xi$ is an optimal estimator of the population mean of a contaminated normal distribution. In particular, he proved that this estimator achieves the minimum asymptotic variance among all translation invariant estimators on contaminated normal distributions of the form $F = (1-\epsilon) G + \epsilon H$, where $G$ is the normal distribution, and $H$ is a contaminating distribution. Critically, this optimality result was derived with the assumption of a symmetric contaminating distribution $H$. The estimator is not consistent if the contaminating distribution is asymmetric. Therefore, it is not guaranteed to work well in our setting. \begin{comment} "the "trimming procedure" is rather sensitive to the behavior of $F$ at the rejection points $\pm k$; a high density at these points will play havoc with the estimate. This shortcoming seems to be common also to other rejection procedures; here one might avoid it by smoothing $\rho$ at $\pm k$." In addition, there is no obvious way to automatically learn the parameter of the Huber loss from data. \end{comment} In recent and related work, \cite{Fujisawa2008, Kanamori2015} proposed using scoring rules to guard regression algorithms against substantial contamination. Remarkably, these works make no explicit assumptions about the family of contaminating distributions.
However, the methods rely on the $L_2$ inner product of the contaminating distribution and the regular noise distribution to be ``extremely small" \cite{Kanamori2015}. In other words, all contaminated datapoints have to be exceedingly unlikely under the regular noise assumption. This is undoubtedly not true in our scenario \eqref{eq_mainDensity}: The inner product of the Gaussian and the exponential is equal to EMG$_{0, \sigma, \lambda}(0)$ which is not small in general, except for extremely small $\lambda$. \cite{Takeuchi2002} introduced the Robust Regression for Asymmetric Tails (RRAT) algorithm. The algorithm can be used to estimate the conditional mean $\mathbb{E}(y | x)$ in a regression setting, and is based on quantile regression (see Section \ref{sec_quantile}). It uses the fact that even with an asymmetric contaminating distribution, there is a quantile which coincides with the mean. An advantage of RRAT is that it can deal even with heavy-tailed asymmetric contamination, as long as its first moment is defined. However, this approach has two limitations. First, though the noise distribution can be asymmetric, the algorithm assumes a zero mean. This is not necessarily true in the case of the EMGM model. Secondly, even if the approach could be adapted to this setting, its key limitation is that its hyper-parameters cannot be chosen automatically. In contrast to the previously described methods, \textit{all parameters of our model can be automatically inferred from data by optimizing the likelihood with the EM algorithm (see Section \ref{sec_EMAlgorithm}). This is key for automating scientific discovery in high-throughput settings.} \section{Experiments} \label{sec_Experiments} \begin{figure} \begin{center} \includegraphics[width=.8\linewidth]{LinearRegressionExample.pdf} \caption{A sample dataset with exponentially-distributed contamination and regression results for several residual models. 
Most competing methods exhibit positive bias, while the EMGM is close to the ground truth. } \label{fig_linearRegression} \end{center} \end{figure} \subsection{Linear Regression} We first study the behavior of the EMG mixture residual model on a linear regression task. Specifically, we are given data points $x, y \in \mathbb{R}$ and want to infer $a,b$ so that $y = ax+b$. In contrast to the traditional setting, $y$ is not only corrupted by Gaussian noise, but also by a contaminating distribution with positive support. In addition to exponential contamination, we provide results with log-normal contamination to study the behavior of the EMGM algorithm if its distributional assumptions are not satisfied. In both cases, we let \begin{equation} \label{eq_regressionData} y_i = \frac{\pi}{2} x_i + e + G_{i} + \mathbbm{1}_C(i) C_{i}, \end{equation} where $G_{i} \sim \mathcal{N}(0, 1/2)$, $\mathbbm{1}_C$ is the indicator on the set of contaminated indices, and $C_{i}$ is drawn from the contaminating distribution. For all experiments and a given data size $N$, we contaminate 25\% of all data points. The regression coefficients are initialized to $a = 1, b = 0$. The initial mixture probability of the EMGM is set to 50\%, and its initial parameters are $\mu = 0, \sigma = 1$, and $\lambda = 1$. In Figure \ref{fig_linearRegression}, we show regression results on a sample dataset generated using \eqref{eq_regressionData}. Please see \cite{Takeuchi2002} for details on RRAT. Notably, $\ell2$ and RRAT perform worst. The former is not robust to outliers, and the data breaks RRAT's assumption of zero-mean noise. Even tinkering with the number and type of quantiles as input to RRAT could not improve its performance on this data. We therefore left it out of the evaluations in this paper. The second tier of residual models consists of $\ell1$ and the Huber loss with $\delta = .2$, which visually overlap in the figure.
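For concreteness, the sampling scheme of \eqref{eq_regressionData} can be sketched as follows; the design distribution of the $x_i$, the reading of the intercept $e$ as Euler's number, and the contamination scale are our assumptions for illustration:

```python
import math
import random

def make_data(n, contam_frac=0.25, slope=math.pi / 2, intercept=math.e,
              noise_sd=0.5, contam_mean=2.0, seed=0):
    """Draw (x_i, y_i) pairs following eq. (eq_regressionData):
    Gaussian noise G_i ~ N(0, 1/2) on every point, and exponential
    contamination C_i on a fixed fraction of the indices."""
    rng = random.Random(seed)
    xs, ys = [], []
    for i in range(n):
        x = rng.uniform(-3.0, 3.0)                   # design distribution (assumed)
        y = slope * x + intercept + rng.gauss(0.0, noise_sd)
        if i < contam_frac * n:                      # contaminated index set
            y += rng.expovariate(1.0 / contam_mean)  # positive-support contamination
        xs.append(x)
        ys.append(y)
    return xs, ys
```

On such a sample, a plain $\ell2$ fit overestimates the intercept by roughly the contamination fraction times the contamination mean, consistent with the $\ell2$ bias pattern in Table \ref{tab_regressionResults} under our assumed contamination scale.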
Though robust against symmetric contamination, these models exhibit positive bias on this data. Lastly, quantile regression with $q=.2$ and EMGM are visually closest to the ground truth. \begin{table} \caption{ Error statistics of estimation of $a, b$ with different contaminating distributions. MAE is mean absolute value, mean is the mean error, std is the standard deviation of the error. Bold is best.} \small \centering \resizebox{.95\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c| } \multicolumn{5}{c}{Exponentially-Distributed Contamination} \\ \hline $N = 2^8$ & $\ell2$ & Huber & $\ell1$ & Quant .2 & EMGM \\ \hline MAE a & 1.23e-01 & 6.15e-02 & 6.74e-02 & 6.25e-02 & \textbf{5.08e-02} \\ mean a & -2.96e-03 & -6.65e-03 & -5.76e-03 & 7.15e-03 & \textbf{-2.93e-03} \\ std a & 1.58e-01 & 7.65e-02 & 8.23e-02 & 7.95e-02 & \textbf{6.50e-02} \\ MAE b & 5.02e-01 & 1.63e-01 & 1.59e-01 & 3.27e-01 & \textbf{3.37e-02} \\ mean b & -5.02e-01 & -1.63e-01 & -1.59e-01 & 3.27e-01 & \textbf{-4.63e-04} \\ std b & 7.25e-02 & 4.50e-02 & 4.89e-02 & 4.85e-02 & \textbf{4.18e-02} \\ \hline $N = 2^{14}$ & $\ell2$ & Huber & $\ell1$ & Quant .2 & EMGM \\ \hline MAE a & 1.49e-02 & 7.90e-03 & 8.32e-03 & 8.56e-03 & \textbf{6.90e-03} \\ mean $a$ & -8.42e-04 & 8.62e-05 & \textbf{2.05e-04} & 2.38e-04 & 2.93e-04 \\ std $a$ & 1.84e-02 & 9.96e-03 & 1.03e-02 & 1.07e-02 & \textbf{8.80e-03} \\ MAE b & 5.01e-01 & 1.64e-01 & 1.61e-01 & 3.28e-01 & \textbf{3.80e-03} \\ mean $b$ & -5.01e-01 & -1.64e-01 & -1.61e-01 & 3.28e-01 &\textbf{-4.22e-04} \\ std $b$ & 8.46e-03 & 5.07e-03 & 5.45e-03 & 5.42e-03 & \textbf{4.70e-03} \\ \hline \multicolumn{5}{c}{Log-Normally Distributed Contamination} \\ \hline $N = 2^8$ & $\ell2$ & Huber & $\ell1$ & Quant .2 & EMGM \\ \hline MAE a & 1.20e-01 & 6.33e-02 & 7.04e-02 & 6.73e-02 & \textbf{5.67e-02} \\ mean a & 6.71e-03 & -7.98e-03 & -8.27e-03 & \textbf{-1.04e-03} & -5.29e-03 \\ std a & 1.56e-01 & 7.87e-02 & 8.60e-02 & 8.29e-02 & \textbf{7.12e-02} \\ MAE b & 4.18e-01 & 1.59e-01 & 
1.56e-01 & 3.26e-01 & \textbf{3.98e-02} \\ mean b & -4.18e-01 & -1.59e-01 & -1.56e-01 & 3.26e-01 & \textbf{-2.95e-02} \\ std b & 7.61e-02 & \textbf{3.89e-02} & 4.22e-02 & 4.69e-02 & 4.13e-02 \\ \hline $N = 2^{14}$ & $\ell2$ & Huber & $\ell1$ & Quant .2 & EMGM \\ \hline MAE a & 1.50e-02 & 7.87e-03 & 8.70e-03 & 8.27e-03 & \textbf{6.74e-03} \\ mean a & -1.90e-03 & 7.27e-05 & \textbf{2.60e-05} & -6.46e-04 & -2.96e-04 \\ std a & 1.84e-02 & 9.64e-03 & 1.05e-02 & 1.02e-02 & \textbf{8.46e-03} \\ MAE b & 4.13e-01 & 1.59e-01 & 1.57e-01 & 3.28e-01 & \textbf{1.87e-02} \\ mean b & -4.13e-01 & -1.59e-01 & -1.57e-01 & 3.28e-01 & \textbf{-1.87e-02} \\ std b & 8.93e-03 & 5.02e-03 & 5.39e-03 & 6.33e-03 & \textbf{4.99e-03} \\ \hline \end{tabular} } \label{tab_regressionResults} \end{table} For a quantitative comparison of the methods, consider Table \ref{tab_regressionResults}. We compute the mean absolute error (MAE), the mean error (mean), and the standard deviation of the error (std) of the estimation of both $a$ and $b$, by considering the regression results of 256 realizations of \eqref{eq_regressionData} for two data sizes $N$. All methods seem to be able to estimate $a$ well. The bias of most methods comes to light when considering $b$. Indeed, $\ell2$, $\ell1$, and Huber have a negative mean error, indicating positive bias, across both dataset sizes and both contaminating distributions. The $20\%$ quantile regression has negative bias, which is also not mitigated by more data. In contrast, the EMGM exhibits small bias and mean absolute error for the exponentially contaminated datasets. Further, the MAE reduces as the data size increases, indicating that the EMGM correctly adapts its distributional parameters to the data. Figure \ref{fig_convergenceRegression} also highlights this trend. It depicts the MAE vs data size for exponential (left) and log-normal (right) contaminations. 
Notably, the left plot shows that the MAE of the EMGM estimate of $b$ decays approximately as $1/\sqrt{N}$. The other estimates do not exhibit this convergent behavior. Surprisingly, even if the data is contaminated by log-normal noise, the MAE of EMGM exhibits a strong reduction with data size until it plateaus at a low error level (see Figure \ref{fig_convergenceRegression} (right) and Table \ref{tab_regressionResults}). \begin{figure} \begin{center} \includegraphics[width=.49\linewidth]{exponentialConvergencePlot.pdf} \includegraphics[width=.49\linewidth]{lognormalConvergencePlot.pdf} \caption{ MAE of $b$ as a function of data size $N$. Left: Exponentially-distributed contaminations. Right: Log-normally-distributed contaminations. With increasing data size, EMGM exhibits convergent behavior for exponential contaminations and achieves a low level of error for log-normally distributed contaminations.} \label{fig_convergenceRegression} \end{center} \end{figure} \subsection{Probabilistic Matrix Factorization for Spectroscopy} \label{sec_PMFForSpectroscopy} In complex spectroscopic datasets, several types of background signals and systematic errors can contribute to the observed data. We assume that these unobserved background components combine linearly with each other and the spectroscopic peaks to form the observed spectrograms. Therefore, we model the background of the entire dataset B as a low-rank matrix: \begin{equation} \label{eq_backgroundModel} B = UV, \end{equation} where $U \in \mathbb{R}^{n \times k}$, $V \in \mathbb{R}^{k \times m}$. The columns of $U$ can be interpreted as the individual background signals, while the rows of $V$ are the activation of each background signal per spectrogram in the dataset. 
Combining the matrix decomposition \eqref{eq_backgroundModel} with the residual model \eqref{eq_likelihood}, we obtain \begin{equation} \label{eq_ModelProb} P( S | U, V, \sigma, \lambda, z) := \prod_{ij} \text{EMGM}_{(UV)_{ij}, \sigma, \lambda, z_{ij} }( S_{ij}). \end{equation} $S \in \mathbb{R}^{n \times m}$ is the matrix of measurements, whose $m$ columns consist of spectrograms of length $n$. Estimating the factors $U, V$ is a type of probabilistic matrix factorization (see Section \ref{sec_ProbMat}). In the experiments, the factor matrices are optimized with the expectation-maximization algorithm proposed in Section \ref{sec_EMAlgorithm}, with two minor additions. First, we restrict the columns of $U$ to belong to a Reproducing-Kernel Hilbert Space (RKHS), as background components often exhibit special characteristics such as smoothness. In our experiments, we use an RKHS generated by the RBF kernel, whose length scale is large enough to permit a low-rank factorization of the kernel matrix $K_{ij} = k(x_i, x_j)$. We calculate the factorization with an SVD of $K$ as a pre-computation. Denote by $W$ the matrix of left singular vectors corresponding to singular values above a specified precision. Then $W (W^T U)$ is the projection of the columns of $U$ onto the RKHS. This matrix-matrix product is $O(nkr)$, where $r$ is the numerical rank of $K$ and $k$ is the number of columns of $U$. The projection occurs in the gradient steps of the maximization step of the EM algorithm, leading to a projected gradient descent algorithm. Secondly, we introduce a half-Normal prior on $\sigma$: $\sigma \sim \mathcal{N}_+(0, \sigma_\sigma)$. This makes the inference of $\sigma$ via the gradient descent algorithm, a generally non-convex problem, better posed and encourages solutions with small Gaussian noise variance. We proceed to estimate $\sigma$ with the algorithm described in Section \ref{sec_EMAlgorithm}.
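The RKHS projection just described can be sketched as follows; the grid, the length scale, and the singular-value tolerance are placeholder choices of ours:

```python
import numpy as np

def rbf_kernel(xs, length_scale):
    """RBF kernel matrix K_ij = k(x_i, x_j) on a 1-D grid."""
    d = xs[:, None] - xs[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def rkhs_projector(xs, length_scale, tol=1e-8):
    """Left singular vectors W of K with singular values above the
    tolerance; W (W^T U) then projects the columns of U onto the
    (numerically) low-rank kernel space.  W is a pre-computation."""
    K = rbf_kernel(xs, length_scale)
    W, s, _ = np.linalg.svd(K)
    r = int(np.sum(s > tol * s[0]))  # numerical rank of K
    return W[:, :r]

xs = np.linspace(0.0, 1.0, 200)
W = rkhs_projector(xs, length_scale=0.2)
U = np.column_stack([np.sin(2 * np.pi * xs),             # smooth column
                     np.sign(np.sin(40 * np.pi * xs))])  # rough column
U_proj = W @ (W.T @ U)  # O(nkr) projection of the columns of U
```

The smooth column passes through the projection nearly unchanged, while the rough column is strongly damped, which is the intended effect on candidate background components.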
\subsubsection{Synthetic Spectroscopic Data} \label{sec_synData} \begin{figure} \begin{center} \includegraphics[width=.95\linewidth]{PMFExample4.pdf} \caption{Sample background estimates for a dataset with 128 synthetically generated X-ray diffraction patterns. The matrix factorization driven by the EMG mixture closely follows the ground truth, while most other methods overestimate it significantly, thereby absorbing important peaks from the relevant non-background signals. The estimates of the regression quantile objectives (Quant with $q = .3$, Quant Low with $q = .2$) are significantly better though still not as accurate as EMGM. In contrast to the EMGM, there is no automatic procedure to choose the best parameters for the quantile objective.} \label{fig_syntheticExample} \end{center} \end{figure} \begin{table} \caption{ Error statistics of the spectroscopic background estimation over 32 datasets of $N = 128$ spectrograms with several ranks $k = 1,2,3$. EMGM outperforms the other methods in the majority of cases.} \label{tab_syntheticErrors} \begin{center} \resizebox{.95\linewidth}{!}{ \begin{tabular}{ |c|c|c|c|c|c| } \hline $k = 1 $ & $\ell2$ & $\ell1$ & Quant & Quant Low & EMGM \\ \hline mean $\ell2$ & 2.03e-01 & 1.12e-01 & 6.46e-02 & 4.65e-02 & \textbf{3.93e-02} \\ std $\ell2$ & 1.83e-02 & 1.27e-02 & 8.95e-03 & 7.65e-03 & \textbf{5.34e-03} \\ mean $\ell1$ & 1.41e-01 & 7.05e-02 & 4.00e-02 & \textbf{2.94e-02} & 3.02e-02 \\ std $\ell1$ & 1.19e-02 & 7.99e-03 & 5.39e-03 & 4.62e-03 & \textbf{4.11e-03} \\ \hline $k = 2$ & $\ell2$ & $\ell1$ & Quant & Quant Low & EMGM \\ \hline mean $\ell2$ & 2.41e-01 & 1.39e-01 & 8.51e-02 & 7.54e-02 & \textbf{6.54e-02} \\ std $\ell2$ & 2.30e-02 & 2.19e-02 & \textbf{2.15e-02} & 2.57e-02 & 2.88e-02 \\ mean $\ell1$ & 1.36e-01 & 8.49e-02 & 5.12e-02 & 4.43e-02 & \textbf{4.00e-02} \\ std $\ell1$ & 1.22e-02 & 1.30e-02 & 1.22e-02 & 1.26e-02 & \textbf{1.09e-02} \\ \hline $k = 3$ & $\ell2$ & $\ell1$ & Quant & Quant Low & EMGM \\ \hline mean
$\ell2$ & 2.47e-01 & 1.38e-01 & 8.63e-02 & 7.67e-02 & \textbf{6.05e-02} \\ std $\ell2$ & 2.88e-02 & 3.06e-02 & 1.81e-02 & \textbf{1.79e-02} & \textbf{1.79e-02} \\ mean $\ell1$ & 1.44e-01 & 8.51e-02 & 5.34e-02 & 4.83e-02 & \textbf{3.75e-02} \\ std $\ell1$ & 1.21e-02 & 1.74e-02 & 1.13e-02 & 1.28e-02 & \textbf{8.57e-03} \\ \hline \end{tabular} } \end{center} \end{table} We study the behavior of PMF using the EMG mixture on a synthetic spectroscopic dataset created using the Materials Project, an open database which currently contains information for 83,989 inorganic compounds \cite{Jain2013}. We randomly selected the spectrograms of a subset of them to generate synthetic data for the background inference task. In particular, we generated a set of datasets with $N$ spectrograms, which consist of 1024 datapoints each. Each spectrogram is a linear combination of an X-ray diffraction pattern, up to three synthetically generated background components ($k = 1,2,3$), and Gaussian noise. We perform the matrix factorization using five different objective functions: $\ell_2, \ell_1$, Huber, quantile, and the EMG mixture. We compute factorizations with the quantile objective function for $q = .3$ and $q = .2$, referred to as Quant and Quant Low, respectively. Regarding parameter initialization, the length scale of the RBF kernel is $l = 5$. The factor matrices are set to $U = W(W^T R_U)$ and $V = R_V$, where the elements of $R_U \in \mathbb{R}^{n \times k}$, $R_V \in \mathbb{R}^{k \times m}$ are drawn i.i.d. from the uniform distribution on $[0,1]$. The rank $k$ is set to its true value in each of the three cases $k = 1,2,3$. Further, the EMGM's distributional parameters are initialized to their maximum-likelihood estimates on the residuals of the quantile factorization with $q=.3$. See Figure \ref{fig_syntheticExample} for a visual comparison of the five methods on a sample spectrogram.
The result of the EMG mixture follows the ground truth closely, while the other objective functions, except for quantile, overestimate the ground truth of the background significantly. For a more quantitative analysis, see Table \ref{tab_syntheticErrors}. For the results of each objective function, we record the vector $\ell2$ norm and the vector $\ell1$ norm of the residual matrices. To summarize these results, Table \ref{tab_syntheticErrors} shows the average and standard deviation of the error norms over 32 synthetically-generated datasets with $N = 128$ spectrograms. We report the results of the two quantile factorizations in place of the Huber results, which are comparable to those of $\ell1$. The quantile matrix factorization with $q = .2$ is competitive for $k = 1$, especially considering the $\ell 1$ norm. It is important to note, however, that it is a priori not clear which quantile will be accurate. Also, the higher $\ell 2$ norm errors indicate that the estimates are not uniformly accurate, but are unstable across a dataset: a disadvantage of estimators based on low quantiles. For $k = 2,3$ the EMGM outperforms all other methods. Figure \ref{fig_convergenceFactorization} shows the mean vector $\ell2$ error of several residual models. While Quant Low ($q=.2$) starts out very well, it plateaus. In contrast, the EMGM performs better with increasing amounts of data. \begin{figure} \begin{center} \includegraphics[width=.9\linewidth]{FactorizationConvergence.pdf} \caption{ Mean vector $\ell2$ error of the synthetic background estimation task as a function of the number of spectrograms $N$ for $k = 2$. EMGM improves with increasing data size.} \label{fig_convergenceFactorization} \end{center} \end{figure} \subsection{X-Ray Diffraction and Raman Spectroscopy Data} \label{sec_realWorld} We illustrate the efficacy of the EMG mixture model in inferring the background in a real-world X-ray diffraction (XRD) dataset and a real-world Raman dataset, arising in materials science.
The Raman dataset has 2100 spectrograms, each consisting of 1024 points. The XRD dataset has 186 spectrograms, each consisting of 6400 points. Prior work on background subtraction in spectroscopy is mostly based on the application of smoothing operators on one spectrogram of a dataset at a time. For example, \cite{NewBckGrnd} uses a cubic spline interpolation on a set of heuristically chosen nodes. \cite{ZhaoRamanBackground} introduced a method called I-ModPoly. It works by iteratively fitting a low-order polynomial to a spectrogram and updating a noise level estimate. The datapoints above this noise level are ignored for the polynomial fitting. In addition to smooth background signals, some datasets are created on a substrate with its own spectroscopic signature, which is shared among all spectrograms of the dataset. We treat this spectroscopic signature as a background signal so that it can be subtracted, facilitating the analysis of the scientifically interesting components of the signal. Figure \ref{fig_realComparison} (bottom) shows an XRD spectrogram in which the three most intense peaks come from a background source. Figure \ref{fig_realComparison} (top) shows a sample Raman spectrogram, and background models calculated by the approach of Section \ref{sec_PMFForSpectroscopy} with quantile factorization ($q = .2$) and EMGM, and the method of \cite{ZhaoRamanBackground} with two different polynomial degrees. The EMGM result is the only one that does not exhibit drawbacks (see caption for details). Figure \ref{fig_realComparison} (bottom) shows a sample XRD spectrogram with background models calculated by the same methods as above. Interestingly, the EMGM and quantile methods both infer the substrate signature and background signals correctly, as verified by human experts. As we have seen in Section \ref{sec_synData}, quantile factorization can work well if the quantile is chosen appropriately, so this result does not come as a complete surprise.
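To make the comparison with polynomial baselines concrete, a simplified sketch of an iterative polynomial background fit is given below. It is a simplification of the I-ModPoly idea: the clipping rule replaces the original method's explicit noise-level update, and all parameters and data here are illustrative.

```python
import numpy as np

def iterative_poly_background(y, x, degree=4, n_iter=50):
    """Simplified iterative polynomial baseline: points above the current
    fit are clipped before refitting, so peaks are progressively ignored."""
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, degree)
        fit = np.polyval(coeffs, x)
        work = np.minimum(work, fit)   # clip points above the fit
    return fit

# Synthetic spectrogram: smooth baseline + one narrow peak + noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)
baseline = 1.0 + 0.5 * x - 0.8 * x**2
peak = 2.0 * np.exp(-((x - 0.5) ** 2) / (2 * 0.005**2))
y = baseline + peak + 0.01 * rng.standard_normal(x.size)

bg = iterative_poly_background(y, x, degree=2)
# The estimate should hug the smooth baseline without absorbing the peak.
assert np.max(np.abs(bg - baseline)) < 0.2
```

As the experiments in this section show, such per-spectrogram methods require a well-chosen polynomial degree, whereas the matrix factorization approach pools information across the whole dataset.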
\begin{figure} \begin{center} \includegraphics[width=\linewidth]{RamanComparison349.pdf} \includegraphics[width=\linewidth]{XRDComparison144.pdf} \caption{\textbf{Top:} A real-world Raman spectrogram with several background models. Quant ($q = .2$) underestimates the background at 500 - 700 and from 1250 onward. The polynomial method with low degree also underestimates the background at 500, while the high degree chips away from the peak at 600. The EMGM does not suffer from these drawbacks. \textbf{Bottom:} A real-world XRD spectrogram with substrate peaks at 19, 24, 26.5. Both EMGM and quantile matrix factorization (with a good, manually chosen $q$ as input) are able to capture the substrate signal correctly. The low-order polynomial method overshoots slightly around the most intense peak at 19. The high-order polynomial starts to absorb all non-substrate peaks. } \label{fig_realComparison} \end{center} \end{figure} \section{Conclusion} We introduced the exponentially-modified Gaussian (EMG) mixture residual model, which is well suited to model residuals that are contaminated by a distribution with positive support. We proved two convexity results for the negative logarithm of the EMG density, and further introduced an expectation-maximization algorithm for optimizing the EMG mixture model. We compared the EMG mixture against commonly-used residual models such as $\ell_1$, the Huber loss, and regression quantiles, showing its convergence for exponentially-distributed contaminations. We incorporated the EMG mixture into a probabilistic matrix factorization framework, motivated by applications in spectroscopy. We showed how this approach is effective in inferring background signals and systematic errors in data arising from X-ray diffraction and Raman spectroscopy, dramatically outperforming existing approaches and revealing the data's physically meaningful components. We hope that our work will inspire other researchers to pursue possible extensions. 
For example, while our real-world data comes from materials science, the methods should be widely applicable to spectroscopic data arising in other scientific domains (e.g. astronomy, physics, biology) and also spectroscopic data arising in other domains (e.g. music, speech, animal vocalizations). \pagebreak
\section{Introduction} Diffusion equations and Stokes problems in a time-dependent domain are present in countless research areas with many applications such as problems of temperature distribution \cite{doi10113716M1083207}, studies of biological pattern formation and cell motility on evolving surfaces \cite{doi:10.1098/rsif.2012.0276,PhysRevE.60.4588,garcke2013diffuse}. The topic that most directly motivates our research is the modelling of surfactants in two-phase flows using a diffuse interface, which is part of a long-term project \cite{CiCP-31-707,Raudino20168574,variraudinoA,variraudinoB,variraudinoC,variraudinoD,variraudinoE,variraudinoF,variraudinoG}. In these works the authors report the experimental trapping kinetics of a diffusing flux of surfactants sticking at the surface of an oscillating gas bubble set in the middle of the vessel (see Fig.~\ref{plot_setup} (a)). The surfactant concentration past the oscillating bubbles is detected by conductivity measurements (see Fig.~\ref{plot_setup} (b)). A different and unexpected behavior is observed in the presence of an empty bubble oscillating at resonance frequency (black curve in Fig.~\ref{plot_setup} (b)). The phenomenon is particularly relevant when the bubbles are exposed to intense forced oscillations near resonance. Surfactants are important for several industrial applications, such as processes of emulsification and mixing~\cite{wu2021new} or the production and stabilization of 2D nanomaterials~\cite{tyurnina2021environment}. They can be soluble in at least one of the fluid phases and the exchange of surfactants between the bulk phases and the fluid interfaces is governed by the process of adsorption and desorption. In \cite{garcke2013diffuse} different phase field models are derived for two-phase flow with a surfactant possibly soluble in both fluids.
In \cite{doi:10.1063/1.1724167,Diamant_1996} the authors present models that account for both the diffusive transport inside the solution and the kinetics taking place at the interface using a free-energy formulation. \begin{figure}[tb] \begin{minipage} {.48\textwidth} \centering \includegraphics[width=\textwidth]{Figures/setup.pdf} \end{minipage} \begin{minipage} {.48\textwidth} \centering \includegraphics[width=\textwidth]{Figures/detector_experimental.pdf} \end{minipage} \caption{\textit{Experimental domain and results from~\cite{Raudino20168574, CiCP-31-707}. (a) Schematic setup of the real apparatus. The central sphere represents the oscillating bubble. See~\cite{Raudino20168574, CiCP-31-707} for a detailed description and for experimental values of $H$, $A$, $L$. (b) Time evolution of the aqueous solution conductance measured above the bubble (electrodes 2 and 3 of (a)). Red line: no bubble; blue line: saturated bubble submitted to a flux of surfactants; black line: oscillating bubble submitted to a flux of surfactants (adapted from \cite{Raudino20168574}). The line thicknesses represent an estimate of the experimental uncertainty of the conductivity measurements. }} \label{plot_setup} \end{figure} The theoretical work of Ward and Tordai \cite{ward1946time} formulated a time-dependent relation between the surface density of surfactants adsorbed at an interface and their concentration at the sub-surface layer of the solution, assuming a diffusive transport from the bulk solution. Subsequent theoretical works have focused on providing a second closure relation between these two variables, as in \cite{multiscale_mod}, while in this paper we focus on the numerical aspects of the problem. Specifically, we study a diffusion equation in a bulk domain, with a dynamic time-dependent boundary condition derived from conservation arguments and stating that at the boundary the flux is proportional to the time derivative of the solution~\cite{PhysRevE.60.4588,Plaza}.
We formulate a finite-difference scheme for an advection-diffusion equation with moving curved surfaces/boundaries. Time discretization is performed with the Crank-Nicolson method. The bubble region is implicitly described by a level-set function, while the implementation of boundary conditions on complex-shaped boundaries/surfaces is based on a ghost-point method. The ghost-point approach for domains described by level-set functions has been successfully proposed in several contexts~\cite{Fedkiw:GFM, Gibou:Ghost, Gibou:fourth_order, Gibou:fluid_solid, fernandez2020very, clain2021very}, as it has the advantage of allowing an implicit representation of the boundary, and can therefore be employed on meshes that do not necessarily conform to a complex-shaped boundary. This advantage alleviates the computational burden associated with the mesh generation steps of fitted-boundary approaches. A flexible ghost-point technique, suitable for different boundary condition types, is presented in~\cite{COCO2013464, COCO2018299} and applied to several contexts~\cite{COCO2020109623, chertock2018second}. In this paper, we extend the approach to accommodate time-dependent boundary conditions and the presence of second-order tangential derivatives. A geometric multigrid method is employed to efficiently solve the sparse linear system arising from the discretization of the problem. The multigrid approach is extended from~\cite{COCO2013464} in order to account for time-dependent boundary conditions and the presence of tangential derivatives. A suitable technique is presented in order to maintain the optimal efficiency of the multigrid method and to limit the performance degradation caused by boundary effects. The method is second-order accurate in space and time, as confirmed by numerical tests. The convection of particles is driven by the fluid motion around the oscillating bubble, usually modelled by the incompressible Navier-Stokes equations.
We assume the solute does not significantly change the density and rheology of the fluids; therefore, the fluid motion is independent of the solute concentration (one-way coupling). We also assume that the motion of the bubble is assigned {\it a priori} and does not depend on the fluid dynamics. In a more realistic scenario, the bubble surface deformation would be influenced by the fluid motion, resulting in a two-way coupling as in fluid/membrane interaction problems~\cite{mavroyiakoumou2020large, mavroyiakoumou2021dynamics}. For the range of physical parameters adopted in the experiments, the Reynolds numbers are very small so that the convective terms of the Navier-Stokes equations can be neglected, and we can safely model the fluid dynamics by the Stokes equations instead. To numerically solve the Stokes equations on moving domains we employ the method proposed in~\cite{COCO2020109623} based on a monolithic approach. The paper is structured as follows. In Sect.~\ref{sect:mathmodel} we present the mathematical model for the diffusion of particles. Sect.~\ref{sect:FDdisc} describes the finite-difference ghost-point technique to implement the time-dependent boundary conditions. In Sect.~\ref{sect:MG} we present the multigrid method to solve the sparse linear system arising from the ghost-point discretization. In Sect.~\ref{numtest} we perform several numerical tests to verify the second-order accuracy in 2D and 3D axisymmetric geometries. Sect.~\ref{sect:moving_bubble} deals with the fluid dynamics generated by the bubble oscillations. The small amplitude of the oscillations suggests that the bubble motion can be modelled purely through a time-dependent boundary condition for the fluid velocity, while the computational bubble domain remains steady, thus saving considerable computational effort. This simplification is justified by numerical tests.
Finally, we couple the Stokes problem with the convection-diffusion equations to model the particle concentration evolution around an oscillating attracting bubble, proposing two types of oscillations (harmonic and ellipsoidal). Conclusions are drawn in Sect.~\ref{sec:conclusions}. \section{Multiscale Model}\label{sect:mathmodel} Modelling the diffusion in the presence of a trap is challenging if multiple scales are involved. In recent papers, such as \cite{CiCP-31-707,Raudino20168574}, the range of the attractive-repulsive core of the trap is of the order of nanometers, a length that is several orders of magnitude smaller than the size of the domain. In order to overcome this difficulty, a \textit{multiscale model} was proposed in \cite{multiscale_mod}, which we briefly recall here, in the single carrier approximation. The time evolution of a local concentration of ions $c= c(\vec{x},t)$ diffusing in a steady fluid is governed by the conservation law \begin{equation}\label{eq:conservation}\frac{ \partial c}{\partial t}=-\nabla\cdot J. \end{equation} The flux term $J$ contains a diffusion and a drift term: \begin{equation} \label{eq:flux} J= -D\left(\nabla c +\ \frac{1}{k_B T} c \nabla V\right) \end{equation} where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature and $V = V(\vec{x})$ is a suitable attractive-repulsive potential that models the particle trap. For simplicity, we describe the 1D model derivation only, referring the reader to~\cite{multiscale_mod} for a detailed derivation of the higher-dimensional models. In 1D, equations \eqref{eq:conservation} and \eqref{eq:flux} read: \begin{align} \label{eq_1D} \displaystyle \frac{\partial c}{\partial t} + \frac{\partial J}{\partial x} &=0 &\\ \label{eq_flux_1D} \displaystyle J &= -D\left(\frac{\partial c}{\partial x} + \frac{1}{k_B T} \, c \, V' \right) & \end{align} Assume that the trap is located at $x=0$. Initially, particles are located within the region $x>0$.
If they get close to the trap, they are attracted towards $x=0$. This phenomenon is simulated by a potential $V(x)$ such that $V'(x)>0$ for $x>0$. On the other hand, if particles pass to the region $x<0$, they are repelled towards $x=0$. Then, $V'(x)<0$ for $x<0$. The attractive/repulsive mechanism is therefore modelled in the neighborhood of $x=0$ by a short-range potential $V(x)$ that is different from zero only in a thin region around the trap, say $\Omega^\varepsilon_b = [-\varepsilon,\varepsilon L]$ with $\varepsilon \ll 1$ and $L>0$. Therefore, $V(x)=0$ for $x\geq\varepsilon L$ and (assuming that the first derivative of $V(x)$ is continuous) $V'(\varepsilon L) = 0$. A typical shape of the potential is reported in the left panel of Fig.~\ref{figure_potential_V_1D}. \begin{figure}[!ht] \centering \begin{minipage} {.49\textwidth} \centering \includegraphics[width=\textwidth]{Figures/V_x_05.pdf} \end{minipage} \begin{minipage} {.49\textwidth} \includegraphics[width=\textwidth]{Figures/U_xi_05.pdf} \end{minipage} \caption{\textit{Representation of $V(x)$, on the left, and $U(\xi)$, on the right, for $\varepsilon = 0.05$ and $L=2$. On the left the dashed line $x=\varepsilon L$ denotes the right boundary of $\Omega_b^\varepsilon$. }} \label{figure_potential_V_1D} \end{figure} Assuming that there is a wall at $x=1$, the fluid domain is represented by $\Omega_f^\varepsilon = [\varepsilon L , 1]$. The problem consists of solving \eqref{eq:conservation} and \eqref{eq:flux} in $\Omega^\varepsilon =\Omega_b^\varepsilon \cup \Omega_f^\varepsilon = [-\varepsilon,1]$ with boundary conditions $J(-\varepsilon) = J(1)= 0$. This problem presents a multiscale challenge because of the different spatial scales of $\Omega_b^\varepsilon = [-\varepsilon, \varepsilon L]$ and $\Omega_f^\varepsilon = [\varepsilon L, 1]$.
To overcome this difficulty, we aim at approximating the behaviour of the trap in $\Omega^\varepsilon_b$ with a suitable boundary condition at $x=0$, obtaining then a simplified problem in $\Omega = [0,1]$ as follows. Using a scaling variable $\xi = 1+x/\varepsilon$, the potential can be written in terms of $U(\xi)$ for $\xi \in [0,1+L]$ as $V(x) = U(\xi)$. In summary, we first choose a scaling potential $U(\xi)$ for $\xi \in [0,1+L]$ such that $U'(\xi)<0$ in $[0,1]$, $U'(\xi)>0$ in $[1,1+L]$, $U(1+L)=0$, $U'(1+L)=0$, and then we study the behaviour of the trap when the potential is $V(x) = U(1+x/\varepsilon)$. We assume that the solution $c_\varepsilon(\xi,t)$ of the scaled problem \begin{align} \label{eq_eps} \displaystyle \frac{\partial c _\varepsilon}{\partial t} + \frac{1}{\varepsilon}\frac{\partial J_\varepsilon}{\partial \xi} &=0 &\\ \label{eq_flux_eps} \displaystyle J_{\varepsilon} &= -D\frac{1}{\varepsilon}\left(\frac{\partial c _\varepsilon}{\partial \xi} + \frac{1}{k_B T} \, c _\varepsilon \, U' \right) & \end{align} has the following expansion in $\Omega^\varepsilon_b$: \begin{equation}\label{exprho} c_\varepsilon(\xi,t) = c ^{(0)}(\xi,t)+\varepsilon c ^{(1)}(\xi,t)+O(\varepsilon^2). \end{equation} Since the flux $J_\varepsilon$ must be bounded for $\varepsilon \to 0$, from $\eqref{eq_flux_eps}$ we have that the coefficient of the term $\mathcal{O}(\varepsilon^{-1})$ in $J_\varepsilon$ has to vanish: \begin{equation} \frac{\partial c^{(0)}}{\partial \xi}+ \frac{1}{k_B T} U'(\xi) c ^{(0)} = 0. \label{Jorder0} \end{equation} This equation can be solved for $c^{(0)}(\xi,t)$, yielding \begin{equation} c ^{(0)}(\xi,t) = c ^{(0)}(1+L,t)\exp \left(-\frac{U(\xi)}{k_B T}\right) \end{equation} since $U(1+L) = 0$. 
Integrating \eqref{eq_1D} in $\Omega^\varepsilon_b$ we have: \[ \frac{d}{dt}\int_{- \varepsilon}^{\varepsilon L} c (x,t) \, dx +J(\varepsilon L) - J(-\varepsilon) = 0 \] and using the approximation $c(x,t) \approx c^{(0)}(\xi,t)$, the boundary condition $J(-\varepsilon)=0$ and that $V'(\varepsilon L)=0$ we obtain \begin{align} \nonumber \varepsilon \; \frac{\partial c (\varepsilon L,t)}{\partial t} \; \int_{0}^{1+L}\exp\left(-\frac{U(\xi)}{k_B T}\right) d \xi - D \frac{\partial c (\varepsilon L,t)}{\partial x} &= 0 & \end{align} which represents a boundary condition for $c(x,t)$ at $x=\varepsilon L$. Using this boundary condition at $x=0$ instead of $x=\varepsilon L$, we finally obtain the following simplified problem related to the multiscale model: \begin{align} \frac{\partial c }{\partial t} &= \frac{\partial}{\partial x} \left( D \frac{\partial c}{\partial x} \right) \quad {\rm {for} }\, x \in [0,1] & \label{reduced1d} \\ \frac{\partial c }{\partial x} &= 0 \quad {\rm {at} }\, x = 1 & \label{BCt} \\ M\frac{\partial c }{\partial t} - D\frac{\partial c }{\partial x} &= 0 \quad {\rm {at} }\, x = 0 & \label{BCeps} \end{align} where \begin{equation} \label{expr_M} M=\varepsilon\int_{0}^{1+L}\exp\left(-\frac{U(\xi)}{k_B T}\right)d\xi.
\end{equation} We observe that if the potential does not depend on $\varepsilon$, $M \rightarrow 0$ as $\varepsilon \rightarrow 0$, so the condition \eqref{BCeps} reduces to a zero Neumann boundary condition; therefore, the interesting multiscale limit is obtained by letting $\varepsilon\to 0$ while maintaining $M$ finite.\footnote{The effect of a small but finite $\varepsilon$ is studied in \cite{multiscale_mod}.} \begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.3\textwidth} \centering \includegraphics[width=1.1\textwidth]{Figures/domains.pdf} \end{minipage}\hfill \begin{minipage}[b] {.45\textwidth} \centering \includegraphics[width=0.6\textwidth]{Figures/classification_points.pdf} \end{minipage} \hspace*{\fill} \caption{\textit{Representation of the domain on the left and classification of inside grid points (green), ghost points (red) and inactive points (blue circles) on the right.}} \label{classification_points} \end{figure} Let us now describe the problem in higher dimensions. In two space dimensions, the fluid is contained in a domain $\Omega$ which is the region external to a bubble $\mathcal{B}$ and internal to the square box $\mathcal{S} = (-a,a)^2 \subset \mathbb{R}^2$, with $a>0$ (see Figure \ref{classification_points}, left panel). Eq.~\eqref{reduced1d} reads: \begin{equation}\label{pde2d} \frac{\partial c }{\displaystyle \partial t} = \nabla \cdot \left( D \nabla c \right) \text{ in } \Omega \end{equation} where $D$ is the diffusion coefficient. Imposing zero flux at the wall results in homogeneous Neumann boundary conditions on $\Gamma_\mathcal{S} = \partial \mathcal{S}$: \begin{equation}\label{bcwall} \frac{\partial c }{\partial {n}_\mathcal{S}} = 0 \quad \text{ on } \Gamma_\mathcal{S}, \end{equation} where $n_\mathcal{S}$ denotes the unit normal vector on $\Gamma_\mathcal{S}$, pointing out of the domain $\Omega$.
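For concreteness, the coefficient $M$ in \eqref{expr_M} can be evaluated by numerical quadrature once a scaled potential $U(\xi)$ is chosen. The sketch below uses a hypothetical piecewise potential satisfying $U'<0$ on $[0,1]$, $U'>0$ on $[1,1+L]$ and $U(1+L)=U'(1+L)=0$, with illustrative parameter values that are not taken from the experiments.

```python
import numpy as np

# Sketch: M = eps * int_0^{1+L} exp(-U(xi)/(k_B T)) dxi for an illustrative
# trap potential; all parameters (L, eps, kBT, U0, U_rep) are hypothetical.
L, eps, kBT = 2.0, 0.05, 1.0
U0, U_rep = 2.0, 10.0           # well depth and repulsive barrier height

def U(xi):
    xi = np.asarray(xi, dtype=float)
    s = np.clip((xi - 1.0) / L, 0.0, 1.0)
    attractive = -U0 * (1.0 - s**2) ** 2                   # minimum -U0 at xi = 1
    repulsive = U_rep * np.clip(1.0 - xi, 0.0, None) ** 2  # barrier near xi = 0
    return attractive + repulsive

# Composite trapezoidal rule on a fine grid.
xi = np.linspace(0.0, 1.0 + L, 20001)
f = np.exp(-U(xi) / kBT)
h = xi[1] - xi[0]
M = eps * h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
```

A deep well ($U_0 \gg k_B T$) inflates the integrand and hence $M$, consistent with a stronger trap retaining a larger amount of particles at the interface.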
In the presence of a steady bubble, the fluid domain is represented by $\Omega = \mathcal{S} \backslash \mathcal{B}$, where $\mathcal{B}$ is the region occupied by the bubble and represented by a disk centred at the origin with radius $R_\mathcal{B}$ such that $0 < R_\mathcal{B} < a$. Similarly to \eqref{BCeps}, a suitable boundary condition is enforced on the boundary $\Gamma_\mathcal{B} = \partial \mathcal{B}$ to simulate the \textit{attractive-repulsive} mechanism of the bubble surface with the particles. In 2D, the analogue of boundary condition \eqref{BCeps} becomes (see~\cite{multiscale_mod} for more details): \begin{equation}\label{bcbubble} M\frac{\partial c }{\partial t} = MD\frac{\partial ^2 c }{\partial {\tau} ^2}-D\frac{\partial c }{\partial {n}_\mathcal{B}}\quad \text{ on } \Gamma_\mathcal{B}, \end{equation} where $M$ is given by the analogue of \eqref{expr_M} (the integration of the potential is performed along the direction normal to the surface of the bubble), $\tau$ denotes the unit vector tangential to $\Gamma_\mathcal{B}$, $n_\mathcal{B}$ is the unit normal vector on $\Gamma_\mathcal{B}$ pointing out of the domain $\Omega$ and $\partial^k/\partial\tau^k$ denotes the $k$-th derivative along such tangential direction. In 3D the region $\mathcal{S}$ is the cube $(-a,a)^3$, the static bubble $\mathcal{B}$ is a sphere centred at the origin with radius $R_{\mathcal{B}}<a$, and the boundary condition on $\mathcal{B}$ becomes: \begin{equation}\label{bcbubble3d} M\frac{\partial c }{\partial t} = MD\Delta_\perp c -D\frac{\partial c }{\partial n_\mathcal{B}}\quad \text{ on } \Gamma_{\mathcal{B}} \end{equation} where $\Delta_\perp$ is the Laplace-Beltrami operator on the surface of the bubble $\Gamma_\mathcal{B}$.
\section{Finite-Difference discretization}\label{sect:FDdisc} \subsection{Discretization in time} Eqs.~\eqref{pde2d} and \eqref{bcbubble} can be written in compact form \begin{equation}\label{compactQ} \frac{\partial c }{\partial t} = Q \, c \end{equation} where $Q$ is the following (linear) differential operator \begin{equation} Q \, c = \left\{ \begin{matrix} D \Delta c & \text{ in } \Omega\\ \\ \displaystyle D\frac{\partial ^2 c }{\partial \tau ^2}-DM^{-1} \frac{\partial c }{\partial n} \quad & \text{ on } \Gamma_\mathcal{B} \end{matrix} \right. \label{Qexpr} \end{equation} with homogeneous Neumann boundary condition \eqref{bcwall} on $\Gamma_\mathcal{S}$. Eq.~\eqref{compactQ} is discretized in time by using the \textit{Crank-Nicolson} method, which is second order accurate: \begin{align} \nonumber \frac{ c ^{n+1}- c ^n}{k} &= \frac{1}{2} \left( Q \, c ^n + Q \, c ^{n+1} \right) &\\ \label{CNdisc} \left(I - \frac{k}{2}Q\right) \, c ^{n+1} &= \left(I + \frac{k}{2}Q\right) \, c ^n& \end{align} where $k$ is the time step and $I$ is the identity operator. \subsection{Discretization in space}\label{sect:discspace} The computational domain $\mathcal{S}$ is discretized through a uniform Cartesian mesh with spatial step $h = 2a/N = \Delta x = \Delta y$, where $N^2$ is the number of cells. We choose to use a cell-centered discretization to facilitate the implementation of homogeneous Neumann boundary conditions on $\Gamma_\mathcal{S}$. However, the accuracy of the method does not rely on this choice and a vertex-centered discretization would produce similar results. Therefore, the set of grid points is $\mathcal{S}_h = \{(x_i,y_j)=(-a - h/2 + ih,-a - h/2 + jh), (i,j) \in \{1,\cdots,N\}^2 \}$. 
Within the set of grid points we define the set of internal points $\Omega_h = \mathcal{S}_h \cap \Omega$, the set of bubble points $\mathcal{B}_h =\mathcal{S}_h \cap \mathcal{B}$ and the set of ghost points $\mathcal{G}_h$ as grid points inside the bubble with at least one neighbor point inside $\Omega$: \begin{equation} (x_i,y_j) \in \mathcal{G}_h \iff (x_i,y_j) \in \mathcal{B}_h \text{ and } \{(x_i \pm h,y_j),(x_i,y_j\pm h) \} \cap \Omega_h \neq \emptyset. \end{equation} The remaining grid points that are neither inside nor ghost are called inactive points. See Fig.~\ref{classification_points} (right panel) for a classification of inside, ghost, and inactive points. Let $N_I = |\Omega_h|$ and $N_G = |\mathcal{G}_h|$ be the number of internal and ghost points, respectively. We aim at approximating the solution $c$ at grid points of $\Omega_h \cup \mathcal{G}_h$, then our numerical solution can be represented as a column vector $ c _h = (\ldots, c _{i,j}, \ldots)^T \in \mathbb{R}^{N_I+N_G}$, after choosing a bijective map between $\left\{ 1,\ldots,N_I+N_G \right\}$ and the grid points of $\Omega_h \cup \mathcal{G}_h$ (the overall numerical method does not rely on the particular choice of this map). The problem \eqref{CNdisc} is then discretized in space, leading to a linear system \begin{equation}\label{CNdiscspace} \left(I_h - \frac{k}{2}Q_h\right) c _h^{n+1} = \left(I_h + \frac{k}{2}Q_h\right) c _h^n, \end{equation} to be solved at each time step, where $I_h$ and $Q_h$ are the $(N_I+N_G) \times (N_I+N_G)$ matrices representing the discretization of the operators $I$ and $Q$ and defined as follows. We denote by $I_h^{(i,j)}= \left(I_h^{(i,j),1},\ldots,I_h^{(i,j),N_I+N_G} \right)$ and $Q_h^{(i,j)}= \left(Q_h^{(i,j),1},\ldots,Q_h^{(i,j),N_I+N_G} \right)$ the rows of $I_h$ and $Q_h$, respectively, associated with the grid point $(x_i,y_j)$. 
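The classification of grid points can be sketched as follows for a circular bubble, using the implicit description $\phi(x,y)=\sqrt{x^2+y^2}-R_\mathcal{B}$ (negative inside the bubble); the grid size and radius are illustrative.

```python
import numpy as np

# Sketch: classify cell-centered grid points as internal, ghost or inactive
# for a circular bubble of radius R_B; N, a and R_B are illustrative.
N, a, R_B = 64, 1.0, 0.4
h = 2 * a / N
coords = -a - h / 2 + h * np.arange(1, N + 1)   # cell centers
X, Y = np.meshgrid(coords, coords, indexing="ij")

in_bubble = X**2 + Y**2 < R_B**2                 # B_h (level set phi < 0)
internal = ~in_bubble                            # Omega_h

# Ghost points: bubble points with at least one of the four neighbours in
# Omega_h. np.roll wraps around the domain edges, which is harmless here
# because the bubble does not touch them.
ghost = np.zeros_like(in_bubble)
for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    ghost |= in_bubble & np.roll(internal, shift, axis=(0, 1))
inactive = in_bubble & ~ghost

# The three sets partition the grid: every point has exactly one type.
assert internal.sum() + ghost.sum() + inactive.sum() == N * N
```

The ghost points form a one-cell-thick layer along $\Gamma_\mathcal{B}$, matching the red points in Fig.~\ref{classification_points} (right panel).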
If $P_{ij} = (x_i,y_j) \in \Omega_h$ is an internal grid point (as in Fig.~\ref{stencil0}, left panel), then the equation of the linear system is obtained from the discretization of the internal equation \eqref{pde2d} and the standard central difference on a five-point stencil is used to discretize the Laplace operator on $(x_i,y_j)$, and therefore $I_h^{(i,j)}$ and $Q_h^{(i,j)}$ are defined by \begin{equation}\label{Ih_internal} I_h^{(i,j)} c _h = c _{i,j} \end{equation} \[ Q_h^{(i,j)} c _h = D \frac{ c _{i+1,j}+ c _{i-1,j}+ c _{i,j+1}+ c _{i,j-1} - 4 c _{i,j}}{h^2}. \] If $(x_i,y_j)$ is close to the wall and the five-point stencil contains grid points outside $\Omega$, we can use the boundary condition \eqref{bcwall} to reduce the five-point stencil and use only internal grid points. For example, looking at Fig.~\ref{stencil0} (right panel), we use the boundary condition \[ \frac{ c _{i,0}- c _{i,1}}{h} = 0 \Longrightarrow c _{i,0} = c _{i,1} \] and then \[ Q_h^{(i,1)} c _h = D \frac{ c _{i+1,1}+ c _{i-1,1}+ c _{i,2}- 3 c _{i,1}}{h^2}. \] \begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.45\textwidth} \centering \includegraphics[width=0.75\textwidth]{Figures/stencil5points.pdf} \end{minipage}\hfill \begin{minipage}[b] {.45\textwidth} \centering \includegraphics[width=0.75\textwidth]{Figures/stencil5points_J0.pdf} \end{minipage} \hspace*{\fill} \caption{\textit{Representation of the five-point stencil for the discretization of internal points $P_{ij} = (x_i,y_j)$ (left panel) and the reduced stencil when $P_{ij}$ is close to the wall $\Gamma_{\mathcal{S}}$ (right panel). In the latter case the stencil is composed of four points.}} \label{stencil0} \end{figure} If $G=(x_i,y_j) \in \mathcal{G}_h$ is a ghost point, then we discretize the boundary condition \eqref{bcbubble}, following a ghost-point approach similar to the one proposed in \cite{COCO2013464} and summarised as follows.
We first compute the closest boundary point $B \in \Gamma_\mathcal{B}$ by \[ B = O + R_\mathcal{B} \frac{G-O}{|G-O|}, \] where $O$ is the centre of the bubble and $R_\mathcal{B}$ is the radius. Then, we identify the $3\times3$~--~point stencil having $G=(x_G,y_G)=(x_i,y_j)$ on one corner and whose convex hull contains $B=(x_B,y_B)$ (see Fig.~\ref{stencil}, right panel): \[ \left\{ (x_{i+s_x m_x},y_{j+s_y m_y}) \colon m_x,m_y=0,1,2 \right\}, \] where $s_x = {\rm SGN} (x_B-x_G)$ and $s_y = {\rm SGN} (y_B-y_G)$, with ${\rm SGN}(\alpha)=-1$ for $\alpha<0$ and ${\rm SGN}(\alpha)=1$ for $\alpha\geq0$. The solution $ c $ and its first and second derivatives are then interpolated at the boundary point $B$ using the discrete values $ c _{i,j}$ on the $3\times3$~--~point stencil. The interpolations can be obtained as tensor products of 1D interpolations in the axis directions. In detail, the 1D quadratic interpolations using the grid points $x_{i-2},x_{i-1},x_{i}$ to evaluate the function, its first derivative and the second derivative on $x_i - \vartheta h$ with $0\leq\vartheta<1$ (see Fig.~\ref{fig:1Dinterp}) are given by \[ \tilde{ c }(x_i - \vartheta\,h) = \sum_{m=0}^2 l_{m}(\vartheta) \, c _{i-m}, \quad \tilde{ c }'(x_i - \vartheta\,h) = \sum_{m=0}^2 l'_{m}(\vartheta) \, c _{i-m}, \quad \tilde{ c }''(x_i - \vartheta\,h) = \sum_{m=0}^2 l''_{m}(\vartheta)\, c _{i-m}, \] where \[ l(\vartheta) = \left( \frac{(1-\vartheta)(2-\vartheta)}{2}, \quad \vartheta (2-\vartheta), \quad \frac{\vartheta(\vartheta-1)}{2} \right) \] \[ l'(\vartheta) = \frac{1}{h} \left( \frac{(2\vartheta-3)}{2}, \quad 2(1-\vartheta), \quad \frac{(2\vartheta-1)}{2} \right) \] \[ l''(\vartheta) = \frac{1}{h^2} \left( 1, \quad -2, \quad 1 \right). \] Note that the weights $l'_m$ yield the derivative along the stencil direction (from $x_i$ towards $x_{i-2}$); the Cartesian sign is restored by the factors $s_x$ and $s_y$ in the 2D formulas \eqref{coeffsLSstencil}. In 2D, we define (see Fig.~\ref{stencil}, right panel) \[ \vartheta_x = s_x (x_B-x_G)/h, \qquad \vartheta_y = s_y (y_B-y_G)/h.
\] \begin{figure}[htp] \centering \centering \includegraphics[width=0.35\textwidth]{Figures/1Dinterp.pdf} \caption{\textit{1D interpolation on grid points $x_{i-2},x_{i-1},x_{i}$ (circle markers) to evaluate the function, its first derivative and the second derivative on $x_i - \vartheta h$ (star marker). }} \label{fig:1Dinterp} \end{figure} Observe that $0\leq \vartheta_x,\vartheta_y < 1$. The 2D interpolation formulas are: \begin{multline} \label{coeffsLSstencil} \tilde{ c }(B) = \sum_{m_x,m_y=0}^2 l_{m_x}(\vartheta_x) l_{m_y}(\vartheta_y) c _{i+s_x m_x,j+s_y m_y}, \\ \frac{\partial \tilde{ c }}{\partial x}(B) = s_x \sum_{m_x,m_y=0}^2 l'_{m_x}(\vartheta_x) l_{m_y}(\vartheta_y) c _{i+s_x m_x,j+s_y m_y}, \\ \frac{\partial \tilde{ c }}{\partial y}(B) = s_y \sum_{m_x,m_y=0}^2 l_{m_x}(\vartheta_x) l'_{m_y}(\vartheta_y) c _{i+s_x m_x,j+s_y m_y}, \\ \frac{\partial^2 \tilde{ c }}{\partial x^2}(B) = \sum_{m_x,m_y=0}^2 l''_{m_x}(\vartheta_x) l_{m_y}(\vartheta_y) c _{i+s_x m_x,j+s_y m_y}, \\ \frac{\partial^2 \tilde{ c }}{\partial y^2}(B) = \sum_{m_x,m_y=0}^2 l_{m_x}(\vartheta_x) l''_{m_y}(\vartheta_y) c _{i+s_x m_x,j+s_y m_y}, \\ \frac{\partial^2 \tilde{ c }}{\partial x \partial y}(B) = s_x\, s_y \sum_{m_x,m_y=0}^2 l'_{m_x}(\vartheta_x) l'_{m_y}(\vartheta_y) c _{i+s_x m_x,j+s_y m_y}. \end{multline} Finally, the rows of $I_h$ and $Q_h$ associated with the ghost point $G=(x_G,y_G)$ are defined by evaluating the boundary condition on $B$, i.e. \begin{equation}\label{IHghost} I_h^{(i,j)} c _h = \tilde{ c } (B) \end{equation} \begin{equation}\label{QHghost} Q_h^{(i,j)} c _h = D \left. \frac{\partial ^2 \tilde{ c }}{\partial \tau ^2} \right|_B - \frac{D}{M} \left. 
\frac{\partial \tilde{ c }}{\partial n} \right|_B \end{equation} where \begin{eqnarray}\label{normaleq} \frac{\partial }{\partial n} = n_x\frac{\partial }{\partial x} +n_y\frac{\partial }{\partial y}, &\qquad \displaystyle \frac{\partial ^2}{\partial \tau ^2} = \displaystyle \tau_x^2\frac{\partial^2 }{\partial x^2} + 2\tau_x \tau_y\frac{\partial^2 }{\partial x \partial y} + \tau_y^2\frac{\partial^2 }{\partial y^2}, \\ (n_x,n_y) = \frac{O-G}{|O-G|}, & \qquad (\tau_x,\tau_y)= (-n_y,n_x). \end{eqnarray} Observe that, while the row $I_h^{(i,j)}$ associated with an internal point $(x_i,y_j)$ is the row of the identity matrix \eqref{Ih_internal}, for a ghost point $G=(x_i,y_j)$ this is not true (unless $G=B$, i.e.~$\vartheta_x=\vartheta_y=0$) since it contains $3^2=9$ values $l_{m_x}(\vartheta_x) l_{m_y}(\vartheta_y)$ for $m_x,m_y=0,1,2$. \subsection{Complex-shaped bubbles: a level-set approach} The discretization described in the previous sections for a spherical bubble can be extended to the case of more complex-shaped bubbles by adopting a level-set approach. In detail, the bubble $\mathcal{B}$ can be implicitly defined by a level-set function $\phi(x,y)$ that is positive inside the bubble, negative outside and zero on the boundary $\Gamma_\mathcal{B}$ (\cite{Osher,book:72748}): \begin{align} \mathcal{B} &= \{(x,y): \phi(x,y) > 0\}&\\ \Gamma_\mathcal{B} &= \{(x,y): \phi(x,y) = 0\}.& \end{align} The unit normal vector $n$ in \eqref{normaleq} can be computed by: \begin{equation}\label{LSnormal} n = \frac{\nabla \phi }{|\nabla \phi|} \end{equation} provided that the level-set function is known explicitly. If it is known only at grid nodes, then the derivatives of $\phi$ in \eqref{LSnormal} are approximated by adopting an interpolation procedure similar to the one described in Eq.~\eqref{coeffsLSstencil}. We observe that for a given bubble $\mathcal{B}$ there are infinitely many level-set functions.
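When $\phi$ is sampled on the grid, the normal \eqref{LSnormal} can also be approximated away from the interface by simple central differences, as in the following illustrative Python sketch (our own helper, not the interpolation procedure of Eq.~\eqref{coeffsLSstencil}; it assumes the convention $\phi>0$ inside the bubble, so that $n$ points towards the bubble interior, consistently with $(n_x,n_y)=(O-G)/|O-G|$):

```python
import numpy as np

def inward_normal(phi, h):
    """Approximate n = grad(phi)/|grad(phi)| by central differences at the
    interior grid points, given a grid sample of the level-set function
    (illustrative sketch; phi > 0 inside the bubble, so n points inward)."""
    gx = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * h)   # d(phi)/dx
    gy = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * h)   # d(phi)/dy
    norm = np.sqrt(gx**2 + gy**2)
    return gx / norm, gy / norm
```

For the quadratic level set $\phi = R_\mathcal{B}^2-(x^2+y^2)$ the central differences are exact, and the computed normal coincides with the radial direction towards the centre.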
For example, $\phi=R_\mathcal{B}-\sqrt{x^2+y^2}$ and $\phi=R_\mathcal{B}^2-(x^2+y^2)$ describe the same circular bubble. For a given bubble, the most convenient level-set function in terms of numerical stability is the signed distance function $\phi_d(x,y)$, i.e.~the distance between $(x,y)$ and $\Gamma_\mathcal{B}$ (positive inside the bubble, negative otherwise). The signed distance function $\phi_d$ can be computed from a generic level-set function $\phi$ by the reinitialization algorithm~\cite{sussman1994level, russo2000remark, du2008second}, consisting of finding the steady-state solution of: \begin{equation} \label{pde} \frac{\partial \hat{\phi}}{\partial t} = {\rm sgn}(\phi)\left(1-|\nabla \hat{\phi} |\right), \qquad \hat{\phi}=\phi\quad \text{ at time } t=0 \end{equation} where $t$ is a fictitious time. A signed distance function is preferred to avoid numerical instabilities associated with sharp or shallow gradients close to the boundary (for a signed distance function we have $|\nabla \phi_d| =1$). However, the cases investigated in the present paper involve steady bubbles or moving bubbles with a pre-determined evolution of the shape, so the instability issues of a generic level-set function are not observed. \begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{Figures/stencil5points_ghost.pdf} \end{minipage}\hfill \begin{minipage}[b] {.45\textwidth} \centering \includegraphics[width=0.6\textwidth]{Figures/stencil9points.pdf} \end{minipage} \hspace*{\fill} \caption{\textit{Representation of the five-point stencil when $P_{ij}$ is close to the boundary $\Gamma_{\mathcal{B}}$ (left panel). In this case the stencil contains a ghost point $G$.
On the right panel we represent the upwind nine-point stencil associated with the ghost point $G$ and the boundary projection point $B$.}} \label{stencil} \end{figure} \section{Multigrid approach}\label{sect:MG} The linear system \eqref{CNdiscspace} can be written as $A_h \, c _h^{n+1}=b_h$, where $A_h= \left(I_h - \frac{k}{2}Q_h\right)$ and $b_h=\left(I_h + \frac{k}{2}Q_h\right) c _h^n$, and it is solved in this paper using an efficient multigrid approach that is an extension of the method proposed in \cite{COCO2013464} for elliptic equations on complex-shaped domains. In brief, a multigrid method is an iterative solver that starts by applying a few steps of a suitable {\it relaxation scheme} to the linear system $A_h\, c _h^{n+1}=b_h$, obtaining an approximate solution $\bar{ c }^{n+1}$. The relaxation scheme is chosen in such a way that the high frequency Fourier modes of the residual $r_h = b_h-A_h\, \bar{ c }^{n+1}$ are damped much more quickly than the low frequency Fourier modes. In other words, the relaxation operator smooths the residual $r_h$ after a few relaxation steps (say $\nu_1$ steps). If so, it is said to have the {\it smoothing property}. Then, the residual $r_h$ is transferred to a coarser grid with spatial step $H=2h$ (without losing much information, as it is mainly composed of low frequency modes) by a suitable restriction operator $r_H = \mathcal{I}^h_H r_h$ and then the residual equation $A_H e_H = r_H$ is solved on the coarse grid to obtain an approximation of the error $e_H$. Then, the error is transferred to the fine grid by an interpolation operator $e_h = \mathcal{I}^H_h e_H$ and the approximation $\bar{ c }^{n+1}$ is updated by $\bar{ c }^{n+1} \leftarrow \bar{ c }^{n+1} + e_h$. A few more steps (say $\nu_2$) of the relaxation operator are then performed on the fine grid to reduce the errors introduced by the interpolation procedure. The entire scheme is then performed iteratively until the residual falls below a certain tolerance.
In addition, the residual equation $A_H e_H = r_H$ can be solved recursively by moving to a coarser grid with spatial step $2H$, and so on. Several types of multigrid schemes, such as $V$-cycle, $W$-cycle and Full Multigrid, can be adopted according to the strategy chosen. In this paper we use a $W$-cycle approach and describe the main components of the multigrid method, i.e.~relaxation, restriction and interpolation operators, while we refer the reader to, for example,~\cite{Trottemberg:MG} for a comprehensive treatment of multigrid methods. \subsection{Relaxation scheme} Standard relaxation schemes that show the smoothing property for elliptic equations in rectangular domains are the Gauss-Seidel scheme and the weighted Jacobi scheme (with weight $\omega = 2/3$ in 1D and $\omega = 4/5$ in 2D, see~\cite{Trottemberg:MG}). While these relaxation schemes converge when the discretization is performed on a rectangular domain, they might not converge when using a ghost-point approach for curved boundaries (see \cite{COCO2013464}). To obtain a convergent scheme for the problem proposed in this paper, we modify the relaxation on ghost points as described below (while we keep a Gauss-Seidel scheme on the internal equations). The proposed relaxation scheme can be written in the Richardson form (with iteration index $\mu$, to avoid confusion with the time step $k$) \begin{equation}\label{rich} c _h^{n+1,\mu+1} = c _h^{n+1,\mu} + P_h^{-1} (b_h-A_h c _h^{n+1,\mu}) \end{equation} where $P_h$ is a $(N_I+N_G) \times (N_I+N_G)$ matrix called {\it preconditioner} and is chosen as a suitable approximation of $A_h$. A standard Gauss-Seidel scheme (on both internal and ghost points) corresponds to $P_h=(D_h+L_h)$, where $D_h$ and $L_h$ are the diagonal and the strictly lower triangular part of $A_h$, respectively. To modify the scheme for ghost points, we change the $N_G$ diagonal values of $D_h$ that correspond to ghost points, obtaining a new diagonal matrix $\tilde{D}_h$.
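The smoothing property mentioned above can be illustrated on a generic example, independent of the operator of this paper: weighted Jacobi with $\omega=2/3$ applied to a 1D Poisson problem damps a highly oscillatory error mode after a few sweeps, while a smooth mode is barely reduced (a minimal sketch, with our own naming):

```python
import numpy as np

def weighted_jacobi(u, f, h, omega=2/3):
    """One weighted-Jacobi sweep for the 1D Poisson problem -u'' = f with
    homogeneous Dirichlet conditions (a generic illustration of the
    smoothing property, not the ghost-point operator of this paper)."""
    unew = u.copy()
    unew[1:-1] = ((1 - omega) * u[1:-1]
                  + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return unew
```

Starting from the error modes $\sin(\pi x)$ and $\sin((n-1)\pi x)$ with $f=0$ (so the exact solution is zero), three sweeps reduce the oscillatory mode by roughly a factor $3^3$ while leaving the smooth mode almost unchanged.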
Finally, the diagonal entries $\tilde{D}_h^{(i,j)}$ of $\tilde{D}_h$ are \[ \tilde{D}_h^{(i,j)} = \left\{ \begin{matrix} D_h^{(i,j)} = 1+\displaystyle\frac{2kD}{h^2} & \text{ if } & (x_i,y_j) \in \Omega_h \\ \beta & \text{ if } & (x_i,y_j) \in \mathcal{G}_h \\ \end{matrix} \right. \] where $\beta \in \mathbb{R}$ is a suitable value that we determine later. We observe that in practice the relaxation \eqref{rich} is performed without storing the entire matrices $P_h$ and $A_h$ (in a matrix-free fashion). This avoids the explicit construction of the sparse matrices, greatly simplifying the implementation and saving computational time, especially for moving domains, when the discrete operator depends on time. In fact, the vector $ c _h^{n+1}$ is computationally stored in a temporary array $ c _h$ and its components $ c _{i,j}$ are updated (overwritten) in a Gauss-Seidel fashion by iterating over all the grid points. We distinguish between internal and ghost points. We use the notation $a\leftarrow b$ to say that the variable $a$ is updated with the value $b$. \paragraph{Internal points} If $(i,j) \in \Omega_h$, \[ c _{i,j} \leftarrow c _{i,j}+\frac{h^2}{h^2+2kD} \left(b_{i,j}- c _{i,j}+\frac{kD}{2h^2} \left( c _{i+1,j}+ c _{i-1,j}+ c _{i,j+1}+ c _{i,j-1}-4 c _{i,j}\right) \right) \] \paragraph{Ghost points} If $(i,j) \in \mathcal{G}_h$, \begin{equation}\label{richghost} c _{i,j} \leftarrow c _{i,j}+\beta^{-1} \left(b_{i,j}-\left(I_h^{(i,j)} c _h - \displaystyle \frac{k}{2} Q_h^{(i,j)} c _h \right) \right) \end{equation} where $I_h^{(i,j)}$ and $Q_h^{(i,j)}$ are defined by \eqref{IHghost} and \eqref{QHghost}, respectively.
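The internal-point update can be sketched in a few lines of Python. The fragment below is a simplified illustration (our own naming) on a square with homogeneous Dirichlet values kept on a frame of boundary points, instead of the Neumann and ghost-point treatment used in the paper:

```python
import numpy as np

def gs_sweep(c, b, h, k, D):
    """One matrix-free Gauss-Seidel sweep for (I - (k/2)Q)c = b on the
    internal points of a square grid, keeping c = 0 on a frame of boundary
    points (simplified sketch: the paper folds Neumann and ghost-point
    conditions into the stencil instead of using Dirichlet values)."""
    n = c.shape[0]
    diag = 1.0 + 2.0 * k * D / h**2
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            lap = c[i+1, j] + c[i-1, j] + c[i, j+1] + c[i, j-1] - 4.0 * c[i, j]
            r = b[i, j] - c[i, j] + k * D / (2.0 * h**2) * lap  # residual of the row
            c[i, j] += r / diag
    return c
```

Since the system is strongly diagonally dominant for moderate $kD/h^2$, the residual decreases rapidly over a few sweeps.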
The iteration \eqref{richghost} can be written as \begin{equation}\label{richghost2} c _{i,j} \leftarrow \left(1-\beta^{-1} \left( I_h^{(i,j),(i,j)}- \displaystyle \frac{k}{2} Q_h^{(i,j),(i,j)} \right) \right) c _{i,j}+ \ldots \text{ terms that do not depend on } c _{i,j} \ldots \end{equation} and the value $\beta$ is chosen in such a way that the coefficient of $ c _{i,j}$ on the right-hand side of \eqref{richghost2} is not larger than one in absolute value, i.e. \begin{equation}\label{condbeta} \left| 1-\beta^{-1} A_h^{(i,j),(i,j)} \right|\leq 1, \text{ with } A_h^{(i,j),(i,j)} = \left( I_h^{(i,j),(i,j)}- \displaystyle \frac{k}{2} Q_h^{(i,j),(i,j)} \right). \end{equation} Using \eqref{IHghost} and \eqref{QHghost}, we have \begin{equation}\label{diagI} I_h^{(i,j),(i,j)} = l_{0}(\vartheta_x) \, l_{0}(\vartheta_y) \end{equation} \begin{align}\label{diagQ} Q_h^{(i,j),(i,j)} = & D \left( \tau_x^2 l''_{0}(\vartheta_x) l_{0}(\vartheta_y) + 2 \tau_x \tau_y s_x s_y \, l'_{0}(\vartheta_x) l'_{0}(\vartheta_y) + \tau_y^2 l_{0}(\vartheta_x) l''_{0}(\vartheta_y) \right) \nonumber \\& - \frac{D}{M} \left( n_x s_x \, l'_{0}(\vartheta_x) l_{0}(\vartheta_y) + n_y s_y \, l_{0}(\vartheta_x) l'_{0}(\vartheta_y) \right) \end{align} where the factors $s_x$, $s_y$ come from the derivative formulas in \eqref{coeffsLSstencil}. In order to satisfy condition \eqref{condbeta}, we require that $\beta$ is chosen in such a way that \begin{equation}\label{condbeta1and2} 0 \leq \beta^{-1} A_h^{(i,j),(i,j)} \leq 2. \end{equation} The left inequality of \eqref{condbeta1and2} is satisfied by choosing $\text{sign } \beta = \text{sign } A_h^{(i,j),(i,j)}$ (we conventionally choose $\text{sign }(0) =1$). To satisfy the right inequality of \eqref{condbeta1and2}, we have \[ \left| \beta \right| \geq \frac{\left| A_h^{(i,j),(i,j)} \right|}{2}. \] This condition is always satisfied (regardless of the ghost point) if we choose \begin{equation}\label{condA} \left| \beta \right| \geq \tilde{A}/2 \quad \text{ with } \quad \sup_{\vartheta_x,\vartheta_y \in [0,1]} \left| A_h^{(i,j),(i,j)} \right| \leq \tilde{A}.
\end{equation} The estimate $\tilde{A}$ can be found as follows. Using \eqref{diagI}, \eqref{diagQ} and \eqref{coeffsLSstencil}, we have \begin{multline}\label{Abound} \left| A_h^{(i,j),(i,j)} \right| \leq \left| I_h^{(i,j),(i,j)} \right| + \frac{k}{2} \left| Q_h^{(i,j),(i,j)} \right| \leq \left| l_{0}(\vartheta_x) \right| \left| l_{0}(\vartheta_y) \right| \\ + \frac{Dk}{2} \left( \left| l''_{0}(\vartheta_x) \right| \left| l_{0}(\vartheta_y) \right| + 2 \left|l'_{0}(\vartheta_x) \right| \left| l'_{0}(\vartheta_y)\right| + \left| l_{0}(\vartheta_x) \right| \left| l''_{0}(\vartheta_y) \right| + \frac{1}{\left| M \right|} \left( \left|l'_{0}(\vartheta_x) \right| \left| l_{0}(\vartheta_y) \right| + \left| l_{0}(\vartheta_x) \right| \left| l'_{0}(\vartheta_y) \right| \right) \right), \end{multline} where we used the bounds $\tau_x^2 \leq 1$, $\tau_y^2 \leq 1$, $2\left|\tau_x \tau_y\right| \leq 1$ and $\left|n_x\right|, \left|n_y\right| \leq 1$. Since \begin{equation} \begin{gathered} \sup_{\vartheta \in [0,1]} \left| l_0(\vartheta) \right| = \sup_{\vartheta \in [0,1]} \left| \frac{(1-\vartheta)(2-\vartheta)}{2} \right| = 1, \quad \sup_{\vartheta \in [0,1]} \left| l'_0(\vartheta) \right| = \sup_{\vartheta \in [0,1]} \left| \frac{(2\vartheta-3)}{2h} \right| = \frac{3}{2h}, \\ \sup_{\vartheta \in [0,1]} \left| l''_0(\vartheta) \right| = \sup_{\vartheta \in [0,1]} \left| \frac{1}{h^2} \right| = \frac{1}{h^2}, \end{gathered} \end{equation} then (from \eqref{Abound}) \begin{equation} \label{supAtilde} \sup_{\vartheta_x,\vartheta_y \in [0,1]} \left| A_h^{(i,j),(i,j)} \right| \leq 1 + \frac{Dk}{2} \left( \frac{13}{2 h^2} + \frac{3}{\left| M \right| h} \right) =: \tilde{A} \end{equation} and finally the condition on $\left| \beta \right|$ is (from \eqref{condA}, taking $\tilde{A}$ as the right-hand side of \eqref{supAtilde}) \[ \left| \beta \right| \geq \frac{1}{2} \left( 1 + \frac{Dk}{2} \left( \frac{13}{2 h^2} + \frac{3}{\left| M \right| h} \right) \right). \] \subsection{Transfer operators} In this section we define the transfer operators $\mathcal{I}^h_H$ (restriction) and $\mathcal{I}^H_h$ (interpolation).
We adopt a geometric multigrid, so the Galerkin conditions (required in algebraic multigrid) are not imposed: the interpolation and restriction operators are not the transpose of each other (up to a constant factor), and the coarse-grid operator is constructed in the same way as the fine-grid one, without involving the transfer operators. In this way we can treat complex-shaped geometries more simply, maintaining the same sparsity pattern of the system at all levels. We use a cell-centered discretization and denote by $\Omega_h$ and $\Omega_{H}$ the fine and coarse grids respectively, with $\Omega_H$ built in the same way as $\Omega_h$ (Sect.~\ref{sect:discspace}) but with spatial step $H=2h$. \subsubsection{Restriction operator} The defect $r_h$ contains both the defect of the inner relaxations and the defect of the relaxation of the boundary conditions. Since the discrete operators of the inner equations and of the boundary conditions scale with different powers of $h$, the defect may show a sharp gradient across the boundary. For this reason, the restriction operator of the inner equation should involve only internal grid points, and not ghost or inactive nodes, to prevent the degradation of the multigrid performance \cite{COCO2013464,COCO2018299}. In practice, for each inner grid point $(x,y)$ of the coarse grid $\Omega_H$ we identify the four surrounding grid nodes of the fine grid $\Omega_h$, namely $$ \mathcal{N}_{(x,y)} = \left\{ \left( x \pm \frac{h}{2},y\pm \frac{h}{2} \right) \right\} $$ and we perform the restriction by averaging on the grid nodes of $\mathcal{N}_{(x,y)}$ that are inside $\Omega$ (see Fig.~\ref{restriction_operator_ghost}): \begin{equation} r_H(x,y) = \mathcal{I}^{h}_{H} r_h(x,y) = \frac{1}{\left| \mathcal{N}_{(x,y)} \cap \Omega_h \right|} \sum_{(x^*,y^*) \in \mathcal{N}_{(x,y)} \cap \Omega_h} r_h(x^*,y^*).
\end{equation} We observe that the restriction reverts to the classical restriction operator for cell-centered discretization and rectangular domains when $(x,y)$ is away from the boundary (see~\cite{Trottemberg:MG}): \begin{equation*} \mathcal{I}^{h}_{H} r_h(x,y) = \frac{1}{4}\left[ r_h\left(x-\frac{h}{2},y - \frac{h}{2}\right) + r_h\left(x-\frac{h}{2},y + \frac{h}{2}\right)+r_h\left(x+\frac{h}{2},y - \frac{h}{2}\right) + r_h\left(x+\frac{h}{2},y + \frac{h}{2}\right) \right]. \end{equation*} A similar approach is adopted to compute the restriction on a ghost point $(x,y) \in \mathcal{G}_H$: \begin{equation} \mathcal{I}^{h}_{H} r_h(x,y) = \frac{1}{\left| \mathcal{N}_{(x,y)} \cap \mathcal{G}_h \right|} \sum_{(x^*,y^*) \in \mathcal{N}_{(x,y)} \cap \mathcal{G}_h} r_h(x^*,y^*). \end{equation} \begin{figure}[htp] \centering \begin{minipage}[b] {.50\textwidth} \centering \includegraphics[width=0.45\textwidth]{Figures/restriction_operator_ghost.pdf} \end{minipage}\hfill \begin{minipage}[b] {.50\textwidth} \centering \includegraphics[width=0.45\textwidth]{Figures/restriction_operator_ghost2.pdf} \end{minipage}\hfill \caption{\textit{Left panel: representation of the restriction operator and respective weights when the coarse grid point is close to the boundary $\Gamma_{\mathcal{B}}$. Right panel: restriction operator for ghost points and respective weights. }} \label{restriction_operator_ghost} \end{figure} \subsubsection{Interpolation operator} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figures/interpolation_operator.pdf} \caption{\textit{Representation of the interpolation operator for cell-centered discretization and respective weights. }} \label{interpol} \end{figure} Once the residual equation $A_H e_H = r_H$ is solved on the coarse grid, the error $e_H$ is interpolated back to the fine grid $\mathcal{I}_h^H e_H = e_h$. 
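The masked cell-centred restriction defined above can be sketched as follows (an illustrative fragment with our own naming; the boolean mask flags the fine-grid points to be averaged, e.g. the internal points, and the same helper serves for ghost points by passing the ghost mask):

```python
import numpy as np

def restrict_masked(r, mask):
    """Cell-centred restriction of the defect r (N x N, N even), averaging
    in each 2x2 block only the fine-grid points flagged by mask; a sketch
    of the boundary-aware restriction operator."""
    N = r.shape[0]
    rc = np.zeros((N // 2, N // 2))
    for I in range(N // 2):
        for J in range(N // 2):
            m = mask[2*I:2*I+2, 2*J:2*J+2]
            if m.any():
                rc[I, J] = r[2*I:2*I+2, 2*J:2*J+2][m].mean()
    return rc
```

When all four fine points of a block are flagged, the formula reduces to the classical four-point average.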
Since the error $e_H$ is continuous across the boundary (it is the solution of the residual equation on the entire domain), there is no need to separate the cases for inner and ghost points and the standard bilinear interpolation operator can be adopted across the entire domain. For example, for the inner grid node $(x,y)$ depicted in Fig.~\ref{interpol}, the interpolation reads: \begin{multline} e_h (x,y) = \mathcal{I}^H_h e_H (x,y) \\ = \frac{1}{16} \left( 9 e_H\left(x-\frac{h}{2},y+\frac{h}{2}\right) + 3 e_H\left(x+\frac{3h}{2},y+\frac{h}{2}\right) + 3 e_H\left(x-\frac{h}{2},y-\frac{3h}{2}\right) + e_H\left(x+\frac{3h}{2},y-\frac{3h}{2}\right) \right). \end{multline} \section{Numerical results}\label{numtest} \subsection{Accuracy test in 2D}\label{sect:2dtest} In this section we test the accuracy of the method. We choose an exact solution $ c _{exa}$ and augment the system~\eqref{Qexpr} as: \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle \frac{\partial c }{\partial t} = D \Delta c + f \quad \text{ in } \Omega\\ \displaystyle\frac{\partial c }{\partial n} + f_N = 0 \quad \text{ on } \Gamma_\mathcal{S}\\ \displaystyle M\frac{\partial c }{\partial t} = MD\frac{\partial ^2 c }{\partial \tau ^2}-D\frac{\partial c }{\partial n_\mathcal{B}} + f_B \quad \text{ on } \Gamma_\mathcal{B}\\ \end{array} \right. \label{accuracy2D} \end{eqnarray} choosing $\displaystyle f,f_N$ and $ f_B $ in such a way that $ c = c _{exa}$ is the exact solution. The computational domain is $\mathcal{S} =[-1,1]\times[-1,1]$, the radius of the bubble is $R_\mathcal{B} = 0.4$, while $M = 2 \times 10^{-4}$ and $ D = 0.1$.
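The observed accuracy orders $p_\gamma$ reported in Table~\ref{table2D} are obtained from the errors on two successive grids ($N$ and $2N$ cells per direction, i.e.~$h$ halved) as $p=\log_2(e_N/e_{2N})$; in code:

```python
import math

def observed_order(e_coarse, e_fine):
    """Observed order of accuracy from the errors on grids with N and 2N
    cells per direction (spatial step halved)."""
    return math.log2(e_coarse / e_fine)
```

For instance, the entries $p_1$ of Table~\ref{table2D} are recovered from consecutive values of $e_1$.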
We choose the following exact solution: \begin{align} \label{exact_rho} c _{exa}(x,y,t) &= \cos(t)^2 c _0(x,y) + \sin(t)^2 c _1(x,y)&\\ \nonumber c _0(x,y) &= \exp\left(-\frac{\left(x-x_0\right)^2+\left(y-y_0\right)^2}{\sigma_0}\right),\quad x_0 = 0,\quad y_0 = -0.6,\quad \sigma_0 = 0.1&\\ \nonumber c _1(x,y) &= \exp\left(-\frac{\left(x-x_1\right)^2+\left(y-y_1\right)^2}{\sigma_1}\right),\quad x_1 = 0,\quad y_1 = -0.7,\quad \sigma_1 = 0.1.& \end{align} We compute the $L^1, {L}^2$ and ${L}^\infty$ norms of the relative error at $t=\pi/8$ \begin{align} \label{relativeerr} e_\gamma = \frac{|| c - c _{exa}||_\gamma}{|| c _{exa}||_\gamma}, \quad \gamma = 1,2,\infty \end{align} for different values of $N$ and show the results in Table~\ref{table2D} and in the left panel of Fig.~\ref{fig:acc2D3D}, confirming numerically that the method is second order accurate. \begin{table} \centering \begin{tabular}{||c||c||c||c||c||c||c||} \hline \hline $N$ & $e_1$ & {$p_1$}& $e_{2}$ & {$p_{2}$} & $e_{\infty}$ & ${p_{\infty}}$ \\ \hline \hline 40 & 5.165E-02 & - & 4.377E-02 & - & 5.418E-02 & - \\ 80 & 1.234E-02 & {2.066} & 1.045E-02 & {2.066} & 1.453E-02 & {1.898} \\ 160 & 3.054E-03 & {2.014} & 2.584E-03 & {2.016} & 3.696E-03 & {1.975} \\ 320 & 7.766E-04 & {1.975} & 6.472E-04 & {1.997} & 9.323E-04 & {1.987} \\ \hline \hline \end{tabular} \caption{\textit{Relative errors $e_1$, $e_2$ and $e_\infty$ at $t=\pi/8$ and accuracy orders $p_1$, $p_2$ and $p_\infty$ in $\mathcal{L}^1, \mathcal{L}^2$ and $\mathcal{L}^\infty$ norms, respectively, for $ c $ in the 2D test of Sect.~\ref{sect:2dtest}. 
The exact solution is~\eqref{exact_rho}.}} \label{table2D} \end{table} \subsection{Accuracy tests in 3D axisymmetric formulation}\label{sect:2dAStest} \begin{figure} \centering \includegraphics[width=0.38\textwidth]{Figures/domains3D_squared.pdf} \caption{\textit{Representation of the domain $\Omega$ in 3D axisymmetric: $\Gamma_{\mathcal{S}}$ is the external wall (top, right and bottom boundaries); $\mathcal{B} $ is the bubble with boundary $\Gamma_{\mathcal{B}}$ and radius $R_\mathcal{B}$; $\Gamma_{c}$ is the axis of symmetry (left boundary). }} \label{3Daxisym} \end{figure} In this section we test the accuracy of the method for a 3D axisymmetric model (see Fig.~\ref{3Daxisym}). The computational domain is $\mathcal{S} = [0,2]\times[-1,1]$, the radius of the bubble is $R_\mathcal{B} = 0.4$, $M = 2 \times 10^{-4}$ and $D = 0.1$. The coordinates are the radial distance $\xi$ and the vertical coordinate $z$. The problem reads (see Fig.~\ref{3Daxisym}): \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle \frac{\partial c }{\partial t} =D \left( \frac{\partial^2 c }{\partial \xi^2} + \frac{1}{\xi} \frac{\partial c }{\partial \xi} + \frac{\partial^2 c }{\partial z^2} \right) \quad \text{ in } \Omega \\ \displaystyle \nabla c \cdot n_\mathcal{S} = 0 \quad \text{ on } \Gamma_\mathcal{S} \cup \Gamma_c\\ \displaystyle M\frac{\partial c }{\partial t} = MD\frac{\partial ^2 c }{\partial \tau ^2}-D\frac{\partial c }{\partial n_\mathcal{B}}\quad \text{ on } \Gamma_\mathcal{B}\\ \end{array} \right.
\label{system3D} \end{eqnarray} We choose the following exact solution: \begin{align} \label{exact_rho3D} c _{exa}(\xi,z,t) &= \cos(t)^2 c _0(\xi,z) + \sin(t)^2 c _1(\xi,z)&\\ \nonumber c _0(\xi,z) &= \exp\left(-\frac{\left(\xi-\xi_0\right)^2+\left(z-z_0\right)^2}{\sigma_0}\right),\quad \xi_0 = 0,\quad z_0 = -0.6,\quad \sigma_0 = 0.1&\\ \nonumber c _1(\xi,z) &= \exp\left(-\frac{\left(\xi-\xi_1\right)^2+\left(z-z_1\right)^2}{\sigma_1}\right),\quad \xi_1 = 0.1,\quad z_1 = -0.7,\quad \sigma_1 = 0.1& \end{align} and then we calculate the relative errors at $t=\pi/8$ as in Eq.~\eqref{relativeerr}. Results are presented in Table~\ref{table3D} and in the right panel of Fig.~\ref{fig:acc2D3D}, showing second order accuracy. \begin{table} \centering \begin{tabular}{||c||c||c||c||c||c||c||} \hline \hline $N$ & $e_1$ & {$p_1$}& $e_{2}$ & {$p_{2}$} & $e_{\infty}$ & ${p_{\infty}}$ \\ \hline \hline 40 & 1.868E-01 & - & 1.233E-01 & -& 1.299E-01 & -\\ 80 & 4.361E-02 & {2.099} & 2.885E-02 & {2.096} & 3.123E-02 & {2.057} \\ 160 & 1.075E-02 & {2.021} & 7.103E-03 & {2.022} & 7.728E-03 & {2.015} \\ 320 & 2.708E-03 & {1.989} & 1.771E-03 & {2.004} & 1.926E-03 & {2.004} \\ \hline \hline \end{tabular} \caption{\textit{Relative errors $e_1$, $e_2$ and $e_\infty$ at $t=\pi/8$ and accuracy orders $p_1$, $p_2$ and $p_\infty$ in $\mathcal{L}^1, \mathcal{L}^2$ and $\mathcal{L}^\infty$ norms, respectively, for $ c $ in the 3D axisymmetric test of Sect.~\ref{sect:2dAStest}.
The exact solution is Eq.~\eqref{exact_rho3D}.}} \label{table3D} \end{table} \begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.5\textwidth} \includegraphics[width=\textwidth]{Figures/2D_rho_reduced_2.pdf} \end{minipage}\hfill \begin{minipage}[b] {.5\textwidth} \includegraphics[width=\textwidth]{Figures/3D_rho_reduced_2.pdf} \end{minipage} \hspace*{\fill} \caption{\textit{Representation of the relative errors at $t =\pi/8$ in logarithmic scale against the value of $N \in \{40,80,160,320\}$ in 2D (left panel, test of Sect.~\ref{sect:2dtest}, Table~\ref{table2D}) and 3D axisymmetric (right panel, test of Sect.~\ref{sect:2dAStest}, Table~\ref{table3D}).}} \label{fig:acc2D3D} \end{figure} \section{Moving bubbles}\label{sect:moving_bubble} In the following sections, unless otherwise specified, we consider the 3D axisymmetric model. If the bubble moves over time, it generates a fluid motion around it; in that case the fluid domain $\Omega(t)$ depends on time. Particles are not only subject to diffusion but are also transported by the moving fluid. The motion of a fluid past an oscillating bubble is governed by the incompressible Navier-Stokes equations. At low Reynolds numbers, the viscous forces are dominant and the convective term of the Navier-Stokes equations can be neglected, so that the motion can be described by the Stokes equations: \begin{align} \label{stokes_cylindrical} \frac{\partial \textbf{u}}{\partial t} + \nabla p &= \frac{1}{Re} \nabla^2 \textbf{u} \quad \text{ in } \Omega(t) &\\ \nonumber \nabla \cdot \textbf{u} &= 0 \quad \text{ in } \Omega(t) & \end{align} where \textbf{u} is the fluid velocity, $p$ is the pressure and $Re$ is the Reynolds number. Driven by the application to Sorption Kinetics, we are interested in modelling the fluid dynamics generated by an oscillating bubble at extremely small amplitudes ($\sim 10^{-8} \text{ m}$), resulting in a low Reynolds number $Re<0.1$ (see~\cite{multiscale_mod}).
Therefore, the Stokes equations provide a reasonable approximation of the fluid dynamics for the problems investigated in this paper. The 3D axisymmetric formulation (the coordinates are the radial distance $\xi$ and the vertical coordinate $z$) is completed by the following boundary conditions (see Fig.~\ref{3Daxisym}): \begin{align} \label{stokes_cylindricalBC} \frac{\partial \textbf{u}}{\partial n} &= 0\quad \text{ on } \Gamma_c &\\ \nonumber \textbf{u} &= 0\quad \text{ on } \Gamma_\mathcal{S} & \\ \nonumber \textbf{u} \cdot \textbf{n} &=\textbf{u}_b \cdot \textbf{n} \quad \text{ on } \Gamma_\mathcal{B}(t) & \\ \nonumber \frac{\partial (\textbf{u} \cdot \tau)}{\partial n} & = 0 \quad \text{ on } \Gamma_\mathcal{B}(t). & \end{align} The first one is the homogeneous boundary condition dictated by the axial symmetry, the second one is a no-slip boundary condition on the external wall, while the third and fourth ones are the free-slip boundary conditions at the boundary of the bubble ($\textbf{n}$ and $\tau$ are the normal and tangential vectors, respectively), where $\textbf{u}_b$ is the velocity of the bubble surface. The Stokes equations \eqref{stokes_cylindrical} are discretized in time using the Crank-Nicolson method: \begin{align}\label{stokesCN} \displaystyle \frac{\textbf{u}^{(n+1)}-\textbf{u}^{(n)}}{\Delta t}+\nabla p^{(n+1/2)} &=\frac{1}{2Re}\left(\nabla^2\textbf{u}^{(n)} + \nabla^2 \textbf{u}^{(n+1)}\right) \quad \text{ in } \Omega^{(n+1)}& \\ \nonumber \nabla \cdot \textbf{u}^{(n+1)} &= 0 \quad \text{ in } \Omega^{(n+1)}& \end{align} The pressure $p$ and the velocity components $\textbf{u} = (u, v)$ are defined on a staggered grid (see Fig.~\ref{fig:staggered}): $p$ is defined at the centre of each cell, while $u$ and $v$ are defined at the midpoints of the vertical and horizontal cell sides, respectively, i.e.~the so-called Marker-and-Cell (MAC) discretization introduced by Harlow in the sixties \cite{harlow1965numerical}.
The differential operators are discretized in space using central difference: \begin{eqnarray*} \displaystyle \frac{\partial p}{\partial \xi}\Big|_{i+1/2,j} = \frac{p_{i+1,j}-p_{i,j}}{h}, \quad \frac{\partial p}{\partial z}\Big|_{i,j+1/2} = \frac{p_{i,j+1}-p_{i,j}}{h} \\ \displaystyle \frac{\partial u}{\partial \xi} \Big|_{i+1/2,j} = \frac{u_{i+3/2,j}-u_{i-1/2,j}}{2h}, \quad \frac{\partial u}{\partial z}\Big|_{i+1/2,j} = \frac{u_{i+1/2,j+1}-u_{i+1/2,j-1}}{2h} \\ \displaystyle \frac{\partial v}{\partial \xi}\Big|_{i,j+1/2} = \frac{v_{i+1,j+1/2}-v_{i-1,j+1/2}}{2h}, \quad \frac{\partial v}{\partial z}\Big|_{i,j+1/2} = \frac{v_{i,j+3/2}-v_{i,j-1/2}}{2h} \\ \displaystyle \nabla^2u\Big|_{i+1/2,j} = \frac{u_{i+3/2,j}+u_{i-1/2,j}+u_{i+1/2,j+1}+u_{i+1/2,j-1}-4u_{i+1/2,j}}{h^2}\\ \displaystyle \nabla^2v\Big|_{i,j+1/2} = \frac{v_{i+1,j+1/2}+v_{i-1,j+1/2}+v_{i,j+3/2}+v_{i,j-1/2}-4v_{i,j+1/2}}{h^2}\\ \displaystyle \nabla \cdot \textbf{u}\Big|_{i,j} = \frac{u_{i+1/2,j}-u_{i-1/2,j}+v_{i,j+1/2}-v_{i,j-1/2}}{h} \end{eqnarray*} \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{Figures/staggeredGrid.pdf} \caption{\textit{Staggered grid for the Stokes problem. Horizontal velocity $u$ is defined on the middle points of the vertical edges of the cells (circle markers), vertical velocity $v$ is defined on the middle points of the horizontal edges of the cells (diamond markers), pressure $p$ is defined on the centers of the cells (dot markers).}} \label{fig:staggered} \end{figure} This discretization results in a linear system to be solved for $(\textbf{u}^{(n+1)},p^{(n+1/2)})$. This linear system is singular, due to the non uniqueness of $p^{(n+1/2)}$ (it is defined up to an additive constant) and it has to satisfy a compatibility condition to guarantee the existence of the solution. 
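For illustration, the staggered-grid operators above can be sketched in a few lines of NumPy (a minimal sketch of the Cartesian-form stencils written above; the array layout and function names are our own, not those of the reference implementation):

```python
import numpy as np

def mac_divergence(u, v, h):
    """Discrete divergence at cell centres on a MAC staggered grid,
    mirroring (u_{i+1/2,j} - u_{i-1/2,j} + v_{i,j+1/2} - v_{i,j-1/2}) / h.

    u: shape (nx+1, ny), values on the vertical cell sides;
    v: shape (nx, ny+1), values on the horizontal cell sides.
    """
    return (u[1:, :] - u[:-1, :] + v[:, 1:] - v[:, :-1]) / h

def five_point_laplacian(w, h):
    """Standard 5-point Laplacian on the interior nodes of a scalar field w."""
    return (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:] + w[1:-1, :-2]
            - 4.0 * w[1:-1, 1:-1]) / h**2
```

For low-degree polynomial fields (e.g.~$u=\xi$, $v=0$, or $w=\xi^2+z^2$) these central-difference stencils reproduce the continuous operators exactly, which is a convenient sanity check of an implementation.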
Following the approach presented in~\cite{COCO2020109623}, the issue is circumvented by augmenting the problem \eqref{stokesCN} with an additional scalar unknown $\zeta \in \mathbb{R}$ and an additional equation for $p$ as follows: \begin{align}\label{stokes_aug} \displaystyle \frac{\textbf{u}^{(n+1)}-\textbf{u}^{(n)}}{\Delta t}+\nabla p^{(n+1/2)} &=\frac{1}{2Re}\left(\nabla^2\textbf{u}^{(n)} + \nabla^2 \textbf{u}^{(n+1)}\right) \quad \text{ in } \Omega^{(n+1)}& \\ \nonumber \nabla \cdot \textbf{u}^{(n+1)} &= \zeta \quad \text{ in } \Omega^{(n+1)}&\\ \nonumber \int_\Omega p\, d\Omega &= 0 \quad \text{ in } \Omega^{(n+1)}& \end{align} The problem then consists of finding $ (\textbf{u}^{(n+1)}, p^{(n+1/2)} , \zeta)$ that satisfies Eq.~\eqref{stokes_aug}. We observe that the divergence-free condition is not guaranteed, namely $\zeta$ is usually different from zero. However, the divergence decays with the same order as the method, i.e.~$\zeta = O(h^2)$, where $h$ is the spatial step, and then the overall accuracy order is not degraded (see~\cite{COCO2020109623} for more details). We observed numerically that in the absence of a bubble, namely $\Omega = \mathcal{S}$, we have $\zeta = 0$ within machine precision. This is typically the case for domains without curved boundaries. The third equation of (\ref{stokes_aug}) is discretized by the standard midpoint rule, leading to the linear equation \begin{equation} \sum_{(\xi_i,z_j) \text{ internal points }} p_{i,j} = 0. \end{equation} The curved boundary is treated using the ghost-point technique described in~\cite{COCO2020109623}, similarly to the approach presented in Sect.~\ref{sect:discspace}.
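The role of the augmentation can be illustrated on a toy problem whose matrix, like the discrete Stokes operator, is singular with a constant nullspace (a minimal sketch; the periodic Laplacian below stands in for the actual Stokes discretization, and all names are ours):

```python
import numpy as np

def solve_augmented(A, b):
    """Solve A x = b when A is singular with a constant nullspace, by adding
    a scalar unknown zeta to every equation (A x - zeta * 1 = b) together
    with the zero-mean constraint sum(x) = 0, mimicking the augmented
    system for (u, p, zeta)."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = -1.0      # zeta enters the right-hand side of every equation
    M[n, :n] = 1.0       # discrete analogue of the condition int p = 0
    sol = np.linalg.solve(M, np.append(b, 0.0))
    return sol[:n], sol[n]

# 1D periodic Laplacian: singular, nullspace spanned by the constants.
n = 6
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = 1.0
```

When the right-hand side is compatible (here: zero mean), the computed $\zeta$ vanishes to machine precision, consistently with the observation above for domains without curved boundaries.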
\subsection{Test 1: pulsating bubble}\label{sect:pulsating} In this test we want to model the expansion/compression of a (pulsating) bubble, represented by a sphere $\mathcal{B}(t)$ centred at the origin and with radius: \begin{equation} R(t) = R_\mathcal{B} (1 + A\sin(\omega t)) \label{radius_t} \end{equation} where $ R_\mathcal{B}$ is the radius of the bubble at time $t=0$. The velocity of the bubble surface is \begin{equation}\label{bcub} \textbf{u}_b (\xi,z) = R'(t) \, \textbf{n} = A\, R_\mathcal{B} \, \omega \cos(\omega \, t) \, \textbf{n}, \text{ where } \textbf{n} = (\xi,z)/ \sqrt{\xi^2+z^2} \text{ and } \sqrt{\xi^2+z^2} = R(t). \end{equation} The exact solution for the 3D axisymmetric Stokes problem~\eqref{stokes_cylindrical} with free-slip boundary conditions on the bubble surface (third and fourth equations of~\eqref{stokes_cylindricalBC}) in a semi-infinite domain $\Omega(t) = \left\{ (\xi,z) \colon 0<\xi<+\infty, \right.$ $ \left. \xi^2+z^2>(R(t))^2\right\}$ is: \begin{equation}\label{exactu} \textbf{u}_{\rm exa} = R'(t) \frac{(R(t))^2}{(\xi^2+z^2)^{3/2}} \cdot \begin{pmatrix} \xi \\ z \end{pmatrix} , \quad p = R(t) (R''(t) R(t)+2 (R'(t))^2)/\sqrt{\xi^2+z^2}. \end{equation} In a finite domain $\Omega(t) = \mathcal{S} \backslash \mathcal{B}(t)$ we cannot prescribe the wall boundary conditions $\textbf{u}=0$ on the external boundary $\Gamma_\mathcal{S}$ otherwise the mass conservation is not guaranteed (since the volume of the bubble is not constant over time). For this specific test we then prescribe the exact velocity \eqref{exactu} at $\Gamma_\mathcal{S}$. We choose $R_\mathcal{B} = 0.253$, $A=0.04$ and $\omega= 2 \pi \, \nu$ with $\nu=50$ and we compute the numerical error at time $t_{fin}=0.1$ as the difference between the numerical solution and the exact solution \eqref{exactu}. 
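A direct numerical check of the boundary condition \eqref{bcub} satisfied by the exact solution \eqref{exactu} can be sketched as follows (parameter values from the text; function names are ours):

```python
import numpy as np

R_B, A, omega = 0.253, 0.04, 2 * np.pi * 50   # Test 1 parameters from the text

def R(t):
    """Pulsating radius R(t) = R_B (1 + A sin(omega t))."""
    return R_B * (1 + A * np.sin(omega * t))

def R_prime(t):
    """Surface speed R'(t) = A R_B omega cos(omega t)."""
    return R_B * A * omega * np.cos(omega * t)

def u_exact(xi, z, t):
    """Exact Stokes velocity u_exa: purely radial, decaying like 1/r^2."""
    factor = R_prime(t) * R(t)**2 / (xi**2 + z**2)**1.5
    return factor * xi, factor * z
```

On the bubble surface $\sqrt{\xi^2+z^2}=R(t)$ the normal component of $\textbf{u}_{\rm exa}$ equals $R'(t)$, as required by the third equation of \eqref{stokes_cylindricalBC}.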
The results are presented in Fig.~\ref{test_1_2}, where the second order accuracy is confirmed for both velocity components $u$ (radial component) and $v$ (vertical component) in the $L^1$, $L^2$ and $L^\infty$ norms. \begin{figure}[htp] \centering \includegraphics[width=0.6\textwidth]{Figures/osc1_error_uv.pdf} \caption{\textit{Representation of the relative error for the pulsating bubble, Test 1, Sect.~\ref{sect:pulsating}. We plot the errors for the two components of the velocity $u$ and $v$ in the $L^1$ and $L^\infty$ norms. In this test $\Omega = [0,2]\times[-1,1]$, $A$ = 0.04, $\nu$ = 1, $R_\mathcal{B}$ = 0.253 and final time $t_{fin} $ = 0.1.}} \label{test_1_2} \end{figure} \subsection{A steady computational bubble approach}\label{sect:fixed} When the amplitude of the bubble oscillation is sufficiently small compared to its dimensions, then $R(t) \approx R_\mathcal{B} $ and it is reasonable to simplify the model by assuming that the velocity of the bubble surface is assigned at a distance of $R_\mathcal{B}$ from the origin rather than $R(t)$: \begin{equation}\label{bcfixedtest1} \textbf{u}_b(\xi,z) = A \, \omega \cos(\omega\,t) \cdot \begin{pmatrix} \xi \\ z \end{pmatrix} \text{ for } \sqrt{\xi^2+z^2} = R_\mathcal{B}. \end{equation} In this way, the computational domain does not move in time, since $\mathcal{B}(t)=\mathcal{B}(0)$ for any $t>0$ (steady computational bubble) and the fluid motion is generated purely by the boundary conditions.
The exact solution for the 3D axisymmetric Stokes problem~\eqref{stokes_cylindrical} with free-slip boundary conditions on the bubble surface (third and fourth equations of~\eqref{stokes_cylindricalBC}), with surface velocity defined by \eqref{bcfixedtest1}, in a semi-infinite domain $\Omega = \left\{ 0<\xi<+\infty, \xi^2+z^2>R_\mathcal{B}^2\right\}$ is: \begin{equation}\label{exactufixed} \textbf{u}^{\rm f}_{\rm exa} = R'(t) \frac{(R_\mathcal{B})^2}{(\xi^2+z^2)^{3/2}} \cdot \begin{pmatrix} \xi \\ z \end{pmatrix}, \quad p = R''(t) R_\mathcal{B}^2 /\sqrt{\xi^2+z^2}. \end{equation} The difference between the exact solutions \eqref{exactu} and \eqref{exactufixed} is $\textbf{u}_{\rm exa}-\textbf{u}^{\rm f}_{\rm exa} = \mathcal{O}(A)$. Therefore, for a fixed spatial step, the difference between the two approaches decays as $A \rightarrow 0$. This is confirmed numerically in Fig.~\ref{2errors_1_2} (left panel), where we compute the difference between the numerical solutions of the two approaches $\textbf{u}_h$ and $\textbf{u}_h^{f}$ at a fixed value of the spatial step $h=1/50$ and different values of $A$ over a period $T = 1/\nu$ as follows: \begin{equation} \displaystyle \frac{1}{T}\int_0^T\frac{\int_{\Omega(t)}\left|\textbf{u}_h-\textbf{u}^{f}_h\right|^p}{\int_{\Omega(t)}\left|\textbf{u}_h^{f}\right|^p}dt, \quad p = 1,2,\infty. \label{period_error} \end{equation} In the right panel of Fig.~\ref{2errors_1_2} we compute the numerical error as the difference between the numerical and the exact solutions for both approaches at a fixed spatial step and different values of $A$. We observe that the second approach generally outperforms the first one, although for sufficiently small values of $A$ the two errors reach a plateau, meaning that the overall error is dominated by the discretization error at the fixed spatial step.
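The time-averaged relative difference \eqref{period_error} can be approximated from solution snapshots as in the following sketch (a discrete time average over one period; the function and argument names are ours):

```python
import numpy as np

def period_averaged_difference(u_moving, u_fixed, p=1):
    """Discrete version of
    (1/T) * int_0^T [ int |u - u_f|^p / int |u_f|^p ] dt,
    with snapshots of shape (n_steps, ...) spanning one period.
    As in the paper, the ratio of the two integrals is taken without
    p-th roots."""
    axes = tuple(range(1, u_fixed.ndim))
    if np.isinf(p):
        num = np.abs(u_moving - u_fixed).max(axis=axes)
        den = np.abs(u_fixed).max(axis=axes)
    else:
        num = (np.abs(u_moving - u_fixed)**p).sum(axis=axes)
        den = (np.abs(u_fixed)**p).sum(axis=axes)
    return np.mean(num / den)   # uniform-in-time quadrature over the period
```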
We conclude that when $A/R_\mathcal{B} \ll 1$ (as in the cases investigated in this paper) it is more efficient to keep a steady computational bubble $\mathcal{B}$ and simulate the bubble motion by assigning a varying velocity at the computational surface $\partial \mathcal{B}$. We follow this approach in the following tests. \begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.5\textwidth} \includegraphics[width=\textwidth]{Figures/Linf_integralerror_2numerics_N240_plot.pdf} \end{minipage}\hfill \begin{minipage}[b] {.5\textwidth} \includegraphics[width=\textwidth]{Figures/Linf_integralerror_2errors_N240_plot.pdf} \end{minipage} \hspace*{\fill} \caption{\textit{Left panel: relative difference between the numerical solutions of the two approaches described in Sections~\ref{sect:pulsating} and~\ref{sect:fixed} against the value of $A$. Right panel: relative errors of the two approaches computed as the difference between the numerical solutions and the respective exact solutions~\eqref{exactu} and~\eqref{exactufixed} against the value of $A$. In both plots we have $A \in \{h/64, h/16, h/4, h, 4h, 16h\}$, with $h=1/120$, and the time range is $[t_{\rm in},t_{\rm fin}] = [1,2]$, with $\nu = 1$.} } \label{2errors_1_2} \end{figure} \subsection{Test 2: Oscillating bubble}\label{sect:oscbubble} In the following tests we model the advection-diffusion process of particles in a moving fluid past an oscillating bubble: \begin{equation}\label{pde2dcoupled} \frac{\partial c }{\displaystyle \partial t} = \nabla \cdot \left( D \nabla c - c \textbf{u} \right) \end{equation} where $\textbf{u} = (u,v)$ is the solution of the Stokes problem~\eqref{stokes_cylindrical}.
In general, we describe the motion of a bubble (that is not necessarily a sphere) by its parametric equations: \begin{align*} \xi(\theta,t) &= \xi_c(t)+\delta_\xi(t) \cos(\theta) &\\ z(\theta,t) &= z_c(t)+\delta_z(t) \sin(\theta) & \end{align*} where $(\xi_c(t),z_c(t))$ is the centre of the bubble, while $\delta_\xi(t)$ and $\delta_z(t)$ regulate the deformation from a spherical shape. We assume that at time $t=0$ the bubble is a sphere centred at the origin and with radius $R_\mathcal{B}$, then $\xi_c(0)=z_c(0)=0$ and $\delta_\xi(0)=\delta_z(0)=R_\mathcal{B}$. Following the same approach as in Sect.~\ref{sect:fixed}, we keep a steady computational bubble $\mathcal{B}(t)=\mathcal{B}(0)$ for $t>0$ and we model the velocity of the surface $\textbf{u}_b(\xi,z) = (u_b(\xi,z),v_b(\xi,z))$ at $(\xi,z)\colon \sqrt{\xi^2+z^2}=R_\mathcal{B}$ as: \begin{align*} u_b(\xi,z) &= \xi'_c(t)+\delta'_\xi(t) \cos(\theta) &\\ v_b(\xi,z) &= z'_c(t)+\delta'_z(t) \sin(\theta), \quad \text{ where } \quad \theta = \arctan (z/\xi).& \end{align*} In \textsc{Test2a}, we model a harmonic vertical oscillation of the spherical bubble: \[ \xi_c(t)=0, \quad z_c(t)=A \sin(2 \pi \nu t), \quad \delta_\xi(t) = \delta_z(t) = R_\mathcal{B}, \] while in \textsc{Test2b}, we model an ellipsoidal deformation of the bubble: \begin{equation}\label{deftest2b} \xi_c(t)=z_c(t)=0, \quad \delta_z(t) = R_\mathcal{B} (1+A \sin(2 \pi \nu t)), \quad \delta_\xi(t) = \sqrt{R_\mathcal{B}^3/\delta_z(t)}. \end{equation} In \eqref{deftest2b}, we observe that $\delta_\xi(t)$ has been defined to guarantee that the volume of the ellipsoid, $V(t)=\frac{4}{3}\pi \delta_\xi(t)^2 \delta_z(t)$, is constant over time. We choose $R_\mathcal{B}=0.258$, $A = 0.01$ and $\nu=10$ (\textsc{Test2a10} and \textsc{Test2b10}) or $\nu=1000$ (\textsc{Test2a1000} and \textsc{Test2b1000}). In Fig.~\ref{fig:osc1and2} we plot the vector fields of the fluid velocity at selected fractions of the first oscillation period $T$ ($\nu \cdot t = 0.25,0.50,0.75,1.00$).
The colormap represents the magnitude of the velocity, while the red dashed line is the fictitious representation of the bubble (where $A$ has been amplified by a factor of 20 for graphical purposes). In \textsc{Test2a10} we observe that a small vortex is generated next to the bubble, moving to the right and then disappearing at around $\xi=1.5$ between $t \cdot \nu = 0.25$ and $t \cdot \nu = 0.50$. At that time, a new vortex is generated next to the bubble, disappearing between $t \cdot \nu = 0.75$ and $t \cdot \nu = 1.00$, and so on, approaching a periodic behaviour. A similar mechanism is observed in \textsc{Test2a1000}, except that the vortices disappear much closer to the bubble than in \textsc{Test2a10}, at around $\xi=0.5$. In \textsc{Test2b10} two vortices are generated at the same time, moving towards the top right and bottom right corners of the domain, respectively. They disappear in favour of new vortices with the same timeline as in \textsc{Test2a10}. This phenomenon is observed in experimental results~\cite{tho2007cavitation}. In \textsc{Test2b1000} a similar behaviour is observed, except that the two vortices disappear much closer to the bubble than in \textsc{Test2b10}.
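The volume-preserving choice of $\delta_\xi(t)$ in \eqref{deftest2b} is easily verified numerically (a minimal sketch with the \textsc{Test2b10} parameters; names are ours):

```python
import numpy as np

R_B, A, nu = 0.258, 0.01, 10.0    # Test2b10 parameters from the text

def semi_axes(t):
    """Ellipsoid semi-axes of Test 2b: delta_xi is slaved to delta_z so
    that the volume 4/3 * pi * delta_xi^2 * delta_z stays constant."""
    delta_z = R_B * (1 + A * np.sin(2 * np.pi * nu * t))
    delta_xi = np.sqrt(R_B**3 / delta_z)
    return delta_xi, delta_z

def volume(t):
    delta_xi, delta_z = semi_axes(t)
    return 4.0 / 3.0 * np.pi * delta_xi**2 * delta_z
```

Indeed, $\delta_\xi(t)^2 \delta_z(t) = R_\mathcal{B}^3$ identically, so the volume equals $\frac{4}{3}\pi R_\mathcal{B}^3$ at every instant.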
\begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_10_timenu0p25.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_10_timenu0p50.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_10_timenu0p75.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_10_timenu1p00.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_1000_timenu0p25.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_1000_timenu0p50.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_1000_timenu0p75.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc1_nu_1000_timenu1p00.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_10_timenu0p25.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_10_timenu0p50.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_10_timenu0p75.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_10_timenu1p00.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_1000_timenu0p25.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_1000_timenu0p50.pdf} \end{minipage}\hfill 
\begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_1000_timenu0p75.pdf} \end{minipage}\hfill \begin{minipage}[b] {.249\textwidth} \centering \includegraphics[width=\textwidth]{Figures/osc2_nu_1000_timenu1p00.pdf} \end{minipage}\hfill \caption{Vector fields of the velocity at selected fractions of the first oscillation period $T$ ($\nu \cdot t = 0.25,0.50,0.75,1.00$) for the numerical tests of Sect.~\ref{sect:oscbubble}. Each row of plots corresponds to a numerical test. From top to bottom: \textsc{Test2a10}, \textsc{Test2a1000}, \textsc{Test2b10} and \textsc{Test2b1000}. The red dashed line is the fictitious representation of the bubble (where $A$ has been amplified by a factor 20 for graphical purposes).} \label{fig:osc1and2} \end{figure} In Table~\ref{table:divUaccuracy} we show that $\nabla \cdot \vec{u}_h$ decays with the same order of accuracy as the numerical error on $\vec{u}_h$, i.e.~$\nabla \cdot \vec{u}_h = \mathcal{O}(h^2)$, where $h$ is the spatial step. For this purpose, we use \textsc{Test2a1000} and \textsc{Test2b1000}. Since we do not know the exact solution for these tests, we approximate the order of accuracy $q$ of $\vec{u}_h$ by using Richardson extrapolation: \[ q \approx \log_2 \left( \frac{\left\| \vec{u}_h - \vec{u}_{h/2} \right\|_\infty}{\left\| \vec{u}_{h/2} - \vec{u}_{h/4} \right\|_\infty} \right). \] We observe numerically that $q \approx 2$.
The relative error on $\vec{u}_h$ is then approximated by \[ {e}_h = \frac{\left\| \vec{u}_h - \vec{u}_{\text{exa}} \right\|_\infty}{\left\| \vec{u}_{\text{exa}} \right\|_\infty} \approx \frac{4}{3} \frac{\left\| \vec{u}_h - \vec{u}_{h/2} \right\|_\infty}{\left\| \vec{u}_{h} \right\|_\infty} \] where the factor $4/3$ follows from the fact that, for a second-order method, $\left\| \vec{u}_h - \vec{u}_{h/2} \right\|_\infty \approx (1-1/4) \left\| \vec{u}_h - \vec{u}_{\text{exa}} \right\|_\infty$. The relative error on $\nabla \cdot \vec{u}_h$ is computed by normalization with $\nabla \vec{u}_h$: \[ {e}^{\text{div}}_h = \frac{\left\| \nabla \cdot \vec{u}_h \right\|_\infty}{\left\| \nabla \vec{u}_h \right\|_\infty} \] with $\left\| \nabla \vec{u}_h \right\|_\infty = \max \left\{ \left\| \, \left| \nabla u_h \right| \, \right\|_\infty, \left\| \, \left| \nabla v_h \right| \, \right\|_\infty \right\}$, where $\left| \nabla u_h \right|$ and $\left| \nabla v_h \right|$ represent the central finite-difference approximations of $(u_\xi^2+u_z^2)^{1/2}$ and $(v_\xi^2+v_z^2)^{1/2}$, respectively. The order of accuracy $q^\text{div}$ is approximated by \[ q^\text{div} \approx \log_2 \left( \frac{e_h^\text{div}}{e_{h/2}^\text{div}} \right) \] We also observe from Table~\ref{table:divUaccuracy} that the relative error on the divergence ${e}^{\text{div}}_h$ is about one order of magnitude smaller than the relative error on the velocity ${e}_h$.
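The Richardson-type estimates above amount to a few lines of code (a sketch; the manufactured data in the test stand in for the actual grid solutions, which are not reproduced here):

```python
import numpy as np

def observed_order(u_h, u_h2, u_h4):
    """Richardson estimate of the order q from three nested solutions
    restricted to the same (coarse) grid points:
    q ~ log2( ||u_h - u_{h/2}|| / ||u_{h/2} - u_{h/4}|| )."""
    e1 = np.max(np.abs(u_h - u_h2))
    e2 = np.max(np.abs(u_h2 - u_h4))
    return np.log2(e1 / e2)

def relative_error(u_h, u_h2):
    """e_h ~ (4/3) ||u_h - u_{h/2}|| / ||u_h|| for a second-order method."""
    return 4.0 / 3.0 * np.max(np.abs(u_h - u_h2)) / np.max(np.abs(u_h))
```

For data with an exactly second-order error, `observed_order` returns $q=2$ and `relative_error` recovers the true relative error up to the normalization $\|\vec{u}_h\|$ in place of $\|\vec{u}_{\rm exa}\|$.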
\begin{table}[H] \centering \begin{tabular}{|| c | c | c | c | c | c ||} \hline \hline No.~of points & $\left\| \vec{u}_h - \vec{u}_{h/2} \right\|_\infty$ & order $q$ & $e_h$ & $e_h^\text{div}$ & order $q^\text{div}$\\ \hline 32 $\times$ 32 & 2.12 $\cdot 10^{-1}$ & 1.79 & 1.62 $\cdot 10^{-3}$ & 2.02 $\cdot 10^{-4}$ & 1.74 \\ 64 $\times$ 64 & 6.15 $\cdot 10^{-2}$ & 2.11 & 4.69 $\cdot 10^{-4}$ & 6.05 $\cdot 10^{-5}$ & 1.96 \\ 128 $\times$ 128 & 1.42 $\cdot 10^{-2}$ & 2.02 & 1.08 $\cdot 10^{-4}$ & 1.55 $\cdot 10^{-5}$ & 1.99 \\ 256 $\times$ 256 & 3.50 $\cdot 10^{-3}$ & - & 2.67 $\cdot 10^{-5}$ & 3.91 $\cdot 10^{-6}$ & 1.92 \\ 512 $\times$ 512 & - & - & - & 1.03 $\cdot 10^{-6}$ & -\\ \hline \hline No.~of points & $\left\| \vec{u}_h - \vec{u}_{h/2} \right\|_\infty$ & order $q$ & $e_h$ & $e_h^\text{div}$ & order $q^\text{div}$\\ \hline 32 $\times$ 32 & 4.51 $\cdot 10^{-1}$ & 1.88 & 3.44 $\cdot 10^{-3}$ & 6.12 $\cdot 10^{-4}$ & 1.79 \\ 64 $\times$ 64 & 1.23 $\cdot 10^{-1}$ & 1.89 & 9.36 $\cdot 10^{-4}$ & 1.77 $\cdot 10^{-4}$ & 1.88 \\ 128 $\times$ 128 & 3.32 $\cdot 10^{-2}$ & 1.94 & 2.53 $\cdot 10^{-4}$ & 4.84 $\cdot 10^{-5}$ & 1.84 \\ 256 $\times$ 256 & 8.68 $\cdot 10^{-3}$ & - & 6.61 $\cdot 10^{-5}$ & 1.35 $\cdot 10^{-5}$ & 1.93 \\ 512 $\times$ 512 & - & - & - & 3.55 $\cdot 10^{-6}$ & -\\ \hline \hline \end{tabular} \caption{\textit{Relative numerical errors $e_h$ (on $\vec{u}_h$) and $e_h^\text{div}$ (on $\nabla \cdot \vec{u}_h$) and respective orders of accuracy $q$ and $q^\text{div}$ for \textsc{Test2a1000} (top) and \textsc{Test2b1000} (bottom). }} \label{table:divUaccuracy} \end{table} In Fig.~\ref{fig:detector} we plot the particle concentration $c$ at a specific \textit{detector\/} point $(\xi_d=0.4,z_d=0)$ over time $t$ for different numerical tests (\textsc{Test2a10},\textsc{Test2a1000},\textsc{Test2b10},\textsc{Test2b1000}). 
At the initial time $t=0$ the particles follow a Gaussian distribution centred at $(\xi_0=0.8,z_0=0)$, as follows (initial condition): \begin{equation}\label{eq:IC} c(\xi,z,0) = a_1 e^{-a_2((\xi-\xi_0)^2+(z-z_0)^2)}, \quad a_1 = \left(2\pi\sigma^2\right)^{-3/2}, \quad a_2 = \left(2\sigma^2\right)^{-1}, \sigma = 0.4. \end{equation} The black line refers to the case of a steady bubble (${\bf u}=0$). All tests present an oscillating behaviour of the particle concentration in the vicinity of the black line. The oscillation frequency of the particle concentration is strictly related to the oscillation frequency of the bubble, while the amplitude depends on the type of bubble oscillation (harmonic or ellipsoidal). However, the temporal average of the particle concentration of the proposed tests, namely $\overline{c}(x,t) = \frac{1}{T}\int_{t-T/2}^{t+T/2} c(x,\tau) \, d \tau = \nu \int_{t-1/(2\nu)}^{t+1/(2\nu)} c(x,\tau) \, d \tau$, does not seem to match the black line, suggesting that the bubble oscillation actually changes the particle distribution in the vicinity of the bubble. The problem presents a temporal multiscale effect and a rigorous mathematical explanation of this phenomenon should provide the associated PDEs for $\overline{c}$, showing that these equations differ from the simple diffusion equations with ${\bf u}=0$. This is part of our ongoing effort. \begin{figure}[htp] \centering \hfill \begin{minipage}[b] {.5\textwidth} \includegraphics[width=\textwidth]{Figures/test2AB10.pdf} \end{minipage}\hfill \begin{minipage}[b] {.5\textwidth} \includegraphics[width=\textwidth]{Figures/test2AB1000_2.pdf} \end{minipage} \hspace*{\fill} \caption{\textit{Detector values of the particle concentration $c$ of Eq.~\eqref{pde2dcoupled} at $(\xi_d=0.4,z_d=0)$. Initial condition is~\eqref{eq:IC}. The spatial step is $h = 1/120$.
On the left we plot the comparison between \textsc{Test2a10} (blue line), \textsc{Test2b10} (red line) and the steady-bubble case with $\textbf{u}=0$ (black line). Analogously, on the right, we show the comparison between \textsc{Test2a1000} and \textsc{Test2b1000}. The dashed lines represent the mean values of the respective tests. }} \label{fig:detector} \end{figure} \section{Conclusions} \label{sec:conclusions} We have presented a second order accurate numerical method for the recently developed multiscale model of sorption kinetics \cite{multiscale_mod}, in the single carrier approximation. The problem consists of a concentration of particles that diffuse in a fluid agitated by an oscillating bubble. In addition, the bubble attracts the particles that are in the vicinity of its surface. This attraction is modelled by a time-dependent boundary condition on the bubble surface, where normal and (second order) tangential derivatives are involved. The region occupied by the bubble is implicitly described by a level-set function and the boundary conditions on the curved boundary (bubble surface) are discretized by a proper ghost-point technique. A multigrid method is designed to efficiently solve the linear system arising from the discretization of the equations. The complexity of the boundary condition on the bubble surface leads to a specific stability condition that must be satisfied by the relaxation scheme of the multigrid method. The fluid motion is modelled by the Stokes equations, solved by a monolithic approach (continuity and momentum equations are solved simultaneously). A simplified model is provided for the treatment of the boundary conditions in the case of a moving bubble, in which the bubble is computationally steady and its oscillations are modelled by a time-dependent fluid velocity imposed on its surface.
This simplification is justified by the small amplitude of the bubble oscillations, and a comparison with the more realistic moving domain model confirms that the differences between the two approaches are negligible for the problems investigated in this paper. Furthermore, this approximation makes the computation more efficient, because it avoids evolving the domain and its discretization, so the coefficients of the linear system \eqref{CNdiscspace} are time independent. Two types of bubble oscillations are implemented: harmonic and ellipsoidal oscillation, the latter providing a better representation of the experimental results existing in the literature. The particle concentration at a specific point of the domain is reconstructed over time for the two types of oscillation. The same test is repeated for a steady fluid case (actual steady bubble). We observed that the particle concentration for the oscillating bubble oscillates around an average function that is close to (but not the same as) the one obtained for the actual steady bubble. The discrepancy between the two functions can be mathematically described by numerical approaches developed for temporal multiscale problems. A more rigorous analysis of this phenomenon is part of our ongoing effort. Future work will include a saturation effect (already modelled in~\cite{multiscale_mod}, which occurs when high concentrations are reached near the surface of the bubble) and a full two-carrier model that describes the interaction between the two species of ions (with a potential obtained from a self-consistent Poisson equation).
\section*{Acknowledgments} G.R.~and C.A.~acknowledge partial support from ITN-ETN Horizon 2020 Project ModCompShock, Modeling and Computation on Shocks and Interface (Project Reference 642768), and from PRIN Project 2017 ``Innovative numerical methods for evolutionary partial differential equations and applications'' (No.~2017KKJP4X), funded by the Italian Ministry of Education, University and Research (MIUR). All authors acknowledge support from GNCS--INDAM (National Group for Scientific Computing, Italy). \bibliographystyle{unsrt}
\section{\label{sec:level1} Introduction} By Noether's principle, any continuous symmetry of the action of a physical system corresponds to a conserved quantity. In special relativity, integrating the energy-momentum tensor of matter density against Killing fields in $\mathbb{R}^{3,1}$ gives the well-defined notion of the energy-momentum 4-vector, angular momentum, and center of mass. In an attempt to generalize these concepts to general relativity, one encounters two major difficulties. Firstly, gravitation does not have mass density. Secondly, there is no symmetry in a general spacetime. As such, most studies of conserved quantities are restricted to isolated systems on which asymptotically flat coordinates exist at infinity. However, it has been conjectured \cite{Penrose} for decades that a quasi-local description of conserved quantities should exist, at least for energy-momentum and angular momentum. These are notions attached to a spacelike 2-surface $\Sigma$ in spacetime. In \cite{Wang-Yau1}, a new definition of quasi-local energy-momentum and quasi-local mass was proposed using isometric embeddings of the 2-surface into $\mathbb{R}^{3,1}$. The expression originated from the boundary term in the Hamilton-Jacobi analysis of the gravitation action \cite{Brown-York2, hh}. To each pair of $(i, t_0^\nu)$ in which $i:\Sigma\hookrightarrow \mathbb{R}^{3,1}$ is an isometric embedding and $t_0^\nu$ is a future timelike unit Killing field, a canonical gauge (see (3) in \cite{Wang-Yau1}) is chosen and a quasi-local energy is assigned. The pair is considered to be a quasi-local observer and the quasi-local mass is obtained by minimizing the quasi-local energy seen among admissible $(i, t_0^\nu)$. A critical point of the quasi-local energy is an optimal isometric embedding \cite{Wang-Yau2}. In this Letter, we use the optimal isometric embedding and the canonical gauge to transplant Killing fields in $\mathbb{R}^{3,1}$ back to the surface of interest in the physical spacetime.
In particular, this defines quasi-local angular momentum and quasi-local center of mass with respect to rotation Killing fields and boost Killing fields. We refer to \cite{Szabados} for earlier work on the definition of quasi-local angular momentum, notably \cite{Penrose2}. This new proposal is further applied to study asymptotically flat spacetimes and to define new total conserved quantities on an asymptotically flat initial data set. There are several existing definitions of total angular momentum and total center of mass, such as the Arnowitt-Deser-Misner (ADM) angular momentum (\cite{Arnowitt-Deser-Misner,Ashtekar-Hansen,Regge-Teitelboim}) and the center of mass proposed by Huisken-Yau, Regge-Teitelboim, Beig-\'OMurchadha, Christodoulou, and Schoen \cite{Huisken-Yau, Regge-Teitelboim, Beig-Omurchadha, Christodoulou, Huang}. Unlike these definitions, which rely on an asymptotically flat coordinate system or an asymptotic Killing field, the new definition is free of such ambiguities. In this Letter, we show that the new definition satisfies highly desirable properties, and fully captures the dynamics of the Einstein equation. \section{\label{sec:level2} Definition and properties of quasi-local conserved quantities} Suppose $\Sigma$ is a spacelike surface in a time-orientable spacetime $N$, $u^\nu$ is a future-pointing timelike unit normal, and $v^\nu$ is a spacelike unit normal with $u^\nu v_\nu=0$ along $\Sigma$. We recall the \textit{mean curvature vector field} $h^\nu=-kv^\nu+pu^\nu $ in \cite{Wang-Yau1} and its companion $j^\nu=k u^\nu-pv^\nu $. Both are normal vector fields along $\Sigma$ which are defined independently of the choice of $u^\nu$ and $v^\nu$. We assume the mean curvature vector field is spacelike everywhere along $\Sigma$ and the norm of the mean curvature vector is denoted by $|H|=\sqrt{k^2-p^2}>0$.
$h^\nu$ and $j^\nu$ also define a connection one-form $\alpha_H$ of the normal bundle of $\Sigma$ by \[\alpha_H=\frac{1}{k^2-p^2}\pi_{\alpha}^\beta(h^\nu\nabla_\beta j_\nu)\] where $\pi_\alpha^\beta=\delta_{\alpha}^\beta-u^\beta u_\alpha+v^\beta v_\alpha$ is the projection from the tangent bundle of $N$ onto the tangent bundle of $\Sigma$. We choose local coordinates $\{u^a\}_{a=1,2}$ on $\Sigma$ and express this one-form as $(\alpha_H)_a$ and the induced Riemannian metric on $\Sigma$ as a symmetric $(0,2)$ tensor $\sigma_{ab}$. The definition of quasi-local conserved quantities depends only on the data $(\sigma_{ab}, |H|, (\alpha_H)_a)$, or the induced Riemannian metric and the mean curvature vector field. Consider a reference isometric embedding $i:\Sigma \hookrightarrow \mathbb{R}^{3,1}$ of $\Sigma$ so that the induced metric on the image surface is $\sigma_{ab}$. Suppose the mean curvature vector of the image surface is also spacelike (an assumption that is not necessary but simplifies the exposition); we can then similarly compute the mean curvature vector field on the image surface and obtain the corresponding data $|H_0|$ and $(\alpha_{H_0})_a$. The definition of quasi-local energy, as in the Hamilton-Jacobi theory, is with respect to a constant timelike unit vector $t_0^\nu$ in $\mathbb{R}^{3,1}$. Suppose the components of the isometric embedding $i$ are given by $(X^0, X^1, X^2, X^3)$, each a smooth function on $\Sigma$. Let $\eta_{\alpha\beta}$ be the Minkowski metric and $\tau=-t_0^\mu\eta_{\mu\nu} X^\nu$ be the time function on $\Sigma$ with respect to $t_0^\nu$.
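For concreteness, the time function $\tau$ is a purely algebraic function of the embedding components, as the following sketch shows (with $\eta = \mathrm{diag}(-1,1,1,1)$; the function names are ours):

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric eta_{mu nu}

def time_function(X, t0):
    """tau = - t0^mu eta_{mu nu} X^nu for embedding components X of shape
    (..., 4) and a future timelike unit vector t0."""
    return -np.einsum('m,mn,...n->...', t0, ETA, X)
```

For the standard observer $t_0 = (1,0,0,0)$, $\tau$ reduces to $X^0$ and vanishes on a surface lying in a constant-time slice; a boosted observer tilts the time function along the surface.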
The quasi-local energy of $(\sigma_{ab}, |H|, (\alpha_H)_a)$ with respect to $(i, t_0^\nu)$ is given by \begin{equation}\label{qle}\frac{1}{8\pi}\int_\Sigma \{( \cosh\theta_0|H_0|-\cosh\theta |H|)\sqrt{1+|\nabla\tau|^2}-[\nabla_a \theta_0-\nabla_a\theta+(\alpha_{H_0})_a-(\alpha_H)_a]\nabla^a\tau\}\end{equation} where $\theta_0=\sinh^{-1}\frac{-\Delta\tau}{|H_0|\sqrt{1+|\nabla\tau|^2}}$ and $\theta=\sinh^{-1}\frac{-\Delta\tau}{|H|\sqrt{1+|\nabla\tau|^2}}$. $\Delta\tau=\nabla^a\nabla_a \tau$ and $|\nabla\tau|^2=\sigma^{ab}\nabla_a\tau\nabla_b\tau$, where $\nabla_a$ is the covariant derivative of $\sigma_{ab}$. Expression \eqref{qle} is the same as (4) in \cite{Wang-Yau1} with $N_0=\sqrt{1+|\nabla\tau|^2}$ and $N_0^\nu=-\nabla^a\tau$. This is derived from the boundary term of the surface Hamiltonian (see \cite{Brown-York2, hh}) of gravitation action in the canonical gauge condition (3) in \cite{Wang-Yau1}. It turns out that the quasi-local energy is best described by $\rho $ and $j_a$ defined as follows: \begin{equation} \label{rho} \begin{split}\rho &= \frac{\sqrt{|H_0|^2 +\frac{(\Delta \tau)^2}{1+ |\nabla \tau|^2}} - \sqrt{|H|^2 +\frac{(\Delta \tau)^2}{1+ |\nabla \tau|^2}} }{ \sqrt{1+ |\nabla \tau|^2}}. \end{split}\end{equation} and \begin{equation} \label{j_a} j_a=\rho {\nabla_a \tau }- \nabla_a [ \sinh^{-1} (\frac{\rho\Delta \tau }{|H_0||H|})]-(\alpha_{H_0})_a + (\alpha_{H})_a. \end{equation} Notice that $\theta_0-\theta=\sinh^{-1} (\frac{\rho\Delta \tau }{|H_0||H|})$ by the addition formula of the $\sinh$ function. In terms of these, the quasi-local energy is $\frac{1}{8\pi}\int_\Sigma (\rho+j_a\nabla^a\tau)$. We consider $(i, t_0^\mu)$ as a quasi-local observer and minimize quasi-local energy among all such observers.
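Since $\rho$ in \eqref{rho} is a pointwise algebraic expression in $(|H|, |H_0|, \Delta\tau, |\nabla\tau|^2)$, it can be evaluated directly once these data are known (a minimal sketch; the function and argument names are ours):

```python
import numpy as np

def qle_density(H, H0, lap_tau, grad_tau_sq):
    """Pointwise quasi-local energy density rho:
    ( sqrt(|H0|^2 + (Dtau)^2 / (1 + |grad tau|^2))
      - sqrt(|H|^2 + (Dtau)^2 / (1 + |grad tau|^2)) ) / sqrt(1 + |grad tau|^2),
    accepting scalars or arrays of pointwise surface data."""
    s = 1.0 + np.asarray(grad_tau_sq)
    q = np.asarray(lap_tau)**2 / s
    return (np.sqrt(np.asarray(H0)**2 + q)
            - np.sqrt(np.asarray(H)**2 + q)) / np.sqrt(s)
```

Two immediate consistency checks: for a surface in $\mathbb{R}^{3,1}$ observed through its own isometric embedding, $|H|=|H_0|$ pointwise and $\rho$ vanishes identically; and for $\tau \equiv 0$ the density reduces to $|H_0|-|H|$.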
A critical point of the quasi-local energy satisfies the optimal isometric embedding equation: \textit{Definition 1 \cite{Wang-Yau2, Chen-Wang-Yau1, Chen-Wang-Yau2} - An embedding $i:\Sigma\hookrightarrow \mathbb{R}^{3,1}$ satisfies the optimal isometric embedding equation for $(\sigma_{ab}, |H|, (\alpha_H)_a)$ if the components of $i$, $X^0, X^1, X^2, X^3$, as functions on $\Sigma$, satisfy $\eta_{\mu\nu} \nabla_a X^\mu \nabla_b X^\nu=\sigma_{ab}$ and there exists a future unit timelike constant vector $t_0^\nu$ such that $\tau=-t_0^\mu\eta_{\mu\nu} X^\nu$ satisfies \begin{equation}\label{optimal}\nabla^a j_a=0.\end{equation}} Such an optimal isometric embedding may not be unique, but it is shown in \cite{Chen-Wang-Yau2} that it is locally unique if $\rho>0$. The quasi-local mass and quasi-local energy-momentum 4-vector with respect to $(i, t_0^\nu)$ are $m= \frac{1}{8\pi}\int_\Sigma \rho $ and \[p^\nu= \frac{1}{8\pi}\int_\Sigma \rho \,t_0^\nu,\] respectively. Let $(x^0, x^1, x^2, x^3)$ denote the standard coordinate system on $\mathbb{R}^{3,1}$. \textit{Definition 2 - Let $K_{\alpha\gamma}$ be an element of the Lie algebra of the Lorentz group with $K_{\alpha\gamma}=-K_{\gamma\alpha}$. Let $K=K_{\alpha\gamma}\eta^{\gamma\beta} x^\alpha\frac{\partial}{\partial x^\beta}$ be the corresponding Killing field in $\mathbb{R}^{3,1}$. The conserved quantity corresponding to $(i, t_0^\nu, K)$ is $K_{\alpha\gamma}\Phi^{\alpha\gamma}$ where \begin{equation}\label{qlc_coordinate} \Phi^{\alpha\gamma}=-\frac{1}{8\pi} \int_\Sigma (\rho X^{[\alpha} t_0^{\gamma]}+ j_a X^{[\alpha} \nabla^a X^{\gamma]}). \end{equation}} For a spacelike 2-surface in $\mathbb{R}^{3,1}$, $\rho=0$ and $j_a=0$; thus all quasi-local conserved quantities vanish with respect to its own isometric embedding. By definition, $p^\nu$ and $\Phi^{\alpha\gamma}$ transform equivariantly when the pair $(i, t_0^\mu)$ is acted on by a Lorentz transformation.
When the optimal isometric embedding $i$ is shifted by a translation in $\mathbb{R}^{3,1}$ such that $X^\mu\mapsto X^\mu+b^\mu$ for some constant vector $b^\mu$, $\Phi^{\alpha\gamma}$ is changed by \[\Phi^{\alpha\gamma}\mapsto \Phi^{\alpha\gamma}-\frac{1}{2}b^\alpha p^\gamma+\frac{1}{2}b^\gamma p^\alpha.\] For a surface of symmetry in an axially symmetric spacetime, the quasi-local angular momentum is the same as the Komar angular momentum, and the quasi-local center of mass lies on the axis of symmetry. \section{\label{sec:level3} A conservation law} Quasi-local conserved quantities satisfy a conservation law along timelike hypersurfaces, described as follows. The expression in \eqref{qle} can be written as the difference between a reference term and a physical term, where the reference term is given by \begin{equation}\label{qle.reference}\frac{1}{8\pi}\int_\Sigma \{ \cosh\theta_0|H_0|\sqrt{1+|\nabla\tau|^2}-[\nabla_a \theta_0+(\alpha_{H_0})_a]\nabla^a\tau\}.\end{equation} On the image $i(\Sigma)$ of $i:\Sigma\hookrightarrow \mathbb{R}^{3,1}$, we choose the outward pointing spacelike unit normal $v_0^\nu$ such that $(t_0)_\nu v_0^\nu=0$. Let $u_0^\nu$ be the future pointing timelike unit normal of $i(\Sigma)$ such that $(u_0)_\nu v_0^\nu=0$. Extending $\Sigma$ along the direction of $t_0^\nu$, we obtain a timelike hypersurface $\mathcal{C}$ of $\mathbb{R}^{3,1}$ whose spacelike unit outward normal is the extension of $v_0^\nu$. Let $\pi_{\mu\nu}$ be the conjugate momentum of $\mathcal{C}$. The expression \eqref{qle.reference} is the same as \begin{equation}\label{qle.reference2} \frac{1}{8\pi}\int_{i(\Sigma)} \pi_{\mu\nu} t_0^\mu u_0^\nu.\end{equation} Note that $i(\Sigma)$ is contained in $\mathcal{C}$ and $u_0^\nu$ is the normal of $i(\Sigma)$ in $\mathcal{C}$.
Since $\nabla^\mu\pi_{\mu\nu}=0$ and $t_0^\nu$ is Killing, we apply the divergence theorem to the portion of $\mathcal{C}$ that is bounded by $i(\Sigma)$ and a totally geodesic hyperplane orthogonal to $t_0^\nu$, and equate \eqref{qle.reference2} to the corresponding expression over the projection of $i(\Sigma)$ onto this hyperplane. The same procedure can be applied to the physical term by transplanting the Killing field $t_0^\nu$ back to the surface in the physical spacetime through the optimal isometric embedding and the canonical gauge. In a vacuum physical spacetime, $\nabla^\mu\pi_{\mu\nu}=0$ still holds, but the transplanted field may not be Killing and may not be tangent to the timelike hypersurface. However, these errors vanish at spatial infinity and we obtain a conservation law for the total mass and total angular momentum. This conservation law was first observed in \cite{Brown-York2}. The novelty here is that this law is applied to both the physical and the reference spacetime. \section{Conserved quantities at spatial infinity} On an asymptotically flat initial data set $(M, g, k)$, we take the limit of quasi-local conserved quantities on coordinate spheres to define total conserved quantities. For each family of solutions $(i_r, t^\nu_0({r}))$ of optimal isometric embeddings for $\Sigma_r$ such that the isometric embedding $i_r$ converges to the standard embedding of a round sphere of radius $r$ in $\mathbb{R}^3$ as $r\rightarrow \infty$, we define: \textit{Definition 3 - Suppose $\lim_{r\rightarrow \infty} t^\nu_0({r})=t^\nu_0=L_0^{\,\,\, \nu}$ for a Lorentz transformation $L_\mu^{\,\,\,\nu}$. Denote $L_{\mu\gamma}=L_\mu^{\,\,\,\nu}\eta_{\nu\gamma}$.
The total center of mass of $(M, g, k)$ is defined to be \begin{equation} C^i = \frac{1}{m}\lim_{r \to \infty}[ \Phi^{i\gamma}({r})L_{0\gamma}+\Phi^{0\gamma}({r})L_{i\gamma}], i=1, 2, 3 \end{equation} and the total angular momentum is defined to be \begin{equation} J_{i} =\lim_{r \to \infty} \epsilon_{ijk} [\Phi^{j\gamma}({r})L_{k\gamma}-\Phi^{k\gamma}({r})L_{j\gamma}], i,j,k=1, 2, 3 . \end{equation}} $C^i$ corresponds to the conserved quantity of a boost Killing field, while $J_i$ corresponds to that of a rotation Killing field with respect to the direction of the energy-momentum 4-vector. \section{Finiteness of total conserved quantities} In this section, we prove finiteness of the newly defined total conserved quantities for vacuum asymptotically flat initial data sets of order one. \textit{Definition 4 - $(M, g, k)$ is \textit{asymptotically flat of order one} if there is a compact subset $C$ of $M$ such that $ M \backslash C $ is diffeomorphic to $\mathbb{R}^3 \backslash B$, and in terms of the coordinate system $\{x^i\}_{i=1, 2, 3}$ on $M\backslash C$, $g_{ij} = \delta_{ij}+ \frac{g_{ij}^{(-1)}}{r}+ \frac{g_{ij}^{(-2)}}{r^2}+ o(r^{-2})$ and $ k_{ij} = \frac{k_{ij}^{(-2)}}{r^2} + \frac{k_{ij}^{(-3)}}{r^3} + o(r^{-3})$, where $r=\sqrt{\sum_{i=1}^3 (x^i)^2}$.} Transforming into spherical coordinates $(r, \theta, \phi)=(r, u^1, u^2)$, on each level set of $r$, $\Sigma_r$, we can use $\{u^a\}_{a=1, 2}$ as coordinate system to express the geometric data we need in order to define quasi-local conserved quantities: \begin{equation}\label{expansion} \begin{split} \sigma_{ab} & = r^2 \tilde \sigma_{ab}+ r \sigma_{ab}^{(1)} + \sigma_{ab}^{(0)}+ o(1) \\ |H| & = \frac{2}{r}+ \frac{h^{(-2)}}{r^2}+\frac{h^{(-3)}}{r^3} + o(r^{-3}) \\ \alpha_H & = \frac{\alpha_H^{(-1)} }{r}+ \frac{\alpha_H^{(-2)} }{r^2} + o(r^{-2}), \end{split} \end{equation} where $\tilde{\sigma}_{ab}$ is the standard round metric on $S^2$, and $\sigma_{ab}^{(1)}, \sigma_{ab}^{(0)}, h^{(-2)}, h^{(-3)}, 
\alpha_H^{(-1)}$, and $\alpha_H^{(-2)}$ are all considered as geometric data on $S^2$. It was proved in \cite{Chen-Wang-Yau1} that for such an initial data set, there exists a unique family of optimal isometric embeddings $(i_r, t_0^\nu({r}))$ of $\Sigma_r$ such that the components of $i_r$ are given by \begin{equation}\label{exp_opt}(0, r\tilde{X}^1, r\tilde{X}^2, r\tilde{X}^3)+o({r}),\end{equation} where $\tilde{X}^i, i=1, 2, 3$ denote the three coordinate functions on a unit 2-sphere in standard spherical coordinates. Similar expansions for $|H_0|$ and $\alpha_{H_0}$ can be obtained. We recall from \cite{Wang-Yau3, Chen-Wang-Yau1} that the total energy-momentum vector $p^\nu$ satisfies $\lim_{r \to \infty} t^\nu_0({r})=\frac{1}{m} p^\nu$ with $m=\sqrt{-p_\nu p^\nu}$, and the components are given by \begin{equation}\label{ADM}p^0= \frac{1}{8 \pi}\int_{S^2} (h_0^{(-2)}-h^{(-2)}) \text{ and } p^i= \frac{1}{8 \pi}\int_{S^2} \tilde{X}^i \widetilde{div}(\alpha_H^{(-1)}).\end{equation} It is shown in \cite{Wang-Yau3} to be the same as the ADM energy-momentum vector of the initial data set $(M, g, k)$. Unlike the translating Killing fields, which define energy and linear momentum, the expressions of boost and rotation Killing fields involve the coordinate functions. Therefore existing definitions of total angular momentum and total center of mass are in general infinite and not well-defined unless additional conditions \cite{Ashtekar-Hansen, Regge-Teitelboim, Chrusciel2} are imposed at spatial infinity. Recall that a vacuum initial data set $(M, g, k)$ satisfies \begin{equation}\label{constraint} R(g) + (tr_g k)^2-|k|_g^2 =0 \text{ and } \nabla_g^i (k_{ij} - (tr_g k) g_{ij}) =0 \end{equation} where $R(g)$ is the scalar curvature of $g_{ij}$.
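To illustrate how the first formula in \eqref{ADM} arises, consider the simplest case $t_0^\nu({r})\to (1,0,0,0)$ (a sketch; in general a boost must be taken into account). Then $\tau=O(1)$, so $|\nabla\tau|^2=O(r^{-2})$ and $\Delta\tau=O(r^{-2})$, and the expansions \eqref{expansion} give
\begin{align*}
\rho =\frac{\sqrt{|H_0|^2+O(r^{-4})}-\sqrt{|H|^2+O(r^{-4})}}{\sqrt{1+O(r^{-2})}}
=|H_0|-|H|+O(r^{-3})=\frac{h_0^{(-2)}-h^{(-2)}}{r^2}+O(r^{-3}).
\end{align*}
Since the area form of $\sigma_{ab}$ is $r^2\,d\tilde{\sigma}\,(1+O(r^{-1}))$, it follows that $\frac{1}{8\pi}\int_{\Sigma_r}\rho\to \frac{1}{8\pi}\int_{S^2}(h_0^{(-2)}-h^{(-2)})=p^0$.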
\textit{Theorem 1 - The new total angular momentum and new center of mass are always finite on any vacuum asymptotically flat initial data set of order one.} \textit{Proof.} By the expansions of geometric data \eqref{expansion} and the optimal isometric embedding \eqref{exp_opt}, the total center of mass and total angular momentum are finite if \begin{equation}\label{condition_converge} \begin{split} \int_{S^2} \tilde X^i (h_0^{(-2)}-h^{(-2)}) =0\text{ and } \int_{S^2} \tilde X^i \left( \tilde{\epsilon}^{ab}\tilde{\nabla}_b(\alpha_H^{(-1)})_a \right)=0, \end{split} \end{equation} where $\tilde{\nabla}$ is the covariant derivative of $\tilde{\sigma}_{ab}$ and $\tilde{\epsilon}_{ab}$ is the area form on $S^2$. Let $\Sigma_r$ be coordinate spheres of $(M, g)$ and $\hat h$ be the mean curvature of $\Sigma_r$ with respect to $g$. By the second variation formula of area and the Gauss equation on $\Sigma_r$, we have $ \partial_r \hat h = -f[\frac{R(g)}{2}-K+\frac{1}{2}(\hat h^2+A^2) ]- \Delta f, $ where $f$ is the length of $\frac{\partial}{\partial r}$, $K$ is the Gauss curvature, $A$ is the second fundamental form, and $\Delta$ is the Laplace operator of $\Sigma_r$. Expanding each term in the last equation in $r$, we solve for $ \hat h^{(-2)} = -\tilde \sigma^{ab}\sigma^{(1)}_{ab} + \tilde \nabla^a \tilde \nabla ^b \sigma^{(1)}_{ab} - \widetilde \Delta (\tilde \sigma^{ab}\sigma^{(1)}_{ab})- \widetilde \Delta f^{(-1)} -2f^{(-1)}$, where $f=1+\frac{f^{(-1)}}{r}+o(r^{-1})$ and $\tilde{\Delta}$ is the Laplace operator of $S^2$. The decay condition on $k_{ij}$ implies $\hat h^{(-2)}=h^{(-2)}$ and thus $\int_{S^2} \tilde X^i h^{(-2)}=0$. $h_0^{(-2)}$ can be solved from the optimal isometric embedding equation and $h_0^{(-2)}=-\frac{1}{2}\tilde{\sigma}^{ab}\sigma_{ab}^{(1)}-\tilde \nabla _e(\frac{1}{2} \tilde{\epsilon}^{ce} \tilde{\epsilon} ^{ad} \tilde\nabla _d\sigma^{(1)}_{ac} +\tilde{\epsilon}^{ce} F_c)$ for a one-form $F_c$ on $S^2$.
Again, we obtain $\int_{S^2} \tilde X^i h_0^{(-2)}=0$. On the other hand, by the vacuum momentum constraint equation, we derive $\tilde{\epsilon}^{ab}\tilde{\nabla}_b(\alpha_H^{(-1)})_a= \tilde{\epsilon}^{ab}\tilde{\nabla}_b\tilde{\nabla}^c\pi^{(0)}_{ac}$ for a symmetric 2-tensor $\pi^{(0)}_{ac}$ on $S^2$, and thus the second equality in \eqref{condition_converge} also holds. This finishes the proof of Theorem 1. The new total angular momentum vanishes on any hypersurface in $\mathbb{R}^{3,1}$, a property that is rather unique among known definitions. Indeed, it is shown in \cite{Chen-Huang-Wang-Yau} that there exist asymptotically flat hypersurfaces in $\mathbb{R}^{3,1}$ with finite, non-zero ADM angular momentum. In \cite{Chen-Wang-Yau3}, we show that the new total angular momentum is the same on any strictly spacelike hypersurface in the Kerr spacetime, a consequence of the conservation law discussed in Section \ref{sec:level3}. \section{Conserved quantities and the vacuum Einstein equation} Let $(M, g, k)$ be a vacuum initial data set. The vacuum Einstein equation is formulated as an evolution equation of the pair $(g(t), k(t))$ that satisfies $g(0)=g$, $k(0)=k$ and \begin{equation} \label{evolution} \begin{split} \partial _t g_{ij} & =-2N k_{ij}+ \nabla_i \gamma_j+\nabla_j \gamma_i\\ \partial _t k_{ij} & = -\nabla_i\nabla_jN+N\left(R_{ij} + (tr_g k) k_{ij} - 2 k_{il} k^l \,_j\right) \end{split} \end{equation} where $N$ is the lapse function and $\gamma$ is the shift vector. \textit{Theorem 2 - Suppose $(M, g, k)$ is a vacuum asymptotically flat initial data set of order one. Let $(M, g(t), k(t) )$ be the solution to the initial value problem $g(0)=g$ and $k(0)=k$ for the vacuum Einstein equation with lapse function $N=1+O(r^{-1})$ and shift vector $\gamma= \gamma^{(-1)}r^{-1}+O(r^{-2})$.
We have \[\begin{split} \partial_t C^i (t)= \frac{p^i}{p^0} \text{ and } \partial_t J_{i} (t) = 0 \end{split} \] for $i=1, 2, 3$ where $p^\nu$ is the ADM energy momentum 4-vector of $(M, g, k)$.} \textit{Proof.} By the expansions of geometric data \eqref{expansion} and the optimal isometric embedding \eqref{exp_opt}, it suffices to prove \begin{equation}\label{center}\partial_t [\frac{1}{8\pi}\int_{S^2}\tilde{X}^i \rho^{(-3)}(t)]=\frac{p^i}{t_0^0} \text{ and }\partial_t [\int_{S^2}(\tilde{X}^i\tilde{\nabla}_a\tilde{X}^j-\tilde{X}^j\tilde{\nabla}_a\tilde{X}^i) \tilde{\sigma}^{ab} j_b^{(-2)}(t)]=0.\end{equation} In view of the definition of $\rho$ and the Einstein equation, $\int_{S^2} \tilde X^ i\partial_t \rho^{(-3)}(t)=\int_{S^2} \tilde X^ i\partial_t(\frac{h_0^{(-3)}(t)-h^{(-3)}(t)}{t_0^0})$. By applying the optimal isometric embedding and the vacuum constraint equation as in the proof of Theorem 1, together with the Einstein equation \eqref{evolution}, the first equality in \eqref{center} can be established. As for the second equality, since $\tilde{\nabla}^a(\tilde{X}^i\tilde{\nabla}_a\tilde{X}^j-\tilde{X}^j\tilde{\nabla}_a\tilde{X}^i)=0$, we can discard the two terms in $j_b$ whose leading terms are closed one-forms on $S^2$. The leading term of $\tau$ is $r\tau^{(1)}=-r \sum_k t_0^k \tilde{X}^k$ and \[\partial_t[\int_{S^2}(\tilde{X}^i\tilde{\nabla}_a\tilde{X}^j-\tilde{X}^j\tilde{\nabla}_a\tilde{X}^i) \rho^{(-3)}(t)\tilde{\nabla}^a \tau^{(1)}] =-t_0^j \partial_t[\int_{S^2} \rho^{(-3)}(t)\tilde{X}^i]+t_0^i \partial_t[\int_{S^2} \rho^{(-3)}(t) \tilde{X}^j]\] where we use $(\tilde{X}^i\tilde{\nabla}_a\tilde{X}^j-\tilde{X}^j\tilde{\nabla}_a\tilde{X}^i)\tilde{\nabla}^a \tilde{X}^k=\tilde{X}^i\delta^{jk}-\tilde{X}^j \delta^{ik}$. The last expression vanishes in view of the first equality in \eqref{center} and $p^\nu=mt_0^\nu$.
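The identity for the rotation fields used in the display above follows from the standard relation $\tilde{\nabla}_a\tilde{X}^j\tilde{\nabla}^a\tilde{X}^k=\delta^{jk}-\tilde{X}^j\tilde{X}^k$ for the coordinate functions of the unit sphere:
\begin{align*}
(\tilde{X}^i\tilde{\nabla}_a\tilde{X}^j-\tilde{X}^j\tilde{\nabla}_a\tilde{X}^i)\tilde{\nabla}^a\tilde{X}^k
&=\tilde{X}^i(\delta^{jk}-\tilde{X}^j\tilde{X}^k)-\tilde{X}^j(\delta^{ik}-\tilde{X}^i\tilde{X}^k)\\
&=\tilde{X}^i\delta^{jk}-\tilde{X}^j\delta^{ik},
\end{align*}
since the cubic terms cancel.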
The only term left is the time derivative of $\int_{S^2}(\tilde{X}^i\tilde{\nabla}_a\tilde{X}^j-\tilde{X}^j\tilde{\nabla}_a\tilde{X}^i) \tilde{\sigma}^{ab} (\alpha_{H})_b^{(-2)}(t)$. This is shown to be independent of $t$ by the conservation law discussed in Section \ref{sec:level3}. This finishes the proof of Theorem 2. \noindent\textit{Remark - Assuming the total center of mass and angular momentum are finite initially, Theorem 2 holds under the weaker assumption $g= \delta + O(r^{-1})$ and $k= O(r^{-2})$; see \cite{Chen-Wang-Yau3}.} \section{Properties of the new total conserved quantities} 1. The definition depends only on the geometric data $(g, k)$ and the foliation of surfaces at infinity; in particular, it does not depend on an asymptotically flat coordinate system or the existence of an asymptotically Killing field. 2. All total conserved quantities vanish on any spacelike hypersurface in the Minkowski spacetime, regardless of the asymptotic behavior. 3. The new total angular momentum and total center of mass are always finite on any vacuum asymptotically flat initial data set of order one. 4. The total angular momentum satisfies a conservation law. In particular, the total angular momentum is the same on any strictly spacelike hypersurface of the Kerr spacetime. 5. Under the vacuum Einstein evolution of initial data sets, the total angular momentum is conserved and the total center of mass obeys the dynamical formula $\partial_t C^i(t)=\frac{p^i}{p^0}$, where $p^\nu$ is the ADM energy-momentum 4-vector.
\section{Introduction}\label{sec:Intro} Optimal mass transport problems for several marginals arise in many applications of operations research and statistics \cite{bhp-13,ce-10,cmn-10,cfk-13,bcmm-16,ght-14,mtbmmh-15,p-14,ty-05}. The {\em weighted Wasserstein barycenters} are optimal solutions to these problems and have seen a lot of activity \cite{bk-12,bll-11,coo-15,cd-14,p-13,rpdb-12,ywwl-17,zp-17}: Given probability measures $P_1,{\dots},P_n$ on $\mathbb{R}^d$ and a weight vector $\lambda\in \mathbb{R}^n_{>0}$ with $\sum_{i=1}^n \lambda_i=1$, a Wasserstein barycenter is a probability measure $\bar P$ on $\mathbb{R}^d$ satisfying \begin{equation}\label{eqn:weightedbary} \varphi(\bar P):=\sum\limits_{i=1}^n \lambda_i W_2(\bar P, P_i)^2 = \inf\limits_{P\in \mathcal{P}^2(\mathbb{R}^d)} \sum\limits_{i=1}^n \lambda_i W_2(P,P_i)^2, \end{equation} where $W_2$ is the quadratic Wasserstein distance and $\mathcal{P}^2(\mathbb{R}^d)$ is the set of all probability measures on $\mathbb{R}^d$ with finite second moments. Informally, a barycenter $\bar P$ is a measure such that the summed-up transport from $\bar P$ to all $P_i$ with respect to the quadratic Wasserstein distance is minimal. See the monographs \cite{v-03,v-09} and \cite{ac-11} for the current state of the art on barycenters of continuous measures. We study the frequent setting in which the data is given as a set of {\em discrete probability measures} $P_1,\dots,P_n$, i.e. measures with finite support sets $\text{supp}(P_i)\subset \mathbb{R}^d$. We denote the support set of $P_i$ as $\text{supp}(P_i) = \{\textbf{x}_{ik}\big| k = 1,\ldots,|P_i|\}$, where $|P_i| $ is the number of support points in $P_i$. Each $\textbf{x}_{ik} \in \text{supp}(P_i)$ has a corresponding mass $d_{ik}>0 $, and $\sum_{k=1}^{|P_i|} d_{ik} = 1$ for each $P_i$. We formally state the problem of computing a {\em discrete barycenter}, i.e. a barycenter for a given set of discrete measures.
\\\\ \noindent{\bf Discrete Barycenter Problem}\\ \textbf{Input:} $\,\,\,$ Discrete probability measures $P_1, \ldots, P_n$, weight vector $\lambda\in \mathbb{R}^n_{>0}$\\ \textbf{Output:} Discrete barycenter $\bar P$ for $P_1, \ldots, P_n$ and $\lambda$. \\\\ Discrete barycenters satisfy a number of interesting properties \cite{abm-16,m-16}. In contrast to the continuous case, it is possible to have several barycenters (i.e. optimizers of (\ref{eqn:weightedbary})). However, each barycenter is a discrete measure itself, supported on a subset of the set \begin{equation} S=\{\sum\limits_{i=1}^n \lambda_i \textbf{x}_{ik} | \textbf{x}_{ik}\in \text{supp}(P_i)\}:=\{\textbf{x}_1,\dots,\textbf{x}_{|S|}\}. \end{equation} This is the set of all convex combinations of support points, one from each measure $P_i$, given by the fixed $\lambda_i$. We call its elements $\textbf{x}_j$ the {\em weighted means}. We say the measures $P_1,\dots,P_n$ are {\em in general position} if different combinations of $\textbf{x}_{ik}\in \text{supp}(P_i)$ always induce different weighted means $\textbf{x}_j$. When representing a barycenter $\bar P$, we use values $z_j$, $ j=1,\ldots,|S|$, to denote the mass on support point $\textbf{x}_j\in S$. Further, the values $y_{ijk}$ denote mass transported from $\textbf{x}_j\in S$ to $\textbf{x}_{ik} \in \text{supp}(P_i)$ (for all $i=1,\dots,n$ and $k=1,\dots,|P_i|$). A proof that all barycenters are supported on a subset of $S$ follows from a combination of two properties. First, the quadratic Wasserstein distance for the cost of transport between two points $\textbf{x}_j$ and $\textbf{x}_{ik}$ is simply the squared Euclidean distance $\|\textbf{x}_j - \textbf{x}_{ik} \|^2$.
With this notation, the transportation cost (\ref{eqn:weightedbary}) can be written as \begin{equation}\label{eqn:LPobjective} \varphi(\bar P):= \inf\limits_{P\in \mathcal{P}^2(\mathbb{R}^d)} \sum\limits_{i=1}^n \lambda_i W_2(P,P_i)^2 = \min \sum_{i=1}^n\lambda_i\sum_{j=1}^{|S|}\sum_{k = 1}^{|P_i|}\|\textbf{x}_j - \textbf{x}_{ik} \|^2 y_{ijk} \end{equation} in the discrete setting. Second, for each barycenter, an optimal transport to the measures is {\em non-mass splitting}: The mass of each barycenter support point is transported only to a single support point in each measure. Algebraically, this can be represented as follows: Let $\textbf{x}_j\in S$ have mass $z_j$. Then for all $i$, there is exactly one $k$ with $y_{ijk}=z_j$, while $y_{ijk'}=0$ for all $k'\neq k$. Combining the two properties, it is not hard to see why $S$ has the aforementioned shape. The point $\textbf{x}_j = \sum_{i=1}^n \lambda_i \textbf{x}_{ik}$ is the unique minimizer of the cost $\sum_{i=1}^n \lambda_i \|\textbf{x}^*-\textbf{x}_{ik}\|^2$ of sending a unit of mass from $\textbf{x}^*\in \mathbb{R}^d$ to a single, fixed $\textbf{x}_{ik}$ in each measure $P_i$, $i=1,\dots,n$. Further, there always exists a barycenter with provably {\em sparse support}; we call this a {\em sparse barycenter}. More precisely, there is a barycenter $\bar P$ with \begin{equation}\label{sparseequ} |\text{supp}(\bar{P})| \leq (\sum_{i=1}^n |P_i|) - n + 1. \end{equation} In particular, the number of support points of $\bar P$ is bounded above by the sum of the number of support points in the measures $P_1,\dots,P_n$. This is generally just a tiny fraction of the number of possible support points in $S$, whose size is bounded by the product, rather than the sum, of the sizes of the original measures. It is open whether the Discrete Barycenter Problem can be solved in polynomial time.
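As a small illustration of the set $S$ and the notion of general position, the following sketch (hypothetical toy measures; not part of the formal development) enumerates all weighted means:

```python
import itertools

import numpy as np

# Hypothetical toy input: n = 3 discrete measures in R^2, uniform weights.
supports = [
    np.array([[0.0, 0.0], [1.0, 0.0]]),              # supp(P_1)
    np.array([[0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]),  # supp(P_2)
    np.array([[0.3, 0.7]]),                          # supp(P_3)
]
lam = np.array([1.0, 1.0, 1.0]) / 3.0

# Every choice of one support point per measure gives a weighted mean;
# S collects the distinct means (rounding guards against float noise).
combos = list(itertools.product(*[range(len(s)) for s in supports]))
S = {tuple(np.round(sum(lam[i] * supports[i][k] for i, k in enumerate(c)), 12))
     for c in combos}

# Here two combinations share the mean ((1.3, 1.7))/3, so these measures
# are not in general position and |S| is smaller than prod |P_i|.
print(len(combos), len(S))
```

In this toy instance there are $2\cdot 3\cdot 1=6$ combinations but only $5$ distinct weighted means, so the measures are not in general position.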
The best known algorithms are based on linear programming \cite{abm-16,coo-15}: Let the $z_j$ correspond to variables measuring the (unknown) masses of a barycenter supported on $S$. The mass $z_j$ is transported to each measure $P_i$, the amount of which is measured by the variables $y_{ijk}$. This yields constraints $\sum_{k=1}^{|P_i|} y_{ijk} = z_j$ (for all $i=1,\dots,n$ and $j=1,\ldots,|S|$). Further, each support point $\textbf{x}_{ik}$ in each measure $P_i$ receives its mass $d_{ik}$ from barycenter support points $\textbf{x}_j$, which can be stated as $\sum_{j=1}^{|S|} y_{ijk} = d_{ik}$ (for all $i=1,\dots,n$ and $k=1,\dots,|P_i|$). Finally, note that (\ref{eqn:LPobjective}) is a linear objective function. Thus using $z_j$ and $y_{ijk}$ as variables, a linear program for the computation of a barycenter may be formulated as \begin{align*}\label{baryLP} \tag{original} \min &\text{ }\spc \sum_{i=1}^n\lambda_i\sum_{j=1}^{|S|}\sum_{k = 1}^{|P_i|}\|\textbf{x}_j - \textbf{x}_{ik} \|^2 y_{ijk} \nonumber \\ \sum_{k=1}^{|P_i|} y_{ijk} = &\text{ }\spc z_j, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall j=1,\ldots,|S|,\nonumber\\ \sum_{j=1}^{|S|} y_{ijk} = &\text{ }\spc d_{ik}, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall k=1,\ldots,|P_i|,\nonumber\\ y_{ijk} \geq & \text{ }\spc0, \text{ }\spc\text{ }\spc\text{ }\forall i=1,\ldots,n,\text{ }\forall j=1,\ldots,|S|,\text{ }\forall k=1,\ldots,|P_i|\\ z_j \geq & \text{ }\spc0, \text{ }\spc\text{ }\spc\text{ }\forall j=1,\ldots,|S|\text{ }.\nonumber \end{align*} Moreover, any optimal vertex of the LP has sparse support, i.e. it satisfies (\ref{sparseequ}). See \cite{abm-16,coo-15} for more details. \begin{proposition}\label{thm:5eq} The Discrete Barycenter Problem can be solved using LP (\ref{baryLP}). Any optimal vertex corresponds to a sparse barycenter. \end{proposition} One of the most important observations about LP (\ref{baryLP}) is that its size may scale exponentially in the number $n$ of measures \cite{b-17}. 
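For small instances, LP (\ref{baryLP}) can be assembled and solved off the shelf. The following sketch (hypothetical toy measures on the real line; the variable layout and all names are our own) builds the candidate set $S$, both families of equality constraints, and the objective, and solves with \texttt{scipy.optimize.linprog}:

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance: two measures on the real line, equal weights.
supports = [np.array([[0.0], [4.0]]), np.array([[1.0], [3.0]])]
masses = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
lam = np.array([0.5, 0.5])
n = len(supports)
sizes = [len(s) for s in supports]

# Candidate support set S: all weighted means, one point per measure.
S = np.array(sorted({tuple(sum(lam[i] * supports[i][k] for i, k in enumerate(c)))
                     for c in itertools.product(*[range(p) for p in sizes])}))

# Variable layout: z_1,...,z_|S| first, then the y_{ijk} blocks.
nvar = len(S) + sum(len(S) * p for p in sizes)

def yidx(i, j, k):
    return len(S) + sum(len(S) * sizes[l] for l in range(i)) + j * sizes[i] + k

cost = np.zeros(nvar)
A_eq, b_eq = [], []
for i in range(n):
    for j in range(len(S)):          # constraints sum_k y_ijk = z_j
        row = np.zeros(nvar)
        row[j] = -1.0
        for k in range(sizes[i]):
            row[yidx(i, j, k)] = 1.0
            cost[yidx(i, j, k)] = lam[i] * np.sum((S[j] - supports[i][k]) ** 2)
        A_eq.append(row)
        b_eq.append(0.0)
for i in range(n):
    for k in range(sizes[i]):        # constraints sum_j y_ijk = d_ik
        row = np.zeros(nvar)
        for j in range(len(S)):
            row[yidx(i, j, k)] = 1.0
        A_eq.append(row)
        b_eq.append(masses[i][k])

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
z = res.x[:len(S)]   # barycenter masses on S = {0.5, 1.5, 2.5, 3.5}
```

For this instance the optimum puts mass $0.5$ at each of the weighted means $0.5$ and $3.5$, with objective value $0.25$; the optimal vertex is sparse, in line with Proposition \ref{thm:5eq}.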
For simplicity in denoting the worst case scenario, assume $|P_i|=p_{\max}$ for all $i=1,\dots,n$. For measures $P_1,\dots,P_n$ with support points in general position, one obtains $|S|=\prod_{i=1}^n |P_i|=(p_{\max})^n$. Then LP (\ref{baryLP}) has $n(p_{\max})^{n+1}+(p_{\max})^n$ variables and $n (p_{\max})^n+n p_{\max}$ equality constraints. The exponential scaling of LP (\ref{baryLP}) means that, even for reasonably sized problems, computations are challenging. This is why it is of great interest to reduce the computational effort as much as possible. A first approach in the literature is a strongly polynomial $2$-approximation algorithm based on the restriction of the set of support points for an approximate barycenter to the union of supports of the measures $P_1,\dots,P_n$ \cite{b-17}. \subsection*{Contributions and Outline} In this paper, we improve on LP (\ref{baryLP}) while retaining all optimal solutions to the Discrete Barycenter Problem. By using the optimality conditions of discrete barycenters in a better way, we present several ways to reduce the problem size. In Section \ref{sec:Form}, we formally develop these improvements. First, we explain why just a subset of the variables $y_{ijk}$ is required, yielding a better formulation, LP (\ref{barymodLP}), of the original LP. We show that LP (\ref{barymodLP}) is always an improvement on LP (\ref{baryLP}). For data in general position, i.e. where each combination $\sum_{i=1}^n \lambda_i \textbf{x}_{ik}$ gives a different point in $S$, the reduction in the number of variables is dramatic (even for moderate problem sizes, one typically retains less than $1\%$ compared to the original). For structured data, we still obtain a significant improvement (even though these are the cases where LP (\ref{baryLP}) already worked well). This makes LP (\ref{barymodLP}) a go-to approach for structured data. Second, we turn to an alternative LP (\ref{LPw}) that finds a discrete barycenter.
It has been used as a proof method in \cite{abm-16,m-16} to show the existence of a sparse barycenter for all discrete barycenter problems. Due to an inherent, unavoidable exponential scaling, independent of what the underlying data looks like, it has not been considered for computational purposes before. However, for data in general position, we show that it, in fact, is the best approach. For a set of just two measures, one obtains a transportation problem. Third, we combine the two above approaches into a hybrid model (LP (\ref{LPhybrid})) that retains the best properties of both formulations for partially structured data. Informally, the key idea is that the decision of which model to use can be split into independent decisions for each point in the set $S$, respectively for each combination of support points $\textbf{x}_{ik}\in \text{supp}(P_i)$ with one from each measure. For partially structured data, it is best to mix the two strategies. Sections \ref{sec:datageneral} and \ref{sec:reggrids} are dedicated to the theoretical analysis and computational experiments for different types of data. We use three representative types of data: a geospatial data set in general position, the well-known {\em digits} data set, where handwritten digits are recorded in a $16\times 16$-grid (c.f. \cite{lbh-98}), and a tailored data set that combines the properties of these two sets. In Section \ref{sec:datageneral}, we discuss data in general position. We show that both LP (\ref{barymodLP}) and LP (\ref{LPw}) are vast improvements over LP (\ref{baryLP}) in both theory and practice. Further, we exhibit that the best implementation of LP (\ref{LPhybrid}) becomes identical to LP (\ref{LPw}). In Section \ref{sec:reggrids}, we discuss data supported in a regular grid. In this highly structured setting, the size of $S$ becomes polynomial. We begin the section with a discussion of an important distinction: A priori knowledge about this structure versus the lack thereof.
We show that the construction of LPs (\ref{barymodLP}) and (\ref{LPhybrid}) is hard (exponential unless $P=NP$) if we lack this information, even though the resulting LPs are of polynomial size. We then devise an efficient preprocessing routine for LP (\ref{LPhybrid}) (which also works for LP (\ref{barymodLP})) to achieve a significant, but not necessarily optimal, improvement over LP (\ref{baryLP}) while avoiding the inefficient preprocessing required to set up the model exactly. In Sections \ref{sec:noprior} and \ref{sec:prior}, we show computational experiments without and with a priori knowledge. We see that LPs (\ref{barymodLP}) and (\ref{LPhybrid}) perform best in this setting. We also see that the efficient preprocessing routine leads to a better total running time, even though the constructed LPs are larger. Finally, in Section \ref{sec:besthybrid}, we show computational experiments for a data set supported in the combination of a regular grid with some additional points in general position. Here, LP (\ref{LPhybrid}) greatly outperforms the other models, as it is able to adapt its representation of $S$ to the different parts of the data. \section{Improved Linear Programs}\label{sec:Form} In the following, we describe three ways to improve on the formulation of LP (\ref{baryLP}). The first one is a strict improvement, the second one the best approach for data in general position, and the third one a hybrid approach of the former two. \subsection{Optimality Conditions for $y$ Variables}\label{sec:ystart} For an initial reduction in size, we note that variables can be dropped from LP (\ref{baryLP}) while keeping all optimal solutions in the feasible set. Due to the non-mass splitting property, we have seen that any $\textbf{x}_j = \sum_{i = 1}^n \lambda_i \textbf{x}_{ik} \in S$ is an optimal support point for mass to be transported to all of the $\textbf{x}_{ik}$ from which it is constructed.
This implies that in an optimal solution, $\textbf{x}_j$ never transports mass to any $\textbf{x}_{ik}$ that does not appear in a weighted mean calculation producing $\textbf{x}_j$. Equivalently, in the optimal solution, $y_{ijk} = 0$ for all such pairs $\textbf{x}_j, \textbf{x}_{ik}$. We can therefore eliminate those $y_{ijk}$ from the formulation by only introducing variables $y_{ijk}$ when $\textbf{x}_{ik}$ appears in a computation of the weighted mean $\textbf{x}_j$. We require some notation. Let $S_{ik}$ be the set of indices $j$ for which $\textbf{x}_j \in S$ can be computed as a weighted mean of a set of support points that includes $\textbf{x}_{ik}$. Formally, $$ S_{ik}=\{\; j \; : \; \textbf{x}_j=\lambda_i\textbf{x}_{ik} + \sum\limits_{l=1, l\neq i}^n \lambda_l \textbf{x}_{lk'} \text{ for some } \textbf{x}_{lk'}\in \text{supp}(P_l)\}.$$ Conversely, let $S_j$ be the set of index tuples $(i,k)$ of support points $\textbf{x}_{ik}$ which contribute to a computation of $\textbf{x}_j$, i.e. $$ S_j=\{\; (i,k) \; : \; \textbf{x}_j=\lambda_i\textbf{x}_{ik} + \sum\limits_{l=1, l\neq i}^n \lambda_l \textbf{x}_{lk'} \text{ for some } \textbf{x}_{lk'}\in \text{supp}(P_l)\}.$$ With this notation, LP (\ref{baryLP}) can be improved to a smaller formulation that restricts the use of variables $y_{ijk}$ to only when $\textbf{x}_{ik}$ appears in a construction of the weighted means $\textbf{x}_j$ as follows.
\begin{align*}\label{barymodLP} \tag{reduced} \min &\text{ }\spc \sum_{i=1}^n\lambda_i\sum_{k = 1}^{|P_i|}\sum_{j \in S_{ik}}\|\textbf{x}_j - \textbf{x}_{ik} \|^2 y_{ijk} \nonumber \\ \sum_{k:\,(i,k) \in S_j} y_{ijk} = &\text{ }\spc z_j, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall j=1,\ldots,|S|\nonumber\\ \sum_{j \in S_{ik}} y_{ijk} = &\text{ }\spc d_{ik}, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall k=1,\ldots,|P_i|\\ y_{ijk} \geq & \text{ }\spc0, \text{ }\spc\text{ }\spc\text{ }\forall i=1,\ldots,n,\text{ }\forall k=1,\ldots,|P_i|,\text{ } \forall j \in S_{ik} \nonumber \\ z_j \geq & \text{ }\spc0, \text{ }\spc\text{ }\spc\text{ }\forall j=1,\ldots,|S|\text{ }\nonumber \end{align*} We obtain Theorem \ref{thm:6eq}. \begin{theorem}\label{thm:6eq} The Discrete Barycenter Problem can be solved using LP (\ref{barymodLP}). Any optimal vertex corresponds to a sparse barycenter. \end{theorem} The optimal vertices of LPs (\ref{baryLP}) and (\ref{barymodLP}) are in one-to-one correspondence. This immediately gives the second part of the statement. Note that at least one pair $(\textbf{x}_j,\textbf{x}_{ik})$, where $\textbf{x}_{ik}$ is not part of any weighted-mean construction of $\textbf{x}_j$, exists for any dimension $d$ as long as $n \geq 2$ and $|P_i| \geq 2$ for at least one $i = 1, \dots, n$. In other words, for non-trivial input there is an $(i,k)\notin S_j$ for some $j$. Thus LP (\ref{barymodLP}) provides a strict reduction in the number of variables and the number of non-zero entries in the constraint matrix for all practical examples. \begin{lemma}\label{lem:lessy} For a set of at least two measures $P_1,\dots,P_n$, and at least one with $|P_i| \geq 2$, LP (\ref{barymodLP}) has strictly fewer variables than LP (\ref{baryLP}). Further, the nonzero entries of the constraint matrix are a strict subset. \end{lemma} Informally, LP (\ref{barymodLP}) is always a strict improvement over LP (\ref{baryLP}).
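The index sets $S_{ik}$ and $S_j$ underlying LP (\ref{barymodLP}) can be assembled in a single pass over the combinations of support points. A minimal sketch (hypothetical toy measures; names are our own) that also counts how many $y_{ijk}$ variables survive the reduction:

```python
import itertools
from collections import defaultdict

import numpy as np

# Hypothetical toy measures on the real line, equal weights.
supports = [np.array([[0.0], [4.0]]), np.array([[1.0], [3.0]])]
lam = [0.5, 0.5]

S_index = {}                 # weighted mean -> index j
S_ik = defaultdict(set)      # (i, k) -> {j : x_j uses x_ik}
S_j = defaultdict(set)       # j -> {(i, k) contributing to x_j}
for combo in itertools.product(*[range(len(s)) for s in supports]):
    x = tuple(sum(lam[i] * supports[i][k] for i, k in enumerate(combo)))
    j = S_index.setdefault(x, len(S_index))
    for i, k in enumerate(combo):
        S_ik[(i, k)].add(j)
        S_j[j].add((i, k))

# y variables kept by LP (reduced) vs. all y variables in LP (original).
kept = sum(len(js) for js in S_ik.values())
total = sum(len(S_index) * len(s) for s in supports)
print(kept, total)
```

For this toy instance (which is in general position, so $|S_j|=n=2$ for every $j$), the reduced model keeps 8 of the 16 $y_{ijk}$ variables; for larger instances the ratio shrinks much further.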
In Section \ref{sec:reggrids}, we turn to the dramatic reduction in size -- often several orders of magnitude -- in more detail. \subsection{Fixed Transport} An alternative linear program for the Discrete Barycenter Problem has been used as a proof method in \cite{abm-16,m-16} for the properties in Section \ref{sec:Intro}, but not considered for computational purposes. However, as we will see in Section \ref{sec:datageneral}, there are inputs where this is the preferred approach, in particular for data in general position. The key idea is to treat each combination of original support points that gives a weighted mean separately, even if several combinations produce the same coordinates. Applying this idea to LP (\ref{barymodLP}) means that a variable $z_j$ is used for each combination of original support points. Then $|S_j|=n$: it contains precisely one pair $(i,k)$ for each $i\leq n$. If mass is associated to variable $z_j$, it is also fully associated to the corresponding $y_{ijk}$ with $(i,k)\in S_j$. This implies that the variables $y_{ijk}$ can be eliminated through $y_{ijk}=z_j$ for all $(i,k)\in S_j$. Informally, assigning mass to $z_j$ gives rise to a {\em fixed transport} to the measures. We now develop an LP formulation based on this idea in our own notation. First, we define the set of all combinations of original support points, one from each measure, as $$S^* = \{ (\textbf{x}_{1k}, \ldots, \textbf{x}_{nk}) : \textbf{x}_{ik} \in \text{supp}(P_i)\}:=\{s_1^*,\dots,s^*_{|S^*|}\}.$$ There is an intimate relation between $S^*$ and $S$: Each tuple $s^*_h=(\textbf{x}_{1k}, \ldots, \textbf{x}_{nk})$, $h=1,\dots,|S^*|$, corresponds to a set of original support points $\textbf{x}_{ik}$, $i=1,\dots,n$. These support points combine to a weighted mean $\textbf{x}_j = \sum_{i=1}^n \lambda_i \textbf{x}_{ik} \in S$. So each $s^*_h\in S^*$ is associated to an $\textbf{x}_j\in S$. Generally, it is possible that multiple $s^*_h\in S^*$ are associated with the same $\textbf{x}_j\in S$.
In turn, each $\textbf{x}_j\in S$ is associated to at least one $s^*_h \in S^*$. We obtain $|S^*| \geq |S|$. For data in general position, $|S^*| = |S|$, and there is a bijection between the tuples $s^*_h$ and weighted means $\textbf{x}_j$. Next, we introduce a variable $w_h$ for each $s^*_h=(\textbf{x}_{1k}, \ldots, \textbf{x}_{nk})$, $h = 1, \ldots, |S^*|$, representing mass associated with it. The corresponding cost $c_h$ of transporting a unit of mass to the measures is \begin{equation*} c_h = \sum_{i=1}^n \lambda_i ||\textbf{x}_j-\textbf{x}_{ik}||^2, \end{equation*} where $\textbf{x}_j$ is the weighted mean associated to $s^*_h$. Finally, we define sets $S_{ik}^*$ similarly to the sets $S_{ik}$. $S_{ik}^*$ is the set of indices $h$ in $1, \ldots, |S^*|$ where the $i^{th}$ component of $s_h^*$ is $\textbf{x}_{ik}$. Formally, $$ S_{ik}^*=\{\; h \; : \; s_h^*=(\dots,\textbf{x}_{ik},\dots)\}.$$ We now have all the ingredients for the LP. Note that in LPs (\ref{baryLP}) and (\ref{barymodLP}), the first type of main constraints was used to connect the $z_j$ and $y_{ijk}$. They are not needed in the new variant, where the $y_{ijk}$ are eliminated due to the fixed transport. We obtain the following LP as an alternative to solving the Discrete Barycenter Problem. \begin{align*}\label{LPw} \tag{general} \min &\text{ }\spc \sum_{h = 1}^{|S^*|} c_h w_h \nonumber \\ \sum_{h \in S_{ik}^*} w_h = &\text{ }\spc d_{ik}, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall k=1,\ldots,|P_i|\\ w_h \geq & \text{ }\spc 0, \text{ }\spc\text{ }\spc\text{ }\forall h=1,\ldots,|S^*|\nonumber \end{align*} Barycenter masses $z_j$ and an optimal transport $y_{ijk}$ can readily be reconstructed from the $w_h$, and the optimal vertices of LPs (\ref{baryLP}) and (\ref{LPw}) again are in one-to-one correspondence. This gives us Theorem \ref{thm:7eq}. \begin{theorem}\label{thm:7eq} The Discrete Barycenter Problem can be solved using LP (\ref{LPw}). Any optimal vertex corresponds to a sparse barycenter. \end{theorem} For $n=2$, i.e.
for a set of two discrete measures, the computation of a discrete barycenter can be modeled as a transportation problem; see \cite{m-16} where this was explained through a transformation of LP (\ref{baryLP}). We believe the best way to prove this claim is to perform a reindexing of LP (\ref{LPw}) for this special case, as follows: As there are only two measures, we can denote the support elements and masses as $\textbf{x}_k \in \text{supp}{(P_1)}$ with mass $d_k$ and $\textbf{x}_l \in \text{supp}{(P_2)}$ with mass $d_l$. Further, the weights for the measures can be represented as $\lambda_1=\lambda$ and $\lambda_2=1-\lambda$. For a given $\textbf{x}_k \in \text{supp}{(P_1)}$ and $\textbf{x}_l \in \text{supp}{(P_2)}$, an optimal barycenter support point is $\textbf{x}=\lambda \textbf{x}_k + (1-\lambda) \textbf{x}_l$. The corresponding cost of transport is \begin{align*} c_{kl}&=\lambda \|\textbf{x} -\textbf{x}_k\|^2 + (1-\lambda) \|\textbf{x} -\textbf{x}_l\|^2 = \lambda (1-\lambda)^2\| \textbf{x}_k -\textbf{x}_l\|^2+ \lambda^2(1-\lambda)\|\textbf{x}_k -\textbf{x}_l\|^2 \\ &=\lambda (1-\lambda)\| \textbf{x}_k -\textbf{x}_l\|^2=\alpha \| \textbf{x}_k -\textbf{x}_l\|^2,\end{align*} where $\alpha=\lambda(1-\lambda)$ is a constant. Thus, the Discrete Barycenter Problem for $n=2$ can be stated as the following transportation problem. \begin{align*}\label{LPtransport} \tag{transportation} \min &\text{ }\spc \sum_{k=1}^{|P_1|}\sum_{l=1}^{|P_2|} c_{kl} w_{kl} \nonumber \\ \sum_{l=1}^{|P_2|} w_{kl} = &\text{ }\spc d_{k}, \text{ }\forall k=1,\ldots,|P_1|\\ \sum_{k=1}^{|P_1|} w_{kl} = &\text{ }\spc d_{l}, \text{ }\forall l=1,\ldots,|P_2| \nonumber\\ w_{kl} \geq & \text{ }\spc 0, \text{ }\forall k=1,\ldots,|P_1|, \text{ }\forall l=1,\ldots,|P_2|\nonumber \end{align*} It is well-known that transportation problems can be solved in strongly polynomial time \cite{t-86,ks-95}. This immediately gives Theorem \ref{thm:transport}.
\begin{theorem}\label{thm:transport} The Discrete Barycenter Problem for $n=2$ can be solved in strongly polynomial time by solving LP (\ref{LPtransport}), respectively LP (\ref{LPw}). \end{theorem} In Section \ref{sec:datageneral}, we will see that LP (\ref{LPw}) is also the go-to approach for a set of measures in general position. \subsection{Hybrid Approach to Variable Introduction} The strategies of variable introduction presented in LP (\ref{barymodLP}) and LP (\ref{LPw}) are not mutually exclusive. For each tuple $s_h^*$ individually, we can decide whether to use a representation with $y$-variables as in LP (\ref{barymodLP}) or to use a representation with $w$-variables as in LP (\ref{LPw}). To this end, we partition the set $\{1,\dots, |S^*|\}$ into two index sets $(S^*)^y$ and $(S^*)^w$, to indicate for which $s_h^*$ we will use the $y$-representation, respectively the $w$-representation. The original support points $\textbf{x}_{ik}$ can receive mass through a transport denoted by some $y_{ijk}$ and through some fixed transport of a $w_h$. First, we introduce $(S^*)^w_{ik}$ to be the set of indices $h \in (S^*)^w$ such that the $i^{th}$ component of $s_h^*$ is $\textbf{x}_{ik}$. Then $(S^*)^w_{ik}\subset (S^*)^w$ corresponds precisely to those combinations $s_h^*$ which imply a fixed transport to $\textbf{x}_{ik}$ if assigned mass. For a formal definition of $(S^*)^w_{ik}$, we only have to restrict $S_{ik}^*$ to indices $h \in (S^*)^w$, i.e. $$ (S^*)^w_{ik}=\{\; h \; : \; s_h^*=(\dots,\textbf{x}_{ik},\dots), h \in (S^*)^w\}.$$ For a proper indexing of the transport corresponding to $(S^*)^y$ we need index sets that mirror $S_{ik}$ and $S_j$ as defined in Section \ref{sec:ystart}, restricted to $(S^*)^y$. $S^y_{ik}$ contains the indices $j$ of all $\textbf{x}_j \in S$ produced by the weighted means of tuples in $(S^*)^y$ that contain $\textbf{x}_{ik}$.
Formally, $$ S_{ik}^y=\{\; j \; : \; \textbf{x}_j=\lambda_i\textbf{x}_{ik} + \sum\limits_{l=1, l\neq i}^n \lambda_l \textbf{x}_{lk'} \text{ for some } s^*_h=(\textbf{x}_{1k'},\dots,\textbf{x}_{ik},\dots,\textbf{x}_{nk'}) \in (S^*)^y \}.$$ Further, $S^y_j$ contains those index pairs $(i,k)$ for which $\textbf{x}_j \in S$ is the weighted mean of a tuple in $(S^*)^y$ that contains $\textbf{x}_{ik}$. This gives $$ S^y_j=\{\; (i,k) \; : \; \textbf{x}_j=\lambda_i\textbf{x}_{ik} + \sum\limits_{l=1, l\neq i}^n \lambda_l \textbf{x}_{lk'} \text{ for some } s^*_h=(\textbf{x}_{1k'},\dots,\textbf{x}_{ik},\dots,\textbf{x}_{nk'}) \in (S^*)^y\}.$$ Now, we are ready to state an LP that allows for the split of $\{1,\dots, |S^*|\}$ into index sets $(S^*)^y$ and $(S^*)^w$: \begin{align*}\label{LPhybrid} \tag{hybrid} \min \text{ }\spc \sum_{h \in (S^*)^w}&c_h w_h+\sum_{i=1}^n\lambda_i\sum_{k = 1}^{|P_i|}\sum_{j \in S^y_{ik}}\|\textbf{x}_j - \textbf{x}_{ik} \|^2 y_{ijk} \nonumber \\ \sum_{k \,:\, (i,k) \in S^y_j} y_{ijk} = &\text{ }\spc z_j, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall j \text{ with } S^y_j\neq\emptyset\nonumber\\ \sum_{h \in (S^*)^w_{ik}} w_h + \sum_{j \in S^y_{ik}} y_{ijk} = &\text{ }\spc d_{ik}, \text{ }\spc\forall i=1,\ldots,n,\text{ }\forall k=1,\ldots,|P_i|\\ w_h \geq & \text{ }\spc 0, \text{ }\spc\text{ }\spc\text{ }\forall h \in (S^*)^w\nonumber \\ y_{ijk} \geq & \text{ }\spc0, \text{ }\spc\text{ }\spc\text{ }\forall i=1,\ldots,n,\text{ }\forall k=1,\ldots,|P_i|,\text{ } \forall j \in S^y_{ik} \nonumber \\ z_j \geq & \text{ }\spc0, \text{ }\spc\text{ }\spc\text{ }\forall j \text{ with } S^y_j\neq\emptyset\text{ }\nonumber \end{align*} Correctness of this model is a direct consequence of Theorems \ref{thm:6eq} and \ref{thm:7eq}. \begin{theorem}\label{thm:8eq} The Discrete Barycenter Problem can be solved using LP (\ref{LPhybrid}). Any optimal vertex corresponds to a sparse barycenter.
\end{theorem} In the next sections, we study the advantages of LPs (\ref{barymodLP}), (\ref{LPw}), and (\ref{LPhybrid}) in model size and practical performance in some representative settings. We begin with data in general position, where LP (\ref{LPw}) is the best model and LP (\ref{LPhybrid}) mirrors it. Then we turn to highly structured data, more precisely a set of measures supported in regular $d$-dimensional grids, where LPs (\ref{barymodLP}) and (\ref{LPhybrid}) dramatically outperform LP (\ref{LPw}). Finally, we exhibit a type of input for which LP (\ref{LPhybrid}) vastly outperforms the other approaches. \section{Data in General Position}\label{sec:datageneral} \subsection{Theoretical Analysis}\label{generaltheory} We begin with data in general position. Recall that measures $P_1,\dots,P_n$ are in general position if different combinations of $\textbf{x}_{ik}\in \text{supp}(P_i)$ always induce different weighted means $\textbf{x}_j$. For a simple notation, we assume that all $P_i$ have the same number of support points $|P_i|= p_{\max}$. Further, recall that LP (\ref{baryLP}) used $n(p_{\max})^{n+1}+(p_{\max})^n$ variables and $n (p_{\max})^n+n p_{\max}$ constraints. First, we analyze the size reduction achieved through LP (\ref{barymodLP}). The general position implies that all $(p_{\max})^n$ combinations of support points in the original measures produce $(p_{\max})^n$ different $\textbf{x}_j$, which then is also the number of required $z_j$ variables. Since each $\textbf{x}_j$ in LP (\ref{barymodLP}) transports to $n$ points, one for each $P_i$, we get a total number of variables $(p_{\max})^n+n(p_{\max})^n=(1+n)(p_{\max})^n$. This has reduced the exponent on the growth by one compared to before; the number of variables is reduced by $n(p_{\max})^n(p_{\max}-1)$. A representation of this reduction as a percentage highlights the dramatic improvement over LP (\ref{baryLP}). 
The percentage reduction in number of variables can be represented as a function that is dominated by $p_{\max}$: \[ 1-\dfrac{(1+n)(p_{\max})^n}{(p_{\max})^n+n(p_{\max})^{n+1}}=1-\dfrac{1+n}{1+n p_{\max}}= \dfrac{n(p_{\max}-1)}{1+n p_{\max}} = \dfrac{(p_{\max}-1)}{\frac{1}{n}+ p_{\max}}.\] For all reasonable sizes of $n$, this fraction is close to $\frac{(p_{\max}-1)}{ p_{\max}}$. For example, for $p_{\max}=256$, as is the case for $P_i$ supported densely on a $16 \times 16$ grid, this is about a $99.61\%$ reduction in the number of variables. The digits data set used for computational experiments in Section \ref{sec:reggrids} is supported on such a grid (but not densely). Next, we turn to LP (\ref{LPw}). Here, the number of variables depends only on the sizes of the support sets $\text{supp}(P_i)$ of the original measures. There are always $ \prod_{i=1}^n |P_i| $ variables regardless of whether or not the measures are in general position. Informally, unlike LP (\ref{barymodLP}) or LP (\ref{baryLP}), LP (\ref{LPw}) always assumes the worst case of having data in general position. For all $|P_i| = p_{\max}$, it has $ (p_{\max})^n$ variables. In this case, LP (\ref{baryLP}) had $(p_{\max})^n + n (p_{\max})^{n+1}$ variables. Using LP (\ref{LPw}) eliminates all of the $y$ variables from LP (\ref{baryLP}), reducing the total number of variables by $n(p_{\max})^{n+1}$. It also eliminates all $y$ variables from LP (\ref{barymodLP}), for a reduction in the number of variables of $n(p_{\max})^n$. Looking again at the percentage reduction, for LP (\ref{baryLP}) the reduction is given by \[ 1- \frac{(p_{\max})^n}{n(p_{\max})^{n+1}+(p_{\max})^n}= \frac{n p_{\max}}{np_{\max}+1} = \frac{p_{\max}}{p_{\max}+\frac{1}{n}}.\] This can get arbitrarily close to a $100 \%$ reduction in variables; at $n=4$ and $p_{\max} = 256$ this is already $99.9\%$.
Comparing to LP (\ref{barymodLP}), the reduction is given by \[ 1- \frac{(p_{\max})^n}{(1+n)(p_{\max})^n}= \frac{n }{n+1} = \frac{1}{1+\frac{1}{n}}.\] Thus the percentage reduction between LP (\ref{barymodLP}) and LP (\ref{LPw}) depends only on $n$ and can also get arbitrarily close to $100\%$; for $n=4$, this is an $80\%$ reduction in number of variables. It is also worth noting that the reduction of $y$ variables when improving LP (\ref{baryLP}) to LP (\ref{barymodLP}) does not reduce the number $n((p_{\max})^{n}+p_{\max})$ of constraints. In contrast, LP (\ref{LPw}) does reduce the number of constraints to $n p_{\max}$, which is no longer exponential. This reduction can be significant for the solving time, especially if the number of required variables is comparable between formulations. It is always the case that LP (\ref{barymodLP}) has at least as many constraints as LP (\ref{LPw}). For data in general position, LP (\ref{LPw}) is clearly preferable, with both fewer variables and fewer constraints. Finally, consider LP (\ref{LPhybrid}). For each of the $(p_{\max})^n$ different $\textbf{x}_j$, respectively each of the $|S|=|S^*|$ combinations $s_h^*$ of original support points, we have to decide whether to introduce $w$ variables as in LP (\ref{LPw}) or $y, z$ variables as in LP (\ref{barymodLP}). With the same arguments as above, it is best to always introduce $w$ variables, which produces exactly LP (\ref{LPw}). Formally, we obtain Theorem \ref{thm:hybisgen}. \begin{theorem}\label{thm:hybisgen} For data in general position, choosing $(S^*)^w=S^*$ and $(S^*)^y=\emptyset$ gives a minimal number of variables and constraints in LP (\ref{LPhybrid}). Then LP (\ref{LPhybrid}) and LP (\ref{LPw}) are identical. \end{theorem} We summarize the formulas for the number of variables and constraints for the various LP variants in Table \ref{table:pmax}.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|}\hline LP formulation & Variables & Constraints \\ \hline (\ref{baryLP}) & $n(p_{\max})^{n+1}+(p_{\max})^n$ & $n (p_{\max})^n+n p_{\max}$ \\ \hline (\ref{barymodLP}) & $(1+n)(p_{\max})^n$ & $n (p_{\max})^n+n p_{\max}$ \\ \hline (\ref{LPw}) & $(p_{\max})^n$ & $n p_{\max}$ \\ \hline (\ref{LPhybrid}) & $(p_{\max})^n$ & $n p_{\max}$ \\ \hline \end{tabular} \end{center} \caption{The number of variables and constraints of the LPs for data in general position.} \label{table:pmax} \end{table} \subsection{Computational Results}\label{sec:compgeneral} We exhibit the practical advantage of LP (\ref{LPw}) over LP (\ref{baryLP}) through some sample computations. These computations are on crime data for Denver County, which is openly available as part of the Denver Open Data Catalog. For a data set in general position, we use the locations of murders during 2016. Each month forms a measure $P_i, i = 1, \ldots, 12,$ by weighting each murder equally during the month. A discrete barycenter for this data set can be interpreted to indicate locations for police presence, such that a fast response to (at least) one of the incidents in each month is achieved. An aggregate image of all murder locations and a corresponding discrete barycenter are displayed in Figure \ref{fig:murders}. The radii of the shapes in both parts of the figure are relative to their masses. Larger masses occur for fewer murders in that month. The data for this application was processed in Python, and the LP was solved using AMPL. Using LP (\ref{baryLP}), the largest number of months for which we were able to compute a discrete barycenter was $9$ months. This computation took a minute on a standard laptop. In contrast, using LP (\ref{LPw}) the same computation finished in less than a second. The barycenter for a full year ($12$ months) of data, depicted in Figure \ref{fig:murders}, was computed in $14$ minutes. 
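To illustrate how computations of this kind can be reproduced at small scale with an off-the-shelf solver, here is a minimal sketch of the $n=2$ case, i.e. LP (\ref{LPtransport}), using SciPy's \texttt{linprog}. The one-dimensional data, the tolerance, and all variable names are illustrative assumptions; the computations reported above used Python and AMPL.

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

# Two illustrative measures on the real line with weights lambda, 1 - lambda.
lam = 0.5
x1, d1 = np.array([0.0, 1.0]), np.array([0.5, 0.5])    # supp(P_1), masses
x2, d2 = np.array([0.0, 2.0]), np.array([0.25, 0.75])  # supp(P_2), masses

# Cost c_kl = lambda * (1 - lambda) * ||x_k - x_l||^2.
C = lam * (1 - lam) * (x1[:, None] - x2[None, :]) ** 2

# Row sums of w equal d_k, column sums equal d_l (w flattened row-major).
m, p = len(x1), len(x2)
A_eq = np.zeros((m + p, m * p))
for k in range(m):
    A_eq[k, k * p:(k + 1) * p] = 1.0
for l in range(p):
    A_eq[m + l, l::p] = 1.0
b_eq = np.concatenate([d1, d2])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
w = res.x.reshape(m, p)

# Barycenter support: weighted means lambda*x_k + (1-lambda)*x_l with w_kl > 0.
barycenter = {lam * xk + (1 - lam) * xl: w[k, l]
              for k, xk in enumerate(x1) for l, xl in enumerate(x2)
              if w[k, l] > 1e-9}
print(round(res.fun, 6), barycenter)
```

For this input, the optimal cost is $0.375$ and the barycenter is supported on the three points $0$, $1$, and $1.5$ with masses $0.25$, $0.25$, and $0.5$.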
Note that the number of variables for the $12$-month set is about $30$ times larger than for $9$ months; recall the exponential scaling, highlighted in Table \ref{table:pmax}. \begin{figure}[t] \begin{center} \includegraphics[scale = .4]{murder_map.pdf} \end{center} \begin{center} \includegraphics[scale = .4]{police_map.pdf} \end{center} \caption{Murder locations in Denver County in 2016 (top) and a barycenter indicating suggested police presence locations (bottom). }\label{fig:murders} \end{figure} \section{Data in Regular Grids}\label{sec:reggrids} LPs (\ref{baryLP}) and (\ref{barymodLP}) do have a significant advantage over LP (\ref{LPw}) in many practical applications: They are able to take advantage of the structure of the supports of $P_i$, whereas LP (\ref{LPw}) cannot. To see this advantage, first recall that LPs (\ref{baryLP}) and (\ref{barymodLP}) have variables $y$ and $z$ both indexed on $j = 1, \ldots, |S|$, where $|S|$ is the number of {\em distinct} weighted means. In contrast, LP (\ref{LPw}) introduces variables $w$ indexed on $h = 1, \ldots, |S^*|$. $S$ can be of much smaller size than $S^*$: in $S^*$, each combination $s_h^*$ of support points in the original measures is counted separately, even if several combinations result in the same support point $\textbf{x}_j\in S$. We will highlight the differences in the various model sizes for structured data by considering measures that are supported in a $d$-dimensional regular grid. This is one of the most frequent settings in optimal mass transport problems. The MNIST data set of handwritten digits is a prime example of such data. It has been used as a benchmark for many machine learning algorithms; see \cite{lbh-98} for more information. In this database, each measure is a handwritten digit scanned into a $16 \times 16$ grid. Each measure is supported on a subset of the grid. The different shades of grey indicate different masses at the support points; the darker a point, the more mass it holds. The masses add up to $1$.
See Figure \ref{fig:example8}. In this figure, we also include a sample barycenter computed for four digits from the set. Note that the barycenter is supported sparsely on a finer grid (here $61 \times 61$). \begin{figure} \begin{center} \includegraphics[scale = .35]{digitimage8_0.pdf} \includegraphics[scale = .35]{TrueB8_4.pdf} \end{center} \caption{Left, an example of digit 8 from the MNIST digits data set. Right, an example barycenter calculated for four digit 8's. }\label{fig:example8} \end{figure} For our implementation, we considered two approaches. In the first, we implemented LPs (\ref{baryLP}), (\ref{barymodLP}), (\ref{LPw}), (\ref{LPhybrid}) in C\texttt{++}. In this implementation, during the data processing we do not use the a priori knowledge that $S$ is a subset of a grid with $(nK-n+1)^d$ points. Instead, we generate $S$ by processing each element of $S^*$ and checking for duplication among those elements already produced. This requires exponential effort, but allows us to determine exactly those $\textbf{x}_j \in S$ which can be generated by the support set of the digits under consideration, and produce the smallest possible LPs for each formulation. These provide a baseline for other applications in which there may be structure whose presence or type is not known prior to processing. The exponential effort of the preprocessing for the first implementation is in strong contrast to the polynomial size of the support set $S$ in this application. Because of this, in our second implementation we do use the a priori knowledge of the structure of $S$ to generate the possible $\textbf{x}_j$. This is done without checking if $\textbf{x}_{ik}$ exist to produce $\textbf{x}_j$, so it requires effort that is now linear in the number of measures $n$. This leads to larger LPs, due to the presence of additional $\textbf{x}_j$, but significantly less processing time.
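The difference between the two generation strategies can be sketched as follows; the measures, the grid size, and the choice to keep coordinates as integer sums (i.e. scaled by $n$, avoiding floating-point duplicates) are illustrative assumptions.

```python
from itertools import product

# Illustrative setup: n measures supported on subsets of a K x K grid,
# with uniform weights 1/n.
n, K = 3, 4
supports = [
    {(1, 1), (2, 3)},
    {(1, 2), (4, 4)},
    {(3, 1), (2, 2)},
]

# First approach: enumerate S* and deduplicate. Exponential effort in n,
# but yields exactly the weighted means the given supports can produce.
S_exact = set()
for combo in product(*supports):
    S_exact.add(tuple(sum(pt[d] for pt in combo) for d in range(2)))

# Second approach: use the a priori structure. S is contained in the refined
# grid with (nK - n + 1)^d points; generate it without touching S*.
lo, hi = n * 1, n * K
S_grid = {(a, b) for a in range(lo, hi + 1) for b in range(lo, hi + 1)}

print(len(S_exact), len(S_grid))
```

Here the first strategy recovers only the $8$ weighted means that are actually realizable, while the second generates all $(nK-n+1)^d = 100$ refined grid points; the resulting LP is larger, but no element of $S^*$ is processed.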
We have implemented LPs (\ref{baryLP}) and (\ref{barymodLP}) in this manner. Both efforts produce LPs whose numbers of variables and constraints are bounded by the formulas given in Table \ref{tab:gridVC}, stated under the assumption that the original measures have non-zero mass at all elements of the grid. All LPs were solved using the Gurobi Optimizer 7.0. We first present the theoretical size comparisons for grid-structured data. Included in the theory is the additional grid knowledge which we use in the second implementation to avoid processing $S^*$; that is, we consider how we may reduce the $y$ variables without construction of $S_{ik}$ and $S_j$. We also discuss a formula for determining if the hybrid approach may introduce fewer variables without processing $S^*$, but we do not implement this. \subsection{Theoretical Analysis}\label{sec:TC} \subsubsection{LPs (\ref{baryLP}) and (\ref{barymodLP}) versus LP (\ref{LPw})\\\\} {\hspace*{-0.17cm}}We begin by comparing the model sizes of LPs (\ref{baryLP}) and (\ref{LPw}). Suppose that all measures $P_i$ are supported on a $d$-dimensional regular grid of integer step sizes in each direction, each coordinate going from $1$ to $K$, and that $\lambda_i = \frac{1}{n}$ for all $i=1,\dots,n$. Informally, $S$ then becomes a $d$-dimensional grid that is $n$ times finer than the original grids. We get $|S| = (nK-n+1)^d$, and the number of variables in LP (\ref{baryLP}) is $$(nK-n+1)^d (1+\sum_{i=1}^n |P_i| )= (nK-n+1)^d (1+nK^d) .$$ This growth is polynomial in $n$, with degree $d+1$. By contrast, LP (\ref{LPw}) will have $(K^d)^n$ variables, an exponential growth in $n$. The number of constraints in both LPs does not significantly impact the relation between the two LPs: While LP (\ref{LPw}) has the minimum number of $nK^d$ constraints, LP (\ref{baryLP}) only has an additional $n(nK-n+1)^d$ constraints (which again is polynomial in $n$ with degree $d+1$).
So both formulations have a polynomial number of constraints for a regular grid. \begin{table} \begin{center} \begin{tabular}{|c|c|c|}\hline LP formulation & Variables & Constraints \\ \hline (\ref{baryLP}) & $(nK-n+1)^d(1+nK^d)$ & $nK^d+n(nK-n+1)^d$ \\ \hline (\ref{LPw}) & $(K^d)^n$ & $nK^d$ \\ \hline \end{tabular} \end{center} \caption{The number of variables and constraints of the LPs for $n$ measures supported fully on a grid of length $K$.}\label{tab:gridVC} \end{table} For grid-structured data, the previous result that LP (\ref{barymodLP}) is smaller than LP (\ref{baryLP}) still holds. In the first implementation, while processing $S^*$ we can again construct the sets $S_{ik}$ and thereby eliminate any $y_{ijk}$ for which $\textbf{x}_{ik}$ does not appear in a combination producing $\textbf{x}_j$. But in fact, we now show that we can also use the structure to determine, prior to processing, all original support points $\textbf{x}_{ik}$ which can never produce a particular $\textbf{x}_j$. This knowledge will be used directly in the second implementation of LP (\ref{barymodLP}). For the sake of simplicity, let $\lambda_i=\frac{1}{n}$. Then $\textbf{x}_j = \frac{1}{n} \sum_{i=1}^n \textbf{x}_{ik}$ for all $\textbf{x}_j\in S$, and we can use $s_l = \sum_{i=1}^n x_{ikl}$ to denote each coordinate $l = 1, \dots, d$, scaled by $n$. Consider an $\textbf{x}_j$ which has a coordinate $s_l$ within $K-1$ of the minimum or of the maximum among all points in $S$ for at least one $l = 1, \dots, d$. Then there exist points in the original grid to which it cannot transport, in any optimal solution. This is depicted in Figure \ref{fig:outergrid}, which shows the set $S$ for $4$ measures in $\mathbb{R}^2$ in a regular grid with $K=4$, resulting in a $13\times 13$ grid. Any point with a coordinate of $1, 1.25, 1.5$, or a coordinate of $4, 3.75,$ or $3.5$ does not transport to all points in the original grids.
All other points -- those in the `center' of the grid -- will require all $nK^d+1$ variables when the original measures are supported on the entirety of the original grid. In summary, LP (\ref{barymodLP}) is of smaller size than LP (\ref{baryLP}), and both are significantly smaller than (\ref{LPw}), in this setting. Without a priori knowledge, the construction of LP (\ref{barymodLP}) requires the implicit setup of sets $S_{ik}$ and $S_j$ (recall the discussion in Section \ref{sec:ystart}); we now discuss why this is difficult. Due to the structure of the grid data, it is possible to efficiently -- so in particular without processing $S^*$ explicitly -- take advantage of the effect highlighted in Figure \ref{fig:outergrid}. In contrast, for general data, this is a difficult task: We prove that the sets $S_j$ cannot be constructed efficiently unless P$=$NP. Of course, if $S$ is of exponential size, the total effort of constructing all the $S_j$ is trivially exponential, too. However, even deciding whether a given $\textbf{x}$ lies in $S$ or not is already NP-hard. \begin{lemma}\label{lem:goodLPishard} Let $P_1,\dots,P_n \subset \mathbb{R}^d$ be discrete measures, let $\lambda_1,\dots,\lambda_n$ be weights, and let $\textbf{x}\in \mathbb{R}^d$. Then it is NP-hard to decide whether $\textbf{x}\in S$, even for $d=1$. \end{lemma} \begin{proof} We prove the claim through a reduction from the subset sum problem, which is known to be NP-complete. The subset sum problem can be stated as follows: given a set of integers $p_1,\dots,p_n$, is there a non-empty subset whose sum is a given $s\in \mathbb{Z}$? Let $P_1,\dots,P_n \subset \mathbb{Z}$ be discrete measures, and let $P_i$ consist of the support points $p_i$ and $0$ (both of mass $\frac{1}{2}$, but this is not relevant). Further, let $\lambda_1,\dots,\lambda_n=\frac{1}{n}$ and $x\in \mathbb{Z}$.
To decide whether $\textbf{x}\in S$, note that $\textbf{x}$ has to be represented as $x= \sum_{i=1}^n \lambda_i \textbf{x}_{ik}$, where $\textbf{x}_{ik}$ is either $p_i$ or $0$ for all $i\leq n$. Thus, the decision is equivalent to the decision whether there is a subset of the $p_1,\dots,p_n$ adding up to $n\cdot x$. This proves the claim. \qed \end{proof} We obtain the following direct consequence on the hardness of constructing $S_j$ for a fixed $\textbf{x}_j\in S$. \begin{corollary}\label{cor:goodLPishard} Let $P_1,\dots,P_n \subset \mathbb{R}^d$ be discrete measures, let $\lambda_1,\dots,\lambda_n$, and let $\textbf{x}_j\in S$. Then it is NP-hard to construct the set $S_j$. \end{corollary} Recall that, formally, NP-hardness is a statement on decision problems. The phrasing in Corollary \ref{cor:goodLPishard} means that it is NP-hard to decide whether a given set $S_j$ is correct. \begin{figure} \begin{center} \begin{tikzpicture \node (11) at (1,2.7) {$(1,1)$}; \node (41) at (4,2.7) {$(4,1)$}; \node (14) at (1,6.3) {$(1,4)$}; \node (44) at (4,6.3) {$(4,4)$}; \fill [black] (1,3) circle (2pt); \fill [black] (1.25,3) circle (2pt); \fill [black] (1.5,3) circle (2pt); \fill [black] (1.75,3) circle (2pt); \fill [black] (2,3) circle (2pt); \fill [black] (2.25,3) circle (2pt); \fill [black] (2.5,3) circle (2pt); \fill [black] (2.75,3) circle (2pt); \fill [black] (3,3) circle (2pt); \fill [black] (3.25,3) circle (2pt); \fill [black] (3.5,3) circle (2pt); \fill [black] (3.75,3) circle (2pt); \fill [black] (4,3) circle (2pt); \fill [black] (1,3.25) circle (2pt); \fill [black] (1.25,3.25) circle (2pt); \fill [black] (1.5,3.25) circle (2pt); \fill [black] (1.75,3.25) circle (2pt); \fill [black] (2,3.25) circle (2pt); \fill [black] (2.25,3.25) circle (2pt); \fill [black] (2.5,3.25) circle (2pt); \fill [black] (2.75,3.25) circle (2pt); \fill [black] (3,3.25) circle (2pt); \fill [black] (3.25,3.25) circle (2pt); \fill [black] (3.5,3.25) circle (2pt); \fill [black] (3.75,3.25) 
circle (2pt); \fill [black] (4,3.25) circle (2pt); \fill [black] (1,3.5) circle (2pt); \fill [black] (1.25,3.5) circle (2pt); \fill [black] (1.5,3.5) circle (2pt); \fill [black] (1.75,3.5) circle (2pt); \fill [black] (2,3.5) circle (2pt); \fill [black] (2.25,3.5) circle (2pt); \fill [black] (2.5,3.5) circle (2pt); \fill [black] (2.75,3.5) circle (2pt); \fill [black] (3,3.5) circle (2pt); \fill [black] (3.25,3.5) circle (2pt); \fill [black] (3.5,3.5) circle (2pt); \fill [black] (3.75,3.5) circle (2pt); \fill [black] (4,3.5) circle (2pt); \fill [black] (1,3.75) circle (2pt); \fill [black] (1.25,3.75) circle (2pt); \fill [black] (1.5,3.75) circle (2pt); \fill [black] (1.75,3.75) circle (2pt); \fill [black] (2,3.75) circle (2pt); \fill [black] (2.25,3.75) circle (2pt); \fill [black] (2.5,3.75) circle (2pt); \fill [black] (2.75,3.75) circle (2pt); \fill [black] (3,3.75) circle (2pt); \fill [black] (3.25,3.75) circle (2pt); \fill [black] (3.5,3.75) circle (2pt); \fill [black] (3.75,3.75) circle (2pt); \fill [black] (4,3.75) circle (2pt); \fill [black] (1,4) circle (2pt); \fill [black] (1.25,4) circle (2pt); \fill [black] (1.5,4) circle (2pt); \fill [black] (1.75,4) circle (2pt); \fill [black] (2,4) circle (2pt); \fill [black] (2.25,4) circle (2pt); \fill [black] (2.5,4) circle (2pt); \fill [black] (2.75,4) circle (2pt); \fill [black] (3, 4) circle (2pt); \fill [black] (3.25, 4) circle (2pt); \fill [black] (3.5, 4) circle (2pt); \fill [black] (3.75, 4) circle (2pt); \fill [black] (4, 4) circle (2pt); \fill [black] (1,4.25) circle (2pt); \fill [black] (1.25,4.25) circle (2pt); \fill [black] (1.5,4.25) circle (2pt); \fill [black] (1.75,4.25) circle (2pt); \fill [black] (2,4.25) circle (2pt); \fill [black] (2.25,4.25) circle (2pt); \fill [black] (2.5,4.25) circle (2pt); \fill [black] (2.75,4.25) circle (2pt); \fill [black] (3, 4.25) circle (2pt); \fill [black] (3.25, 4.25) circle (2pt); \fill [black] (3.5, 4.25) circle (2pt); \fill [black] (3.75, 4.25) circle (2pt); \fill 
[black] (4, 4.25) circle (2pt);
\foreach \y in {4.5,4.75,...,6}
  \foreach \x in {1,1.25,...,4}
    \fill [black] (\x,\y) circle (2pt);
\end{tikzpicture}
\qquad \qquad \qquad \qquad \qquad
\begin{tikzpicture}
\node (11) at (1,2.7) {$(1,1)$};
\node (41) at (4,2.7) {$(4,1)$};
\node (14) at (1,6.3) {$(1,4)$};
\node (44) at (4,6.3) {$(4,4)$};
\foreach \y in {3,3.25,...,6}
  \foreach \x in {1,1.25,...,4}
    \fill [green] (\x,\y) circle (2pt);
\foreach \y in {3.75,4,...,5.25}
  \foreach \x in {1.75,2,...,3.25}
    \fill [black] (\x,\y) circle (2pt);
\end{tikzpicture}
\end{center}
\caption{Combining $n=4$ grids of length $K=4$ results in a $13\times 13$ grid for $S$. The outer $K-1$ rows and columns have at least $nK$ points $\textbf{x}_{ik}$ to which they will not transport.}\label{fig:outergrid}
\end{figure}

\subsubsection{On the construction of LP (\ref{LPhybrid})\\\\}
{\hspace*{-0.17cm}}The key idea behind LP (\ref{LPhybrid}) is that the two strategies of variable introduction can be used interchangeably; recall Section \ref{sec:Form}. Note that choosing between a $y,z$ set of variables and a set of $w$ variables is different from the construction of the sets $S_{ik}$ and $S_j$; we only have to count the number of combinations $s^*_h$ that give the same support point $\textbf{x}_j$. Lemma \ref{lem:goodLPishard} tells us that this remains hard for general data. However, for data in regular grids, we have substantially more information. In the following, we analyze the choice between a $y,z$ set of variables and a set of $w$ variables for each support point $\textbf{x}_j$ in $S$, introducing all variables of one type. So the combinations $s_h^*$ that give the same $\textbf{x}_j$ have to be either all in $(S^*)^w$ or all in $(S^*)^y$, and are not split up. This simplification is a restriction for general data, where a lower number of variables for a variant of LP (\ref{LPhybrid}) may be obtained by mixing the two strategies for an individual $\textbf{x}_j$. We do not attempt to achieve this true minimum; our goal is a significant improvement over LP (\ref{barymodLP}).

To derive a formula for the number of combinations producing a particular $\textbf{x}_j$, we begin in dimension $d=1$. We again let $\lambda_i = \frac{1}{n}$ for all $i=1,\dots,n$. Then $\textbf{x}_j = \frac{1}{n} \sum_{i=1}^n \textbf{x}_{ik}$ for each $\textbf{x}_j\in S$. We consider $s = n\cdot \textbf{x}_j = \sum_{i=1}^n \textbf{x}_{ik}$.
Note that the $\textbf{x}_{ik}$ are integers between $1$ and $K$. The problem of determining the number of combinations of $\textbf{x}_{ik}$ that give $\textbf{x}_j$ is equivalent to finding the number $F(s,K,n)$ of combinations of $n$ integers between $1$ and $K$ that sum up to $s$. More formally,
\[ F(s,K,n) = \text{number of solutions to} \begin{cases} a_{1}+a_{2}+ \ldots + a_{n}=s \\1 \leq a_{i} \leq K, \; a_i \in \mathbb{Z} \; \forall i\leq n \end{cases}.\]
This is a standard counting problem, in which $n$ $K$-sided dice are tossed and one is interested in the number of combinations that give sum $s$. Thus, $F(s,K,n)$ is given by
\begin{equation}\label{eqn:F} F(s,K,n) =\sum_{m=0}^n (-1)^m {n \choose m}{ s-mK-1 \choose n-1}. \end{equation}
We can extend this formula to $\textbf{x}_j\in\mathbb{R}^d$ for higher dimension $d$ by considering each coordinate of $\textbf{x}_j$ individually. Let $s_l$ be the corresponding sum of the $l$-th coordinate of $\textbf{x}_j$; then the total number of combinations whose weighted mean is $\textbf{x}_j$ is
\begin{equation}\label{eqn:Nj} N_j = \prod_{l = 1}^d F(s_l,K,n).\end{equation}
The maximum number of variables $y_{ijk}$ which will be introduced for a particular $z_j$ is $nK^d$. So for any $j$ where $N_j$ exceeds $nK^d+1$, it is better to use $y,z$ variables rather than introducing individual $w_h$, $h = 1,\dots, N_j$, for the combinations. The $\textbf{x}_j$ for which the $w$ variables are preferable correspond to the corners of the possible support grid $S$. Figure \ref{fig:corner} illustrates these $\textbf{x}_j$ for $n=3$ and $n=4$ for the grid corresponding to the digits data set.
\begin{figure}
\begin{center}
\includegraphics[scale = .4]{16gridn3corners.pdf}
\includegraphics[scale = .4]{16gridcorners.pdf}
\end{center}
\caption{Support points in $S$ for a set of $n=3$ and $n=4$ measures supported in a $16 \times 16$ grid.
The dark support points indicate those where $w$ variables are preferable over $y,z$ variables in an implementation of LP (\ref{LPhybrid}).}\label{fig:corner}
\end{figure}
In Figure \ref{fig:corner}, we note that for large $n$, the majority of the grid will prefer $y,z$ variables. Additionally, the areas where $w$ variables are preferable overlap heavily with the areas of the grid where, as in Figure \ref{fig:outergrid}, we can reduce the number of $y$ variables. We therefore anticipate that in this scenario, the hybrid approach will not show significant improvement over LP (\ref{barymodLP}). For this reason, we conclude our computational experiments with an example in which LP (\ref{LPhybrid}) does show a significant advantage.

\subsection{Digit Computations: No Usage of A Priori Knowledge}\label{sec:noprior}
We first present the LP sizes and solving times for computations that do not take advantage of a priori knowledge of the barycenter support set structure. Each run is the computation of a barycenter for four handwritten variants of the same digit, with the chosen digits' computations being representative of a typical sample of that digit. These computations follow the discussion at the beginning of Section \ref{sec:reggrids}; they give a baseline comparison for the four LP formulations when there is a lot of repetition in the weighted means, but no a priori expert knowledge on the structure of the possible support $S$. Recall that $S$ is much smaller than $S^*$ for such data. It is expected that LP (\ref{LPw}) will perform poorly in this scenario, as it is unable to benefit from the underlying structure. Table \ref{VCsizeDigitNoGrid} contains the number of variables and constraints produced by the formulations and Table \ref{fig:SolvingDigitNoGrid} contains the solving times of the LPs after setup.
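The dice-sum count $F(s,K,n)$ and the resulting rule for choosing between $w$ and $y,z$ variables can be sanity-checked numerically. The following Python sketch (illustrative only; the function and variable names are ours, not those of our implementation) compares the inclusion--exclusion formula against brute-force enumeration and applies the threshold $N_j > nK^d + 1$:

```python
from itertools import product
from math import comb

def F(s, K, n):
    """Inclusion-exclusion count of n integers in [1, K] summing to s."""
    return sum((-1) ** m * comb(n, m) * comb(s - m * K - 1, n - 1)
               for m in range(n + 1) if s - m * K - 1 >= n - 1)

def F_brute(s, K, n):
    """Direct enumeration of the same count (exponential; small cases only)."""
    return sum(1 for a in product(range(1, K + 1), repeat=n) if sum(a) == s)

K, n, d = 4, 4, 2  # the 13x13 example: four measures on a 4x4 grid

# Formula agrees with brute force for every feasible sum s in [n, nK].
assert all(F(s, K, n) == F_brute(s, K, n) for s in range(n, n * K + 1))

def prefers_w(coord_sums):
    """True if w variables are cheaper than y,z variables at this x_j."""
    N_j = 1
    for s_l in coord_sums:
        N_j *= F(s_l, K, n)
    return N_j <= n * K ** d + 1

# A corner of S (every measure contributes grid point (1,1)) has N_j = 1
# and prefers w; the center has many producing combinations and prefers y,z.
assert prefers_w((n, n))
assert not prefers_w((10, 10))  # center: F(10,4,4)^2 = 44^2 combinations
```

This reproduces the qualitative picture of Figure \ref{fig:corner}: only the extreme (corner) support points fall below the threshold.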
The size of the support of each measure varies significantly based on the digit (with digit 1 being the smallest), which is the reason for the significant difference in running and solving times among the different digits. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline Digit & \multicolumn{2}{c|}{LP (\ref{baryLP})} & \multicolumn{2}{c|}{LP (\ref{barymodLP})} & \multicolumn{2}{c|}{LP (\ref{LPw})} & \multicolumn{2}{c|}{LP (\ref{LPhybrid})}\\ \hline & Rows & Columns& Rows & Columns& Rows & Columns& Rows & Columns \\ \hline 0 & 10,598 & 1,317,437 & 10,598 & 740,116 &522 &281,656,116 & 10,130 & 739,615 \\ \hline 1 & 3,083 & 144,200 & 3,083 & 82,834 &199 & 6,058,800 & 2,867&82,624 \\ \hline 3 & 10,999 & 1,212,100 &10,569 & 679,509 & 459 & 167,401,080 & 10,503 & 679,442 \\ \hline 7 & 7,944 &640,974 & 7,944& 353,378& 336 &48,014,460 & 7,432 & 352,843 \\ \hline 9 & 8,787 & 918,280 & 8,787 & 500,554 & 439 & 131,637,120& 8,207 & 499,940 \\ \hline \end{tabular} \end{center} \caption{Number of constraints and variables for four representative measures of each digit with each formulation without using a priori knowledge.}\label{VCsizeDigitNoGrid} \end{table} \begin{table} \begin{center} \begin{tabular}{| c |c|c|c|c|}\hline Digit & LP (\ref{baryLP}) & LP (\ref{barymodLP}) & LP (\ref{LPw}) & LP (\ref{LPhybrid})\\ \hline 0 & 151.69 & 110.31 & * & 93.83\\ \hline 1 & 2.69 & 1.82 &323.15 & 1.46 \\ \hline 3 & 163.57 & 76.06 & * & 69.42 \\ \hline 7 & 53.50 & 24.54 &6488.2 & 22.44 \\ \hline 9 & 86.06 & 44.92 & * & 36.14 \\ \hline \end{tabular} \end{center} \caption{Solving times, in seconds, for four measures of each digit with each formulation without using a priori knowledge. (*) For three of the digits, the LP solver was unable to complete the setup on the laptop due to memory constraints. 
}\label{fig:SolvingDigitNoGrid}
\end{table}
As seen in Table \ref{VCsizeDigitNoGrid}, the number of variables (columns) in the LP is reduced significantly when moving from LP (\ref{baryLP}) to LP (\ref{barymodLP}), as expected from the earlier analysis. This is roughly a $44\%$ reduction in the number of variables. The reduction is lower than in the theoretical analysis because of the sparsity of the measures; not all measures are of size $p_{\max}$. In Table \ref{VCsizeDigitNoGrid} we also include the size of LP (\ref{LPw}), which has fewer constraints (rows), but significantly more variables, as expected. The reduction in LP size, moving from LP (\ref{barymodLP}) to LP (\ref{LPhybrid}), is relatively small. This is not surprising, because LP (\ref{barymodLP}) is already quite small. As the mass of the digits is densest in the middle of the grids, where there are also the highest numbers of combinations that produce the same support points, $w$ variables are only introduced for a few of the extreme values of the support grid, as anticipated in view of Figure \ref{fig:corner}. The LP solution times do improve significantly in each of the smaller formulations. The solving times for LP (\ref{barymodLP}) are often about half of the solving time for the original formulation. Although LP (\ref{LPhybrid}) is not significantly smaller in size, the improvement in solution times continues, though less dramatically.

\subsection{Digit Computations: Using A Priori Knowledge of Grid}\label{sec:prior}
We now turn to the versions of the LPs that make use of the a priori knowledge of the support structure. We recall that the primary goal in this approach is to avoid exponential setup costs by not operating on $S^*$. LP (\ref{LPw}) is not competitive with the other three LPs in this case, as it always produces an exponential number of variables.
While the a priori knowledge produced in Section \ref{sec:TC} can be used to implement a hybrid approach, we only report on LP (\ref{baryLP}) and LP (\ref{barymodLP}). Table \ref{VCsizeDigitGrid} lists the problem sizes; Table \ref{fig:SolvingDigitGrid} includes the solving times as before. In Table \ref{VCsizeDigitGrid}, we first note that, as expected, the LP sizes are larger than those given in Table \ref{VCsizeDigitNoGrid}. This is because this implementation accepts slightly larger LPs in exchange for significantly shorter setup times. For a fair comparison of the two approaches, we have to include the setup time. To this end, we show the total running time, including setup and solution of the LP, for LP (\ref{baryLP}) without a priori knowledge, and LP (\ref{baryLP}) and LP (\ref{barymodLP}) with a priori knowledge, in Table \ref{fig:RunningDigitGrid}. The reduced formulation, LP (\ref{barymodLP}), typically yields about a $13\%$ reduction in the number of variables, as seen in Table \ref{VCsizeDigitGrid}. Yet, we see a significant improvement in both solution times and total running times, as displayed in Tables \ref{fig:SolvingDigitGrid} and \ref{fig:RunningDigitGrid}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
Digit & \multicolumn{2}{c|}{LP (\ref{baryLP})} & \multicolumn{2}{c|}{LP (\ref{barymodLP})} \\ \hline
 & Rows & Columns & Rows & Columns \\ \hline
0 & 15,406 & 1,946,083 & 15,406 & 1,692,982 \\ \hline
1 & 3,371 & 158,600 & 3,371 & 126,409 \\ \hline
3 & 15,343 & 1,711,660 & 15,343 & 1,494,751 \\ \hline
7 & 13,268 & 1,089,521 & 13,268 & 942,719 \\ \hline
9 & 15,323 & 1,637,240 & 15,323 & 1,420,895 \\ \hline
\end{tabular}
\end{center}
\caption{Number of constraints and variables for four measures of each digit using a priori knowledge.}\label{VCsizeDigitGrid}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Digit & LP (\ref{baryLP}) & LP (\ref{barymodLP}) \\ \hline
0 & 414.82 & 216.46 \\ \hline
1 & 3.34 & 3.25 \\ \hline
3 & 272.33 & 178.08 \\ \hline
7 & 222.82 & 114.61 \\ \hline
9 & 355.24 & 159.90 \\ \hline
\end{tabular}
\end{center}
\caption{Solving times, in seconds, for four measures of each digit using a priori knowledge.}\label{fig:SolvingDigitGrid}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
Digit & LP (\ref{baryLP}) & LP (\ref{baryLP}) & LP (\ref{barymodLP}) \\
 & No a priori & With a priori & With a priori \\ \hline
0 & 763.41 & 425.34 & 225.57 \\ \hline
1 & 7.28 & 3.51 & 3.36 \\ \hline
3 & 608.19 & 278.93 & 183.74 \\ \hline
7 & 123.48 & 225.42 & 116.79 \\ \hline
9 & 313.26 & 360.81 & 164.66 \\ \hline
\end{tabular}
\end{center}
\caption{Total running times, in seconds, for four measures of each digit, including setup times.}\label{fig:RunningDigitGrid}
\end{table}
\subsection{Computation: Strength of Hybrid Formulation}\label{sec:besthybrid}
For our final computational experiment, we exhibit an example where the hybrid approach performs significantly better than the other three formulations. In this example, we have $5$ measures of equal support size $28$; that is, $n=5$ and $|P_i|=28$.
Each measure is supported on a $5 \times 5$ grid with the same coordinates in all five measures, and an additional three points which have different coordinates in each measure. Figure \ref{fig:hybridtoyimage} exhibits an example. Informally, these measures combine features of the previous examples: a part of the support is highly structured (the $5 \times 5$ grid), and the other support points lie in general position. In all measures, the three points are sufficiently far from the grid to avoid generating any duplicates in $S$ inside the grid boundaries. Masses for each measure were randomly generated and differ in distribution among the possible support points in each measure. LP solution times for this example are shown in Table \ref{ToySolve}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\foreach \x in {1,1.25,...,2}
  \foreach \y in {3,3.25,...,4}
    \fill [black] (\x,\y) circle (2pt);
\fill [black] (5.5,7.8) circle (2pt);
\fill [black] (7,7.2) circle (2pt);
\fill [black] (6.6,5.3) circle (2pt);
\end{tikzpicture}
\end{center}
\caption{A sample support set of a measure where the hybrid approach is preferable.}\label{fig:hybridtoyimage}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
LP (\ref{baryLP}) & LP (\ref{barymodLP}) & LP (\ref{LPw}) & LP (\ref{LPhybrid}) \\ \hline
43,143 & 1222.74 & 880.78 & 386.73 \\ \hline
\end{tabular}
\end{center}
\caption{LP solving times, in seconds, for $n=5$, where each measure has 28 support points including a $5 \times 5$ regular grid.}\label{ToySolve}
\end{table}
In this example, all the alternative formulations presented in this paper are preferable to the original formulation by a significant margin. However, LP (\ref{LPhybrid}) vastly outperforms all the others. This is because of the mixed structure of the support of the measures. Due to its ability to choose $y,z$ variables for the grid part of the support set and $w$ variables for the combinations involving the support points in general position, the hybrid approach finds a much smaller set of variables to represent the problem. In contrast, LP (\ref{barymodLP}) is restricted to $y,z$ variables, which is not an optimal approach for data in general position, and LP (\ref{LPw}) is restricted to $w$ variables, which is not an optimal approach for the grid data.

\section*{Acknowledgments}
We thank Natalia Villegas Franco for running and visualizing the computations on the Denver crime data in Section \ref{sec:compgeneral}; see Figure \ref{fig:murders}. Borgwardt gratefully acknowledges support through Grant 524210 {\em Polyhedral Theory in Data Analytics}, Collaboration Grant for Mathematicians, of the Simons Foundation.
\section{Introduction}
Accelerated expansion of the universe has always been one of the most challenging subjects of investigation in cosmology. For the early universe, it plays an important role in solving major problems of standard cosmology, such as the horizon and flatness problems. It also provides an effective mechanism for producing a nearly scale-invariant spectrum of quantum fluctuations of the inflaton, which can act as a seed for structure formation in our universe. This paradigm, dubbed ``Inflation'', plays a major role in the modern era of cosmology \cite{infl}. On the other hand, few questions in cosmology today are more daunting than what is the driving force behind the late time acceleration of the universe \cite{accl}. First directly observed in Supernova Type Ia measurements by two independent groups in 1998 \cite{snold}, today it has also been confirmed by observations from new Supernova Type Ia measurements\cite{snnew}, the Cosmic Microwave Background Radiation (CMBR)\cite{cmbr}, and galaxy clustering in large scale redshift surveys\cite{lss}. To discover the nature of the driving force behind this late time acceleration of the universe, termed ``dark energy'' in the literature, is one of the most important tasks in cosmology today. The first and simplest choice for dark energy is the vacuum energy or cosmological constant $\Lambda$. However, the possibility of $\Lambda$ being the dominant component of the total energy density of the universe has the problem that the energy scale involved is lower than the normal energy scale of most particle physics models by a factor of $\sim 10^{-123}$. An optimistic choice for modelling the missing energy of the universe is in the form of a scalar field with a canonical kinetic energy, slowly rolling down a sufficiently flat potential so that its slowly varying energy density mimics an effective cosmological constant. This form of missing energy density is popularly called ``Quintessence''\cite{quint}.
It is similar to the inflaton field, with the difference that it evolves at a much lower energy scale. Amongst these models, tracker quintessence models have the advantage of allowing the late time accelerating epoch to be reached from a wide range of initial conditions. In order to have a viable dark energy model, it is important that this scalar field remains subdominant during most of the thermal history of the universe. In this sense, it is necessary that this scalar field mimics the background energy density at early times. This property is called the ``scaling or attractor property''. Such properties are very important in cosmology, as they allow us to study the asymptotic behavior of a particular cosmology and check whether that behavior is stable or not. There have been numerous investigations involving scaling solutions in cosmology, in standard Einstein's gravity as well as in scalar tensor gravity \cite{scaling}. Recently, there have been suggestions that the present acceleration of the universe is not due to any new unknown component in the cosmic soup, but due to modifications of gravitational physics at large distance scales. In this regard, possible modifications of the Friedmann equation have been proposed. Most of these modifications have been inspired by higher dimensional brane-world models. A few suggestions are particularly interesting in this regard: the Dvali-Gabadadze-Porrati (DGP) brane induced gravity model\cite{dgp}, the Cardassian model proposed by Freese and Lewis\cite{card}, a model by Dvali and Turner\cite{turner}, one by Shtanov and Sahni\cite{shtanov}, and also the modified Chaplygin gas model by Barreiro and Sen\cite{mgcg}. Cosmological solutions with a canonical scalar field have been studied in these modified gravity models\cite{scalemod}.
Recently, an effective scalar field theory governed by a Lagrangian density with a non-canonical kinetic energy term ${\cal{L}} = -V(\phi)F(X)$, where $X = -(1/2)\partial^{\mu}\phi\partial_{\mu}\phi$, has attracted considerable attention in cosmology. Such a model can lead to a late-time acceleration and is called ``k-essence''\cite{kessen}. A scalar field with a non-canonical kinetic term has also been investigated for an early universe inflationary scenario, termed ``k-inflation''\cite{kinf}. One example of such a Lagrangian density is ${\cal L}_{tach}=-V(\phi)\sqrt{1- \partial^{\mu}\phi\partial_{\mu}\phi}$. As discussed by Padmanabhan and Roy Choudhury\cite{paddytirth}, this is a generalization of the Lagrangian of a relativistic particle. The Hamiltonian structure of tachyonic matter given by the above Lagrangian density is very similar to that of a special relativistic particle governed by ${\cal L} = -m\sqrt{1-\dot{q}^2}$, where $m$ and $q$ are the mass and generalized coordinate of the particle, respectively. This tachyon field can naturally arise in open string theory\cite{asoke} and can provide a rich gamut of possibilities in the cosmological context\cite{harry}. Padmanabhan and Choudhury \cite{paddytirth} also explored the possibility of this kind of field acting both as a clustered dark matter and a smooth dark energy with a scale dependent equation of state (see \cite{mota} for a different approach to such a possibility). In this paper, we systematically study scaling solutions involving a tachyon field, described by the Lagrangian density mentioned above, in modified versions of the Friedmann equation. In Section 2, we describe the equations of motion in modified gravity in general and define the variables that allow us to study the scaling solutions. The conditions for scaling behavior are introduced in Section 3, where we also describe how to calculate the potentials for the tachyon field that produce this scaling behavior.
In Section 4, we apply our method to some particular examples of modified gravity, e.g., the Cardassian model, the Shtanov-Sahni model, the Randall-Sundrum model and the DGP model. We conclude in Section 5.
\vspace{8mm}
\section{Equation Of Motion In Modified Gravity}
We consider a spatially flat Friedmann-Robertson-Walker (FRW) model. In this case the expansion rate of the universe $H$ (also called the ``Hubble parameter'') satisfies
\begin{equation} H^{2} = ({8\pi G/3})\rho L^{2}(\rho), \end{equation}
\vspace{2mm}
\noindent where $H \equiv {\dot{a}\over{a}}$, $a$ is the scale factor, and $\rho$ is the total energy density of the universe. A ``dot'' denotes a derivative with respect to cosmic time and $G$ is the gravitational constant. Whatever modification to standard Einstein's gravity we assume is parametrized by the correction term $L(\rho)$, which is assumed to be positive definite without any loss of generality. For $L(\rho)=1$, one recovers standard Einstein's gravity. Our aim is to investigate models where the universe is sourced by a matter component together with a dark energy part. In this paper, we assume that this dark energy field has a non-canonical kinetic energy of Dirac-Born-Infeld form, with Lagrangian density
\begin{equation} {\cal L}_{tach}=-V(\phi)\sqrt{1- \partial^{\mu}\phi\partial_{\mu}\phi}, \end{equation}
\vspace{2mm}
\noindent where $V(\phi)$ is the potential for the field $\phi$. To start with, we assume the matter part is described by a barotropic fluid with an equation of state $P_{\gamma} = (\gamma-1)\rho_{\gamma}$, $\gamma$ being a constant. Then the total energy density of the universe $\rho$, appearing in equation (1), is given by $\rho = \rho_{\gamma} + \rho_{\phi}$.
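As a consistency check on what follows: the field equation quoted in equation (4) below is equivalent to the covariant conservation law $\dot{\rho}_{\phi} = -3H(\rho_{\phi}+P_{\phi})$, with $\rho_{\phi}$ and $P_{\phi}$ as in equation (3). This can be verified symbolically; the Python/sympy sketch below is purely illustrative, with our own stand-in symbols $v = \dot{\phi}$, $W = V(\phi)$ and $W_p = dV/d\phi$:

```python
import sympy as sp

# Stand-in symbols: v = phi-dot, W = V(phi), Wp = dV/dphi (our notation).
v, H, W, Wp = sp.symbols('v H W Wp', positive=True)

# phi-double-dot as given by the field equation (4):
a = -3 * H * v * (1 - v**2) - (Wp / W) * (1 - v**2)

# d(rho_phi)/dt by the chain rule, with rho_phi = W / sqrt(1 - v^2):
rho_dot = Wp * v / sp.sqrt(1 - v**2) + W * v * a / (1 - v**2)**sp.Rational(3, 2)

# Conservation law: rho_dot + 3H(rho + P) = 0, with rho + P = W v^2 / sqrt(1 - v^2).
residual = rho_dot + 3 * H * W * v**2 / sp.sqrt(1 - v**2)
assert sp.simplify(residual) == 0
```

Substituting $\ddot{\phi}$ from the field equation makes the conservation residual vanish identically, confirming that equations (3) and (4) below are mutually consistent.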
One can calculate the energy density and pressure for the tachyon field $\phi$ from the Lagrangian density given in equation (2) as
\begin{eqnarray} \rho_{\phi} &=& {V(\phi)\over{\sqrt{1-\dot{\phi}^2}}}\nonumber\\ P_{\phi} &=& -V(\phi)\sqrt{1-\dot{\phi}^2}. \end{eqnarray}
\vspace{2mm}
\noindent We also assume that, as in standard gravity, the energy momentum tensors of matter and dark energy are covariantly conserved separately, which implies that
\begin{eqnarray} \dot{\rho}_{\gamma} &=& - 3H\gamma\rho_{\gamma}\nonumber\\ \ddot{\phi}&+& 3H\dot{\phi}(1-\dot{\phi}^2)+{V^{'}\over{V}}(1-\dot{\phi}^{2}) = 0. \end{eqnarray}
\vspace{2mm}
\noindent Equations (1) and (4) close the system of equations. We now define three new variables:
\begin{eqnarray} X &=& \dot{\phi}\nonumber\\ Y &=& \sqrt{V\over{\rho}}\\ N &=& Log[a]\nonumber. \end{eqnarray}
\vspace{2mm}
\noindent When expressed in terms of these new variables, the system of equations (1) and (4) becomes
\begin{eqnarray} X^{'} &=& -(1-X^2)[3X-\sqrt{3}Y\lambda]\nonumber\\ Y^{'} &=& {Y\over{2}}\left[-\sqrt{3}\lambda XY - {3Y^{2}(\gamma-X^{2})\over\sqrt{1-X^{2}}} + 3\gamma\right]\\ \lambda^{'} &=& - \sqrt{3}\lambda^{2}XY(\Gamma - 3/2)\nonumber \\ &+&3\lambda\left[\gamma - {Y^{2}(\gamma-X^2)\over{\sqrt{1-X^{2}}}}\right]\rho {d Log[L(\rho)]\over{d\rho}}\nonumber, \end{eqnarray}
\vspace{2mm}
\noindent where a ``prime'' denotes differentiation with respect to $N$. The parameters $\lambda$ and $\Gamma$ are defined as
\begin{eqnarray} \lambda &=& -{(dV/d\phi)\over{\sqrt{8\pi G}L(\rho)V^{3/2}}}\nonumber\\ \Gamma &=& {Vd^{2}V/d\phi^2\over{(dV/d\phi)^2}}. \end{eqnarray}
\vspace{2mm}
\noindent From the definition of the total energy density, one can get the constraint equation
\begin{equation} {Y^{2}\over{\sqrt{1-X^{2}}}} + {\rho_{\gamma}\over{\rho}} = 1.
\end{equation} The equation of state parameter for the tachyon field and its density parameter are given by \begin{eqnarray} \gamma_{\phi} &=& X^{2}\nonumber\\ \Omega_{\phi} &=& {Y^{2}\over{\sqrt{1-X^{2}}}}. \end{eqnarray} \vspace{2mm} \noindent Then the allowed phase space for $X$ and $Y$ is given by $0\leq X^{2}+Y^{4}\leq 1$ from the requirement $0\leq \Omega_{\phi} \leq 1$. From the expression for the equation of state $\gamma_{\phi}$, one sees that $\gamma_{\phi} \geq 0$ ($P_{\phi} = (\gamma_{\phi} -1)\rho_{\phi}$). For the case $\lambda = constant$, in standard gravity, the potential turns out to be $V(\phi) \propto \phi^{-2}$. But due to the presence of the modified term $L(\rho) \neq 1$, this is not so in our case. However, it is interesting to see that, for $\lambda =constant$, the equations in (6) have a form identical to that of the plane autonomous system of standard relativistic cosmology involving a tachyon field in terms of the parameters \begin{eqnarray} X_{s} &=& \dot{\phi}\nonumber\\ Y_{s} &=& {\sqrt{8\pi G V(\phi)}\over{\sqrt{3}H}}\\ \lambda_{s} &=& -{(dV/d\phi)\over{\sqrt{8\pi G}V^{3/2}}}\nonumber. \end{eqnarray} \vspace{2mm} \noindent This immediately ensures that our system of equations (6) with constant $\lambda$ admits the same set of critical points as standard cosmology, when those solutions are expressed in terms of $\{X,Y,\lambda\}$. Also, the stability of these critical points can be directly obtained from the stability analysis of the standard cosmology scenario. There are altogether five fixed points of our system of equations (6) for which $\{X,Y,\lambda\} = \{X_{c}, Y_{c}, \lambda_{c}\}$ are constants. Three of these, $\{X_{c}=0,Y_{c}=0\}$, $\{X_{c}=1,Y_{c}=0\}$ and $\{X_{c}=-1,Y_{c}=0\}$, are unstable, as studied by Copeland {\it et al.}\cite{copeland1}. The fourth fixed point is given by \begin{eqnarray} X_{c} &=& \sqrt{\gamma},\nonumber\\ Y_{c} &=& \sqrt{3 \gamma}/\lambda_{c}.
\end{eqnarray} \vspace{2mm} \noindent It is stable for $0\leq \gamma \leq \alpha = {\lambda_{c}^{2}\over{18}}\left(\sqrt{\lambda_{c}^{4}+36}-\lambda_{c}^{2}\right)$. For this point one can easily show that $\gamma_{\phi} = \gamma$, hence it represents a scaling solution. But in this case, as $\Omega_{\phi} = 3\gamma/(\lambda_{c}^{2}\sqrt{1-\gamma})$, $\gamma$ has to be less than 1. This prevents this case from being a viable model for dark energy, as the background fluid can never be matter-like. The fifth fixed point is given by \begin{eqnarray} X_{c} &=& {\lambda_{c}\over{\sqrt{3}}} \left({{\sqrt{\lambda_{c}^{4}+36}-\lambda_{c}^{2}}\over{6}}\right)^{1/2}\nonumber\\ Y_{c} &=& \left({{\sqrt{\lambda_{c}^{4}+36}-\lambda_{c}^{2}}\over{6}}\right)^{1/2} \end{eqnarray} \vspace{2mm} \noindent This fixed point is a stable node for $\gamma \geq \alpha\equiv {\lambda_{c}^{2}\over{18}}\left(\sqrt{\lambda_{c}^{4}+36} -\lambda_{c}^{2}\right)$. \vspace{5mm} \section{Scaling Solutions} As we mentioned in the earlier section, for standard cosmology $\lambda = const$ corresponds to a potential of inverse power-law form. But with the modification term $L(\rho)$ in the expression for $\lambda$ in equation (7), the form of the potential will change. In this section, we calculate the form of the scaling potential in the modified gravity scenario. For this purpose, we essentially follow the method earlier prescribed by Copeland {\it et al.}\cite{copeland2} for a standard canonical scalar field. One can show that for both the critical points (11) and (12), $X^{'} = Y^{'} = \lambda^{'} = 0$, if the condition \begin{equation} \Gamma = {3\over{2}} + \rho{d\over{d\rho}}Log[L(\rho)] \end{equation} \vspace{2mm} \noindent is satisfied. Using the relation $\rho = V/Y_{c}^2$, the above equation can be written as \begin{equation} {d\over{d\phi}}(Log[{d\rho\over{d\phi}}]) - {3\over{2}}{d\over{d\phi}}\left[Log(\rho)\right] -{d\over{d\phi}}\left(Log[L(\rho)]\right) = 0.
\end{equation} This is very similar to the corresponding equation (eqn (19) in ref \cite{copeland2}) for a canonical scalar field, except for the factor $3/2$ in the second term on the left-hand side. The above equation can be integrated to find $\rho(\phi)$, which together with the expression $V = {Y_{c}^2}\rho$ will give the corresponding potential for the scaling solution for a given choice of modification $L(\rho)$. By integrating the above equation and using equation (7) with constant $\lambda$, we get \begin{equation} \int {d\rho\over{L(\rho)\rho^{3/2}}} = - \kappa \lambda_{c} Y_{c} \phi, \end{equation} \vspace{2mm} \noindent where $\kappa^{2} = 8\pi G$. While deriving the above expression, we have set one integration constant to zero without loss of generality. This is equivalent to giving a linear shift to the value of the scalar field. It is also interesting to calculate the behavior of the scale factor for a given scaling solution, assuming the scalar field is a monotonically varying function of time $(\dot{\phi} \neq 0)$. In general one can express the scalar field equation (4) in the following manner: \begin{equation} \dot{\rho_{\phi}} = {V(\phi)\over{\sqrt{1-\dot{\phi}^2}}} (-3H\dot{\phi}^2) = -3H\dot{\phi}^{2} \rho_{\phi} \end{equation} \vspace{2mm} \noindent By using the definition of the Hubble parameter $H = {\dot{a}\over{a}}$, one can write \begin{equation} 3H^{2} = -{1\over{a}}{da\over{d\phi}}{d\rho_{\phi}\over{d\phi}}{1\over{\rho_{\phi}}} \end{equation} \vspace{2mm} \noindent Substituting the above equation in the Hubble equation (1), one can write \begin{equation} {da\over{d\phi}}{d\rho\over{d\phi}} = -\kappa^{2} a \rho^{2} L^{2}(\phi), \end{equation} \vspace{2mm} \noindent where $\rho$ is the total energy density of the universe and can be written as $\rho = {\rho_{\phi}\over{\Omega_{\phi c}}}$.
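As a consistency check, in the standard-gravity limit $L(\rho)=1$ equation (15) can be integrated immediately, \[ \int {d\rho\over{\rho^{3/2}}} = -{2\over{\sqrt{\rho}}} = -\kappa\lambda_{c}Y_{c}\phi \;\;\Rightarrow\;\; \rho = {4\over{\kappa^{2}\lambda_{c}^{2}Y_{c}^{2}\phi^{2}}}, \] so that $V = Y_{c}^{2}\rho \propto \phi^{-2}$, recovering the familiar inverse square potential of tachyon scaling solutions in standard cosmology.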
Defining a new variable $b(\phi)$ as \begin{equation} b(\phi) = exp\left[\int^{\rho} d\rho {1\over{\rho^{2}L^{2}(\rho)}}\right], \end{equation} \vspace{2mm} \noindent one can write equation (18) as \begin{equation} {da\over{d\phi}}{db\over{d\phi}} = - \kappa^{2} ab, \end{equation} \vspace{2mm} \noindent which can subsequently be integrated to give the scale factor $a$: \begin{equation} a = Exp\left[ \int -\kappa^{2} ({db\over{d\phi}})^{-1} b d\phi \right]. \end{equation} \vspace{2mm} \noindent Using equation (16), one can also calculate the time dependence of the scale factor through the following equation: \begin{equation} t = -\sqrt{3}\kappa \int d\phi \rho^{3/2}(\phi) L(\phi) ({d\rho\over{d\phi}})^{-1} \end{equation} \vspace{5mm} \noindent \section{Different Classes of Modified Gravity} \subsection{Randall-Sundrum model} For the type-II Randall-Sundrum brane-world model, where a 3-brane with positive tension is embedded in a five-dimensional Anti de-Sitter spacetime, the modification to standard gravity is given by \begin{equation} L(\rho) = \sqrt{1 + {\rho\over{2\sigma}}}, \end{equation} \vspace{2mm} \noindent where $\sigma$ is the tension of the 3-brane. One can now use equations (5) and (15) to determine the scaling potential, which is given by: \begin{equation} V(\phi) = {4\sigma Y_{c}^{2}\over{\sigma\lambda_{c}^{2}Y_{c}^{2}\kappa^{2}\phi^{2}-2}} \end{equation} \vspace{2mm} \noindent One can also obtain the time dependence of the scale factor $a(t)$ using equations (21) and (22): \begin{equation} a(t) = \left({\kappa^{2}\lambda_{c}^{4}Y_{c}^{4}\sigma\over{6}}t^{2} - 1\right)^{1\over{\lambda_{c}^{2}Y_{c}^{2}}}. \end{equation} As explained in section II, there are two stable solutions. One of them cannot produce a viable late-time dark energy model, as the background fluid can never behave like matter. For the other case, one can have a late-time scalar-field-dominated phase ($\Omega_{\phi}=1$) with accelerated expansion.
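At late times, when the first term inside the bracket of the expression for $a(t)$ above dominates, the expansion approaches the power law \[ a(t) \propto t^{2\over{\lambda_{c}^{2}Y_{c}^{2}}}. \] At the scalar-field-dominated fixed point (12) one has $X_{c} = \lambda_{c}Y_{c}/\sqrt{3}$, i.e.\ $\lambda_{c}^{2}Y_{c}^{2} = 3X_{c}^{2} = 3\gamma_{\phi}$, so that $a \propto t^{2/(3\gamma_{\phi})}$ and late-time acceleration requires $\gamma_{\phi} < 2/3$.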
We have plotted the time evolution for this case in figure 1, taking ${\sigma\over{m_{p}^{4}}}= 10^{-20}$ and showing the evolution for different values of $\lambda$. We should mention that the evolution of the universe is the same as that obtained by Copeland {\it et al.} in \cite{copeland2} for a standard canonical scalar field. But due to the noncanonical nature of the kinetic term, the required potential is different. In this case acceleration occurs for $\gamma_{\phi} <2/3$ at late times, when the $\rho$ term in the Friedmann equation dominates, whereas for $\gamma_{\phi} < 1/3$ acceleration occurs at early times, when the $\rho^2$ term dominates. This translates into the constraints $\lambda_{c} < 1.86$ and $\lambda_{c} < 1.01$ respectively. \begin{figure}[t] \centerline{\epsfxsize=3.7truein\epsfbox{Fig1.ps}} \caption{The evolution of the universe for Randall-Sundrum II brane world models. We have taken ${\sigma\over{m_{p}^4}} = 10^{-20}$ for this figure. $\lambda_{c} = 1, 1.5,2$ from top to bottom.} \end{figure} \vspace{5mm} \subsection{Shtanov-Sahni Braneworld Cosmology} There is another interesting class of models, proposed by Shtanov and Sahni, where a 3-brane with negative tension $\sigma$ is embedded in a five-dimensional conformally flat space in which the fifth dimension has time-like signature. In this case the modification to standard gravity is described by: \begin{equation} L(\rho) = \sqrt{1- {\rho\over{2|\sigma|}}} \end{equation} \vspace{2mm} \noindent One can calculate the potential, which is given by \begin{equation} V(\phi) = {Y_{c}^{2}\over{{1\over{2|\sigma|}} + {\kappa^{2}\lambda_{c}^{2}Y_{c}^{2}\over{4}}\phi^{2}}}.
\end{equation} \vspace{2mm} \noindent One can also calculate the corresponding scale factor $a(t)$: \begin{equation} a(t) = \left({\kappa^{2}\lambda_{c}^{4}Y_{c}^{4}|\sigma|t^{2}\over{6}} + 1\right)^{1\over{\lambda_{c}^{2}Y_{c}^{2}}} \end{equation} \vspace{5mm} \subsection{Cardassian model} All the above modified gravity models modify gravity at small distance scales, i.e.\ at high energy scales (early times). But recent SnIa, CMB and LSS measurements have confirmed the late-time accelerated expansion of the universe, and a number of phenomenological models have been proposed to explain this late-time acceleration through modifications of standard gravity at large distance scales. In one such model, known as the ``Cardassian model'' and originally proposed by Freese and Lewis, the modification is given by \begin{equation} L(\rho) = \sqrt{1+A\rho^{n}} \end{equation} \vspace{2mm} \noindent The present acceleration of the universe can be obtained in such a model with $n < -1/3$ when the universe contains only matter. Although the presence of any scalar field was not assumed in this original model, it is interesting to study the scaling behavior of the background cosmology in the presence of both matter and a scalar field, a noncanonical one for our present purpose. To obtain the result analytically in closed form, we have assumed $n= -1/2$ in the following calculations. Taking different values of $n$ is always possible; the only problem is that the solution may not then be available in closed form.
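As an intermediate step (suppressing the integration constant, as in equation (15)), for $n=-1/2$ the integral in equation (15) can be evaluated with the substitution $v = \rho^{-1/2}$: \[ \int {d\rho\over{\rho^{3/2}\sqrt{1+A\rho^{-1/2}}}} = -2\int {dv\over{\sqrt{1+Av}}} = -{4\over{A}}\sqrt{1+A\rho^{-1/2}} = -\kappa\lambda_{c}Y_{c}\phi. \] Solving for $\rho$ and using $V = Y_{c}^{2}\rho$ then leads directly to the potential quoted below.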
With this choice of $n$, one can directly obtain the form of the potential $V(\phi)$ and the corresponding scale factor $a(t)$ as follows: \begin{equation} V(\phi) = {A^{2}Y_{c}^{2}\over{\left[{\lambda_{c}^{2}\kappa^{2}Y_{c}^{2}A^{2}\phi^{2}\over{16}}-1\right]^{2}}} \end{equation} \vspace{5mm} \begin{equation} a(t) = \left[A\left({Y_{c}^{2}\lambda_{c}^{2}\kappa \over{4\sqrt{3}}}\right)^{2}t^{2}-1\right]^{2\over{Y_{c}^{2}\lambda_{c}^{2}}}. \end{equation} We have shown the evolution of the universe for this case in figure 2, taking ${A\over{m_{p}^{4}}} = 10^{-10}$. Acceleration of the universe takes place at early times for $\gamma < {2/3}$, which constrains the parameter $\lambda_{c}$ as $\lambda_{c} < 1.86$. At late times, the universe always accelerates. \begin{figure}[t] \centerline{\epsfxsize=3.7truein\epsfbox{Fig2.ps}} \caption{The evolution of the universe for the Cardassian model. We have taken ${A\over{m_{p}^4}} = 10^{-10}$ for this figure. $\lambda_{c} = 1, 1.5,2$ from top to bottom.} \end{figure} \subsection{Dvali-Gabadadze-Porrati Brane-World Gravity Model} The Dvali-Gabadadze-Porrati (DGP) model is a brane-world model (a 3-brane embedded in a five-dimensional Minkowski bulk) with the incorporation of the scalar curvature term for the 3-brane in the total action. The bulk is empty and all kinds of matter fields are restricted to the 3-brane. The modified FRW equation for the DGP model is given by \begin{equation} H^{2} \pm {H\over{r_{0}}} = {8\pi G\over{3}}\rho \end{equation} \vspace{2mm} \noindent where $r_{0}^2 = {m_{4}^{2}\over{2 m_{5}^{3}}}$, with $m_{4}$ and $m_{5}$ being the 4-d and 5-d Planck mass respectively. The parameter $r_{0}$ determines the scale at which the switchover from standard gravity to the modified one takes place. The $(+)$ and $(-)$ signs correspond to the two different ways of embedding the 3-brane in the 5d bulk.
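To make the next step explicit, the modified Friedmann equation above is quadratic in $H$; choosing the root with $H > 0$ gives \[ H = {1\over{2}}\left[\mp {1\over{r_{0}}} + \sqrt{{1\over{r_{0}^{2}}} + {32\pi G \rho\over{3}}}\right]. \] Factoring $1/r_{0}$ out of the square root, and writing $G = m_{4}^{-2}$ (the convention implied by the definition of $\alpha_{1}$ used below), produces the compact form given in the following equation.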
The modified equation now becomes \begin{equation} H = {1\over{2r_{0}}} \left[\mp 1 +\sqrt{1+\alpha_{1} \rho}\right] \end{equation} \vspace{2mm} \noindent where $\alpha_{1} = {32 \pi r_{0}^{2}\over{3m_{4}^{2}}}$. A direct comparison of the above equation with equation (1) results in the correction term in the Friedmann equation \begin{equation} L = {1\over{\sqrt{\alpha_{1}\rho}}}\left[\mp 1 +\sqrt{1+\alpha_{1}\rho}\right]. \end{equation} \vspace{2mm} \noindent Putting this in equation (15), one can integrate to get \begin{equation} {\sqrt{\alpha_{1}}\over{\mp 1+\sqrt{1+\alpha_{1}\rho}}} + \sqrt{\alpha_{1}}Sinh^{-1}\left(1\over{\sqrt{\alpha_{1}\rho}}\right) = \kappa\lambda_{c}Y_{c}\phi \end{equation} \vspace{2mm} \noindent This equation, together with equation (5), can be used to find the scaling potential $V(\phi)$. One can also calculate the corresponding time dependence. This is given by \begin{equation} {3X_{c}^{2}\over{2 r_{0}}} t = Coth^{-1}\sqrt{1+\alpha_{1}\rho} + {1\over{\sqrt{1+\alpha_{1}\rho}\mp 1}} \end{equation} \vspace{5mm} \section{Conclusion} The issue of the late-time acceleration of the universe is one of the most serious challenges in cosmology today. While including an extra dark energy component with repulsive gravity in the energy budget of the universe is the most studied approach, modifying Einstein's gravity at large distance scales has also been taken seriously in recent times. Although any modification to Einstein's gravity has its own problems, this idea has been rigorously pursued by various researchers. One of the main motivations for such an idea is that modifications of Einstein's gravity can arise at different energy scales very naturally through compactifying higher-dimensional theories. On the other hand, the concept of scaling solutions in cosmology has also attracted much attention in recent times, as such solutions are necessary for solving the cosmic coincidence problem in dark energy models.
In this paper, we have studied in a systematic way the scaling solutions in modified gravity models when the universe contains a tachyon-type scalar field in addition to the standard matter field. This is an extension of the earlier work done by Copeland et al.\cite{copeland2}, where a scalar field with a canonical kinetic term was considered. We first describe the general equations and the method of calculating the scaling potential. Later on, we take specific modified gravity models and apply our method. We consider four choices of modification, namely the Randall-Sundrum II model, the Shtanov-Sahni model, the Cardassian model and the DGP model. Our method can also be used with any other modified gravity model to calculate the scaling potential for a tachyon-type scalar field. We should mention that Tsujikawa and Sami\cite{ss} have earlier considered scaling solutions in modified gravity with a tachyon field. But for the modification, they considered the special case $H^{2} \propto \rho^{n}$, which does not include modifications like the DGP model. Das et al.\cite{rupam} have also considered tracking solutions in modified gravity, both with quintessence and with K-essence type fields. Their approach is different from ours, and they have not considered fields like the tachyon, which has some specific features. In future work, one can take the general K-essence action for the non-canonical scalar field and calculate the scaling potential. This will be our future goal. \section{Acknowledgement} The authors acknowledge the financial support provided by the University Grants Commission, Govt.\ of India, through the major research project grant (Grant No: 33-28/2007(SR)).
\section{Introduction} Although the minimal supersymmetric Standard Model (MSSM) has dominated the phenomenological studies of supersymmetric signals \cite{HabKan}, it has long been known that the symmetries of the Standard Model allow additional dimension-four couplings which may lead to interesting baryon- and lepton-number-violating processes~\cite{Rpar}. These couplings are expected to be present in the low energy Lagrangian, unless forbidden by a symmetry such as $R$ parity~\cite{fayet}. The complete set of such terms in the superpotential is: \[ \lambda L_{i}L_{j}{\bar{E}}_{k}+\lambda ^{\prime }L_{i}Q_{j}{\bar{D}_{k}}+\lambda ^{\prime \prime }{\bar{U}_{i}}{\bar{D}_{j}}{\bar{D}_{k}} \] where the $L(Q)$ are the left-handed lepton (quark) superfields, and the ${\bar{E}}$ (${\bar{D}},{\bar{U}}$) are the corresponding right-handed fields. The symmetries of the model imply that there are 45 operators in total. However, there are many experimental constraints on these operators and their combinations, of which the most stringent comes from proton stability and excludes the simultaneous presence of certain products of $LQ\bar{D}$ and $\bar{U}\bar{D}\bar{D}$ couplings~\cite{SMVIS}. In addition, experimental constraints from the non-observation of modifications to Standard Model processes, or of possible exotic processes, give bounds for most of the operators~\cite{constraints} and some combinations involving pairs of fermion generations. On the other hand, possible strong limits on $R$-parity-violating interactions from cosmological arguments~\cite{CDEO} can be avoided in various schemes \cite{DRR}, including the case of electroweak baryogenesis~\cite{Barel}. The large number of $R$-violating couplings complicates the systematic discussion of the phenomenological implications of these constraints.
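For completeness, the counting of the 45 operators proceeds as follows: the gauge contractions make $\lambda_{ijk}$ antisymmetric in $i \leftrightarrow j$ and $\lambda''_{ijk}$ antisymmetric in $j \leftrightarrow k$, while $\lambda'_{ijk}$ is unconstrained, so that with three generations one has \[ 9\;(LL\bar{E}) + 27\;(LQ\bar{D}) + 9\;(\bar{U}\bar{D}\bar{D}) = 45 \] independent couplings.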
To date, most phenomenological analyses have assumed the dominance of a single operator, arguing that the Yukawa couplings of the Standard Model display just such a property. In flavour-symmetry models, the dominant operator is naturally specified in the quark and lepton {\it current} basis. It is plausible to assume that mass mixing will induce non-zero coefficients for operators related to the dominant one. In Section 2 of this paper, we pursue this argument and compile the corresponding implications of some severe upper limits on particular $R$-violating interactions. There have been many attempts to understand quark and lepton masses and the mixing angles between mass and current eigenstates using models for family symmetries~\cite{famsym}. Some of these reproduce successfully the qualitative features of fermion masses and mixings, and so provide plausible frameworks for analyzing the possible hierarchy of $R$-violating interactions \cite{MODELS}. In this paper we consider models based on a single $U(1)$ family symmetry, with fermion charges constrained by the observed hierarchy of fermion masses and mixing angles \cite{IR}. Such models are discussed in Section 3, where problems arising from symmetric mass matrices and from constraints on products of operators are emphasized. As a specific application of this analysis, we look in Section 4 for models that might accommodate the proposed $R$-violating interpretations \cite{Rviol} of the possible HERA large-Q$^{2}$ anomaly \cite{H1,ZEUS}. Of the $45$ operators mentioned earlier, $9$ could in principle lead to resonant squark production at HERA. Of these, only the $\lambda'_{121}, \lambda'_{131}$ and $\lambda'_{132}$ cases survived an initial confrontation with other experimental constraints~\cite{Rviol}. The suggestion that the apparent HERA excess may be due to single sparticle production via some $R$-violating couplings may not be gaining support~\cite{Jer}.
Nevertheless, our analysis gives an indication of which of the proposed mechanisms may be compatible with $U(1)$ family-symmetry models. Within the framework of the most symmetric schemes describing fermion masses, we find that the bounds on $R$-violating couplings are so strong that such schemes do not lead to a significant excess of HERA events over the Standard Model prediction. However, in more general schemes we find that the $\lambda'_{131}$ interpretation is easy to accommodate, whereas the $\lambda'_{121}$ interpretation has difficulties with squark mass universality. We indicate how to construct a model consistent with the $\lambda'_{132}$ interpretation, though we do not present a specific example. We also show how the structure of the quark and lepton mass matrices would be strongly constrained by the confirmation of such an $R$-violating signal. \section{$R$ Violation and Family Symmetries} In order to obtain a realistic form for the quark and lepton masses and mixing angles, it is necessary to have non-diagonal forms for the mass matrices in the current basis. Diagonalising the mass matrix then implies that the mass eigenstates are mixtures of the current eigenstates. Attempts to make sense of the pattern of fermion masses and mixing angles often start with a family symmetry in the current basis which, when exact, allows only the third generation of quarks and leptons to acquire mass. Spontaneous breaking of this symmetry then allows other entries of the mass matrix to be non-zero. If the breaking is weak, these entries will be small, offering an explanation for the observed hierarchy of fermion masses and mixing angles. If $R$ parity is violated, such a symmetry would have important implications for $R$-violating operators, since couplings with different family structures would also appear with different powers of the family symmetry-breaking parameter. This is consistent with the common assumption that a single $R$-violating operator dominates.
However, this assumption would apply in the current quark and lepton basis, and in the mass-eigenstate basis there would be several operators corresponding to the original dominant one in the current basis. Any given family-symmetry model would make characteristic predictions for the pattern of these related operators. Since there are stringent bounds on some of the $R$-violating operators, particularly on those involving the first family and on some combinations that mix families, an analysis of such sub-leading operators in the mass-eigenstate basis may provide the most stringent bounds on the operators related by mass mixing. In addition, there could be further contributions due to operators that are sub-dominant in the current basis, with strengths given by powers of the family symmetry-breaking parameter that are calculable in any given model. The relation between the forms of the mass matrix in the current and the mass-eigenstate basis is given by \begin{eqnarray} M_{u}^{\prime } &=&V_{u}^{L}.M_{u}^{Diag}.(V_{u}^{R})^{\dagger } \nonumber \\ M_{d}^{\prime } &=&V_{d}^{L}.M_{d}^{Diag}.(V_{d}^{R})^{\dagger } \nonumber \\ M_{\ell }^{\prime } &=&V_{\ell }^{L}.M_{\ell }^{Diag}.(V_{\ell }^{R})^{\dagger } \end{eqnarray} where $V_{u,d,\ell}^{L,R}$ are the unitary matrices relating the left- and right-handed $u,$ $d$ and $ \ell $ current eigenstates to their mass eigenstates. We use the notation ${\cal L}_{mass} = \bar{\Psi}'_L M' \Psi'_R$ for all mass terms, so that the $V_L$ are given by diagonalising $M' M^{'\dagger}$, whilst the $V_R$ are obtained by diagonalising $M^{'\dagger} M'$. Only information on the entries of the Cabibbo-Kobayashi-Maskawa (CKM) product matrix \begin{equation} V^{CKM}=V_{u}^{L\dagger }V_{d}^{L} \end{equation} is provided by experiments to date. In general, one can construct models where the quark mixing is either in the up sector, or in the down sector, or both.
In the class of models studied in this paper, in which the mass matrices have small off-diagonal entries generated by spontaneous breaking of a family symmetry, one may obtain useful connections between the mixing matrices and the elements of the mass matrices in the current basis by perturbative expressions for the off-diagonal elements, which are given in the Appendix. From this general analysis, it may be seen that in the specific case of the CKM mixing matrix the leading-order contribution comes from the $d$-quark mass-matrix elements that lie above the diagonal in our representation. As a result, we have little experimental input to guide us in constructing models for the elements below the diagonal. However, it has been noted for some time that a phenomenologically successful relationship results if one assumes a ``texture zero'' in the (1,1) position {\it and} symmetry between the (1,2) and (2,1) matrix elements~\cite{RRR}. In this case one finds the relation \[ \mid V_{us}\mid =(\frac{m_{d}}{m_{s}}+\frac{m_{u}}{m_{c}}+2\sqrt{\frac{m_{d}m_{u}}{m_{s}m_{c}}}\cos {\phi })^{\frac{1}{2}} \] where $\phi $ is the usual CP-violating phase in the CKM matrix. The fact that this relation works well is the only phenomenological indication we have for a symmetric structure of the mass matrices, and it may just be accidental. Nevertheless, we think it a useful starting point for our analysis, so we consider first a simple model capable of yielding this form and accommodating the remaining fermion masses and mixing angles \cite{IR}.
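Numerically, with the illustrative values $m_{d}/m_{s}\simeq 1/20$ and $m_{u}/m_{c}\simeq 1/300$ (quoted here only for orientation), the first term dominates and the right-hand side evaluates to \[ \left({1\over{20}}+{1\over{300}}+2\sqrt{{1\over{6000}}}\cos\phi\right)^{1\over{2}} \approx 0.23 \;\;\; {\rm for}\;\cos\phi \approx 0, \] in good agreement with the measured Cabibbo mixing $\simeq 0.22$.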
\begin{table}[h] \centering \begin{tabular}{|c|ccccccc|} \hline & $Q_i$ & $\bar{U}_i$ & $\bar{D}_i$ & $L_i$ & $\bar{E}_i$ & $H_2$ & $H_1$ \\ \hline $U(1)$ & $a _i$ & $a _i$ & $a _i$ & $b_i$ & $b_i$ & $-2a _3$ & $w a _3$ \\ \hline \end{tabular} \caption{{\it Assignments of $U(1)$ charges.}} \end{table} The model consists of a single $U(1)$ family symmetry with the same charges for the left- and right-handed states, as shown in Table 1, where, e.g., the choice $a_{i}=(-4,1,0)$ gives an acceptable pattern for the mass matrices. Suppressing unknown numerical factors and phases, which are all expected to be of order unity, with these charge assignments the up-quark mass matrix takes the form \[ M^{up}=\left( \begin{array}{ccc} \epsilon ^{8} & \epsilon ^{3} & \epsilon ^{4} \\ \epsilon ^{3} & \epsilon ^{2} & \epsilon \\ \epsilon ^{4} & \epsilon & 1 \end{array} \right) \label{mm} \] The down-quark mass matrix has a similar form, but with a different expansion parameter $\bar{\epsilon}\approx \sqrt{\epsilon }$. Since the up and down sectors have similar structures, mixing is present in both sectors, though it may be larger in the down sector, simply because $\bar{\epsilon} >\epsilon $. For the mass matrices of~\cite{IR} that we consider initially, one finds the following expressions for the quark mixing matrices~\footnote{ Lepton mixing is discussed in the next section.}: \[ V_{u}^{L,R}\approx \left( \begin{array}{ccc} 1 & \epsilon & 2\epsilon ^{4} \\ -\epsilon & 1 & \epsilon \\ \epsilon ^{2} & -\epsilon & 1 \end{array} \right) ,\;\;V_{d}^{L,R}\approx \left( \begin{array}{ccc} 1 & \bar{\epsilon} & 2\bar{\epsilon}^{4} \\ -\bar{\epsilon} & 1 & \bar{\epsilon} \\ \bar{\epsilon}^{2} & -\bar{\epsilon} & 1 \end{array} \right) \] using the second-order perturbation-theory formulae given in the Appendix. We now discuss the importance of this mixing for $R$-parity violation. 
The most relevant experimental constraints are those on the operators $L_{1}Q_{1}\bar{D}_{1}$ and $L_{1}Q_{3}\bar{D}_{3}$, for which $\lambda _{111}^{\prime }\leq 0.002$ from nuclear $\beta \beta $ decay \cite{HirVer} for squark and gluino masses of 200 GeV, while $\lambda _{133}^{\prime }\leq 0.001$ from bounds~\cite{numass} on Majorana neutrino masses, again assuming masses of 200~GeV for the sparticles.~\footnote{The quoted bound is clearly only approximate, as the exact value depends on soft parameters~\cite{JOSI}.} However, operators related to these operators by mass mixing are also strongly constrained by these bounds. Consider first the relations \begin{equation} \begin{array}{l} (L_{1}Q_{1}\bar{D}_{1})^{\prime }=L_{1}Q_{1}\bar{D}_{1}+\epsilon L_{1}Q_{2}\bar{D}_{1}+2\epsilon ^{4}L_{1}Q_{3}\bar{D}_{1}+... \\ (L_{1}Q_{3}\bar{D}_{3})^{\prime }=L_{1}Q_{3}\bar{D}_{3}-\bar{\epsilon}L_{1}Q_{3}\bar{D}_{2}+\bar{\epsilon}^{2}L_{1}Q_{3}\bar{D}_{1}+... \end{array} \label{opmix} \end{equation} where the notation $()^{\prime }$ denotes effective operators in a current-eigenstate basis. We see that the operators $L_{1}Q_{2} \bar{D}_{1}$ and $L_{1}Q_{3}\bar{D}_{1}$ mix with $L_{1}Q_{1}\bar{D}_{1}$, so their coefficients are constrained to be less than the inverse mixing coefficient ($\epsilon ^{-1}$ and $1/(2 \epsilon^{4})$, respectively) times the bound on $\lambda _{111}^{\prime }$. Similarly, the coefficients of the operators $L_{1}Q_{3}\bar{D}_{2}$ and $L_{1}Q_{3}\bar{D}_{1}$ are constrained to be less than $\bar{\epsilon}^{-1}$ and $\bar{\epsilon}^{-2}$ times the bound on $\lambda _{133}^{\prime }$, respectively. We display below, as an example, matrices of upper limits on $L_1 Q_j \bar{D}_k$ operators. These limits follow from the mixing in this particular model, combined with the experimental upper bounds, for sfermion masses of 200 GeV~\cite{constraints,HirVer,numass,noi,Agashe,atpar}.
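To make the numbers concrete: with $\bar{\epsilon}\simeq 0.23$ and $\epsilon\simeq\bar{\epsilon}^{2}\simeq 0.05$, as in this model, the bound $\lambda'_{111}\leq 0.002$ translates into \[ \lambda'_{121} \lesssim {0.002\over{\epsilon}} \simeq 0.038 \] via up-quark mixing, or $\lambda'_{121} \lesssim 0.002/\bar{\epsilon} \simeq 0.009$ via down-quark mixing, which is the origin of the pair of values appearing in the (2,1) entry of the first matrix of limits below.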
We first look at the bounds that arise from mixing with the $\lambda'_{111}$ operator, tabulating the direct experimental bounds in cases where they are stronger than those originating from the $\lambda'_{111}$ mixing: \[ L_{1jk}^{({\rm from} \; 111)} < \left( \begin{array}{ccc} 0.002 & 0.009 & 0.04 \\ 0.038 (0.009) & 0.03 & 0.3 \\ 0.07 & 0.56 & 0.001 \end{array} \right) \] In certain entries we have two values, because bounds on $e u \bar{d}$ terms involve mixing in the up-quark sector, whereas bounds on $\nu_e d \bar{d}$ terms involve mixing in the down-quark sector. Then we repeat the analysis for mixing with the $\lambda'_{133}$ coupling: \[ L_{1jk}^{( {\rm from} \; 133)} < \left( \begin{array}{ccc} 0.002 & 0.04 & 0.04 (0.02) \\ 0.07 & 0.03 (0.02) & 0.02 (0.004) \\ 0.02 & 0.004 & 0.001 \end{array} \right) \] and finally we gather all the best limits for the matrix elements in this particular model: \[ L_{1jk}^{best} < \left( \begin{array}{ccc} 0.002 & 0.009 & 0.04 (0.02) \\ 0.02 (0.009) & 0.03 (0.02) & 0.02 (0.004) \\ 0.02 & 0.004 & 0.001 \end{array} \right) \] Here, the bound on the (2,1) entry arises from constraints on $\lambda'_{121}$ from $K \rightarrow \pi \nu \bar{\nu}$ \cite{Agashe}, in the case that $V^{CKM}_{12,21}$ arises predominantly from the down-quark sector. At this stage we have not yet taken into account other bounds, especially bounds on products of $R$-violating couplings that pose even stricter constraints \cite{DEB,othpro}. As an example for this Ansatz, the couplings $L_{1}Q_2\bar{D}_1$ and $L_{1}Q_1\bar{D}_2$ appear at such an order in the family-symmetry breaking that the strong bound on the product of these couplings from contributions to $\Delta m_K$~\cite{DEB} is not satisfied. We shall return to this and related issues at a later stage. It should be noted that the model of~\cite{IR} has mixing in both the up and down sectors. 
Indeed, the (2,3) entry of the down mass matrix is $\bar{\epsilon}=0.23$, which is much larger than $V_{23}^{CKM}$. Thus, to obtain viable mass matrices in this example, one needs a suppression of the mixing in $\mid V_{cb}\mid =(a^{\prime }\frac{m_{s}}{m_{b}}+a\frac{m_{c}}{m_{t}}+2% \sqrt{aa^{\prime }\frac{m_{s}m_{c}} {m_{b}m_{t}}}\cos {\phi })^{\frac{1}{2}}$~\cite{IR}. This case, in which the mixing between states is much larger than would have been estimated just using the appropriate CKM mixing matrix element, serves as a healthy reminder of the potential importance of the details of the underlying model for fermion masses when drawing implications for $R$-violating phenomena. \section{Exploring Hierarchies of $R$-Violating Interactions} We now consider the effect of the $U(1)$ symmetry on the pattern of allowed $R$-violating interactions~\cite{MODELS}. We first recall that possible sets of quark and lepton charges leading to correct mass hierarchies are given~\cite{IR} by: \\ {\bf Case 1}: $a_i = b_i = (-4,1,0)$, where $a_i$ and $b_i$ are the quark and lepton charges respectively, and \\ {\bf Case 2}: $a_i =(-4,1,0)$, $b_i = (-\frac{7}{2},\frac{1}{2},0)$. In {\bf Case 1}, where leptons and quarks have the same charges, one needs an additional symmetry in order to eliminate dimension-four nucleon-decay operators. This may be done simply by imposing an anomaly-free flavour-independent baryon parity \cite{GrL}, under which the fields transform as \begin{equation} Z_3: (Q,\bar{U},\bar{D},L,\bar{E},H_1,H_2) \rightarrow (1,a^2,a,a^2,a^2,a^2,a) \end{equation} This allows only the lepton-number-violating operators, while forbidding baryon-number-violating ones ~\footnote{ A flavour-dependent generalisation of this symmetry has been discussed in~\cite{LR}. In this case, consistent solutions were found containing only a subclass of operators violating lepton number ($LL{\bar E}$) and baryon-number (${\bar U}{\bar D} {\bar D}$). 
In this way, it was possible to have both lepton and baryon number violation without disturbing proton stability. However, we do not pursue such models here.}. In {\bf Case 2}, which is motivated by constraints on HERA-friendly models, the lepton charges of the first two generations are half-integers. One might at first think that the residual $\tilde{Z}_2$ symmetry of the $U(1)$ forbids the $L_{1,2}Q\bar{D}$ operators. However, it is straightforward to combine this $\tilde{Z}_2$ with a normal $Z_2^M$ matter parity, so as to allow these terms while also forbidding the $\bar{U}\bar{D}\bar{D}$ terms. This is possible if $\tilde{Z}_2 \times Z_2^M$ is broken to a residual $Z_2$ by a field $\Phi$ that is odd under both symmetries. In this case, $\bar{U}\bar{D}\bar{D}$ is forbidden, because it transforms as $(+,-)$ under $\tilde{Z}_2 \times Z_2^M$. Similarly, $L_{1,2}Q\bar{D}$ transforms as $(-,-)$ and is also forbidden at the renormalisable level, but it occurs at ${\cal O}\left(\frac{\Phi}{M}\right)$ through the term $(\Phi L Q \bar{D})/M$. Let us now pass to the charges of the $R$-violating operators. The first thing to notice is that the form of the mass matrices only determines the relative charges of the operators, not their absolute charges. To see this, note that the symmetric structure of $M^{up}$ is unchanged if we add a family-independent constant to the charges of the $\bar{U}$ fields. This shows that, as we have already mentioned, the charge normalisation of our operators is undetermined by the mass structure. However, anomaly cancellation must be imposed.
With the general charge assignment given in the first line of Table~\ref{table:2}, the coefficients of the $SU(3)^2 \times U(1)$, $SU(2)^2 \times U(1)$ and $U_Y(1) \times U(1)$ anomalies are proportional to $A_{3,2,1}$, where \begin{eqnarray} A_3 & = & 2\sum a_i + \frac{3}{2} w_1 + \frac{3}{2} w_2 \nonumber \\ A_2 & = & \frac{3}{2} \sum a_i + \frac{1}{2} \sum b_i + \frac{1}{2} a_3 (w-2) \nonumber \\ A_1 & = & \frac{11}{6} \sum a_i + \frac{3}{2} \sum b_i + \frac{1}{2} a_3 (w-2) + 4 w_1 + w_2 + 3w_3 \end{eqnarray} We demand that these should vanish up to a Green-Schwarz term~\cite{GS}, i.e., $A_3:A_2:A_1 = 1:1:5/3$. The effect of this is shown in Table \ref{table:2}: in the first row we have a generic charge assignment where the flavour-independent pieces $w_i$ are to be chosen such that the anomaly cancellation conditions are satisfied. Imposing these conditions and reabsorbing $a_3$ in the definitions of the charges, one obtains the charges shown in the second row of Table~\ref{table:2}, where $a'_i \equiv a_i - a_3$, and $b'_i \equiv b_i -b_3$ are of the same form as discussed above, i.e., $a'_i = (-4,1,0)$, $b'_i = (-4,1,0)$ in Case 1 and $b'_i = (-\frac{7}{2},\frac{1}{2},0)$ in Case 2. \begin{table}[h] \centering \begin{tabular}{|c |ccccccc|}\hline &$ Q_i$ & $\bar{U}_i$ &$ \bar{D}_i$ &$ L_i$ & $\bar{E}_i$ & $H_2$ & $ H_1$ \\ \hline $U(1)$ & $a _i$ & $a _i + w_1$ & $a _i + w_2$ & $b_i$ & $b_i+w_3 $ & $-2a _3$ & $ w a _3$ \\ \hline $U(1)$ & $a'_i$ & $a'_i + w_1$ & $a'_i - w_1$ & $b'_i$ & $b'_i-w_1 $ & $-w_1$ & $ w_1$ \\ \hline \end{tabular} \caption{{\it Assignments of flavour symmetry charges, before and after imposing anomaly cancellation.}} \label{table:2} \end{table} We now discuss the possible hierarchies of $R$-violating operators in the two cases. 
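Before turning to the two cases, we note that the Green-Schwarz ratios for the second row of Table~\ref{table:2} can be cross-checked numerically (this check is ours, not part of the original derivation) using exact rational arithmetic and the anomaly coefficients in the standard normalisation, as written out explicitly later in eq.~(\ref{eq:greens}); one finds $A_3:A_2:A_1 = -6:-6:-10 = 1:1:5/3$ for any $w_1$:

```python
from fractions import Fraction as F

def anomalies(Q, U, D, L, E, H1, H2):
    """Mixed anomaly coefficients A_3, A_2, A_1 in the standard normalisation."""
    A3 = sum(Q) + F(1, 2) * (sum(U) + sum(D))
    A2 = F(3, 2) * sum(Q) + F(1, 2) * sum(L) + F(1, 2) * (H1 + H2)
    A1 = (F(1, 6) * sum(Q) + F(4, 3) * sum(U) + F(1, 3) * sum(D)
          + F(1, 2) * sum(L) + sum(E) + F(1, 2) * (H1 + H2))
    return A3, A2, A1

w1 = F(2)                                  # arbitrary family-independent shift
a = [F(-4), F(1), F(0)]                    # quark doublet charges (both cases)
results = []
for b in ([F(-4), F(1), F(0)],             # Case 1 lepton charges
          [F(-7, 2), F(1, 2), F(0)]):      # Case 2 lepton charges
    A3, A2, A1 = anomalies(a,
                           [x + w1 for x in a],   # U-bar charges
                           [x - w1 for x in a],   # D-bar charges
                           b,
                           [x - w1 for x in b],   # E-bar charges
                           w1, -w1)               # H_1, H_2 charges
    # Green-Schwarz condition: A_3 : A_2 : A_1 = 1 : 1 : 5/3
    assert A3 == A2 and A1 == F(5, 3) * A3
    results.append((A3, A2, A1))
```

Both cases pass, and the $w_1$-dependence cancels, as expected for a family-independent shift of the charges.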
\\ {\bf Case 1}: In this case, the charges of the operators $O_{ijk} \equiv L_iL_j\bar{E}_k$ and $L_iQ_j\bar{D}_k$ are the same, and depend only on the values of $i,j,k$, and not on their order, as given in Table \ref{table:3}. We note that the constraints on the operators $L_1Q_1\bar{D}_1$ from nuclear $\beta\beta$ decay and on $L_1L_3\bar{E}_3$ from bounds on Majorana neutrino masses constrain the choice of the charge $w_1$. Since the exact constraint depends on the magnitude of the expansion parameter for the $R$-violating couplings, we need to consider what the constraints are on this expansion parameter. \begin{table}[h] \centering \begin{tabular}{|c |ccccc|}\hline ijk & { 111} & { 121} &{ 122} & { 222} & { 131} \\ \hline $U(1)$ & $-12-w_1$ & $-7-w_1$ & $-2-w_1$ & $3-w_1$ & $-8-w_1$ \\ \hline \hline ijk & { 133} & { 333} & { 223} & { 233} & { 123} \\ \hline $U(1)$ & $-4-w_1$ & $-w_1$ & $2-w_1$ & $1-w_1$ & $-3-w_1$ \\ \hline \end{tabular} \caption{{\it Operator charges in Case 1.}} \label{table:3} \end{table} In the case of the mass matrices, it was suggested in~\cite{IR} that the mixing between Higgs fields carrying different $U(1)$ quantum numbers was responsible for filling in the remaining elements of the mass matrix. In this case the expansion parameters $\epsilon$ and $\bar{\epsilon}$ are as given in~\cite{IR}, with $M_2$, $M_1$ being the mass scales of the heavy Higgs fields $H_2$, $H_1$ that mix with the light Higgses responsible for electroweak breaking. The scales of the vacuum expectation values $\langle\theta\rangle$, $\langle\bar{\theta}\rangle$ are bounded from below by $(1/\sqrt{192}\,\pi)\, M_{string}$, the scale of the $U(1)$ symmetry breaking. Hence $M_2$ and $M_1$ are bounded from below by $\epsilon^{-1}\langle\theta\rangle$ and $\bar{\epsilon}^{-1}\langle\theta\rangle$, respectively. In the case of $R$ violation, mixing between the operators $LL\bar{E}$ or $LQ\bar{D}$ proceeds through heavy lepton or heavy quark mixing rather than through heavy Higgs exchange.
If the former are much heavier than the Higgs states, the corresponding expansion parameter $\epsilon^{\prime}$ will be much smaller. The limiting case occurs when they have string-scale masses, corresponding to \begin{equation} \epsilon^{\prime} = \epsilon \frac{M_2}{M_{string}} \geq \frac{\langle\theta\rangle}{M_{string}} = \frac{1}{\sqrt{192}\,\pi} \approx 0.02 \label{biggest} \end{equation} Taking this lower limit for the expansion parameter and using the constraint $\lambda_{111}^{\prime} \leq 0.002$ from nuclear $\beta\beta$ decay, we find that $|-12-w_1| \geq 2$, whilst the constraint $\lambda_{133}^{\prime} \leq 0.001$ from bounds~\cite{numass} on Majorana neutrino masses indicates that $|-4-w_1| \geq 2$. Next, we note that the magnitudes of the couplings in Table~\ref{table:3} are symmetric in the three indices $ijk$. This implies, for example, that at this level the $\lambda'_{121}$ and $\lambda'_{112}$ couplings should have similar magnitudes. This must be made consistent with the constraint $(L_1 Q_{2} \bar{D}_1).(L_1 Q_{1} \bar{D}_2) \leq 4 \cdot 10^{-9}$, which arises from bounds on $\Delta m_K$~\cite{DEB}. In the present context, this constraint indicates that the relevant charge $|-7-w_1|$ has to be large, and we reach our first HERA-unfriendly conclusion: in this case the $e^+d \rightarrow {\tilde c}$ interpretation of the HERA data would become untenable. A third, related problem is that some couplings to muons would have comparable magnitudes to those listed in Table~\ref{table:3}. For example, the magnitude of the $\lambda'_{211}$ coupling would be comparable to that of the $\lambda'_{121}$ coupling. However, certain products of couplings involving electrons and muons have to be extremely suppressed~\cite{constraints}.
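As a purely arithmetical aside (our own check, using only numbers quoted above), the limiting expansion parameter (\ref{biggest}) and the resulting charge inequalities follow directly:

```python
import math

# limiting expansion parameter of eq. (biggest): 1/(sqrt(192)*pi)
eps_p = 1.0 / (math.sqrt(192) * math.pi)
assert round(eps_p, 2) == 0.02

def min_power(bound, eps=eps_p):
    """Smallest |charge| n with eps**n <= bound, since a coupling of
    flavour charge n is suppressed by eps**|n|."""
    n = 0
    while eps ** n > bound:
        n += 1
    return n

assert min_power(0.002) == 2   # lambda'_111 <= 0.002  ->  |-12 - w1| >= 2
assert min_power(0.001) == 2   # lambda'_133 <= 0.001  ->  | -4 - w1| >= 2
```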
For 200 GeV sfermions, \begin{eqnarray} \lambda_{231} \lambda_{131} & \leq & 2.8 \cdot 10^{-6} \nonumber \\ \lambda'_{1k1} \lambda'_{2k2} & \leq & 3.2 \cdot 10^{-6} \nonumber \\ \lambda'_{1k1} \lambda'_{2k1} & \leq & 2 \cdot 10^{-7} \nonumber \\ \lambda'_{11j} \lambda'_{21j} & \leq & 2 \cdot 10^{-7} \label{muonelectron} \end{eqnarray} Using the form of the mixing matrices for Case 1: \[ V_{\ell }^{L,R}\approx \left( \begin{array}{ccc} 1 & \bar{\epsilon}/3 & 2\bar{\epsilon}^{4} \\ -\bar{\epsilon}/3 & 1 & \bar{\epsilon} \\ \bar{\epsilon}^{2} & -\bar{\epsilon} & 1 \end{array} \right) \] we shall see later that these bounds are so severe as to rule out any possible HERA-friendly model of this simple type. Note that, in order to obtain correct lepton masses within this Ansatz, a factor of $\sim 3$ is needed in the (22) element of the mass matrix, and this factor also enters in the mixings. Other strong constraints on products of couplings are the following: \begin{eqnarray} \lambda_{1j1} \lambda_{1j2} & \leq & 2.8 \cdot 10^{-6} \nonumber \\ \lambda'_{i13} \lambda'_{i31} & \leq & 3.2 \cdot 10^{-7} \nonumber \\ \lambda'_{i12} \lambda'_{i21} & \leq & 4 \cdot 10^{-9} , \label{others} \end{eqnarray} which are particularly stringent in the model under consideration. Using these bounds, one finds \begin{eqnarray} \lambda'_{i13} \leq 6 \cdot 10^{-4} \nonumber \\ \lambda'_{i12} \leq 6 \cdot 10^{-5} , \label{indi} \end{eqnarray} together with corresponding bounds for permutations of the indices. If the reported apparent excess of HERA events at large $Q^2$ were due to production of a single squark by an $R$-violating coupling, one would need \begin{eqnarray} \lambda'_{121,131} \approx & 0.04/\sqrt{\cal B} \nonumber \\ \lambda'_{132} \approx & 0.3/\sqrt{\cal B} \nonumber \end{eqnarray} where ${\cal B}$ is the branching ratio of the decay $\tilde{q} \rightarrow e^+ q$. We see immediately from (\ref{indi}) that the first two possibilities cannot be realised in this model. 
Moreover, we see from (\ref{opmix}) that $\lambda'_{132} \approx \lambda'_{133}/\bar{\epsilon} \leq 0.004$, and infer the third possibility cannot be realised either. What happens to the remaining couplings? From the mixing discussed above, we have: \\ (i) $\lambda'_{111} = \lambda'_{112}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$, which is a stronger bound than the one from neutrinoless $\beta \beta$ decay,~\footnote{We use the down-quark mixing parameter, which is larger, since the operator we compare with differs only in the index of $\bar{D}$.} \\ (ii) $\lambda'_{222} = \lambda'_{221}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$, \\ (iii) $\lambda'_{223} = \lambda'_{213}/\bar{\epsilon} = \lambda'_{312}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$ or $\lambda'_{223} = \lambda'_{213}/ \epsilon = \lambda'_{312}/\epsilon \leq 0.0013$. \\ The constraints here are very strict because the experimental bound on $\lambda'_{312}$ is more severe than that on $\lambda'_{213}$: since we require these two terms to have the same charge, we must take the stricter limit. We also have \\ (iv) $\lambda'_{233} = \lambda'_{223}/\bar{\epsilon} \leq 0.0013 (0.006)$, for expansion parameters $\bar{\epsilon}$ and $\epsilon$ respectively, \\ (v) $\lambda'_{333} = \lambda'_{233}/\bar{\epsilon} \leq 0.006 (0.025)$. In each of these cases, $R$ violation may be manifest in hadron-hadron colliders. For sfermion masses of 100 GeV and $\lambda \geq 10^{-6}$, the lightest supersymmetric particle is expected to decay inside the accelerator. The above constraints allow couplings that are significantly larger than this lower bound. Through a suitable choice of $w_1$, the couplings that are more severely constrained can be made small, while some others can be of importance for collider physics, though none can be very large in this type of model with symmetric mass matrices. 
Hence, single-squark production via an $R$-violating coupling is suppressed, and the best signal would be squark-pair production followed by $R$-violating decay. {\bf Case 2}: \\ In this case, the charges of the operators depend on the flavour-symmetry charge of the singlet field $\Phi$ that we have introduced. This does not affect the relative magnitudes of the $R$-violating couplings, since $\Phi$ appears in all terms. However, this charge and the vacuum expectation value of $\Phi$ do provide a possible source of suppression for the $R$-violating couplings. We take as an indicative value $a_{\Phi} = 1/2$: the corresponding subclasses of $LL\bar{E}\Phi$ and $LQ\bar{D}\Phi$ operators with integer flavour charge appear in Tables~\ref{table:4} and \ref{table:5}. \begin{table}[h] \centering \begin{tabular}{|c |cccc|}\hline ijk$(LL\bar{E})$ &$ 121 $ & $122$ &$ 133$ &$ 233$ \\ \hline $U(1)$ & $-6-w_1$ & $-2-w_1$ & $-3-w_1$ & $1-w_1$ \\ \hline \end{tabular} \caption{{\it Integer $LL\bar{E}$ charges for Case 2.}} \label{table:4} \vspace*{0.8 cm} \centering \begin{tabular}{|c |cccccc|}\hline ijk$(LQ\bar{D})$ &$ 111 $ & $121$ &$ 122$ &$ 131$ & $123$ & $133$ \\ \hline $U(1)$ & $-11-w_1$ & $-6-w_1$ & $-1-w_1$ & $-7-w_1$ & $-2-w_1$ & $-3-w_1$ \\ \hline \hline ijk$(LQ\bar{D})$ &$ 211 $ & $221$ &$ 222$ &$ 231$ & $223$ & $233$ \\ \hline $U(1)$ & $-7-w_1$ & $-2-w_1$ & $3-w_1$ & $-3-w_1$ & $2-w_1$ & $1-w_1$ \\ \hline \end{tabular} \caption{{\it Integer $LQ\bar{D}$ charges for Case 2: these charges remain the same if $j$ and $k$ are interchanged.}} \label{table:5} \end{table} In this model, the lepton mixing matrix takes the form \[ V_{\ell }^{L,R}\approx \left( \begin{array}{ccc} 1 & \bar{\epsilon}^{2} & 0 \\ -\bar{\epsilon}^{2} & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) \] which is independent of the value chosen for $w_1$. What are the predictions for the strength of the $R$-violating couplings in this model?
As in the previous model, we have: \begin{eqnarray} \lambda'_{i13} \leq 6 \cdot 10^{-4} \nonumber \\ \lambda'_{i12} \leq 6 \cdot 10^{-5} \end{eqnarray} and so again there is no possibility to explain the HERA events, essentially for the same reasons as in Case 1. From the charges of Table \ref{table:5}, we find that $\lambda'_{211} = \lambda'_{113}$, $\lambda'_{133} = \lambda'_{213}$ and $\lambda'_{123} = \lambda'_{212}$. Since $\lambda'_{113} = \lambda'_{123}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$, we obtain the slightly stronger bound $\lambda'_{i13} = \lambda'_{i31} \leq 3 \cdot 10^{-4}$. For the remaining couplings we have the following bounds: \\ (i) $\lambda'_{111} = \lambda'_{112}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$, \\ (ii) $\lambda'_{122} = \lambda'_{121}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$, \\ (iii) $\lambda'_{222} = \lambda'_{221}/\bar{\epsilon} \leq 3 \cdot 10^{-4}$, \\ (iv) $\lambda'_{223} = \lambda'_{213}/\bar{\epsilon} \leq 0.0013$, \\ (v) $\lambda'_{233} = \lambda'_{223}/\bar{\epsilon} \leq 0.006$. \\ The difference from the previous solution is that the couplings $L_3 Q_j\bar{D}_k$ are absent, and thus constraints from them are evaded. However, the model remains restrictive, as all the quarks of the same generation have the same charge. Therefore, the strict bounds on products of operators still constrain strongly individual couplings. Nevertheless, we see from the limits above that several possibilities exist for $R$-violating squark decays within hadron-hadron collider detectors. We see therefore that there are four problems that do not allow an explanation of the HERA events within the framework of these models. {\bf First}, the quarks and leptons of the same generation have the same charges, so the $L_iL_j\bar{E}_k$ and $L_iQ_j\bar{D}_k$ couplings are subject to the same bounds. 
{\bf Secondly}, the choice of symmetric mass matrices makes the last two equations of (\ref{others}) difficult to satisfy, because $Q_i$ and $\bar{D}_i$ have the same charge and hence each factor involved has the same suppression, so that one cannot arrange to satisfy the inequality while keeping one coupling large. {\bf Thirdly}, the model has large mixing in the (1,2) down-quark sector, making the last of eqs (\ref{others}) difficult to satisfy. {\bf Finally}, the large (1,2) mixing in the charged lepton sector may not be reconciled with bounds on products of couplings that involve electrons and muons. Thus we see that the combination of the various $R$-violating bounds with simple family symmetries produces strong constraints on a variety of $R$-violating couplings. For the case of the family symmetry leading to the symmetric mass matrix (\ref{mm}) these constraints imply that $R$ violation does not give rise to anomalous events at HERA at a significant rate. \section{HERA-Friendly Textures of $R$-Violating Couplings} In this section we explore modifications of the simple $U(1)$ family structure, which may be able to accommodate an $R$-violating interpretation of the apparent excess of events at HERA. As we have stressed, a major problem in building a model to accommodate the HERA events lies in the need to satisfy the bounds (\ref{others}) while keeping large one of the individual couplings involved in these products. This leads us to consider models with the (1,2) mixing entirely in the up-quark sector and to deviate from the symmetric mass-matrix structure. \subsection{Asymmetric Flavour Textures} Once one gives up on the symmetric form, the pattern of masses is insufficient to constrain the $U(1)$ charge structure, so there are many new possibilities. Here we present just one viable choice to illustrate the options, but we certainly do not claim any uniqueness. 
We start with the charge assignment (-4,1,0) for the quark doublets of the model discussed above, and modify the up- and down-quark singlet charges to achieve the desired structure. In order to reduce the arbitrariness, we also choose to generate both the up- and down-quark mass matrices with the same expansion parameter, as would be the case if the non-renormalisable terms $Q_i\bar{D}_iH_2 \theta/M$ arise through heavy-quark mixing. With this Ansatz, a suitable choice for the up-quark singlet charges is (-5,1,0), which gives \begin{eqnarray} M^{up} = \left ( \begin{array}{ccc} {\epsilon}^{9} & 4 {\epsilon}^{3} & {\epsilon}^4 \\ 4 {\epsilon}^{4} & \epsilon^2 & {\epsilon} \\ {\epsilon}^{5} & {\epsilon} & 1 \end{array} \right ) \end{eqnarray} A value of $\epsilon \approx 1/20$ gives an acceptable charm mass. Note that we have been forced to assume an enhancement factor of 4 in the (1,2) and (2,1) elements in order to accommodate the mixing needed to generate $V^{CKM}_{12} \approx V_{u_{12}}^L = 4 \epsilon \approx 0.2$. The mass eigenvalues are $1, \epsilon^2$ and $16 \epsilon^5$, from which we see that the factor of 16 which appears from the coefficients in the off-diagonal entries compared to the solution of~\cite{IR} is compensated by the additional power in the expansion parameter. The choice of down-quark charges is dictated by the requirement that we keep the (1,2) mixing small. A suitable choice for the charges of the singlet down quarks is (7,-3,1), which gives the structure \begin{eqnarray} M^{down} = \left ( \begin{array}{ccc} {\epsilon}^3 & {\epsilon}^{7} & {\epsilon}^3 \\ {\epsilon}^{8} & {\epsilon}^2 & {\epsilon}^2 \\ {\epsilon}^7 & {\epsilon}^3 & \epsilon \end{array} \right ) \end{eqnarray} The eigenvalues scale as $\epsilon, \epsilon^2, \epsilon^3$, and $V_{d_{13}}^{L} \approx V_{d_{31}}^{L} \approx \epsilon^2$.
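As a sketch-level numerical check of these two textures (ours; the unknown ${\cal O}(1)$ coefficients other than the factors of 4 are set to unity, so only leading-order scalings are tested), the quoted eigenvalue scalings can be compared with the exact determinants:

```python
e = 1 / 20.0   # expansion parameter giving an acceptable charm mass

M_up = [[e**9, 4 * e**3, e**4],
        [4 * e**4, e**2, e],
        [e**5, e, 1]]

M_down = [[e**3, e**7, e**3],
          [e**8, e**2, e**2],
          [e**7, e**3, e]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# quoted up-sector eigenvalues 1, e^2, 16 e^5: their product is 16 e^7
assert abs(abs(det3(M_up)) / (16 * e**7) - 1) < 0.01
# quoted down-sector scalings e, e^2, e^3: their product is e^6
assert abs(abs(det3(M_down)) / e**6 - 1) < 0.01
# Cabibbo-sized (1,2) mixing from the up sector: 4 e = 0.2
assert abs(4 * e - 0.2) < 1e-12
```

The sub-percent deviations are the expected higher-order corrections in $\epsilon$.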
Moreover, $V_{d_{23,32}}^{L} \approx \epsilon = 0.05$, so we do not require the cancellations that were needed in~\cite{IR} (remember that in this case $V_{d_{23,32}}^{L} \approx \bar{\epsilon} = 0.23$). Note that this choice has the advantage of reducing the bottom mass through an $\epsilon$ factor, putting us in the small-tan$\beta$ regime. This phenomenological choice of charges does not yet ensure anomaly cancellation, but at a later stage, when we also have a good phenomenological choice for the lepton mass matrix, we will discuss what flavour-independent charges have to be added in order to cancel anomalies. These additional charges will not modify the hierarchy of couplings, nor the relative magnitudes of the $R$-violating couplings of a given type. The key point of this model is its large $U(1)$ charge difference between the relevant $LQ\bar{D}$ couplings: \begin{eqnarray} a_{L_1Q_2\bar{D}_1} - a_{L_1 Q_1 \bar{D}_2} & \rightarrow & 15 \label{PRO1} \\ a_{L_1Q_3\bar{D}_1} - a_{L_1 Q_1 \bar{D}_3} & \rightarrow & 10 \label{PRO2} \end{eqnarray} which leads to large relative suppressions of these operators, though mixing in the (1,2) and (1,3) down sectors will close this gap. Consider first the operators appearing in (\ref{PRO1}). The mixing of the left-handed down quarks is given by the form of $V_{12,21}^L$ in the Appendix, and the second-order term dominates with $m^d_{13} m^d_{32} / (M_3 M_2) \approx \epsilon^3 = 1.3 \cdot 10^{-4}$. The mixing of $\bar{D}_1,\bar{D}_2$ is given by $m_{21}^d/M_2 \approx m^d_{31} m^d_{23} / (M_3 M_2) \approx \epsilon^6$. Taking the same expansion parameter as for the masses, consistent with (\ref{biggest}), the net suppression is $\epsilon^3 \epsilon^6 \approx 2 \cdot 10^{-12}$. This is more than sufficient to satisfy the bound on $L_1Q_1\bar{D}_2$ while allowing the $L_1 Q_2 \bar{D}_1$ to have a coefficient large enough to give the HERA events. 
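The estimates in this paragraph can be reproduced with elementary arithmetic (a cross-check of ours, with $\epsilon = 0.05$ as in the text):

```python
e = 0.05

V12_L = e**3            # left-handed (1,2) down mixing, via m13*m32/(M3*M2)
assert abs(V12_L - 1.3e-4) < 1e-5

mix_Dbar = e**6         # (1,2) mixing of the singlets D-bar_1, D-bar_2
net = V12_L * mix_Dbar  # net suppression of L1 Q1 Dbar2 relative to L1 Q2 Dbar1
assert abs(net - 2e-12) < 1e-13

# with lambda'(L1 Q2 Dbar1) ~ 0.04, the induced product easily satisfies
# the Delta m_K bound of 4e-9 on the product of the two operators:
lam = 0.04
assert lam * (lam * net) < 4e-9
```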
Similarly, one may check that it is possible for the $L_1Q_3\bar{D}_1$ operator to be relevant for HERA, without inducing an $L_1Q_1\bar{D}_3$ coupling with an unacceptable value. \subsection{Lepton Flavour Violation} We now turn to the assignment of lepton charges in such a model. Given the bounds of (\ref{muonelectron}), if any coupling involving an electron is large, the corresponding coupling involving muons should be small. The simplest solution is to choose charge assignments so that the $U(1)$ charge of the (1,2) entry of the lepton mass matrix is half-integer. In this case, a residual $Z_2$ symmetry forbids the (1,2) mass term. The choice of charges is restricted by the anomaly cancellation conditions. These are: \begin{eqnarray} A_3 & = & ( a_{Q_1} + a_{Q_2} + a_{Q_3} ) + \frac{1}{2} (a_{\bar{U}_1} + a_{\bar{U}_2} + a_{\bar{U}_3}) + \frac{1}{2} (a_{\bar{D}_1} + a_{\bar{D}_2} + a_{\bar{D}_3}) \nonumber \\ A_2 & = & \frac{3}{2} ( a_{Q_1} + a_{Q_2} + a_{Q_3}) + \frac{1}{2} (a_{L_1} + a_{L_2} + a_{L_3} ) + \frac{1}{2} ( a_{H_1} + a_{H_2} ) \nonumber \\ A_1 & = & \frac{1}{6} ( a_{Q_1} + a_{Q_2} + a_{Q_3} ) + \frac{4}{3} (a_{\bar{U}_1} + a_{\bar{U}_2} + a_{\bar{U}_3}) + \frac{1}{3} (a_{\bar{D}_1} + a_{\bar{D}_2} + a_{\bar{D}_3}) \nonumber \\ & & + \frac{1}{2} (a_{L_1} + a_{L_2} + a_{L_3} ) + (a_{\bar{E}_1} + a_{\bar{E}_2} + a_{\bar{E}_3} ) + \frac{1}{2} ( a_{H_1} + a_{H_2} ) \label{eq:greens} \end{eqnarray} where by $a_{F_i}$ we denote the charge of particle $F$ in the $i^{th}$ generation. We see from $A_3$ and $A_2$ that, for integer quark charges, a natural solution of the conditions has integer sums of the $a_{\bar{E}_i}$ and of the $a_{L_i}$. Thus, we need two of the $a_{L_i}$ and two of the $a_{\bar{E}_i}$ to be half-integers. We see, for example, that the choice $a_{L_1} = 9/2, a_{L_2} = -1, a_{L_3} = -1/2, a_{\bar{E}_1} = -1/2, a_{\bar{E}_2} = -1, a_{\bar{E}_3} = -1/2$ generates a viable mass hierarchy.
Here we have chosen the (3,3) charge to be the same as that of the down-quark, in order to give $b-\tau$ unification. Clearly this pattern of charges is not the only viable choice, but it indicates how things may work. The corresponding lepton mass matrix in this solution is: \begin{eqnarray} M^{L} = \left ( \begin{array}{ccc} {\epsilon}^4 & 0 & {\epsilon}^4\\ 0 & {\epsilon}^2 & 0 \\ {\epsilon} & 0 & {\epsilon} \end{array} \right ) \end{eqnarray} and there is (1,3) mixing, but no (1,2) or (2,3) mixing. The eigenvalues of this mass matrix are $\epsilon, \epsilon^2, \epsilon^4$, consistent with the measured values for $\epsilon = 0.05$. \begin{table}[h] \centering \begin{tabular}{|c|ccccccc|} \hline & $Q_i$ & $\bar{U}_i$ & $\bar{D}_i$ & $L_i$ & $\bar{E}_i$ & $H_2$ & $H_1$ \\ \hline $U(1)$ & $a'_i$ & $a'_{\bar{U}_i}$ & $a'_{\bar{D}_i}$ & $a'_{L_i}$ & $a'_{\bar{E}_i}$ & $0$ & $0$ \\ \hline $U(1)$ & $a'_i$ & $a'_{\bar{U}_i} + w_1$ & $-1+a'_{\bar{D}_i} - w_1$ & $-1-a'_{L_i}$ & $a'_{\bar{E}_i}-w_1$ & $-w_1$ & $1+w_1$ \\ \hline \end{tabular} \caption{{\it Flavour charges in models with asymmetric mass matrices.}} \label{table:6} \end{table} We denote by $a'_{Q_i} \equiv (-4,1,0)$, $a'_{\bar U_i} \equiv (-5,1,0)$, $a'_{\bar D_i} \equiv (7,-3,1)$, $a'_{L_i} \equiv (9/2,-1,-1/2)$ and $a'_{\bar E_i} \equiv (-1/2,-1,-1/2)$ the choices of generation-dependent charges in this model. The top line of Table~\ref{table:6}, which includes these and the corresponding charges for the Higgs multiplets, is not anomaly-free. It is now easy to satisfy the anomaly-matching conditions, allowing for additional family-independent components of the $U(1)$ charge, which do not affect the mass matrix structure. An anomaly-free solution is obtained by adding charges as indicated in the second line of Table \ref{table:6}, where the variable $w_1$ is an integer.
\subsection{Nucleon Stability} We now demonstrate that nucleon decay graphs due to combinations of $LQ{\bar D}$ and ${\bar U}{\bar D}{\bar D}$ interactions may be eliminated in this HERA-friendly example by imposing an anomaly-free discrete gauge symmetry. In the specific model discussed above, where the first- and third-generation leptons have half-integer charges under the flavour symmetry, we can again combine the residual $\tilde{Z}_2$ symmetry of the $U(1)$ \cite{IR} with a normal $Z_2^M$ matter parity, where $\tilde{Z}_2 \times Z_2^M$ is broken to a diagonal $Z_2$ by a field $\Phi$ that is odd under both symmetries. Then the couplings $\bar{U}\bar{D}\bar{D}$ and $L_{2}Q\bar{D}$ transform as $(+,-)$ under the symmetry and are forbidden. Renormalisable $L_{1,3} Q \bar{D}$ couplings, which transform as $(-,-)$, are also forbidden, but effective couplings of this type may occur at ${\cal O}\left(\frac{\Phi}{M}\right)$ through the term $(\Phi L Q \bar{D})/M$. Such underlying $Z_2$ symmetries can be illustrated in the context of a GUT group, if desired. Consider, for example, the Pati-Salam gauge group $SU(4) \times SU(2)_L \times SU(2)_R$~\cite{PS}. In models based on this group, the fermionic fields belong to either the $4$ or the $\bar{4}$ representations of $SU(4)$, and no trilinear $R$-violating term is invariant under the symmetry. However, invariants can be constructed by introducing an adjoint field~\cite{patisal}, which we may identify with $\Phi$.~\footnote{A study of the string origin of non-renormalisable operators in this model has been presented in~\cite{string}.} If $\Phi$ has a half-integer charge, the fact that it is in the adjoint of $SU(4)$ means that all baryon-number-violating operators are forbidden to all orders, whilst the terms $L_{1,3}Q\bar{D}\Phi$ have integer charge and are therefore allowed.
Moreover, no effective terms $L_{1,3}\bar{E}H_2\Phi$, which could cause problems with the lepton mass hierarchies, are allowed, as they are not invariant under the extended gauge group. \subsection{Hierarchy of $R$-Violating Interactions in a HERA-Friendly Model} We now consider the effect of the $U(1)$ symmetry on the pattern of allowed $R$-violating interactions in the models that were motivated by the HERA events. The charges of the operators depend on the half-integer charge of the field $\Phi$ under the flavour symmetry. This does not affect the relative magnitudes of the $R$-violating couplings, since $\Phi$ appears in all terms. However, this charge and the vacuum expectation value of $\Phi$ do provide a possible source of suppression for the $R$-violating couplings. The corresponding subclasses of $LL\bar{E}\Phi$ and $LQ\bar{D}\Phi$ operators with integer flavour charge, before introducing mixing effects, appear in Tables~\ref{table:7} and \ref{table:8}. We have used the charges of Table~\ref{table:6}, imposing anomaly cancellation. Here we have taken $\Phi$ to have $U(1)$ charge $1/2$, but its actual value can be re-absorbed in the definition of $w_1$. Let us first consider the possibility that the apparent excess of HERA events is due to the $L_1Q_3\bar{D}_2$ operator with $\lambda^{\prime} > 0.3/\sqrt{\cal B}$, which will be the dominant operator if $w_1 = 0$. The relative suppression of the $L_1 Q_3 \bar{D}_3$ operator is $\epsilon^4$. Of course, mixing effects in the (2,3) sector of the down-quark mass matrix re-introduce an $L_1Q_3\bar{D}_3$ operator at order $\epsilon^2$, via the mixing of right-handed down quarks. Therefore, in models of this type with a small $V^R_{d_{23}}$ mixing, this solution may in principle be accommodated.
However, for our specific choice of charges, we see that the operator $L_1 Q_3 \bar{D}_2$ has the same charge as $L_1 Q_1 \bar{D}_3$, which is bound by charged-current universality~\cite{noi} to be $\leq 0.04$ for a squark mass of 200 GeV. Hence the $L_1Q_3\bar{D}_2$ interpretation of the HERA data is not realisable in this specific model. However, this is rather accidental for this particular example, and need not be the case in general. \begin{table}[h] \centering \begin{tabular}{|c|cccc|} \hline ijk$(LL\bar{E}\Phi)$ & $122 $ & $131$ & $133$ & $231$ \\ \hline $U(1)$ & $1-w_1$ & $2-w_1$ & $2-w_1$ & $-4-w_1$ \\ \hline \end{tabular} \caption[]{\it Integer $LL\bar{E}\Phi$ charges, ignoring mass mixing.} \label{table:7} \end{table} \par \vspace*{0.8 cm} \par \begin{table}[h] \centering \begin{tabular}{|c|cccccc|} \hline ijk$(LQ\bar{D}\Phi)$ & $111 $ & $112$ & $113$ & $121$ & $122$ & $123$ \\ \hline $U(1)$ & $6-w_1$ & $-4-w_1$ & $-w_1$ & $11-w_1$ & $1-w_1$ & $5-w_1$ \\ \hline\hline ijk$(LQ\bar{D}\Phi)$ & $131 $ & $132$ & $133$ & $311$ & $312$ & $313$ \\ \hline $U(1)$ & $10-w_1$ & $-w_1$ & $4-w_1$ & $1-w_1$ & $-9-w_1$ & $-5-w_1$ \\ \hline\hline ijk$(LQ\bar{D}\Phi)$ & $321 $ & $322$ & $323$ & $331$ & $332$ & $333$ \\ \hline $U(1)$ & $6-w_1$ & $-4-w_1$ & $-w_1$ & $5-w_1$ & $-5-w_1$ & $-1-w_1$ \\ \hline \end{tabular} \caption{{\it Integer \protect$LQ\bar{D}\Phi\protect$ charges, ignoring mass mixing.}} \label{table:8} \end{table} We now look at the possibilities that the HERA events arise from the $L_1Q_{2,3}\bar{D}_1$ operators. We see from Table~\ref{table:6} that, in the absence of mixing, the relative suppressions of the $L_1Q_1\bar{D}_1$ and $L_1Q_3\bar{D}_3$ operators would have been enough to make these cases viable. When mixing effects are included, the possible effects of unknown phases should be taken into account when comparing with bounds. 
In the specific case that the HERA events are due to an $L_1Q_2\bar{D}_1$ coupling, we have no problem with the $L_1Q_3\bar{D}_3$ operator, but there is a potential difficulty with $\beta \beta$ decay, due to mixing with the $L_1Q_1\bar{D}_1$ operator: \begin{equation} \left| \lambda^{\prime}_{121} V^L_{u_{12}} \left( \frac{200\;{\rm GeV}}{m_{\tilde{u}_L}} \right)^{\!\!2} \right| < 4\cdot 10^{-3} \left( \frac{m_{\tilde{g}}}{1\;{\rm TeV}} \right)^{\!1/2} \end{equation} where $m_{\tilde{g}}$ is the gluino mass. Given (a) that the $V^{CKM}$ mixing arises from the up sector in our framework~\footnote{ Even in the case that the $V^{CKM}_{12,21}$ mixing arises from the down-quark sector, squark mass universality violation is required in order to evade bounds from $K \rightarrow \pi \nu \bar{\nu}$. }, and (b) that the bounds from the Tevatron indicate that the branching ratio of $\tilde{c}_L$ to fermions cannot be close to unity in the context of this interpretation, implying that $\lambda^{\prime}_{121}$ has to be larger than 0.04, we see that this solution is not naturally accommodated. It might be possible if $m_{\tilde{u}_L}$ is significantly larger than $m_{\tilde{c}_L}$, but this requires a violation of squark-mass universality that is potentially dangerous for flavour-changing neutral interactions. On the other hand, if the HERA events are due to a $L_1Q_3\bar{D}_1$ coupling, there is no problem with the $L_1Q_1\bar{D}_1$ operator, but a problem could in principle appear with the $L_1Q_3\bar{D}_3$ coupling that is bounded from limits on neutrino Majorana masses~\cite{numass}. However, the relevant (3,1) mixing term is small, indicating that in this case the unknown coefficients may be such that the bounds are easily accommodated. What about the other couplings? For $L_1Q_3\bar{D}_1 \approx 0.04$, the model predicts that $L_1Q_2\bar{D}_1 \approx L_3Q_1\bar{D}_2 \approx 0.002$, while all other couplings are very suppressed.
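The tension in the $L_1Q_2\bar{D}_1$ option is simple to quantify (an illustrative estimate of ours, assuming $V^L_{u_{12}} \approx 0.22$, $m_{\tilde{u}_L} = 200$~GeV and $m_{\tilde{g}} = 1$~TeV, so that the right-hand side of the $\beta\beta$ bound is $4 \cdot 10^{-3}$):

```python
lam_121 = 0.04     # minimal HERA-sized coupling
V_u12 = 0.22       # Cabibbo mixing, assumed to arise in the up sector

effective = lam_121 * V_u12   # induced L1 Q1 Dbar1 strength
bound = 4e-3                  # beta-beta decay limit for the masses above
assert effective > bound      # -> the solution is not naturally accommodated

# conversely, evading the bound would force lambda'_121 below ~0.018 < 0.04
assert bound / V_u12 < 0.04
```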
Indeed, looking at the charges, we see that the next-largest couplings are suppressed by $\epsilon^4$ as compared to $L_1Q_3\bar{D}_1$. Mixing effects are also suppressed, except for the operators $L_1Q_1\bar{D}_1 \approx L_1Q_2\bar{D}_1 (4\epsilon) \approx 0.0004$ and $L_3Q_2\bar{D}_2 \approx L_3Q_1\bar{D}_2 (4\epsilon) \approx 0.0004$, which are within the allowed range. Finally, note that we do not have any mixing between $L_1Q_j\bar{D}_k$ and $L_2Q_j\bar{D}_k$ couplings (the latter are forbidden by the symmetry), so the dangerous product combinations that violate lepton flavour are also absent. In the light of the above discussion, we conclude that, of the valence-quark production mechanisms via $L_1Q_2\bar{D}_1$ and $L_1Q_3\bar{D}_1$ couplings, the second possibility seems to be favoured. It should be possible to make a model with a coupling $L_1Q_3\bar{D}_2$ sufficiently large to explain the HERA data, although we have not displayed one here. \section{Baryon Decay via Dimension-Five Operators} We saw earlier on that the experimental absence of baryon decay imposed important constraints on possible models, which are most easily evaded by imposing a baryon parity symmetry that forbids the dangerous ${\bar U}{\bar D}{\bar D}$ couplings. However, this is not the end of the story, since models may also contain dimension-five operators that would generate proton decay at an unacceptable level. The most dangerous among these are the operators $[QQQL]_F$ and $[QQQH_1]_F$, the latter in the presence of $LQ\bar{D}$ couplings.~\footnote{The lepton-number-violating operators $[Q\bar{U}\bar{E}H_1]_F$ and $[Q\bar{U}L^*]_D$ are dangerous in the presence of $\bar{U}\bar{D}\bar{D}$ ones.} These operators can lead to fast proton decay via loop diagrams. In the case of $[QQQL]_F$ operators that involve the two lightest generations, the constraint on the coupling $\eta$ of any such operator is $\eta \le 10^{-7}$.
This bound has some flexibility, since the magnitude of the loop diagrams depends on details of the sparticle spectrum, but this flexibility is not crucial for the subsequent discussion of models. In the case of $[QQQH_1]_F$ operators with couplings $\eta^{\prime}$, fast proton decay may occur if they are present simultaneously with $LQ\bar{D}$ operators with generic coefficients $\lambda^{\prime}$. The product of the corresponding couplings is constrained: $\eta^{\prime}\lambda^{\prime}\le 10^{-10}$. Since an $R$-violating interpretation of the HERA events requires either $L_1Q_2\bar{D}_1$ or $L_1Q_3\bar{D}_1$ $\approx 0.04$, it is clear that we have to worry about the $[QQQH_1]_F$ operator as well. We now analyse the charges of the dimension-five operators in the different cases discussed in previous sections, to see whether they are large enough for the suppression by powers of small quantities to be sufficient. How small the terms actually are depends on the expansion parameter, as we have already discussed in a previous section. The $QQQH_1$ operators are easily dealt with, even though the baryon stability requirements seem to be more severe for them. The reason is that these operators transform as $(+,-)$ under the $\tilde{Z}_2 \times Z_2^M$ symmetry, and are thus forbidden. What about the $QQQL$ operators? The $QQQL_{1,3}$ operators are not present, because they transform as $(-,+)$ under $\tilde{Z}_2 \times Z_2^M$. However, the operators $QQQL_2$ are allowed. These are dangerous, because proton decay may occur via the modes $p \rightarrow \bar{\nu}_{1,2,3} \pi^+$ and $p \rightarrow \bar{\nu}_{1,2,3}K^+$. Let us look at the flavour charges of these operators. We recall that colour antisymmetrisation implies that the quark flavour indices cannot all be identical.
The operators that are not suppressed enough by quark mixing parameters have the following charges in the model that could explain the HERA events: \begin{eqnarray} a_{Q_1Q_1Q_2L_2} & = & -9 \nonumber \\ a_{Q_1Q_2Q_2L_2} & = & -4 \nonumber \\ a_{Q_1Q_1Q_3L_2} & = & -10 \nonumber \\ a_{Q_1Q_2Q_3L_2} & = & -5 \end{eqnarray} where for the lepton charge we used the anomaly-free choice of Table~\ref{table:6}. We infer that we do not need any further underlying symmetry in order to suppress these couplings adequately. However, even in models where this suppression does not occur, there could be some GUT symmetry that forbids the offending $QQQL$ operators~\footnote{ Moreover, in string-derived GUT models, string selection rules may lead to the vanishing of operators that are invariant under field-theory symmetries. In such models, it is possible to construct realistic fermion mass matrices while having maximal proton stability \cite{ELLN}.}. This would be an interesting constraint on GUT model-building, but should not be taken as a serious obstacle to constructing HERA-friendly models. \section{Concluding Comments} We have discussed the implications of a single $U(1)$ abelian flavour symmetry for the possible hierarchies of $R$-violating couplings. The relations between the Standard-Model Yukawa couplings and $R$-violating couplings depend on the choice of model charges, so the observed hierarchies of quark and lepton masses do not lead to a unique specification of the dominant $R$-violating couplings. However, we have identified certain general features of such a framework, highlighting the importance of mass mixing between current eigenstates. We have identified various interesting possibilities for hadron-hadron collider phenomenology that are consistent with this mixing and the available experimental constraints. 
Within this general approach, we have searched specifically for simple consistent models that lead to the favoured $R$-violating scenarios for explaining the possible excess in the HERA data. Our results may be summarised as follows: $\bullet$ Flavour symmetries lead us to expect a hierarchy in the $R$-violating couplings, analogous to that observed for the known fermion masses. These hierarchies can be consistent with a squark-production interpretation of the HERA data (if required), as well as with the various other experimental constraints on the couplings. $\bullet$ The simplest charge assignments lead to unified, and thus more predictive, forms for the mass matrices. For the case of equal charges for up and down quarks and leptons of a given generation, the symmetry together with bounds from products of $R$-violating couplings implies that there should be no significant anomalous events at HERA coming from such couplings. If we wish to accommodate such anomalous events, we are forced to depart from this picture. Schemes with asymmetric charges and different assignments for up quarks, down quarks and leptons give rise to larger splittings between different operators. $\bullet$ Some of the charge assignments considered forbid large coefficients of dimension-five operators that are potentially dangerous for baryon stability. In schemes where this is not true, such terms would need to be forbidden by further GUT symmetries. One can consider relaxing various of our conditions, for example by introducing a higher level of asymmetry in the mass matrices, invoking multiple $U(1)$ flavour symmetries, etc., and in such models the predictions can be further altered. Moreover, additional zero couplings may be expected when one goes to a specific GUT/string construction.
However, it is interesting that it is possible to construct phenomenological models with a single $U(1)$ flavour symmetry that are compatible with attempts to explain the reported excess of HERA data by $R$-violating squark production, albeit at a price. In order to constrain the possible schemes, and perhaps rule some out, more experimental data are required. \vspace*{0.2 cm} \begin{center} {\bf Appendix} \end{center} Using second-order perturbation theory, it is easy to derive the mixing elements for a generic mass matrix~\cite{HR,BJ}, where $m$ stands for the off-diagonal contributions and $M$ for the diagonal part. The left-handed mixing is given by \cite{BJ} \begin{eqnarray} V_{ij}^L = - \frac{ (m_{ij}M_j+m^*_{ji}M_i)}{M_i^2-M_j^2} + \frac{ (m_{ik}M_k+ m_{ki}^*M_i) (m_{kj}M_j +m^*_{jk} M_k ) } {(M_i^2-M_j^2)(M_k^2-M_j^2)} - \frac{m_{ik} m^*_{jk}}{(M_i^2-M_j^2)} \nonumber \end{eqnarray} and $V_{ij}^R$ is given by a corresponding expression, substituting the mass matrix by its hermitian conjugate. In the case that the ratio of $m_{ij}$ to $m_{ji}^*$ is considerably larger than the ratio $M_i/M_j$, the mixing elements are given by: \begin{eqnarray} V^L_{12} & = & +\frac{m_{12}}{M_2} \; - \left [ \frac{m_{13}m_{32}}{M_3 M_2} \right ] \; + \; ... \nonumber \\ V^L_{21} & = & -\frac{m^*_{12}}{M_2} \; + \left [ \frac{m^*_{32}m^*_{13}}{M_2 M_3} \right ] \; + \; ... \nonumber \\ V^L_{13} & = & +\frac{m_{13}}{M_3} \; + \left ( \frac{m_{12}m_{23}M_2}{M_3^3} \right) \; + \left [ \frac{m_{12}m^*_{32}}{M_3^2} \right ] \; + \; ... \nonumber \\ V^L_{31} & = & -\frac{m_{13}^*}{M_3} \; + \frac{m^*_{12}m^*_{23}}{M_2 M_3} \; + \left[ - \frac{m_{32}m^*_{12}}{M_3^2} \right ] \; + \; ... \\ V^L_{23} & = & +\frac{m_{23}}{M_3} \; + \left ( \frac{m^*_{12}m_{13}M_2}{M_3^3} \right) \; + \left [ \frac{m_{21}m^{*}_{31}}{M_3^2} \right ] \; + \; ... 
\nonumber \\ V^L_{32} & = & -\frac{m^*_{23}}{M_3} \; - \frac{m_{12}m^*_{13}}{M_2 M_3} \; + \left [ - \frac{m_{31}m^*_{21}}{M_3^2} \right ] \; + \; ... \nonumber \label{mix} \end{eqnarray} In the above expressions, terms in brackets mark contributions which involve mass entries below the diagonal. These terms, as well as the ones in parentheses, can in most cases be neglected. However, they may become relevant if the mass matrices have texture zeroes. In the case of the full $V_{CKM}$ matrix, one has \begin{eqnarray} V^{CKM}_{12} & = & + \frac{m^d_{12}}{M_2^d} - \frac{m^u_{12}}{M_2^u} + ... \nonumber \\ V^{CKM}_{21} & = & - \frac{{m^d_{12}}^*}{M_2^d} + \frac{{m^u_{12}}^*}{M_2^u} + ... \nonumber \\ V^{CKM}_{23} & = & + \frac{m^d_{23}}{M_3^d} - \frac{m^u_{23}}{M_3^u} + \frac{m^d_{13} {m_{12}^u}^*}{M_3^d M_2^u} - \frac{m^u_{13} {m_{12}^u}^*}{M_3^u M_2^u} + ... \nonumber \\ V^{CKM}_{32} & = & -\frac{{m^d_{23}}^*}{M_3^d} + \frac{{m^u_{23}}^*}{M_3^u} - \frac{m^d_{12} {m_{13}^d}^*}{M_2^d M_3^d} + \frac{m^d_{12} {m_{13}^u}^*}{M_3^u M_2^d} + ... \nonumber \\ V^{CKM}_{13} & = & + \frac{m^d_{13}}{M_3^d} - \frac{m^u_{13}}{M_3^u} - \frac{m^d_{23} {m_{12}^u}}{M_3^d M_2^u} + \frac{m^u_{12} {m_{23}^u}}{M_3^u M_2^u} + ... \nonumber \\ V^{CKM}_{31} & = & -\frac{{m^d_{13}}^*}{M_3^d} + \frac{{m^u_{13}}^*}{M_3^u} + \frac{{m^d_{12}}^*{m_{23}^d}^*}{M_2^d M_3^d} - \frac{{m^d_{12}}^*{m_{23}^u}^*}{M_3^u M_2^d} + ... \label{secmix} \end{eqnarray} These formulae are used in the text in conjunction with specific parametric forms for the off-diagonal terms in the up- and down-quark mass matrices.
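As a cross-check of the perturbative mixing formulae above, the following sketch (in Python) compares the leading-order estimate $V^L_{12} \simeq m_{12}/M_2$ with an exact singular-value decomposition. The numerical entries of the test matrix are invented for this illustration and are not taken from the text; only the hierarchical structure (diagonal entries $M_1 \ll M_2 \ll M_3$, small off-diagonal $m_{ij}$) matters.

```python
import numpy as np

# Illustrative hierarchical mass matrix: diagonal part M_i, small
# off-diagonal entries m_ij (all values invented for this check).
M = np.diag([0.005, 1.0, 175.0])
m = np.array([[0.0,   0.02,  0.05],
              [0.001, 0.0,   0.1 ],
              [0.002, 0.004, 0.0 ]])
A = M + m

# Exact left-handed mixing from the singular value decomposition
# A = U diag(s) W^T; the columns of U give the mass eigenstates.
U, s, Wt = np.linalg.svd(A)
U = U[:, np.argsort(s)]         # SVD sorts s descending; reorder as M1 < M2 < M3

v12_approx = m[0, 1] / M[1, 1]  # leading-order formula: V^L_12 ~ m_12 / M_2
v12_exact = abs(U[0, 1])
print(v12_approx, v12_exact)    # close for this hierarchical example
```

For a strongly hierarchical matrix the two determinations agree well; as the off-diagonal entries grow towards the diagonal ones, the neglected higher-order terms in the expansion become important.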
\section{Introduction} \begin{table*} \begin{tabular}{cccccccccc} \hline GRB & $\alpha_{1}$ & $\Gamma_{x,1}$ & T$_{1}$ & $\alpha_{2}$ & $\Gamma_{x,2}$ & T$_{2}$ & $\alpha_{3}$ & $\Gamma_{x,3}$ & Prob. Chance\\ & & & (s) & & & (s) & & & Improvement \\ \hline \multicolumn{10}{l|}{2 or more breaks}\\ 051221A & 1.43$^{+0.01}_{-0.01}$ & 2.03$^{+0.20}_{-0.19}$ & 2935$^{+714}_{-785}$ & 0.059$^{+0.22}_{-0.11}$ & 1.91$^{+0.23}_{-0.22}$ & 24370$^{+4631}_{-2823}$ & 1.41$^{+0.08}_{-0.07}$ & 2.06$^{+0.19}_{-0.18}$ & 1$\times10^{-4}$\% \\ 060313 & 1.84$^{+0.85}_{-0.37}$ & & 1.7$^{+1.2}_{-0.9}$ & 0.74$^{+0.08}_{-0.09}$ & 1.82$^{+0.16}_{-0.10}$ & 7467$^{+1511}_{-1491}$ & 1.65$^{+0.12}_{-0.11}$ & 2.50$^{+0.22}_{-0.28}$ & 6$\times10^{-20}$\% \\ 061201 & 3.09$^{+0.66}_{-0.46}$ & $^{ }_{ }$ & 1.85$^{+1.03}_{-0.53}$ & 0.54$^{+0.13}_{-0.14}$ & 1.44$^{+0.20}_{-0.19}$ & 2209$^{+802}_{-587}$ & 1.84$^{+0.17}_{-0.14}$ & 2.26$^{+0.38}_{-0.42}$ & 2$\times10^{-3}$\%\\ 070724 & 0.97$^{+0.12}_{-0.05}$ & 1.45$^{+0.73}_{-0.64}$ & 79$^{+10}_{-35}$ & -1.13$^{+0.69}_{-1.01}$ & 1.66$^{+0.24}_{-0.23}$ & 110$^{+2}_{-3}$ & 10$^{+0.00}_{-4.33}$ & $^{}_{}$ & 1$\times10^{-5}$\% \\ 090426 & 2.12$^{+0.36}_{-0.45}$ & & 33$^{+125}_{-3}$ & 0.21$^{+0.31}_{-0.34}$ & 1.85$^{+0.36}_{-0.24}$ & 260$^{+140}_{-127}$ & 1.04$^{+0.07}_{-0.06}$ & 2.14$^{+0.14}_{-0.14}$ & 2$\times10^{-14}$\% \\ 090515 & 2.76$^{+0.55}_{-0.10}$ & & 0.30$^{+0.00}_{-0.30}$ & 0.28$^{+0.07}_{-0.03}$ & 1.85$^{+0.17}_{-0.16}$ & 156$^{+9}_{-27}$ & 2.51$^{+0.59}_{-0.87}$ & 2.12$^{+0.39}_{-0.33}$ & 3$\times10^{-32}$\% \\ 100625A & 3.63$^{+0.01}_{-0.25}$ & & 1.90$^{+2.40}_{-1.10}$ & 0.36$^{+0.36}_{-0.63}$ & 2.09$^{+0.30}_{-0.29}$ & 222$^{+52}_{-50}$ & 3.15$^{+0.94}_{-0.85}$ & 2.66$^{+0.53}_{-0.83}$ & 0.11\% \\ 100702A & 1.67$^{+0.15}_{-0.18}$ & $^{ }_{ }$ & 0.59$^{+0.4}_{-0.4}$ & 0.74$^{+0.18}_{-0.18}$ & 2.05$^{+0.13}_{-0.13}$ & 194$^{+14}_{-6}$ & 4.86$^{+0.52}_{-0.26}$ & 2.41$^{+0.28}_{-0.26}$ & 2$\times10^{-43}$\% \\ 101219A & & & & 
0.79$^{+0.04}_{-0.04}$ & 1.33$^{+0.72}_{-0.75}$ & 195$^{+7}_{-12}$ & 10$^{+0.00}_{-2.40}$ & $^{}_{}$ & 9$\times10^{-3}$\% \\ 120305A & 2.88$^{+0.30}_{-0.23}$ & & 2.2$^{+1.4}_{-0.9}$ & 0.18$^{+0.29}_{-0.29}$ & 1.94$^{+0.21}_{-0.20}$ & 156$^{+11}_{-10}$ & 5.11$^{+0.55}_{-0.52}$ & 2.51$^{+0.72}_{-0.44}$ & 0.17\% \\ \hline \multicolumn{10}{l|}{1 break}\\ 051210 & & & & 0.65$^{+0.04}_{-0.04}$ & 1.21$^{+0.25}_{-0.15}$ & 137$^{+8}_{-6}$ & 3.52$^{+0.25}_{-0.19}$ & 3.11$^{+0.44}_{-0.65}$ & 1$\times10^{-8}$\% \\ 060801 & & & & 0.53$^{+0.05}_{-0.06}$ & 1.59$^{+0.23}_{-0.22}$ & 315$^{+21}_{-30}$ & 5.83$^{+0.86}_{-0.76}$ & 2.18$^{+0.63}_{-0.43}$ & 1$\times10^{-3}$\% \\ 070714A & 2.23$^{+0.18}_{-0.04}$ & $^{ }_{ }$ & 123$^{+4}_{-45}$ & 0.62$^{+0.06}_{-0.05}$ & 2.24$^{+0.33}_{-0.33}$ & & & & 4$\times10^{-6}$\% \\ 070809 & 1.42$^{+0.05}_{-0.04}$ & 1.65$^{+1.01}_{-0.40}$ & 233$^{+96}_{-68}$ & 0.52$^{+0.06}_{-0.06}$ & 1.35$^{+0.18}_{-0.13}$ & & & & 3$\times10^{-3}$\% \\ 080426 & 1.94$^{+0.15}_{-0.14}$ & $^{ }_{ }$ & 15$^{+18}_{-7}$ & 1.18$^{+0.05}_{-0.05}$ & 2.03$^{+0.26}_{-0.24}$ & & & & 0.018\% \\ 080905A & & & & 0.44$^{+0.05}_{-0.05}$ & 0.89$^{+0.56}_{-0.41}$ & 126$^{+45}_{-55}$ & 2.51$^{+0.30}_{-0.25}$ & 1.53$^{+0.29}_{-0.27}$ & 0.03\% \\ 080919 & & & & 0.86$^{+0.04}_{-0.03}$ & 2.31$^{+1.01}_{-0.83}$ & 351$^{+195}_{-55}$ & 4.83$^{+0.77}_{-0.84}$ & 2.35$^{+1.01}_{-0.83}$ & 0.02\% \\ 090510 & & & & 0.80$^{+0.01}_{-0.01}$ & 1.78$^{+0.14}_{-0.14}$ & 1412$^{+136}_{-192}$ & 2.18$^{+0.17}_{-0.17}$ & 2.22$^{+0.20}_{-0.16}$ & 1$\times10^{-6}$\% \\ 090621B & 4.06$^{+0.01}_{-0.49}$ & $^{ }_{ }$ & 5$^{+5}_{-1}$ & 0.72$^{+0.18}_{-0.16}$ & 3.40$^{+1.40}_{-1.00}$ & & & & 3$\times10^{-5}$\% \\ 091109B & 4.02$^{+0.01}_{-0.32}$ & $^{ }_{ }$ & 4$^{+1}_{-1}$ & 0.64$^{+0.08}_{-0.09}$ & 2.04$^{+0.55}_{-0.37}$ & & & & 4$\times10^{-4}$\% \\ 111020A & 1.63$^{+0.62}_{-0.05}$ & & 124$^{+38}_{-123}$ & 0.76$^{+0.05}_{-0.04}$ & 2.18$^{+0.49}_{-0.43}$ & & & & 0.02\% \\ 120521A & & & & 1.20$^{+0.05}_{-0.05}$ &
1.81$^{+0.36}_{-0.29}$ & 283$^{+13}_{-17}$ & 9.98$^{+0.02}_{-2.25}$ & & 0.12\% \\ \hline \multicolumn{10}{l|}{No breaks}\\ 050509B & 1.32$^{+0.06}_{-0.04}$ & 1.92$^{+1.13}_{-0.52}$ & & & & & & & \\ 050813 & 1.27$^{+0.04}_{-0.03}$ & 2.70$^{+4.30}_{-1.20}$ & & & & & & & \\ 050906 & $>$1.28 & & & & & & & & \\ 051105 & $>$1.33 & & & & & & & & \\ 060502B & 0.95$^{+0.04}_{-0.03}$ & 2.10$^{+2.77}_{-0.81}$ & & & & & & & \\ 061217 & 1.29$^{+0.08}_{-0.05}$ & 1.40$^{+1.13}_{-0.86}$ & & & & & & & \\ 070209 & $>$1.23 & & & & & & & & \\ 070429B & 1.54$^{+0.05}_{-0.04}$ & 3.10$^{+1.00}_{-1.40}$ & & & & & & & \\ 070729 & 1.29$^{+0.05}_{-0.04}$ & 1.62$^{+0.86}_{-0.43}$ & & & & & & & \\ 070810B & $>$1.36 & & & & & & & & \\ 071112B & $>$0.87 & & & & & & & & \\ 080702A & 1.13$^{+0.04}_{-0.04}$ & 1.99$^{+0.75}_{-0.67}$ & & & & & & & \\ 081024A & 0.99$^{+0.03}_{-0.02}$ & 1.82$^{+0.64}_{-0.55}$ & & & & & & & \\ 081101 & $>$1.21 & & & & & & & & \\ 081226 & 1.45$^{+0.05}_{-0.04}$ & 3.84$^{+0.96}_{-1.93}$ & & & & & & & \\ 090305A & 1.42$^{+0.05}_{-0.04}$ & & & & & & & & \\ 100117A & 0.97$^{+0.01}_{-0.01}$ & 2.59$^{+0.48}_{-0.40}$ & & & & & & & \\ 100206A & 1.80$^{+0.05}_{-0.04}$ & 3.30$^{+3.30}_{-1.30}$ & & & & & & & \\ 100628A & 1.00$^{+0.01}_{-0.01}$ & & & & & & & & \\ 110112A & 1.00$^{+0.06}_{-0.05}$ & 2.15$^{+0.39}_{-0.31}$ & & & & & & &\\ 111117A & 1.45$^{+0.05}_{-0.06}$ & 2.20$^{+0.40}_{-0.37}$ & & & & & & & \\ \end{tabular} \caption{The {\it Swift} SGRB sample and the results of broken power-law fits to the observed BAT-XRT data in the 0.3-10 keV band (as described in the text) and the X-ray spectral indices for each regime ($\Gamma_{x}$). These are subdivided into those with 2 or more significant breaks in their lightcurves, those with 1 break and those with no significant breaks. Where values are left blank there was insufficient data available to constrain them.
The last column shows the probability that this fit is a chance improvement on a simpler model.} \label{lcfits} \end{table*} \begin{table*} \begin{tabular}{cccccccc} \hline GRB & z & T$_{90}$ & $\Gamma_{\gamma}$ & Fluence & Host & Host offset & Optical Afterglow\\ & & (s) & & (10$^{-7}$ erg cm$^{-2}$ s$^{-1}$) & &(arcsec) & \\ \hline \multicolumn{8}{l|}{2 or more breaks}\\ 051221A$^{(1)}$ & 0.55 & 1.4$\pm$0.2 & 1.39$\pm$0.06 & 11.6$\pm$0.4 & y & 0.12$\pm$0.04 & Y \\ 060313$^{(2)}$ & (0.72) & 0.7$\pm$0.1 & 0.71$\pm$0.07 & 11.3$\pm$0.5 & ? & 0.4$\pm$0.6 & Y \\ 061201$^{(3)}$ & 0.111 & 0.8$\pm$0.1 & 0.81$\pm$0.15 & 3.3$\pm$0.3 & ? & 17 & Y \\ 070724A$^{(4)}$ & 0.46 & 0.4$\pm$0.04 & 1.81$\pm$0.33 & 0.30$\pm$0.07 & y & 0.7$\pm$2.1 & N \\ 090426$^{(5)}$ & 2.6 & 1.2$\pm$0.3 & 1.93$\pm$0.22 & 1.8$\pm$0.3 & y & 18 & Y \\ 090515$^{(6)}$ & (0.72) & 0.04$\pm$0.02 & 1.60$\pm$0.20 & 0.21$\pm$0.04 & n & - & Y \\ 100625A$^{(7)}$& (0.72) & 0.33$\pm$0.03 & 0.90$\pm$0.10 & 2.3$\pm$0.2 & y & 0$\pm$1.8 & N \\ 100702A$^{(8)}$ & (0.72) & 0.16$\pm$0.03 & 1.54$\pm$0.15 & 1.2$\pm$0.1 & n & - & N \\ 101219A$^{(9)}$ & 0.718 & 0.6$\pm$0.2 & 0.63$\pm$0.09 & 4.6$\pm$0.3 & y & - & N \\ 120305A$^{(10)}$ & (0.72) & 0.10$\pm$0.02 & 1.00$\pm$0.09 & 2.0$\pm$0.1 & n & - & N \\ 120521A$^{(11)}$ & (0.72) & 0.45$\pm$0.08 & 0.98$\pm$0.22 & 0.8$\pm$0.1 & n & - & N \\ \hline \multicolumn{8}{l|}{1 break}\\ 051210$^{(12)}$ & (0.72) & 1.4$\pm$0.2 & 1.10$\pm$0.30 & 0.8$\pm$0.1 & ? & 2.8$\pm$2.9 & N \\ 060801$^{(13)}$ & 1.13 & 0.5$\pm$0.1 & 0.47$\pm$0.24 & 0.8$\pm$0.1 & ? & 2.4$\pm$2.4 & N \\ 070714A$^{(14)}$ & (0.72) & 2.0$\pm$0.3 & 2.60$\pm$0.20 & 1.5$\pm$0.2 & n & - & N \\ 070809$^{(15)}$ & 0.219 & 1.3$\pm$0.1 & 1.69$\pm$0.22 & 1.0$\pm$0.1 & y & 20 & Y \\ 080426$^{(16)}$ & (0.72) & 1.7$\pm$0.4 & 1.98$\pm$0.13 & 3.7$\pm$0.3 & n & - & N \\ 080905A$^{(17)}$ & 0.122 & 1.0$\pm$0.1 & 0.85$\pm$0.24 & 1.4$\pm$0.2 & y & 9 & Y \\ 080919$^{(18)}$ & (0.72) & 0.6$\pm$0.1 & 1.10$\pm$0.26 & 0.7$\pm$0.1 & ? 
& - & Y \\ 090510$^{(19)}$ & 0.9 & 0.3$\pm$0.1 & 0.98$\pm$0.20 & 3.4$\pm$0.4 & y & 1 & Y \\ 090621B$^{(20)}$ & (0.72) & 0.14$\pm$0.04 & 0.82$\pm$0.23 & 0.7$\pm$0.1 & n & - & N \\ 091109B$^{(21)}$ & (0.72) & 0.30$\pm$0.03 & 0.71$\pm$0.13 & 1.9$\pm$0.2 & ? & 8 & Y \\ 111020A$^{(22)}$ & (0.72) & 0.40$\pm$0.09 & 1.37$\pm$0.26 & 0.7$\pm$0.1 & n & & N \\ \end{tabular} \caption{Properties of the SGRB sample, including T$_{90}$, $\Gamma_{\gamma}$ and Fluence (15--150 keV). These observed quantities, including host galaxy associations, offsets and optical afterglow detections, are taken from published papers and GCNs (references listed below); host offsets are quoted with errors where published. When the redshift is not known, the average redshift of 0.72 was used and this is indicated by brackets.} \label{candidates} $^{(1)}$\cite{cummings2005,soderberg2006} $^{(2)}$\cite{markwardt2006a,roming2006} $^{(3)}$\cite{markwardt2006b,stratta2007} $^{(4)}$\cite{parsons2007,berger2009,kocevski2010} $^{(5)}$\cite{sato2009,antonelli2009,xin2011} $^{(6)}$\cite{barthelmy2009,rowlinson2010b} $^{(7)}$\cite{barthelmy2010,tanvir2010} $^{(8)}$\cite{baumgartner2010} $^{(9)}$\cite{krimm2010, chornock2011} $^{(10)}$\cite{palmer2012} $^{(11)}$\cite{cummings2012} $^{(12)}$\cite{sato2005,laparola2006} $^{(13)}$\cite{sato2006,cucchiara2006} $^{(14)}$\cite{barthelmy2007} $^{(15)}$\cite{krimm2007,perley2008a} $^{(16)}$\cite{cummings2008a} $^{(17)}$\cite{cummings2008,rowlinson2010a} $^{(18)}$\cite{baumgartner2008,immler2008,covino2008} $^{(19)}$\cite{ukwatta2009,depasquale2010,mcbreen2010} $^{(20)}$\cite{krimm2009} $^{(21)}$\cite{markwardt2009, levan2009, malesani2009} $^{(22)}$\cite{sakamoto2011b} \end{table*} \begin{table*} \centering \begin{tabular}{cccccccc} \hline GRB & z & T$_{90}$ & $\Gamma_{\gamma}$ & Fluence & Host & Host offset & Optical Afterglow\\ & & (s) & & (10$^{-7}$ erg cm$^{-2}$ s$^{-1}$) & &(arcsec) & \\ \hline \multicolumn{8}{l|}{No breaks}\\ 050509B$^{(24)}$ & 0.23 & 0.024$\pm$0.009 &
1.50$\pm$0.40 & 0.2$\pm$0.1 & y & 17.9$\pm$3.4 & N \\ 050813$^{(25)}$ & (0.72) & 0.6$\pm$0.1 & 1.19$\pm$0.33 & 1.2$\pm$0.5 & n & - & N \\ 050906$^{(26)}$ & (0.72) & 0.13$\pm$0.02 & 1.91$\pm$0.42 & 0.6$\pm$0.3 & ? & - & N \\ 051105$^{(27)}$ & (0.72) & 0.028$\pm$0.004 & 1.38$\pm$0.35 & 0.2$\pm$0.05 & ? & - & N \\ 060502B$^{(28)}$ & (0.72) & 0.09$\pm$0.02 & 0.92$\pm$0.23 & 0.4$\pm$0.05 & n & - & N \\ 061217$^{(29)}$ & (0.72) & 0.3$\pm$0.05 & 0.96$\pm$0.28 & 0.46$\pm$0.08 & ? & - & N \\ 070209$^{(30)}$ & (0.72) & 0.1$\pm$0.02 & 1.55$\pm$0.39 & 0.11$\pm$0.03 & n & - & N \\ 070429B$^{(31)}$ & (0.72) & 0.5$\pm$0.1 & 1.71$\pm$0.23 & 0.63$\pm$0.1 & ? & - & ? \\ 070729$^{(32)}$ & (0.72) & 0.9$\pm$0.1 & 0.96$\pm$0.27 & 1$\pm$0.2 & ? & - & N \\ 070810B$^{(33)}$ & (0.72) & 0.08$\pm$0.01 & 1.44$\pm$0.37 & 0.12$\pm$0.03 & ? & - & N \\ 071112B$^{(34)}$ & (0.72) & 0.3$\pm$0.05 & 0.69$\pm$0.34 & 0.5$\pm$0.1 & n & - & N \\ 080702A$^{(35)}$ & (0.72) & 0.5$\pm$0.2 & 1.34$\pm$0.42 & 0.4$\pm$0.1 & n & - & N \\ 081024A$^{(36)}$ & (0.72) & 1.8$\pm$0.6 & 1.23$\pm$0.21 & 1.2$\pm$0.2 & n & - & N \\ 081101$^{(37)}$ & (0.72) & 0.2$\pm$0.02 & 1.25$\pm$0.20 & 0.62$\pm$0.1 & n & - & N \\ 081226$^{(38)}$ & (0.72) & 0.4$\pm$0.1 & 1.36$\pm$0.29 & 1.0$\pm$0.2 & n & - & N \\ 090305A$^{(39)}$ & (0.72) & 0.4$\pm$0.1 & 0.86$\pm$0.33 & 0.8$\pm$0.1 & n & - & Y \\ 100117A$^{(40)}$ & (0.72) & 0.30$\pm$0.05 & 0.88$\pm$0.22 & 0.9$\pm$0.1 & y & 0.6 & Y \\ 100206A$^{(41)}$ & (0.72) & 0.12$\pm$0.03 & 0.63$\pm$0.17 & 1.4$\pm$0.2 & ? 
& - & N \\ 100628A$^{(42)}$ & (0.72) & 0.36$\pm$0.009 & 1.26$\pm$0.25 & 0.3$\pm$0.1 & y & - & N \\ 110112A$^{(43)}$ & (0.72) & 0.5$\pm$0.1 & 2.14$\pm$0.46 & 0.3$\pm$0.1 & y & - & Y \\ 111117A$^{(44)}$ & (0.72) & 0.47$\pm$0.09 & 0.65$\pm$0.22 & 1.4$\pm$0.2 & y & 1.00$\pm$0.13 & N \\ \end{tabular} \contcaption{} $^{(24)}$\cite{barthelmy2005,gehrels2005} $^{(25)}$\cite{sato2005b} $^{(26)}$\cite{parsons2005, levan2008}, note this is a candidate extra-galactic magnetar giant flare $^{(27)}$\cite{cummings2005b, barbier2005, klose2005} $^{(28)}$\cite{sato2006b} $^{(29)}$\cite{parsons2006,ziaeepour2006} $^{(30)}$\cite{sakamoto2007} $^{(31)}$\cite{tueller2007, antonelli2007, holland2007} $^{(32)}$\cite{sato2007b, berger2007} $^{(33)}$\cite{sakamoto2007b, thone2007} $^{(34)}$\cite{fenimore2007} $^{(35)}$\cite{krimm2008} $^{(36)}$\cite{barthelmy2008} $^{(37)}$\cite{barthelmy2008b} $^{(38)}$Automated BAT analysis products $^{(39)}$\cite{krimm2009b, cenko2009} $^{(40)}$\cite{markwardt2010,levan2010,fong2010} $^{(41)}$\cite{sakamoto2010, miller2010, levan2010b, berger2010} $^{(42)}$\cite{barthelmy2010b, starling2010, berger2010b} $^{(43)}$\cite{barthelmy2011,levan2011} $^{(44)}$\cite{sakamoto2011,cenko2011,berger2011} \end{table*} \begin{figure*} \centering \includegraphics[width=7.5cm]{2_breaks.ps} \includegraphics[width=7.5cm]{1_breaks.ps} \includegraphics[width=7.5cm]{0_breaks.ps} \includegraphics[width=7.5cm]{0_breaks_part2.ps} \caption[BAT-XRT lightcurves for SGRB sample]{These are the BAT-XRT lightcurves (0.3 -- 10 keV, observed flux) sorted into 3 groups. a) These GRBs have 2 or more breaks in their lightcurve. b) GRBs with 1 break in their lightcurve. c) and d) GRBs with no significant breaks in their lightcurve.} \label{fig0.1} \end{figure*} \begin{figure} \centering \includegraphics[width=7.5cm]{new_bar_t.eps} \caption[Break times of ``canonical'' like SGRB lightcurves]{Histograms showing the break times for the SGRB lightcurves with a plateau phase. 
T$_{1}$ is the break from the steep decay phase to the plateau phase while T$_{2}$ marks the end of the plateau. The blue filled histograms correspond to the SGRB sample used in this paper and overplotted in red are the LGRB values determined by \citet{evans2009}.} \label{fig0.1c} \end{figure} \begin{figure} \centering \includegraphics[width=7.5cm]{new_bar.eps} \caption{Histograms showing the temporal indices of the SGRB lightcurves with a plateau phase. $\alpha_1$ is the initial steep decay phase from the last decay in the prompt emission. $\alpha_2$ are the plateau and shallow decay phase slopes. $\alpha_3$ is the final afterglow decay slope. The filled histograms correspond to the SGRB sample used in this paper and overplotted are the LGRB values determined by \citet{evans2009}.} \label{fig0.1b} \end{figure} Following the launch of the {\it Swift} satellite \citep{gehrels2004}, it has been possible to place tighter constraints on the nature of short gamma-ray bursts (SGRBs). The detection of their faint and rapidly fading X-ray afterglows has led to the identification of optical afterglows and, in many cases, candidate host galaxies \citep[for example GRB 050509B, ][]{gehrels2005, hjorth2005}. These observations have provided significant support for the popular compact binary merger progenitor theories, i.e. the coalescence of two neutron stars (NS) or a NS and a black hole (BH) \citep{lattimer1976, eichler1989,narayan1992}. However, without the coincident observation of gravitational waves by observatories like LIGO (Laser Interferometer Gravitational-wave Observatory), we are missing the supporting ``smoking gun'' observation for this progenitor theory. Observed features in X-ray lightcurves suggest longevity of the central engine of GRBs, for example late time flares \citep[e.g. ][]{curran2008, margutti2010, bernardini2011} and plateaus \citep[e.g. ][]{nousek2006, zhang2006}.
GRBs whose X-ray lightcurves have a steep decay and a plateau phase followed by a standard afterglow phase have been identified as having ``canonical'' lightcurves \citep{nousek2006, obrien2006, zhang2006, evans2009}. The steep decay phase is associated with high-latitude emission from the prompt phase, and is followed by late-time emission that produces the plateau phase \citep{tagliaferri2005, goad2006}. The fluence of this plateau can be comparable to the fluence of the prompt emission \citep{obrien2006, margutti2012}; typically, plateaus begin $10^2$ -- $10^3$ s and end $10^3$ -- $10^4$ s after the trigger time. The plateau is thought to provide evidence of ongoing central engine activity \citep{nousek2006,zhang2006}. \cite{evans2009} studied 162 GRBs in the {\it Swift} sample, identifying a ``canonical'' lightcurve in 42\% of GRB X-ray lightcurves, including 2 (051221A and 060313) of the 11 SGRBs analysed. Although studies of flares and plateaus are typically conducted for LGRBs, fainter versions are evident in many SGRB X-ray lightcurves, suggesting a long lived central engine \citep[e.g.][]{margutti2011}. This is problematic for SGRB progenitor theories, as the accretion powering the relativistic jets is expected to end within a few seconds \citep{rezzolla2011} and only a small fraction of the merger mass is available \citep[0.01 -- 0.1 M$_{\odot}$, although this is dependent on the NS equation of state;][]{lee2007}. Additionally, the accretion disk is thought to be destroyed after a few seconds \citep[e.g.][]{metzger2008}. There have been studies of fallback accretion, in which the NS is shredded and parts ($\le$ 10\% of the original disk mass) are flung into highly eccentric orbits and accrete onto the central engine at late times, producing flares in the X-ray lightcurve \citep{rosswog2007}.
Flares may also be caused by Toomre instabilities within the accretion disk \citep{perna2006}, although this does not explain plateau emission or late time flares, as SGRB accretion disks are expected to be accreted within the first few seconds. \cite{cannizzo2011} have attempted to explain plateaus by introducing a band of material at a large distance from the central engine. They suggest that the required reservoir of material could be provided by the accretion disk moving outwards (due to its large angular momentum), or by ejecta thrown out during the merger into highly eccentric orbits that circularise to form an accretion disk. An alternative theory is that during some GRBs a millisecond pulsar (magnetar) may be formed with enough rotational energy to prevent gravitational collapse \citep{usov1992, duncan1992, dai1998a, dai1998b, zhang2001}. The rotational energy is released as gravitational waves and electromagnetic radiation, causing the magnetar to spin down. If the magnetar is sufficiently massive, it may reach a critical point at which differential rotation is no longer able to support it, resulting in collapse to a BH. Assuming constant radiative efficiency, the energy injection from the magnetar would produce a plateau in the X-ray light curve \citep{zhang2001}, which would be followed by a steep decay if the magnetar collapses to a BH. The progenitor of this system is typically thought to be a collapsar, and LGRB candidates have been identified by \cite{troja2007} and \cite{lyons2009}. However, it has also been proposed that such a magnetar could be formed by the merger of two neutron stars \citep{dai1998a, dai2006, yu2007} or via the accretion induced collapse (AIC) of a white dwarf (WD) \citep{nomoto1991, usov1992, levan2006, metzger2008b}. A candidate event for this is GRB 090515, with an unusual X-ray plateau followed by a steep decay \citep{rowlinson2010b}.
The likelihood of producing this event is dependent on the equation of state of neutron stars. \cite{morrison2004} studied the effect that the equation of state of a NS and rotation would have on the remnant of a compact merger, i.e. whether a NS or a BH is formed \citep[see also][]{shibata2006}. They showed that, even for the harder nuclear equations of state, the rotation of the NS could increase the maximum mass by $\sim50\%$ and hence mergers could often result in a NS. Considering the parameters of 6 known Galactic NS binaries and a range of equations of state, \cite{morrison2004} predict that the majority of mergers of the known binaries will form a NS. The recent discovery of a 1.97 M$_{\odot}$ NS \citep{demorest2010} provides further supporting evidence of the possibility that high mass magnetars can be formed from NS mergers (the maximum mass of NSs is dependent on the very uncertain NS equation of state, so this is a conservative lower limit on the maximum mass of a NS). \cite{ozel2010b} show that, for a maximum non-rotating NS mass of $M_{\rm max} = 2.1$ M$_{\odot}$, the merger of two NSs with a total mass $\le 1.4M_{\rm max}$ will have a delayed collapse to a BH (i.e. a magnetar phase). They also predict a regime in which the merged remnant does not collapse to form a BH; in this case the total mass is $\le 1.2M_{\rm max}$. If the maximum NS mass is 2.1 M$_{\odot}$, then the merger of two NSs of masses up to 1.3 M$_{\odot}$ would result in a stable magnetar, and the merger of two NSs with larger masses (up to 1.5-1.7 M$_{\odot}$) would form an unstable magnetar. As the majority of observed NSs have masses $\sim$1.4 M$_{\odot}$, it seems reasonable to predict that many NS mergers could result in a magnetar. The stability of the final magnetar is dependent on the maximum possible mass of a NS, which is still uncertain.
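The mass thresholds quoted above can be encoded in a small classifier. The following sketch (in Python) applies the $1.2M_{\rm max}$ and $1.4M_{\rm max}$ criteria of \cite{ozel2010b} with $M_{\rm max}$ as an assumed input; mass lost to the disk and ejecta during the merger is neglected, so the classifications are indicative only.

```python
# Remnant classifier based on the criteria quoted from Ozel et al. (2010):
# a stable NS if M_total <= 1.2 M_max, a magnetar with delayed collapse
# to a BH if M_total <= 1.4 M_max, otherwise a prompt black hole.
# Mass lost to the disk/ejecta is neglected, so this is indicative only.
def merger_remnant(m1, m2, m_max=2.1):
    """Classify the remnant of an m1 + m2 NS merger (masses in M_sun)."""
    total = m1 + m2
    if total <= 1.2 * m_max:
        return "stable magnetar"
    if total <= 1.4 * m_max:
        return "unstable magnetar (delayed collapse)"
    return "prompt black hole"

print(merger_remnant(1.2, 1.2))  # stable magnetar
print(merger_remnant(1.4, 1.4))  # unstable magnetar (delayed collapse)
print(merger_remnant(1.6, 1.6))  # prompt black hole
```

With $M_{\rm max} = 2.1$ M$_{\odot}$ this reproduces the statements in the text: two $\sim$1.4 M$_{\odot}$ NSs, the typical observed mass, merge to form an unstable magnetar rather than a prompt BH.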
Its lifetime depends both on the rate at which additional mass (if any) is accreted after formation, and on the rate at which angular momentum is extracted by e.g. gravitational waves or magnetic torques \citep[e.g.][]{shibata2006, oechslin2007}. In this paper, we consider all {\it Swift} detected SGRBs, T$_{90} \le$ 2 s, observed until May 2012 with an X-ray afterglow or which were promptly slewed to and observed by the X-ray Telescope \citep[XRT;][]{burrows2005}. This allows the inclusion of SGRBs without an X-ray afterglow but which do have a constraining upper limit. For all the SGRBs, we analysed the BAT \citep[Burst Alert Telescope;][]{barthelmy2005b} data by creating lightcurves with a variety of binnings in signal-to-noise ratio and time, looking for extended emission detected consistently at the 3$\sigma$ level over more than 30 s (the SGRBs with identified extended emission are 050724, 050911, 051227, 060614, 061006, 061210, 070714B, 071227, 080123, 080503, 090531B, 090715A, 090916 and 111121A). This procedure recovered all of the extended emission bursts identified by \cite{norris2010}. Hence, our selection criteria exclude SGRBs with extended emission, which may share a common progenitor with SGRBs, although this remains uncertain. This sample is used to identify those with a plateau phase in their lightcurves, suggesting ongoing central engine activity. These results are discussed in section 2. A sub-sample with sufficient data is then studied for the signature of a magnetar (with or without collapse to a BH), which may signify the coalescence of two NSs. If found, this would provide additional support to this popular progenitor theory, although forming a magnetar via the AIC of a WD is not ruled out. The magnetar model is considered in section 3, with a description of the model and sample used and analysis of the available data. A discussion of the implications, e.g.
for gravitational waves, is given in section 4 and our conclusions are given in section 5. Throughout this work, we adopt a cosmology with $H_0 = 71$ km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_m = 0.27$, $\Omega_\Lambda = 0.73$. Errors are quoted at 90\% confidence for X-ray data and at 1$\sigma$ for fits to the magnetar model. \section{Plateau phases in SGRB lightcurves} \begin{figure} \centering \includegraphics[width=7.5cm]{new_fluence_v_100s_flux.ps} \includegraphics[width=7.5cm]{new_peakflux_v_100s_flux.ps} \caption{(a) The BAT fluence (15 -- 150 keV) plotted against the XRT unabsorbed flux at 100 s (0.3 -- 10 keV). Blue stars have 2 or more significant breaks in their lightcurves, green circles have 1 break and red triangles have no significant breaks in their lightcurves. (b) The BAT peak photon flux (15-150 keV) against the XRT unabsorbed flux at 100 s in the observer frame (0.3 -- 10 keV). Symbols are as in (a).} \label{fig0.2} \end{figure} \begin{figure} \centering \includegraphics[width=7.5cm]{new2_fluence_prompt_v_plateau.ps} \caption{The prompt BAT 15 -- 150 keV fluence in comparison to the shallow decay phase unabsorbed X-ray fluence extrapolated to the 15 -- 150 keV energy band. Symbols are as defined in Figure \ref{fig0.2} and the black line shows where the shallow decay phase fluence is equal to the prompt fluence.} \label{fig0.8} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{new2_gamma_alpha1.ps} \includegraphics[width=7cm]{new2_gamma_alpha2.ps} \includegraphics[width=7cm]{new2_gamma_alpha3.ps} \caption{The spectral index $\beta$ versus the temporal index $\alpha$ for the three regimes of the lightcurves with a plateau phase: (a) steep decay phase, (b) plateau phase and (c) standard afterglow phase. Where there is no XRT spectrum available for the steep decay phase, the BAT spectrum is used. 
All symbols are as defined in Figure \ref{fig0.2}, the solid lines and grey regions show the closure relations as defined by \citet{zhang2004}, and the black dashed line shows where $\alpha = \beta + 2$.} \label{fig0.9} \end{figure} \begin{figure} \centering \includegraphics[width=7.5cm]{new2_plateau_flux_v_duration.ps} \includegraphics[width=7.5cm]{new2_plateau_Lx_v_duration.ps} \caption{(a) The plateau phase unabsorbed flux versus the duration of this phase. Symbols are as defined in Figure \ref{fig0.2}. (b) The plateau phase luminosity, using published redshifts (filled symbols) or the average redshift (open symbols), versus the restframe duration of this phase. The light grey data points are the \citet{dainotti2010} sample of LGRBs. The black line shows the correlation between the luminosity and duration for the SGRB and LGRB samples, which is consistent with the relationship found by \citet{dainotti2010}.} \label{fig0.7} \end{figure} Out of our sample of 43 SGRBs, shown in Table \ref{lcfits}, only 6 did not have a detected X-ray afterglow (GRBs 050906, 051105, 070209, 070810B, 071112B and 081101). Hence, $\sim$86\% of {\it Swift} SGRBs with a prompt slew have detected X-ray afterglows. The observed properties of the SGRB sample are given in Table \ref{candidates}. The 0.3 -- 10 keV observed flux X-ray lightcurves were obtained from the automated analysis page for each individual SGRB from the UK {\it Swift} Science Data Centre website \citep{evans2007, evans2009}. The Burst Alert Telescope (BAT) lightcurves were created using standard pipelines in the {\sc Heasoft} package with 3$\sigma$ significance bins. The 15 -- 150 keV BAT spectra were fitted in {\sc XSpec} for each SGRB and then extrapolated to obtain the flux at 0.3 -- 10 keV. Using this extrapolated flux and the net count-rate in the BAT spectrum, each count-rate data point in the BAT lightcurve was scaled to a 0.3 -- 10 keV flux using a simple power law spectral model.
These were combined with the XRT lightcurves to make the BAT-XRT lightcurves used in this analysis. For later comparison with the magnetar model, these lightcurves were converted into unabsorbed flux lightcurves and then into restframe 1 -- 10000 keV luminosity lightcurves using a k-correction \citep{bloom2001}, giving an approximation to a bolometric lightcurve. The range of k-corrections obtained is typically consistent with the range 0.4 -- 7 obtained by \cite{bloom2001}. However, there are a small number of large k-corrections (particularly for GRBs 070809, 080905A and 101219A), suggesting the spectrum may be poorly constrained over the frequency range to which we extrapolate. If no redshift is known, the mean SGRB redshift is used, $z\sim0.72$ \citep[excluding the redshift for GRB 061201 as the host galaxy association remains uncertain;][Tunnicliffe et al. in prep]{stratta2007}, and the implications of choosing this average redshift are discussed in Section 3.3. All the SGRB observed BAT-XRT lightcurves were fitted with multiple power laws from the final decay phase in the BAT prompt emission throughout the total X-ray afterglow using {\sc QDP}\footnote{https://heasarc.gsfc.nasa.gov/docs/software/ftools/others/qdp/qdp.html}. These fits were then used to identify those with a ``canonical''-like lightcurve. An XRT spectrum was created for each region of the lightcurve using the automatic data products on the UK {\it Swift} Science Data Centre website \citep{evans2007, evans2009}. The SGRB lightcurves are shown in Figure \ref{fig0.1}. We assume that $F_{\nu} \propto \nu^{-\beta} t^{-\alpha}$, where $\beta=\Gamma-1$ is the spectral index, $\Gamma$ is the photon index ($\Gamma_{\gamma}$ is the photon index measured using BAT and $\Gamma_{x}$ is the photon index measured using XRT) and $\alpha$ is the temporal index.
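For an unbroken power-law photon spectrum, the band extrapolation and k-correction described above reduce to a ratio of band integrals. The sketch below is illustrative only (the function names are ours, not from the analysis pipeline) and assumes a simple power law with no spectral breaks:

```python
import numpy as np

def band_energy_integral(gamma, e1, e2):
    """Integral of E * N(E) dE over [e1, e2] for a power-law photon
    spectrum N(E) ~ E**-gamma, up to a common normalisation."""
    if np.isclose(gamma, 2.0):          # special case: integrand is 1/E
        return np.log(e2 / e1)
    return (e2**(2.0 - gamma) - e1**(2.0 - gamma)) / (2.0 - gamma)

def k_correction(gamma, z, obs_band=(0.3, 10.0), rest_band=(1.0, 1e4)):
    """Ratio of restframe 1 -- 10000 keV flux to observed 0.3 -- 10 keV
    flux for an unbroken power law (in the spirit of Bloom et al. 2001)."""
    # The observed energies that redshift into rest_band:
    lo, hi = rest_band[0] / (1.0 + z), rest_band[1] / (1.0 + z)
    return (band_energy_integral(gamma, lo, hi)
            / band_energy_integral(gamma, *obs_band))

# e.g. a Gamma = 2 spectrum at the mean SGRB redshift z ~ 0.72
k = k_correction(2.0, 0.72)   # ~2.6, within the 0.4 -- 7 range quoted
```

The same band-integral ratio (without the redshift factor) gives the scaling used to convert BAT count rates to 0.3 -- 10 keV fluxes.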
We define the steep decay phase following the prompt emission to have a power law decay of $\alpha_{1}$, after which the decay can break to a decay of $\alpha_{2}$ and a further break to $\alpha_{3}$. We always define $\alpha_{2}$ to be the shallowest decay phase, as this allows the direct comparison of all plateau phases in the subsequent analysis. All SGRBs with 1 or more breaks in their lightcurves have a plateau phase. In three cases there are more than two breaks in the lightcurve. GRB 070724 has a third break at $T_3 = 152^{+18}_{-5}$ ($\alpha_{4} = 1.15^{+0.07}_{-0.06}$ and $\Gamma_{4} = 1.45^{+0.48}_{-0.29}$), GRB 090515 has a third break at $T_4=241^{+8}_{-10}$ ($\alpha_4=10^{+0}_{-0.97}$) and GRB 101219A has a break at $T_3 = 241^{+15}_{-13}$ ($\alpha_3 = 1.88^{+0.23}_{-0.25}$ and $\Gamma_4 = 1.63^{+0.37}_{-0.49}$). The 6 GRBs which were undetected by XRT are fitted with lower limits for $\alpha_{1}$ using the shallowest decay allowed by the BAT data and the XRT upper limit. In Table \ref{lcfits}, we provide the lightcurve fits for all the SGRBs in the sample. An F-test was conducted using the $\chi^2$ and degrees of freedom for each fit to determine the probability that the fit is a chance improvement on a simpler model (i.e. an F-test between the model provided in Table \ref{lcfits} and a model with one fewer break in the lightcurve). We utilise the method described in \cite{evans2009} to determine the best fit, i.e. the model with the most breaks for which the probability of being a chance improvement on a simpler model is $\le$0.3\%. There are several caveats which need to be considered with the results in this Section and for the magnetar fits in Section 3. As SGRB afterglows are often faint and fade rapidly, these lightcurves and spectra can be poorly sampled, giving large errors on the values in Table \ref{lcfits}.
This could also cause breaks in the lightcurve to be missed due to large bin sizes \citep[bins typically contain 20 photons in PC mode data so bins could have long durations;][]{evans2007}. Additionally, the {\it Swift} satellite slews to observe GRBs after detection, leading to a characteristic gap between the BAT data and the XRT data, and XRT can only observe for short windows, due to Earth occultation during orbits, giving further gaps which could also hide features in the lightcurves. Using the broken power-law fit method, we find that 22 SGRBs ($\sim$50\%) are consistent with having a plateau phase in their lightcurves, although the plateau phase is not always directly observed due to the gap in the lightcurve prior to the XRT observations. It is hard to rule out plateau phases in other cases (since the plateau phase could be missed by the sampling or lost due to the faintness of the afterglow). Those which were undetected by XRT do not require extreme decay slopes relative to the rest of the sample of SGRBs. The break times of the SGRBs with plateaus typically occur orders of magnitude earlier than for the canonical LGRBs (as shown in Figure \ref{fig0.1c}). This may be caused by our use of BAT and XRT data, whereas \cite{evans2009} use only the XRT data and are only able to find plateaus at times after XRT has started observing. However, it is very rare for XRT to not observe the steep decay phase for LGRBs, so the inclusion of BAT data does not affect the plateau fits \citep[e.g.][]{obrien2006, willingale2007}. Additionally, \cite{evans2009} discussed whether their type b and type c LGRBs can be canonical (i.e. those which are steep then shallow or shallow then steep). They conclude that they are not, based on the plateau decay rates and break times for type b and the relative BAT versus XRT fluxes for type c. For SGRBs, in contrast, the BAT observations need to be included in order to identify the steep decay phase.
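The F-test used above to decide whether an extra lightcurve break is justified can be sketched as follows; the $\chi^2$ and degrees-of-freedom values below are hypothetical placeholders, not fits from Table \ref{lcfits}:

```python
from scipy.stats import f as f_dist

def chance_improvement(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """F-test probability that an extra break is a chance improvement:
    the chi^2 drop per added parameter is compared to the more complex
    model's reduced chi^2."""
    delta_dof = dof_simple - dof_complex
    f_stat = (((chi2_simple - chi2_complex) / delta_dof)
              / (chi2_complex / dof_complex))
    return f_dist.sf(f_stat, delta_dof, dof_complex)

# Hypothetical fit statistics: one extra break costs two parameters
p = chance_improvement(chi2_simple=60.0, dof_simple=40,
                       chi2_complex=40.0, dof_complex=38)
# the extra break is accepted when p <= 0.003 (the 0.3% threshold above)
```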
Histograms showing the various SGRB decay slopes for those with plateau phases are shown in Figure \ref{fig0.1b}, together with the values for canonical LGRBs determined by \cite{evans2009}. The values for $\alpha_1$ and $\alpha_2$ are consistent with the LGRB sample, but the final decay phase ($\alpha_3$) is typically steeper than for the LGRB counterparts \citep[consistent with the results obtained by ][ where the overall decay of SGRBs is typically found to be steeper than LGRBs, however their sample of SGRBs used for this is dominated by SGRBs with extended emission]{margutti2012}. Using a Kolmogorov-Smirnov test between the values for LGRBs and SGRBs, the values for $\alpha_1$ are consistent with being drawn from the same distribution (p-value = 0.07), the values for $\alpha_2$ are unlikely to be from the same distribution (p-value = 0.003) and the values for $\alpha_3$ are highly unlikely to be drawn from the same distribution (p-value = 0.00007). In the following analysis we consider SGRBs with 2 or more breaks in their lightcurves (blue stars, the GRBs in Figure \ref{fig0.1}a), 1 break (green circles, the GRBs in Figure \ref{fig0.1}b) and those with no breaks in their lightcurves (red triangles, the GRBs in Figure \ref{fig0.1}c and d). The BAT fluence (15 -- 150 keV) of these GRBs is plotted against their 0.3 -- 10 keV flux at 100 s in Figure \ref{fig0.2}a. Those GRBs with a plateau tend to be clustered at somewhat higher fluences and their X-ray fluxes are significantly higher at 100 s ($\sim 10^{-11}$ -- $10^{-9}$ erg cm$^{-2}$ s$^{-1}$). The GRBs which do not have a plateau phase in their lightcurves tend to have faint X-ray afterglows at 100 s ($\le 2 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$) and relatively low fluences ($\le 2 \times 10^{-7}$ erg cm$^{-2}$). Figure \ref{fig0.2}b shows there is a wide variation in XRT flux at 100 s for SGRBs with similar prompt fluxes.
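A two-sample Kolmogorov-Smirnov comparison of the kind quoted above can be run with {\sc scipy}; the decay-index arrays below are randomly generated stand-ins for the fitted $\alpha_3$ values, for illustration only:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical decay-index samples standing in for the fitted
# alpha_3 values of the LGRB and SGRB populations
rng = np.random.default_rng(0)
alpha3_lgrb = rng.normal(1.5, 0.3, 30)
alpha3_sgrb = rng.normal(2.4, 0.4, 20)

stat, p_value = ks_2samp(alpha3_lgrb, alpha3_sgrb)
# a small p-value means the two samples are unlikely to be drawn
# from the same parent distribution
```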
\cite{obrien2006} and \cite{willingale2007} found that the prompt fluence is comparable to the plateau fluence for LGRBs. In order to compare this result to our sample, we took the average flux for the plateau phase and multiplied it by the time at which the decay broke to a more typical afterglow (assuming this component started at the initial trigger time), giving the 0.3 -- 10 keV fluence. This fluence was then converted to a 15 -- 150 keV fluence using the spectral index and fitted absorption. Figure \ref{fig0.8} shows the prompt and plateau fluences are generally comparable, which is consistent with the result obtained for LGRBs. There are four significant outliers (GRBs 061201, 070724A, 080905A and 090515), lying significantly above the one-to-one line, whose plateaus are significantly more energetic than their prompt emission. Figure \ref{fig0.9} shows the spectral indices plotted against the temporal indices for the lightcurves with a plateau phase (values all tabulated in Table \ref{lcfits}). Also plotted are the closure relations for the slow cooling regime (grey band) and the fast cooling regime (solid black lines) \citep{zhang2004}. These show the same behaviour identified by \cite{evans2009} for the canonical sample of LGRBs. In particular, Figure \ref{fig0.9}b shows evidence of energy injection during the plateau phase as described by \cite{evans2009}. These figures can be compared to updated values for the whole GRB sample using the UK Swift Science Data Centre \citep[www.swift.ac.uk/xrt{\_}live{\_}cat;][]{evans2009}. \cite{dainotti2010} identified a correlation between the plateau phase luminosity and duration for LGRBs with a canonical lightcurve. Using redshifts where available or the average SGRB redshift ($z\sim0.72$) and a k-correction \citep{bloom2001}, we calculated the luminosity and restframe durations for the SGRB sample (XRT fluxes used are the observed values which have not been corrected for absorption).
These results are plotted in Figure \ref{fig0.7} and the luminosity -- duration correlation is identified. The fitted correlation for the SGRB and LGRB sample, $b=-1.29\pm0.12$, $\log(a)= 48.74\pm0.44$, intrinsic scatter $\sigma_{V}=9\times10^{-11}\pm 0.01$ \citep[where $L_X=aT_{\rm plateau}^b$ and the uncertainties on each datapoint and an intrinsic scatter are accounted for in the fit using the method described in ][]{dagostini2005}, is consistent with that obtained for the LGRB sample \citep[$-1.06\pm0.28$, $51.06\pm1.02$, although they did not account for the intrinsic scatter;][]{dainotti2010}. The SGRB plateau phases are typically more luminous and the plateau is shorter in duration than the LGRB counterparts. This may be a selection effect due to the inclusion of BAT observations in this analysis, and hence finding earlier plateaus. However, when BAT data are included in LGRB analysis the plateau properties do not significantly change \citep{obrien2006, willingale2007}; additionally, there is a shortage of long duration plateaus observed in the SGRB sample. \cite{cannizzo2011} argue that the relationship identified by \cite{dainotti2010} is dominated by selection effects at z$>1.5$ as there is an observational bias against faint plateaus due to the limiting XRT flux. However, SGRBs are typically at lower redshift (the SGRBs with an observed redshift in our sample have an average redshift of z$\sim$0.72), so our sample lies well within the region which is not dominated by selection effects. The plateau phases of GRB lightcurves are typically explained as ongoing central engine activity, for example ongoing accretion onto the central BH. However, ongoing accretion is problematic for NS-NS and NS-BH merger theories as there is insufficient surrounding material to maintain this accretion \citep{lee2007}.
Fallback accretion from material on highly eccentric orbits has been postulated to resolve this \citep{rosswog2007, kumar2008, cannizzo2011}, however it is unclear how to produce the required reservoir of material at a fixed radius. In the remainder of this paper, we suggest that the plateau phases could be powered by a magnetar formed via the merger of two NSs. \section{Magnetar model} \begin{table*} \begin{tabular}{cccccccc} \hline GRB & E$_{iso}$ & P$_{-3}$ & B$_{15}$ & $\alpha_1 = \Gamma_{\gamma} + 1$ & Collapse time & Plateau Luminosity & Plateau Duration\\ & (erg) & (ms) & ($10^{15}$ G) & & (s) & (erg s$^{-1}$) & (s) \\ \hline \multicolumn{8}{l|}{Magnetar candidates}\\ 051221A & 1.83$^{+0.45}_{-0.35}\times$10$^{52}$ & 7.79$^{+0.31}_{-0.28}$ & 1.80$^{+0.14}_{-0.13}$ & (1.39$^{+0.01}_{-0.02}$) & - & 8.8$^{+3.0}_{-2.3}\times$10$^{45}$ & 38300$^{+9800}_{-7700}$ \\ 060313 & 3.12$^{+1.06}_{-0.79}\times$10$^{53}$ & 3.80$^{+0.15}_{-0.13}$ & 3.58$^{+0.24}_{-0.22}$ & 1.71 & - & 6.2$^{+1.9}_{-1.5}\times$10$^{47}$ & 2310$^{+520}_{-420}$\\ 060801 & 1.17$^{+1.79}_{-0.71}\times$10$^{53}$ & 1.95$^{+0.15}_{-0.13}$ & 11.24$^{+1.93}_{-1.78}$ & 1.47 & 326 & 8.7$^{+7.1}_{-4.1}\times$10$^{49}$ & 62$^{+39}_{-23}$\\ 070724A & 1.13$^{+1.87}_{-0.40}\times$10$^{50}$ & 1.80$^{+1.04}_{-0.38}$ & 28.72$^{+1.42}_{-1.29}$ & (1.16$^{+0.10}_{-0.06}$) & 90 & 7.9$^{+14.5}_{-6.7}\times$10$^{50}$ & 8$^{+14}_{-4}$ \\ 070809 & 8.87$^{+9.06}_{-3.48}\times$10$^{49}$ & 5.54$^{+0.48}_{-0.43}$ & 2.06$^{+0.48}_{-0.42}$ & (1.68$^{+0.11}_{-0.08}$) & - & 4.5$^{+5.0}_{-2.5}\times$10$^{46}$ & 14800$^{+12800}_{-6500}$\\ 080426 & 3.48$^{+0.67}_{-0.24}\times$10$^{51}$ & 6.17$^{+0.28}_{-0.24}$ & 8.94$^{+1.53}_{-1.17}$ & 2.98 & - & 5.5$^{+3.3}_{-2.0}\times$10$^{47}$ & 976$^{+436}_{-319}$ \\ 080905A & 6.16$^{+12.3}_{-4.03}\times$10$^{50}$ & 9.80$^{+0.78}_{-0.77}$ & 39.26$^{+10.24}_{-12.16}$ & (0.69$^{+0.05}_{-0.10}$) & 274 & 1.8$^{+2.0}_{-1.1}\times$10$^{48}$ & 128$^{+185}_{-60}$ \\ 080919 & 
5.18$^{+9.34}_{-3.26}\times$10$^{51}$ & 7.68$^{+0.91}_{-0.44}$ & 37.36$^{+13.92}_{-14.67}$ & 2.10 & 421 & 4.0$^{+5.6}_{-3.1}\times$10$^{48}$ & 87$^{+207}_{-46}$ \\ 081024 & 5.65$^{+7.53}_{-3.16}\times$10$^{51}$ & 2.30$^{+0.12}_{-0.11}$ & 31.04$^{+2.82}_{-2.35}$ & 2.33 & 125 & 3.4$^{+1.5}_{-1.0}\times$10$^{50}$ & 11$^{+3}_{-3}$ \\ 090426 & 3.98$^{+1.30}_{-0.03}\times$10$^{52}$ & 1.89$^{+0.08}_{-0.07}$ & 4.88$^{+0.88}_{-0.90}$ & 2.93 & - & 1.9$^{+1.2}_{-0.8}\times$10$^{49}$ & 310$^{+190}_{-110}$\\ 090510 & 5.76$^{+6.86}_{-3.10}\times$10$^{52}$ & 1.86$^{+0.04}_{-0.03}$ & 5.06$^{+0.27}_{-0.23}$ & 1.98 & - & 2.1$^{+0.4}_{-0.4}\times$10$^{49}$ & 277$^{+40}_{-35}$\\ 090515 & 3.44$^{+3.55}_{-1.55}\times$10$^{50}$ & 2.05$^{+0.06}_{-0.05}$ & 12.27$^{+1.14}_{-1.11}$ & 2.60 & 175 & 8.5$^{+2.7}_{-2.2}\times$10$^{49}$ & 57$^{+16}_{-12}$ \\ 100117A & 1.42$^{+2.08}_{-0.84}\times$10$^{52}$ & 1.13$^{+0.07}_{-0.06}$ & 11.89$^{+0.50}_{-0.52}$ & 1.88 & - & 8.7$^{+3.0}_{-2.4}\times$10$^{50}$ & 19$^{+4}_{-3}$ \\ 100702A & 2.28$^{+1.46}_{-0.80}\times$10$^{51}$ & 1.29$^{+0.22}_{-0.12}$ & 19.50$^{+0.24}_{-0.76}$ & 2.54 & 178 & 1.4$^{+0.7}_{-0.7}\times$10$^{51}$ & 9$^{+4}_{-2}$ \\ 101219A & 1.69$^{+0.79}_{-0.54}\times$10$^{53}$ & 0.95$^{+0.05}_{-0.05}$ & 2.81$^{+0.47}_{-0.39}$ & (1.22$^{+0.03}_{-0.03}$) & 138 & 9.7$^{+6.7}_{-3.8}\times$10$^{49}$ & 234$^{+116}_{-80}$\\ 111020A & 1.98$^{+2.55}_{-0.99}\times$10$^{51}$ & 7.76$^{+1.06}_{-0.69}$ & 2.24$^{+1.13}_{-0.73}$ & (1.44$^{+0.05}_{-0.05}$) & - & 1.4$^{+3.9}_{-1.0}\times$10$^{46}$ & 24600$^{+45300}_{-16300}$ \\ 120305A & 2.02$^{+0.10}_{-0.10}\times$10$^{52}$ & 2.22$^{+0.09}_{-0.04}$ & 10.22$^{+0.35}_{-0.27}$ & (6.26$^{+0.17}_{-0.16}$) & 182 & 4.3$^{+0.6}_{-0.8}\times$10$^{49}$ & 97$^{+14}_{-10}$\\ 120521A & 8.42$^{+12.19}_{-4.95}\times$10$^{51}$ & 4.88$^{+0.63}_{-1.10}$ & 15.04$^{+8.42}_{-7.93}$ & 1.98 & 207 & 4.0$^{+23.0}_{-3.4}\times$10$^{48}$ & 216$^{+1015}_{-163}$\\ \hline \multicolumn{8}{l|}{Possible candidates}\\ 050509B & 
3.82$^{+16.9}_{-2.87}\times$10$^{49}$ & 80.32$^{+24.98}_{-17.91}$& 21.85$^{+16.44}_{-11.98}$ & 2.5 & - & 1.2$^{+8.5}_{-1.1}\times$10$^{44}$ & 27700$^{+206000}_{-22300}$ \\ 051210 & 5.98$^{+13.5}_{-4.05}\times$10$^{51}$ & 0.68$^{+0.03}_{-0.03}$ & 7.68$^{+0.44}_{-0.39}$ & 2.1 & 225 & 2.8$^{+0.9}_{-0.7}\times$10$^{51}$ & 16$^{+3}_{-3}$ \\ 061201 & 1.42$^{+1.67}_{-0.69}\times$10$^{51}$ & 14.52$^{+0.59}_{-0.52}$ & 19.00$^{+1.75}_{-1.44}$ & 1.57 & - & 8.1$^{+3.1}_{-2.2}\times$10$^{46}$ & 1200$^{+320}_{-260}$ \\ 070714A & 3.28$^{+3.08}_{-1.48}\times$10$^{51}$ & 10.77$^{+1.04}_{-1.06}$ & 16.21$^{+4.29}_{-4.04}$ & 3.60 & - & 2.0$^{+2.7}_{-1.2}\times$10$^{47}$ & 905$^{+1000}_{-460}$ \\ 080702A & 1.20$^{+4.90}_{-0.90}\times$10$^{51}$ & 13.55$^{+1.39}_{-1.10}$ & 36.18$^{+12.25}_{-8.32}$ & 2.34 & - & 3.9$^{+5.9}_{-2.3}\times$10$^{47}$ & 290$^{+300}_{-150}$ \\ 090621B & 1.31$^{+2.07}_{-0.80}\times$10$^{52}$ & 26.65$^{+5.44}_{-3.42}$ & 23.05$^{+10.79}_{-6.6}$ & (4.72$^{+0.04}_{-0.05}$) & - & 1.0$^{+2.9}_{-8.0}\times$10$^{46}$ & 2700$^{+5100}_{-1800}$\\ 091109B & 5.25$^{+3.95}_{-2.27}\times$10$^{52}$ & 13.60$^{+1.61}_{-1.24}$ & 9.16$^{+2.75}_{-2.33}$ & (3.16$^{+0.45}_{-0.53}$) & - & 2.5$^{+3.6}_{-1.6}\times$10$^{46}$ & 4500$^{+5600}_{-2300}$ \\ 100625A & 3.27$^{+1.76}_{-1.15}\times$10$^{52}$ & 23.08$^{+3.59}_{-3.92}$ & 168.40$^{+32.78}_{-25.72}$& (4.09$^{+1.52}_{-0.73}$) & - & 1.0$^{+2.0}_{-0.6}\times$10$^{48}$ & 38$^{+33}_{-20}$ \\ 110112A & 2.91$^{+5.85}_{-0.17}\times$10$^{50}$ & 13.14$^{+0.93}_{-0.75}$ & 18.85$^{+3.48}_{-2.52}$ & 3.14 & - & 1.2$^{+0.9}_{-0.5}\times$10$^{47}$ & 996$^{+530}_{-370}$\\ 111117A & 4.78$^{+5.71}_{-2.58}\times$10$^{52}$ & 17.73$^{+2.08}_{-2.47}$ & 68.69$^{+20.17}_{-17.39}$ & 1.65 & - & 5.5$^{+11.6}_{-3.5}\times$10$^{47}$ & 127$^{+160}_{-72}$\\ \end{tabular} \caption{The SGRB magnetar sample used with their magnetar fits. 
E$_{iso}$, 1 -- 10000 keV, is calculated using the fluences and redshifts in Table \ref{candidates}, a simple power law model and a k-correction \citep{bloom2001}. The values for $\alpha$ are input into the model unless they are bracketed; in this case, the values are fitted within the model. If there is a steep decay phase, we assume the magnetar collapses to form a BH and the model determines the collapse time. The values for P$_{-3}$ and B$_{15}$ are fitted from the model assuming isotropic emission. Using the values of P$_{-3}$ and B$_{15}$ obtained from the model, we derive the plateau luminosity and duration using equations \ref{luminosity} and \ref{period}. The derived plateau duration is from the initial formation of the magnetar (i.e. the time of the GRB) to the point at which the X-ray emission from the magnetar starts to turn over from the plateau phase to a powerlaw decay phase.} \label{table:log} \end{table*} The magnetar model predicts a plateau phase in the X-ray lightcurve which is powered by the spin down of a newly formed magnetar. This section fits the model directly to the restframe SGRB lightcurves. The magnetar component is expected to be an extra component on top of the typical lightcurve. Therefore, we assume there is a single power law decay, $\alpha_{1}$, underlying the magnetar component. This value has been set to $\alpha_{1} = \Gamma_{\gamma} + 1$, where $\Gamma_{\gamma}$ is the photon index of the prompt emission, assuming that the decay slope is governed by the curvature effect \citep{kumar2000}, i.e. that the surrounding medium is very low density as might be expected for neutron star mergers. We note that the simple curvature effect assumed here does not account for any spectral evolution (for example as the peak energy moves through the observation band); however, this keeps the number of free parameters fitted in the model low. The normalisation of the power law decay fit is constrained using the last decay from the prompt emission.
In a small number of cases, the decay slope is significantly different from this prediction and we allow $\alpha_{1}$ to vary. It is important to note that the underlying lightcurve could be similar to other GRBs with a more complex afterglow light curve, but this work assumes that these are naked bursts (i.e. no surrounding ISM for neutron star mergers) and only the curvature effect is important. We also expect flares overlying the power law decay and magnetar component \citep[e.g.][]{margutti2011}. Due to the limited statistics in SGRB lightcurves, we do not attempt to exclude possible flares from the lightcurve fits (except for GRB 060313, which has multiple flares early in the lightcurve) and the underlying flares will slightly affect the fit parameters. \subsection{Theory} The model used here is as described in \cite{zhang2001} and was suggested to explain GRB 051221A with a long lived magnetar \citep{fan2006}, for several LGRBs \citep{troja2007, lyons2009, bernardini2012} and for the short GRB 090515 \citep{rowlinson2010b}. This model is consistent with the late time residual spin down phase driving a relativistic magnetar wind as described in \cite{metzger2010}. We use the equations below with an underlying powerlaw component. Previously, the plateau duration and luminosity were calculated and then input into the equations. In this work, the equations are fitted directly to the rest-frame light curves, taking into account the shape of the lightcurve \citep[this is a comparable method to that used by][who fitted a stable magnetar to the lightcurves of 4 LGRBs]{dallosso2011, bernardini2012}. We can then use the values of the magnetic field and spin period obtained to derive the luminosity and plateau duration.
\begin{eqnarray} T_{em,3}=2.05~(I_{45}B^{-2}_{p,15}P^2_{0,-3}R^{-6}_6)\label{period}\\ L_{0,49}\sim(B^2_{p,15}P^{-4}_{0,-3}R^6_6)\label{luminosity}\\ B^{2}_{p,15}=4.2025 I_{45}^{2}R^{-6}_{6}L_{0,49}^{-1}T_{em,3}^{-2}\label{b^2}\\ P^{2}_{0,-3}=2.05 I_{45}L_{0,49}^{-1}T_{em,3}^{-1}\label{p^2} \end{eqnarray} \begin{figure*} \centering \includegraphics[width=5.5cm]{050509B.ps} \includegraphics[width=5.5cm]{051210.ps} \includegraphics[width=5.5cm]{051221A.ps} \includegraphics[width=5.5cm]{060313.ps} \includegraphics[width=5.5cm]{060801.ps} \includegraphics[width=5.5cm]{061201.ps} \includegraphics[width=5.5cm]{070714A.ps} \includegraphics[width=5.5cm]{070724A.ps} \includegraphics[width=5.5cm]{070809.ps} \includegraphics[width=5.5cm]{080426.ps} \includegraphics[width=5.5cm]{080702A.ps} \includegraphics[width=5.5cm]{080905A.ps} \caption[SGRB lightcurves fit with the magnetar model]{SGRB BAT-XRT restframe lightcurves fit with the magnetar model. The light grey data points have been excluded from the fit. 
The dashed line shows the power-law component and the dotted line shows the magnetar component.} \label{fig1} \end{figure*} \begin{figure*} \centering \includegraphics[width=5.5cm]{080919.ps} \includegraphics[width=5.5cm]{081024.ps} \includegraphics[width=5.5cm]{090426.ps} \includegraphics[width=5.5cm]{090510.ps} \includegraphics[width=5.5cm]{090515.ps} \includegraphics[width=5.5cm]{090621B.ps} \includegraphics[width=5.5cm]{091109B.ps} \includegraphics[width=5.5cm]{100117A.ps} \includegraphics[width=5.5cm]{100625A.ps} \includegraphics[width=5.5cm]{100702A.ps} \includegraphics[width=5.5cm]{101219A.ps} \includegraphics[width=5.5cm]{110112A.ps} \contcaption{} \end{figure*} \begin{figure*} \centering \includegraphics[width=5.5cm]{111020A.ps} \includegraphics[width=5.5cm]{111117A.ps} \includegraphics[width=5.5cm]{120305A.ps} \includegraphics[width=5.5cm]{120521A.ps} \contcaption{} \end{figure*} \begin{figure*} \centering \includegraphics[width=8cm]{new_BP.ps} \includegraphics[width=8cm]{BP_zevol.ps} \caption{(a) A graph showing the magnetic field and spin period of the magnetar fits produced. The solid (dashed) red line and dark shaded area represent the spin break up period for a collapsar (binary merger) progenitor \citep{lattimer2004} and the unshaded region shows the expected region for an unstable pulsar, as defined in \citet{lyons2009} and \citet{rowlinson2010b}. The initial rotation period needs to be $\le$10 ms \citep{usov1992} and the lower limit for the magnetic field is $\ge$10$^{15}$ G \citep{thompson2007}. Blue stars = stable magnetar and green circles = unstable magnetar which collapses to form a BH. The black '+' symbols are the LGRB candidates identified by \citet{lyons2009, dallosso2011, bernardini2012}. Filled symbols have observed redshifts whereas open symbols use the average SGRB redshift. (b) This graph is as (a) but focusing on the fits for two GRBs at different redshifts. The number below each data point is the corresponding redshift.
GRB 060801 in blue is an unstable magnetar which collapses to form a BH whereas GRB 080702A forms a stable magnetar. As expected, the paths of these lines are consistent with the predictions for GRB 090515 \citep{rowlinson2010b}.} \label{fig5} \end{figure*} \begin{figure*} \centering \includegraphics[width=7.8cm]{new_flux100_flux1000.ps} \includegraphics[width=7.8cm]{new_flux100_flux10000.ps} \includegraphics[width=7.8cm]{new_magnetar_fluence_v_100s_flux.ps} \includegraphics[width=7.8cm]{new_magnetar_fluence_v_1000s_flux.ps} \caption{(a) The 0.3 -- 10 keV unabsorbed flux at 100 s versus 1000 s. (b) The 0.3 -- 10 keV unabsorbed flux at 100 s versus 10000 s. (c) The 15 -- 150 keV fluence versus the 0.3 -- 10 keV unabsorbed flux at 100 s. (d) The 15 -- 150 keV fluence versus the 0.3 -- 10 keV unabsorbed flux at 1000 s. Symbols are as in Figure \ref{fig5}.} \label{fig7a} \end{figure*} where $T_{em,3}$ is the plateau duration in $10^{3}$ s, $L_{0,49}$ is the plateau luminosity in $10^{49}$ erg s$^{-1}$, $I_{45}$ is the moment of inertia in units of $10^{45}$ g cm$^{2}$, $B_{p,15}$ is the magnetic field strength at the poles in units of $10^{15}$ G, $R_{6}$ is the radius of the neutron star in $10^{6}$ cm and $P_{0,-3}$ is the initial period of the compact object in milliseconds. These equations apply to the electromagnetic dominated spin down regime, as the gravitational wave dominated regime would be extremely rapid and produce a negligible electromagnetic signal. We have assumed that the emission is 100\% efficient and isotropic as the beaming angle and emission mechanism remain very uncertain (see however section 3.4.4). The equations of vacuum dipole spin-down given above neglect the enhanced angular momentum losses due to neutrino-driven mass loss, which are important at early times after the magnetar forms \citep{metzger2010}. Nevertheless, these expressions reasonably approximate the spin-down of the very highly magnetized neutron stars of most relevance in this paper.
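Equations \ref{b^2} and \ref{p^2} can be evaluated directly. The minimal sketch below (with the canonical $I_{45} = R_6 = 1$ adopted later in this section), given the GRB 090515 plateau values from Table \ref{table:log}, recovers the tabulated field and spin period:

```python
import numpy as np

def magnetar_parameters(L0_49, Tem_3, I45=1.0, R6=1.0):
    """Polar magnetic field B_p [10^15 G] and initial spin period
    P_0 [ms] from the plateau luminosity [10^49 erg/s] and duration
    [10^3 s], via the vacuum-dipole spin-down relations (eqs 3 and 4),
    assuming isotropic, 100 per cent efficient emission."""
    B15 = np.sqrt(4.2025 * I45**2 / (R6**6 * L0_49 * Tem_3**2))
    P0_ms = np.sqrt(2.05 * I45 / (L0_49 * Tem_3))
    return B15, P0_ms

# GRB 090515: L ~ 8.5e49 erg/s, T ~ 57 s (Table 1)
B15, P0 = magnetar_parameters(L0_49=8.5, Tem_3=0.057)
# recovers B ~ 12 x 10^15 G, P ~ 2 ms, as tabulated
```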
Isotropic emission is also a reasonable assumption for relatively powerful magnetar winds, since (unlike following the collapse of a massive star) the magnetar outflow cannot be confined efficiently by the relatively small quantity of surrounding material expected following a NS merger or AIC \citep{bucciantini2011}. We use equation \ref{inertia} to obtain the mass dependence of the model, where $M_{1.4} = 1.4 M_{\odot}$, and equation \ref{time_dep} \citep[from ][]{zhang2001} to determine the time dependence of the magnetar emission. \begin{eqnarray} I_{45} \sim M_{1.4}R_{6}^2 \label{inertia}\\ L_{em,49}(T) = L_{0,49}\left( 1 + \frac{T}{10^{-3}T_{em,3}} \right)^{-2} \label{time_dep} \end{eqnarray} If there is a steep decay phase after the plateau, it is assumed the magnetar has collapsed to a BH at the start of the steep decay (giving the Collapse Time parameter). The decay after collapse to a BH assumes the same powerlaw decay from the curvature effect, but starting at $t_0 = t_{collapse}$. This model was then written into a {\sc QDP} COD file (COmponent Definition file, used to generate new models within {\sc QDP} which can then be fitted to data sets). In this analysis, the mass ($M_{1.4}$) and radius ($R_6$) of the neutron star are constrained to be equal to 1 to reduce the number of free parameters in our model. These canonical values are consistent with the values determined by observations of three typical neutron stars, namely $M\le2 M_\odot$ and $7\le R \le 11$ km \citep{ozel2010}. As the model considers an extreme neutron star, we note that the mass and radius may differ from these results. However, this only has a relatively small effect on the magnetic fields and spin periods calculated \citep[as shown in][]{rowlinson2010b} and so it is a reasonable approximation as we are just demonstrating the plausibility of the magnetar model fitting the SGRB lightcurves.
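The fitted model, i.e. the magnetar luminosity law of equation \ref{time_dep} on top of the underlying curvature-effect power law, truncated at the collapse time where the magnetar is unstable, can be sketched as follows (the parameter values below are illustrative, not fits to any burst; the time axis is restframe seconds):

```python
import numpy as np

def model_luminosity(t, L0_49, Tem_3, alpha1, pl_norm, t_collapse=None):
    """Restframe luminosity [erg/s] at time t [s]: magnetar dipole
    plateau (equation 6) plus an underlying power law with slope
    alpha1 = Gamma_gamma + 1 from the curvature effect.  If the
    magnetar collapses to a BH, its component is switched off at
    t_collapse."""
    t = np.asarray(t, dtype=float)
    Tem_s = 1.0e3 * Tem_3                      # plateau duration in s
    magnetar = 1.0e49 * L0_49 / (1.0 + t / Tem_s)**2
    if t_collapse is not None:
        magnetar = np.where(t < t_collapse, magnetar, 0.0)
    return magnetar + pl_norm * t**(-alpha1)

# Illustrative stable magnetar: flat plateau at early times, then t^-2
t = np.logspace(0, 5, 200)
L = model_luminosity(t, L0_49=1.0, Tem_3=1.0, alpha1=2.0, pl_norm=1e50)
```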
When this model is fit to the restframe lightcurves it produces B$_{p,15}$, P$_{0,-3}$, $\alpha_{1}$ and the collapse time where appropriate. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline GRB & $\Gamma_{X}$ & Galactic N$_{H}$ & Restframe Intrinsic N$_{H}$ \\ & & (10$^{20}$ cm$^{-2}$) & (10$^{20}$ cm$^{-2}$) \\ \hline \multicolumn{4}{|l|}{Magnetar candidates}\\ \hline 051221A & 2.04$^{+0.14}_{-0.13}$ & 5.70$\pm$0.37 & 18.0$^{+7.10}_{-6.60}$ \\ 060313 & 1.61$^{+0.16}_{-0.13}$ & 5.00$\pm$1.17 & 0.00$^{+5.84}_{-0.00}$ \\ 060801 & 1.53$^{+0.47}_{-0.43}$ & 1.40$\pm$0.31 & 29.9$^{+68.8}_{-29.9}$ \\ 070809 & 1.73$^{+0.83}_{-0.43}$ & 6.40$\pm$0.17 & 2.95$^{+14.9}_{-2.95}$ \\ 080426 & 1.93$^{+0.29}_{-0.27}$ & 37.0$\pm$4.19 & 32.0$^{+31.6}_{-25.5}$ \\ 080919 & 2.23$^{+1.02}_{-0.84}$ & 26.0$\pm$3.78 & 105$^{+126}_{-75.8}$ \\ 090426 & 2.03$^{+0.19}_{-0.11}$ & 1.50$\pm$0.11 & 0.00$^{+36.0}_{-0.00}$ \\ 090510 & 1.56$^{+0.20}_{-0.19}$ & 1.70$\pm$0.11 & 10.0$^{+16.0}_{-10.0}$ \\ 090515 & 1.89$^{+0.25}_{-0.24}$ & 1.90$\pm$0.25 & 13.1$^{+11.6}_{-10.5}$ \\ 101219A & 1.65$^{+0.32}_{-0.31}$ & 4.90$\pm$0.87 & 56.8$^{+26.7}_{-20.4}$ \\ 111020A & 2.56$^{+1.69}_{-1.69}$ & 6.89$\pm$0.48 & 7.94$^{+7.90}_{-7.90}$ \\ 120305A & 1.94$^{+0.21}_{-0.20}$ & 11.3$\pm$0.70 & 109$^{+32}_{-26}$ \\ 120521A & 1.61$^{+0.36}_{-0.22}$ & 20.80$\pm$1.69 & 1.2$^{+14.2}_{-1.2}$ \\ \hline \multicolumn{4}{|l|}{Possible candidates}\\ \hline 050509B & 1.92$^{+1.09}_{-0.60}$ & 1.60$\pm$0.04 & 8.00$^{+8.10}_{-8.00}$ \\ 061201 & 1.44$^{+0.20}_{-0.19}$ & 5.20$\pm$1.58 & 6.77$^{+4.25}_{-3.88}$ \\ 070714A & 2.12$^{+0.37}_{-0.35}$ & 9.20$\pm$1.25 & 214$^{+51.8}_{-45.7}$ \\ 080702A & 1.57$^{+0.85}_{-0.76}$ & 15.0$\pm$1.50 & 125$^{+251}_{-121}$ \\ 090621B & 2.50$^{+1.60}_{-1.00}$ & 19.0$\pm$1.96 & 42.8$^{+108}_{-42.8}$ \\ 091109B & 1.96$^{+0.64}_{-0.43}$ & 9.20$\pm$0.96 & 14.5$^{+27.9}_{-14.5}$ \\ 110112A & 2.07$^{+0.46}_{-0.24}$ & 5.50$\pm$0.40 & 7.86$^{+12.7}_{-7.86}$ \\ 111117A & 2.13$^{+0.39}_{-0.36}$ & 
3.70$\pm$0.15 & 39.8$^{+69.7}_{-31.3}$ \\ \hline \end{tabular} \caption[Plateau spectral fits for the SGRB magnetar sample]{The 0.3 -- 10 keV spectral fits for the derived plateau durations given in Table \ref{table:log}. These are the SGRBs in the magnetar sample which have X-ray data during the plateau phase. Provided are the photon index, $\Gamma_{X, plateau}$, the Galactic N$_{H}$ and the restframe intrinsic N$_{H}$ using the redshifts provided in Table \ref{candidates}.} \label{spectra} \end{center} \end{table} \subsection{The Sample GRBs for magnetar fits} The selected GRBs are those SGRBs in our sample with sufficient data to produce multiple data points in the X-ray lightcurve, giving a sample of 28 SGRBs. GRBs which have insufficient data to fit the magnetar model are not excluded from being magnetar candidates, as it is possible to fit a range of realistic magnetar parameters with the minimal data points and unknown redshift. 68\% of SGRBs in our sample have been investigated for evidence of extended emission by \cite{norris2010} but, of these, none show evidence of extended emission. The remaining SGRBs in our sample have no evidence of extended emission in their lightcurves (using a variety of binnings in signal-to-noise ratio and time to look for evidence of extended emission at the 3$\sigma$ level). The magnetar sample is listed in Table \ref{table:log}. The restframe BAT-XRT lightcurves were fitted using the magnetar model, as shown in Figure \ref{fig1}. The lightcurves are fitted over the plateau region and the power law decay, including the last decay in the prompt emission and the X-ray observations. This removes the effect of the poorly understood flaring prompt emission not modeled by this method. We also provide the derived plateau luminosity and plateau duration calculated using the magnetic field strengths, the spin periods and equations \ref{period} and \ref{luminosity}.
The magnetar candidates fit the model well, while the possible candidates are GRBs which may fit the magnetar model if various assumptions are made. There are two potential outcomes: a stable, long-lived magnetar which does not collapse to form a BH, and an unstable magnetar which collapses to form a BH after a short timescale (these have a collapse time in Table \ref{table:log}). The following Sections compare the properties of the stable magnetars (blue stars in the figures) and the unstable magnetars which collapse to form a BH (green circles). We note that the fitted plateaus match the observations well but, due to insufficient data points particularly prior to XRT observations, the plateaus are not always required by the observed data, which can instead be fitted by simple broken power-law models. In some cases, the best fitting magnetar model gives a plateau phase ending prior to the start of the XRT observations (e.g. 060801). In this situation, the fit is constrained by the turnover of the magnetar energy injection from a plateau phase to a power-law decline, giving a characteristic curvature in the lightcurve (described by Equation 6). Therefore, the fitted model does not rely upon data during the plateau phase but instead uses the whole shape of the lightcurve. This leads to the model prediction that those GRBs have a magnetar plateau phase which has not been directly observed; this prediction can be used to test the model if future X-ray telescopes are able to observe SGRBs much sooner after the prompt emission. When fitting GRB 060313, which may show evidence of late time central engine activity \citep{roming2006}, it was noted that the model fits part of the lightcurve extremely well. In this case, we ignored the observations between 50 -- 200 s (the initial X-ray data) in the fit as this duration appears to be dominated by flares. If these data are included in the fit, then the model does not fit the data well.
The model fits well to GRB 090515, predicting values similar to those given in \cite{rowlinson2010b}. In some cases, the model used here underpredicts the flux at late times (for example GRBs 091109B, 100702A and 120305A). This shows that our simple power-law component, given by a simple curvature effect model, is not sufficient; spectral evolution may need to be included, or there may be an additional afterglow component which has been neglected in this model. \subsection{Analysis} In Figure \ref{fig5}(a) we show the spin periods and magnetic fields determined for our sample of GRBs assuming isotropic emission. We also plot the LGRB candidates identified by \cite{lyons2009}, \cite{dallosso2011} and \cite{bernardini2012}; the SGRB candidates tend to have higher magnetic field strengths and longer spin periods. In Figure \ref{fig5}(b), we confirm the change in magnetic field strength and spin period caused by uncertainties in redshift expected from previous analysis of GRB 090515 \citep{rowlinson2010b}. 18 of the SGRBs fitted by the magnetar model lie within the expected region of magnetic field strengths and spin periods; these are the magnetar candidates listed in Table \ref{candidates}. 10 GRBs are outside the expected region (the possible candidates in Table \ref{candidates}). These GRBs may be in the expected (unshaded) region if they were at a higher redshift, as shown in \cite{rowlinson2010b} and Figure \ref{fig5}(b). Additionally, this region is defined using angular momentum conservation during the AIC of a WD \citep{usov1992} and is not a physically forbidden region. Therefore, the candidates with spin periods $>$10 ms may remain good candidate magnetars. GRB 051210 is included in the possible candidates list as it is spinning faster than is allowed in the models, but it is worth noting that if the NS formed had a mass of 2.1M$_{\odot}$ then it would reside within the allowed region, as more massive NSs are able to spin at a faster rate.
It is also worth noting that if GRB 051210 occurred at a lower redshift, as shown in Figure \ref{fig5}(b), or if the emission is significantly beamed, then the spin period and magnetic field strength would be higher and GRB 051210 would not be near to the spin break-up period. The unstable magnetar candidates tend to have higher magnetic field strengths for their spin periods than the stable magnetar candidates. The only exceptions are GRB 100117A, which has been fitted with a stable magnetar model but would also be consistent with forming an unstable magnetar, and GRB 090426. \subsubsection{Prompt and X-ray Properties} In Figures \ref{fig7a}(a) and (b), the 0.3 -- 10 keV fluxes at 1000 s and 10000 s are compared to the flux at 100 s. The stable magnetar candidates tend to have a higher flux at 1000 s than the GRBs which are modelled as collapsing to a BH. This can be explained if we assume all SGRBs occur in low-density environments, resulting in little afterglow, so that the only observed emission results from the curvature effect. The magnetar candidates which collapse to form a BH fade rapidly, whereas the stable magnetars provide prolonged energy injection, giving the higher late-time X-ray fluxes. The stable magnetar candidate outlier in Figures \ref{fig7a}(a) and (b) is GRB 100117A, and it has already been noted that this GRB would also be fitted well by an unstable magnetar model. This analysis suggests that mergers collapsing straight to BHs have significantly fainter X-ray afterglows, which fade rapidly, and hence there may be a selection bias against these objects in our analysis (as we required sufficient data points to fit the model). In Figures \ref{fig7a}(c) and (d) we plot the fluxes at 100 s and 1000 s versus the observed prompt 15 -- 150 keV fluence.
At 100 s the unstable magnetar candidates clearly have a higher flux than comparable stable magnetar candidates (again GRB 100117A is the outlier), although this separation of the two populations has vanished by 1000 s. For each GRB in the sample, a 0.3 -- 10 keV XRT spectrum \citep[using the automatic data products on the UK Swift Data Centre website;][]{evans2007,evans2009} was extracted for the model-derived rest-frame plateau duration (converted to the observed frame) in order to compare the spectral properties in the proposed magnetar emission phase. This was not possible for some of the sample as XRT observations started after the plateau phase had ended. Each spectrum was fitted in {\sc XSpec} using a power law, $\Gamma_{X}$, the Galactic N$_{H}$ \citep[neutral hydrogen column density, taken from][]{kalberla2005} and the intrinsic N$_{H}$ at the redshift provided in Table \ref{candidates}. The spectral fits are provided in Table \ref{spectra}. The majority of the SGRBs are consistent with having negligible intrinsic N$_{H}$ in their spectra, suggesting they are likely to have occurred in low-density environments. Recently, \cite{margutti2012} have compared the distribution of intrinsic N$_{H}$ observed in SGRBs to that of LGRBs, finding that SGRBs are typically consistent with the lower end of the LGRB distribution; this corresponds to the higher end of our distribution, and we find several candidates with negligible intrinsic absorption. Some of the sample have significant N$_{H}$ values, but it is important to note that detailed observations have shown that the optical absorptions found for GRB afterglows can be orders of magnitude less than those expected from the X-ray N$_{H}$ values \citep{schady2010, campana2010}.
\subsubsection{Optical Afterglows} \begin{figure*} \centering \includegraphics[width=8.5cm]{050509B_test.ps} \includegraphics[width=8.5cm]{051210_test.ps} \includegraphics[width=8.5cm]{051221A_test.ps} \includegraphics[width=8.5cm]{060313_test.ps} \includegraphics[width=8.5cm]{060801_test.ps} \includegraphics[width=8.5cm]{061201_test.ps} \caption[Flux lightcurves comparing optical and X-ray data for the SGRB magnetar sample]{Comparison of the X-ray and optical data for the SGRBs fitted with the magnetar model. These are observed X-ray flux lightcurves at 1 keV with one extrapolated optical observation; the light shaded region is the optical observation assuming the most extreme cooling break between the X-ray and optical bands, and the dark shaded region is the optical observation assuming no cooling break. The references are for the optical observation used. If the X-ray and optical observations are consistent with originating from the same source, the X-ray data points should pass through the shaded regions. GRB 050509B - \citet{breeveld2005} - consistent, GRB 051210 - \citet{jelinek2005} - inconsistent, GRB 051221A - \citet{soderberg2006} - optical observations are consistent with X-ray observations, GRB 060313 - \citet{roming2006} - inconsistent, GRB 060801 - \citet{brown2006} - inconsistent and GRB 061201 - \citet{stratta2007} - only consistent with the most extreme cooling break and errors.
} \label{fig8d} \end{figure*} \begin{figure*} \centering \includegraphics[width=8.5cm]{070714A_test.ps} \includegraphics[width=8.5cm]{070724A_test.ps} \includegraphics[width=8.5cm]{070809_test.ps} \includegraphics[width=8.5cm]{080426_test.ps} \includegraphics[width=8.5cm]{080702A_test.ps} \includegraphics[width=8.5cm]{080905A_test.ps} \contcaption{GRB 070714A - \citet{chester2007} - upper limits inconclusive if there is an extreme cooling break, GRB 070724A - \citet{depasquale2007} - upper limits inconclusive if there is an extreme cooling break, GRB 070809 - \citet{chester2007b} - upper limits inconclusive, GRB 080426 - \citet{oates2008} - inconsistent, GRB 080702A - \citet{depasquale2008b} - upper limits inconclusive and GRB 080905A - \citet{brown2008} - upper limits inconclusive.} \end{figure*} \begin{figure*} \centering \includegraphics[width=8.5cm]{080919_test.ps} \includegraphics[width=8.5cm]{081024A_test.ps} \includegraphics[width=8.5cm]{090426_test.ps} \includegraphics[width=8.5cm]{090510_test.ps} \includegraphics[width=8.5cm]{090515_test.ps} \includegraphics[width=8.5cm]{090621B_test.ps} \contcaption{GRB 080919 - \citet{immler2008} - likely consistent, GRB 081024A - \citet{depasquale2008} - upper limits inconclusive and GRB 090426 - \citet{oates2009b} - optical observations are consistent with X-ray observations, GRB 090510 - \citet{kuin2009} - inconsistent, GRB 090515 - \citet{seigel2009} - inconsistent and GRB 090621B - \citet{curran2009} - inconsistent.} \end{figure*} \begin{figure*} \centering \includegraphics[width=8.5cm]{091109B_test.ps} \includegraphics[width=8.5cm]{100117A_test.ps} \includegraphics[width=8.5cm]{100625A_test.ps} \includegraphics[width=8.5cm]{100702A_test.ps} \includegraphics[width=8.5cm]{101219A_test.ps} \includegraphics[width=8.5cm]{110112A_test.ps} \contcaption{GRB 091109B - \citet{oates2009} - upper limits inconclusive if there is an extreme cooling break, GRB 100117A - \citet{depasquale2010c} - extremely inconsistent, GRB 100625A
- \citet{landsman2010} - inconsistent, GRB 100702A - \citet{depasquale2010b} - inconsistent, GRB 101219A - \citet{kuin2010} - inconsistent and GRB 110112A - \citet{breeveld2011} - inconsistent.} \end{figure*} \begin{figure*} \centering \includegraphics[width=8.5cm]{111020A_test.ps} \includegraphics[width=8.5cm]{111117A_test.ps} \includegraphics[width=8.5cm]{120305A_test.ps} \includegraphics[width=8.5cm]{120521A_test.ps} \contcaption{GRB 111020A - \citet{guidorzi2011} - consistent, GRB 111117A - \citet{oates2011} - upper limits inconclusive if there is an extreme cooling break, GRB 120305A - \citet{marshal2012} - inconsistent and GRB 120521A - \citet{oates2012} - inconsistent.} \end{figure*} A 1 keV observed flux lightcurve showing the prompt, X-ray and most constraining optical observations during the plateau phase was created for each burst in the sample. These were produced using the simple relation given in equation \ref{flux} (assuming a simple power-law spectrum and a spectral index $\beta_x=\Gamma_x-1$) to shift the observed fluxes at a measured energy to 1 keV. \begin{figure*} \centering \includegraphics[width=7.5cm]{new_Xray_v_opt_nocooling.ps} \includegraphics[width=7.5cm]{new_Xray_v_opt_cooling.ps} \caption{The optical flux shifted to 1 keV is plotted against the average X-ray observed flux during the optical observation, also shifted to 1 keV. The solid black line represents where these are equal, as expected if they are consistent with each other. In (a) we assume there is no cooling break between the optical and X-ray observations and in (b) we assume the most extreme cooling break. Symbols are as in Figure \ref{fig5}.} \label{fig7b} \end{figure*} \begin{eqnarray} F_{\nu(1keV)}=F_{\nu(measured)}\left(\frac{E_{(\rm measured)}}{1keV}\right)^{\beta_{x,o}} \label{flux} \end{eqnarray} $\Gamma_x$ was obtained from the time-averaged PC mode spectra produced by the automated analysis on the UK {\it Swift} Data Centre website \citep{evans2007, evans2009}.
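As a rough illustration of the size of this extrapolation (using representative round numbers rather than a measurement from the sample): an optical flux measured in the V band ($E_{\rm measured} \approx 2.3 \times 10^{-3}$ keV) and shifted to 1 keV via equation \ref{flux} is multiplied by \begin{eqnarray*} \left(2.3\times10^{-3}\right)^{1.0} \approx 2.3\times10^{-3} \quad {\rm or} \quad \left(2.3\times10^{-3}\right)^{0.5} \approx 4.8\times10^{-2}, \end{eqnarray*} for assumed spectral indices of $\beta = 1.0$ and $\beta = 0.5$ respectively. These two slopes, which bracket the cooling break uncertainty for a typical $\beta_x \sim 1$, therefore yield extrapolated 1 keV fluxes differing by a factor of $\sim$20, which is why the two shaded regions in Figure \ref{fig8d} can differ substantially.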
The 0.3 -- 10 keV observed BAT-XRT lightcurves were extrapolated to flux at 1 keV using equation \ref{flux}. The optical magnitudes were converted into flux for the wavelength of the optical filter used and then shifted to 1 keV using equation \ref{flux}. As there may be a cooling break between the optical and X-ray observations \citep{sari1998}, the two extreme cases are taken, i.e. $\beta_o=\beta_x$ and $\beta_o=\beta_x-0.5$. The errors on the observed optical magnitudes and the errors on $\Gamma_x$ are used to define the region of the lightcurve in which the optical data could reside (dark grey: no cooling break; light grey: cooling break; note there is overlap between these two regimes). If the optical and X-ray data are consistent, then the X-ray data points should lie within the shaded regions for the optical data. The 1 keV flux lightcurves for SGRBs fitted with the magnetar model are shown in Figure \ref{fig8d}, compared with the most constraining optical observation extrapolated to 1 keV. GRBs 051221A, 061201, 080905A, 080919 and 090426 have optical afterglows which are consistent with their X-ray afterglows, but many of these would require the most extreme errors on the spectral slope and cooling break. $\sim$55\% have optical afterglows that are underluminous with respect to their X-ray afterglows, signifying either significant optical absorption or an extra component in the X-ray afterglow. However, as shown in Section 3.3.1 using absorption in the X-ray spectra, the majority of the candidates are consistent with occurring in a low-density environment. In Figure \ref{fig7b} we compare the average X-ray fluxes at 1 keV to the optical fluxes extrapolated to 1 keV with (b) and without (a) a cooling break in the spectrum. The average X-ray flux was calculated during the optical observation.
There are several points which lie below the black line in both cases, showing there is less emission at optical wavelengths than expected. \begin{figure*} \centering \includegraphics[width=5.5cm]{A_V_MW.ps} \includegraphics[width=5.5cm]{A_V_LMC.ps} \includegraphics[width=5.5cm]{A_V_SMC.ps} \caption{A plot comparing the minimum optical absorption, A$_{V}$, required to explain the difference between the X-ray and optical fluxes to that predicted using the X-ray N$_{H}$. Unless the optical data are already consistent with the X-ray observations, all data points are lower limits given the assumptions made. These plots are for three different abundances: (a) Milky Way, (b) Large Magellanic Cloud and (c) Small Magellanic Cloud. Data points lying above the black line cannot be explained by simply using optical absorption. Symbols are as in Figure \ref{fig5}.} \label{AV_fig} \end{figure*} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline GRB & Minimum A$_{V}$ & MW A$_{V}$ & LMC A$_{V}$ & SMC A$_{V}$ \\ \hline \multicolumn{5}{|l|}{Magnetar candidates}\\ \hline 051221A & 0.00 & 1.32$^{+0.42}_{-0.39}$ & 0.68$^{+0.21}_{-0.20}$ & 0.59$^{+0.19}_{-0.17}$ \\ 060313 & 2.15 & 0.28$^{+0.39}_{-0.07}$ & 0.14$^{+0.20}_{-0.03}$ & 0.13$^{+0.18}_{-0.03}$ \\ 060801 & 0.91 & 1.75$^{+3.86}_{-0.85}$ & 0.89$^{+1.97}_{-0.43}$ & 0.78$^{+1.73}_{-0.38}$ \\ 070724 & 0.32 & 0.74$^{+1.12}_{-0.73}$ & 0.38$^{+0.57}_{-0.37}$ & 0.33$^{+0.50}_{-0.33}$ \\ 070809 & 0.00 & 0.52$^{+0.84}_{-0.17}$ & 0.27$^{+0.43}_{-0.09}$ & 0.23$^{+0.38}_{-0.08}$ \\ 080426 & 1.10 & 3.85$^{+2.00}_{-1.66}$ & 1.97$^{+1.02}_{-0.85}$ & 1.73$^{+0.89}_{-0.74}$ \\ 080905A & 0.46 & 1.61$^{+0.95}_{-0.73}$ & 0.83$^{+0.49}_{-0.37}$ & 0.72$^{+0.43}_{-0.33}$ \\ 080919 & 1.12 & 7.32$^{+7.25}_{-4.45}$ & 3.74$^{+3.71}_{-2.27}$ & 3.28$^{+3.24}_{-1.99}$ \\ 081024 & 0.00 & 7.32$^{+4.47}_{-3.02}$ & 3.74$^{+2.29}_{-1.54}$ & 3.28$^{+2.00}_{-1.35}$ \\ 090426 & 0.00 & 0.08$^{+2.02}_{-0.01}$ & 0.04$^{+1.03}_{-0.00}$ & 0.04$^{+0.90}_{-0.00}$ \\ 090510 & 2.12 & 0.65$^{+0.90}_{-0.56}$ & 0.33$^{+0.46}_{-0.29}$ & 0.29$^{+0.40}_{-0.25}$ \\ 090515 & 3.55 & 0.84$^{+0.66}_{-0.60}$ & 0.43$^{+0.34}_{-0.31}$ & 0.38$^{+0.30}_{-0.27}$ \\ 100117A & 6.98 & 2.33$^{+1.96}_{-1.45}$ & 1.19$^{+1.00}_{-0.74}$ & 1.04$^{+0.88}_{-0.65}$ \\ 100702A & 7.17 & 4.13$^{+2.07}_{-1.79}$ & 2.11$^{+1.06}_{-0.91}$ & 1.85$^{+0.93}_{-0.80}$ \\ 101219A & 2.30 & 3.45$^{+1.54}_{-1.19}$ & 1.76$^{+0.79}_{-0.61}$ & 1.54$^{+0.69}_{-0.53}$ \\ 111020A & 0.00 & 0.83$^{+0.47}_{-0.47}$ & 0.42$^{+0.24}_{-0.24}$ & 0.37$^{+0.21}_{-0.21}$ \\ 120305A & 3.47 & 6.72$^{+1.83}_{-1.49}$ & 3.44$^{+0.93}_{-0.76}$ & 3.01$^{+0.82}_{-0.67}$ \\ 120521A & 0.78 & 1.23$^{+0.89}_{-0.16}$ & 0.63$^{+0.45}_{-0.08}$ & 0.55$^{+0.40}_{-0.07}$ \\ \hline \multicolumn{5}{|l|}{Possible candidates}\\ \hline 050509B & 0.00 & 0.54$^{+0.45}_{-0.45}$ & 0.27$^{+0.23}_{-0.23}$ & 0.24$^{+0.20}_{-0.20}$ \\ 051210 & 0.00 & 2.45$^{+1.01}_{-1.01}$ & 1.25$^{+0.51}_{-0.51}$ & 1.10$^{+0.45}_{-0.45}$ \\ 061201 & 2.19 & 0.67$^{+0.33}_{-0.31}$ & 0.34$^{+0.17}_{-0.16}$ & 0.30$^{+0.15}_{-0.14}$ \\ 070714A & 0.00 & 12.47$^{+2.96}_{-2.62}$ & 6.38$^{+1.52}_{-1.34}$ & 5.58$^{+1.33}_{-1.17}$ \\ 080702A & 0.00 & 7.82$^{+14.11}_{-6.84}$ & 4.00$^{+7.21}_{-3.50}$ & 3.50$^{+6.31}_{-3.06}$ \\ 090621B & 0.99 & 3.45$^{+6.14}_{-2.50}$ & 1.77$^{+3.14}_{-1.28}$ & 1.55$^{+2.75}_{-1.12}$ \\ 091109B & 0.00 & 1.32$^{+1.61}_{-0.86}$ & 0.68$^{+0.82}_{-0.44}$ & 0.59$^{+0.72}_{-0.39}$ \\ 100625A & 3.49 & 0.12$^{+0.42}_{-0.00}$ & 0.06$^{+0.21}_{-0.00}$ & 0.05$^{+0.19}_{-0.00}$ \\ 110112A & 1.29 & 0.75$^{+0.73}_{-0.46}$ & 0.38$^{+0.37}_{-0.24}$ & 0.33$^{+0.33}_{-0.21}$ \\ 111117A & 0.00 & 2.43$^{+3.90}_{-1.76}$ & 1.24$^{+2.00}_{-0.90}$ & 1.09$^{+1.75}_{-0.79}$ \\ \hline \end{tabular} \caption{The minimum optical absorption, A$_{V}$, is the absorption required for the optical observations to just be consistent with the X-ray observations (0 means they are already consistent).
The MW (Milky Way), LMC (Large Magellanic Cloud) and SMC (Small Magellanic Cloud) absorptions are the values predicted using the X-ray N$_{H}$.} \label{AV_table} \end{center} \end{table} To determine if the observed X-ray excess could be caused by optical absorption, we compare the optical absorption (A$_{V}$) estimated using the observed X-ray N$_{H}$ to the minimum absorption that could explain the difference between the X-ray and optical fluxes. The observed spectra during the plateau regime (given in Table \ref{spectra}) are used when available, and the other spectral fits are obtained from the automated data products from the UK {\it Swift} Science Data Centre \citep{evans2007,evans2009}. We convert the observed X-ray N$_{H}$ to optical absorptions using $\frac{N_{H}}{A_{V}}$ for Milky Way \citep[MW, $1.8\times10^{21}$;][]{predehl1995}, Large Magellanic Cloud \citep[LMC, $3.5\times10^{21}$;][]{koornneef1982, fitzpatrick1985} and Small Magellanic Cloud \citep[SMC, $4.0\times10^{21}$;][]{martin1989} abundances. Note that there is known to be significant scatter and uncertainty involved in this conversion \citep[e.g.][]{schady2010, campana2010}. To obtain the minimum A$_{V}$ which would be sufficient to explain the difference between the X-ray and optical fluxes, the maximum possible optical flux (including errors and assuming the most extreme cooling break) and the X-ray plateau flux are converted to V band magnitudes\footnote{using the webtool: http://www.stsci.edu/hst/nicmos/tools/conversion{\_}form.html}. The obtained optical absorptions are given in Table \ref{AV_table} and plotted in Figure \ref{AV_fig}. Many of the GRBs may be explicable via absorption; however, we note that $\sim$25\% of the sample are based on unconstraining optical upper limits while others rely on the most extreme cooling breaks and uncertainties.
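To make the conversion explicit (an illustrative calculation with a round-number column density, not a value from the sample): a column of N$_{H} = 1.8\times10^{21}$ cm$^{-2}$ corresponds to \begin{eqnarray*} A_{V} = N_{H}\left(\frac{N_{H}}{A_{V}}\right)^{-1} = \frac{1.8\times10^{21}}{1.8\times10^{21}} = 1.0 {\rm\ mag} \end{eqnarray*} for MW abundances, but only $1.8\times10^{21}/(4.0\times10^{21}) \approx 0.45$ mag for SMC abundances. The choice of abundance alone therefore changes the predicted A$_{V}$ by a factor of $\sim$2, as reflected in the MW, LMC and SMC columns of Table \ref{AV_table}.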
In Figure \ref{AV_fig}, we also show that if some of the host galaxies are more consistent with LMC or SMC abundances, then more of the GRBs cannot be explained via absorption. Results obtained by \cite{schady2010} for LGRBs also suggest that $\frac{N_{H}}{A_{V}}$ may be an order of magnitude higher for GRB host galaxies, in which case even more GRBs in the sample would not be explicable via absorption. Despite all the uncertainties involved in this calculation, 8 GRBs in the sample clearly cannot have the difference between their X-ray and optical fluxes explained via absorption (GRBs 060313, 061201, 090510, 090515, 100117A, 100625A, 100702A and 110112A). This analysis shows that at least some of the GRBs in this sample are consistent with there being an additional X-ray component. This may provide supporting evidence of energy injection, although energy injection is thought to cause an increase in flux at all wavelengths \citep[e.g.][however this also depends upon the electron energy distribution]{sari2000}. Although there is some evidence that the magnetar candidates have additional X-ray emission, it is not known what spectrum is expected from a newly formed magnetar and hence we cannot completely discount those whose optical emission is consistent with their X-ray emission. \begin{table*} \begin{tabular}{ccccc} \hline GRB & Expected region & Extra component & Predicted region & Stable/Unstable \\ \hline 050509B & ? & ? & No & Stable \\ 051210 & ? & Yes & ? & Unstable \\ 051221A & Yes & No & No & Stable \\ 060313 & Yes & Yes & Yes & Stable \\ 060801 & Yes & Yes & Yes & Unstable \\ 061201 & ? & ? & ? & Stable \\ 070714A & ? & ? & ? & Stable \\ 070724A & Yes & ? & No & Unstable \\ 070809 & Yes & ? & Yes & Stable \\ 080426 & Yes & Yes & ? & Stable \\ 080702A & ? & ? & Yes & Stable \\ 080905A & Yes & ? & No & Unstable \\ 080919 & Yes & No & No & Unstable \\ 081024 & Yes & ?
& Yes & Unstable \\ 090426 & Yes & No & Yes & Stable \\ 090510 & Yes & Yes & Yes & Stable \\ 090515 & Yes & Yes & Yes & Unstable \\ 090621B & ? & Yes & ? & Stable \\ 091109B & ? & ? & No & Stable \\ 100117A & Yes & Yes & No & ? \\ 100625A & ? & Yes & ? & ? \\ 100702A & Yes & Yes & Yes & Unstable \\ 101219A & Yes & Yes & Yes & Unstable \\ 110112A & ? & Yes & ? & Stable \\ 111020A & Yes & No & No & Stable \\ 111117A & ? & ? & ? & Stable \\ 120305A & Yes & Yes & Yes & Unstable \\ 120521A & Yes & Yes & Yes & Unstable \\ \end{tabular} \caption{A summary showing the main features studied. This gives the best magnetar candidates found and the possible candidates. ``Expected region'' : fits within the required parameter space in Figure \ref{fig5} (? = could fit with various assumptions), ``Extra component'' : there is evidence of an extra component in the X-ray afterglow which is not observed in the optical; note this could also be due to absorption (? = borderline case or optical upper limit not constraining), ``Predicted region'' : do the values for the plateau luminosity and the plateau duration, calculated using equations \ref{period} and \ref{luminosity}, lie within the predicted region in \citet{metzger2010}? (? = outside region but would fit with reasonable assumptions). ``Stable/Unstable'' : whether the magnetar is stable or if it collapses to form a BH (? = would be fitted well by either case.)} \label{summary} \end{table*} \section{Discussion} \subsection{The sample of SGRBs} Here we discuss some particular SGRBs and then the sample as a whole. GRB 070809 is one of the best-fitting stable magnetar candidates and lies within the allowed regions. This GRB had a faint optical afterglow and is offset by 20 kpc from a galaxy at z = 0.219 \citep{perley2008a}, making it an ideal candidate for a magnetar formed via the merger of two NSs.
However, it is important to be cautious about this candidate host galaxy association as the likelihood that this is an unrelated field galaxy is 5 -- 10\% (Tunnicliffe et al. in prep). GRB 061201, with a spin period of $\sim$16 ms, fits the magnetar model well but is spinning slower than expected. However, the redshift used relies on the correct host galaxy identification, which remains highly uncertain \citep[][Tunnicliffe et al. in prep]{stratta2007}. If it actually occurred at a higher redshift than used in this analysis, it would lie within the expected region. Additionally, the approximate 10 ms limit imposed by \cite{usov1992} is dependent on the initial radius of the collapsing object and the radius of the final NS. This limit is also derived for the model involving AIC of a WD. Therefore, there is some level of flexibility in this imposed limit. We still consider this, and other GRBs close to this boundary, to be potential candidate magnetars. GRB 051221A is consistent with having energy injection in its lightcurve out to $\sim2\times10^4$ s \citep{burrows2006,soderberg2006}. \cite{fan2006} explained this as energy injection from a magnetar. Our model fits this GRB very well. \cite{jin2007} proposed an alternative two-jet model to explain the lightcurves without requiring additional energy injection. GRB 060313 has been included in the magnetar sample by ignoring the first 50 -- 200 s of the lightcurve due to the flaring activity; this gives a good fit to the later data, but this result should be treated with caution. Flares could be associated with on-going accretion onto the newly formed magnetar. Alternatively, \cite{dai2006} and \cite{gao2006} suggest that the X-ray flares originate from reconnection of twisted magnetic fields within the NS. \cite{margutti2011} have conducted a systematic study into SGRB flares, including the flares observed in GRB 060313, and concluded that the flares are consistent with a central engine origin.
Included in this sample are SGRBs whose progenitors are subject to significant debate, particularly GRB 090426 at z$\sim$2.6, which could have originated from a collapsar instead of a binary merger \citep{antonelli2009,levesque2010, thone2011, xin2011}. GRB 090426 fits the model well, irrespective of the progenitor, but the progenitor debate is important to note as we are specifically studying possible NS binary merger progenitors. Interestingly, 12 out of the 28 magnetar candidates require collapse to a BH. This implies that, if these SGRBs are making magnetars, they only collapse to a BH in a small number of cases. Comparing the derived plateau durations and the collapse times provided in Table \ref{table:log}, the magnetar typically (but not always) collapses to a BH after the plateau phase, i.e. when the magnetar has spun down significantly. The only exception to this is GRB 101219A, where collapse occurs prior to the end of the plateau phase; however, the collapse time and the end of the plateau are consistent within errors. The collapse time is related to the mass of the magnetar and the spin period at which differential rotation can no longer support the magnetar against gravitational collapse. The discrepancy between collapse time and plateau duration is hence likely to depend upon the mass of the magnetar. Additionally, there may be ongoing accretion onto the magnetar (from the remnants of the merger), which may raise the mass of the magnetar above the critical point prior to significant spin down. Interestingly, those candidates which collapse to form a BH and are within the allowed (unshaded) region of Figure \ref{fig5} have a higher magnetic field for a given spin period than the candidates which do not collapse to a BH. Many of the magnetar candidates lie within, or near to, the predicted plateau luminosity and duration regions for newly formed magnetars given in \cite{metzger2010} when considering uncertainties due to redshift, efficiency and beaming.
However, there are candidates whose plateaus are significantly shorter than predicted or at a lower luminosity. Our analysis and that of \cite{metzger2010} assume a NS mass of 1.4M$_{\odot}$, and this is likely to be significantly higher for a NS merger progenitor (e.g. 2.1M$_{\odot}$). This has a small effect on the values of the magnetic field strength and the spin period calculated in our model \citep[as shown in][]{rowlinson2010b} but does not significantly affect the predicted regions for plateau luminosity and duration from \cite{metzger2010}. A summary of the properties of the whole magnetar sample is shown in Table \ref{summary}. \subsection{Accretion Effects} \begin{figure} \centering \includegraphics[width=8cm]{piro_ott.ps} \caption{(a) The accretion rate as a function of time, assuming the accretion rate for a compact binary merger \citep{metzger2010b} starting at 0.16 s after the trigger time, giving a total accretion disk mass of $\sim$0.3 M$_{\odot}$. (b) The evolution of the spin period of the magnetar for the two accretion rates, red - the magnetar predicted for GRB 060313 and blue - GRB 090515. Solid lines include accretion and dashed lines have no accretion. In these plots, accretion has a very small or negligible effect. (c) The amount of rotational energy available in the magnetar for each case.} \label{fig10} \end{figure} \begin{figure} \centering \includegraphics[width=7.8cm]{new_accn_prop.ps} \caption{The amount of mass accreted by the magnetar against the duration of the propeller regime. The dashed line represents the maximum mass available in the accretion disk and is 0.3 M$_{\odot}$, an upper limit on the amount of mass which can be accreted.
Symbols are as in Figure \ref{fig5}.} \label{fig10b} \end{figure} \begin{figure} \centering \includegraphics[width=7.8cm]{new_Eiso_Emagnetar.ps} \caption{The energy emitted during the plateau phase, calculated using the fits in Table \ref{table:log}, compared to the isotropic energy emitted during the prompt phase (1--10000 keV). Symbols are as in Figure \ref{fig5}.} \label{fig7c} \end{figure} In our analysis we have not accounted for any ongoing accretion onto the magnetar from the surrounding torus of material formed during the merger. This could significantly affect the results obtained, especially if accretion increases the NS mass beyond what can be supported, as this results in collapse to a BH. Additionally, accretion could explain flares observed overlaying the fitted plateau. Flares may also be associated with ongoing magnetar activity, as described in \cite{dai2006}. \cite{piro2011} studied the effect of accretion onto magnetars formed during SNe; however, their results are also applicable to magnetars produced from neutron star binary mergers. The main difference for mergers is the significantly reduced reservoir of material available for accretion and the different accretion rate. In this section, we assume the simplest accretion rate published by \cite{metzger2010b}, with accretion starting at 0.16 s after the trigger time, giving a total accretion disk mass of $\sim$ 0.3 M$_{\odot}$. Accretion onto the magnetar occurs when the propeller regime ends, given by equation \ref{propeller} from \cite{piro2011}, where $\mu_{33} = B_{15}R_{6}^{3}$. \begin{eqnarray} \dot M < 6.0 \times 10^{-3} \mu_{33}^{2} M_{1.4}^{-5/3} P_{0,-3}^{-7/3} M_{\odot} s^{-1} \label{propeller} \end{eqnarray} As before, we assume an initial NS mass of 1.4 M$_{\odot}$ and radius of $10^6$ cm. In Figure \ref{fig10}a we show the accretion rate as a function of time after formation.
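To give a sense of scale for this condition (an illustrative estimate with fiducial parameters, not a fit from the sample): for $B = 10^{15}$ G and $R = 10^{6}$ cm we have $\mu_{33} = B_{15}R_{6}^{3} = 1$, so for a 1.4 M$_{\odot}$ magnetar spinning at $P = 2$ ms equation \ref{propeller} gives \begin{eqnarray*} \dot M < 6.0\times10^{-3} \times 2^{-7/3} \simeq 1.2\times10^{-3} {\rm\ M_{\odot}\ s^{-1}}, \end{eqnarray*} i.e. the propeller regime ends, and accretion onto the magnetar begins, only once the fallback rate has dropped below $\sim10^{-3}$ M$_{\odot}$ s$^{-1}$.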
In Figure \ref{fig10}b we show the evolution of the spin period of two different magnetars (using the parameters for GRBs 060313 and 090515, as these have contrasting magnetar properties) assuming there is either accretion onto the magnetar or no accretion. When there is significant accretion (e.g. GRB 090515), it can marginally slow the spin-down and increase the available rotational energy (Figure \ref{fig10}c), although these are negligible effects for the low accretion rates considered. It is worth noting that accretion would potentially have a very large effect on the results obtained for LGRB magnetar candidates \citep[e.g. the sample in][]{lyons2009}, as these are thought to have a significantly higher mass accretion disk and an accretion rate similar to that proposed by \cite{piro2011}. In that case, the energy reservoir could reach values in excess of 10$^{53}$ erg for particular combinations of the initial conditions. This additional energy source could be a potential explanation for large flares observed in some of the LGRB candidate lightcurves \citep[e.g.][]{margutti2011}. In Figure \ref{fig10b} we show the total mass accreted after the propeller regime has ended. The correlation between the duration of the propeller regime and the mass accreted is caused by the relationship $\dot M \propto t^{-5/3}$ (i.e. the sooner the propeller regime ends, the greater the mass that can be accreted). The candidates which accrete the most mass are those which also collapse to form a BH within a few hundred seconds, leading to the suggestion that accretion is an alternative mechanism for driving this collapse. Typically, the magnetar is thought to collapse when the fast rotation can no longer support the mass of the magnetar. The stable magnetar outliers are GRBs 100625A and 100117A, which were also well fitted by the unstable magnetar model, but we chose the stable model to reduce the number of free parameters.
Additionally, GRB 090426 is again a clearly stable magnetar candidate which is separate from the other stable candidates. \subsection{Energy Constraints} Including all of the possible candidates, the SGRBs in our sample can be fitted with the magnetar model. In Table \ref{table:log} we show the isotropic energy released during the prompt emission phase of the GRB. These values tend to be consistent with the maximum expected energy output from the magnetar central engine model, $E_{iso} < 3\times 10^{52}$ erg \citep{metzger2010}. Within the uncertainties, many of the magnetar candidates are consistent with this limit while some others exceed it. However, we have not corrected for beaming and had to assume redshifts in many cases. Not correcting for beaming will undoubtedly affect these results by increasing the spin period and the magnetic field strengths as shown in \cite{rowlinson2010b}. Beaming with a half-opening angle of 30$^{\circ}$ has been shown to arise via the formation of an ordered magnetic field during the merger of two 1.5 M$_{\odot}$ NSs which collapse to form a BH \citep{rezzolla2011}. However, the beaming angles of SGRBs and associated magnetars remain unconstrained \citep[see recent work on SGRB jets by ][]{fong2012}. With a reasonable beaming correction, all of the GRBs which exceed the energy constraint would lie well below the maximum expected energy output. Another consideration is that $E_{iso} \propto M_{1.4} P_{0,-3}^{-2}$, so if magnetars can have masses up to 2.1 M$_{\odot}$ then the maximum energy output could be as high as $E_{iso} \sim 1 \times 10^{53}$ erg. In Figure \ref{fig7c}, we show the energy emitted during the magnetar plateau phase (the plateau luminosity multiplied by the duration from Table \ref{table:log}; these values were calculated from the fitted $B_{15}$ and $P_{-3}$ using Equations \ref{luminosity} and \ref{period}) against the isotropic energy emitted during the prompt emission.
Only five GRBs which fit the magnetar model emit more energy during the plateau phase than during the prompt emission: GRBs 051210, 070724A, 070809, 090515 and 100702A. We have also assumed 100\% efficiency in the conversion of rotational energy into EM radiation. This will not be the case, and assuming a lower efficiency would act counter to the effect of any beaming, in the sense of reducing the inferred spin period and the magnetic field strengths. For example, GRB 090515 has $B \sim 1.4 \times 10^{16}$ G and $P \sim 2.3$ ms assuming 100\% efficiency; at 10\% efficiency these drop to $B \sim 4.4 \times 10^{15}$ G and $P \sim 0.73$ ms. Given the uncertainties in both beaming and efficiency, we note that the real values of the magnetic field strength and the spin period may be uncertain by at least a factor of 3. \subsection{Gravitational Wave Signals} \begin{table*} \begin{tabular}{ccccccc} \hline Phase & Citation & Predicted Amplitude & Distance used & AdLIGO/LCGT limit & ET limit & Amplitude at z$\sim$0.1\\ & & (h) & (Mpc) & (Mpc) & (Mpc) & (h) \\ \hline Inspiral & \cite{abadie2010} & $4 \times 10^{-24}$ & 445 & 445 & 5900 & $4.6\times10^{-24}$ \\ Magnetar Spindown & \cite{corsi2009} & $7\times10^{-24}$ & 100 & 175 & 2300 & $1.8\times10^{-24}$\\ Collapse to BH & \cite{novak1998} & $4\times10^{-23}$ & 10 & 100 & 1300 & $1\times10^{-24}$ \end{tabular} \caption{Gravitational wave predictions for the three different regimes in this magnetar model and applied to future observatories. The distances quoted are luminosity distances. The magnetar spindown values are calculated using Equation 14 in \citet{corsi2009}.} \label{grav1} \end{table*} Systems of the kind we have considered represent interesting sources of gravitational waves as there are predicted signals for all of the stages this system can go through: inspiral, magnetar spindown and final collapse to BH.
In Table \ref{grav1}, we show the distances out to which each phase would be visible, assuming the amplitude ($h$) of the gravitational waves is inversely proportional to distance for Advanced LIGO (AdLIGO, with a sensitivity of $h \sim 4 \times 10^{-24}$), the Large Cryogenic Gravitational Telescope \citep[LCGT, comparable sensitivity to AdLIGO;][]{kuroda2010} and the Einstein Telescope \citep[ET, $h \sim 3 \times10^{-25}$;][]{hild2011}. The gravitational wave amplitude is quoted for a distance of $z\sim0.1$ or 390 Mpc. The magnetar phase prediction is an upper limit assuming a spin period of 1 ms, $I_{45}=1.5$ for a binary merger progenitor, and an ellipticity $\epsilon=1$. AdLIGO predictions by \cite{abadie2010} are for NS-NS mergers. Using the lowest and highest possible rates for NS-NS mergers per Milky Way Equivalent Galaxy from \cite{abadie2010}, it is possible to predict the number of unstable magnetars (i.e. one source giving two distinct gravitational wave signals) we might expect to detect with AdLIGO and ET. To detect all the stages for the formation and collapse of a magnetar, AdLIGO would require it to be at a distance of $\sim$100 Mpc and ET would require $\sim$1300 Mpc. Within these volumes there is predicted to be a NS-NS merger rate of $2\times10^{-5}$ -- $0.08$ yr$^{-1}$ for AdLIGO and $10$ -- $4\times10^{5}$ yr$^{-1}$ for ET. However, the rates need modification as not all NS-NS mergers will lead to an unstable magnetar which will give both signals. From the analysis in this paper, only 11 SGRBs in the total sample of 28 magnetar candidates (39\%, assuming NS-NS mergers always produce a magnetar) are thought to form unstable magnetars, giving rates of $8\times10^{-6}$ -- $0.03$ yr$^{-1}$ for AdLIGO and $4$ -- $2\times10^{5}$ yr$^{-1}$ for ET. Therefore, it is unlikely that AdLIGO or LCGT will observe both the formation and collapse of an unstable magnetar but ET should detect many cases.
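The rate bookkeeping above is a one-line scaling: the quoted unstable-magnetar rates follow from the NS-NS merger rates within each horizon multiplied by the 11/28 unstable fraction. A trivial check (function and variable names are ours):

```python
def unstable_rate(nsns_low, nsns_high, n_unstable=11, n_sample=28):
    """Scale an NS-NS merger rate range (events/yr within a detector
    horizon) by the fraction of magnetar candidates inferred to be
    unstable (11 of 28, ~39%)."""
    f = n_unstable / n_sample
    return f * nsns_low, f * nsns_high

adligo = unstable_rate(2e-5, 0.08)   # -> roughly (8e-6, 0.03) per yr
et = unstable_rate(10, 4e5)          # -> roughly (4, 2e5) per yr
```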
On a more optimistic note, \cite{bausswein2012} estimate that AdLIGO will be able to detect a post-merger signal associated with a newly formed massive NS with a rate of 0.015 -- 1.2 yr$^{-1}$. \cite{shibata2006} also study different masses relative to the maximum mass of a NS. They determined that if $M<M_{\rm max}$ then the NS will emit gravitational waves during the magnetar spindown phase until it is a stable sphere, and collapse to a black hole is dependent on the gravitational wave emission (possibly collapsing within 50 ms) or on forces such as magnetic braking. In this case, they predict that advanced gravitational wave detectors such as AdLIGO will be able to observe these events out to 50 Mpc. Alternatively, if $M\sim M_{\rm max}$, then it rapidly collapses to a spherical shape and hence is more likely to create a stable NS which may collapse at late times due to magnetic braking. The gravitational waves from the more massive NS would be detectable to 10 Mpc. In both \cite{baiotti2008} and \cite{shibata2006}, instabilities in the NS formed by a compact merger produce detectable gravitational waves in contrast to the spherical collapse model of \cite{piro2011}. However, \cite{piro2011} showed that accretion may have an important effect on the gravitational wave signal. Therefore, these objects are potentially important sources of gravitational waves and further analysis combining all these factors and the new limits on maximum NS masses is required. The predictions by \cite{metzger2010} do not take into account the loss of energy via gravitational waves and this may play a significant role for the formation of a magnetar via the merger of two NSs. Some of our candidates have shorter plateau durations than predicted by \cite{metzger2010}; however, if the energy losses via gravitational waves are more significant then the magnetar will spin down more rapidly.
\section{Conclusions} We have analysed the BAT-XRT lightcurves of all the {\it Swift} GRBs with prompt durations $T_{90} \le 2$ s detected until May 2012. About half of these SGRBs require fitting with a broken power-law model showing a plateau phase. Although the plateau phases show many similarities with those observed in LGRB lightcurves, they are typically orders of magnitude earlier. The initial temporal indices ($\alpha_{1}$ and $\alpha_{2}$) are comparable to those found for the ``canonical'' LGRBs but there is much more variation in the final decay ($\alpha_{3}$). The correlation between luminosity and duration of the plateau phase is found to be consistent with the correlation identified for ``canonical'' LGRB lightcurves by \cite{dainotti2010}. Following on from the study of GRB 090515, this work has shown that the X-ray lightcurves of some SGRBs considered could be explained with energy injection from a magnetar which can collapse to form a BH. 18 firm candidates ($64\%$) and 10 possible candidates were found. Of the 18 firm candidates, 10 are thought to collapse to form a BH and when including possible candidates, 11 out of 28 magnetar candidates may collapse to form a BH. This implies that $29$--$56\%$ of events forming magnetars would collapse to a BH within the first few hundred seconds. In some cases the magnetar plateau phase is not directly observed as it occurs prior to the XRT observations. This predicts plateau emission that may be observable with future missions that are able to slew faster than {\it Swift}. The X-ray fluxes at 1000 s and 10000 s are typically higher for the stable magnetar candidates. The late time fluxes are significantly lower for the unstable magnetar cases. There is excess emission in the X-ray afterglows not observed in the optical afterglows for many of the magnetar sample.
Many of the magnetar candidates lie within or close to the predicted regions for plateau luminosity and duration for newly formed magnetars given in \cite{metzger2010}. Accretion onto the newly formed magnetar formed by a NS-NS binary merger has a negligible effect on the spin periods and hence the rotational energy budget of the magnetar. However, it can be shown that accretion can have a significant effect for collapsar progenitors. This may explain late time flares for collapsar progenitors and our calculations suggest the rotational energy budget could exceed $10^{53}$ erg for some combinations of initial spin periods and magnetic fields. The unstable magnetar candidates, those which collapse to form a BH, are potentially accreting more material than the stable candidates. We suggest this is an additional solution for why they collapse at late times which would work alongside the theory that the magnetar spins down to a critical point where it can no longer support its mass using rotation. These objects are highly interesting targets for future gravitational wave observatories as they are predicted to emit gravitational waves during merger, the magnetar phase (likely to be increased via accretion and bar mode instabilities) and, in some cases, the final collapse to form a BH. In this paper, we have focused on NS-NS merger progenitors, however the accretion induced collapse of a WD could also produce a SGRB and leave behind a rapidly rotating magnetar with similar X-ray emission properties. Among other observational signatures, the very different gravitational wave signals between these events may someday allow these progenitors to be distinguished; however, the inspiral remains the most luminous phase of gravitational wave emission. For the candidates which form a stable magnetar, \cite{duncan1992} showed that the amount of energy available for an SGR giant flare is $E~\sim~3\times10^{47}~B_{15}^{2}$ erg.
Hence a young magnetar, with a magnetic field of $B_{15}\sim 10$, could produce a giant flare with an energy of $3\times10^{49}$ erg. This value is comparable to the isotropic energy of some SGRBs \citep[e.g. GRB 080905A at $z\sim0.12$,][]{rowlinson2010a} so would be observable in the local universe. Both merger and giant flare events are very rare; however, considering these models it is possible (although very unlikely) that in the future we may have two spatially coincident SGRBs. This has also been proposed for LGRBs by \cite{giannios2010} and they suggest that these magnetar candidates could be identified by discovering an old spatially coincident radio GRB afterglow in nearby galaxies. We have shown that a model of SGRB production from binary NS mergers that result in the formation of a magnetar can explain the plateaus seen in many SGRB X-ray lightcurves. Although this is not conclusive proof of such a model, it would tie in with the evidence for late time central engine activity in SGRBs and may have important observational consequences. \section{Acknowledgements} AR acknowledges funding from the Science and Technology Facilities Council. This work makes use of data supplied by the UK {\it Swift} Science Data Centre at the University of Leicester and the {\it Swift} satellite. {\it Swift}, launched in November 2004, is a NASA mission in partnership with the Italian Space Agency and the UK Space Agency. Swift is managed by NASA Goddard. Penn State University controls science and flight operations from the Mission Operations Center in University Park, Pennsylvania. Los Alamos National Laboratory provides gamma-ray imaging analysis. BDM is supported by NASA through Einstein Postdoctoral Fellowship grant number PF9-00065 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060.
\section{Introduction} The question of whether all of us, living humans, descend exclusively from an anatomically modern African population which completely replaced archaic populations in other continents, or if Africans could have interbred with these local hominids has been the subject of a long-lasting and interesting debate. The first of these possibilities, known as the Out of Africa model, is based mainly on genetic evidence \cite{cann} further supported by paleontological \cite{stringer} and archaeological findings \cite{mcbreartybrooks}. The latter, known as the Multiregional model, has on the contrary been supported more by morphological studies \cite{thornewolpoff}, but recently it has also been found consistent with genetic data \cite{templeton2005}. A third, intermediate possibility, known as the assimilation model \cite{fagundes2007}, suggests that Africans may have interbred with local archaic hominids to a limited extent. Deciding which model correctly describes the origin of \textit{Homo sapiens} is obscured by the intricacies of the statistical methods proposed for evaluating the models themselves. Examples of such intricate methods, their conflicting conclusions and subsequent debate are given in \cite{templeton2005,fagundes2007,templeton2010}. In this paper we describe, with a simple and realistic model, the dynamics of two subpopulations -- Africans and Neanderthals -- interbreeding at a slow rate. In particular, we quantitatively determine the frequency of interbreeding events necessary for non-African living humans to have between 1 and $4\%$ nuclear DNA of Neanderthal origin, according to the discovery of Green \textit{et al} \cite{greenetal}. Among other important achievements, the recent seminal paper by Green \textit{et al} provides the first direct evidence of interbreeding of modern humans with archaic hominids, Neanderthals in this case.
By direct evidence we mean having sequenced Neanderthal nuclear DNA and showing that this DNA is more similar to nuclear DNA of living non-Africans than to nuclear DNA of living Africans. Of course, the findings of Green \textit{et al} await replication by the scientific community. Improvements in the resolution of the genome sequencing, in the comparison with present day individuals and in DNA sequencing of other fossils classified as Neanderthals, \textit{H. erectus}, \textit{H. floresiensis} and modern humans are most welcome. Based on their findings and on archaeological evidence \cite{baryosef}, it was suggested in \cite{greenetal} that interbreeding between anatomically modern Africans and Neanderthals might have occurred in the Middle East before expansion of modern Africans into Eurasia, at a time in which both coexisted there. This hypothesis is assumed in this paper, allowing inference of the only parameter in the model, the rate of exchange of individuals between Africans and Neanderthals, and giving some idea of the size of the total population involved in the interbreeding. The model will be fully explained in the next section, but we anticipate here its main features. Total population size is assumed fixed, but African and Neanderthal subpopulation sizes fluctuate according to the neutral (i.e. Africans and Neanderthals are supposed to have the same fitness) Wright-Fisher model \cite{ewens} for two alleles at a single locus. We also assume no biological barriers to interbreeding and no strong hypotheses on the initial composition of the population. Gene flow between subpopulations is implemented by assuming that a fixed number $\alpha$ of pairs of individuals per generation is exchanged between them.
The model is characterized by a deterministic component -- a system of two linear ordinary differential equations (ODEs) -- and a stochastic component -- a realization of the Wright-Fisher drift process to be introduced as an external function in the ODEs. The ODEs are exactly solvable, up to definite integrals depending on the stochastic part. The stochastic part can be dealt with by simple simulations. Assuming a random initial fraction of Africans, our main result is the conditional probability density distribution for the exchange parameter $\alpha$, illustrated in Fig. \ref{novafig}. The condition to be satisfied is that, after interbreeding with Neanderthals, a fraction of 1 to $4 \%$ of Neanderthal genes, as suggested by \cite{greenetal}, will be present in the African population. Fig. \ref{novafig} shows this condition is attained with maximum probability for $\alpha_{\mathrm{max}} \approx 0.013$, i.e. one pair of individuals is exchanged between the two subpopulations every 77 generations. The mean value of $\alpha$ is $\alpha_{\mathrm{mean}} \approx 0.083$, which corresponds to one pair of individuals exchanged every 12 generations. Such conclusions are based on a solvable mathematical model and simple simulations, avoiding statistical in favor of probabilistic methods. Application of probabilistic methods reminiscent of Statistical Mechanics to biological problems has been abundant in the literature of the Physics and Mathematics communities, but penetration into Biology and Anthropology has proved more difficult. In particular, both authors of this paper have previously and separately anticipated \cite{nm1,nm2,nm3,serva1,serva2,serva3} that evidence based on mitochondrial DNA (mtDNA) could not rule out the possibility of interbreeding among modern humans and other archaic forms.
We hope that the direct experimental proof of such interbreeding provided by \cite{greenetal} can be the occasion for better acceptance of methods such as the ones we will discuss. While writing the present paper a new report \cite{reichetal} concerning the interbreeding of modern humans with another archaic hominid group was published. Results have been obtained by studying the fossil nuclear DNA extracted from the finger of a single individual previously known only from its mtDNA \cite{krauseetal}. The individual is considered a representative of an archaic group of hominids (Denisovans) different from both moderns and Neanderthals. According to the authors, Denisovan nuclear DNA is present in living Melanesians in a proportion of about $6 \%$. Very little is known about the morphology of Denisovans, as complete fossils belonging to this group are not yet known. Although we still have no data concerning the size of populations and the duration of coexistence, the model described in this paper might be used to describe the interbreeding between modern humans and Denisovans. \section{The model} Consider a population of constant size equal to $N$ individuals. We suppose that the population is divided into two subpopulations we call 1 and 2, generations are non-overlapping and the number of generations is counted from past to future. Reproduction is sexual and diploid. We also suppose that the subpopulations have lived isolated from each other for a long time before they meet. At generation $g=0$, when subpopulations meet, the total population then consists of two groups, each consisting of individuals of a pure race. Starting at this time subpopulations will share a common environment for a long period. We do not suppose that the numbers $N_1(g)$ and $N_2(g)$ of individuals at generation $g$ in each of the two subpopulations are constant, although their sum $N_1(g)+N_2(g)=N$ is.
Instead, $N_1(g+1)$ and $N_2(g+1)$ are random variables which can be determined by the Wright-Fisher rule, i.e., each of the $N$ individuals of generation $g+1$ independently chooses to belong to subpopulation 1 with probability $N_1(g)/N$ and to subpopulation 2 with probability $N_2(g)/N$. After that, both father and mother of an individual in generation $g+1$ are uniformly randomly chosen among all males and females of generation $g$ in the subpopulation he/she has chosen. With such a reproduction mechanism the numbers $N_1(g)$ and $N_2(g)$ fluctuate as generations pass until one of the subpopulations becomes extinct. This stochastic process is the same as in the simplest version of the neutral (i.e. no selective advantage for either allele) Wright-Fisher model for two alleles at a single locus \cite{ewens}. Both the time to extinction and which of the two subpopulations becomes extinct are random. If $x(0)= N_1(0)/N$ is the initial fraction of individuals of subpopulation 1, then subpopulation 1 will survive with probability $x(0)$ and the mean number of generations until extinction is $-2N [x(0) \ln x(0) + (1-x(0)) \ln (1-x(0))]$ (see \cite{ewens}). As the mean number of generations for extinction of one subpopulation scales with $N$, it is reasonable to measure time not in generation units, but in generations divided by $N$. From here on, we will refer to $t=g/N$ simply as \textit{time} and we will refer to $x(t)=N_1(t)/N$ in a realization of the above stochastic process as the \textit{history} of the Wright-Fisher drift process. In the previously described dynamics no mechanism of gene admixture between subpopulations was present and we add it as follows. We assume that at each generation a number $\alpha$ of random individuals from subpopulation 1 migrates to subpopulation 2 and, vice versa, the same number of random individuals from subpopulation 2 migrates to subpopulation 1. In other words, $\alpha$ \textit{pairs} per generation are exchanged.
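The drift process just described is straightforward to simulate. The stdlib-only sketch below (with illustrative values of $N$ and $x(0)$; the binomial draw is done naively) checks the classical result that subpopulation 1 survives with probability $x(0)$:

```python
import random

def wright_fisher_history(N, x0, rng):
    """Neutral Wright-Fisher drift: each of the N offspring independently
    joins subpopulation 1 with probability N1(g)/N.  Returns the history
    of fractions x(g) until one subpopulation is extinct (x = 0 or 1)."""
    n1 = round(N * x0)
    history = [n1 / N]
    while 0 < n1 < N:
        p = n1 / N
        n1 = sum(rng.random() < p for _ in range(N))  # binomial(N, p) draw
        history.append(n1 / N)
    return history

# Sanity check: subpopulation 1 should fix with probability ~ x(0) = 0.3.
rng = random.Random(42)
finals = [wright_fisher_history(60, 0.3, rng)[-1] for _ in range(1000)]
survival = sum(finals) / len(finals)  # each final x is 0.0 or 1.0
```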
We strongly underline that $\alpha$ is a number of order 1, not of order $N$. Migrants will contribute their genes to the next generation just like any other individual in their host subpopulation. Their offspring, if any, are considered normal members of the host subpopulation. The parameter $\alpha$ introduced above may be non-integer and also less than 1. In such cases we interpret it as the average number of pairs of exchanged individuals per generation. By the hypothesis of isolation between subpopulations for a long time before $t=0$, we may suppose that in many {\it loci} the two subpopulations will have different and characteristic alleles. Therefore, we can assume that there exists a large set of alleles which are exclusive to subpopulation 1, and the same for subpopulation 2. We will refer to these alleles respectively as \textit{type 1} and \textit{type 2}. At any time $t\geq0$ any individual will be characterized by his/her fractions of type 1 and type 2 alleles. We then define $y_1(t)$ as the \textit{mean fraction of type 1 alleles in subpopulation 1 at time $t$} and $y_2(t)$ \textit{as the mean fraction of type 1 alleles in subpopulation 2 at time $t$}. The \textit{mean} here is due to the fact that individuals in subpopulation 1 in general have different allelic fractions, but $y_1(t)$ is calculated by averaging allelic fractions among all individuals in subpopulation 1, and similarly for $y_2(t)$. Of course $y_1(0)=1$ and $y_2(0)=0$. Similar quantities might have been defined for type 2 alleles, but they are easily related to $y_1(t)$ and $y_2(t)$ and thus unnecessary. It is now possible to derive the basic equations relating the mean allelic fractions at generation $g+1$ with the mean allelic fractions at generation $g$. In doing so we will make the assumption that the $\alpha$ individuals of subpopulation 1 migrating to subpopulation 2 all have an allelic fraction equal to $y_1(t)$.
The analogous assumption will be made for all the individuals of subpopulation 2 migrating to subpopulation 1. Of course the above assumption of exchanged individuals all having the mean allelic fractions in their subpopulations is a very strong one and it is not strictly true. Nonetheless, it is indeed a very good approximation if $\alpha$ is much smaller than $1/\log_2 N$. In fact, $1/\alpha$ is the number of generations between two consecutive exchanges of individuals. As the typical number of generations for genetic homogenization in a population of $N$ individuals with diploid reproduction and random mating is $\log_2 N$, see \cite{derrida1,derrida2,derrida3,chang}, the condition that $\alpha$ is much smaller than $1/\log_2 N$ ensures that subpopulations 1 and 2 are both rather homogeneous at the exchange times. The allelic fraction $y_1(t+1/N)$ will be equal to $y_1(t)$ plus the contribution of type 1 alleles from the immigrating individuals of subpopulation 2 and minus the loss of type 1 alleles due to emigration. We note that these loss and gain terms are both proportional to $\alpha$ and inversely proportional to the number $N x(t)$ of individuals in subpopulation 1. Similar considerations apply to $y_2(t+1/N)$. In symbols: \begin{equation} \label{eqdif} \left\{\begin{array}{rcl} y_1(t+\frac{1}{N}) &=& \left(1- \frac{\alpha}{N x(t)}\right) \,y_1(t) \,+\, \frac{\alpha}{N x(t)} \, y_2(t) \\ y_2(t+\frac{1}{N}) &=& \frac{\alpha}{N (1-x(t))} \, y_1(t) \,+\, \left(1- \frac{\alpha}{N (1-x(t))}\right) \,y_2(t) \end{array}\right. \;. \end{equation} The above equations, after taking the $N \rightarrow \infty$ limit, become a system of linear ODEs \begin{equation} \label{odes} \left\{\begin{array}{rcl} y_1'(t) &=& - \frac{\alpha}{x(t)} \, (y_1(t)-y_2(t)) \\ y_2'(t) &=& \frac{\alpha}{1-x(t)} \, (y_1(t)-y_2(t)) \end{array}\right. \;.
\end{equation} We stress here that we think of $x(t)$ as a stochastic function obtained by realizing the Wright-Fisher drift, but Eqs. (\ref{eqdif}) and (\ref{odes}) still hold if $x(t)$ is any description of the history of the size of subpopulation 1, be it stochastic or deterministic. For example, the possibility of individuals in subpopulation 1 being fitter than individuals in subpopulation 2 has been explored, still using (\ref{eqdif}), in another work \cite{biomat2011}. Eqs. (\ref{odes}) can be exactly solved up to integrals depending on $x(t)$. Although such integrals cannot be calculated in general, the exact solution can be used to give a qualitative view of the behaviour of functions $y_1(t)$ and $y_2(t)$. It turns out that $y_1$ is a decreasing function, whereas $y_2$ increases. The decrease and increase rates are larger when $\alpha$ is large and, despite symmetry in our immigration assumption, gene flow between subpopulations is in general asymmetrical. Such features are shown in appendix A. Moreover, Eqs. (\ref{eqdif}) lend themselves to simple and rapid numerical solution for quantitative purposes. In appendix B, we address the question of comparing numerical solutions of Eqs. (\ref{eqdif}) and direct simulation of all stochastic processes involved. We see that there is good agreement between simulations and numerical solutions of Eqs. (\ref{eqdif}). In all that follows, unless explicitly stated, we will use results obtained by numerically solving Eqs. (\ref{eqdif}), because the computer time for numerical solution is much smaller than for simulation. \section{Estimating the exchange parameter} We know that Neanderthals were extinct and, according to \cite{greenetal}, before disappearing they interbred with modern humans. 
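As a sanity check on the recursion of Eqs. (\ref{eqdif}): for a constant history $x(t)=x$ the combination $x\,y_1+(1-x)\,y_2$ is conserved exactly at each generation, so both allelic fractions must relax to the common value $x$. A minimal sketch with illustrative parameter values:

```python
def admixture_step(y1, y2, x, alpha, N):
    """One generation of Eq. (eqdif): alpha pairs of individuals are
    exchanged between subpopulations of sizes N*x and N*(1-x)."""
    f1 = alpha / (N * x)           # fraction of subpopulation 1 replaced
    f2 = alpha / (N * (1.0 - x))   # fraction of subpopulation 2 replaced
    return (1 - f1) * y1 + f1 * y2, f2 * y1 + (1 - f2) * y2

# Constant history x(t) = 0.25 (alpha and N are illustrative choices):
x, alpha, N = 0.25, 0.1, 1000
y1, y2 = 1.0, 0.0
for _ in range(200_000):
    y1, y2 = admixture_step(y1, y2, x, alpha, N)
# y1 decreases and y2 increases towards the conserved mean x = 0.25.
```

Note that the conservation holds step by step: multiplying the two update lines by $x$ and $1-x$ respectively makes the $\pm\,\alpha(y_1-y_2)/N$ exchange terms cancel.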
Although comparisons between nuclear DNA of Neanderthals and living humans have so far been limited to a sample of only 3 Neanderthals and 5 living humans, the authors of \cite{greenetal} observed that all three non-Africans in their sample are equally closer to the Neanderthals than the two Africans. They estimate that non-African living humans possess $1$ to $4\%$ of their nuclear DNA derived from Neanderthals. Supposing that Africans are subpopulation 1 in our model, this means that the final value of $y_1$ should lie between $0.96$ and $0.99$ in order to comply with their experimental conclusions. We will refer in the following to the interval between 0.96 and 0.99 as the \textit{experimental interval} for the final value of $y_1$. As we do not know the composition of the total population at the time the two subpopulations met, we will take the initial fraction $x(0)$ of Africans as a random number. With this hypothesis, the only free parameter is the exchange rate $\alpha$. As can be seen in Fig. S1, the value of $\alpha$ largely influences the final value of $y_1$. Furthermore, in both Figs. S1 and S2 it can be seen that with $\alpha=1$ or $\alpha=0.1$ the final values of $y_1$ tend to be too small to be compatible with the experimental interval. We stress that these figures are based only on two realizations of the history $x(t)$ and a single value $x(0)=0.5$. In order to produce estimates of $\alpha$ we must generate a large number of histories $x(t)$ with many values of $x(0)$ and for each of these simulated histories recursively solve Eqs. (\ref{eqdif}) in order to determine the associated final value of $y_1$. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{novafig.pdf} \caption{\label{novafig} The $\alpha$ probability density, i.e. the probability density that the final value of $y_1$ is in the experimental interval 0.96 - 0.99 given a value of $\alpha$.
The plot was built by obtaining one million ``successful'' pairs $(x(t),\alpha)$ such that the final value of $y_1$ obtained by solving Eqs. (\ref{eqdif}) lies in the experimental interval. These pairs were obtained out of a total of around 140 million simulations with random $x(0)$ uniformly distributed between 0 and 0.8 and $\alpha$ uniformly distributed between 0 and 2. For the successful pairs we then computed the fraction associated with any given $\alpha$. In the inset we plot the probability density for the theoretical final values of $y_1$ for three different values of $\alpha$. The densities are empirically determined by simulating 400,000 Wright-Fisher drift histories $x(t)$ with random $x(0)$ uniformly distributed between 0 and 1 and selecting the histories in which subpopulation 2 is extinct. The empty dots (blue) are data for $\alpha=1$, the full dots (purple) are data for $\alpha=0.1$ and the full curve (black) is for $\alpha=0.01$.} \end{figure*} The inset in Fig. \ref{novafig} is realized by producing 400,000 Wright-Fisher drift histories $x(t)$ with random $x(0)$ uniformly distributed between 0 and 1. For all these histories we compute the final theoretical value of $y_1$ by solving Eqs. (\ref{eqdif}) using the three values $\alpha=1$, $\alpha=0.1$ and $\alpha=0.01$. Therefore, for each of the three values of $\alpha$ we have about 200,000 data points which allow inference of the probability density for the final value of $y_1$. The data plotted in the inset of Fig. \ref{novafig} show that for $\alpha=1$ the probability that the final value of $y_1$ lies in the experimental interval is approximately equal to $8.1\%$. For $\alpha=0.1$ the corresponding probability is approximately $21.5\%$ and for $\alpha=0.01$ it is approximately $34.0\%$. In all three cases the density of the final values of $y_1$ is rather broad, meaning that there is a large probability that the final value of $y_1$ does not lie in the experimental interval.
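The estimation procedure described above can be compressed into a short self-contained sketch: draw a random initial composition, run the Wright-Fisher drift while iterating Eqs. (\ref{eqdif}) along the way, and count the runs in which subpopulation 2 goes extinct with a final $y_1$ in the experimental interval. The small $N$ and low number of trials below are chosen purely for speed and are not meant to reproduce the quantitative densities of Fig. \ref{novafig}:

```python
import random

def accept_fraction(alpha, trials=300, N=80, seed=1):
    """Fraction of runs in which subpopulation 2 (Neanderthals) goes
    extinct AND the final y1 lies in the experimental interval 0.96-0.99,
    with the initial composition drawn uniformly."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        n1 = rng.randint(1, N - 1)   # random initial composition
        y1, y2 = 1.0, 0.0
        while 0 < n1 < N:
            x = n1 / N
            # One generation of Eq. (eqdif) ...
            f1, f2 = alpha / (N * x), alpha / (N * (1 - x))
            y1, y2 = (1 - f1) * y1 + f1 * y2, f2 * y1 + (1 - f2) * y2
            # ... followed by one Wright-Fisher drift step.
            n1 = sum(rng.random() < x for _ in range(N))
        if n1 == N and 0.96 <= y1 <= 0.99:   # subpopulation 2 extinct
            hits += 1
    return hits / trials
```

With no exchange ($\alpha=0$) the acceptance is exactly zero, since $y_1$ stays at 1; small positive values of $\alpha$ yield a nonzero acceptance, in line with Fig. \ref{novafig}.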
The above information shows that the experimental data are better explained by values of $\alpha$ much smaller than 1. From Fig. \ref{novafig} we see that the value of $\alpha$ which explains the experimental data with the largest probability is $\alpha_{\mathrm{max}} \approx 0.013$. In order to produce that plot, we simulated a large number of Wright-Fisher histories $x(t)$ with random $x(0)$ uniformly drawn between 0 and 0.8 and random values of $\alpha$ uniformly distributed between 0 and 2. From these data we selected the histories in which subpopulation 2 was extinct and such that the final theoretical value of $y_1$ lay in the experimental interval. In this way we can empirically determine the probability that the final value of $y_1$ lies in the experimental interval as a function of $\alpha$. We also see that the probability density for $\alpha$ is rather asymmetrical around $\alpha_{\mathrm{max}}$, with values $\alpha \ge\alpha_{\mathrm{max}}$ contributing with large probability. This asymmetry is reflected in the fact that the mean value is $\alpha_{\mathrm{mean}} \approx 0.083$, much larger than $\alpha_{\mathrm{max}}$. A technical detail in producing Fig. \ref{novafig} is that the random values of $x(0)$ are chosen with uniform distribution in the interval $0 \le x(0)\leq 0.8$, avoiding values either close to or inside the experimental interval. This choice is related to the assumption of \textit{slow} rather than \textit{rapid} interbreeding between Africans and Neanderthals. See appendix C and Fig. S3 for a more detailed explanation of that choice. \section{Other results} O. Bar-Yosef \cite{natgeo} compares the occupation of the Middle East by Neanderthals and Africans to a long football game. The occupants of the caves of Skhul and Qafzeh in Israel alternated between Africans and Neanderthals several times over a period of more than 130,000 years.
Although the model described before becomes independent of the total population $N$, we may obtain some hints on the size of $N$ if we accept the constraint that Neanderthals were not extinct in the Middle East for at least 130,000 years. By taking random values for $x(0)$ between 0 and 0.8 and $\alpha$ between 0 and 2, we obtained a sample of 790 events such that Neanderthals were extinct and $y_1$ lay in the experimental interval. For each of these events we recorded the time it took for the Neanderthals to go extinct, and we found that the mean extinction time was 0.58 (in units of $N$ generations). If we take this mean value as typical, assume that one generation is 20 years and equate $0.58\,N$ generations to 130,000 years, we get $N \approx 11,200$ individuals. The whole distribution of extinction times in the above sample is shown in Fig. S4. In Fig. S3 we plot the same sample of events in the plane $x(0)$ - $\alpha$. We see that smaller values of $x(0)$ are correlated with smaller values of $\alpha$, and also that the events such that $y_1$ lies in the experimental interval are concentrated around the largest values of $x(0)$. The mean value of $x(0)$ for the whole sample is 0.64. Using the same sample we may also explore the values of $y_2$ at the time Neanderthals went extinct, i.e. the fraction of African DNA in the last Neanderthals which interbred with Africans. Fig. S5 shows a histogram of the $y_2$ values for the events in the sample. Observe that typical values of $y_2$ are much larger than the values of $1-y_1$, which range from 0.01 to 0.04. This is due to the fact that in most events such that $y_1$ falls within the experimental interval, Africans were the majority of the population for most of the time. According to the explanation in appendix A, this implies that, despite symmetry in the number of exchanged individuals, the transfer of African alleles to Neanderthals is larger than the transfer of Neanderthal alleles to Africans.
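The estimation logic above can be reproduced in spirit with a few lines of code. The following stand-alone sketch is illustrative only and is not the code used for this paper: population size, trial count and random seed are arbitrary, and we condition only on Neanderthal extinction rather than also on the final value of $y_1$, so the resulting mean extinction time will differ from the 0.58 of our conditioned sample. It simulates neutral Wright-Fisher drift for the fraction $x(t)$, records extinction times in units of $N$ generations, and converts 130,000 years of coexistence into an estimate of $N$.

```python
import random

def wright_fisher_extinction(n_pop, x0, rng):
    """Neutral Wright-Fisher drift for the African fraction x(t).

    Each generation, each of the N offspring picks an African parent with
    probability equal to the current African fraction (binomial resampling).
    Returns (generations_until_fixation, african_side_survived)."""
    k = round(n_pop * x0)
    t = 0
    while 0 < k < n_pop:
        p = k / n_pop
        k = sum(1 for _ in range(n_pop) if rng.random() < p)
        t += 1
    return t, k == n_pop

rng = random.Random(0)
n_pop, years_per_gen, coexistence_years = 100, 20, 130_000
times = []
for _ in range(300):
    x0 = rng.uniform(0.0, 0.8)
    t, african_survived = wright_fisher_extinction(n_pop, x0, rng)
    if african_survived:         # keep histories in which subpopulation 2 dies out
        times.append(t / n_pop)  # extinction time in units of N generations

mean_t = sum(times) / len(times)
# Equating (mean extinction time) * N generations, at 20 years per generation,
# to 130,000 years of coexistence yields an estimate of N; with the paper's
# conditioned sample, mean_t ~ 0.58 and hence N ~ 11,200.
n_estimate = coexistence_years / (years_per_gen * mean_t)
print(f"mean extinction time ~ {mean_t:.2f} N generations, N ~ {n_estimate:.0f}")
```

The conversion in the last lines is exactly the arithmetic used in the text: $130{,}000 / (20 \times 0.58) \approx 11{,}200$.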
By simulating the complete reproduction and individual exchange process described in appendix B, we were also able to determine empirically the conditional probability -- the condition being that the fraction of African DNA in Africans is in the experimental interval -- that the most recent common ancestors of the population for the maternal (mtDNA) and paternal (Y chromosome) lineages are both African. We ran several simulations with populations of 100 individuals, random values of $\alpha$ uniformly distributed between 0.01 and 0.2 and random $x(0)$ constrained to be smaller than 0.8. In each simulation we waited until all male individuals had the same paternal ancestor and all female individuals had the same maternal ancestor. We selected those simulations in which subpopulation 1 survived and $y_1$ lay in the experimental interval. Out of 96 simulations satisfying the above criteria, in only 7 of them were the surviving Y chromosome and mtDNA lineages not both from ancestors belonging to subpopulation 1. Therefore, according to our interbreeding model, the conditional probability of an African origin of both mtDNA and Y chromosome can be estimated to be of order $0.93$. \section{Discussion and conclusions} Large samples of mtDNA \cite{cann} and Y chromosomes \cite{underhill} of living humans have been sequenced. The small variation among living humans is compatible with a single ancestral woman (mtDNA) and a single ancestral man (Y chromosome) for the whole population, probably both of African origin and living about 100-200 thousand years ago. These facts have been interpreted as proof of the Out of Africa model, but our interbreeding model is perfectly compatible with them. In fact, conditioned on $y_1$ being in the experimental interval, our model yields a large probability of 93\% for an African origin of both mtDNA and Y chromosome. More recently \cite{kringsetal}, the whole mtDNA of a few Neanderthal fossils became available.
The average number of pairwise differences in mtDNA between a Neanderthal and a living human is significantly larger than the average number of pairwise differences in mtDNA among living humans. This has been considered a further confirmation of the claim that Neanderthals belong to a separate species, see e.g. \cite{curratexcoffier}, and further support for the Out of Africa model. Before any data on Neanderthal nuclear DNA were available, both authors of this paper had separately anticipated \cite{serva1,serva2,serva3,nm1,nm2,nm3} that the above facts are all compatible with anatomically modern Africans and Neanderthals being part of a single interbreeding population at the times they coexisted. Some further details about these claims are given in appendix E. In the framework of the model proposed in this article, we could infer that the 1 to 4\% fraction \cite{greenetal} of Neanderthal DNA in present day non-Africans is explained with maximum probability by assuming that the African and Neanderthal subpopulations exchanged only 1 pair of individuals in about 77 generations. The mean value of the exchange parameter in the model, however, corresponds to a larger frequency of about 1 pair of individuals exchanged every 12 generations. We also estimated the mean number of generations for Neanderthal extinction in the Middle East to be approximately $0.58 N$. Together with the fact that Neanderthals and Africans seem to have coexisted in the Middle East for at least 130,000 years, this allows us to estimate the total population $N$ in the model to be of order $10^4$ individuals. Although Green \textit{et al} have observed in \cite{greenetal} gene flow from Neanderthals into Africans, they have not observed the reverse flow. This fact is compatible both with our results and with the fact that living Europeans are as close to Neanderthals as living Asians or Oceanians.
The explanation is that the Neanderthal specimens whose DNA was sequenced in \cite{greenetal} were all excavated at European sites. It seems that only a part of the total Neanderthal population took part in the interbreeding process in the Middle East, the other part of the population remaining in Europe. The descendants of these Neanderthals, which never left Europe, either did not later interbreed with Africans when the latter came into Europe, or did so only to a very small extent. On the contrary, according to our model, see Fig. S5, we expect to find a larger fraction of African DNA in late Middle East Neanderthal fossils than the 1 to 4\% Neanderthal fraction of present non-Africans. Thus, DNA sequencing of one such fossil would be a good test of the present model. Neanderthals are implicitly considered in this work as a group within the \textit{Homo sapiens} species, and we renounce the strict Out of Africa model for the origin of our species, in which anatomically modern Africans would have replaced, without gene flow, the other hominids in Eurasia. In particular, our model is neutral in the sense that we assign the same fitness to Neanderthals and Africans. Our results show that neither strong sexual isolation between Africans and Neanderthals nor some kind of Neanderthal cognitive or reproductive inferiority is necessary to explain both their extinction and the small fraction of their DNA in most living humans. In fact, within the assumptions of the model, if two subpopulations coexist in the same territory for a sufficiently long time, only one of them survives. The fact that Neanderthals were the extinct subpopulation is then a random event. Although we do not intend to argue for any kind of Neanderthal superiority, our neutrality hypothesis is at least supported by recent results \cite{zilhaoetal,wongzilhao} by J. Zilh\~ao \textit{et al}, which claim that Neanderthals in Europe already made use of symbolic thinking before Africans arrived there.
Current knowledge about Denisovan morphology and lifestyle is much more limited than what we know about Neanderthals. In particular, we do not know whether Denisovans lived only in Siberia, where the only known fossils have been found so far, or elsewhere. Where and when these people made contact with the African ancestors of present day Melanesians is still a mystery. Nevertheless, if such a contact occurred for a sufficiently long time in a small geographical region, then the present model can be straightforwardly applied. Now that we know of our Neanderthal and Denisovan inheritances, it is time to ask whether they were the only hominids with which Africans mated. We believe that the future may still hold many surprises as Denisovans become better studied and nuclear DNA of many more Neanderthal and other hominid fossils becomes available.
\section{Introduction} This paper is devoted to the numerical approximation of measure valued solutions to the so-called aggregation equation in space dimension $d$. This equation reads \begin{equation}\displaystyle\label{EqInter} \partial_t\rho = \mathop{\rm div}\nolimits\big((\nabla_x W*\rho) \rho\big) , \qquad t>0,\quad x\in\mathbb{R}^d, \end{equation} with the initial condition $\rho(0,\cdot)=\rho^{ini}$. Here, $W$ plays the role of an interaction potential whose gradient $\nabla_x W(x-y)$ measures the relative force exerted by a unit mass localized at a point $y$ onto a unit mass located at a point $x$. This system appears in many applications in physics and population dynamics. In the framework of granular media, equation \eqref{EqInter} is used to describe the large time dynamics of inhomogeneous kinetic models, see \cite{benedetto,CCV,Toscani}. Models of crowd motion with a nonlinear term of the form $\nabla_xW*\rho$ are also addressed in \cite{pieton,pieton2}. In population dynamics, \eqref{EqInter} provides a biologically meaningful description of aggregative phenomena. For instance, the description of the collective migration of cells by swarming leads to this kind of PDE with non-local interaction, see e.g. \cite{morale,okubo,topaz}. Another example is the modelling of bacterial chemotaxis. In this framework, the quantity $S=W*\rho$ is the chemoattractant concentration, a substance emitted by the bacteria that allows them to interact with one another. The dynamics can be macroscopically modelled by the Patlak-Keller-Segel system \cite{keller,patlack}. In the kinetic framework, the most frequently used model is the Othmer-Dunbar-Alt system, the hydrodynamic limit of which leads to the aggregation equation \eqref{EqInter}, see \cite{dolschmeis,filblaurpert,NoDEA}. In many of these examples, the potential $W$ is usually mildly singular, i.e. $W$ has a weak singularity at the origin.
Because of this low regularity, smooth solutions of such systems may blow up in finite time, see e.g. \cite{Li,BV,Bertozzi2,Carrillo}. In the latter case, finite time concentration may be regarded as a very simple mathematical way to account for the aggregation of individuals, as opposed to diffusion. Since finite time blow-up of smooth solutions may occur and since equation \eqref{EqInter} conserves mass, a natural framework in which to study the existence of global in time solutions is the space of probability measures. In this regard, two strategies have been proposed in the literature. In \cite{Carrillo}, the aggregation equation is seen as a gradient flow taking values in the Wasserstein space and minimizing the interaction energy. In \cite{NoDEA,GF_dual,CJLV,lava}, this system is considered as a conservative transport equation with velocity field $\nabla_x W*\rho$. Then a unique flow, say $Z=(Z(t,\cdot))_{t \geq 0}$, can be constructed, hence allowing one to define the solution as a pushforward measure by the flow, namely $\rho=(\rho(t)=Z(t,\cdot)_\# \rho^{ini})_{t \ge 0}$. When the singularity of the potential is stronger than the mild form described above, such a construction has been achieved in the radially symmetric case in \cite{Andrea_c_toaa}, but uniqueness is then lacking. Actually, the assumptions on the potential $W$ that are needed to ensure the well-posedness of the equation in the space of measure valued solutions require a certain convexity property of the potential that allows only for a mild singularity at the origin. More precisely, we assume that the interaction potential $W\,:\,\mathbb{R}^d\to\mathbb{R}$ satisfies the following properties: \begin{itemize} \item[{\bf (A0)}] $W(x)=W(-x)$ and $W(0)=0$; \item[{\bf (A1)}] $W$ is $\lambda$-convex for some $\lambda \in \mathbb{R}$, i.e. $W(x)-\frac{\lambda}{2}|x|^2$ is convex; \item[{\bf (A2)}] $W\in C^1(\mathbb{R}^d\setminus\{0\})$; \item[{\bf (A3)}] $W$ is Lipschitz-continuous.
\end{itemize} Such a potential will be referred to as a {\it pointy} potential. Typical examples of fully attractive potentials are $W(x)=1-e^{-|x|}$, which is $(-1)$-convex, and $W(x) = |x|$, which is $0$-convex. Notice that the Lipschitz-continuity of the potential allows one to bound the velocity field: there exists a nonnegative constant $w_\infty$ such that for all $x\neq 0$, \begin{equation} \label{borngradW} |\nabla W(x)| \leq w_\infty. \end{equation} Observe also that {\bf (A3)} forces $\lambda$ in {\bf (A1)} to be non-positive: a (strictly) positive $\lambda$ would make $W$ grow at least quadratically, whereas {\bf (A3)} imposes at most linear growth. However, we shall sometimes discard {\bf (A3)} when the initial datum is compactly supported. In this case, as $W - \lambda |x|^2/2$ is convex, it is locally Lipschitz-continuous, so that $W$ is locally Lipschitz-continuous, which will be sufficient for compactly supported initial data. In that case it makes perfect sense to assume $\lambda >0$ in {\bf (A1)}. For the numerical analysis, we will assume in this case that the potential is radial, that is to say that $W$ is a function of the sole scalar $|x|$: $W(x) = \mathcal{W}(|x|)$. Although very accurate numerical schemes have been developed to study the blow-up profile for smooth solutions, see \cite{Huang1,Huang2}, very few numerical schemes have been proposed to simulate the behavior of solutions to the aggregation equation after blow-up. The so-called sticky particles method was shown to be convergent in \cite{Carrillo} and used to obtain qualitative properties of the solutions, such as the time of total collapse. However, this method is not so practical for capturing the behavior of the solutions after blow-up in dimension $d$ larger than one. In dimension $d=1$, this question has been addressed in \cite{NoDEA}.
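The two pointy potentials above lend themselves to a quick numerical sanity check, which is not part of the analysis of this paper (sample size, dimension and tolerances are arbitrary): on random pairs of points one can verify both the gradient bound \eqref{borngradW} with $w_\infty=1$ and the $\lambda$-convexity inequality of {\bf (A1)}, with $\lambda=0$ for $W(x)=|x|$ and $\lambda=-1$ for $W(x)=1-e^{-|x|}$.

```python
import math
import random

def grad_w_hat(x, y, kind):
    """Gradient of W in dimension d = 2, with the convention that it vanishes
    at the origin (the pointy potentials are C^1 only away from 0)."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    if kind == "linear":  # W(x) = |x|, 0-convex
        return (x / r, y / r)
    # kind == "exp":      # W(x) = 1 - exp(-|x|), (-1)-convex
    return (math.exp(-r) * x / r, math.exp(-r) * y / r)

lam = {"linear": 0.0, "exp": -1.0}
rng = random.Random(1)
for kind in ("linear", "exp"):
    for _ in range(20_000):
        x1, y1, x2, y2 = (rng.gauss(0, 1) for _ in range(4))
        g1 = grad_w_hat(x1, y1, kind)
        g2 = grad_w_hat(x2, y2, kind)
        # gradient bound: |grad W| <= w_inf = 1 for both examples
        assert math.hypot(*g1) <= 1.0 + 1e-12
        # lambda-convexity: <grad W(x) - grad W(y), x - y> >= lambda |x - y|^2
        lhs = (g1[0] - g2[0]) * (x1 - x2) + (g1[1] - g2[1]) * (y1 - y2)
        assert lhs >= lam[kind] * ((x1 - x2) ** 2 + (y1 - y2) ** 2) - 1e-10
print("gradient bound and lambda-convexity hold on all sampled pairs")
```

Such a check is of course no proof, but it is a convenient way to catch sign errors when implementing the velocity field.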
In higher dimension, particle methods have recently been proposed and studied in \cite{CB,Freda}, but only the convergence of smooth solutions, before the blow-up time, has been proved. Finite volume schemes have also been developed. In \cite{sinum}, the authors propose a finite volume scheme to approximate the behavior of the solution to the aggregation equation \eqref{EqInter} after blow-up and prove that it is convergent. A finite volume method for a large class of PDEs, including in particular \eqref{EqInter}, has also been proposed in \cite{CCH}, but no convergence result has been given. Finally, a finite volume scheme of Lax-Friedrichs type for general measures as initial data has been introduced and investigated in \cite{CJLV}. Numerical simulations of solutions in dimension greater than one have been obtained, allowing one to observe the behavior after blow-up. Moreover, convergence towards measure valued solutions has been proved. However, no estimate on the order of convergence has been established so far. In the current work, we provide a precise estimate of the order of convergence in Wasserstein distance for an upwind type scheme. This scheme is based on an idea introduced in \cite{NoDEA} and used later on in \cite{sinum,CJLV}. It consists in properly discretizing the macroscopic velocity so that its product with the measure solution $\rho$ is well-defined. In this paper, we introduce an upwind scheme for which this product is treated accurately, and we prove its convergence at order $1/2$ in Wasserstein distance (the definition of which is recalled below). For a given velocity field, the study of the order of convergence of the finite volume upwind scheme for the transport equation has received a lot of attention.
This scheme is known to be first order convergent in the $L^\infty$ norm for any smooth initial data in $C^2(\mathbb{R}^d)$ and for well-suited meshes, provided a standard stability condition (Courant-Friedrichs-Lewy condition) holds, see \cite{bouche}. However, this order of convergence falls down to $1/2$ in the $L^p$ norm when considering non-smooth initial data or more general meshes. This result was first proved in the Cartesian framework by Kuznetsov in \cite{Kuznetsov}. In \cite{Despres}, a $1/2$ order estimate in the $L^\infty([0,T],L^2(\mathbb{R}^d))$ norm for $H^2(\mathbb{R}^d)$ initial data has been established. Finally, in \cite{MV,caniveau}, a $1/2$ order estimate in $L^1$ has been proved for initial data in $L^1(\mathbb{R}^d)\cap BV(\mathbb{R}^d)$, whilst, for Lipschitz-continuous initial data, an estimate of order $1/2-\varepsilon$ in $L^\infty$ for any $\varepsilon>0$ has been obtained in \cite{M,caniveau}. We emphasize that the techniques used in \cite{M,MV} and \cite{caniveau} are totally different. In the former, the strategy of proof is based on entropy estimates, whereas in the latter, the proof relies on the construction and the analysis of stochastic characteristics for the numerical scheme. Finally, when the velocity field is only $L^\infty$ and one-sided Lipschitz-continuous, solutions of the conservative transport equation are defined only in the sense of measures. In this regard, Poupaud and Rascle \cite{PoupaudRascle} have proved that solutions of the conservative transport equation can be defined as the pushforward of the initial condition by a flow of characteristics. A stability estimate for such solutions was stated later in \cite{Bianchini}. In dimension $d=1$, these solutions, as introduced in \cite{PoupaudRascle}, are equivalent to duality solutions, as defined in \cite{bj1}. Numerical investigations may be found in \cite{GJ}.
In such a framework with low regularity, numerical analysis requires working with a sufficiently weak topology, which is precisely what has been done in \cite{DLV}. Therein, the convergence at order $1/2$ of a finite volume upwind scheme has been shown in Wasserstein distance by means of a stochastic characteristic method, as done in \cite{caniveau}. Observe also that, recently, such an approach has been successfully used in \cite{schlichting} for the numerical analysis of the upwind scheme for the transport equation with rough coefficients. In the current work, we adapt the strategy initiated in \cite{DLV} to prove the convergence at order $1/2$ of an upwind scheme for the aggregation equation, for which the velocity field depends on the solution in a nonlinear way. We will strongly use the fact that, as mentioned above, measure valued solutions of \eqref{EqInter} are constructed by pushing forward the initial condition by an $\mathbb{R}^d$-valued flow. Noticeably, we entirely reformulate the stochastic approach used in \cite{DLV} by means of analytical tools. In the end, our proof is completely deterministic. Although using analytical instead of probabilistic arguments changes neither the final result nor the general philosophy of the proof, it certainly makes the whole more accessible to the reader. As we pointed out, the key fact in \cite{DLV} is to represent the scheme through a Markov chain; here, the main idea is to use the sole transition kernel of the latter Markov chain to couple the measure-valued numerical solution at two consecutive times (and hence to bypass any use of the Markov chain itself). We refer to Remark \ref{comparison} below for more details. The outline of the paper is the following. In the next section, we introduce the notations and recall the theory for the existence of a measure solution to \eqref{EqInter}. Then we present the upwind scheme and state the main result: the scheme is convergent at order $1/2$.
In the case where the potential $W$ is strictly convex and radially symmetric and the initial condition has bounded support, the rate is shown to be uniform in time. Section \ref{sec:num} is devoted to the properties of the scheme. The proof of the main result for a Cartesian grid mesh is presented in Section \ref{sec:ordre}. In Section \ref{sec:unstruct}, we explain briefly how to extend our result to simplicial meshes. Finally, numerical illustrations are given in Section \ref{sec:sim}. In particular, we show that the order of convergence is optimal and we provide several numerical simulations in which we recover the behavior of the solutions after blow-up time. \section{Notations and main results} \subsection{Notations} Throughout the paper, we will make use of the following notations. We denote by $C_0(\mathbb{R}^d)$ the space of continuous functions from $\mathbb{R}^d$ to $\mathbb{R}$ that tend to $0$ at $\infty$. We denote by ${\mathcal M}_b(\mathbb{R}^d)$ the space of Borel signed measures whose total variation is finite. For $\rho\in {\cal M}_{b}(\mathbb{R}^d)$, we call $|\rho|(\mathbb{R}^d)$ its total variation. The space ${\mathcal M}_b(\mathbb{R}^d)$ is equipped with the weak topology $\sigma({\cal M}_b(\mathbb{R}^d),C_0(\mathbb{R}^d))$. For $T>0$, we let ${\cal S}_{\cal M} :=C([0,T];{\cal M}_b(\mathbb{R}^d)-\sigma({\cal M}_b(\mathbb{R}^d),C_0(\mathbb{R}^d)))$. For $\rho$ a measure in ${\mathcal M}_b(\mathbb{R}^d)$ and $Z$ a measurable map, we denote by $Z_\#\rho$ the pushforward measure of $\rho$ by $Z$; it satisfies, for any continuous function $\phi$, $$ \int_{\mathbb{R}^d} \phi(x)\, Z_\#\rho(dx) = \int_{\mathbb{R}^d} \phi(Z(x))\,\rho(dx). $$ We call ${\mathcal P}(\mathbb{R}^d)$ the subset of ${\mathcal M}_b(\mathbb{R}^d)$ of probability measures. We define the space of probability measures with finite second order moment by $$ {\mathcal P}_2(\mathbb{R}^d) := \left\{\mu \in {\mathcal P}(\mathbb{R}^d),\ \int_{\mathbb{R}^d} |x|^2 \mu(dx) <\infty\right\}.
$$ Here and in the following, $|\cdot|^2$ stands for the square of the Euclidean norm, and $\langle\cdot,\cdot\rangle$ for the Euclidean inner product. The space ${\mathcal P}_{2}(\mathbb{R}^d)$ is equipped with the Wasserstein distance $d_W$ defined by (see e.g. \cite{Ambrosio,Villani1,Villani2,Filippo_c_touo}) \begin{equation}\displaystyle\label{defWp} d_W(\mu,\nu) := \inf_{\gamma\in \Gamma(\mu,\nu)} \left\{\int_{\mathbb{R}^d\times \mathbb{R}^d} |y-x|^2\,\gamma(dx,dy)\right\}^{1/2} \end{equation} where $\Gamma(\mu,\nu)$ is the set of measures on $\mathbb{R}^d\times\mathbb{R}^d$ with marginals $\mu$ and $\nu$, i.e. \begin{align*} \Gamma(\mu,\nu) = \left\{ \gamma\in {\mathcal P}_2(\mathbb{R}^d\times\mathbb{R}^d); \ \forall\, \xi\in C_0(\mathbb{R}^d), \right. & \int \xi(y_1)\gamma(dy_1,dy_2) = \int \xi(y_1) \mu(dy_1), \\ & \left.\int \xi(y_2)\gamma(dy_1,dy_2) = \int \xi(y_2) \nu(dy_2) \right\}. \end{align*} By a minimization argument, we know that the infimum in the definition of $d_{W}$ is actually a minimum. A measure that realizes the minimum in the definition \eqref{defWp} of $d_W$ is called an {\it optimal plan}, the set of which is denoted by $\Gamma_0(\mu,\nu)$. Then, for all $\gamma_0\in \Gamma_0(\mu,\nu)$, we have $$ d_W(\mu,\nu)^2= \int_{\mathbb{R}^d\times \mathbb{R}^d} |y-x|^2\,\gamma_0(dx,dy). $$ We will make use of the following properties of the Wasserstein distance. Given $\mu \in {\mathcal P}_{2}(\mathbb{R}^d)$ and two $\mu$-square integrable Borel measurable maps $X,Y:\mathbb{R}^d\to \mathbb{R}^d$, we have the inequality \begin{equation*} d_W(X_\#\mu,Y_\#\mu) \leq \|X-Y\|_{L^2(\mu)}. \end{equation*} It holds because $\pi=(X,Y)_\# \mu\in \Gamma(X_\#\mu,Y_\#\mu)$ and $\int_{\mathbb{R}^d\times \mathbb{R}^d} |x-y|^2\,\pi(dx,dy)=\|X-Y\|_{L^2(\mu)}^2$.
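For uniform empirical measures with the same number of atoms, an optimal plan in \eqref{defWp} may be taken to be a permutation (by Birkhoff's theorem on doubly stochastic matrices), so computing $d_W$ reduces to an assignment problem. The following stand-alone sketch is illustrative only (brute force over permutations is viable only for very small samples; the data are arbitrary): it computes $d_W$ this way and checks, in dimension one, that it agrees with the monotone rearrangement obtained by pairing sorted samples.

```python
import itertools
import math
import random

def w2_empirical(xs, ys):
    """d_W between the uniform empirical measures (1/n) sum_i delta_{x_i}
    and (1/n) sum_j delta_{y_j}, computed by brute-force minimization of the
    quadratic cost over all permutations (only sensible for tiny n)."""
    n = len(xs)
    best = min(
        sum((x - ys[p]) ** 2 for x, p in zip(xs, perm))
        for perm in itertools.permutations(range(n))
    )
    return math.sqrt(best / n)

rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(6)]
ys = [rng.gauss(2.0, 1.0) for _ in range(6)]

d = w2_empirical(xs, ys)
# In dimension one, the optimal plan for the quadratic cost is the monotone
# rearrangement: pairing sorted samples gives the same value.
d_sorted = math.sqrt(sum((a - b) ** 2 for a, b in zip(sorted(xs), sorted(ys))) / 6)
assert abs(d - d_sorted) < 1e-12
print(f"d_W = {d:.4f}")
```

For larger samples one would of course replace the brute force by a polynomial-time assignment or optimal transport solver; the point here is only the reduction of \eqref{defWp} to a finite minimization.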
\subsection{Existence of a unique flow} In this section, we recall the existence and uniqueness result for the aggregation equation \eqref{EqInter} obtained in \cite{CJLV} (and extend it a bit to non-globally Lipschitz-continuous potentials). For $\rho \in C([0,T];{\mathcal P}_2(\mathbb{R}^d))$, we define the velocity field $\widehat{a}_{\rho}$ by \begin{equation}\displaystyle\label{achapo} \widehat{a}_{\rho}(t,x) := -\int_{\mathbb{R}^d} \widehat{\nabla W}(x-y) \rho(t,dy)\,, \end{equation} where we have used the notation $$ \widehat{\nabla W}(x) := \left\{ \begin{array}{ll} \nabla W(x), \qquad & \mbox{ for } x\neq 0, \\ 0, & \mbox{ for } x=0. \end{array} \right. $$ Due to the $\lambda$-convexity of $W$, see {\bf (A1)}, we deduce that, for all $x$, $y$ in $\mathbb{R}^d\setminus \{0\}$, \begin{equation}\displaystyle\label{lambdaconv} \langle \nabla W(x)-\nabla W(y) , x-y\rangle \geq \lambda |x-y|^2. \end{equation} Moreover, since $W$ is even, $\nabla W$ is odd and, by taking $y=-x$ in \eqref{lambdaconv}, we deduce that inequality \eqref{lambdaconv} still holds for $\widehat{\nabla W}$, even when $x$ or $y$ vanishes: \begin{equation}\displaystyle \label{lambdaconvWchapo} \forall\, x,y\in\mathbb{R}^d, \qquad \langle\widehat{\nabla W}(x)-\widehat{\nabla W}(y),x-y\rangle \geq \lambda |x-y|^2. \end{equation} This latter inequality provides a one-sided Lipschitz-continuity (OSL) estimate for the velocity field $\widehat{a}_\rho$ defined in \eqref{achapo}, i.e. we have \begin{equation*} \forall\, x,y\in\mathbb{R}^d, \ t \geq 0, \qquad \bigl\langle \widehat{a}_{\rho}(t,x)-\widehat{a}_{\rho}(t,y), x-y\bigr\rangle \leq -\lambda |x-y|^2. \end{equation*} We recall that, for a velocity field $b\in L^\infty([0,+\infty);L^\infty(\mathbb{R}^d))^d$ satisfying an OSL estimate, i.e.
$$ \forall\, x,y\in \mathbb{R}^d,\ t \geq 0, \qquad \langle b(t,x)-b(t,y),x-y\rangle \leq \alpha(t) |x-y|^2, $$ for $\alpha\in L^1_{loc}([0,+\infty))$, it has been established in \cite{Filippov} that a Filippov characteristic flow could be defined. For $s\geq 0$ and $x\in \mathbb{R}^d$, a Filippov characteristic starting from $x$ at time $s$ is defined as a continuous function $Z(\cdot;s,x)\in C([s,+\infty);\mathbb{R}^d)$ such that $\frac{\partial}{\partial t}Z(t;s,x)$ exists for a.e. $t\in[s,+\infty)$ and satisfies $Z(s;s,x)=x$ together with the differential inclusion $$ \frac{\partial}{\partial t} Z(t;s,x) \in \bigl\{ \textrm{\rm Convess}\bigl(\widehat{a}_{\rho}\bigr)(t,\cdot)\bigr\}(Z(t;s,x)), \qquad \textrm{\rm for a.e.} \quad t \geq s. $$ In this definition, $\{\textrm{\rm Convess}(\widehat{a}_{\rho})(t,\cdot) \}(x)$ denotes the essential convex hull of the vector field $\widehat{a}_{\rho}(t,\cdot)$ at $x$. We briefly recall its definition for the sake of completeness (see \cite{Filippov,AubinCellina} for more details). We denote by $\textrm{\rm Conv}(E)$ the classical convex hull of a set $E \subset \mathbb{R}^d$, i.e., the smallest closed convex set containing $E$. Given the vector field $\widehat{a}_{\rho}(t,\cdot):\mathbb{R}^d\rightarrow\mathbb{R}^d$, its essential convex hull at point $x$ is defined as $$ \bigl\{ \textrm{\rm Convess} \bigl(\widehat{a}_{\rho} \bigr)(t,\cdot) \bigr\}(x) :=\bigcap_{r>0} \bigcap_{N\in \mathcal{N}_0} \textrm{\rm Conv}\bigl[ \widehat{a}_{\rho}\bigl( t, B(x,r)\setminus N\bigr)\bigr]\,, $$ where $\mathcal{N}_0$ is the collection of sets of zero Lebesgue measure. Moreover, we have the semi-group property: for any $t,\tau,s \in [0,+\infty)$ such that $t \geq \tau \geq s$ and $x\in \mathbb{R}^d$, \begin{equation} \label{eq:characteristics} Z(t;s,x)=Z(\tau;s,x)+\int_{\tau}^t \widehat{a}_{\rho}\bigl(\sigma,Z(\sigma;s,x)\bigr)\,d\sigma. \end{equation} From now on, we will make use of the notation $Z(t,x)=Z(t;0,x)$.
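The integral identity \eqref{eq:characteristics} suggests a simple explicit Euler discretization of the characteristics when $\rho$ is approximated by an empirical measure. The following stand-alone sketch is illustrative only: it is a particle discretization, not the finite volume upwind scheme analyzed in this paper, and the particle number, time step and seed are arbitrary. It transports $n$ particles by the velocity field $\widehat{a}_\rho$ associated with the fully attractive potential $W(x)=|x|$ and exhibits the concentration toward the conserved center of mass.

```python
import math
import random

def grad_w_hat(dx, dy):
    """Gradient of the pointy potential W(x) = |x| in dimension 2,
    with the convention that it vanishes at the origin."""
    r = math.hypot(dx, dy)
    return (0.0, 0.0) if r == 0.0 else (dx / r, dy / r)

def velocity(particles):
    """The field a_hat(x_i) = -(1/n) sum_j grad_w_hat(x_i - x_j), i.e. the
    velocity of equation (achapo) for the empirical measure (1/n) sum_j delta_{x_j}."""
    n = len(particles)
    out = []
    for xi, yi in particles:
        sx = sy = 0.0
        for xj, yj in particles:
            gx, gy = grad_w_hat(xi - xj, yi - yj)
            sx += gx
            sy += gy
        out.append((-sx / n, -sy / n))
    return out

# Explicit Euler step for the integral identity (eq:characteristics):
# Z(t + dt) ~ Z(t) + dt * a_hat(t, Z(t)), with rho(t) the empirical measure.
rng = random.Random(3)
pts = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(30)]
dt = 0.01
for _ in range(500):
    vel = velocity(pts)
    pts = [(x + dt * vx, y + dt * vy) for (x, y), (vx, vy) in zip(pts, vel)]

# W(x) = |x| is fully attractive: the particles collapse toward their
# (conserved) center of mass, up to an oscillation of size O(dt).
cx = sum(x for x, _ in pts) / len(pts)
cy = sum(y for _, y in pts) / len(pts)
spread = max(math.hypot(x - cx, y - cy) for x, y in pts)
print(f"max distance to center of mass at t = 5: {spread:.3f}")
```

Since $\widehat{\nabla W}$ is odd and vanishes at the origin, the pairwise forces cancel exactly and the empirical center of mass is conserved by the scheme, mirroring the conservation of $M_1$ at the continuous level.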
Using this characteristic, it has been established in \cite{PoupaudRascle} that solutions to the conservative transport equation with a given bounded and one-sided Lipschitz-continuous velocity field could be defined as the pushforward of the initial condition by the Filippov characteristic flow. Based on this approach, existence and uniqueness of solutions to \eqref{EqInter} defined by a Filippov flow has been established in \cite{CJLV}. More precisely the statement reads: \begin{theorem}\label{Exist}\cite[Theorem 2.5 and 2.9]{CJLV} (i) Let $W$ satisfy assumptions {\bf (A0)--(A3)} and let $\rho^{ini}$ be given in ${\mathcal P}_2(\mathbb{R}^d)$. Then, there exists a unique solution $\rho \in C([0,+\infty);{\mathcal P}_2(\mathbb{R}^d))$ satisfying, in the sense of distributions, the aggregation equation \begin{equation} \label{eq:agreg:TH} \partial_t \rho + \mathop{\rm div}\nolimits\bigl(\widehat{a}_{\rho} \rho \bigr) = 0, \qquad \rho(0,\cdot)=\rho^{ini}, \end{equation} where $\widehat{a}_{\rho}$ is defined by \eqref{achapo}. This solution may be represented as the family of pushforward measures $(\rho(t):=Z_{\rho}(t,\cdot){}_\# \rho^{ini})_{t \geq 0}$ where $(Z_{\rho}(t,\cdot))_{t \geq 0}$ is the unique Filippov characteristic flow associated to the velocity field $\widehat{a}_{\rho}$. Moreover, the flow $Z_{\rho}$ is Lipschitz-continuous and we have \[ \sup_{x,y\in \mathbb{R}^d, \, x \not = y} \frac{\vert Z_{\rho}(t,x) - Z_{\rho}(t,y) \vert}{\vert x- y \vert} \leq e^{{\vert \lambda \vert}t}, \quad t \geq 0. \] At last, if $\rho$ and $\rho'$ are the respective solutions of \eqref{eq:agreg:TH} with $\rho^{ini}$ and $\rho^{ini,\prime}$ as initial conditions in ${\mathcal P}_{2}(\mathbb{R}^d)$, then \[ d_{W}(\rho(t),\rho'(t)) \leq e^{\vert \lambda \vert t} d_{W}(\rho^{ini},\rho^{ini,\prime}), \qquad t \geq 0. 
\] (ii) Let $W$ satisfy \textbf{\bf (A0)}--\textbf{\bf (A2)} and be radial, $\lambda$ be (strictly) positive and let $\rho^{ini}$ be given in ${\mathcal P}_2(\mathbb{R}^d)$ with compact support included in $B_\infty(M_1,R)$, where $M_1$ is the first moment of $\rho^{ini}$ ({\em i.e.} its center of mass) and $B_\infty(M_1,R)$ the closed ball for the infinite norm on $\mathbb{R}^d$ centered at $M_1$ with radius $R$. Then, there exists a unique solution $\rho \in C([0,+\infty);{\mathcal P}_2(\mathbb{R}^d))$ with support included in $B_\infty(M_1,R)$ satisfying, in the sense of distributions, the aggregation equation \eqref{eq:agreg:TH} where $\widehat{a}_{\rho}$ is defined by \eqref{achapo}. Moreover, the flow $Z_{\rho}$ is Lipschitz-continuous and we have \begin{equation}\label{bound1Xbis} \sup_{x,y\in \mathbb{R}^d, \, x \not = y} \frac{\vert Z_{\rho}(t,x) - Z_{\rho}(t,y) \vert}{\vert x- y \vert} \leq e^{-\lambda t}, \quad t \geq 0. \end{equation} At last, if $\rho^{ini}$ and $\rho^{ini,\prime}$ have a bounded support, then, \[ d_{W}(\rho(t),\rho'(t)) \leq d_{W}(\rho^{ini},\rho^{ini,\prime}), \qquad t \geq 0. \] \end{theorem} The stability estimates that are present in this result are Dobrushin type estimates in the quadratic Wasserstein distance, in the case where the kernel is not Lipschitz-continuous but only one-sided Lipschitz-continuous. See \cite{dob} and \cite{Golse}. We mention that the solution, which is here represented by the Filippov characteristic flow, may be also constructed as a gradient flow solution in the Wasserstein space ${\mathcal P}_{2}(\mathbb{R}^d)$, see \cite{Carrillo}. Here it is also important to remark that \eqref{bound1Xbis} is true under the sole assumptions \textbf{\bf (A0)}--\textbf{\bf (A2)} whenever $\lambda >0$ (which is a mere consequence of \eqref{eq:aux:proof:1} and \eqref{eq:aux:proof:2} below). 
In that case, it ensures that $B_2(M_1,R)$ (the closed Euclidean ball) is preserved by the flow {\em without the assumption that $W$ is radial.} As a result, it may be tempting to address the analysis below without requiring the potential to be radial. Nevertheless, the problem is that the numerical scheme does not satisfy a similar property. Indeed, the Euclidean ball $B_2(M_1,R)$ is not convex from a numerical point of view, that is to say, if we regard the mesh underpinning the scheme, then the union of the square cells whose center is included in $B_2(M_1,R)$ is not convex. Due to this drawback, the flow associated with the scheme does not preserve the ball $B_2(M_1,R)$. This is in contrast with Lemma \ref{lem:CFL:lambda:>0} below, which shows that, in the radial setting, the ball $B_\infty(M_1,R+\Delta x)$ is kept stable by the scheme, where $\Delta x$ is the step of the spatial mesh. This is the reason why we assume here that the potential is radial. \vskip 4pt \begin{proof} For the first two statements of the theorem, existence of a unique solution and Lipschitz-continuity of the flow, we refer to \cite{CJLV}. These statements remain true whenever \textbf{\bf (A0)}--\textbf{\bf (A2)} alone hold true, $W$ is radial, $\lambda$ is (strictly) positive and the support of $\rho^{ini}$ is bounded, provided that the notion of solution is limited to collections $(\rho(t,\cdot))_{t \geq 0}$ that have a compact support, uniformly in $t$ in compact subsets. Indeed, if we denote by $M_1(t)$ the center of mass of the solution at time $t$, namely $M_1(t) := \int_{\mathbb{R}^d} x \, \rho(t,dx)$, then this center of mass is known to be preserved: $M_1(t) = M_1(0) =: M_1$ (see \cite{CJLV} or Lemma \ref{bounddismom} below for the discrete counterpart). Now, if $\lambda \geq 0$ and if $W$ is radial, $\nabla W(x - y)$ is positively proportional to $x-y$, so that $-\nabla W(x - y)$ is parallel to $x - y$ and directed from $x$ to $y$.
Thus, if $\rho(t)$ is zero outside the ball $B_\infty(M_1,R)$, then, for any $x \in \partial B_\infty(M_1,R)$, the velocity $\widehat{a}_\rho(t,x)$ is directed toward the interior of $B_\infty(M_1,R)$. This shows that $B_\infty(M_1,R)$ is preserved by the flow and guarantees that $\rho(t)$ has its support included in $B_\infty(M_1,R)$ for any time $t \geq 0$, if it is the case for $t = 0$. Given the fact that the support of $\rho(t)$ remains bounded in $B_\infty(M_1,R)$, everything works as if $W$ were globally Lipschitz-continuous. Existence and uniqueness of a solution to the aggregation equation can thus be proved by a straightforward localization argument. Indeed, observe that from the very definition of the velocity $a$, the Lipschitz-continuity constant of $W$ that is involved in the existence and uniqueness theory is the local one of $W$ on the compact subset $B_{\infty}(M_{1},R)$, provided that the support of $\rho^{ini}$ is included in $B_{\infty}(M_{1},R)$. It now only remains to prove the two inequalities regarding the Wasserstein distance between solutions starting from different data. Under assumptions {\bf (A0)--(A3)} on the potential, the first inequality was proven in \cite{CJLV}, but with a constant $2|\lambda|$ instead of $|\lambda|$ in the exponential (as in \cite{dob} and \cite{Golse}, where the convolution operator is however replaced with a slightly more general integral operator); we therefore provide a proof of the sharper estimate stated here. We consider the two Filippov flows $(Z_{\rho}(t,\cdot))_{t \geq 0}$ and $(Z_{\rho'}(t,\cdot))_{t \geq 0}$ as defined in the statement of Theorem \ref{Exist}. We recall that \begin{equation} \label{eq:aux:proof} Z_{\rho}(t,\cdot){}_{\#} \rho^{ini}= \rho(t,\cdot), \qquad Z_{\rho'}(t,\cdot){}_{\#} \rho^{ini,\prime}= \rho'(t,\cdot), \qquad t \geq 0. \end{equation} To simplify, we just write $Z(t,\cdot) = Z_{\rho}(t,\cdot)$ and $Z'(t,\cdot) = Z_{\rho'}(t,\cdot)$.
Then, for any $x,y \in \mathbb{R}^d$ and $t \geq 0$, \begin{equation*} \begin{split} &\frac{d}{dt} \vert Z(t,x) - Z'(t,y) \vert^2 \\ &\hspace{15pt}= - 2 \Bigl\langle Z(t,x) - Z'(t,y), \\ &\hspace{45pt} \int_{\mathbb{R}^d} \widehat{\nabla W}\bigl( Z(t,x) - Z(t,x') \bigr) \rho^{ini}(dx') - \int_{\mathbb{R}^d} \widehat{\nabla W}\bigl( Z'(t,y) - Z'(t,y') \bigr) \rho^{ini,\prime}(dy') \Bigr\rangle. \end{split} \end{equation*} Call $\pi \in \Gamma_{0}(\rho^{ini},\rho^{ini,\prime})$ an optimal plan between $\rho^{ini}$ and $\rho^{ini,\prime}$. Then, \begin{equation*} \begin{split} &\frac{d}{dt} \vert Z(t,x) - Z'(t,y) \vert^2 \\ &\hspace{5pt}= - 2 \Bigl\langle Z(t,x) - Z'(t,y), \int_{\mathbb{R}^{2d}} \bigl[ \widehat{\nabla W}\bigl( Z(t,x) - Z(t,x') \bigr) - \widehat{\nabla W}\bigl( Z'(t,y) - Z'(t,y') \bigr) \bigr] \pi(dx',dy') \Bigr\rangle. \end{split} \end{equation*} Integrating in $(x,y)$ with respect to $\pi$, we get \begin{equation*} \begin{split} &\frac{d}{dt} \int_{\mathbb{R}^{2d}} \vert Z(t,x) - Z'(t,y) \vert^2 \pi(dx,dy) \\ &\hspace{15pt} =- 2 \int_{\mathbb{R}^{2d}} \int_{\mathbb{R}^{2d}} \Bigl\langle Z(t,x) - Z'(t,y), \\ &\hspace{95pt} \bigl[ \widehat{\nabla W}\bigl( Z(t,x) - Z(t,x') \bigr) - \widehat{\nabla W}\bigl( Z'(t,y) - Z'(t,y') \bigr) \bigr] \Bigr\rangle \, \pi(dx,dy) \, \pi(dx',dy'). \end{split} \end{equation*} Thanks to the fact that $\widehat{\nabla W}$ is odd, see \textbf{(A0)}, we can write, by a symmetry argument, \begin{equation*} \begin{split} &\frac{d}{dt} \int_{\mathbb{R}^{2d}} \vert Z(t,x) - Z'(t,y) \vert^2 \pi(dx,dy) \\ &\hspace{15pt} =- \int_{\mathbb{R}^{2d}} \int_{\mathbb{R}^{2d}} \Bigl\langle Z(t,x) - Z'(t,y)- \bigl( Z(t,x') - Z'(t,y') \bigr), \\ &\hspace{95pt} \bigl[ \widehat{\nabla W}\bigl( Z(t,x) - Z(t,x') \bigr) - \widehat{\nabla W}\bigl( Z'(t,y) - Z'(t,y') \bigr) \bigr] \Bigr\rangle \, \pi(dx,dy) \, \pi(dx',dy'). 
\end{split} \end{equation*} Using \eqref{lambdaconvWchapo}, we obtain \begin{equation} \label{eq:aux:proof:1} \begin{split} &\frac{d}{dt} \int_{\mathbb{R}^{2d}}\vert Z(t,x) - Z'(t,y) \vert^2 \pi(dx,dy) \\ &\hspace{15pt} \leq - \lambda \int_{\mathbb{R}^{2d}} \int_{\mathbb{R}^{2d}} \bigl\vert Z(t,x) - Z'(t,y)- \bigl( Z(t,x') - Z'(t,y') \bigr) \bigr\vert^2 \, \pi(dx,dy) \, \pi(dx',dy'). \end{split} \end{equation} Observe that the above right-hand side is equal to \begin{equation} \label{eq:aux:proof:2} \begin{split} &\int_{\mathbb{R}^{2d}} \int_{\mathbb{R}^{2d}} \bigl\vert Z(t,x) - Z'(t,y)- \bigl( Z(t,x') - Z'(t,y') \bigr) \bigr\vert^2 \, \pi(dx,dy) \, \pi(dx',dy') \\ &\hspace{15pt}= 2 \int_{\mathbb{R}^{2d}} \bigl\vert Z(t,x) - Z'(t,y) \bigr\vert^2 \, \pi(dx,dy) - 2 \biggl\vert \int_{\mathbb{R}^{2d}} \bigl( Z(t,x) - Z'(t,y) \bigr) \, \pi(dx,dy) \biggr\vert^2. \end{split} \end{equation} \vskip 4pt \textbf{1st case.} If $\lambda \leq 0$, we deduce from \eqref{eq:aux:proof:1} and \eqref{eq:aux:proof:2} that \begin{equation*} \begin{split} &\frac{d}{dt} \int_{\mathbb{R}^{2d}}\vert Z(t,x) - Z'(t,y) \vert^2 \pi(dx,dy) \leq 2 \vert \lambda \vert \int_{\mathbb{R}^{2d}} \bigl\vert Z(t,x) - Z'(t,y) \bigr\vert^2 \, \pi(dx,dy), \end{split} \end{equation*} which suffices to complete the proof of the first claim, thanks to Gronwall's lemma, by noting that \begin{equation*} \int_{\mathbb{R}^{2d}}\vert Z(0,x) - Z'(0,y) \vert^2 \pi(dx,dy) = \int_{\mathbb{R}^{2d}}\vert x - y \vert^2 \pi(dx,dy) = d_{W}(\rho^{ini},\rho^{ini,\prime})^2, \end{equation*} and \begin{equation*} \int_{\mathbb{R}^{2d}}\vert Z(t,x) - Z'(t,y) \vert^2 \pi(dx,dy) \geq d_{W}(\rho(t),\rho'(t))^2, \end{equation*} see \eqref{eq:aux:proof}. \vskip 4pt \textbf{2nd case.} If $\lambda \geq 0$, we just use the fact that the right-hand side in \eqref{eq:aux:proof:1} is non-positive. Proceeding as above, this completes the proof of the second claim.
\end{proof} \subsection{Main result} The aim of this paper is to prove the convergence at order $1/2$ of an upwind-type scheme in distance $d_W$ for the aggregation equation. The numerical scheme is defined as follows. We denote by $\Delta t$ the time step and consider a Cartesian grid with step $\Delta x_i$ in the $i$th direction, $i=1,\ldots,d$; we then let $\Delta x:=\max_i \Delta x_i$. We also introduce the following notations. For a multi-index $J=(J_1, \ldots, J_d)\in \mathbb{Z}^d$, we call $C_J:=[(J_1-\frac{1}{2})\Delta x_1,(J_1+\frac{1}{2})\Delta x_1)\times \ldots \times [(J_d-\frac{1}{2})\Delta x_d,(J_d+\frac{1}{2})\Delta x_d)$ the corresponding elementary cell. The center of the cell is denoted by $x_J := (J_1\Delta x_1, \ldots, J_d \Delta x_d).$ Also, we let $e_i := (0,\ldots,1,\ldots,0)$ be the $i$th vector of the canonical basis, for $i \in \{1,\ldots,d\}$, and we expand the velocity field in the canonical basis under the form $a=(a_1,\ldots,a_d)$. For a given nonnegative measure $\rho^{ini}\in {\mathcal P}_2(\mathbb{R}^d)$, we put, for any $J\in \mathbb{Z}^d$, \begin{equation}\displaystyle\label{disrho0} \rho_{J}^0:= \int_{C_J} \rho^{ini}(dx)\geq 0. \end{equation} Since $\rho^{ini}$ is a probability measure, the total mass of the system is $\sum_{J\in \mathbb{Z}^d} \rho_{J}^0 = 1$. We then construct iteratively the collection $((\rho_J^n)_{J \in \mathbb{Z}^d})_{n \in {\mathbb N}}$, each $\rho^n_{J}$ being intended to provide an approximation of the value $\rho(t^n,x_J)$, for $J\in \mathbb{Z}^d$.
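In dimension $d=1$, the initialization \eqref{disrho0} simply bins the mass of $\rho^{ini}$ into the cells $C_J$. A minimal Python sketch, under the assumption (ours, for illustration only) that $\rho^{ini}$ is represented by finitely many equally weighted samples:

```python
import numpy as np

# Sketch of the initialization (disrho0) in dimension 1: rho0[J] is the mass
# of rho_ini in the cell C_J = [(J - 1/2) dx, (J + 1/2) dx).
# rho_ini is represented here by equally weighted samples (an assumption made
# for illustration; any measure whose cell masses are computable would do).
def initial_cell_masses(samples, dx):
    J = np.floor(samples / dx + 0.5).astype(int)   # index of the cell containing each sample
    Jmin = J.min()
    rho0 = np.zeros(J.max() - Jmin + 1)
    np.add.at(rho0, J - Jmin, 1.0 / len(samples))  # each sample carries mass 1/N
    return rho0, Jmin                              # cell masses and index of the first cell

rho0, Jmin = initial_cell_masses(np.array([0.0, 0.04, 0.06, 0.26]), dx=0.1)
print(rho0.sum())  # total mass 1, as required by the normalization above
```

By construction, $\rho^0_J \geq 0$ and $\sum_J \rho^0_J = 1$, in agreement with the total-mass normalization above.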
Assuming that the approximating sequence $(\rho_{J}^n)_{J\in \mathbb{Z}^d}$ is already given at time $t^n:=n \Delta t$, we compute the approximation at time $t^{n+1}$ by: \begin{equation}\displaystyle\label{dis_num} \begin{array}{ll} \displaystyle \rho_{J}^{n+1} := \displaystyle \rho_{J}^n - \sum_{i=1}^{d} \frac{\Delta t}{\Delta x_i} \Big(({a_i}^n_{J})^+ \rho_{J}^n - ({a_i}^n_{J+e_i})^- \rho_{J+e_i}^n -({a_i}^n_{J-e_i})^+ \rho_{J-e_i}^n + ({a_i}^n_{J})^- \rho_{J}^n \Big). \end{array} \end{equation} Here, $(a)^+ = \max\{0,a\}$ denotes the positive part of the real number $a$ and $(a)^- = \max\{0,-a\}$ its negative part. The macroscopic velocity is defined by \begin{equation} \label{def:aij} {a_i}^n_{J} := -\sum_{K\in \mathbb{Z}^d} \rho_{K}^n \,D_iW_J^K, \quad \mbox{ where } \quad D_iW_J^K := \widehat{\partial_{x_i} W}\bigl(x_J-x_K \big). \end{equation} Since $W$ is even, we also have: \begin{equation} \label{eq:gradients:symmetry:cells} D_iW_{J}^{K} = -D_iW^{J}_{K}. \end{equation} The main result of this paper is the proof of the convergence at order $1/2$ of the above upwind scheme. More precisely, the statement reads: \begin{theorem}\label{TH} (i) Assume that $W$ satisfies hypotheses {\bf (A0)--(A3)} and that the so-called strict $\frac 12$-CFL condition holds: \begin{equation} \label{CFL} w_\infty \sum_{i=1}^d \frac{\Delta t}{\Delta x_i} < \frac 12, \end{equation} with $w_{\infty}$ as in \eqref{borngradW}. For $\rho^{ini} \in {\mathcal P}_2(\mathbb{R}^d)$, let $\rho=(\rho(t))_{t \ge 0}$ be the unique measure solution to the aggregation equation with initial data $\rho^{ini}$, as given by Theorem \ref{Exist}. Define $((\rho_J^n)_{J\in \mathbb{Z}^d})_{n \in {\mathbb N}}$ as in \eqref{disrho0}--\eqref{dis_num}--\eqref{def:aij} and let $$ \rho_{\Delta x}^n := \sum_{J\in \mathbb{Z}^d} \rho_J^n \delta_{x_J}, \quad n \in {\mathbb N}.
$$ Then, there exists a nonnegative constant $C$, only depending on $\lambda$, $w_{\infty}$ and $d$, such that, for all $n\in \mathbb{N}^*$, \begin{equation} \label{eq:TH:bound:1} d_W(\rho(t^n),\rho_{\Delta x}^n ) \leq C \, e^{\vert \lambda \vert (1+\Delta t)t^n} \, \bigl( \sqrt{t^n \Delta x} + \Delta x \bigr). \end{equation} (ii) Assume that $W$ is radial and satisfies hypotheses \textbf{\bf (A0)}--\textbf{\bf(A2)} with $\lambda$ (strictly) positive, that $\rho^{ini}$ is compactly supported in $B_\infty(M_1,R)$ where $M_1$ is the center of mass of $\rho^{ini}$, and that the CFL condition \eqref{CFL} holds, with $w_{\infty}$ defined as \begin{equation} \label{borngradW:lambda>0} w_{\infty} = \sup_{x \in B_{\infty}(0,2R+2\Delta x) \setminus \{0\} } \vert \nabla W(x) \vert. \end{equation} Assume also that $\Delta t \leq 1/2$ and $2 \lambda \Delta t < 1$. Then, there exists a nonnegative constant $C$, only depending on $\lambda$, $w_{\infty}$, $d$ and $R$, such that, for all $n\in \mathbb{N}^*$, \eqref{eq:TH:bound:1} is valid, as well as \begin{equation} \label{eq:TH:bound:2} d_W(\rho(t^n),\rho_{\Delta x}^n) \leq C \, \bigl( \sqrt{\Delta x} + \Delta x \bigr), \end{equation} which proves that the error can be uniformly controlled in time. \end{theorem} We stress the fact that, under the setting defined in $(ii)$, \eqref{eq:TH:bound:1} is valid. For small times, it provides a better estimate than \eqref{eq:TH:bound:2}. As indicated in the statement, the constant $C$ in \eqref{eq:TH:bound:2} may depend on the value of $R$ in the assumption $\textrm{\rm Supp}(\rho^{ini}) \subset B_{\infty}(M_{1},R)$. We also point out that, although the computations below are performed for the sole upwind scheme, the first part of the statement, which holds true under the full set of hypotheses {\bf (A0)--(A3)}, can be straightforwardly adapted to other diffusive schemes; see for instance our previous article \cite{DLV}.
As for $(ii)$, the statement remains true provided that the supports of the approximating measures $(\rho^n)_{n \geq 0}$ remain bounded as $n$ grows. It must be stressed that there are some schemes for which the latter property fails (e.g., the Lax--Friedrichs scheme). Moreover, as already mentioned in the introduction, the convergence rate is optimal; this latter fact will be illustrated by numerical examples in Section \ref{sec:sim}. \begin{example} \label{ex1D} In one dimension, the scheme \eqref{dis_num} reads $$ \rho_{i}^{n+1} = \rho_i^n - \frac{\Delta t}{\Delta x}\Big((a_i^n)^+ \rho_i^n - (a_{i+1}^n)^- \rho_{i+1}^n - (a_{i-1}^n)^+ \rho_{i-1}^n + (a_i^n)^-\rho_i^n\Big), $$ where $i$ is just taken in $\mathbb{Z}$. The scheme then has the following interpretation. Given $\rho^n_{\Delta x} = \sum_{j\in \mathbb{Z}} \rho_j^n \delta_{x_j}$, we construct the approximation at time $t^{n+1}$ by implementing the following two steps: \begin{itemize} \item The Dirac mass $\rho_i^n$ located at position $x_i$ moves with velocity $a_i^n$ to the position $x_i+a_i^n \Delta t$. Under the CFL condition $w_\infty \Delta t \leq \Delta x$ (which is obviously weaker than what we require in \eqref{CFL}), the point $x_i+a_i^n\Delta t$ belongs to the interval $[x_i,x_{i+1}]$ if $a_i^n\geq 0$, and to the interval $[x_{i-1},x_{i}]$ if $a_i^n\leq 0$. \item Then the mass $\rho_{i}^n$ is split into two parts; if $a_{i}^n \geq 0$, a fraction $a_{i}^n \Delta t/\Delta x$ of it is transported to the cell $i+1$, while the remaining fraction is left in cell $i$; if $a_{i}^n \leq 0$, the same fraction $\vert a_{i}^n \vert \Delta t/\Delta x$ of the mass is not transported to the cell $i+1$ but to the cell $i-1$. This procedure may be regarded as a linear interpolation of the mass $\rho_{i}^n$ between the points $x_{i}$ and $x_{i+1}$ if $a_{i}^n \geq 0$ and between the points $x_{i}$ and $x_{i-1}$ if $a_{i}^n \leq 0$. \end{itemize} This interpretation holds only in the one-dimensional case.
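For illustration, the two steps above can be condensed into a few lines of Python (a minimal sketch in our own notation, not taken from the paper; $W$ is assumed smooth, so that no hat-convention is needed at the origin, and the mass is assumed to stay away from the ends of the array, so that no flux is lost at the boundary):

```python
import numpy as np

# One step of the 1D upwind scheme (dis_num): rho[i] is the mass located at
# x[i] = i * dx and Wprime is the derivative of a smooth even potential W.
def upwind_step_1d(rho, x, Wprime, dt, dx):
    # velocity at the cell centers: a_i = -sum_k rho_k W'(x_i - x_k)
    a = -np.array([np.sum(rho * Wprime(xi - x)) for xi in x])
    frac = np.abs(a) * dt / dx                 # fraction of each mass that moves one cell
    new = rho * (1.0 - frac)                   # part of the mass left in place
    right = np.where(a >= 0, rho * frac, 0.0)  # part moving to cell i+1
    left = np.where(a < 0, rho * frac, 0.0)    # part moving to cell i-1
    new[1:] += right[:-1]
    new[:-1] += left[1:]
    return new

# Attractive quadratic potential W(x) = x^2/2: the masses drift toward the
# center of mass, which the scheme conserves exactly (see the lemmas of the
# next section).
x = np.linspace(-1.0, 1.0, 21)
rho = np.full(21, 1.0 / 21)
new = upwind_step_1d(rho, x, lambda z: z, dt=0.01, dx=0.1)
print(new.sum(), (x * new).sum())  # total mass and center of mass are conserved
```

With these parameters, $w_\infty \Delta t/\Delta x = 0.2 < 1/2$, so the strict CFL condition \eqref{CFL} is satisfied.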
However, thanks to this interpretation, we can define a forward semi-Lagrangian scheme in any dimension on (unstructured) simplicial meshes, which is then different from \eqref{dis_num}. Such a scheme is introduced in Section \ref{sec:unstruct}. Finally, we emphasize that this scheme differs from the standard finite volume upwind scheme in which the velocity is computed at the interfaces, $a_{i+1/2}^n$. This subtlety is due to the particular structure of the equation, as the latter requires the product $\widehat{a}_{\rho} \rho$ to be defined properly. A convenient way to do so is to compute, in the discretization, the velocity and the density at the same grid points. This fact has already been noticed in \cite{sinum,sisc} and is also illustrated numerically in Section \ref{sec:sim}. \end{example} \section{Numerical approximation} \label{sec:num} \subsection{Properties of the scheme} The following lemma explains why we refer to the condition on the ratios $(\Delta t/\Delta x_{i})_{i=1,\ldots,d}$ formulated in the statement of Theorem \ref{TH} as a CFL condition. \begin{lemma} \label{lem:CFL} Assume that $W$ satisfies hypotheses {\bf (A0)--(A3)} and that the condition \eqref{CFL} is in force. For $\rho^{ini}\in {\mathcal P}_2(\mathbb{R}^d)$, define $(\rho_{J}^0)_{J \in \mathbb{Z}^d}$ by \eqref{disrho0}. Then the sequences $(\rho_J^n)_{n \in \mathbb{N},J \in \mathbb{Z}^d}$ and $({a_i}_J^n)_{n \in \mathbb{N},J \in \mathbb{Z}^d}$, $i=1,\ldots,d$, given by the scheme defined in \eqref{dis_num}--\eqref{def:aij}, satisfy, for all $J\in \mathbb{Z}^d$ and $n\in \mathbb{N}$, $$ \rho_{J}^n \geq 0, \qquad |{a_i}_{J}^n|\leq w_\infty, \quad i=1,\ldots, d, $$ and, for all $n \in \mathbb{N}$, \begin{equation*} \sum_{J \in \mathbb{Z}^d} \rho_{J}^n=1. \end{equation*} \end{lemma} \begin{proof} The total initial mass of the system is $\sum_{J} \rho_{J}^0=1$.
By summing equation \eqref{dis_num} over $J$, we can show that the total mass is conserved, namely, for all $n\in \mathbb{N}^*$, $\sum_{J} \rho_{J}^n= \sum_{J} \rho_{J}^0=1$. Also, we can rewrite equation \eqref{dis_num} as \begin{equation} \rho_{J}^{n+1} = \rho_{J}^n \left[ 1 - \sum_{i=1}^d \frac{\Delta t}{\Delta x_i} |{a_i}^n_{J}| \right] + \sum_{i=1}^d \rho_{J+e_i}^n \frac{\Delta t}{\Delta x_i}({a_i}^n_{J+e_i})^- + \sum_{i=1}^d \rho_{J-e_i}^n \frac{\Delta t}{\Delta x_i}({a_i}^n_{J-e_i})^+. \label{schemarho} \end{equation} We prove by induction on $n$ that $\rho_{J}^n \geq 0$ for all $J \in \mathbb{Z}^d$ and for all $n \in {\mathbb N}$. Indeed, if, for some $n \in {\mathbb N}$, it holds that $\rho_{J}^n \geq 0$ for all $J \in \mathbb{Z}^d$, then, by definition \eqref{def:aij} and assumption \eqref{borngradW}, we clearly have $$ |{a_i}_{J}^{n}|\leq w_\infty \sum_{K\in \mathbb{Z}^d} \rho_{K}^n = w_\infty, \qquad i=1,\ldots,d. $$ Then, assuming that the condition \eqref{CFL} holds, we deduce that, in the relationship \eqref{schemarho}, all the coefficients in front of $\rho_{J}^n$, $\rho_{J-e_i}^n$ and $\rho_{J+e_i}^n$, $i=1,\ldots,d$, are nonnegative. Thus, using the induction assumption, we deduce that $\rho_{J}^{n+1}\geq 0$ for all $J\in \mathbb{Z}^d$. \end{proof} In the following lemma, we collect two additional properties of the scheme: the conservation of the center of mass and the finiteness of the second-order moment. \begin{lemma}\label{bounddismom} Let $W$ satisfy {\bf (A0)--(A3)} and condition \eqref{CFL} be in force. For $\rho^{ini}\in {\mathcal P}_2(\mathbb{R}^d)$, define $(\rho_{J}^0)_{J\in \mathbb{Z}^d}$ by \eqref{disrho0}. Then, the sequence $(\rho_{J}^n)_{J\in \mathbb{Z}^d}$ given by the numerical scheme \eqref{dis_num}--\eqref{def:aij} satisfies: $(i)$ Conservation of the center of mass. For all $n\in \mathbb{N}^*$, $$ \sum_{J\in \mathbb{Z}^d} x_J \rho_{J}^n = \sum_{J\in \mathbb{Z}^d} x_J \rho_{J}^0.
$$ We will denote the right-hand side (and thus the left-hand side as well) by $M_{1,\Delta x}$. $(ii)$ Bound on the second moment. There exists a constant $C>0$, independent of the parameters of the mesh, such that, for all $n\in \mathbb{N}^*$, \begin{equation*} M_{2,\Delta x}^n := \sum_{J\in \mathbb{Z}^d} |x_J|^2\rho_{J}^n \leq e^{C t^n} \big(M_{2,\Delta x}^0 + C\big), \end{equation*} where we recall that $t^n=n\Delta t$. \end{lemma} \begin{proof} We recall from Lemma \ref{lem:CFL} that, for all $n\in \mathbb{N}$, the sequence $(\rho_{J}^n)_{J \in \mathbb{Z}^d}$ is nonnegative and that its sum is equal to 1. $(i)$ Using \eqref{dis_num} together with a discrete integration by parts, we have: $$ \begin{array}{ll} \displaystyle \sum_{J\in \mathbb{Z}^d} x_J\rho_{J}^{n+1} = & \displaystyle \sum_{J\in \mathbb{Z}^d} x_J\rho_{J}^n - \sum_{i=1}^d\frac{\Delta t}{\Delta x_i} \sum_{J\in \mathbb{Z}^d} \left(({a_i}^n_{J})^+ \, \rho_{J}^n \big(x_J-x_{J+e_i}\big) -({a_i}^n_{J})^- \, \rho_{J}^n \big(x_{J-e_i}-x_J\big) \right). \end{array} $$ By definition of $x_J$, we deduce $$ \sum_{J\in \mathbb{Z}^d} x_J\rho_{J}^{n+1} = \sum_{J\in \mathbb{Z}^d} x_J\rho_{J}^n + \Delta t \sum_{i=1}^d \sum_{J\in \mathbb{Z}^d} {a_i}^n_{J} \, \rho_{J}^n. $$ By definition of the macroscopic velocity \eqref{def:aij} and by \eqref{eq:gradients:symmetry:cells}, we also have \begin{equation*} \begin{split} \sum_{J\in \mathbb{Z}^d} {a_i}^n_{J} \, \rho_{J}^n = -\sum_{J\in \mathbb{Z}^d} \sum_{K\in \mathbb{Z}^d} D_iW_{J}^{K}\, \rho_{K}^n \, \rho_{J}^n &= \sum_{J\in \mathbb{Z}^d} \sum_{K\in \mathbb{Z}^d} D_iW^{J}_{K}\, \rho_{K}^n \, \rho_{J}^n \\ &= \sum_{J\in \mathbb{Z}^d} \sum_{K\in \mathbb{Z}^d} D_iW^{K}_{J}\, \rho_{K}^n \, \rho_{J}^n, \end{split} \end{equation*} where we exchanged the roles of $J$ and $K$ in the latter sum. We deduce that it vanishes. Thus, $$ \sum_{J\in \mathbb{Z}^d} x_J\rho_{J}^{n+1} = \sum_{J\in \mathbb{Z}^d} x_J\rho_{J}^n.
$$ $(ii)$ For the second moment, still using \eqref{dis_num} and a similar discrete integration by parts, we get $$ \begin{array}{ll} \displaystyle \sum_{J\in \mathbb{Z}^d} |x_J|^2 \rho_{J}^{n+1} &= \displaystyle \sum_{J\in \mathbb{Z}^d} |x_J|^2\rho_{J}^n \\ [2mm] &\hspace{5pt}- \displaystyle \sum_{i=1}^d \frac{\Delta t}{\Delta x_i} \sum_{J\in \mathbb{Z}^d} \Bigl[ ({a_i}^n_{J})^+ \, \rho_{J}^n \big(|x_J|^2-|x_{J+e_i}|^2\big) -({a_i}^n_{J})^- \, \rho_{J}^n \big(|x_{J-e_i}|^2-|x_{J}|^2\big) \Bigr]. \end{array} $$ By definition of $x_J$, $|x_J|^2-|x_{J+e_i}|^2=-2J_i\, \Delta x_i^2 - \Delta x_i^2$ and $|x_{J-e_i}|^2-|x_J|^2=-2J_i\, \Delta x_i^2 + \Delta x_i^2$. Therefore, we get $$ \sum_{J\in \mathbb{Z}^d} |x_J|^2 \rho_{J}^{n+1} = \sum_{J\in \mathbb{Z}^d} |x_J|^2 \rho_{J}^{n} + 2\Delta t \sum_{i=1}^d\sum_{J\in \mathbb{Z}^d} J_i \Delta x_i \, {a_i}^n_{J} \, \rho_{J}^n + \Delta t\sum_{i=1}^d \Delta x_i \sum_{J\in \mathbb{Z}^d} \rho_{J}^n |{a_i}^n_{J}|. $$ As a consequence of Lemma \ref{lem:CFL}, we have $|{a_i}^n_{J}|\leq w_\infty$. Using moreover the mass conservation, we deduce that the last term is bounded by $w_\infty \Delta t \sum_{i=1}^d \Delta x_i$. Moreover, applying Young's inequality and using the mass conservation again, we get $$ \Big|\sum_{J\in \mathbb{Z}^d} {a_i}^n_{J} \, \rho_{J}^n \, J_i \Delta x_{i} \Big| \leq \frac{1}{2} \Big( w_\infty^2 + \sum_{J\in \mathbb{Z}^d} |J_{i} \Delta x_i|^2 \, \rho_{J}^n \Big) \leq \frac{1}{2} \Big( w_\infty^2 + \sum_{J\in \mathbb{Z}^d} \rho_{J}^n \, \vert x_{J} \vert^2\Big). $$ We then deduce that there exists a nonnegative constant $C$ only depending on $d$ and $w_\infty$ such that $$ \sum_{J\in \mathbb{Z}^d} |x_J|^2 \rho_{J}^{n+1} \leq \Big(1+C\Delta t\Big)\sum_{J\in \mathbb{Z}^d} |x_J|^2 \rho_{J}^{n} + C\Delta t\left(\sum_{i=1}^d\Delta x_i+1\right). $$ We conclude the proof using a discrete version of Gronwall's lemma.
\end{proof} {In the case when $W$ is radial and satisfies \textbf{\bf (A0)}--\textbf{(A2)}, $\lambda$ is (strictly) positive and $\rho^{ini}$ has a bounded support, Lemmas \ref{lem:CFL} and \ref{bounddismom} become: \begin{lemma} \label{lem:CFL:lambda:>0} Assume that $W$ is radial and satisfies \textbf{\bf (A0)}--\textbf{\bf (A2)}, that $\lambda$ is (strictly) positive and that $\rho^{ini}$ has a bounded support. Then the conclusions of Lemmas \ref{lem:CFL} and \ref{bounddismom} remain true provided that $w_{\infty}$ is defined as in \eqref{borngradW:lambda>0}. Moreover, for any $R \geq 0$ such that $\textrm{\rm Supp}(\rho^{ini}) \subset B_{\infty}(M_{1},R)$, it holds, for any $n \in \mathbb{N}$, \begin{equation*} {\rm Supp}(\rho^n_{\Delta x}) \subset B_{\infty}(M_{1,\Delta x},R+\Delta x), \end{equation*} that is \begin{equation*} \forall J \in \mathbb{Z}^d, \quad x_{J} \not \in B_{\infty}(M_{1,\Delta x},R+\Delta x) \Rightarrow \rho^n_{J} = 0. \end{equation*} \end{lemma} The meaning of Lemma \ref{lem:CFL:lambda:>0} is clear. For $R$ as in the statement, the mass, as defined by the numerical scheme, cannot leave the ball $B_{\infty}(M_{1,\Delta x},R+\Delta x)$. We thus recover the same idea as in Theorem \ref{Exist}. \vskip 4pt \begin{proof} Once we know that the mass, as defined by the numerical scheme, cannot leave the ball $B_{\infty}(M_{1,\Delta x},R+\Delta x)$, the proof is similar to that of Lemmas \ref{lem:CFL} and \ref{bounddismom}. We therefore focus on the second part of the statement. We first recall that $\rho^0_{J} = \int_{C_{J}} \rho^{ini}(dx)$, for $J \in \mathbb{Z}^d$. Hence, if $x_{J} \not \in B_{\infty}(M_{1,\Delta x},R+\Delta x)$, we have $x_{J} \not \in B_{\infty}(M_{1},R+\Delta x/2)$ (since each mass is moved to the center of its cell, $\vert M_{1,\Delta x} - M_{1} \vert_{\infty} \leq \Delta x/2$) and then $C_{J} \cap B_{\infty}(M_{1},R) = \emptyset$ and thus $\rho^0_{J}=0$. Below, we prove by induction that the same holds true for any $n \in \mathbb{N}$.
To do so, we assume that there exists an integer $n \in \mathbb{N}$ such that, for all $J \in \mathbb{Z}^d$, $\rho^n_{J} =0$ if \begin{equation} \label{eq:lambda>0:xJ} x_{J} \not \in B_{\infty}(M_{1,\Delta x}{},R + \Delta x). \end{equation} The goal is then to prove that, for any $J$ satisfying \eqref{eq:lambda>0:xJ}, $\rho^{n+1}_{J}=0$. By \eqref{schemarho}, it suffices to prove that, for any coordinate $i \in \{1,\cdots,d\}$ and any $J$ as in \eqref{eq:lambda>0:xJ}, \begin{equation} \label{eq:lambda>0:induction} \rho^{n}_{J+e_{i}} \bigl( {a_{i}}^n_{J+e_{i}}\bigr)^- = 0, \quad \textrm{\rm and} \quad \rho^{n}_{J-e_{i}} \bigl( {a_{i}}^n_{J-e_{i}}\bigr)^+ = 0. \end{equation} Without any loss of generality, we can assume that there exists a coordinate $i_{0} \in \{1,\cdots,d\}$ such that $(x_{J})_{i_{0}} > R + \Delta x + (M_{1,\Delta x}{})_{i_{0}}$ (otherwise $(x_{J})_{i_{0}} < -R - \Delta x+ (M_{1,\Delta x}{})_{i_{0}}$ and the argument below is the same). Hence, $(x_{J+e_{i_{0}}})_{i_{0}} > R + \Delta x + (M_{1,\Delta x}{})_{i_{0}}$ and, by the induction hypothesis, $\rho^n_{J+e_{i_{0}}}=0$, which proves the first equality in \eqref{eq:lambda>0:induction} when $i=i_{0}$. In order to prove the second equality when $i=i_{0}$, we notice from \eqref{def:aij} that \begin{equation*} \begin{split} {a_{i_{0}}}^n_{J-e_{i_{0}}} = -\sum_{K\in \mathbb{Z}^d} \rho_{K}^n \, \widehat{\partial_{x_{i_{0}}} W}\bigl(x_{J-e_{i_{0}}}-x_K \bigr) &= -\sum_{K\in \mathbb{Z}^d : (x_{K})_{i_{0}} \le R + \Delta x + (M_{1,\Delta x}{})_{i_{0}}} \rho_{K}^n \, \widehat{\partial_{x_{i_{0}}} W}\bigl(x_{J-e_{i_{0}}}-x_K \bigr) \\ &= -\sum_{K\in \mathbb{Z}^d : (x_{K})_{i_{0}} < (x_{J})_{i_{0}}} \rho_{K}^n \, \widehat{\partial_{x_{i_{0}}} W}\bigl(x_{J-e_{i_{0}}}-x_K \bigr) \\ &= -\sum_{K\in \mathbb{Z}^d : (x_{K})_{i_{0}} \leq (x_{J-e_{i_{0}}})_{i_{0}}} \rho_{K}^n \, \widehat{\partial_{x_{i_{0}}} W}\bigl(x_{J-e_{i_{0}}}-x_K \bigr). 
\end{split} \end{equation*} As $W$ is radial and $\lambda >0$, $\nabla W(x - y)$ is positively proportional to $x-y$. Hence, $\widehat{\partial_{x_{i_{0}}} W}(x_{J-e_{i_{0}}}-x_K) \geq 0$ when $(x_{K})_{i_{0}} \leq (x_{J-e_{i_{0}}})_{i_{0}}$. Therefore, $({a_{i_{0}}}^n_{J-e_{i_{0}}})^{+}=0$, which proves the second equality in \eqref{eq:lambda>0:induction}. It remains to prove \eqref{eq:lambda>0:induction} for $i \not = i_{0}$. Obviously, $(x_{J-e_{i}})_{i_{0}}= (x_{J+e_{i}})_{i_{0}} = (x_{J})_{i_{0}} > R + \Delta x + (M_{1,\Delta x})_{i_{0}}$. By the induction hypothesis, $\rho^n_{J-e_{i}} = \rho^n_{J+e_{i}} = 0$, which completes the proof. \end{proof} } {\begin{remark} Lemma \ref{lem:CFL:lambda:>0} is the main rationale for requiring $W$ to be radial. Indeed, the counterexample below shows that the growth of the support of $\rho^{ini}$ can hardly be controlled whenever $\lambda >0$ and $W$ is just assumed to satisfy \textbf{\bf (A0)}--\textbf{\bf (A2)}. Consider for instance the following potential in dimension $d=2$: \begin{equation*} W(x_{1},x_{2}) = \frac12 \bigl( x_{1} - q x_{2} \bigr)^2 + \frac{q^2}2 x_{2}^2, \quad (x_{1},x_{2}) \in \mathbb{R}^2, \end{equation*} where $q$ is a free integer whose value will be fixed later on. One easily checks that \begin{equation*} \partial_{x_{1}} W(x_{1},x_{2}) = x_{1}- q x_{2}, \quad \partial_{x_{2}} W(x_{1},x_{2}) = q ( q x_{2} - x_{1}) + q^2 x_{2}. \end{equation*} Standard computations show that the smallest eigenvalue of the Hessian matrix (which is independent of $(x_{1},x_{2})$) is \begin{equation*} \begin{split} &\frac{(1+2q^2) - 2q^2 \sqrt{1+1/(4q^4)}}{2} \sim_{q \rightarrow \infty} \frac12, \end{split} \end{equation*} so that $W$ is $\lambda$-convex with $\lambda$ converging to $1/2$ as $q$ tends to $\infty$. Take now a centered probability measure $\rho$ and compute the first coordinate of the velocity field $\widehat{a}_{\rho}$.
By centering, \begin{equation*} \bigl(\widehat{a}_{\rho}\bigr)_{1}(x_{1},x_{2}) = qx_{2}- x_{1}. \end{equation*} In particular, if $x_{2}=1$, then $(\widehat{a}_{\rho})_{1}(x_{1},1) = q- x_{1}$, which is positive as long as $x_{1}< q$. Therefore, if the numerical scheme is initialized with some centered $\rho^0_{\Delta x}$ supported by the unit square $[-1,1]^2$, it holds \begin{equation*} (\widehat{a}_{\rho^0_{\Delta x}})_{1}(1,1) >0, \end{equation*} if $q>1$. Hence, provided that condition \eqref{CFL} holds true, $\rho^{1}_{\Delta x}$ charges the point $(1+\Delta x,1)$. Since the numerical scheme preserves the centering, we also have \begin{equation*} (\widehat{a}_{\rho^1_{\Delta x}})_{1}(1+\Delta x,1) >0, \end{equation*} if $q>1+\Delta x$, and then $\rho^2_{\Delta x}$ also charges the point $(1+2\Delta x,1)$, and so on up until $(\Delta x \lfloor q/\Delta x \rfloor,1)$. This says that there is no way to control the growth of the support of the numerical solution in terms of the sole lower bound of the Hessian matrix. In fact, the growth of $\nabla W$ plays a key role. This is in stark contrast with the support of the real solution, which may be bounded independently of $q$, as emphasized in the proof of Theorem \ref{Exist}. A possible way to overcome the fact that the numerical scheme does not preserve any ball containing the initial support in the general case when $W$ is not radial would be to truncate the scheme. We feel it is more reasonable not to address this question in this paper, as it would require revisiting in depth the arguments used to tackle the case $\lambda \leq 0$. \end{remark} } \subsection{Comparison with a potential non-increasing scheme} \label{subse:potential:nonincreasing} It must be stressed that the scheme could be defined differently in order to force the potential (or total energy: $\iint_{\mathbb{R}^d \times \mathbb{R}^d} W(x - y) \, \rho(dx)\, \rho(dy)$) to be non-increasing.
Basically, this requires the velocity $a$ to be defined as a discrete derivative. For simplicity, we provide the construction of the scheme in dimension 1 only. For a probability measure $\varrho \in {\mathcal P}(\mathbb{Z})$ and a cell $I \in \mathbb{Z}$, we consider the following two discrete convolutions of finite differences: \begin{equation*} \begin{split} &\frac1{\Delta x}\sum_{J \in \mathbb{Z}} \Bigl[ \Bigl( W\bigl( \Delta x (I + 1 - J) \bigr) - W\bigl( \Delta x (I - J) \bigr) \Bigr) \varrho_{J} \Bigr] \\ &\hspace{15pt} = \biggl[ \int_{\mathbb{R}} \frac{W(x+\Delta x - y ) - W(x-y)}{\Delta x} \varrho_{\Delta x}(dy) \biggr]_{\vert x=I \Delta x} \\ \textrm{\rm and} \quad &\frac1{\Delta x} \sum_{J \in \mathbb{Z}} \Bigl[ \Bigl( W\bigl( \Delta x (I -1 - J) \bigr) - W\bigl( \Delta x (I - J) \bigr) \Bigr) \varrho_{J} \Bigr] \\ &\hspace{15pt} = \biggl[ \int_{\mathbb{R}} \frac{W(x-\Delta x - y ) - W(x-y)}{\Delta x} \varrho_{\Delta x}(dy) \biggr]_{\vert x=I \Delta x}, \end{split} \end{equation*} where, as before, $\varrho_{\Delta x}$ is obtained by pushing forward $\varrho$ by the mapping $y \mapsto \Delta x \, y$. The two terms above define velocities at the interfaces of the cell $I$. Namely, we call the first term $-a_{I+\tfrac12}$ and the second one $a_{I-\tfrac12}$. Of course, the sign $-$ in the former term guarantees the consistency of the notation, that is $a_{(I+1)-\tfrac12}$ is equal to $a_{I+\tfrac12}$. Following \eqref{dis_num}, the scheme is defined by: \begin{equation} \label{dis_num_Carrillo} \rho^{n+1}_{J} := \rho^n_{J} - \frac{\Delta t}{\Delta x} \Bigl( \bigl( a^n_{J+\tfrac12} \bigr)^{+} \rho_{J}^n - \bigl( a^n_{J+\tfrac12} \bigr)^{-} \rho_{J+1}^n + \bigl( a^n_{J-\tfrac12} \bigr)^{-} \rho_{J}^n - \bigl( a^n_{J-\tfrac12} \bigr)^{+} \rho_{J-1}^n \Bigr), \end{equation} for $n \in \mathbb{N}$ and $J \in \mathbb{Z}$.
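A minimal Python sketch of one step of \eqref{dis_num_Carrillo} (in our own notation; the grid is truncated to a finite window and the mass is assumed to vanish outside the array, so that no flux crosses the outer interfaces):

```python
import numpy as np

# One step of the interface-velocity scheme (dis_num_Carrillo) in dimension 1.
# rho[I] is the mass in cell I; W is the potential, evaluated at grid differences.
def interface_step(rho, W, dt, dx):
    n = len(rho)
    J = np.arange(n)
    # a_{I+1/2} = -(1/dx) sum_J [W(dx(I+1-J)) - W(dx(I-J))] rho_J
    a_half = np.array([
        -np.sum((W(dx * (I + 1 - J)) - W(dx * (I - J))) * rho) / dx
        for I in range(n)
    ])
    # upwind flux through the interface I+1/2 (mass outside the array is zero)
    flux = np.maximum(a_half, 0.0) * rho
    flux[:-1] -= np.maximum(-a_half[:-1], 0.0) * rho[1:]
    new = rho - dt / dx * flux      # outflow through the right interface of each cell
    new[1:] += dt / dx * flux[:-1]  # inflow through the left interface
    return new

# Attractive quadratic potential W(x) = x^2/2: the mass contracts symmetrically
rho = np.full(5, 0.2)
new = interface_step(rho, lambda z: 0.5 * z**2, dt=0.1, dx=0.1)
print(new)
```

For this attractive example the mass stays inside the window, so the total mass is conserved and the profile contracts toward the center.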
It is shown in \cite{CCH} that the potential is non-increasing for the semi-discretized version of this scheme, which is to say that, up to a remainder of order 2 in $\Delta t$ (the value of $\Delta x$ being fixed), the potential of the fully discretized scheme does not increase from one step to another. The proof of the latter claim follows from a direct expansion of the quantity \begin{equation*} \frac12 \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} W(x-y) \rho_{\Delta x}^{n+1}(dx) \rho_{\Delta x}^{n+1}(dy) \end{equation*} by using the updating rule for $\rho^{n+1}_{J}$ in terms of $\rho^{n}_{J}$, $\rho^{n}_{J-1}$ and $\rho^{n}_{J+1}$. The numerical scheme investigated in this paper does not satisfy the same property. Indeed, we provide a counterexample, which shows that the potential may increase when $W$ is convex, as a consequence of the numerical diffusion. However, the same example, but in dimension 1, shows that the scheme \eqref{dis_num_Carrillo} may not be convergent for certain forms of potential for which Theorem \ref{TH} applies, see Subsection \ref{subse:newtonian}. \begin{proposition} Choose $d=2$, $W(x)= | x |$ and take $\Delta x_{1} = \Delta x_{2} = 1$. Let the initial condition of the scheme, which we just denote by $\rho^0$, charge the points $0=(0,0)$, $e_{1}=(1,0)$ and $e_{2}=(0,1)$ with $1-p$, $p/2$ and $p/2$ as respective weights, where $p \in (0,1)$. Then, denoting by $\rho^1$ the distribution obtained after one step of the upwind scheme, it holds that: \begin{equation} \label{eq:counterexample} \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \vert x- y \vert \rho^1(dx) \rho^1(dy) = \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \vert x- y \vert \rho^0(dx) \rho^0(dy) + \bigl( \sqrt{2} -1 \bigr) p^2 (2p-1) \Delta t + O(\Delta t^2), \end{equation} where the Landau symbol $O(\cdot)$ may depend upon $p$. \end{proposition} Choosing $p>1/2$ in \eqref{eq:counterexample}, we see that the potential may increase at the same rate as the time step.
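Before giving the proof, let us mention that identity \eqref{eq:counterexample} is easy to check numerically. The following Python sketch (our own illustration; the function names are hypothetical) implements one step of the upwind scheme for $W(x)=|x|$ on the unit grid of $\mathbb{R}^2$, the mass being transported to the neighbouring nodes with the weights $\Delta t\,({a_i})^{\pm}$, and compares the observed growth of the potential with the first-order term $(\sqrt{2}-1)p^2(2p-1)\Delta t$.

```python
import math

def upwind_step(masses, dt):
    """One step of the 2D upwind scheme for W(x) = |x| on the grid Z^2
    (with dx1 = dx2 = 1); `masses` maps integer points (j1, j2) to weights."""
    def velocity(p):
        vx = vy = 0.0
        for q, m in masses.items():
            dist = math.hypot(q[0] - p[0], q[1] - p[1])
            if dist > 0.0:                       # convention 0/0 = 0
                vx += m * (q[0] - p[0]) / dist
                vy += m * (q[1] - p[1]) / dist
        return vx, vy

    new = {}
    for p, m in masses.items():
        vx, vy = velocity(p)
        # part of the mass staying at p, then upwind transfers to neighbours
        new[p] = new.get(p, 0.0) + m * (1.0 - dt * (abs(vx) + abs(vy)))
        for q, v in (((p[0] + 1, p[1]), max(vx, 0.0)),
                     ((p[0] - 1, p[1]), max(-vx, 0.0)),
                     ((p[0], p[1] + 1), max(vy, 0.0)),
                     ((p[0], p[1] - 1), max(-vy, 0.0))):
            if v > 0.0:
                new[q] = new.get(q, 0.0) + m * dt * v
    return new

def potential(masses):
    """Interaction energy for W(x) = |x| (double sum over ordered pairs)."""
    return sum(mi * mj * math.hypot(p[0] - q[0], p[1] - q[1])
               for p, mi in masses.items() for q, mj in masses.items())
```

For $p>1/2$ and a small time step, the difference quotient of the potential over one step is indeed close to the positive constant $(\sqrt{2}-1)p^2(2p-1)$, up to the $O(\Delta t)$ remainder of \eqref{eq:counterexample}.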
\vspace{5pt} \begin{proof} We first compute the potential at time $0$. To do so, we compute $\int_{\mathbb{R}^2} \vert x- y \vert \rho^0(dy)$, for $x \in \{0,e_{1},e_{2}\}$: \begin{equation*} \begin{split} &\int_{\mathbb{R}^2} \vert y \vert \rho^0(dy) = p, \quad \int_{\mathbb{R}^2} \vert e_{1}- y \vert \rho^0(dy) = \int_{\mathbb{R}^2} \vert e_{2}- y \vert \rho^0(dy) = (1-p) + \frac{p}{\sqrt{2}}, \end{split} \end{equation*} so that \begin{equation*} \begin{split} &\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \vert x- y \vert \rho^0(dx) \rho^0(dy) = 2 (1-p) p + \frac{p^2}{\sqrt{2}}. \end{split} \end{equation*} In order to compute the potential at time 1, we compute the velocity at each of the above points. Observing that the velocity at point $x$ is given by the formula: \begin{equation*} {a_{i}}^0_{x} = \int_{\mathbb{R}^2} \frac{y_{i} -x_{i} }{\vert y-x \vert} \rho^0(dy), \qquad i =1,2, \qquad \textrm{\rm with the convention} \ \frac{0}{0} = 0, \end{equation*} we get: \begin{alignat*}{2} &{a_{1}}^{0}_{(0,0)} = \frac{p}{2}, \quad &&\displaystyle {a_{2}}^{0}_{(0,0)} = \frac{p}{2}, \\ &{a_{1}}^{0}_{(1,0)} = - (1-p) - \frac{p}{2\sqrt{2}}, \quad &&\displaystyle {a_{2}}^{0}_{(1,0)} = \frac{p}{2\sqrt{2}}, \\ &{a_{1}}^{0}_{(0,1)} = \frac{p}{2\sqrt{2}}, \quad &&\displaystyle {a_{2}}^{0}_{(0,1)} = - (1-p) - \frac{p}{2\sqrt{2}}. \end{alignat*} We then compute the new masses at time $1$. There is one additional point which is charged: $e_{1}+e_{2}=(1,1)$. We have: \begin{alignat*}{2} \rho^{1}(0) &= (1-p) + \frac{p^2}{2\sqrt{2}} \Delta t, \\ \rho^{1}(e_{1}) = \rho^{1}(e_{2}) &= \frac{p}{2} - \frac{p^2}{2 \sqrt{2}} \Delta t, \\ \rho^{1}(e_{1}+e_{2}) &= \frac{p^2}{2 \sqrt{2}} \Delta t. \end{alignat*} We now have all the required data to compute the potential at time 1. 
\begin{alignat*}{2} \int_{\mathbb{R}^2} \vert y \vert \rho^1(dy) &= p - \frac{p^2}{\sqrt{2}} \Delta t + \frac{p^2}{2} \Delta t, \\ \int_{\mathbb{R}^2} \vert e_{1}- y \vert \rho^1(dy) = \int_{\mathbb{R}^2} \vert e_{2}- y \vert \rho^1(dy) &= (1-p) + \frac{p}{\sqrt{2}} + \frac{p^2}{\sqrt{2}} \Delta t - \frac{p^2}{2} \Delta t , \\ \int_{\mathbb{R}^2} \vert e_{1}+e_{2} - y \vert \rho^1(dy) & = (1-p) \sqrt{2} + p + \frac{p^2}{2} \Delta t - \frac{p^2}{\sqrt{2}} \Delta t. \end{alignat*} Finally, the potential at time 1 is given by: \begin{equation*} \begin{split} \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \vert x- y \vert \rho^{1}(dx) \rho^{1}(dy) &= \Bigl( (1-p) + \frac{p^2}{2 \sqrt{2}} \Delta t \Bigr) \Bigl( p - \frac{p^2}{\sqrt{2}} \Delta t + \frac{p^2}{2} \Delta t \Bigr) \\ &\hspace{5pt} +\Bigl( p - \frac{p^2}{\sqrt{2}} \Delta t \Bigr) \Bigl( (1-p) + \frac{p}{\sqrt{2}} + \frac{p^2}{\sqrt{2}} \Delta t - \frac{p^2}{2} \Delta t \Bigr) \\ &\hspace{5pt} + \frac{p^2}{2 \sqrt{2}} \Delta t \Bigl( (1-p) \sqrt{2} + p + \frac{p^2}{2} \Delta t - \frac{p^2}{\sqrt{2}} \Delta t \Bigr). \end{split} \end{equation*} We expand the above right-hand side in powers of $\Delta t$. The zero-order term is exactly equal to $\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \vert x-y \vert \rho^0(dx) \rho^0(dy)$. So, we just compute the terms in $\Delta t$. It is equal to \begin{equation*} (1- \sqrt{2}) (1-p) p^2 + (\sqrt{2}-1)p^3 = (\sqrt{2}-1) p^2 (2p-1), \end{equation*} which completes the proof. \end{proof} \section{Order of convergence} \label{sec:ordre} This section is devoted to the proof of Theorem \ref{TH}. \subsection{Preliminaries} Before presenting the proof, we introduce some notations and establish some useful properties. 
We first define the following interpolation weights: for $J\in\mathbb{Z}^d$ and $y \in \mathbb{R}^d$, we let \begin{equation}\label{def:alpha} \alpha_J(y) = \left\{ \begin{array}{ll} \displaystyle 1-\sum_{i=1}^d \frac{|\langle y-{x_J},e_i\rangle|}{\Delta x_i} & \textrm{when} \ y\in C_J, \vspace{5pt} \\ \displaystyle \frac{1}{\Delta x_i}\bigl(\langle y-x_{J-e_i},e_i\rangle\bigr)^+ &\textrm{when} \ y\in C_{J-e_i}, \ \ \textrm{for} \ i= 1,\dots,d, \vspace{5pt} \\ \displaystyle \frac{1}{\Delta x_i}\bigl(\langle y-x_{J+e_i},e_i\rangle\bigr)^- &\textrm{when} \ y\in C_{J+e_i}, \ \ \textrm{for} \ i=1,\dots,d, \vspace{5pt} \\ 0 \quad &\textrm{otherwise}. \end{array} \right. \end{equation} The terminology \textit{interpolation weights} is justified by the following straightforward observation. Given a collection of reals $(h_{J})_{J \in {\mathbb Z}^d}$ indexed by the cells of the mesh, which we may regard as a real-valued function $h : x_{J} \mapsto h_{J}$ defined at the nodes of the mesh, we may define an interpolation of $h=(h_{J})_{J \in {\mathbb Z}^d}$ by letting \begin{equation} \label{eq:matchalI} {\mathcal I}(h)(y) = \sum_{J \in {\mathbb Z}^d} h_{J} \alpha_{J}(y), \quad y \in \mathbb{R}^d. \end{equation} Obviously, the sum in the right-hand side makes sense since only a finite number of weights are non-zero for a given value of $y$. Clearly, the functional ${\mathcal I}$ is an \textit{interpolation operator}. As explained below, ${\mathcal I}$ makes the connection between the analysis we perform in this paper and the one we performed in our previous work \cite{DLV}. Several crucial facts must be noticed. The first one is that, contrary to what one could guess at first sight, the weights are not necessarily non-negative. For a given $J \in \mathbb{Z}^d$, take for instance $y=(y_{i}=(J_{i}-\tfrac12) \Delta x_{i})_{i=1,\dots,d}\in C_{J}$. Then $\alpha_{J}(y) = 1 - \tfrac{d}{2}$, which is obviously negative if $d\geq 3$. 
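These weights are straightforward to implement. The following Python sketch (our illustration; the uniform grid $x_J = (J_i \Delta x_i)_{i=1,\dots,d}$ and the function name are assumptions) returns the at most $2d+1$ non-zero weights at a point $y$; one can check on it the two identities of Lemma \ref{propalpha} below, as well as the possible negativity of the central weight when $d \geq 3$.

```python
def alpha_weights(y, dx):
    """Non-zero interpolation weights alpha_L(y) of (def:alpha), assuming the
    uniform grid x_J = (J_i * dx_i); returns a dict {L: alpha_L(y)}."""
    d = len(y)
    # index J of the cell containing y; the offsets s_i lie in [-1/2, 1/2]
    J = tuple(round(y[i] / dx[i]) for i in range(d))
    s = [(y[i] - J[i] * dx[i]) / dx[i] for i in range(d)]
    # central weight: may be negative when d >= 3 (corner of the cell)
    w = {J: 1.0 - sum(abs(si) for si in s)}
    for i in range(d):
        if s[i] != 0.0:
            K = J[:i] + (J[i] + (1 if s[i] > 0.0 else -1),) + J[i + 1:]
            w[K] = abs(s[i])
    return w
```

By construction the weights sum to one and reproduce $y$ as a weighted average of the nodes, which is exactly the content of Lemma \ref{propalpha}.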
However, the second point is that, for \textit{useful values} of $y$, the weights are indeed non-negative provided that the CFL condition \eqref{CFL} is in force. For a given $J \in \mathbb{Z}^d$, introduce the set $U_{J}$ of so-called \textit{useful values} attached to the node $x_{J}$, given by \begin{equation*} U_{J} = \bigl\{ y \in \mathbb{R}^d : \bigl\vert \bigl\langle y - x_{J},e_{i} \rangle \bigr\vert \leq w_{\infty} \Delta t, \quad i = 1,\dots,d \bigr\}. \end{equation*} Then, for any $J,L \in \mathbb{Z}^d$ and any $y \in U_{L}$, $\alpha_{J}(y)$ is non-negative, which is a direct consequence of the CFL condition \eqref{CFL}. In fact, the CFL condition \eqref{CFL} says more, and this is the rationale for the additional factor $\tfrac12$ in \eqref{CFL}: $U_{J}$ is included in $C_{J}$. Of course, the consequence is that, under the CFL condition \eqref{CFL}, we have, for any $J\in\mathbb{Z}^d$, $x_J+a_J^n\Delta t \in C_J$, where $a^n_{J}$ is the $d$-dimensional vector with entries $({a_i}^n_{J})_{i=1,\cdots,d}$ (indeed $|{a_{i}}_J^n| \Delta t \leq w_\infty \Delta t < \Delta x_i/2$). Another key fact is that the definition of $\alpha_{J}(y)$ in \eqref{def:alpha} is closely related to the definition of the numerical scheme \eqref{dis_num}. Indeed, we have the following formula, for any $J,L \in {\mathbb Z}^d$, \begin{equation} \label{eq:transition:probas} \alpha_{J} \bigl( x_{L} + \Delta t a^n_{L} \bigr) = \left\{ \begin{array}{ll} \displaystyle 1-\sum_{i=1}^d \vert {a_{i}}^n_{J} \vert \frac{\Delta t}{\Delta x_i} & \textrm{when} \ L=J, \vspace{5pt} \\ \displaystyle \frac{\Delta t}{\Delta x_i}\bigl( {a_{i}}^n_{J-e_{i}} \bigr)^{+} &\textrm{when} \ L = J-e_{i}, \ \ \textrm{for} \ i= 1,\dots,d, \vspace{5pt} \\ \displaystyle \frac{\Delta t}{\Delta x_i}\bigl( {a_{i}}^n_{J+e_{i}} \bigr)^{-} &\textrm{when} \ L = J+e_{i}, \ \ \textrm{for} \ i= 1,\dots,d, \vspace{5pt} \\ 0 \quad &\textrm{otherwise}. \end{array} \right.
\end{equation} In particular, we may rewrite \eqref{dis_num} as \begin{equation}\label{scheme3} \forall\, J \in \mathbb{Z}^d, \quad \rho^{n+1}_J = \sum_{L\in\mathbb{Z}^d} \rho^n_L \alpha_{J}\bigl(x_L+\Delta t a^n_L\bigr), \end{equation} which is the core of our analysis below. In this regard, the following lemma gathers some useful properties. \begin{lemma}\label{propalpha} Let $(\alpha_{L}(y))_{L \in \mathbb{Z}^d,y \in \mathbb{R}^d}$ be defined as in \eqref{def:alpha}. Then, for any $y \in \mathbb{R}^d$, we have $$ \sum_{L\in \mathbb{Z}^d} \alpha_{L}(y) = 1 \quad \mbox{ and } \quad \sum_{L\in \mathbb{Z}^d} x_L \alpha_{L}(y) = y. $$ \end{lemma} \begin{proof} There exists a unique $J\in \mathbb{Z}^d$ such that $y\in C_J$. Then, we compute \begin{align*} \sum_{L\in \mathbb{Z}^d} \alpha_{L}(y) & = \alpha_{J}(y) + \sum_{i=1}^d \bigl(\alpha_{J+e_i}(y)+\alpha_{J-e_i}(y)\bigr) \\ & = 1 - \sum_{i=1}^d \frac{|\langle y-{x_J},e_i\rangle|}{\Delta x_i} + \sum_{i=1}^d \frac{1}{\Delta x_i} \Bigl( \bigl(\langle y-x_{J},e_i\rangle \bigr)^+ + \bigl(\langle y-x_{J},e_i\rangle \bigr)^- \Bigr) = 1. \end{align*} Then, using the fact that $x_{J+e_i}-x_J = \Delta x_i e_i$, for $i=1,\ldots,d$, we have \begin{align*} \sum_{L\in \mathbb{Z}^d} x_L \alpha_{L}(y) & = x_J \alpha_J(y) + \sum_{i=1}^d \bigl(x_{J+e_i} \alpha_{J+e_i}(y)+ x_{J-e_i} \alpha_{J-e_i}(y)\bigr) \\ & = x_J + \sum_{i=1}^d \Bigl( \frac{1}{\Delta x_i} \bigl(\langle y-x_J, e_i\rangle \bigr)^+ \Delta x_i e_i - \frac{1}{\Delta x_i} \bigl(\langle y-x_J, e_i\rangle \bigr)^- \Delta x_i e_i \Bigr) \\ & = x_J + \sum_{i=1}^d \langle y-x_J, e_i\rangle e_i = y, \end{align*} which completes the proof. \end{proof} \begin{remark} \label{comparison} Lemma \ref{propalpha} prompts us to draw a comparison with our previous paper \cite{DLV}.
For a given $y \in \mathbb{R}^d$ in the set of useful values $U:=\cup_{J \in \mathbb{Z}^d} U_{J}$, namely $y \in U_{J}$ for some $J \in \mathbb{Z}^d$, the collection of weights $(\alpha_{L}(y))_{L \in \mathbb{Z}^d}$ forms a probability measure, as the weights are non-negative and their sum is 1! In particular, ${\mathcal I}(h)(y)$ in \eqref{eq:matchalI}, for $y \in U$, may be interpreted as an expectation. Using the same terminology as in \cite{DLV} (which is in fact the terminology of the theory of Markov chains), those weights should be regarded as transition probabilities: For a given $y$ in the set of useful values, $\alpha_{L}(y)$ reads as the probability of jumping from a \emph{certain state depending on the sole value of $y$} to the node $x_{L}$. Of course, the interpretation of the so-called \emph{certain state depending on the sole value of $y$} is better understood from \eqref{eq:transition:probas}. In \eqref{eq:transition:probas}, if we fix a cell $L \in \mathbb{Z}^d$ (or equivalently a node $x_{L}$), then $\alpha_{J}(x_{L}+\Delta t a^n_{L})$ should read as the probability of passing from the node $x_{L}$ to the node $x_{J}$ (or from the cell $L$ to the cell $J$) at the $n^{\textrm{\rm th}}$ step of a (time inhomogeneous) Markov chain having the collection of nodes (or of cells) as state space. In this regard, \eqref{scheme3} is nothing but the Kolmogorov equation for the corresponding Markov chain, as $(\rho^n_{J})_{J \in \mathbb{Z}^d}$ can be interpreted as the law at time $n$ of the Markov chain driven by the latter transition probabilities. The reader can easily check that the so-called \emph{stochastic characteristic} used in \cite{DLV} is in fact this Markov chain. Below, we do not make use of the Markov chain explicitly. 
Still, we use the weights $(\alpha_{J}(y))_{J \in \mathbb{Z}^d, y \in \mathbb{R}^d}$ to construct a coupling between the two measures $\rho^n_{\Delta x}$ and $\rho^{n+1}_{\Delta x}$, that is to construct a specific element of $\Gamma(\rho^n_{\Delta x},\rho^{n+1}_{\Delta x})$. In \cite{DLV}, this coupling does not explicitly show up but it is in fact implicitly used, as it coincides with the joint law of two consecutive states of the aforementioned Markov chain. In a nutshell, the reader can reformulate the whole analysis below in a probabilistic fashion. The only (conceptual) difficulty to do so is that, in contrast with \cite{DLV}, the Markov chain is here \emph{nonlinear}: as $a^n$ in \eqref{def:aij} depends on $\rho^n$, the transition probabilities of the Markov chain depend upon the marginal law of the chain itself, which gives rise to a so-called \emph{nonlinear Markov chain}! \end{remark} \subsection{Proof of Theorem \ref{TH}} {\bf 1st step.} We first consider the case where the initial datum is given by $\rho^{ini}:=\rho_{\Delta x}^0 = \sum_{J\in\mathbb{Z}^d} \rho_J^0 \delta_{x_J}$, where we recall that $\rho_J^0$ is defined in \eqref{disrho0}. For $n\in\mathbb{N}$, let us define $$ D_n:= d_W \bigl(\rho(t^n),\rho_{\Delta x}^n \bigr). $$ Clearly, with our choice of initial datum, we have $D_0=0$. Letting $\gamma$ be an optimal plan in $\Gamma_0(\rho(t^n),\rho_{\Delta x}^n)$, we have \[ D_n = \left(\iint_{\mathbb{R}^d\times\mathbb{R}^d} |x-y|^2 \gamma(dx,dy)\right)^{1/2}. \] Let us introduce $a^n_{\Delta x}$, the reconstruction of the velocity, piecewise affine in each direction, such that for all $J\in\mathbb{Z}^d$, $a^n_{\Delta x}(x_J)=a_J^n$. Denote also by $Z := Z_{\rho}$ the flow given by Theorem \ref{Exist}, when $\rho^{ini}$ is prescribed as above.
Recalling the definition of $\alpha_J(y)$ from \eqref{def:alpha}, we then consider a new measure $\gamma'$, defined as the image of $\gamma$ by the kernel ${\mathcal K}$ that associates with a point $(x,y) \in \mathbb{R}^d \times \mathbb{R}^d$ the points $(Z(t^{n+1};t^n,x),x_{L})$, each with weight $\alpha_{L}(y+\Delta t a^n_{\Delta x}(y))$, namely, for any two Borel subsets $A$ and $B$ of $\mathbb{R}^d$, \begin{equation*} \begin{split} {\mathcal K} \bigl( (x,y), A \times B \bigr) &= {\mathbf 1}_{A}\bigl( Z(t^{n+1};t^n,x) \bigr) \sum_{L \in {\mathbb{Z}}^d} \alpha_{L}\bigl(y+\Delta t a^n_{\Delta x}(y) \bigr) {\mathbf 1}_{B}(x_{L}) \\ &= \iint_{\mathbb{R}^d\times \mathbb{R}^d} {\mathbf 1}_{A \times B}(x',y') \biggl[ \delta_{Z(t^{n+1};t^n,x)} \otimes \biggl( \sum_{L \in \mathbb{Z}^d} \alpha_{L}\bigl(y+\Delta t a^n_{\Delta x}(y) \bigr) \delta_{x_{L}}\biggr) \biggr] (dx',dy'), \end{split} \end{equation*} where $\delta_{z}$ denotes the Dirac mass at point $z$, and then \begin{equation*} \gamma'(A \times B) = \iint_{\mathbb{R}^d \times \mathbb{R}^d} {\mathcal K} \bigl( (x,y),A \times B \bigr) \gamma(dx,dy). \end{equation*} Equivalently, for any bounded Borel-measurable function $\theta : \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$, \begin{equation}\label{def:gamma'} \iint_{\mathbb{R}^d\times\mathbb{R}^d} \theta(x,y) \gamma'(dx,dy) = \iint_{\mathbb{R}^d\times\mathbb{R}^d} \biggl[\sum_{L\in \mathbb{Z}^d} \theta\bigl(Z(t^{n+1};t^n,x),x_L\bigr) \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr)\biggr]\,\gamma(dx,dy). \end{equation} Then we have $\gamma'\in\Gamma(\rho(t^{n+1}),\rho_{\Delta x}^{n+1})$.
Indeed, for any bounded Borel-measurable function $\theta_{1}: \mathbb{R}^d \rightarrow \mathbb{R}$, we have, from \eqref{def:gamma'} and Lemma \ref{propalpha}, \begin{align*} \iint_{\mathbb{R}^d\times\mathbb{R}^d} \theta_{1}(x) \gamma'(dx,dy) & = \iint_{\mathbb{R}^d\times\mathbb{R}^d} \biggl[\sum_{L\in \mathbb{Z}^d} \theta_{1}\bigl(Z(t^{n+1};t^n,x)\bigr) \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr)\biggr]\,\gamma(dx,dy) \\ & = \iint_{\mathbb{R}^d\times\mathbb{R}^d} \theta_{1}\bigl(Z(t^{n+1};t^n,x)\bigr)\,\gamma(dx,dy) \\ & = \int_{\mathbb{R}^d} \theta_{1}\bigl(Z(t^{n+1};t^n,x)\bigr) \rho(t^n,dx) = \int_{\mathbb{R}^d} \theta_{1}(x) \rho(t^{n+1},dx), \end{align*} where we used Theorem \ref{Exist} and where $\rho(t^n,dx)$ is a shorter notation for $\rho(t^n)(dx)$ and similarly for $\rho(t^{n+1},dx)$. Similarly, for any bounded Borel-measurable function $\theta_{2} : \mathbb{R}^d \rightarrow \mathbb{R}$, \begin{align*} \iint_{\mathbb{R}^d\times\mathbb{R}^d} \theta_{2}(y) \gamma'(dx,dy) & = \iint_{\mathbb{R}^d\times\mathbb{R}^d} \biggl[\sum_{L\in \mathbb{Z}^d} \theta_{2}(x_L) \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr)\biggr]\,\gamma(dx,dy) \\ & = \sum_{J\in \mathbb{Z}^d}\sum_{L\in \mathbb{Z}^d} \theta_{2}(x_L) \alpha_L\bigl(x_J+\Delta t a_J^n\bigr) \rho_J^n \\ & = \sum_{L\in \mathbb{Z}^d} \theta_{2}(x_L) \rho_L^{n+1} = \int_{\mathbb{R}^d} \theta_{2}(y) \rho^{n+1}_{\Delta x}(dy), \end{align*} where we used \eqref{scheme3}. In particular, we deduce $$ D_{n+1}^2 \leq \iint_{\mathbb{R}^d\times\mathbb{R}^d} |x-y|^2 \gamma'(dx,dy). $$ Using the definition of $\gamma'$ given in \eqref{def:gamma'}, we get \begin{equation}\label{eq1} D_{n+1}^2 \leq \iint_{\mathbb{R}^d\times\mathbb{R}^d} \sum_{L\in \mathbb{Z}^d} \bigl|Z(t^{n+1};t^n,x)-x_L\bigr|^2 \alpha_L \bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \gamma(dx,dy). 
\end{equation} Using both equalities of Lemma \ref{propalpha}, we compute\footnote{The probabilistic reader will easily recognize the standard computation of the $L^2$ norm of a random variable in terms of its variance and its expectation, which indeed plays, but under a conditional form, a key role in \cite{DLV}.} \begin{align} &\sum_{L\in \mathbb{Z}^d}\bigl|Z(t^{n+1};t^n,x) - x_L\bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \notag \\ &= \sum_{L\in \mathbb{Z}^d}\Bigl| \Bigl( Z(t^{n+1};t^n,x) - \bigl( y +\Delta t a^n_{\Delta x}(y)\bigr) \Bigr) - \Bigl( x_L - \bigl( y + \Delta t a^n_{\Delta x}(y)\bigr) \Bigr) \Bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \notag \\ &= \bigl|Z(t^{n+1};t^n,x) - y - \Delta t a^n_{\Delta x}(y) \bigr|^2 + \sum_{L\in \mathbb{Z}^d} \bigl|x_L - y - \Delta t a^n_{\Delta x}(y) \bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \nonumber\\ &\hspace{15pt} -2\biggl\langle Z(t^{n+1};t^n,x) - y - \Delta t a^n_{\Delta x}(y), \sum_{L \in \mathbb{Z}^d} \bigl(x_L-y-\Delta t a^n_{\Delta x}(y)\bigr) \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \biggr\rangle . \label{eqZz} \end{align} Now, as a consequence of Lemma \ref{propalpha}, we observe that $$ \sum_{L\in \mathbb{Z}^d} \bigl(x_L-y-\Delta t a^n_{\Delta x}(y)\bigr) \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) = 0. $$ Thus, equation \eqref{eqZz} can be rewritten as \begin{align*} \sum_{L\in \mathbb{Z}^d}\bigl|Z(t^{n+1};t^n,x) - x_L\bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) &= \bigl|Z(t^{n+1};t^n,x)-y-\Delta t a^n_{\Delta x}(y)\bigr|^2 \\ &\hspace{15pt}+ \sum_{L\in \mathbb{Z}^d} \bigl|x_L-y-\Delta t a^n_{\Delta x}(y)\bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr).
\end{align*} Injecting into \eqref{eq1}, we deduce \begin{align} D_{n+1}^2 \leq &\ \iint_{\mathbb{R}^d\times\mathbb{R}^d} \bigl|Z(t^{n+1};t^n,x)-y-\Delta t a^n_{\Delta x}(y)\bigr|^2 \gamma(dx,dy) \nonumber \\ & + \int_{\mathbb{R}^d} \sum_{L\in \mathbb{Z}^d} \bigl|x_L-y-\Delta t a^n_{\Delta x}(y) \bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \rho_{\Delta x}^n(dy), \label{eq2} \end{align} where we used the fact that $\rho_{\Delta x}^n$ is the second marginal of $\gamma$. By definition, $\rho_{\Delta x}^n = \sum_{J\in\mathbb{Z}^d} \rho_J^n \delta_{x_J}$, so that \begin{align*} &\sum_{L\in \mathbb{Z}^d} \int_{\mathbb{R}^d} \bigl|x_L-y-\Delta t a^n_{\Delta x}(y)\bigr|^2 \alpha_L\bigl(y+\Delta t a^n_{\Delta x}(y)\bigr) \rho_{\Delta x}^n(dy) \\ &\hspace{5cm} = \sum_{J\in \mathbb{Z}^d} \sum_{L\in \mathbb{Z}^d} \bigl|x_L-x_J-\Delta t a^n_J\bigr|^2 \alpha_L\bigl(x_J+\Delta t a^n_J\bigr) \rho_J^n. \end{align*} Moreover, using the definition of $\alpha_L$ in \eqref{def:alpha}, we compute \begin{align*} &\sum_{L\in \mathbb{Z}^d} \bigl|x_L-x_J-\Delta t a^n_J\bigr|^2 \alpha_L\bigl(x_J+\Delta t a^n_J\bigr) \\ &= \Delta t^2 |a_J^n|^2 \left(1-\sum_{i=1}^d \frac{\Delta t}{\Delta x_i} |{a_i}_J^n|\right) + \sum_{i=1}^d \left( \bigl|\Delta x_i e_i -\Delta t a^n_J\bigr|^2 \frac{\Delta t}{\Delta x_i} ({a_i}_J^n)^+ + \bigl|\Delta x_i e_i +\Delta t a^n_J\bigr|^2 \frac{\Delta t}{\Delta x_i} ({a_i}_J^n)^- \right) \\ & \leq C\Delta t(\Delta t + \Delta x), \end{align*} where we used, for the last inequality, the CFL condition \eqref{CFL} and the fact that the velocity $(a_J^n)_J$ is uniformly bounded (see Lemma \ref{lem:CFL} or Lemma \ref{lem:CFL:lambda:>0}). Then, \eqref{eq2} gives \begin{equation}\label{eqD1} D_{n+1}^2 \leq \iint_{\mathbb{R}^d\times\mathbb{R}^d} \bigl|Z(t^{n+1};t^n,x)-y-\Delta t a^n_{\Delta x}(y)\bigr|^2 \gamma(dx,dy) + C \Delta t (\Delta t+\Delta x).
\end{equation} {\bf 2nd step.} We have to estimate the error between the exact characteristic $Z(t^{n+1};t^n,x)$ and the forward Euler discretization $y+\Delta t a^n_{\Delta x}(y)$. By definition of the characteristics \eqref{eq:characteristics}, we have \begin{align*} Z(t^{n+1};t^n,x) & = x + \int_{t^n}^{t^{n+1}} \widehat{a}_\rho\bigl(s,Z(s;t^n,x)\bigr) ds \\ & = x - \int_{t^n}^{t^{n+1}} \int_{\mathbb{R}^d}\widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr) \rho(t^n,d\xi) ds. \end{align*} We also recall that, by definition \eqref{def:aij}, the approximating velocity is given by \begin{align*} a_L^n = - \sum_{J\in\mathbb{Z}^d} \rho_J^n \widehat{\nabla W}(x_L-x_J), \end{align*} so that, for $y$ a node of the mesh, \begin{equation*} y + \Delta t a^n_{\Delta x}(y) = y - \Delta t \int_{\mathbb{R}^d} \widehat{\nabla W}( y - \zeta) \rho^n_{\Delta x}(d\zeta). \end{equation*} Thus, by a straightforward expansion and still for $y$ a node of the mesh, \begin{align*} &\bigl|Z(t^{n+1};t^n,x)-y-\Delta t a^n_{\Delta x}(y)\bigr|^2 \leq |x-y|^2 \\ & - 2\int_{t^n}^{t^{n+1}}\iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl\langle x-y, \widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr)-\widehat{\nabla W}(y-\zeta)\bigr\rangle \rho(t^n,d\xi) \rho_{\Delta x}^n(d\zeta) + C\Delta t^2.
\end{align*} By definition of the optimal plan $\gamma\in \Gamma_0(\rho(t^n),\rho^n_{\Delta x})$, we also have \begin{align*} & \iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl\langle x-y, \widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr)-\widehat{\nabla W}(y-\zeta)\bigr\rangle \rho(t^n,d\xi) \rho_{\Delta x}^n(d\zeta) \\ & = \iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl\langle x-y, \widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr)-\widehat{\nabla W}(y-\zeta)\bigr\rangle \gamma(d\xi,d\zeta) \end{align*} Injecting into \eqref{eqD1}, we get \begin{align*} D_{n+1}^2 \leq & \ D_n^2 + C \Delta t(\Delta t+\Delta x) \\ & - 2 \int_{t^n}^{t^{n+1}}\iint_{\mathbb{R}^d\times \mathbb{R}^d}\iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl\langle x-y, \widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr)-\widehat{\nabla W}(y-\zeta) \bigr\rangle \\ & \hspace{10cm} \gamma(d\xi,d\zeta) \gamma(dx,dy). \end{align*} Decomposing $x-y=x-Z(s;t^n,x)+Z(s;t^n,x)-y$ and using the fact that $|Z(s;t^n,x)-x|\leq w_\infty |s-t^n|$, we get \begin{align*} D_{n+1}^2 \leq & \ D_n^2 + C \Delta t(\Delta t+\Delta x) \\ & - 2 \int_{t^n}^{t^{n+1}}\iint_{\mathbb{R}^d\times \mathbb{R}^d}\iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl\langle Z(s;t^n,x)-y, \widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr)-\widehat{\nabla W}(y-\zeta)\bigr\rangle \\[-2mm] & \hspace{11.7cm} \gamma(d\xi,d\zeta) \gamma(dx,dy). \end{align*} Then, we may use the symmetry of the potential $W$ in assumption {\bf (A0)} for the last term to deduce \begin{align*} D_{n+1}^2 \leq & \ D_n^2 + C \Delta t(\Delta t+\Delta x) \\ & - \int_{t^n}^{t^{n+1}}\iint_{\mathbb{R}^d\times \mathbb{R}^d}\iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl\langle Z(s;t^n,x)-Z(s;t^n,\xi)-y+\zeta, \\[-2mm] & \hspace{4.7cm} \widehat{\nabla W}\bigl(Z(s;t^n,x)-Z(s;t^n,\xi)\bigr)-\widehat{\nabla W}(y-\zeta)\bigr\rangle \, \gamma(d\xi,d\zeta) \gamma(dx,dy). 
\end{align*} Moreover, from the $\lambda$-convexity of $W$ \eqref{lambdaconvWchapo}, we obtain \begin{align*} D_{n+1}^2 \leq & D_n^2 + C \Delta t(\Delta t+\Delta x) \\ & - \lambda \int_{t^n}^{t^{n+1}}\iint_{\mathbb{R}^d\times \mathbb{R}^d}\iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl|Z(s;t^n,x)-y-Z(s;t^n,\xi)+\zeta\bigr|^2 \, \gamma(d\xi,d\zeta) \gamma(dx,dy). \end{align*} Expanding the last term, we deduce \begin{align} D_{n+1}^2 \leq & \ D_n^2 + C \Delta t(\Delta t+\Delta x) - 2\lambda \int_{t^n}^{t^{n+1}} \iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl|Z(s;t^n,x)-y\bigr|^2 \,\gamma(dx,dy) \nonumber \\ & + 2\lambda \int_{t^n}^{t^{n+1}} \biggl\vert \iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl(Z(s;t^n,x)-y\bigr) \,\gamma(dx,dy) \biggr\vert^2. \label{interm2} \end{align} {\bf 3rd step.} Now we distinguish between the two cases $\lambda\leq 0$ and $\lambda>0$. (i) Starting with the case $\lambda\leq 0$, we have that the last term in \eqref{interm2} is nonpositive. Using Young's inequality and the estimate $|x-Z(s;t^n,x)| \leq w_\infty (s-t^n)$, we get, for any $\varepsilon>0$, $$ \bigl|Z(s;t^n,x)-y \bigr|^2 \leq (1+\varepsilon) |x-y|^2 + (1+\frac{1}{\varepsilon}) w_\infty^2 |s-t^n|^2. $$ Hence, injecting into \eqref{interm2}, we deduce $$ D_{n+1}^2 \leq \bigl(1 + 2(1+\varepsilon)|\lambda| \Delta t \bigr) D_n^2 + C \Delta t\Big(\Delta x+\Delta t(1+\frac{\Delta t}{\varepsilon})\Big). $$ Applying a discrete Gronwall inequality, we obtain $$ D_n^2 \leq e^{2 (1+\varepsilon)|\lambda| t^n}\left(D_0^2+ C t^n\Big(\Delta x+\Delta t(1+\frac{\Delta t}{\varepsilon})\Big)\right). $$ We recall that our choice of initial data implies $D_0=0$. Finally, taking $\varepsilon=\Delta t$, we conclude $$ d_W\bigl(\rho(t^n),\rho_{\Delta x}^n\bigr) \leq C e^{(1+\Delta t)|\lambda| t^n} \sqrt{t^n(\Delta x+\Delta t)}. $$ This concludes the proof of Theorem \ref{TH} (i) in the case $\rho^{ini}=\rho_{\Delta x}^0$.
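For the reader who wants to double-check the induction just used, here is a small Python sanity check (ours, with arbitrary constants) of the discrete Gronwall step: saturating a recursion of the form $u_{n+1} \leq (1+b\Delta t)u_n + c\Delta t$ with $u_0=0$ indeed keeps $u_n$ below $e^{b t^n}\, c\, t^n$ at every step.

```python
import math

def gronwall_check(b, c, dt, steps):
    """Iterate u_{n+1} = (1 + b*dt) * u_n + c*dt from u_0 = 0 and verify
    the discrete Gronwall bound u_n <= exp(b * t_n) * c * t_n at each step."""
    u, t = 0.0, 0.0
    for _ in range(steps):
        u = (1.0 + b * dt) * u + c * dt   # saturate the recursive inequality
        t += dt
        if u > math.exp(b * t) * c * t + 1e-12:
            return False
    return True
```

This reflects the closed form $u_n = (c/b)\bigl((1+b\Delta t)^n - 1\bigr) \leq (c/b)(e^{b t^n}-1) \leq c\, t^n e^{b t^n}$.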
(ii) Considering now the case $\lambda>0$, we have $$ \iint_{\mathbb{R}^d\times \mathbb{R}^d} \bigl(Z(s;t^n,x)-y \bigr)\,\gamma(dx,dy) = \int_{\mathbb{R}^d} \bigl(Z(s;t^n,x)-x\bigr)\rho(t^n,dx) + \int_{\mathbb{R}^d} x \rho(t^n,dx) - \sum_{J\in\mathbb{Z}^d} x_J \rho_J^n. $$ By conservation of the center of mass, see Lemma \ref{bounddismom} (i), we deduce that $$ \int_{\mathbb{R}^d} x \rho(t^n,dx) - \sum_{J\in\mathbb{Z}^d} x_J \rho_J^n = \int_{\mathbb{R}^d} x \rho^{ini}(dx) - \sum_{J\in\mathbb{Z}^d} x_J \rho_J^0 = 0, $$ since we have chosen the initial data such that $\rho^{ini}=\rho_{\Delta x}^0$. Using also the bound $|Z(s;t^n,x)-x| \leq w_\infty (s-t^n)$, we may bound the integrand in the last term of \eqref{interm2} by $w_\infty^2 \Delta t^2$, and hence the last term itself by $2\lambda w_\infty^2 \Delta t^3$. Moreover, using again Young's inequality and the estimate $|Z(s;t^n,x)-x| \leq w_\infty (s-t^n)$, we have, for any $\varepsilon>0$, $$ |x - y|^2 \leq (1+\varepsilon) \bigl|Z(s;t^n,x)-y \bigr|^2 + (1+\frac{1}{\varepsilon}) w_\infty^2 |s-t^n|^2. $$ It implies, for any $\varepsilon\in (0,1)$, \begin{align*} - \bigl|Z(s;t^n,x)-y \bigr|^2 & \leq - \frac{1}{1+\varepsilon} |x-y|^2 + \frac{1}{\varepsilon} w_\infty^2 |s-t^n|^2 \\ & \leq -(1-\varepsilon) |x-y|^2 + \frac{1}{\varepsilon} w_\infty^2 |s-t^n|^2. \end{align*} Thus we deduce that $$ - 2 \lambda \int_{t^n}^{t^{n+1}}\!\! \iint_{\mathbb{R}^d\times\mathbb{R}^d} \bigl|Z(s;t^n,x)-y \bigr|^2 \gamma(dx,dy) \leq -2 \lambda (1-\varepsilon) \Delta t D_n^2 + \frac 23 \frac{\lambda}{\varepsilon} w_\infty^2 \Delta t^3. $$ Injecting this latter inequality into \eqref{interm2} and taking $\varepsilon=\Delta t$, we deduce $$ D_{n+1}^2 \leq \bigl(1-2\lambda (1-\Delta t)\Delta t \bigr) D_n^2 + C \Delta t(\Delta t+\Delta x). $$ Hence, since $2\lambda (1-\Delta t)\Delta t<1$, we have by induction, recalling that $D_0=0$, $$ D_n^2 \leq C \Delta t(\Delta t+\Delta x) \sum_{k=0}^{n-1} \bigl(1-2\lambda (1-\Delta t)\Delta t \bigr)^{k} \leq \frac{C}{2(1-\Delta t)\lambda} (\Delta t+\Delta x).
$$ Using the assumption $\Delta t \leq 1/2$, we conclude the proof of Theorem \ref{TH} (ii) in the case $\rho^{ini}=\rho_{\Delta x}^0$. \vskip 2pt {\bf 4th step.} We are left with the case $\rho^{ini}\neq \rho_{\Delta x}^0$. Let us define $\rho'(t)=Z'(t)_\#\rho_{\Delta x}^0$, the exact solution with initial data $\rho_{\Delta x}^0$. From the triangle inequality, we have $$ d_W\bigl(\rho(t^n),\rho_{\Delta x}^n\bigr) \leq d_W\bigl(\rho(t^n),\rho'(t^n)\bigr) + d_W\bigl(\rho'(t^n),\rho_{\Delta x}^n\bigr). $$ The last term in the right-hand side may be estimated thanks to the above computations. For the first term in the right-hand side, we use the estimates in Theorem \ref{Exist} (we apply $(i)$ if $\lambda \leq 0$ and $(ii)$ if $\lambda >0$): \[ d_W\bigl(\rho(t^n),\rho'(t^n)\bigr) \leq e^{(\lambda)^{-} t^n} d_W\bigl(\rho^{ini},\rho_{\Delta x}^0\bigr), \] where $(\lambda)^{-}=\max(-\lambda,0)$ is the negative part of $\lambda$. Let us define $\tau:[0,1]\times \mathbb{R}^d \to \mathbb{R}^d$ by $\tau(\sigma,x) = \sigma x_J + (1-\sigma)x$, for $x \in C_J$. We have that $\tau(0,\cdot)=\mathrm{id}$ and $\tau(1,\cdot)_\# \rho^{ini} = \rho_{\Delta x}^0$. Then \begin{equation} \label{eq:wp:condition:initiale} \begin{split} d_W\bigl(\rho^{ini},\rho_{\Delta x}^0\bigr)^2 &\leq \displaystyle \int_{\mathbb{R}^d\times \mathbb{R}^d} |x-y|^2 \, \bigl[(\mathrm{id}\times\tau(1,\cdot))_\# \rho^{ini}\bigr](dx,dy) \\ & = \displaystyle \sum_{J\in \mathbb{Z}^d} \int_{C_J} |x-x_J|^2 \,\rho^{ini}(dx). \end{split} \end{equation} We deduce $d_W(\rho^{ini},\rho_{\Delta x}^0) \leq \Delta x$. Then, we get $$ d_W\bigl(\rho(t^n),\rho'(t^n)\bigr) \leq e^{(\lambda)^- t^n} \Delta x. $$ \section{Unstructured mesh} \label{sec:unstruct} We can extend our convergence result to more general meshes. For the sake of simplicity of the notation, we present the case of a triangular mesh in two dimensions. This approach can be easily extended to meshes made of simplices, in any dimension.
\subsection{Forward semi-Lagrangian scheme} Let us consider a triangular mesh ${\mathcal T} = (T_k)_{k\in \mathbb{Z}}$ with nodes $(x_i)_{i\in \mathbb{Z}}$. We assume this mesh to be conformal: A summit cannot belong to an open edge of the grid. The triangles $(T_k)_{k \in \mathbb{Z}}$ are assumed to satisfy $\bigcup_{k\in\mathbb{Z}} T_k = \mathbb{R}^2$ and $T_k \cap T_l = \emptyset$ if $k \neq l$ (in particular, the cells are not assumed here to be either closed or open). For any triangle $T$ with summits $x$, $y$, $z$, we will also use the notation $(x,y,z) = T$. We denote by ${\mathcal V}(T) = {\mathcal V}(x,y,z)$ the area of this triangle, and $h(T)$ its height (defined as the minimum of the three heights of the triangle $T$). We make the assumption that the mesh satisfies $\hbar:=\inf_{k\in \mathbb{Z}} h(T_k) >0$. For any node $x_i$, $i \in \mathbb{Z}$, we denote by $K(i)$ the set of indices indexing triangles that have $x_i$ as a summit, and we denote by ${\mathcal T}_i$ the set of all triangles of ${\mathcal T}$ that have $x_i$ as a summit: thus ${\mathcal T}_i = \{T_k ; k \in K(i) \}$. For any triangle $T_k$, $k \in \mathbb{Z}$, we denote by \[ I(k) = \{I_1(k), I_2(k), I_3(k)\} \] the set of indices indexing the summits of $T_k$ (for some arbitrary order, whose choice has no importance for the sequel). We consider the following scheme, which may be seen as a forward semi-Lagrangian scheme on the triangular mesh. \begin{itemize} \item For an initial distribution $\rho^{ini}$ of the PDE \eqref{EqInter}, define the probability weights $(\rho^0_{i})_{i \in \mathbb{Z}}$ through the following procedure: Consider the one-to-one mapping $\iota : \mathbb{Z} \ni k \mapsto \iota(k) \in \mathbb{Z}$ such that, for each $k \in \mathbb{Z}$, $x_{\iota(k)}$ is a node of the triangle $T_k$; $\iota$ is thus a way to associate a node with a cell; then, for all $i \in \mathbb{Z}$, let $\rho^0_{i} = \sum_{k:\iota(k) = i}\rho^{ini}(T_{k})$.
Observe from \eqref{eq:wp:condition:initiale} that $\rho^0_{\Delta x} = \sum_{j \in \mathbb{Z}} \rho^0_{j} \delta_{x_{j}}$ is an approximation of $\rho^{ini}$. \item Assume that, for a given $n \in \mathbb{N}$, we already have probability weights $(\rho_i^n)_{i\in\mathbb{Z}}$ such that $\rho^n_{\Delta x} = \sum_{j \in \mathbb{Z}} \rho^n_{j} \delta_{x_{j}}$ is an approximation of $\rho(t^n,\cdot)$, where $\rho$ is the solution to \eqref{EqInter} with $\rho^{ini}$ as initial condition. For $i \in \mathbb{Z}$, we let \[ a_i^n:=-\int_{\mathbb{R}^d} \widehat{\nabla W}(x_i-y)\,\rho_{\Delta x}^n(dy), \qquad \mbox{and} \qquad y_i^n:= x_i + a_i^n \Delta t. \] Under the CFL-like condition \begin{equation}\label{CFLT} w_\infty \Delta t \leq \hbar, \end{equation} $y_i^n$ belongs to one (and only one) of the elements of ${\mathcal T}_i$. We denote by $k_{i}^n$ the index of this triangle, namely $y_i^n\in T_{k_{i}^n}$. \item We use a linear splitting rule between the summits of the triangle $T_{k_i^n}$: the mass $\rho_i^n$ is sent to the three points $x_{I_1(k_i^n)}$, $x_{I_2(k_i^n)}$, $x_{I_3(k_i^n)}$ according to the {\em barycentric coordinates} of $y_i^n$ in the triangle. \end{itemize} \begin{center} \begin{tikzpicture} \draw (0,0) -- (5,0) -- (3,4) -- (0,0); \draw (0,0) node[below]{$x_i =x_{I_1(k_{i}^n)}$}; \draw (5,0) node[below]{$x_{I_2(k_{i}^n)}$}; \draw (3,4) node[above]{$x_{I_3(k_{i}^n)}$}; \draw (2.5,2) node[below]{$y_i^n$}; \draw [->] (0,0) -- (2.5,2); \draw [dashed] (5,0) -- (2.5,2) -- (3,4); \end{tikzpicture} \end{center} Let us make more precise the latter point. Let $T = (x,y,z) \in {\mathcal T}$, and $\xi \in T$. 
We define the barycentric coordinates of $\xi$ with respect to $x$, $y$ and $z$, $\lambda_{x}^{T}$, $\lambda_{y}^{T}$ and $\lambda_{z}^{T}$: \begin{equation}\label{lambdaetal} \lambda_{x}^{T}(\xi) = \frac{{\mathcal V}(\xi,y,z)}{{\mathcal V}(T)}, \quad \lambda_{y}^{T}(\xi) = \frac{{\mathcal V}(\xi,x,z)}{{\mathcal V}(T)}, \quad \lambda_{z}^{T}(\xi) = \frac{{\mathcal V}(\xi,x,y)}{{\mathcal V}(T)} \end{equation} and then have $\xi = \lambda_{x}^{T}(\xi) x + \lambda_{y}^{T}(\xi) y + \lambda_{z}^{T}(\xi) z$. Note also that $\lambda_{x}^{T}(\xi) + \lambda_{y}^{T}(\xi) + \lambda_{z}^{T}(\xi) = 1$. Therefore, we have the following fundamental property, which will be used in the sequel: \begin{equation} \label{fundapropo} \lambda_{x}^{T}(\xi) (x - \zeta) + \lambda_{y}^{T}(\xi) (y - \zeta) + \lambda_{z}^{T}(\xi) (z - \zeta) = \xi - \zeta, \end{equation} for any $\zeta \in \mathbb{R}^2$. In the same spirit as in Section \ref{sec:ordre}, we here define the interpolation weights by: for $j\in\mathbb{Z}$, and $y\in\mathbb{R}^2$, \begin{equation} \label{def:alphaT} \alpha_j(y) := \left\{\begin{array}{ll} \lambda_{x_j}^{T}(y), & \textrm{when} \ y\in T, \vspace{5pt} \\ 0, \ \ & \textrm{otherwise.} \end{array}\right. \end{equation} Then, the numerical scheme reads \begin{equation}\label{schemeT} \rho_j^{n+1} = \sum_{i\in \mathbb{Z}} \rho_i^n \alpha_{j}(x_i+a_i^n\Delta t), \qquad j \in \mathbb{Z}, \ n \in \mathbb{N}. \end{equation} We easily verify from \eqref{lambdaetal} and \eqref{fundapropo} that the interpolation weights satisfy: \begin{lemma}\label{propalphaT} Let $(\alpha_{j}(y))_{j \in \mathbb{Z},y \in \mathbb{R}^2}$ be defined as in \eqref{def:alphaT}. Then, for any $j \in \mathbb{Z}$ and $y \in \mathbb{R}^2$, $\alpha_{j}(y) \geq 0$. Moreover, for any $y \in \mathbb{R}^2$, $$ \sum_{j\in\mathbb{Z}} \alpha_j(y) = 1, \qquad \sum_{j\in\mathbb{Z}} x_j \alpha_j(y) = y.
$$ \end{lemma} \subsection{Convergence result} By the same token as in Section \ref{sec:ordre}, we can use Lemma \ref{propalphaT} and Theorem \ref{Exist} to prove that the numerical scheme \eqref{schemeT} is of order $1/2$: \begin{theorem} \label{TH2} Assume that $W$ satisfies hypotheses {\bf (A0)--(A3)}. For $\rho^{ini} \in {\mathcal P}_2(\mathbb{R}^d)$, let $(\rho(t))_{t \ge 0}$ be the unique measure solution to the aggregation equation with initial data $\rho^{ini}$, as given by Theorem \ref{Exist}. Let us also consider a triangular conformal mesh $(T_k)_{k\in \mathbb{Z}}$ with nodes $(x_j)_{j\in \mathbb{Z}}$ such that $\hbar = \inf_{k\in \mathbb{Z}} h(T_k) >0$ and the CFL condition \eqref{CFLT} holds true. We denote by $\Delta x$ the length of the longest edge of the mesh. Define $((\rho_j^n)_{j\in \mathbb{Z}})_{n \in {\mathbb N}}$ as in \eqref{schemeT} and let $$ \rho_{\Delta x}^n := \sum_{j\in \mathbb{Z}} \rho_j^n \delta_{x_j}, \quad n \in {\mathbb N}. $$ Then, there exists a nonnegative constant $C$, independent of the discretization parameters, such that, for all $n\in \mathbb{N}^*$, $$ d_W(\rho(t^n),\rho_{\Delta x}^n) \leq C e^{|\lambda|(1+\Delta t) t^n} \bigl( \sqrt{t^n \Delta x} + \Delta x \bigr). $$ \end{theorem} Importantly, we do not claim that $(ii)$ in the statement of Theorem \ref{TH} remains true in the framework of Theorem \ref{TH2}. Indeed, it would require proving that the support of the numerical solution remains included in a ball when the support of the initial condition is bounded. As made clear by the proof of Lemma \ref{lem:CFL:lambda:>0}, this latter fact depends on the geometry of the mesh. \section{Numerical illustrations} \label{sec:sim} We now address several numerical examples. In Subsection \ref{subsec:optimality}, we show that the rate of convergence established in Theorem \ref{TH} is optimal in a one-dimensional example. This prompts us to start with a short reminder on the Wasserstein distance in dimension $d=1$.
In Subsection \ref{subse:newtonian}, we provide several numerical examples in dimension $d=1$ for the Newtonian potential, whilst examples in dimension $d=2$ are handled in Subsection \ref{subsec:numerical:d2}. \subsection{Wasserstein distance in one dimension} The numerical computation of the Wasserstein distance between two probability measures in any dimension is generally quite difficult. However, in dimension $d=1$, there is an explicit expression of the Wasserstein distance and this allows for direct computations, including for numerical purposes, as shown in the pioneering work \cite{GT}. Indeed, any probability measure $\mu$ on the real line $\mathbb{R}$ can be described thanks to its cumulative distribution function $F(x)=\mu((-\infty,x])$, which is a right-continuous and non-decreasing function with $F(-\infty)=0$ and $F(+\infty)=1$. Then we can define the generalized inverse $Q_\mu$ of $F$ (or monotone rearrangement of $\mu$) by $Q_\mu(z)=F^{-1}(z):=\inf\{x\in \mathbb{R} : F(x) > z\}$; it is a right-continuous and non-decreasing function, defined on $[0,1)$. For every non-negative Borel-measurable map $\xi : \mathbb{R} \rightarrow \mathbb{R}$, we have $$ \int_\mathbb{R} \xi(x) \mu(dx) = \int_0^1 \xi(Q_\mu(z))\,dz. $$ In particular, $\mu\in {\mathcal P}_2(\mathbb{R})$ if and only if $Q_\mu\in L^2((0,1))$. Moreover, in the one-dimensional setting, there exists a unique optimal transport plan realizing the minimum in \eqref{defWp}. More precisely, if $\mu$ and $\nu$ belong to ${\mathcal P}_p(\mathbb{R})$, with monotone rearrangements $Q_\mu$ and $Q_\nu$, then $\Gamma_0(\mu,\nu)=\{(Q_\mu,Q_\nu)_\# {\mathbb{L}}_{(0,1)}\}$ where ${\mathbb{L}}_{(0,1)}$ is the restriction to $(0,1)$ of the Lebesgue measure.
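For a discrete measure, the generalized inverse and the change-of-variables identity above take only a few lines to implement. The following sketch (assuming numpy; the weights and support points are illustrative choices, not data from the simulations below) checks the identity for $\xi(x)=x^2$:

```python
import numpy as np

# Discrete measure mu = sum_j p_j delta_{x_j}; weights and support points
# are illustrative, not taken from the paper.
xj = np.array([-1.0, 0.5, 2.0])
pj = np.array([0.25, 0.25, 0.5])
cdf = np.cumsum(pj)                       # F(x_j) = mu((-infty, x_j])

def Q(z):
    """Generalized inverse Q_mu(z) = inf{x : F(x) > z}, right-continuous on [0, 1)."""
    return xj[np.searchsorted(cdf, z, side="right")]

xi = lambda s: s ** 2                     # a nonnegative Borel test function

lhs = np.sum(pj * xi(xj))                 # int_R xi(x) mu(dx)

# Midpoint discretization of int_0^1 xi(Q_mu(z)) dz; since Q_mu is piecewise
# constant, the quadrature is exact once the jumps of F are resolved.
z = (np.arange(10_000) + 0.5) / 10_000
rhs = np.mean(xi(Q(z)))
```

The `side="right"` convention in the binary search is what makes $Q_\mu$ right-continuous at the jump points of $F$.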
Then we have the explicit expression of the Wasserstein distance (see \cite{rachev,Villani2}) \begin{equation}\label{dWF-1} d_{W}(\mu,\nu) = \left(\int_0^1 |Q_\mu(z)-Q_\nu(z)|^2\,dz\right)^{1/2}, \end{equation} and the map $\mu \mapsto Q_\mu$ is an isometry between ${\mathcal P}_2(\mathbb{R})$ and the convex subset of (essentially) non-decreasing functions of $L^2([0,1))$. We will take advantage of this expression \eqref{dWF-1} of the Wasserstein distance in dimension 1 in our numerical simulations to estimate the numerical error of the upwind scheme \eqref{dis_num}. This scheme in dimension 1 on a Cartesian mesh reads, with time step $\Delta t$ and cell size $\Delta x$: \begin{equation} \label{scheme1D} \rho_j^{n+1} = \rho_j^n - \frac{\Delta t}{\Delta x}\left((a_{j}^n)^+ \rho_j^n -(a_{j+1}^n)^- \rho_{j+1}^n - (a_{j-1}^n)^+ \rho_{j-1}^n +(a_{j}^n)^- \rho_j^n \right). \end{equation} With this scheme, we define the probability measure $\rho_{\Delta x}^n =\sum_{j\in \mathbb{Z}} \rho_j^n\delta_{x_j}$. Then the generalized inverse of $\rho_{\Delta x}^n$, denoted by $Q_{\Delta x}^{n}$, is given by \begin{equation} \label{Fdelta} Q_{\Delta x}^{n}(z) = x_{j+1}, \qquad \mbox{for } z\in \Big[\sum_{k\leq j}\rho_{k}^n,\sum_{k\leq j+1}\rho_{k}^n\Big). \end{equation} \subsection{Optimality of the order of convergence} \label{subsec:optimality} Thanks to formula \eqref{dWF-1} in dimension $d=1$, we can verify numerically the optimality of our result. Let us consider the potential $W(x)=2x^2$ for $|x|\leq 1$ and $W(x)=4|x|-2$ for $|x|> 1$; such a potential verifies our assumptions {\bf (A0)--(A3)} with $\lambda=0$. We choose the initial datum $\rho^{ini}=\frac 12 \delta_{-x_0}+\frac 12 \delta_{x_0}$ with $x_0=0.25$. Then the solution to the aggregation equation \eqref{EqInter} is given by $$ \rho(t) = \frac 12 \delta_{-x_0(t)} + \frac 12 \delta_{x_0(t)}, \qquad x_0(t)=\frac 14 e^{-4t}, \qquad t \ge 0. 
$$ The generalized inverse $Q_{\rho}(t,\cdot)= Q_{\rho(t)}$ of $\rho(t)$ is given, for $z\in [0,1)$, by $Q_{\rho}(t,z) = -x_0(t)$ if $z\in [0,1/2)$, and $Q_{\rho}(t,z) = x_0(t)$ if $z\in [1/2,1)$. Therefore, letting $u_j^n:=\sum_{k\leq j} \rho_k^n$ for $j \in \mathbb{Z}$, we can easily compute the error at time $t^n=n\Delta t$ by means of the two formulas \eqref{dWF-1}--\eqref{Fdelta}: $$ e_n:=d_W\bigl(\rho(t^n),\rho_{\Delta x}^n\bigr) = \sum_{k\in\mathbb{Z}} \int_{u_{k-1}^n}^{u_k^n} |x_{k} - Q_{\rho}(t^n,z)|dz. $$ We then define the numerical error as $e:=\max_{n\leq T/\Delta t} e_n$. We display in Figure \ref{fig:error} the numerical error with respect to the number of nodes in logarithmic scale, as computed with the above procedure (the time steps being chosen in such a way that the ratio \eqref{CFL} in the CFL condition is kept constant). We observe that the computed numerical error is of order $1/2$. \begin{figure}[!ht] \centering \includegraphics[width=8cm,height=6.5cm]{error_upw_W2_reg.pdf} \caption{Numerical error with respect to the number of nodes in logarithmic scale for the upwind scheme in Wasserstein distance for the potential $W$ defined by $W(x)=2x^2$ for $|x|\leq 1$ and $W(x)=4|x|-2$ for $|x|>1$, and an initial datum given by the sum of two Dirac deltas.} \label{fig:error} \end{figure} \subsection{Newtonian potential in one dimension} \label{subse:newtonian} An interesting and illustrative example is the Newtonian potential in dimension $d=1$. Let us indeed consider the case $W(x)=|x|$ and an initial datum given by the sum of two masses located at points $x_{i_1}$ and $x_{i_2}$ of the grid mesh, namely $\rho^{ini}=\frac 12 \delta_{x_{i_1}}+\frac 12 \delta_{x_{i_2}}$, with say $x_{i_1}<x_{i_2}$.
The solution of the aggregation equation in Theorem \ref{Exist} is given by $\rho(t) =\frac 12 \delta_{x_1(t)}+ \frac 12 \delta_{x_2(t)}$, where $$ x_1(t) = x_{i_1} + \frac{t}{2}, \qquad x_2(t) = x_{i_2} - \frac{t}{2}, \qquad \mbox{for } \ t<x_{i_2}-x_{i_1}. $$ Indeed, recalling definition \eqref{achapo}, we have, for $t<x_{i_2}-x_{i_1}$: $$ \widehat{a}_{\rho}(t,x) = \left\{ \begin{array}{cl} 1, \qquad & \mbox{ if } x<x_1(t), \\[1mm] \frac 12, \qquad & \mbox{ if } x=x_1(t), \\[1mm] 0, \qquad & \mbox{ if } x_1(t)<x<x_2(t), \\[1mm] -\frac 12, \qquad & \mbox{ if } x=x_2(t), \\[1mm] - 1, \qquad & \mbox{ if } x>x_2(t). \end{array}\right. $$ At $t= x_{i_2}-x_{i_1}$, the two particles collapse, then for $t\geq x_{i_2}-x_{i_1}$, we have $\rho(t)=\delta_{\frac 12 (x_{i_1}+x_{i_2})}$. \\ {\bf Standard finite volume upwind scheme.} This simple example explains why we have chosen the scheme \eqref{scheme1D} instead of the standard finite volume upwind scheme introduced in Subsection \ref{subse:potential:nonincreasing}. In dimension $d=1$ and on a Cartesian grid, this latter one reads \begin{equation}\label{badscheme} \rho_i^{n+1} = \rho_i^n - \frac{\Delta t}{\Delta x} \left((a_{i+1/2}^n)^+ \rho_i^n -(a_{i+1/2}^n)^- \rho_{i+1}^n - (a_{i-1/2}^n)^+ \rho_{i-1}^n + (a_{i-1/2}^n)^- \rho_{i}^n \right), \end{equation} where $a_{i+1/2}^n=-\sum_{k\in\mathbb{Z}} \rho_k^n \mbox{ sign} (x_{i+1/2}-x_k)$. Assume indeed that, at time $t^n$, for some $n \in \mathbb{N}$, we have obtained the approximation $\rho_i^n = 0$ for $i\in\mathbb{Z}\setminus\{i_1,i_2\}$, and $\rho_{i_1}^n=\rho_{i_2}^n=1/2$. We then compute $$ a_{i+1/2}^n = \left\{ \begin{array}{cl} 1, \quad &\mbox{for } i<i_1\\ 0, \quad &\mbox{for } i_1\leq i < i_2\\ -1, \quad &\mbox{for } i\geq i_2. \end{array} \right. 
$$ So, when applying the upwind scheme for $i\in \{i_1-1,i_1,i_1+1\}$, we get $$ \begin{array}{l} \displaystyle \rho_{i_1-1}^{n+1} = \rho_{i_1-1}^n - \frac{\Delta t}{\Delta x}\left(\rho_{i_1-1}^n-\rho_{i_1-2}^n\right) = 0, \\[2mm] \displaystyle \rho_{i_1}^{n+1} = \rho_{i_1}^n + \frac{\Delta t}{\Delta x}\rho_{i_1-1}^n = \rho_{i_1}^n, \\[2mm] \displaystyle \rho_{i_1+1}^{n+1} = \rho_{i_1+1}^n = 0. \\ \end{array} $$ Doing the same computation for $i\in \{i_2-1,i_2,i_2+1\}$, we deduce that $\rho^{n+1}=\rho^n$. Thus the above upwind scheme may not be able to capture the correct dynamics of Dirac deltas. The above computation is illustrated by the numerical results in Figure \ref{fig:wrong}, where a comparison between the numerical results obtained with \eqref{badscheme} (left) and with \eqref{scheme1D} (right) is displayed. We observe that the Dirac deltas are stationary when using the scheme \eqref{badscheme}, whereas the scheme \eqref{scheme1D} captures the right dynamics. Another interesting numerical illustration of this phenomenon is provided by Figure \ref{fig:wrong_exp}. In this example, we choose the potential $W(x)=1-e^{-2|x|}$, which is $-4$-convex, and a smooth initial datum given by the sum of two Gaussian functions: $\rho^{ini}(x) = \frac{1}{M}(e^{-20(x-0.5)^2} + e^{-20(x+0.5)^2})$, {where $M=\|\rho^{ini}\|_{L^1}$ is a normalization coefficient.} With this choice, we observe that the solution blows up quickly. Dirac deltas appear in finite time and, as observed above, the scheme \eqref{badscheme} (Fig. \ref{fig:wrong_exp}-left) fails to capture the dynamics after the blow-up time, whilst the scheme \eqref{scheme1D} (Fig. \ref{fig:wrong_exp}-right) succeeds in doing so. For these numerical simulations, the numerical spatial domain is $[-1.25,1.25]$; it is discretized with a uniform Cartesian grid of $800$ nodes, and the ratio in the CFL condition \eqref{CFL} is $1/2$.
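The two-Dirac computation above is easy to reproduce in a few lines. The following sketch (assuming numpy; the grid, the indices $i_1,i_2$ and the number of time steps are illustrative choices, not the parameters used for the figures) implements both the interface-velocity scheme \eqref{badscheme} and the node-velocity scheme \eqref{scheme1D} for $W(x)=|x|$, and checks that the former freezes the two Dirac masses while the latter moves them towards each other:

```python
import numpy as np

# Two upwind schemes for the 1D aggregation equation with W(x) = |x|:
#  - interface scheme (badscheme): velocities a_{i+1/2} at cell interfaces,
#  - node scheme (scheme1D): velocities a_i at nodes, with sign(0) = 0
#    (the \widehat{} convention).  Grid and time parameters are illustrative.
N = 101
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.25 * dx                       # w_inf = 1, so w_inf * dt <= dx / 2

i1, i2 = 30, 70
rho0 = np.zeros(N)
rho0[i1] = rho0[i2] = 0.5            # two Dirac masses on the grid

def a_nodes(rho):
    # a_i = - sum_k rho_k sign(x_i - x_k); np.sign(0) = 0 gives the hat convention
    return -(np.sign(x[:, None] - x[None, :]) * rho[None, :]).sum(axis=1)

def a_interfaces(rho):
    xi = x[:-1] + 0.5 * dx           # interfaces x_{i+1/2}, i = 0..N-2
    return -(np.sign(xi[:, None] - x[None, :]) * rho[None, :]).sum(axis=1)

def step_node(rho):                  # scheme (scheme1D)
    a = a_nodes(rho)
    ap, am = np.maximum(a, 0.0), np.maximum(-a, 0.0)   # (a)^+ and (a)^-
    out = rho - dt / dx * (ap + am) * rho              # outflow |a_j| rho_j
    out[1:] += dt / dx * (ap * rho)[:-1]               # inflow from the left
    out[:-1] += dt / dx * (am * rho)[1:]               # inflow from the right
    return out

def step_interface(rho):             # scheme (badscheme)
    a = a_interfaces(rho)            # a[i] = a_{i+1/2}
    ap, am = np.maximum(a, 0.0), np.maximum(-a, 0.0)
    out = rho.copy()
    out[:-1] -= dt / dx * ap * rho[:-1]   # outflow to the right through x_{i+1/2}
    out[1:] += dt / dx * ap * rho[:-1]
    out[1:] -= dt / dx * am * rho[1:]     # outflow to the left through x_{i+1/2}
    out[:-1] += dt / dx * am * rho[1:]
    return out

spread = lambda rho: np.sum(rho * np.abs(x - np.sum(rho * x)))

rho_bad, rho_node = rho0.copy(), rho0.copy()
for n in range(200):
    rho_bad = step_interface(rho_bad)
    rho_node = step_node(rho_node)
```

Running this, `rho_bad` stays exactly equal to the initial datum (the interface velocities vanish between the two masses), while the node scheme conserves the total mass, stays nonnegative under the CFL condition, and contracts the support.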
\\ \begin{figure}[!ht] \centering\includegraphics[width=8cm,height=6.5cm]{wrong_upwind.pdf} \includegraphics[width=8cm,height=6.5cm]{good_upwind.pdf} \caption{Numerical result for the one dimensional aggregation equation with $W(x)=|x|$ and an initial datum given by two Dirac deltas. Left: Result obtained with the standard upwind scheme \eqref{badscheme} with a velocity computed at the interfaces of the mesh. Right: Result with the scheme \eqref{scheme1D}. As already emphasized in Example \ref{ex1D}, this shows once again that great care must be paid to the choice of the scheme in order to recover the correct dynamics of Dirac deltas.} \label{fig:wrong} \end{figure} \begin{figure}[!ht] \centering\includegraphics[width=8cm,height=6.5cm]{wrong_upwind_exp_norm.pdf} \includegraphics[width=8cm,height=6.5cm]{good_upwind_exp_norm.pdf} \caption{Numerical result for the one dimensional aggregation equation with $W(x)=1-e^{-2|x|}$ and an initial datum given by the sum of two Gaussian functions. Left: Result obtained with the standard upwind scheme \eqref{badscheme} with a velocity computed at the interfaces of the mesh. Right: Result with the scheme \eqref{scheme1D}. As in Fig. \ref{fig:wrong}, the upwind scheme \eqref{badscheme} does not capture the right dynamics of the Dirac deltas after the blow-up time.} \label{fig:wrong_exp} \end{figure} {\bf Comparison with Burgers-Hopf equation.} Considering the potential $W(x)=\frac 12 |x|$, it has been proved in \cite{GF_dual} (see also \cite{bonaschi}) that the following equivalence holds true: $\rho$ is the solution in Theorem \ref{Exist} if and only if $u=-W'*\rho$ is the entropy solution of the Burgers-Hopf equation $\partial_t u+\frac 12 \partial_x u^2 = 0$. Let $(\rho^n_i)_{i \in \mathbb{Z},n \in \mathbb{N}}$ be given by the scheme \eqref{dis_num}--\eqref{def:aij}. By conservation of the total mass, see Lemma \ref{bounddismom}, we have $\sum_{k\in \mathbb{Z}} \rho_k^n=1$.
Introducing \begin{equation*} u_i^n := \frac 12 - \sum_{k\leq i} \rho_k^n, \quad i \in \mathbb{Z}, \ n \in \mathbb{N}, \end{equation*} we deduce, by summing \eqref{dis_num} and by using the fact that $\rho_i^n=-(u_i^n-u_{i-1}^n)$, that the family $(u_i^n)_{i \in \mathbb{Z},n \in \mathbb{N}}$ satisfies: \begin{equation} \label{eq:scheme:burgers} u_i^{n+1} = u_i^n -\frac{\Delta t}{\Delta x} \bigl((a_i^n)^+ (u_i^n - u_{i-1}^n) -(a_{i+1}^n)^- (u_{i+1}^n-u_i^n) \bigr), \end{equation} where, with \eqref{def:aij}, we have $$ a_i^n = -\frac 12 \sum_{k\neq i} \rho_k^n \mbox{sign }(x_i-x_k). $$ Then \begin{equation*} a_i^n = -\frac 12 \biggl(\sum_{k<i} \rho_k^n - \sum_{k>i} \rho_k^n \biggr) = -\frac 12 \biggl(\sum_{k<i} \rho_k^n - 1 + \sum_{k\leq i} \rho_k^n \biggr) = \frac 12 (u_{i-1}^n + u_i^n). \end{equation*} Moreover, as $\rho_i^n$ remains nonnegative under the CFL condition (see Lemma \ref{lem:CFL}), $u_i^n - u_{i-1}^n = -\rho_i^n \leq 0$, so that $$ (a_i^n)^+ (u_i^n-u_{i-1}^n) = -\left(a_i^n (u_i^n-u_{i-1}^n)\right)^- = -\frac 12\left( (u_i^n)^2-(u_{i-1}^n)^2\right)^-. $$ Similarly, we get $$ (a_{i+1}^n)^- (u_{i+1}^n-u_{i}^n) = -\left(a_{i+1}^n (u_{i+1}^n-u_{i}^n)\right)^+ = -\frac 12\left( (u_{i+1}^n)^2-(u_{i}^n)^2\right)^+, $$ so that the scheme \eqref{eq:scheme:burgers} for $u$ finally rewrites \begin{equation}\label{schemeBurgers} u_i^{n+1} = u_i^n -\frac{\Delta t}{2\Delta x} \Big( ((u_{i+1}^n)^2-(u_i^n)^2)^- -((u_i^n)^2 - (u_{i-1}^n)^2)^+ \Big). \end{equation} Then we may apply the main result of this paper and deduce the convergence at order $1/2$ of the above scheme: \begin{lemma} Let $u^{ini}$ be given in $BV(\mathbb{R})$ such that $\partial_xu^{ini} \leq 0$ and $TV(u^{ini})=1$. Define the family $(u^n_{i})_{i \in \mathbb{Z},n\in\mathbb{N}}$ by means of \eqref{schemeBurgers}, with the initial data $u_i^{0} := \frac 12 + \partial_x u^{ini}(-\infty,x_{i+\frac 12})$, and let $u_{\Delta x}^n:=\sum_{i\in\mathbb{Z}} u_i^n \mathbf{1}_{[x_i,x_{i+1})}$.
Let $u$ be the entropy solution to the Burgers equation $\partial_t u + \frac 12 \partial_x u^2=0$ with $u^{ini}$ as initial condition. Then, there exists $C\geq 0$, independent of the discretization parameters, such that if the CFL condition $\Delta t < \Delta x$ is satisfied, one has $$ \|u(t^n)-u^n_{\Delta x}\|_{L^1} \leq C \bigl( \sqrt{t^n \Delta x} + \Delta x \bigr). $$ \end{lemma} \begin{remark} We do not claim that the scheme converges for any initial datum of the Cauchy problem for the Burgers equation (and actually it does not). The convergence result above only applies to a non-increasing initial condition belonging to $[-1/2, 1/2]$. \\ Note that this scheme is not conservative, but, surprisingly (see \cite{Legloc}), this does not prevent it from converging toward the right solution. \end{remark} \begin{proof} First, remark that the CFL condition required here is $w_\infty \Delta t < \frac 12 \Delta x$, with $w_\infty = 1/2$ as $W(x) = \frac{1}{2} |x|$. \\ The entropy solution $u$ of the Burgers equation with a nonincreasing $BV$ initial datum is a nonincreasing $BV$ function. By the Cauchy--Schwarz inequality, we have $$ \int_{0}^{1} |Q_{\rho(t^n)}(z) - Q_{\rho^n_{\Delta x}}(z)|\,dz \leq \| Q_{\rho(t^n)} - Q_{\rho^n_{\Delta x}}\|_{L^2(0,1)} = d_W\bigl(\rho(t^n),\rho_{\Delta x}^n\bigr), $$ where $(\rho(t))_{t \ge 0}$ is the solution of \eqref{EqInter}, with $W(x) = \frac{1}{2}|x|$ as before and $\rho^{ini}=-\partial_{x} u^{ini}$ as initial condition, and $(\rho^n_{\Delta x})_{n \geq 0}$ is the numerical solution obtained by Scheme \eqref{dis_num} with $d = 1$ together with initial condition \eqref{disrho0} (the numerical solution whose convergence at order $1/2$ is stated in Theorem \ref{TH}). Observing that $W$ is convex, we apply Theorem \ref{TH} with $\lambda =0$. We obtain $$ \int_{0}^{1} |Q_{\rho(t^n)}(z) - Q_{\rho^n_{\Delta x}}(z)|\,dz \leq d_W(\rho(t^n),\rho_{\Delta x}^n) \leq C \bigl( \sqrt{t^n \Delta x} + \Delta x \bigr).
$$ The claim follows provided we prove that \begin{equation}\label{eq:integral} \int_{\mathbb{R}} |u(t^n,x)-u^n_{\Delta x}(x)|\,dx = \int_{0}^{1} |Q_{\rho(t^n)}(z) - Q_{\rho^n_{\Delta x}}(z)|\,dz. \end{equation} In order to prove \eqref{eq:integral}, we notice that, from a geometrical point of view, the left hand side of equality \eqref{eq:integral} corresponds to the area between the curves $x\mapsto u(t^n,x)$ and $x\mapsto u^n_{\Delta x}(x)$. Also, the right hand side is a measure of the area between their generalized inverses. However, the graph of the pseudo-inverse of a function may be obtained by flipping the graph of the function with respect to the diagonal. Since this operation conserves the area, we deduce that both areas are equal, that is \eqref{eq:integral} holds. Another way to prove the identity \eqref{eq:integral} is to observe that the solution $u$ of the Burgers-Hopf equation reads: \begin{equation*} u(t,x) = \frac12\bigl[ \rho\bigl(t,(x,+\infty)\bigr) - \rho\bigl(t,(-\infty,x)\bigr) \bigr], \quad t \geq 0, \ x \in {\mathbb R}, \end{equation*} where $\rho$ is the solution in Theorem \ref{Exist}. In fact, as the number of points $x$ for which $\rho(t,\{x\})>0$ is at most countable for any given $t >0$, we have the almost everywhere equality: \begin{equation*} u(t,x) = \rho\bigl(t,(x,+\infty)\bigr) - \frac12. \end{equation*} Similarly, \begin{equation*} \begin{split} u^n_{\Delta x}(x) &= \sum_{i \in {\mathbb Z}} u^n_{i} {\mathbf 1}_{[x_{i},x_{i+1})}(x) = \frac12 - \sum_{i \in {\mathbb Z}} {\mathbf 1}_{[x_{i},x_{i+1})}(x) \sum_{k \leq i} \rho^n_{k} \\ &= \frac12 - \sum_{i \in {\mathbb Z}} {\mathbf 1}_{[x_{i},x_{i+1})}(x) \rho^n_{\Delta x}((-\infty,x_{i}]) = \frac12 - \rho^n_{\Delta x}\bigl((-\infty,x]\bigr) = \rho^n_{\Delta x}\bigl((x,+\infty)\bigr) - \frac12.
\end{split} \end{equation*} So, to complete the proof, it suffices to use the fact that, for any two probability measures $\mu$ and $\mu'$ on $\mathbb{R}$, \begin{equation*} \begin{split} \int_{\mathbb{R}} \bigl\vert \mu\bigl((x,+\infty)\bigr)- \mu'\bigl((x,+\infty)\bigr) \bigr\vert dx &= \int_{0}^1 \vert Q_{\mu}(z) - Q_{\mu'}(z) \vert dz, \end{split} \end{equation*} see \cite[Theorems 2.9 and 2.10]{bobkov:ledoux}, noticing that the function $Q_{\mu}$ we use here is the right continuous version of the quantile function used in \cite{bobkov:ledoux}. \end{proof} \subsection{Numerical simulation in two dimensions} \label{subsec:numerical:d2} \begin{figure}[ht!] \includegraphics[width=.51\textwidth]{W2upw_bump0_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_bump1_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_bump2_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_bump3_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_bump4_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_bump5_norm.pdf} \caption{Time dynamics of the numerical solution of the aggregation equation \eqref{EqInter} with {$W(x)=W_1(x) = 1-e^{-5|x|}$} and an initial datum given by the sum of three bumps. Time increases from top left to bottom right.} \label{bump2D} \end{figure} \begin{figure}[ht!] \includegraphics[width=.51\textwidth]{W1upw_bump0_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_bump1_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_bump2_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_bump3_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_bump4_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_bump5_norm.pdf} \caption{Time dynamics of the numerical solution of the aggregation equation \eqref{EqInter} with $W(x)=W_2(x) = 5|x|$ and an initial datum given by the sum of three bumps. Time increases from top left to bottom right.} \label{bump2Dbis} \end{figure} \begin{figure}[ht!] 
\includegraphics[width=.51\textwidth]{W2upw_sq0_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_sq1_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_sq2_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_sq3_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_sq4_norm.pdf} \includegraphics[width=.51\textwidth]{W2upw_sq5_norm.pdf} \caption{Time dynamics of the numerical solution of the aggregation equation \eqref{EqInter} with {$W(x)=W_1(x) = 1-e^{-5|x|}$} and an initial datum given by a square. Time increases from top left to bottom right.} \label{sq2D} \end{figure} \begin{figure}[ht!] \includegraphics[width=.51\textwidth]{W1upw_sq0_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_sq1_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_sq2_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_sq3_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_sq4_norm.pdf} \includegraphics[width=.51\textwidth]{W1upw_sq5_norm.pdf} \caption{Time dynamics of the numerical solution of the aggregation equation \eqref{EqInter} with {$W(x)=W_2(x) = 5|x|$} and an initial datum given by a square. Time increases from top left to bottom right.} \label{sq2Dbis} \end{figure} As an illustration, we now propose a numerical example in two dimensions. The spatial domain is the square $[0,1]\times[0,1]$; it is discretized with $N_x=70$ nodes in the $x$-direction and $N_y=70$ nodes in the $y$-direction; we take a time step $\Delta t=10^{-3}$. We consider two different initial data: the sum of three bumps (as in \cite{CJLV}) \begin{equation*} \begin{split} &\rho^{ini}(x) \\ &\hspace{5pt} = \frac{1}{M}\left(e^{-100((x_1-0.25)^2+(x_2-0.3)^2)}+e^{-100((x_1-0.77)^2+(x_2-0.7)^2)} + 0.9 e^{-100((x_1-0.37)^2+(x_2-0.62)^2)}\right), \end{split} \end{equation*} where $M$ is a normalization constant such that $\|\rho^{ini}\|_{L^1}=1$; and an initial density with a square shape $$ \rho^{ini}(x)=5\times\mathbf{1}_{[0.2,0.8]\times[0.2,0.8]\setminus [0.3,0.7]\times[0.3,0.7]}.
$$ With these numerical data, we compare the numerical results between the two potentials {$W_1(x)=1-e^{-5|x|}$ and $W_2(x)=5|x|$.} For $|x|$ close to $0$, we have that $\nabla W_1 \sim \nabla W_2$. Thus the short-range interaction is similar for both potentials, but the long-range interaction is different. The numerical results are displayed in Figures \ref{bump2D} and \ref{sq2D} for the potential {$W_1(x)=1-e^{-5|x|}$} and in Figures \ref{bump2Dbis} and \ref{sq2Dbis} for the potential {$W_2(x)=5|x|$}. In each case, we observe, as expected, the aggregation in finite time of $\rho$ towards a Dirac delta. Indeed it has been proved in \cite{Carrillo} that when the initial datum is compactly supported, solutions converge towards a Dirac delta in finite time. We also observe that the time dynamics during this step of concentration is different between potentials $W_1$ and $W_2$. The case with an initial datum with three bumps has been implemented in \cite{CJLV} with a Lax-Friedrichs scheme. We obtain here similar results but we observe a smaller numerical diffusion. We can then make similar observations when comparing the two potentials $W_1$ and $W_2$. For the potential $W_1$, we observe that each bump coalesces into a Dirac delta, then the three remaining Dirac deltas merge into a single Dirac delta (see Fig~\ref{bump2D}). For the potential $W_2$, the solution seems to be more regular and Dirac deltas seem to appear at larger times (see Fig~\ref{bump2Dbis}). For the initial datum with a square shape, the density $\rho$ keeps, for both potentials, a shape similar to the initial square shape which tightens as time increases. However with the potential $W_1$ (Fig~\ref{sq2D}), we notice a strong concentration at the corners of the square, whereas in the case of the potential $W_2$ (Fig~\ref{sq2Dbis}) the density is homogeneous along the edges of the square with a slight concentration in the middle of the edges.
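A compact version of such a two-dimensional experiment can be sketched as follows (assuming numpy; a coarser grid, a two-bump initial datum and the potential $W_2$ are used here so that the run stays cheap, so these parameters differ from those of the figures). The velocities are evaluated at the nodes with the hat convention $\widehat{\nabla W}(0)=0$, and the upwinding is applied dimension by dimension:

```python
import numpy as np

# Node-velocity upwind scheme, dimension by dimension, for the 2D
# aggregation equation with W(x) = 5|x| (the potential W_2 above).
N = 30
xs = np.linspace(0.0, 1.0, N)
dx = xs[1] - xs[0]
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
pts = np.stack([X1.ravel(), X2.ravel()], axis=1)

# Initial datum: two normalized Gaussian bumps (illustrative).
rho = np.exp(-100 * ((X1 - 0.35) ** 2 + (X2 - 0.5) ** 2)) \
    + np.exp(-100 * ((X1 - 0.65) ** 2 + (X2 - 0.5) ** 2))
rho /= rho.sum()

def velocity(rho):
    """a_i = -sum_k rho_k grad W(x_i - x_k), with grad W(0) := 0."""
    diff = pts[:, None, :] - pts[None, :, :]
    r = np.linalg.norm(diff, axis=2)[..., None]
    with np.errstate(invalid="ignore", divide="ignore"):
        gradW = np.where(r > 0.0, 5.0 * diff / r, 0.0)
    a = -(gradW * rho.ravel()[None, :, None]).sum(axis=1)
    return a[:, 0].reshape(N, N), a[:, 1].reshape(N, N)

def second_moment(rho):
    m1, m2 = (rho * X1).sum(), (rho * X2).sum()
    return (rho * ((X1 - m1) ** 2 + (X2 - m2) ** 2)).sum()

dt = 1e-3                        # here dt/dx * max(|a1| + |a2|) < 1
m2_start = second_moment(rho)
for n in range(100):
    a1, a2 = velocity(rho)
    flux = np.zeros_like(rho)
    for a, ax in ((a1, 0), (a2, 1)):
        ap, am = np.maximum(a, 0.0), np.minimum(a, 0.0)
        flux += (ap - am) * rho                  # outflow |a_j| rho_j
        flux += np.roll(am * rho, -1, axis=ax)   # inflow from the right
        flux -= np.roll(ap * rho, 1, axis=ax)    # inflow from the left
    rho = rho - dt / dx * flux
m2_end = second_moment(rho)
```

Mass is conserved exactly, positivity is preserved under the CFL restriction, and the second moment about the center of mass decreases as the two bumps aggregate.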
\bigskip {\bf Acknowledgements.} The authors acknowledge partial support from the French ``ANR blanche'' project Kibord: ANR-13-BS01-0004, as well as from the ``BQR Accueil EC 2017'' grant from Universit\'e Lyon 1. \bigskip
\section{Introduction} \label{sec:introduction} The Wigner function~\cite{wigner32,hillery84,case08,curtright14} provides a particularly useful visual representation of the state of a bosonic single mode quantum system as a real valued function on the two-dimensional system phase space. Integrating the Wigner function with respect to the system position coordinate $x$ gives the marginal probability density in the momentum coordinate $p$ (and vice versa); more generally, integrating the Wigner function with respect to any quadrature phase coordinate $X_1=x\cos(\theta)+p\sin(\theta)$ gives the marginal probability density in the complementary quadrature phase coordinate $X_2=-x\sin(\theta)+p\cos(\theta)$. In terms of the Wigner function, the quantum expectation value of a (Weyl ordered) observable $A(x,p)$ is evaluated in exactly the same way as for the corresponding classical system described by a phase space probability distribution function. Furthermore, the master equation that describes the open quantum system dynamics gets mapped to a partial differential equation for the Wigner function dynamics that closely resembles the Fokker Planck equation for the classical system statistical dynamics. Given the close resemblance between the Wigner function representation of the quantum system dynamical equations and the corresponding classical statistical dynamical equations, the Wigner function has helped provide an understanding of how classical dynamics arises by approximation from the underlying quantum dynamics~\cite{zurek94,kohler98,habib98,monteoliva01,habib02,everitt05,dykman07,greenbaum07,katz08,stobinska08}. Of particular interest in this respect are nonlinear single mode systems such as the driven, damped Duffing oscillator. 
A number of investigations have employed the Wigner function representation to explore the resulting quantum phase space dynamics in parameter regimes where the corresponding classical nonlinear dynamics exhibits, for example, bistability or chaos~\cite{habib98,habib02,dykman07,greenbaum07,katz08}. By varying the system damping and noise (diffusion) due to coupling to the environment, the quantum to classical transition can be explored in a controllable and visually direct way by comparing the corresponding quantum Wigner function phase space and classical phase space pictures. However, the Wigner function can take negative values and so is not a true probability distribution, despite the properties mentioned above. The presence of regions in phase space where the Wigner function is negative is conventionally interpreted as a signature of nonclassicality in the quantum state; with the exception of Gaussian (i.e., coherent and squeezed) states, {\it all} pure states have negative-valued Wigner function regions and hence are nonclassical~\cite{hudson74}. Well-known examples are the harmonic oscillator energy eigenstates or Fock states and so-called Schr\"{o}dinger cat states involving superpositions of different coherent states. As with the above-mentioned quantum-classical correspondence investigations, the Wigner function has served as a very effective visual tool in the investigation of the generation and detection of nonclassical states of bosonic, single as well as few mode quantum systems. A range of investigations have been carried out involving optical~\cite{bimbard10,yoshikawa13}, microwave cavity~\cite{deleglise08} and superconducting circuit systems~\cite{hofheinz09,wang09,mallet11,eichler11,eichler12,shalibo12,shalibo13,kirchmair13,vlastakis13,wang16}, as well as nanomechanical systems~\cite{katz08,rips12,nation13,rips14,vanner15,abdi16}.
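Both the marginal property and the Wigner negativity of a Fock state can be seen in a few lines. The following is a minimal numerical sketch (not code from any of the cited works), in units with $m=\omega=1$ and $\hbar=1$, using the standard closed form of the $N=1$ Fock-state Wigner function; integrating it over $p$ must recover the position probability density $|\psi_1(x)|^2$:

```python
import numpy as np

# Wigner function of the harmonic-oscillator Fock state |N=1> (m = omega = 1):
#   W_1(x, p) = -(1/(pi hbar)) exp(-(x^2+p^2)/hbar) (1 - 2(x^2+p^2)/hbar).
hbar = 1.0
x = np.linspace(-6.0, 6.0, 401)
p = np.linspace(-6.0, 6.0, 401)
X, P = np.meshgrid(x, p, indexing="ij")

R2 = (X ** 2 + P ** 2) / hbar
W = -np.exp(-R2) * (1.0 - 2.0 * R2) / (np.pi * hbar)

# Marginal over p (simple Riemann sum; the integrand decays fast enough
# that this is extremely accurate on this grid).
marginal_x = W.sum(axis=1) * (p[1] - p[0])

# Exact position density |psi_1(x)|^2 of the N = 1 Fock state.
psi1_sq = (2.0 * x ** 2 / hbar) * np.exp(-x ** 2 / hbar) / np.sqrt(np.pi * hbar)

norm = marginal_x.sum() * (x[1] - x[0])   # total probability, should be 1
```

The marginal is everywhere nonnegative even though $W$ itself dips to $-1/(\pi\hbar)$ at the origin, illustrating how negativity is compatible with well-defined quadrature probability densities.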
A sought after goal for such bosonic mode systems is to generate macroscopic quantum states that are stable over long times against the decohering effects of the environment -- possibly even under `warm' conditions where the environment temperature is large compared to the characteristic frequency scale of the system dynamics. Stabilized macroscopic quantum states are useful not only for quantum information processing applications, but also for fundamental explorations, especially concerning how macroscopic a quantum state can be in the presence of unavoidable decohering environments. By `macroscopic', we mean that the average photon or phonon number of the stabilized system quantum state is large, while by `quantum' we mean that the Wigner function representation of the state has significant negative regions in the system phase space. How large can a negative Wigner valued region be? Two classic theorems that can be easily generalized to mixed states establish that the Wigner function is generally bounded in magnitude by $(\pi\hbar)^{-1}=2/h$~\cite{baker58}, while the area of a given negative Wigner valued region can exceed $\hbar$, although at least one of its dimensions must be of order $\sqrt{\hbar}$ or smaller~\cite{cartwright76}. For example, the $N=2$ harmonic oscillator Fock state has a single negative annular Wigner function region of area $\approx 4.44\,\hbar$ and radial width $\approx 0.765\sqrt{\hbar}$. Approaches to stabilizing quantum states involve measurement feedback to control the quantum system dynamics~\cite{joana16}, as well as so-called autonomous methods that do not require measurement feedback control. The latter typically involve `reservoir engineering', where the effective system-environment interaction is tailored in such a way as to evolve the system into a quantum state as well as to protect the state from the decohering effects of the environment~\cite{poyatos96,sarlette11,lin13,shankar13,Roy15,leghtas15}.
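The Fock state numbers quoted above are easy to verify numerically using the standard closed form $W_N(r)=\frac{(-1)^N}{\pi}e^{-r^2}L_N(2r^2)$, $r^2=x^2+p^2$, for the Fock state Wigner function in dimensionless units $\hbar=m=\omega_0=1$ (so that phase space areas come out in units of $\hbar$). The following short script is our own illustrative check, not part of the original analysis:

```python
import numpy as np

# N = 2 Fock state Wigner function in dimensionless units (hbar = m = omega_0 = 1):
# W(r) = (1/pi) exp(-r^2) L_2(2 r^2), with L_2(s) = 1 - 2 s + s^2/2.
def wigner_fock2(r2):
    s = 2.0 * r2
    return np.exp(-r2) * (1.0 - 2.0 * s + 0.5 * s**2) / np.pi

# The negative annulus is bounded by the zeros of L_2: 2 r^2 = 2 -/+ sqrt(2).
r2_in = (2.0 - np.sqrt(2.0)) / 2.0
r2_out = (2.0 + np.sqrt(2.0)) / 2.0
area = np.pi * (r2_out - r2_in)            # annulus area = pi sqrt(2) ~ 4.44 hbar
width = np.sqrt(r2_out) - np.sqrt(r2_in)   # radial width ~ 0.765 sqrt(hbar)

# The global magnitude bound |W| <= 1/pi, i.e. (pi hbar)^{-1}, also holds.
r2_grid = np.linspace(0.0, 30.0, 300001)
w_max = np.abs(wigner_fock2(r2_grid)).max()
print(area, width, w_max)                  # ~4.443, ~0.765, ~0.3183
```

The exact annulus area is $\pi\sqrt{2}\,\hbar\approx 4.44\,\hbar$, consistent with the figures quoted in the text.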
Another approach to autonomously generating quantum states exploits the nonlinearities in the closed bosonic mode system dynamics -- equivalently anharmonicities in the system Hamiltonian. The presence of anharmonicities can cause initial Gaussian states with associated positive Wigner functions to evolve into nonclassical states with associated negative valued Wigner functions (see, e.g., Ref.~\onlinecite{katz08}). In terms of the quantum Fokker-Planck dynamical equations for the Wigner function, the root cause of such evolution is the presence of a third or higher order position derivative term involving the system potential energy. Only when the potential energy is anharmonic is this term present and without this term, the Wigner function dynamical equation coincides with the classical Fokker-Planck equation. For example, in the case of the driven Duffing oscillator with $ x^4$ anharmonicity in the potential energy and the resulting coexistence of bistable large and small amplitude oscillatory solutions for the classical dynamics, an initial coherent state will transiently evolve into a Schr\"{o}dinger cat-like state where the Wigner function displays a sequence of alternating negative and positive regions in between the corresponding large and small amplitude positive Wigner function peaks~\cite{katz08}; in a classically chaotic regime, an initial coherent state will spread out in phase space, exhibiting a complex interference pattern of positive and sub-$\hbar$ (sub-Planckian) scale negative Wigner function regions~\cite{habib98}. However, depending on the environment temperature, such non-classical features will typically diffuse away for the usual device system-environment couplings, leaving a long time steady state that is closely approximated by the solution of the corresponding classical system Fokker-Planck equation. Nevertheless, the question is still largely unresolved as to whether it might be possible to stabilize quantum states largely with anharmonicities alone.
In particular, for certain anharmonicity types and drives (whether externally or internally generated by the system dynamics), we may be able to prepare and maintain quantum states with significant associated negative Wigner function regions, despite the counteracting decoherence effects of environmental noise. Recent relevant developments in superconducting microwave resonator (as well as coupled nanomechanical resonator) circuits involving embedded Josephson junction elements provide strong motivation for pursuing this question~\cite{chen11,blencowe12,armour13,gramich13,chen14,rimberg14,armour15,souquet16,dambach16}. In particular, the Josephson elements can induce strong effective anharmonicities in the microwave mode Hamiltonian, as well as internally generated drive tones through the AC-Josephson effect. One consequence is lasing-like behavior~\cite{chen14}, with the continuous, stimulated emission of amplitude-squeezed microwave photons~\cite{armour13}. With the above motivations in mind, in the present work we will extend the Wigner formulation of the open quantum system dynamics and also take into account the so-called Wigner function flow vector field (or `Wigner flow' in short)~\cite{bauke11,steuernagel13}. The Wigner flow allows a particularly concise reformulation of the quantum Fokker-Planck equation as a standard continuity equation, equating (via the familiar Gauss's theorem of vector calculus) the rate of change of the net Wigner quasiprobability within some two dimensional region to the net Wigner flow normal to the boundary enclosing the region; loosely speaking, the phase space flow picture is somewhat analogous to a system involving a distribution of positive and negative charges (e.g. electrons and holes) that can be annihilated and created, albeit described by a quite different quantum statistical dynamics.
The potential advantage of bringing the Wigner flows into play is that they can give a strong visual representation of how non-classical states form through the system Hamiltonian anharmonicity, as well as diffuse away due to the environment. By exploring the relative contributions to the net Wigner flow across the boundary of a given negative region arising from the system Hamiltonian anharmonicity and from the interactions with the environment, we may be able to improve our understanding of how to `engineer' system Hamiltonian anharmonicities and drive tones so as to stabilize macroscopic bosonic quantum states in the face of environmental noise. The present work gives some initial steps in this direction. In Sec.~\ref{sec:wigner}, we introduce the Wigner flow formulation of the quantum Fokker-Planck equation, giving as specific system examples the harmonic oscillator and additively driven Duffing oscillator. In Sec.~\ref{sec:simulation} we present our numerical results, which include simulations of the open harmonic and Duffing oscillator Wigner function and flows. Section~\ref{sec:discussion} discusses our numerical results and in particular gives some initial steps towards an understanding of how negative Wigner function regions can form and diffuse away from a Wigner flow perspective. In Sec.~\ref{sec:conclusion}, we give some concluding remarks.
\section{The Wigner Flow} \label{sec:wigner} For a one dimensional particle with Hamiltonian $H=p^2/(2 m)+m\omega_0^2 x^2/2+V(x,t)$, where $V(x,t)$ is the (time dependent) anharmonic potential energy, a possible Lindblad master equation that describes the quantum dynamics of the system state characterized by density matrix $\rho(t)$ interacting weakly with an oscillator bath can be written as follows: \begin{eqnarray} \frac{d \rho}{dt}&=&-\frac{i}{\hbar}[H,\rho]+\frac{\gamma}{2}(\bar{n}+1)\left(2 a\rho a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a\right)\cr &&+\frac{\gamma}{2}\bar{n}\left(2a^{\dagger}\rho a-a a^{\dagger}\rho-\rho a a^{\dagger}\right), \label{mastereq} \end{eqnarray} where $\gamma$ is the system energy damping rate and $\bar{n}=(e^{\hbar\omega_0/(k_BT)}-1)^{-1}$ is the Bose-Einstein thermal average occupation number of the temperature $T$ bath at the characteristic frequency $\omega_0$ of the system Hamiltonian. Strictly speaking, the master equation~(\ref{mastereq}) is valid to a good approximation provided the system-environment interaction is weak: $\gamma\ll\omega_0$, the temperature is in the range $\hbar\gamma\ll k_B T\ll\hbar\omega_0$, and the anharmonic potential is sufficiently weak~\cite{haake86}. However, as is frequent practice, we will assume that the master equation still gives reasonable open system quantum dynamics even for larger temperatures $k_B T\sim\hbar\omega_0$.
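The master equation~(\ref{mastereq}) can be integrated directly in a truncated Fock basis. The following minimal Python sketch is our own illustration (not the simulation code used below); it takes dimensionless units with $\omega_0=1$, keeps only the harmonic part of the Hamiltonian, and checks the standard $T=0$ prediction $\langle n\rangle(t)=\langle n\rangle(0)\,e^{-\gamma t}$ implied by Eq.~(\ref{mastereq}):

```python
import numpy as np

# Truncated Fock basis integration of the Lindblad master equation
# (our sketch; dimensionless units, harmonic Hamiltonian only, T = 0 bath).
D = 12                                    # Fock space truncation (assumption)
gamma, nbar = 0.1, 0.0
a = np.diag(np.sqrt(np.arange(1, D)), 1)  # annihilation operator
ad = a.conj().T
H = ad @ a                                # H / (hbar omega_0), with omega_0 = 1

def rhs(rho):
    # right hand side of the Lindblad master equation
    out = -1j * (H @ rho - rho @ H)
    out += 0.5 * gamma * (nbar + 1) * (2 * a @ rho @ ad - ad @ a @ rho - rho @ ad @ a)
    out += 0.5 * gamma * nbar * (2 * ad @ rho @ a - a @ ad @ rho - rho @ a @ ad)
    return out

rho = np.zeros((D, D), dtype=complex)
rho[3, 3] = 1.0                           # initial Fock state |3>
dt, t_end = 0.005, 5.0
for _ in range(int(round(t_end / dt))):   # fourth order Runge-Kutta step
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

n_avg = np.trace(ad @ a @ rho).real
print(n_avg)                              # expect 3 exp(-gamma t_end) ~ 1.8196
```

The trace is preserved and the mean occupation decays at the rate $\gamma$, as it should for this Lindblad form.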
The Wigner function representation of the quantum state $\rho(t)$ as a real-valued function on phase space can be defined as~\cite{wigner32,hillery84,case08,curtright14} \begin{eqnarray} W(x,p,t)&=&\frac{1}{\pi\hbar}\int_{-\infty}^{+\infty} dy\, e^{-2ipy/\hbar}\langle x+y|{\rho}(t)|x-y\rangle\cr &=&\frac{1}{\pi\hbar}\int_{-\infty}^{+\infty} dp'\, e^{+2ip' x/\hbar}\langle p+p'|{\rho}(t)|p-p'\rangle.\cr &&\label{wfeq} \end{eqnarray} Expressing the master equation~(\ref{mastereq}) in terms of the Wigner function (\ref{wfeq}), we obtain the so-called quantum Fokker-Planck equation \begin{eqnarray} &&\frac{\partial W}{\partial t}=-\frac{p}{m}\frac{\partial W}{\partial x}+\left(m\omega_0^2 x +\frac{\partial V}{\partial x}\right)\frac{\partial W}{\partial p}\cr &&+\sum_{n\geq 1}\frac{(-1)^n(\hbar/2)^{2n}}{(2n+1)!}\frac{\partial^{2n+1}}{\partial x^{2n+1}}V\frac{\partial^{2n+1}}{\partial p^{2n+1}} W\cr &&+\frac{\gamma}{2}\frac{\partial}{\partial x}\left[x W+\hbar\left(\bar{n}+\frac{1}{2}\right)\frac{1}{m\omega_0}\frac{\partial W}{\partial x}\right]\cr &&+\frac{\gamma}{2}\frac{\partial}{\partial p}\left[p W+\hbar\left(\bar{n}+\frac{1}{2}\right){m\omega_0}\frac{\partial W}{\partial p}\right].
\label{wfmastereq} \end{eqnarray} The Wigner flow vector fields for the system~\cite{bauke11,steuernagel13} and environment are, respectively, \begin{eqnarray} {\mathbf{J}}_{\mathrm{sys}}= \left( \begin{array}{c} \frac{p}{m} W\\ -\sum_{n\geq 0} \frac{(-1)^{n}(\hbar/2)^{2n}}{(2n+1)!}\partial_x^{(2n+1)}V'\partial_p^{(2n)}W \\ \end{array} \right) \label{sysfloweq} \end{eqnarray} and \begin{eqnarray} {\mathbf{J}}_{\mathrm{env}}= -\frac{\gamma}{2}\left( \begin{array}{c} x W+\hbar\left(\bar{n}+\frac{1}{2}\right)(m\omega_0)^{-1}\partial_x W\\ p W+\hbar\left(\bar{n}+\frac{1}{2}\right) m\omega_0 \partial_p W \\ \end{array} \right), \label{envfloweq} \end{eqnarray} where the first row is the position $x$ component and the second row is the momentum $p$ component of the flow vector, and where we have used the shorthand notation $V'=\frac{1}{2}m\omega_0^2 x^2+V$, $\partial_x \equiv\frac{\partial}{\partial x}$, and $\partial_p \equiv\frac{\partial}{\partial p}$. The environment flow can be further decomposed as a sum of damping and diffusion contributions: ${\mathbf{J}}_{\mathrm{env}}={\mathbf{J}}_{\mathrm{damp}}+{\mathbf{J}}_{\mathrm{diff}}$, where \begin{eqnarray} {\mathbf{J}}_{\mathrm{damp}}= -\frac{\gamma}{2}\left( \begin{array}{c} x W\\ p W\\ \end{array} \right) \label{dampfloweq} \end{eqnarray} and \begin{eqnarray} {\mathbf{J}}_{\mathrm{diff}}= -\frac{\gamma\hbar}{2}\left(\bar{n}+\frac{1}{2}\right)\left( \begin{array}{c} (m\omega_0)^{-1}\partial_x W\\ m\omega_0 \partial_p W \\ \end{array} \right). \label{difffloweq} \end{eqnarray} In terms of the system and environment flows, the master equation for the Wigner function~(\ref{wfmastereq}) takes the concise form of a continuity equation: \begin{equation} \frac{\partial W}{\partial t} +\nabla\cdot {\mathbf{J}}=0, \label{continuityeq} \end{equation} where ${\mathbf{J}}={\mathbf{J}}_{\mathrm{sys}}+{\mathbf{J}}_{\mathrm{env}}$ and $\nabla=(\partial_x,\partial_p)$.
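Both the Wigner function and the flow components are straightforward to evaluate numerically from these definitions. As an illustration (ours; dimensionless units $\hbar=m=\omega_0=1$), the sketch below computes the defining integral for the first excited Fock state and checks that $W(0,0)=-1/\pi$, that $W$ is normalized, and that inside the negative disc the harmonic part of the flow, whose dimensionless form is $(pW,-xW)$, circulates counterclockwise:

```python
import numpy as np

# First excited harmonic oscillator eigenfunction (dimensionless units).
def psi1(x):
    return np.sqrt(2.0) * np.pi**-0.25 * x * np.exp(-x**2 / 2.0)

y = np.linspace(-6.0, 6.0, 2001)
dy = y[1] - y[0]
xs = np.linspace(-4.0, 4.0, 161)        # xs[80] = 0.0, xs[90] = 0.5
ps = xs.copy()
kernel = np.exp(-2j * np.outer(ps, y))  # Fourier kernel of the Wigner transform

W = np.empty((xs.size, ps.size))
for i, xv in enumerate(xs):
    f = psi1(xv + y) * psi1(xv - y)     # <x+y|rho|x-y> for the pure state |1>
    W[i] = (kernel @ f).real * dy / np.pi

center = W[80, 80]                      # W(0, 0) = -1/pi for the |N=1> state
norm = W.sum() * (xs[1] - xs[0]) * (ps[1] - ps[0])

# Harmonic flow at (x, p) = (0.5, 0), inside the negative disc:
# J_HO = (p W, -x W); its momentum component -0.5 W > 0, i.e. counterclockwise.
Jp = -0.5 * W[90, 80]
print(center, norm, Jp)
```

The counterclockwise circulation of the flow in negative regions is discussed further in Sec.~\ref{sec:discussion}.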
The driven Duffing oscillator is characterized by the anharmonic $+$ additive driving potential \begin{equation} V(x,t)=\frac{\lambda}{4} x^4 -x F\cos(\omega_d t), \label{duffingpoteq} \end{equation} where the parameter $\lambda$ gives the strength of the anharmonic potential, the parameter $F$ gives the strength of the time-dependent sinusoidal drive, and $\omega_d$ is the drive frequency. Substituting Eq.~(\ref{duffingpoteq}) into Eq.~(\ref{sysfloweq}), we obtain for the driven Duffing oscillator system Wigner flow: \begin{eqnarray} {\mathbf{J}}_{\mathrm{Duff}}= \left( \begin{array}{c} \frac{p}{m} W\\ \left[-m\omega_0^2 x+F\cos(\omega_d t)-\lambda x^3+\frac{\hbar^2\lambda }{4} x \partial_p^2\right] W \\ \end{array} \right).\cr &&\label{duffingsysfloweq} \end{eqnarray} For the harmonic oscillator the system Wigner flow simplifies to \begin{eqnarray} {\mathbf{J}}_{\mathrm{HO}}= \left( \begin{array}{c} \frac{p}{m} W\\ -m\omega_0^2 x W \\ \end{array} \right). \label{hosysfloweq} \end{eqnarray} In the numerical investigations below, it is convenient to work in terms of dimensionless forms of the Wigner function and flow. In terms of the length unit $x_0=\sqrt{\hbar/(m\omega_0)}$ and time unit $t_0=\omega_0^{-1}$, we transform the various coordinates and parameters into dimensionless form as follows: $\tilde{x}=x/x_0$, $\tilde{p}=p/(m\omega_0 x_0)$, $\tilde{F}=x_0 F/(\hbar\omega_0)$, $\tilde{\lambda}=\lambda x_0^4/(\hbar\omega_0)$, $\tilde{\gamma}=\gamma/\omega_0$, $\tilde{\omega}_d=\omega_d/\omega_0$, and $\tilde{t}=\omega_0 t$, where the tilde denotes the dimensionless form.
The dimensionless form for the Wigner function is \begin{eqnarray} \tilde{W}&=&\hbar W\cr &=&\frac{1}{\pi}\int_{-\infty}^{+\infty} dy\, e^{-2ipy/\hbar}\langle x+y|{\rho}(t)|x-y\rangle\cr &=&\frac{1}{\pi}\int_{-\infty}^{+\infty} d\tilde{y}\, e^{-2i\tilde{p}\tilde{y}}\langle \tilde{x}+\tilde{y}|{\rho}(t)|\tilde{x}-\tilde{y}\rangle, \label{dimlesswfeq} \end{eqnarray} where $|\tilde{x}\rangle=\sqrt{x_0}|x\rangle$ [so that $\langle\tilde{x}|\tilde{x}'\rangle=\delta(\tilde{x}-\tilde{x}')$]. The continuity equation becomes in dimensionless form: \begin{equation} \frac{\partial \tilde{W}}{\partial \tilde{t}} +\tilde{\nabla}\cdot \tilde{{\mathbf{J}}}=0, \label{dimlesscontinuityeq} \end{equation} where ${\tilde{\mathbf{J}}}=\tilde{{\mathbf{J}}}_{\mathrm{Duff}}+\tilde{{\mathbf{J}}}_{\mathrm{env}}$, with \begin{eqnarray} \tilde{{\mathbf{J}}}_{\mathrm{Duff}}= \left( \begin{array}{c} \tilde{p} \tilde{W}\\ \left[-\tilde{x}+\tilde{F}\cos(\tilde{\omega}_d \tilde{t})-\tilde{\lambda} \tilde{x}^3+\frac{\tilde{\lambda}}{4} \tilde{x} \partial_{\tilde{p}}^2\right] \tilde{W} \\ \end{array} \right),\cr &&\label{dimlessduffingsysfloweq} \end{eqnarray} and \begin{equation} \tilde{{\mathbf{J}}}_{\mathrm{env}}=\tilde{{\mathbf{J}}}_{\mathrm{damp}}+\tilde{{\mathbf{J}}}_{\mathrm{diff}}, \label{dimlessenvfloweq} \end{equation} with \begin{eqnarray} \tilde{{\mathbf{J}}}_{\mathrm{damp}}= -\frac{\tilde{\gamma}}{2}\left( \begin{array}{c} \tilde{x} \tilde{W}\\ \tilde{p}\tilde{W}\\ \end{array} \right) \label{dimlessdampfloweq} \end{eqnarray} and \begin{eqnarray} \tilde{{\mathbf{J}}}_{\mathrm{diff}}= -\frac{\tilde{\gamma}}{2}\left(\bar{n}+\frac{1}{2}\right)\left( \begin{array}{c} \partial_{\tilde{x}} \tilde{W}\\ \partial_{\tilde{p}} \tilde{W} \\ \end{array} \right). 
\label{dimlessdifffloweq} \end{eqnarray} For the harmonic oscillator, we have for the dimensionless flow: $\tilde{\mathbf{J}}=\tilde{\mathbf{J}}_{\mathrm{HO}}+\tilde{\mathbf{J}}_{\mathrm{env}}$, with \begin{eqnarray} \tilde{\mathbf{J}}_{\mathrm{HO}}= \left( \begin{array}{c} \tilde{p}\tilde{W}\\ -\tilde{x} \tilde{W} \\ \end{array} \right) \label{dimlesshosysfloweq} \end{eqnarray} and $\tilde{\mathbf{J}}_{\mathrm{env}}$ given by Eq.~(\ref{dimlessenvfloweq}). From now on, we drop the tildes for notational convenience, with the dimensionless form of the parameters and coordinates understood. \section{Numerical Results} \label{sec:simulation} In this section we present the results of our numerical solutions to the Wigner function $W$ and associated flow vector field ${\mathbf{J}}$ for the undriven, open harmonic and driven Duffing oscillator systems. This involves first solving the Lindblad master equation~(\ref{mastereq}) for the system density matrix $\rho(t)$ using QuTiP~\cite{johansson13} and then evaluating the Wigner function and flows in terms of the density matrix; the source code can be obtained from Ref.~\onlinecite{friedman16}. Complete videos of each simulation can be viewed at Ref.~\onlinecite{youtubeplaylist}. While the Wigner function time dependence for the open harmonic oscillator system can be determined analytically~\cite{kim92,paz93}, we nevertheless solve the harmonic oscillator master equation numerically as a check on the validity of our code. In the following Wigner function plots, regions color-coded blue correspond to positive Wigner function value, red regions correspond to negative Wigner function value, while the local color density gives a measure of the Wigner function magnitude.
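A useful consistency check on the environmental flow~(\ref{dimlessenvfloweq}) is that it vanishes identically on a thermal Gaussian with variance $\bar{n}+1/2$ (in dimensionless units): the damping and diffusion contributions cancel exactly, which is why the harmonic oscillator steady state shows no net environmental flow. A short numerical illustration of this cancellation (our own sketch, using central finite differences):

```python
import numpy as np

# Thermal Gaussian steady state of the damped harmonic oscillator
# (dimensionless units): W = exp(-(x^2+p^2)/(2 sig2)) / (2 pi sig2),
# with variance sig2 = nbar + 1/2.
gamma, nbar = 0.01, 2.0
sig2 = nbar + 0.5
x = np.linspace(-6.0, 6.0, 401)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x, indexing="ij")
W = np.exp(-(X**2 + P**2) / (2.0 * sig2)) / (2.0 * np.pi * sig2)
norm = W.sum() * dx * dx

# Environmental flow J_env = -(gamma/2) (x W + sig2 dW/dx, p W + sig2 dW/dp):
# the damping and diffusion pieces should cancel on this state.
dWdx = np.gradient(W, dx, axis=0)
dWdp = np.gradient(W, dx, axis=1)
Jx = -0.5 * gamma * (X * W + sig2 * dWdx)
Jp = -0.5 * gamma * (P * W + sig2 * dWdp)
resid = max(np.abs(Jx).max(), np.abs(Jp).max()) / W.max()
print(norm, resid)   # resid is at the finite-difference error level
```

Any nonzero residual here comes purely from the finite-difference approximation to the gradients.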
\subsection{Harmonic Oscillator} \label{sec:harmonic} Figure~\ref{fig:figure1} shows snapshots of the evolving Wigner function and associated flow ${\mathbf{J}}={\mathbf{J}}_{\mathrm{HO}}+{\mathbf{J}}_{\mathrm{env}}$ for the harmonic oscillator initially in an $N=2$ Fock state~\cite{bauke11} and in the presence of a zero temperature bath; the damping rate is chosen to be $\gamma=0.01$. A unit area square corresponding to Planck's constant $\hbar$ in our dimensionless units is indicated at the bottom right of each figure to give the scale, while the arrow legend at the top left of each figure indicates the scale for the flow vector field. The snapshot times are given in multiples of the free oscillation period $\tau = 2\pi / \omega_0=2\pi t_0$. Figure~\ref{fig:figure2} shows the same evolving Wigner function snapshots as in Fig.~\ref{fig:figure1} but with just the environmental diffusion flow ${\mathbf{J}}_{\mathrm{diff}}$ (\ref{dimlessdifffloweq}) indicated. Figures~\ref{fig:figure3} and~\ref{fig:figure4} show snapshots of the evolving Wigner function and associated full and environmental diffusion flows, respectively, for the harmonic oscillator in an initial superposition of coherent states separated by $x=6$; the damping rate is $\gamma=0.01$ and the bath temperature $T=0$. In the final indicated snapshots corresponding to $t=100\tau$ [Figs.~\ref{fig:figure1}-\ref{fig:figure4}(c)], the Wigner function and flows hardly change between subsequent snapshots separated by a free oscillation period (see also the strobe videos~\cite{youtubeplaylist}), indicating that the system dynamics has reached a steady state to a good approximation. This is to be expected given that $\gamma t=2 \pi$, i.e., the final snapshot time is approximately six times longer than the harmonic oscillator relaxation time.
\begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure1a} \includegraphics[width=0.32\textwidth]{fig1a.png} } \subfloat[$t=4\tau$]{\label{fig:figure1b} \includegraphics[width=0.32\textwidth]{fig1b.png} } \subfloat[$t=100\tau$]{\label{fig:figure1c} \includegraphics[width=0.32\textwidth]{fig1c.png} } \caption{Snapshots of evolving harmonic oscillator Wigner function and associated flow vector field ${\mathbf{J}}={\mathbf{J}}_{\mathrm{HO}}+{\mathbf{J}}_{\mathrm{env}}$ for an initial $N=2$ Fock state with damping rate $\gamma=0.01$ and bath temperature $T=0$.} \label{fig:figure1} \end{figure*} \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure2a} \includegraphics[width=0.32\textwidth]{fig2a.png} } \subfloat[$t=4\tau$]{\label{fig:figure2b} \includegraphics[width=0.32\textwidth]{fig2b.png} } \subfloat[$t=100\tau$]{\label{fig:figure2c} \includegraphics[width=0.32\textwidth]{fig2c.png} } \caption{Snapshots of evolving harmonic oscillator Wigner function and associated environmental diffusion flow vector field ${\mathbf{J}}_{\mathrm{diff}}$ for an initial $N=2$ Fock state with damping rate $\gamma=0.01$ and bath temperature $T=0$.} \label{fig:figure2} \end{figure*} \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure3a} \includegraphics[width=0.32\textwidth]{fig3a.png} } \subfloat[$t=4\tau$]{\label{fig:figure3b} \includegraphics[width=0.32\textwidth]{fig3b.png} } \subfloat[$t=100\tau$]{\label{fig:figure3c} \includegraphics[width=0.32\textwidth]{fig3c.png} } \caption{Snapshots of evolving harmonic oscillator Wigner function and associated flow vector field ${\mathbf{J}}={\mathbf{J}}_{\mathrm{HO}}+{\mathbf{J}}_{\mathrm{env}}$ for an initial superposition of coherent states with separation $x=6$; the damping rate $\gamma=0.01$ and bath temperature $T=0$. 
} \label{fig:figure3} \end{figure*} \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure4a} \includegraphics[width=0.32\textwidth]{fig4a.png} } \subfloat[$t=4\tau$]{\label{fig:figure4b} \includegraphics[width=0.32\textwidth]{fig4b.png} } \subfloat[$t=100\tau$]{\label{fig:figure4c} \includegraphics[width=0.32\textwidth]{fig4c.png} } \caption{Snapshots of evolving harmonic oscillator Wigner function and associated environmental diffusion flow vector field ${\mathbf{J}}_{\mathrm{diff}}$ for an initial superposition of coherent states with separation $x=6$; the damping rate $\gamma=0.01$ and bath temperature $T=0$.} \label{fig:figure4} \end{figure*} \subsection{Duffing Oscillator} \label{sec:duffing} We now turn to the numerical solution of the Wigner function and associated flow vector field for the damped, driven Duffing oscillator. We choose the dimensionless Duffing oscillator parameter values $\lambda=0.05$ (anharmonic strength), $\omega_d=1.09$ (drive frequency), $F=0.092$ (drive strength), and $\gamma=0.01$ (damping rate). These parameter values result in the classical Duffing oscillator exhibiting bistability for the steady state dynamics at zero temperature, corresponding to coexisting small and large amplitude oscillations. For the above parameter choices, these small and large steady state amplitudes are $0.52$ and $2.46$, respectively. Figure~\ref{fig:figure5} shows snapshots of the evolving Wigner function and associated flow ${\mathbf{J}}={\mathbf{J}}_{\mathrm{Duff}}+{\mathbf{J}}_{\mathrm{env}}$ for the Duffing oscillator initially in an undisplaced coherent state and in the presence of a zero temperature bath; the snapshot times are given in multiples of the drive period $\tau_d = 2\pi/\omega_d$. Figure~\ref{fig:figure6} shows the same evolving Wigner function snapshots as in Fig.~\ref{fig:figure5}, but with just the environmental diffusion flow ${\mathbf{J}}_{\mathrm{diff}}$ (\ref{dimlessdifffloweq}) indicated.
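The quoted classical steady state amplitudes can be checked by integrating the corresponding classical equation of motion $\ddot{x}+\gamma\dot{x}+x+\lambda x^3=F\cos(\omega_d t)$ (dimensionless units). The sketch below is our own check; the initial conditions used to select the two branches are illustrative guesses, not taken from the simulations above:

```python
import math

lam, F, wd, gam = 0.05, 0.092, 1.09, 0.01   # Duffing parameters from the text

def deriv(x, v, t):
    # classical damped, driven Duffing oscillator (dimensionless units)
    return v, -x - lam * x**3 - gam * v + F * math.cos(wd * t)

def steady_amplitude(x0, v0, dt=0.01, t_end=2000.0):
    # integrate with fourth order Runge-Kutta; record max |x| over the
    # last ten drive periods, after transients have damped out
    x, v, t = x0, v0, 0.0
    tail_start = t_end - 10.0 * 2.0 * math.pi / wd
    amp = 0.0
    for _ in range(int(round(t_end / dt))):
        k1x, k1v = deriv(x, v, t)
        k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v, t + 0.5 * dt)
        k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v, t + 0.5 * dt)
        k4x, k4v = deriv(x + dt * k3x, v + dt * k3v, t + dt)
        x += dt / 6.0 * (k1x + 2.0 * k2x + 2.0 * k3x + k4x)
        v += dt / 6.0 * (k1v + 2.0 * k2v + 2.0 * k3v + k4v)
        t += dt
        if t >= tail_start:
            amp = max(amp, abs(x))
    return amp

a_small = steady_amplitude(0.0, 0.0)   # from rest: small amplitude branch
a_large = steady_amplitude(2.5, 0.0)   # near the large orbit: large branch
print(a_small, a_large)                # expected near 0.52 and 2.46 (text values)
```

The two distinct steady amplitudes confirm the classical bistability for these parameter values.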
Figures~\ref{fig:figure7} and~\ref{fig:figure8} show snapshots of the evolving Wigner function together with the full flow and environmental diffusion flows, respectively, but at nonzero temperature $T=2\hbar\omega_0/k_B$. In the final indicated snapshots corresponding to $t=300\tau_d$ [Figs.~\ref{fig:figure5}-\ref{fig:figure8}(c)], the Wigner function and flows hardly change between subsequent snapshots separated by a drive period (see also the strobe videos~\cite{youtubeplaylist}); these final snapshots should therefore closely approximate the long time limit steady state Wigner function and flows. \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure5a} \includegraphics[width=0.32\textwidth]{fig5a.png} } \subfloat[$t=18\tau_d$]{\label{fig:figure5b} \includegraphics[width=0.32\textwidth]{fig5b.png} } \subfloat[$t=300\tau_d$]{\label{fig:figure5c} \includegraphics[width=0.32\textwidth]{fig5c.png} } \caption{Snapshots of evolving Duffing oscillator Wigner function and associated flow vector field ${\mathbf{J}}={\mathbf{J}}_{\mathrm{Duff}}+{\mathbf{J}}_{\mathrm{env}}$ for an initial undisplaced coherent state; the damping rate $\gamma=0.01$ and bath temperature $T=0$.} \label{fig:figure5} \end{figure*} \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure6a} \includegraphics[width=0.32\textwidth]{fig6a.png} } \subfloat[$t=18\tau_d$]{\label{fig:figure6b} \includegraphics[width=0.32\textwidth]{fig6b.png} } \subfloat[$t=300\tau_d$]{\label{fig:figure6c} \includegraphics[width=0.32\textwidth]{fig6c.png} } \caption{Snapshots of evolving Duffing oscillator Wigner function and associated environmental diffusion flow vector field ${\mathbf{J}}_{\mathrm{diff}}$ for an initial undisplaced coherent state; the damping rate $\gamma=0.01$ and bath temperature $T=0$.} \label{fig:figure6} \end{figure*} \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure7a}
\includegraphics[width=0.32\textwidth]{fig7a.png} } \subfloat[$t=18\tau_d$]{\label{fig:figure7b} \includegraphics[width=0.32\textwidth]{fig7b.png} } \subfloat[$t=300\tau_d$]{\label{fig:figure7c} \includegraphics[width=0.32\textwidth]{fig7c.png} } \caption{Snapshots of evolving Duffing oscillator Wigner function and associated flow vector field ${\mathbf{J}}={\mathbf{J}}_{\mathrm{Duff}}+{\mathbf{J}}_{\mathrm{env}}$ for an initial undisplaced coherent state; the damping rate $\gamma=0.01$ and bath temperature $T=2 \hbar\omega_0/k_B$.} \label{fig:figure7} \end{figure*} \begin{figure*}[htp] \centering \subfloat[$t=0$ (initial state)]{\label{fig:figure8a} \includegraphics[width=0.32\textwidth]{fig8a.png} } \subfloat[$t=18\tau_d$]{\label{fig:figure8b} \includegraphics[width=0.32\textwidth]{fig8b.png} } \subfloat[$t=300\tau_d$]{\label{fig:figure8c} \includegraphics[width=0.32\textwidth]{fig8c.png} } \caption{Snapshots of evolving Duffing oscillator Wigner function and associated environmental diffusion flow vector field ${\mathbf{J}}_{\mathrm{diff}}$ for an initial undisplaced coherent state; the damping rate $\gamma=0.01$ and bath temperature $T=2\hbar\omega_0/k_B$.} \label{fig:figure8} \end{figure*} \section{Discussion} \label{sec:discussion} Common to the harmonic and Duffing oscillator quantum dynamics indicated in Figs.~\ref{fig:figure1},~\ref{fig:figure3},~\ref{fig:figure5}, and~\ref{fig:figure7}, the direction of the flow ${\mathbf{J}}$ in the regions of positive-valued Wigner function is clockwise about the phase space origin, just as is the case for an evolving classical probability density that results from solving the corresponding classical Fokker-Planck equation for some initial probability distribution; for the harmonic oscillator system, the Wigner flow continuity equation~(\ref{continuityeq}) coincides with the classical, Brownian motion Fokker-Planck equation, while for the Duffing oscillator the Wigner flow continuity equation~(\ref{continuityeq}) 
differs from the classical Fokker-Planck equation only in the presence of the system quantum flow term $(0,\lambda x\partial_p^2 W/4)$ [see Eq.~(\ref{dimlessduffingsysfloweq})]. In contrast, the flow direction in the regions of negative-valued Wigner function is {\em{counterclockwise}}, i.e., in the opposite direction to the corresponding classical flow~\cite{bauke11,steuernagel13,albarelli2016}. In Figs.~\ref{fig:figure2},~\ref{fig:figure4},~\ref{fig:figure6}, and~\ref{fig:figure8}, we can see that for any negative-valued Wigner function region, the diffusion contribution to the environmental flow ${\mathbf{J}}_{\mathrm{diff}}$ is always directed inwards on the boundary of the negative region, with the result that the environmental diffusion flow acts to destroy negative regions. This is just the process of decoherence viewed in terms of the Wigner flow. In order to gain a better understanding of how regions where the Wigner function is negative initially form, are stabilized, or eventually disappear, let us suppose that the Wigner function at some given time instant $t$ is negative in certain regions of phase space. This is the case for the initial Fock state and coherent state superposition examples considered above (see Figs.~\ref{fig:figure1}-\ref{fig:figure4}), while for the Duffing oscillator, we see that negative Wigner function regions are generated through the dynamics (Figs.~\ref{fig:figure5}-\ref{fig:figure8}). Consider a particular negative region with phase space area $A(t)$ and boundary $\partial A(t)$, where the indicated $t$-dependence accounts for the fact that the negative region evolves in time. In particular, the boundary is defined by $\left. W(x,p,t)\right|_{\partial A(t)}=0$. A measure of the degree of negativity of the region is the negative `volume' given by the integral $\int_{A(t)} dx dp\, W(x,p,t)$.
From Eqs.~(\ref{dimlesscontinuityeq})-(\ref{dimlessdifffloweq}) and Gauss's theorem, the time rate of change of this negative volume is \begin{eqnarray} &&\frac{d}{dt}\int_{A(t)} dx dp\, W(x,p,t)=\frac{\lambda}{4} \int_{\partial A(t)} ds\, {\mathbf{n}}\cdot\left(0,-x\right) \frac{\partial^2 W}{\partial p^2} \cr &&+ \frac{\gamma}{2}\left(\bar{n}+\frac{1}{2}\right) \int_{\partial A(t)}ds\, {\mathbf{n}}\cdot\nabla W, \label{negduffrateeq} \end{eqnarray} where we have used the fact that the Wigner function vanishes on the boundary $\partial A(t)$, $s$ parametrizes the boundary curve, and ${\mathbf{n}}$ is the unit vector outwards normal to the curve. For the harmonic oscillator system, the first term on the right hand side of Eq.~(\ref{negduffrateeq}) vanishes (since $\lambda=0$) and the rate of change of the region negativity is affected solely by the environmental diffusion flow~(\ref{dimlessdifffloweq}). Since the Wigner function is by definition negative on the interior region and positive on at least the immediate exterior region of the boundary $\partial A(t)$, the gradient $\nabla W$ points outwards so that ${\mathbf{n}}\cdot \nabla W\geq 0$ everywhere on the boundary. Therefore, for the harmonic oscillator we have that \begin{equation} \frac{d}{dt}\int_{A(t)} dx dp\, W(x,p,t)\geq 0 \label{increasingWFeq} \end{equation} and we thus see that the size of the negative regions always decreases with time at a rate governed by the environmental diffusion flow. On the other hand, for the Duffing oscillator system ($\lambda\neq 0$), we see from Eq.~(\ref{negduffrateeq}) that the rate of change of the region negativity is governed by two flow contributions: the system quantum flow $(0, \lambda x\partial_p^2 W/4)$ and the environmental diffusion flow~(\ref{dimlessdifffloweq}).
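The harmonic oscillator inequality~(\ref{increasingWFeq}) can be verified directly on a grid: starting from the $N=1$ Fock state Wigner function and taking one explicit Euler step of the environmental (damping plus diffusion) part of the dimensionless quantum Fokker-Planck equation at $T=0$, the negative `volume' increases toward zero. A minimal sketch (our own illustration):

```python
import numpy as np

# N = 1 Fock state Wigner function on a grid (dimensionless units).
gamma, nbar, dt = 0.1, 0.0, 0.1
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x, indexing="ij")
R2 = X**2 + P**2
W = -(1.0 - 2.0 * R2) * np.exp(-R2) / np.pi     # W(0,0) = -1/pi

def env_rhs(W):
    # -div(J_env): damping + diffusion part of the dimensionless
    # quantum Fokker-Planck equation, via central finite differences
    damp = 0.5 * gamma * (np.gradient(X * W, dx, axis=0)
                          + np.gradient(P * W, dx, axis=1))
    lap = (np.gradient(np.gradient(W, dx, axis=0), dx, axis=0)
           + np.gradient(np.gradient(W, dx, axis=1), dx, axis=1))
    return damp + 0.5 * gamma * (nbar + 0.5) * lap

neg_before = W[W < 0.0].sum() * dx * dx         # ~ -0.213 for the N = 1 state
W_next = W + dt * env_rhs(W)                    # one explicit Euler step
neg_after = W_next[W_next < 0.0].sum() * dx * dx
print(neg_before, neg_after)                    # negativity moves toward zero
```

Only the diffusion piece changes the negative volume to leading order, since the damping flow is proportional to $W$ and so vanishes on the region boundary.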
Since the environmental diffusion destroys negative regions, the system quantum flow must therefore be responsible for the initial generation and possible eventual stabilization of negative regions in the steady state. In particular, for a growing negative region we necessarily require that \begin{equation} \frac{\lambda}{4} \int_{\partial A(t)} ds\, {\mathbf{n}}\cdot\left(0,-x\right) \frac{\partial^2 W}{\partial p^2}<0. \label{negquantumfloweq} \end{equation} If the term $\partial_p^2W$ were a constant on the boundary $\partial A(t)$ of a negative region, then the integral in Eq.~(\ref{negquantumfloweq}) would simply vanish. Supposing that $\left.\partial_p^2 W\right|_{\partial A(t)}\geq 0$ [i.e., $W(x,p)$ is concave upwards with respect to its momentum dependence on the boundary $\partial A(t)$] and for positive anharmonic coupling strength $\lambda>0$, the term $\partial_p^2W$ must therefore be larger on segments of the boundary where ${\mathbf{n}}\cdot(0,-x)<0$ for condition (\ref{negquantumfloweq}) to hold. Referring to Fig.~\ref{fig:figure9}, we thus see that negative regions will tend to form at the leading edges of the clockwise rotating Wigner function peaks. \begin{figure}[htp] \includegraphics[width=\columnwidth]{fig9.png} \caption{Simplified picture of negative-valued Wigner function regions indicated as red discs and the dominant positive-valued Wigner function regions indicated as blue semicircular arches. The indicated relative positioning of these regions in the $x>0$ and $x<0$ half-planes is dictated by the sign of the dot product $(0,-x)\cdot{\mathbf{n}}$, where ${\mathbf{n}}$ is the unit vector outwards normal to the negative region boundary circle $\partial A(t)$. 
In particular, $(0,-x)\cdot{\mathbf{n}}<0$ for the upper semicircle boundary segment in the $x>0$ half-plane and $(0,-x)\cdot{\mathbf{n}}<0$ for the lower semicircle boundary segment in the $x<0$ half-plane.} \label{fig:figure9} \end{figure} The instant at which the first negative region appears for our Duffing oscillator simulations is shown in Fig.~\ref{fig:figure10}, which appears to be in accord with the above prediction. \begin{figure*}[htp] \centering \subfloat[$t=2.0 \tau_d$]{\label{fig:figure10a} \includegraphics[width=0.32\textwidth]{fig10a.png} } \subfloat[$t=2.1 \tau_d$]{\label{fig:figure10b} \includegraphics[width=0.32\textwidth]{fig10b.png} } \subfloat[$t=2.2 \tau_d$]{\label{fig:figure10c} \includegraphics[width=0.32\textwidth]{fig10c.png} } \caption{Snapshots showing the formation of the first negative Wigner region (appearing in the lower right quadrant) for the Duffing Oscillator at bath temperature $T=0$. Wigner functions are scaled in order to resolve the initial negative region.} \label{fig:figure10} \end{figure*} Eventually, the negative regions practically vanish even at zero temperature as is clear from Fig.~\ref{fig:figure5}(c). Because the chosen parameter values result in coexisting small and large amplitude stable oscillations for the classical dynamics, the Wigner function must correspondingly spread out through flow and quantum diffusion from its initially narrow and strongly peaked coherent state distribution [Fig.~\ref{fig:figure5}(a)]. As a result, the magnitude of the term $\partial_p^2W$ must decrease overall, and with the small chosen anharmonic coupling strength value $\lambda$~($=0.05$), the system quantum term is too weak to counter the deleterious effects of the diffusion term in Eq.~(\ref{negduffrateeq}) and stabilize sizable negative regions. 
Although only small negative Wigner function regions remain in the steady state for the considered Duffing parameter values, under the right conditions it might be possible to generate and stabilize sizable negative Wigner function regions, perhaps even at non-zero temperatures. In particular, the form of the quantum term in Eq.~(\ref{negduffrateeq}) and the accompanying picture provided by Fig.~\ref{fig:figure9} suggest that negative regions are favored by a large-magnitude anharmonic coupling and by single-amplitude oscillatory solutions, where the corresponding rotating Wigner function remains narrow and strongly peaked so as to ensure that the term $\partial_p^2W$ at the leading (trailing) edge for $\lambda>0$ ($\lambda<0$) is sufficiently large. One possible way to maintain large values for $\partial_p^2W$ at the leading and trailing peak edges might be through the continuous squeezing of the momentum uncertainty noise~\cite{friedman17}. \section{Conclusion} \label{sec:conclusion} In the present work, we extended the Wigner phase space formulation of open quantum system dynamics to include a description of the Wigner flow vector fields on phase space. This enables the quantum Fokker-Planck equation describing the Wigner function dynamics to be written in the concise form of a continuity equation. The evolving Wigner flows were investigated numerically for a harmonic oscillator and a driven Duffing oscillator in the bistable regime, the latter serving as an illustrative anharmonic system. Through the application of the two-dimensional Gauss's theorem to boundary-enclosed, negative Wigner function regions on system phase space, we saw that the formation and disappearance of the negative regions are governed solely by the so-called quantum flow due to the system anharmonicity and the diffusion flow across the negative region boundaries.
By examining the form of these specific contributions to the total Wigner flow, we were able to gain some initial insights as to how negative regions form as a result of the system anharmonicity, as well as how they might be stabilized through the use of suitable system anharmonicities and drive tones. \clearpage \section*{acknowledgements} We thank Paul Nation for helpful discussions, as well as John Hudson and Susan Schwarz for their assistance with using the Dartmouth Discovery Cluster. This work was supported by the National Science Foundation under Grant No. DMR-1507383.
\section{Introduction} The ATLAS and CMS groups at the LHC have reported the discovery of a Higgs-like particle \cite{lhc}. All the particle contents of the standard model now seem to have been found. However, the standard model has serious problems from experimental and observational points of view. Although the existence of neutrino masses and dark matter has been confirmed through various experiments and observations \cite{nexp,t13,uobs,planck}, it cannot be explained in the standard model. The standard model cannot give a framework for the generation of the baryon number asymmetry in the Universe, either \cite{basym}. These facts cause serious tension between the standard model and Nature, motivating us to consider its extension. The radiative neutrino mass model proposed in \cite{ma} is a simple and interesting extension of the standard model which could provide such an explanation. In several previous articles \cite{ndm,u1,susyndm,ndm1,ks,infl}, we have studied these problems in this model and its extensions. They suggest that these problems could be explained simultaneously in a consistent way. Unfortunately, however, we could not justify several assumptions and the parameter tuning adopted in these explanations. For example, if we consider thermal leptogenesis in this model, both finely degenerate right-handed neutrino masses and a small Yukawa coupling for the lightest right-handed neutrino are required in order to make possible sufficient generation of lepton number asymmetry through the out-of-equilibrium decay of the lightest right-handed neutrino. In those works, we simply assumed these conditions independently, in a way consistent with other phenomenological requirements. In this paper, we consider an extension of the model which makes it possible to realize these required conditions simultaneously in the evolution of the Universe. We suppose a new symmetry breaking at a scale of $O(1)$ TeV for this purpose.
After this symmetry breaking, a small mass difference is induced between the two lighter right-handed neutrinos, although they originally have equal masses. At the same time, a Yukawa coupling of the lightest right-handed neutrino becomes much smaller than that of the heavier one. To realize this scenario, we introduce a low-energy U(1) gauge symmetry to the model. We show that (i) both the almost degenerate right-handed neutrino masses and a tiny neutrino Yukawa coupling, which are indispensable for TeV scale resonant leptogenesis \cite{res}, are brought about after the breaking of this symmetry. Moreover, we find that this extension can also explain key features required in the original Ma model, that is, (ii) a small quartic coupling between the Higgs doublet scalar and an inert doublet scalar which plays a crucial role in the neutrino mass generation, and (iii) the origin of the $Z_2$ symmetry which guarantees the stability of dark matter. The remaining part of this paper is organized as follows. After introducing an extended model in the next section, we discuss features of the scalar sector and also the right-handed neutrino mass degeneracy. Baryon number asymmetry generated through thermal leptogenesis is studied taking these into account. In section 3, we study the dark matter relic abundance and other cosmological aspects of the model. Finally, in section 4 we give a brief summary of the main results of the paper. \section{An extended model} \subsection{U(1) gauge symmetry at a TeV scale} The original Ma model is a simple extension of the standard model which can relate neutrino masses and dark matter \cite{ma}. In this model, only an inert doublet scalar $\eta$ and right-handed neutrinos $N_i$ are added to the standard model. Although the ingredients of the standard model are assigned even parity under the imposed $Z_2$ symmetry, the new fields are assumed to have odd parity.
This feature forbids tree-level neutrino mass generation and guarantees the stability of dark matter. We extend this model with a U(1)$_X$ gauge symmetry, a singlet scalar $S$, and also additional right-handed neutrinos $\tilde N_i$ whose number is equal to that of $N_i$. The U(1)$_X$ charge is assigned to each new ingredient as $Q_X(S)=2$, $Q_X(\eta)=-1$, $Q_X(N_i)=1$, and $Q_X(\tilde N_i)=-1$. Normalization for the U(1)$_X$ charge and coupling is fixed through a covariant derivative, which is defined as $D_\mu=\partial_\mu-ig\frac{\tau^a}{2}W_\mu^a-ig_Y\frac{Y}{2}B_\mu -ig_X\frac{Q_X}{2}X_\mu$. Since the standard model fields are assumed to have no charge under this U(1)$_X$, it is obvious that the U(1)$_X$ is anomaly free. If this symmetry is assumed to break down due to a vacuum expectation value $\langle S\rangle$, the model has a remnant exact $Z_2$ symmetry after this breaking. Since only $\eta$, $N_i$ and $\tilde N_i$ have odd parity under it, the lightest of them is stable and can be dark matter. We assume that dark matter is the lightest neutral component of $\eta$ in this study.
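The anomaly-freedom statement reduces to a simple charge count: only the chiral singlet fermions $N_i$ and $\tilde N_i$ carry U(1)$_X$ charge (the scalars do not contribute to fermionic anomalies, and the standard model fields are neutral), and they come in opposite-charge pairs. A minimal sketch of the check, assuming two sets of $(N_i,\tilde N_i)$ as used later for the neutrino masses:

```python
# Anomaly-freedom check for U(1)_X. Only N_i (Q_X = +1) and tilde-N_i (Q_X = -1)
# are chiral fermions charged under U(1)_X; the scalars eta and S do not enter
# fermionic anomalies, and the SM fields are neutral. Two sets (N_i, tilde-N_i)
# are assumed here, matching the model used for the neutrino masses.
n_sets = 2
charges = [+1, -1] * n_sets  # Q_X of (N_i, tilde-N_i) for each set

cubic_anomaly = sum(q**3 for q in charges)   # [U(1)_X]^3 anomaly coefficient
grav_anomaly = sum(q for q in charges)       # gravitational-U(1)_X anomaly
print(cubic_anomaly, grav_anomaly)  # 0 0
```

Both coefficients vanish set by set, so the cancellation holds for any number of $(N_i,\tilde N_i)$ pairs.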
The relevant part of the Lagrangian for these new ingredients of the model is summarized as \begin{eqnarray} -{\cal L}_N&=&h_{\alpha i} \bar{N_i}\eta^\dagger\ell_{\alpha} +f_{\alpha i}\frac{S^\dagger}{M_\ast}\bar{\tilde {N_i}} \eta^\dagger \ell_{\alpha}+M_iN_i\tilde N_i +\frac{y_i}{2}S^\dagger N_iN_i + \frac{\tilde y_i}{2}S\tilde{N_i}\tilde{N_i} + {\rm h.c.}, \nonumber \\ V&=&\lambda_1(\phi^\dagger\phi)^2+\lambda_2(\eta^\dagger\eta)^2 +\lambda_3(\phi^\dagger\phi)(\eta^\dagger\eta) +\lambda_4(\eta^\dagger\phi)(\phi^\dagger\eta) +\frac{\lambda_5^\prime}{2} \Big[\frac{S}{M_\ast}(\phi^\dagger\eta)^2 + {\rm h.c.}\Big] \nonumber \\ &+&\lambda_6(S^\dagger S)(\phi^\dagger\phi) + \lambda_7(S^\dagger S) (\eta^\dagger\eta) +\kappa(S^\dagger S)^2 +m_\phi^2\phi^\dagger\phi +m_\eta^2\eta^\dagger\eta +m_S^2S^\dagger S, \label{model} \end{eqnarray} where $\ell_{\alpha}$ is a left-handed doublet lepton and $\phi$ is an ordinary doublet Higgs scalar. $M_\ast$ is a cut-off scale of this model. The bare masses $M_i$ and $m_\eta$ in eq.~(\ref{model}) are assumed to be real and of $O(1)$~TeV. The couplings $h_{\alpha i}$ and $f_{\alpha i}$ in the neutrino sector are considered to be written by using the basis in which the Yukawa coupling matrix of charged leptons is diagonal. As easily found in eq.~(\ref{model}), if the singlet $S$ has a vacuum expectation value, the coupling $\lambda_5$ in the original Ma model and neutrino Yukawa couplings $\tilde h_{\alpha i}$ for $\tilde N_i$ are determined as \cite{u1} \begin{equation} \lambda_5=\lambda_5^\prime\frac{\langle S\rangle}{M_\ast}, \qquad \tilde h_{\alpha i}=f_{\alpha i}\frac{\langle S^\dagger\rangle}{M_\ast}, \label{l5c} \end{equation} where it may be natural to consider that both $\lambda_5^\prime$ and $f_{\alpha i}$ are of $O(1)$. The magnitude of $\lambda_5$ is crucial for the neutrino mass determination in the model. We note that it can be small enough if $|\langle S\rangle|\ll M_\ast$ is satisfied. 
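As a rough numerical sketch of the suppression in eq.~(\ref{l5c}): with $\lambda_5^\prime$ and $f_{\alpha i}$ of $O(1)$ and $\langle S\rangle$ at the TeV scale, a value $|\lambda_5|=O(10^{-4})$ of the size invoked later for leptogenesis follows for a cutoff around $10^7$~GeV. The value of $M_\ast$ below is a hypothetical choice for illustration only; the text fixes only the hierarchy $|\langle S\rangle|\ll M_\ast$.

```python
# Suppression of lambda_5 and tilde-h by <S>/M_*, following eq. (l5c).
# M_star is an illustrative (hypothetical) choice; the paper only requires
# <S> << M_star. <S> = 2 TeV matches the value used later in Fig. 1.
vev_S = 2.0e3        # GeV
M_star = 2.0e7       # GeV, hypothetical cutoff scale
lam5_prime = 1.0     # O(1) coupling
f = 1.0              # O(1) coupling

lam5 = lam5_prime * vev_S / M_star
h_tilde = f * vev_S / M_star
print(lam5, h_tilde)  # both 1e-4
```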
Scales assumed for $|\langle S\rangle|$ and $M_\ast$ in the present study are discussed below. \subsection{Scalar sector} First, we discuss the scalar sector of the model. We express the scalar fields by using a unitary gauge \begin{equation} \phi^T=(0, \langle\phi\rangle +\frac{h}{\sqrt{2}}), \quad \eta^T=(\eta^+, \frac{1}{\sqrt 2}(\eta_R+i\eta_I)), \quad S=\langle S\rangle + \frac{\sigma}{\sqrt{2}}, \label{unitary} \end{equation} where both vacuum expectation values $\langle\phi\rangle$ and $\langle S\rangle$ are assumed to be real and positive. In this vacuum, the new Abelian gauge boson $X_\mu$ gets a mass $m_X^2=2g_X^2\langle S\rangle^2$. The scalar potential $V$ in eq.~(\ref{model}) can be represented by using eq.~(\ref{unitary}) as \begin{eqnarray} V&=& \frac{1}{2}\left( 4\lambda_1\langle\phi\rangle^2h^2+ 4\kappa\langle S\rangle^2\sigma^2 + 4\lambda_6\langle\phi\rangle\langle S\rangle h\sigma\right) +\frac{1}{2}M_{\eta_R}^2\eta_R^2+ \frac{1}{2}M_{\eta_I}^2\eta_I^2 + M_{\eta_c}^2\eta^+\eta^- \nonumber \\ &+&\frac{1}{4}\left[\sqrt{\lambda_1}h^2-\sqrt{\lambda_2} (2\eta^+\eta^-+\eta_R^2+\eta_I^2)-\sqrt\kappa\sigma^2\right]^2 +\frac{1}{4}\left[\left\{2(\lambda_3+2\sqrt{\lambda_1\lambda_2})\eta^+\eta^- \right.\right. \nonumber \\ &+&\left.\left.(\lambda_++2\sqrt{\lambda_1\lambda_2})\eta_R^2+ (\lambda_-+2\sqrt{\lambda_1\lambda_2})\eta_I^2 +(\lambda_6+2\sqrt{\lambda_1\kappa})\sigma^2\right\}h^2\right. 
\nonumber \\ &+&\left.(\lambda_7-2\sqrt{\lambda_2\kappa})(2\eta_+\eta_-+\eta_R^2 +\eta_I^2)\sigma^2\right] +\sqrt{2}\lambda_1\langle\phi\rangle h^3 +\sqrt 2\kappa\langle S\rangle \sigma^3 \nonumber \\ &+&{\sqrt 2}(\lambda_3 \langle\phi\rangle h + \lambda_7\langle S\rangle\sigma)\eta_+\eta_- +\frac{1}{\sqrt 2}\left(\lambda_+\langle\phi\rangle h +\lambda_7\langle S\rangle\sigma\right)\eta_R^2 \nonumber \\ &+&\frac{1}{\sqrt 2}\left(\lambda_-\langle\phi\rangle h +\lambda_7\langle S\rangle\sigma\right)\eta_I^2 +\frac{\lambda_6}{\sqrt 2}\left(\langle\phi\rangle h\sigma^2 +\langle S\rangle\sigma h^2\right) + \frac{\lambda_5^\prime}{4\sqrt 2M_\ast}\sigma h^2(\eta_R^2-\eta_I^2), \label{pot} \end{eqnarray} where we use the definition $\lambda_\pm=\lambda_3+\lambda_4\pm \lambda_5$ and \begin{equation} M_{\eta_c}^2=m_\eta^2+\lambda_7\langle S\rangle^2 + \lambda_3\langle\phi\rangle^2, \qquad M_{\eta_{R(I)}}^2=m_\eta^2+ \lambda_7\langle S\rangle^2 + \lambda_{+(-)}\langle\phi\rangle^2. \label{emass} \end{equation} The difference between these masses is estimated to be \begin{equation} \frac{M_{\eta_I}-M_{\eta_R}}{M_{\eta_R}}\simeq \frac{\lambda_5\langle\phi\rangle^2}{M_{\eta_R}^2} \equiv\frac{\delta}{M_{\eta_R}}, \qquad \frac{M_{\eta_c}-M_{\eta_R}}{M_{\eta_R}}\simeq \frac{(\lambda_4+\lambda_5)\langle\phi\rangle^2}{2M_{\eta_R}^2}, \label{mdif} \end{equation} which could be a good approximation as long as $m_\eta^2+\lambda_7\langle S\rangle^2 \gg \langle\phi\rangle^2$ is satisfied. A large value of $m_\eta^2+\lambda_7\langle S\rangle^2$ is favored from the analysis of the $T$ parameter in precise measurements of the electroweak interaction \cite{idm,idm1}. We assume such a situation in the present study. Quartic scalar couplings in the potential $V$ are constrained by several conditions. 
The stability of the assumed vacuum requires \begin{equation} \lambda_1,~ \lambda_2, ~\kappa >0; \quad \lambda_3,~\lambda_\pm > -2\sqrt{\lambda_1\lambda_2}; \quad \lambda_6>-2\sqrt{\lambda_1\kappa}; \quad \lambda_7>-2\sqrt{\lambda_2\kappa}. \label{cstab} \end{equation} These can be easily read off from the expression of the scalar potential $V$ given in eq.~(\ref{pot}).\footnote{The last condition can be found by using a different expression of $V$, which is modified so that $\sqrt\kappa$ has the opposite sign to eq.~(\ref{pot}).} Perturbativity of the model imposes that these quartic couplings should be smaller than $4\pi$.\footnote{More precisely, $|\lambda_{1,2}|$ and $|\kappa|$ should be smaller than $\frac{2\pi}{3}$.} Moreover, if we assume that $\eta_R$ is the lightest one among the fields with odd parity of the remnant $Z_2$, eq.~(\ref{emass}) shows that the following conditions should be satisfied: \begin{equation} \lambda_4+\lambda_5<0, \qquad \lambda_5<0; \qquad M_{\eta_R} <{\rm min}(M_{\pm i}), \label{cdm} \end{equation} where $M_{\pm i}$ are the mass eigenvalues for $N_i$ and $\tilde N_i$ which are discussed in detail later. Using the value of $\lambda_1$ predicted by the Higgs mass observed at LHC experiments \cite{lhc} and the conditions given in eqs.~(\ref{cstab}) and (\ref{cdm}), we can roughly estimate the allowed range of $\lambda_{3,4}$ as \begin{equation} -2.5<\lambda_3<4\pi, \qquad -4\pi<\lambda_4 <0, \label{clambda} \end{equation} for sufficiently small values of $|\lambda_5|$. The potential minimum in eq.~(\ref{pot}) is obtained as \begin{equation} \langle\phi\rangle^2=\frac{\lambda_6m_S^2-2\kappa m_\phi^2}{4\lambda_1\kappa-\lambda_6^2}, \qquad \langle S\rangle^2=\frac{\lambda_6m_\phi^2-2\lambda_1 m_S^2}{4\lambda_1\kappa-\lambda_6^2}. 
\label{cvac} \end{equation} Since the new gauge boson does not couple with the standard model fields, both cases $\langle S\rangle^2 \gg\langle\phi\rangle^2$ and $\langle S\rangle^2 \ll \langle\phi\rangle^2$ could be phenomenologically allowed. However, if we apply this model to the leptogenesis, $\langle S\rangle^2 \gg\langle\phi\rangle^2$ should be satisfied as discussed later. Such a vacuum can be realized for a sufficiently small $|\lambda_6|$ satisfying $4\lambda_1\kappa \gg \lambda_6^2$ and negative values of $m_S^2$ and $m_\phi^2$ satisfying $|m_S^2|\gg |m_\phi^2|$. In this case, both vacuum expectation values are approximately expressed as $\langle \phi\rangle^2\simeq -\frac{m_\phi^2}{2\lambda_1}$ and $\langle S\rangle^2\simeq -\frac{m_S^2}{2\kappa}$. If the contribution of $\langle S\rangle$ to the $\eta$ mass is of the same order as that of $\langle\phi\rangle$, $|\lambda_7|$ should be much smaller than $|\lambda_{3,4}|$ as found from eq.~(\ref{emass}). Since $h$ and $\sigma$ defined in eq.~(\ref{unitary}) have mass mixing as found from the first line in eq.~(\ref{pot}), mass eigenstates $\tilde h$ and $\tilde\sigma$ are a mixture of these. They are found to be written as \begin{equation} \tilde h\simeq h-\frac{\lambda_6\langle \phi\rangle}{2\kappa\langle S\rangle}\sigma, \qquad \tilde\sigma=\sigma+\frac{\lambda_6\langle \phi\rangle}{2\kappa\langle S\rangle}h. \end{equation} However, since $\langle S\rangle^2\gg\langle \phi\rangle^2$ is assumed and $|\lambda_6|< \sqrt\kappa$ is expected, mass eigenstates could be almost equal to $h$ and $\sigma$. In this case, the mass eigenvalues are approximately expressed as \begin{equation} m_{\tilde h}^2 = \left(4\lambda_1 -\frac{\lambda_6^2}{\kappa}\right)\langle\phi\rangle^2, \qquad M_{\tilde\sigma}^2\simeq 4\kappa\langle S\rangle^2. \label{hmass} \end{equation} These should have positive values for the stability of the considered vacuum. 
It requires $4\lambda_1\kappa > \lambda_6^2$, which is consistent with the above discussion. The value of $\lambda_1$ might be estimated by using $m_{\tilde h}\simeq 125$~GeV. If we apply it to the tree-level formula in eq.~(\ref{hmass}), we have \begin{equation} \lambda_1-\frac{\lambda_6^2}{4\kappa}\sim 0.13. \label{lam1} \end{equation} This result suggests that $\lambda_1$ could have a somewhat larger value than the corresponding quartic coupling in the standard model. However, this effect is expected to be small since the assumed vacuum requires $4\lambda_1\kappa \gg \lambda_6^2$. On the other hand, the model has the additional scalar couplings $\lambda_3$ and $\lambda_4$, which are known to improve the potential stability \cite{stab}. Thus, the constraint from the potential stability against the radiative correction in the present model could be milder than that of the standard model. If we impose the requirement that $\tilde\sigma$ is heavier than the Higgs scalar, $\kappa$ satisfies $\kappa~{^>_\sim}~10^{-3}\left(\frac{2~{\rm TeV}}{\langle S\rangle}\right)^2$ and $\lambda_6$ could take a small value so as to be consistent with the condition $|\lambda_6|<2\sqrt{\lambda_1\kappa}$. If the above condition for $\kappa$ is not satisfied, $\tilde\sigma$ can be lighter than $\tilde h$ so as to realize $m_{\tilde h}>2 M_{\tilde\sigma}$. In that case, the coupling $\lambda_6$ satisfies $|\lambda_6|~{^<_\sim}~ 10^{-2} \left(\frac{\lambda_1}{0.13}\right)^{\frac{1}{2}} \left(\frac{2~{\rm TeV}}{\langle S\rangle}\right)$ and the interaction in the last line of eq.~(\ref{pot}) induces the invisible decay $\tilde{h}\rightarrow 2\tilde\sigma$. The decay width can be estimated as \begin{equation} \Gamma(\tilde h\rightarrow 2\tilde\sigma)= \frac{\lambda_6^2|\langle\phi\rangle|^2} {16\pi m_{\tilde h}}\sqrt{1-4\frac{M_{\tilde\sigma}^2}{m_{\tilde h}^2}}. 
\end{equation} The branching ratio of this invisible decay should be less than $19\%$, where the Higgs total width is $\sim 4$~MeV \cite{hwidth}. This constrains the value of $\lambda_6$ as $|\lambda_6|<0.0126$ \cite{wein}, which could be consistent with the vacuum condition discussed above. Here, we note that both $\kappa$ and $\lambda_6$ take small values for the light $\tilde\sigma$. In that case, $\tilde\sigma$ could have non-negligible cosmological effects. We will come back to this point later. \subsection{Degenerate right-handed neutrinos} Next, we discuss the neutrino sector. If thermal leptogenesis at TeV scales is supposed to be the origin of the baryon number asymmetry in the Universe, the mass degeneracy among right-handed neutrinos is indispensable, at least in certain parameter regions \cite{ks}. In the present model, spontaneous breaking of a new Abelian gauge symmetry due to a vacuum expectation value of $S$ could make the singlet fermions $N_i$ and $\tilde N_i$ behave as pseudo-Dirac fermions. In fact, if $|y_i\langle S^\dagger\rangle|, |\tilde y_i\langle S\rangle|\ll M_i$ is satisfied, their masses are almost degenerate.\footnote{The same scenario was first considered to explain the mass degeneracy among right-handed neutrinos in \cite{res-f2}. It is also discussed in \cite{pseudoD}.} The mass matrix of the singlet fermions is expressed as \begin{equation} \frac{1}{2}(N_i, \tilde{N_i})\left(\begin{array}{cc} |y_i|e^{i\gamma_i}\langle S^\dagger\rangle & M_i \\ M_i & |\tilde{y_i}|e^{i\tilde\gamma_i} \langle S\rangle \\ \end{array}\right) \left(\begin{array}{c} N_i \\ \tilde{N_i}\\\end{array}\right) +{\rm h.c.}, \end{equation} where $M_i$ and $\langle S\rangle$ can be taken to be positive generally.
The mass eigenvalues $M_{\pm i}$ are derived as \begin{eqnarray} && M_{+i}\simeq M_i\sin 2\theta_i+\Big(|y_i|\cos(\gamma_i-\xi_i)\cos^2\theta_i +|\tilde y_i|\cos(\tilde\gamma_i+\xi_i)\sin^2\theta_i\Big) \langle S\rangle, \nonumber \\ && M_{-i}\simeq M_i\sin 2\theta_i -\Big(|y_i|\cos(\gamma_i-\xi_i)\sin^2\theta_i +|\tilde y_i|\cos(\tilde\gamma_i+\xi_i)\cos^2\theta_i\Big) \langle S\rangle, \label{e-nmass} \end{eqnarray} and the corresponding mass eigenstates ${\cal N}_{\pm i}$ are found to be written as \begin{eqnarray} &&{\cal N}_{+i}=e^{-i\frac{\xi_i}{2}} \left(N_i\cos\theta_i + \tilde{N_i}e^{-i\xi_i}\sin\theta_i\right), \nonumber \\ && {\cal N}_{-i}=ie^{-i\frac{\xi_i}{2}} \left(-N_i\sin\theta_i + \tilde{N_i}e^{-i\xi_i}\cos\theta_i\right), \end{eqnarray} respectively. Here, the phase $\xi_i$ is fixed by the parameters in the mass matrix as \begin{equation} \tan\xi_i=\frac{|y_i|\sin\gamma_i-|\tilde y_i|\sin\tilde\gamma_i} {|y_i|\cos\gamma_i+|\tilde y_i|\cos\tilde\gamma_i}, \label{beta} \end{equation} and the mixing angle $\theta_i$ is given by using this $\xi_i$ as \begin{equation} \tan2\theta_i=\frac{M_i}{\langle S\rangle} \frac{2}{|y_i|\cos(\gamma_i-\xi_i) -|\tilde{y_i}|\cos(\tilde\gamma_i+\xi_i)}. \label{theta} \end{equation} The difference of the mass eigenvalues given in eq.~(\ref{e-nmass}) is expressed by using these as \begin{equation} \Delta_i\equiv \frac{M_{+i}-M_{-i}}{M_{-i}}\simeq \frac{\langle S\rangle}{M_i} \frac{|y_i|\cos(\gamma_i-\xi_i)+|\tilde y_i|\cos(\tilde\gamma_i +\xi_i)} {\sin 2\theta_i}. \label{msplit} \end{equation} From these formulas, we find that $\theta_i$ could be approximated as $\frac{\pi}{4}$ and also the right-handed neutrino masses might be finely degenerate at a period where the sphaleron interaction is in thermal equilibrium, simultaneously. The condition required for this is that both $|y_i|\langle S\rangle$ and $|\tilde y_i|\langle S\rangle$ are much smaller than $M_i$ which is assumed to be of $O(1)$ TeV. 
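The approximate splitting (\ref{msplit}) can be checked against a direct diagonalization of the $2\times 2$ mass matrix. The sketch below takes the simplest case of real couplings ($\gamma_i=\tilde\gamma_i=0$, so $\xi_i=0$ and $\theta_i\simeq\pi/4$); the parameter values are illustrative, matching the scales used later in Fig.~1.

```python
import math

# Direct diagonalization of the real symmetric singlet-fermion mass matrix
#   [[ y <S>,  M ], [ M,  ytilde <S> ]]
# for gamma_i = gammatilde_i = 0 (so xi_i = 0, theta_i ~ pi/4), compared with
# the approximate splitting Delta_i ~ (<S>/M)(|y| + |ytilde|) of eq. (msplit).
M = 2.0e3                     # GeV
vev_S = 2.0e3                 # GeV
y, ytilde = 1.0e-5, 1.0e-8    # hierarchical couplings as in Fig. 1

a, d, b = y * vev_S, ytilde * vev_S, M
mean = 0.5 * (a + d)
root = math.sqrt((0.5 * (a - d))**2 + b**2)
M_plus = abs(mean + root)     # physical (positive) Majorana masses
M_minus = abs(mean - root)

delta_exact = (M_plus - M_minus) / M_minus
delta_approx = (vev_S / M) * (y + ytilde)  # eq. (msplit) at theta = pi/4, xi = 0
print(delta_exact, delta_approx)  # both ~1.0e-5
```

The two agree to better than a part in $10^3$ for these values, and $\Delta_i\simeq 10^{-5}$ reproduces the degeneracy level quoted in Table~1.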
This implies that the resonant leptogenesis could occur for a value of $\langle S\rangle$ which is larger than the weak scale as long as both $|y_i|$ and $|\tilde y_i|$ are sufficiently small. The neutrino Yukawa couplings and other relevant interactions of the right-handed neutrinos in eq.~(\ref{model}) can be written by using the mass eigenstates ${\cal N}_{\pm i}$ as \begin{eqnarray} &&\sum_{i=1,2}\Big[e^{-i\frac{\xi_i}{2}} \Big(h_{\alpha i}\cos\theta_i +\tilde h_{\alpha i}e^{-i\xi_i}\sin\theta_i\Big) \bar{\cal N}_{+i}\eta^\dagger\ell_\alpha \nonumber \\ &&\hspace*{1cm}-ie^{-i\frac{\xi_i}{2}}\Big(h_{\alpha i}\sin\theta_i -\tilde h_{\alpha i}e^{-i\xi_i}\cos\theta_i\Big)\bar{\cal N}_{-i}\eta^\dagger \ell_\alpha \nonumber \\ &&\hspace*{1cm}+\frac{1}{2\sqrt 2}\Big\{ \left(|y_i|e^{i(\gamma_i+\xi_i)}\cos^2\theta_i+|\tilde y_i| e^{i(\tilde\gamma_i+ 3\xi_i)}\sin^2\theta_i\right) \tilde\sigma{\cal N}_{+i}^2 \nonumber \\ &&\hspace*{1cm}-\left(|y_i|e^{i(\gamma_i+\xi_i)}\sin^2\theta_i+|\tilde y_i| e^{i(\tilde\gamma_i +3\xi_i)}\cos^2\theta_i\right) \tilde\sigma{\cal N}_{-i}^2 \nonumber \\ &&\hspace*{1cm}+i\sin 2\theta_i\left(|y_i|e^{i(\gamma_i+\xi_i)}-|\tilde y_i| e^{i(\tilde\gamma_i +3\xi_i)}\right)\tilde\sigma{\cal N}_{+i}{\cal N}_{-i} \Big\} \nonumber \\ &&\hspace*{1cm}+ ig_X\sin 2\theta_i~X_\mu(\bar{\cal N}_{+i} \gamma^\mu {\cal N}_{-i}) + {\rm h.c.}~ \Big]. \label{couplings} \end{eqnarray} If $h_{\alpha i}=\tilde h_{\alpha i}$ is satisfied,\footnote{Although this assumption is not necessary for the present scenario, we adopt it to make the analysis easier.} the flavor structure of the model becomes very simple. 
In that case, the neutrino Yukawa couplings can be rewritten as \begin{eqnarray} g_{\alpha i}^{(+)}&\equiv&e^{-i\frac{\xi_i}{2}} h_{\alpha i}\Big(\cos\theta_i +e^{-i\xi_i}\sin\theta_i\Big) =h_{\alpha i}\left(1+\cos\xi_i\sin 2\theta_i\right)^{\frac{1}{2}} e^{i(\delta_{+i}-\frac{\xi_i}{2})}, \nonumber \\ g_{\alpha i}^{(-)}&\equiv&-ie^{-i\frac{\xi_i}{2}}h_{\alpha i} \Big(\sin\theta_i- e^{-i\xi_i}\cos\theta_i\Big) =h_{\alpha i}\left(1-\cos\xi_i\sin 2\theta_i\right)^{\frac{1}{2}} e^{i(\delta_{-i}-\frac{\xi_i}{2})}, \label{nyukawa} \end{eqnarray} where we suppose $h_{\alpha i}$ to be real, for simplicity. The phases $\delta_{\pm i}$ are defined as \begin{equation} \tan\delta_{+ i}=\frac{-\sin\xi_i\tan\theta_i} {1+\cos\xi_i\tan\theta_i}, \qquad \cot\delta_{-i}=\frac{\sin\xi_i}{\cos\xi_i-\tan\theta_i}. \end{equation} We use these simplified neutrino Yukawa couplings in the following discussion. The neutrino mass is induced through one-loop diagrams which have ${\cal N}_{+i}$ or ${\cal N}_{-i}$ in an internal fermion line as in the original model. The mass formula is given by \begin{equation} {\cal M}_{\alpha\beta} =\sum_{i}\sum_{s=\pm}\left|g_{\alpha i}^{(s)}g_{\beta i}^{(s)}\lambda_5\right| e^{i(2\delta_{si}-\xi_i)}\Lambda(M_{s i}), \label{nmass} \end{equation} where $\Lambda(M_{\pm i})$ is defined as \begin{equation} \Lambda(M_{\pm i})=\frac{\langle\phi\rangle^2}{8\pi^2} \frac{M_{\pm i}}{M_\eta^2-M_{\pm i}^2} \left(1+\frac{M_{\pm i}^2}{M_\eta^2-M_{\pm i}^2} \ln\frac{M_{\pm i}^2}{M_\eta^2}\right). \label{nmass1} \end{equation} $M_\eta$ is an averaged value of the mass eigenvalues of $\eta_R$ and $\eta_I$. If the model has two sets of $(N_i, \tilde N_i)$ at least, neutrino mass eigenvalues suitable for the explanation of the neutrino oscillation data could be derived.\footnote{We can consider another minimal model which has one set of $(N_1, \tilde N_1)$ and an additional right-handed neutrino which has no charge of U(1)$_X$. 
A result similar to the present one could be expected for neutrino masses and leptogenesis also in such a model.} We consider a model with two sets of $(N_i, \tilde N_i)$ in the following. Since the scale $\Lambda(M_{\pm i})$ is estimated as $\Lambda(M_{\pm i})=O(10^9)$~eV for $\eta$ and ${\cal N}_{\pm i}$ whose masses are in the TeV range, eq.~(\ref{nmass}) suggests that the atmospheric neutrino data require the relevant neutrino Yukawa couplings to satisfy \begin{equation} \sum_i \left|g_{\alpha i}^{(\pm)}g_{\beta i}^{(\pm)}\lambda_5\right| =O(10^{-11}). \label{c-nmass} \end{equation} On the other hand, if ${\cal N}_{-1}$ is identified with the lightest right-handed neutrino, its decay should occur out of thermal equilibrium for successful leptogenesis. This condition could impose strong constraints on various interactions of ${\cal N}_{-1}$. These can be roughly estimated by requiring that the reaction rates of both the decay of ${\cal N}_{-1}$ and its scattering with other particles be smaller than the Hubble parameter. The most important process is the ${\cal N}_{-1}$ decay. If the neutrino Yukawa couplings of ${\cal N}_{-1}$ satisfy \begin{equation} \left(\sum_\alpha \left|g_{\alpha 1}^{(-)}\right|^2\right)^{\frac{1}{2}} \le 10^{-8}, \label{lept1} \end{equation} it does not reach equilibrium at the temperature $T~{^>_\sim}~100$~GeV. The condition (\ref{lept1}) shows that ${\cal N}_{-1}$ gives a negligible contribution to the neutrino mass generation, which is found from eqs.~(\ref{nmass}) and (\ref{c-nmass}). On the other hand, if ${\cal N}_{+1}$ is supposed to give the main contribution to the neutrino mass generation, the condition (\ref{c-nmass}) shows that its Yukawa couplings should satisfy \begin{equation} \left|g_{\alpha 1}^{(+)}\right|^2= O\left(\frac{10^{-11}}{|\lambda_5|}\right) \qquad (\alpha=e,\mu,\tau).
\label{lept2} \end{equation} Equation (\ref{nyukawa}) suggests that the original neutrino Yukawa couplings $|h_{\alpha 1}|$ do not need to be extremely small for the simultaneous realization of the conditions (\ref{lept1}) and (\ref{lept2}) as long as $\cos\xi_1\sin 2\theta_1\simeq 1$ is satisfied to a good accuracy and also $|\lambda_5|$ takes a small value of $O(10^{-4})$. Other nonzero neutrino mass eigenvalues could be determined through the second pair $(N_2,\tilde N_2)$. Since the relevant Yukawa couplings $h_{\alpha 2}$ are not constrained by the leptogenesis, we can derive neutrino masses and mixing favorable for the explanation of the neutrino oscillation data through eq.~(\ref{nmass}) independently. If only one of ${\cal N}_{\pm 2}$ contributes to the neutrino mass generation as in the $(N_1, \tilde N_1)$ sector, one of three neutrino mass eigenvalues is expected to be negligibly small as in the model studied in \cite{ks}. \input epsf \begin{figure}[t] \begin{center} \epsfxsize=7.5cm \leavevmode \epsfbox{fcp.eps} \end{center} \vspace*{-3mm} {\footnotesize {\bf Fig.~1}~~$CP$ asymmetry as a function of $\gamma_1$ for typical values of $(|y_1|, |h_{\alpha 1}|)$. In each case, these parameters are fixed as A$(10^{-5}, 4\times 10^{-4})$, B$(10^{-5}, 5\times 10^{-4})$, C$(2\times 10^{-5}, 4\times 10^{-4})$, and D$(2\times 10^{-5}, 5\times 10^{-4})$. Other relevant parameters are taken to be $\tilde\gamma_1=0.1$, $\tilde y_1=10^{-8}$, $M_1=\langle S\rangle=2$~TeV and $M_\eta=1$~TeV.} \end{figure} \subsection{Resonant leptogenesis} In this framework, we consider resonant leptogenesis \cite{res,res-f1,res-f2}. The dominant contribution to the $CP$ asymmetry $\varepsilon$ in the ${\cal N}_{-1}$ decay comes from the resonance appearing in the one-loop self-energy diagram. 
In that case, $\varepsilon$ is known to be expressed as \cite{res-f1,res-f2} \begin{eqnarray} \varepsilon&=&\frac{{\rm Im} \left(\sum_\alpha g_{\alpha 1}^{(+)\ast} g_{\alpha 1}^{(-)}\right)^2} {\left(\sum_\alpha g_{\alpha 1}^{(-)\ast} g_{\alpha 1}^{(-)}\right) \left(\sum_\alpha g_{\alpha 1}^{(+)\ast}g_{\alpha 1}^{(+)}\right)}~ \frac{2\Delta_1{\tilde\Gamma_{{\cal N}_{+1}}}} {4\Delta_1^2+\tilde\Gamma_{{\cal N}_{+1}}^2} \nonumber \\ &=&\frac{\cos2\theta_1\sin 2\xi_1}{1-\sin^22\theta_1\cos^2\xi_1} \frac{2\Delta_1{\tilde\Gamma_{{\cal N}_{+1}}}} {4\Delta_1^2+\tilde\Gamma_{{\cal N}_{+1}}^2}, \end{eqnarray} where we use the expression of the neutrino Yukawa couplings $|g_{\alpha 1}^{(\pm)}|$ given in eq.~(\ref{nyukawa}). The mass degeneracy $\Delta_1$ is defined in eq.~(\ref{msplit}) and $\tilde\Gamma_{{\cal N}_{+1}}= \frac{\sum_\alpha\left|g_{\alpha 1}^{(+)}\right|^2} {8\pi}\left(1-\frac{M_\eta^2}{M_{+1}^2}\right)^2$. If we assume $\langle S\rangle=M_1$ for simplicity, the right-handed neutrino sector $(N_1,\tilde N_1)$ has five free parameters. Using these, we study the relation between the $CP$ asymmetry and the structure of the right-handed neutrino sector. In Fig.~1, we plot the $CP$ asymmetry $\varepsilon$ as a function of $\gamma_1$ for four typical sets of $(|y_1|, |h_{\alpha 1}|)$. Other parameters are fixed at the values given in the caption of Fig.~1. We find that $\varepsilon$ changes sign from minus to plus at $\gamma_1\sim 10^{-4}$ and $5\times 10^{-5}$ for the cases A, B and C, D, respectively. Its absolute value is strongly enhanced around these values of $\gamma_1$. If we note that $\left|g_{\alpha 1}^{(-)}\right|\le O(10^{-8})$ is required for the out-of-equilibrium decay of ${\cal N}_{-1}$, we find that $|\xi_1|$ should take a very small value such as $O(10^{-4})$ for $|h_{\alpha 1}|=O(10^{-4})$.
As found from eq.~(\ref{beta}), such a small $|\xi_1|$ could be easily realized for hierarchical $|y_1|$ and $|\tilde y_1|$ by fixing the values of $\gamma_1$ and $\tilde\gamma_1$ appropriately. In these examples, such hierarchical values are assumed for $|y_1|$ and $|\tilde y_1|$. We also note that the same parameter set could induce the degenerate right-handed neutrino masses as found from eq.~(\ref{msplit}). This feature makes it possible for the model to satisfy the minimal conditions for successful resonant leptogenesis. Although we have to introduce a tiny coupling $|\tilde y_1|$ in this scenario, the important quantities for the leptogenesis are closely related to each other. The model can bring about favorable values for all of them simultaneously from a common set of parameters. In fact, for the parameters used in Fig.~1, desirable values of the quantities relevant to leptogenesis are obtained. We present their values derived from these parameters in Table 1. These results show that $\left|g_{\alpha 1}^{(-)}\right|$ takes small values which satisfy the condition (\ref{lept1}) at the points where the $CP$ asymmetry $|\varepsilon|$ has large values. The mass degeneracy $\Delta_1=O(10^{-5})$ between the right-handed neutrinos ${\cal N}_{\pm 1}$ is also realized in this region. This level of degeneracy has been shown to be sufficient for leptogenesis in the radiative neutrino mass model in a previous study \cite{ks}. Although the smallness of $|\tilde y_1|$ should be explained in some complete model at high energies, this is beyond the scope of the present study and we do not pursue it further here.
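The chain of relations (\ref{beta}), (\ref{theta}), (\ref{nyukawa}), (\ref{msplit}) and the resonant $CP$ asymmetry formula can be evaluated numerically. The sketch below does this for case A at the Table~1 point $\gamma_1\sim 9\times 10^{-5}$; the flavor sum is taken over three equal couplings $h_{\alpha 1}$, which is an assumption about the flavor structure not fixed by the text. It reproduces $|g_{\alpha 1}^{(\pm)}|$ and $\Delta_1$ of Table~1 and gives $\varepsilon$ of the expected sign and order of magnitude.

```python
import math

# Case A of Fig. 1: |y_1| = 1e-5, |h_a1| = 4e-4, gammatilde_1 = 0.1,
# |ytilde_1| = 1e-8, M_1 = <S> = 2 TeV, M_eta = 1 TeV, evaluated at
# gamma_1 = 9e-5 (the Table 1 point). Three equal flavor couplings are
# assumed for the sums over alpha.
y, ytilde, h = 1.0e-5, 1.0e-8, 4.0e-4
gamma, gammat = 9.0e-5, 0.1
M1, vev_S, M_eta = 2.0e3, 2.0e3, 1.0e3

xi = math.atan2(y * math.sin(gamma) - ytilde * math.sin(gammat),
                y * math.cos(gamma) + ytilde * math.cos(gammat))     # eq. (beta)
tan2th = (M1 / vev_S) * 2.0 / (y * math.cos(gamma - xi)
                               - ytilde * math.cos(gammat + xi))     # eq. (theta)
two_theta = math.atan(tan2th) if tan2th > 0 else math.pi + math.atan(tan2th)
s2t, c2t = math.sin(two_theta), math.cos(two_theta)

g_plus = h * math.sqrt(1.0 + math.cos(xi) * s2t)                     # eq. (nyukawa)
g_minus = h * math.sqrt(1.0 - math.cos(xi) * s2t)
delta1 = (vev_S / M1) * (y * math.cos(gamma - xi)
                         + ytilde * math.cos(gammat + xi)) / s2t     # eq. (msplit)

Gamma_t = (3.0 * g_plus**2 / (8.0 * math.pi)) * (1.0 - M_eta**2 / M1**2)**2
eps = (c2t * math.sin(2.0 * xi) / (1.0 - s2t**2 * math.cos(xi)**2)) \
      * 2.0 * delta1 * Gamma_t / (4.0 * delta1**2 + Gamma_t**2)

print(g_minus, g_plus, delta1, eps)
```

The output gives $|g_{\alpha 1}^{(-)}|\simeq 3.1\times 10^{-9}$, $|g_{\alpha 1}^{(+)}|\simeq 5.7\times 10^{-4}$ and $\Delta_1\simeq 1.0\times 10^{-5}$, in line with case A of Table~1, and a negative $\varepsilon$ of order $10^{-3}$; the precise value of $\varepsilon$ depends on the evaluation point and the assumed flavor sum.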
\begin{figure}[t] \begin{center} \begin{tabular}{ccccc}\hline & $|g_{\alpha 1}^{(-)}|$ & $|g_{\alpha 1}^{(+)}|$ & $\Delta_1$ &$\varepsilon$ \\ \hline\hline A & $3.12\cdot 10^{-9}$ & $5.66\cdot 10^{-4}$ & $1.00\cdot 10^{-5}$ & $-1.73 \cdot 10^{-3}$ \\ B & $6.71\cdot 10^{-9}$ & $7.07\cdot 10^{-4}$ & $1.00\cdot 10^{-5}$ & $-2.71\cdot 10^{-3}$ \\ C & $1.17\cdot 10^{-8}$ & $5.66\cdot 10^{-4}$ & $2.00\cdot 10^{-5}$ & $-5.04\cdot 10^{-4}$ \\ D &$1.46\cdot 10^{-8}$ & $7.07\cdot 10^{-4}$ & $2.00\cdot 10^{-5}$ & $-7.88\cdot 10^{-4}$ \\ \hline \end{tabular} \end{center} \vspace*{-1mm} {\footnotesize {\bf Table~1}~ Derived values of the quantities relevant to the leptogenesis for each case given in Fig.~1. These are estimated at $\gamma_1\sim 9 \times 10^{-5}$ and $4\times 10^{-5}$ for the cases A, B and C, D, respectively. } \end{figure} The baryon number asymmetry generated through the decay of ${\cal N}_{-1}$ can be fixed by estimating the generated lepton number asymmetry through solving the Boltzmann equations numerically for both the ${\cal N}_{-1}$ number density $n_{{\cal N}_{-1}}$ and the lepton number asymmetry $n_L(\equiv n_\ell-n_{\bar\ell})$. We introduce these number densities in the co-moving volume as $Y_{{\cal N}_{-1}}=\frac{n_{{\cal N}_{-1}}}{s}$ and $Y_L=\frac{n_L}{s}$ by using the entropy density $s$. 
The Boltzmann equations for these are written as \begin{eqnarray} &&\frac{dY_{{\cal N}_{-1}}}{dz}=-\frac{z}{sH(M_{-1})} \left(\frac{Y_{{\cal N}_{-1}}}{Y_{{\cal N}_{-1}}^{\rm eq}}-1\right)\left\{ \gamma^D_{{\cal N}_{-1}}+ \gamma_{{{\cal N}_{-1}} {\tilde\sigma}}^{S}+ \gamma_{{{\cal N}_{-1}}{X}}^{S}\right\}, \nonumber \\ &&\frac{dY_L}{dz}=\frac{z}{sH(M_{-1})}\left\{ \varepsilon\left(\frac{Y_{{\cal N}_{-1}}}{Y_{{\cal N}_{-1}}^{\rm eq}}-1\right) \gamma^D_{{\cal N}_{-1}}-\frac{2Y_L}{Y_\ell^{\rm eq}} \left(\frac{\gamma^D_{{\cal N}_{+1}}}{4} +\gamma_{{\cal N}_{+1}}^{(2)} +\gamma_{{\cal N}_{+1}}^{(13)}\right)\right\}, \label{bqn} \end{eqnarray} where $z=\frac{M_{-1}}{T}$ and $H(M_{-1})=1.66g_\ast^{1/2}\frac{M_{-1}^2} {m_{\rm pl}}$. The equilibrium values for these are expressed as $Y_{{\cal N}_{-1}}^{\rm eq}(z)=\frac{45}{2\pi^4g_\ast}z^2K_2(z)$ and $Y_\ell^{\rm eq}\simeq\frac{81}{\pi^4g_\ast}$, where $K_2(z)$ is the modified Bessel function of the second kind. Since the Yukawa couplings of ${\cal N}_{+1}$ are large enough, it is expected to be in thermal equilibrium throughout the relevant period. In these equations, we take into account the important reactions which could keep ${\cal N}_{-1}$ in the equilibrium and wash out the generated lepton number asymmetry. The former ones include the 2-2 scatterings of ${\cal N}_{-1}$ with $\tilde\sigma$ and $X_\mu$, whose reaction densities are represented by $\gamma^S_{{\cal N}_{-1}\tilde\sigma}$ and $\gamma^S_{{\cal N}_{-1}X}$ in eq.~(\ref{bqn}). These could be effective if $\tilde\sigma$ and $X_\mu$ are light enough. Other reaction densities in eq.~(\ref{bqn}) can be found in the appendix of \cite{ks}. 
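The qualitative behavior of eq.~(\ref{bqn}) can be sketched with a simplified, dimensionless toy version of the system: a weak decay term for ${\cal N}_{-1}$ and a strong inverse-decay washout term for ${\cal N}_{+1}$. The coefficients `K_DEC` and `K_WASH` below are illustrative stand-ins for the full reaction densities of \cite{ks}, not the actual ones.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn  # modified Bessel functions K_n

G_STAR = 106.75  # relativistic degrees of freedom

def y_eq(z):
    """Equilibrium abundance Y^eq = 45/(2 pi^4 g_*) z^2 K_2(z)."""
    return 45.0 / (2.0 * np.pi ** 4 * G_STAR) * z ** 2 * kn(2, z)

# Illustrative strengths: K_DEC small (out-of-equilibrium N_{-1} decay),
# K_WASH large (strong N_{+1} inverse-decay washout), eps from Table 1.
K_DEC, K_WASH, EPS = 0.01, 100.0, -1.7e-3

def rhs(z, y):
    YN, YL = y
    decay = K_DEC * z * kn(1, z) / kn(2, z)   # toy decay/inverse-decay rate
    wash = 0.25 * K_WASH * z ** 3 * kn(1, z)  # toy washout rate
    dYN = -decay * (YN - y_eq(z))
    dYL = EPS * decay * (YN - y_eq(z)) - wash * YL
    return [dYN, dYL]

z0, zf = 0.5, 50.0
sol = solve_ivp(rhs, (z0, zf), [y_eq(z0), 0.0],
                rtol=1e-8, atol=1e-14, method="LSODA")
YN_final, YL_final = sol.y[0, -1], sol.y[1, -1]
```

Even in this toy form, the lepton asymmetry only survives once the washout term freezes out, which mirrors the rapid growth of $Y_L$ at large $z$ seen in Fig.~2.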
\begin{figure}[t] \begin{center} \epsfxsize=4.8cm \leavevmode \epsfbox{fbasym-a.eps} \hspace*{3mm} \epsfxsize=4.8cm \leavevmode \epsfbox{fbasym-b.eps} \hspace*{3mm} \epsfxsize=4.8cm \leavevmode \epsfbox{fbasym-c.eps} \\ \epsfxsize=4.8cm \leavevmode \epsfbox{frate-a.eps} \hspace*{3mm} \epsfxsize=4.8cm \leavevmode \epsfbox{frate-b.eps} \hspace*{3mm} \epsfxsize=4.8cm \leavevmode \epsfbox{frate-c.eps} \\ \end{center} \vspace*{-3mm} {\footnotesize {\bf Fig.~2}~~In the upper panels, solutions of the Boltzmann equations are plotted as a function of $z$ for the case A shown in Table~1. In the lower panels, the relevant reaction rates $\Gamma/H$ are plotted as a function of $z$ for the same parameters as in the corresponding upper panels. The reaction rates of the ${\cal N}_{-1}$ decay, the ${\cal N}_{+1}$ inverse decay and the lepton number violating ${\cal N}_{+1}$ scatterings are represented by $\Gamma_{N_{-1}}^D$, $\Gamma_{N_{+1}}^{ID}$ and $\Gamma_{N_{+1}}^{(2)}$, $\Gamma_{N_{+1}}^{(13)}$, respectively. The masses of $\tilde\sigma$ and $X_\mu$ are set as $(M_{\tilde\sigma},m_X)=(200,300)$, $(60,100)$ and $(200,10^{-3})$ in GeV units from left to right, respectively.} \end{figure} In Fig.~2, the solutions of these equations and the reaction rates $\Gamma$ of the relevant processes are plotted as functions of $z$ for the case A in Table 1. In these panels, the masses of $\tilde\sigma$ and $X_\mu$ are fixed to $(M_{\tilde\sigma}, m_X)=(200,300)$, $(60, 100)$, and $(200, 10^{-3})$ in GeV units, respectively. As the initial condition for $Y_{{\cal N}_{-1}}$ in the Boltzmann equations we use its equilibrium value, since both $N_1$ and $\tilde N_1$ are expected to be in thermal equilibrium. Since we adopt this initial condition, the deviation $\Delta_{{\cal N}_{-1}}$ of $Y_{{\cal N}_{-1}}$ from its equilibrium value does not change sign, as found in the upper panels of this figure.
After $\langle S\rangle$ becomes nonzero, the mass eigenstate ${\cal N}_{-1}$ leaves the equilibrium because of its small Yukawa coupling $g_{\alpha 1}^{(-)}$. Thus, the value of $z$ at which we introduce the effect of the nonzero $\langle S\rangle$ in the equations could be crucial for the estimation of the lepton number asymmetry. As a simple approximation, we introduce its effect as a step function at $z_0$. In order to check the validity of this analysis, we vary $z_0$ in the range $0.3<z_0<1$ and examine the $z_0$ dependence of the final results. Since the results differ by at most a few tens of percent and show no serious $z_0$ dependence, the present treatment can be considered to give reliable results. In the lower panels, which plot the behavior of the reaction rates, we find that the inverse decay of ${\cal N}_{+1}$ plays the dominant role among the various processes in washing out the generated lepton number asymmetry. Although the ${\cal N}_{+1}$ mass is almost degenerate with the mass of ${\cal N}_{-1}$, its Yukawa coupling $g_{\alpha 1}^{(+)}$ is not so small that it decouples at an earlier period; this is a generic feature of resonant leptogenesis. The rapid increase of the lepton number asymmetry in the $z>10$ region can be understood from the large decrease of $\Gamma_{N_{+1}}^{ID}$ there. The scatterings of ${\cal N}_{-1}$ with $\tilde\sigma$ and $X_\mu$ are not effective in keeping ${\cal N}_{-1}$ in thermal equilibrium even if $\tilde\sigma$ and $X_\mu$ are light enough. Since $\langle S\rangle$ is supposed to be rather large, the assumed masses for $\tilde\sigma$ and $X_\mu$ are obtained only for small couplings $\kappa$ and $g_X$, which is considered to be the reason for this behavior.
\begin{figure}[t] \begin{center} \begin{tabular}{c|ccccc}\hline $(M_{\tilde\sigma},~m_X)$ & A & B & C & D \\ \hline\hline $(200,~300)$ & $5.2\cdot 10^{-10}$ & $2.3\cdot 10^{-9}$ & $4.2\cdot 10^{-10}$ & $5.6 \cdot 10^{-10}$ \\ $(60, ~100)$ & $3.9\cdot 10^{-10}$ & $1.7\cdot 10^{-9}$ & $1.5\cdot 10^{-10}$ & $1.9\cdot 10^{-10}$ \\ $(200,~10^{-3})$ & $4.0 \cdot 10^{-10}$ & $1.8 \cdot 10^{-9}$ & $1.6\cdot 10^{-10}$ & $2.2\cdot 10^{-10}$ \\ $(600,~600)$ & $7.0 \cdot 10^{-10}$ & $ 3.1\cdot 10^{-9}$ & $1.1\cdot 10^{-9}$ & $1.4\cdot 10^{-9}$ \\ \hline \end{tabular} \end{center} \vspace*{-1mm} {\footnotesize {\bf Table~2}~ Baryon number asymmetry $Y_B$ predicted for the parameter sets given in Table 1. $M_{\tilde\sigma}$ and $m_X$ are given in GeV units.} \end{figure} The baryon number asymmetry $Y_B(\equiv\frac{n_B}{s})$ is expressed by using the solution $Y_L$ of the Boltzmann equations as \begin{equation} Y_B=-\frac{8}{23}Y_L(z_{\rm EW}), \end{equation} where $z_{\rm EW}$ is related to the sphaleron decoupling temperature $T_{\rm EW}$ by $z_{\rm EW}=\frac{M_{-1}}{T_{\rm EW}}$. The baryon number asymmetry predicted for the parameters given in Table 1 is listed in Table 2 for several values of $(M_{\tilde\sigma},m_X)$. These results show that the model could generate sufficient baryon number asymmetry compared with the observed value $8.1\times 10^{-11}<Y_B< 9.2\times 10^{-11}$ (95\%~C.L.) \cite{pdg}, as long as the relevant parameters take suitable values.\footnote{For a more precise estimation, one could refer to the study in \cite{res2}, which includes the analysis not only of the mixing of the heavy neutrinos but also of oscillations among them.} We note that a light $\tilde\sigma$, which can contribute to the invisible decay of the Higgs particle $\tilde h$, is also allowed from the viewpoint of the generation of the baryon number asymmetry.
The condition (\ref{c-nmass}) imposed by the neutrino oscillation data requires $|\lambda_5|=O(10^{-4})$ for the above numerical results. As we will see in the next section, this is consistent with the constraint derived from the dark matter direct search. The values of $\lambda_5$ and $\tilde h_{\alpha 1}$ used in the above study are found to be realized through eq.~(\ref{l5c}) for a cut-off scale such as $M_\ast=O(10^4)$~TeV, since we assume $\langle S\rangle =M_1$ here. Even if we do not assume this relation and $\langle S\rangle$ is supposed to have a larger value, a similar result is expected to be obtained for a larger value of $M_\ast$ and smaller values of $|y_i|$ and $|\tilde y_i|$. \section{Physics in dark sector} \subsection{Relic abundance and detection of dark matter} It is well known that there are three possible mass ranges for an inert doublet dark matter to realize the required relic abundance \cite{idm,idm1}. We consider the high-mass region here.\footnote{We note that a much more severe mass degeneracy between the right-handed neutrinos is required in the low-mass region if the resonant leptogenesis is applied to the model. This is because the wash-out processes for the generated lepton asymmetry remain in thermal equilibrium until a much later period in this case.} The $\eta_R$ relic abundance can be estimated along the same lines as in the original model \cite{ks,idm1}. However, we have to take into account that the thermally averaged (co)annihilation cross section $\langle\sigma_{\rm eff}v\rangle$ receives additional contributions in the present model from processes with $X_\mu$ or $\tilde\sigma$ in the final or intermediate states. Moreover, for the inert doublet dark matter $\eta_R$, the direct search imposes severe constraints on the scalar couplings $\lambda_i$. First, we consider the constraint induced through inelastic scattering of $\eta_R$ with a nucleus.
Since the masses of $\eta_R$ and $\eta_I$ are almost degenerate for the small values of $|\lambda_5|$ as found from eq.~(\ref{mdif}), this inelastic scattering of $\eta_R$ mediated by the $Z^0$ exchange brings about substantial effects to the direct search experiments \cite{inel,l5}. The interaction of $\eta_{R}$ relevant to this process is given by \begin{equation} {\cal L}=\frac{g}{2\cos\theta_W}Z^\mu\left(\eta_R\partial_\mu\eta_I - \eta_I\partial_\mu\eta_R\right). \label{v-int} \end{equation} Inelastic $\eta_R$-nucleus scattering can occur for $\eta_R$ whose velocity is larger than the minimum value \cite{inelvel} given by \begin{equation} v_{\rm min}=\frac{1}{\sqrt{2m_NE_R}}\left(\frac{m_NE_R}{\mu_N} +\delta\right), \end{equation} where $\delta$ is the mass difference between $\eta_R$ and $\eta_I$ defined in eq.(\ref{mdif}). $E_R$ is the nucleus recoil energy. The mass of the target nucleus and the reduced mass of the nucleus-$\eta_R$ system are represented by $m_N$ and $\mu_N$. The mass difference $\delta$ is constrained by the fact that no dark matter signal has been found in the direct search yet \cite{direct1,lux,direct2}. This condition might be estimated as $\delta~{^>_\sim}150$~keV \cite{l5}. Since $\delta$ is related to $\lambda_5$ through eq.~(\ref{mdif}), the condition on $\delta$ constrains the value of $|\lambda_5|$ to satisfy \cite{ks} \begin{equation} |\lambda_5|\simeq \frac{M_{\eta_R}\delta}{\langle\phi\rangle^2} ~{^>_\sim}~5.0\times 10^{-6} \left(\frac{M_{\eta_R}}{1~{\rm TeV}}\right) \left(\frac{\delta}{150~{\rm keV}}\right). \label{direct} \end{equation} Since $\tilde\lambda_5=O(1)$ is expected, eq.~(\ref{l5c}) suggests that $\langle S\rangle~{^>_\sim}~5\times 10^{-6}M_\ast$ should be satisfied. The present results from a dark matter direct search also impose a constraint on the values of the scalar couplings $\lambda_{3,4}$ and $\lambda_6$. 
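The numbers entering the kinematics above and eq.~(\ref{direct}) can be checked directly. The sketch below uses an illustrative xenon target and recoil energy; only $\delta=150$~keV, $M_{\eta_R}=1$~TeV and $\langle\phi\rangle=174$~GeV are taken from the text.

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def v_min_kms(m_N, E_R, M_dm, delta):
    """Minimum eta_R velocity (km/s) for inelastic scattering,
    following the v_min formula; all mass/energy inputs in GeV."""
    mu = m_N * M_dm / (m_N + M_dm)  # nucleus-eta_R reduced mass
    v = (m_N * E_R / mu + delta) / math.sqrt(2.0 * m_N * E_R)
    return v * C_KMS

def lambda5_bound(M_dm, delta, vev=174.0):
    """|lambda_5| ~ M_{eta_R} * delta / <phi>^2, as in eq. (direct)."""
    return M_dm * delta / vev ** 2

# Xenon-131 target (~122 GeV), 30 keV recoil, 1 TeV dark matter,
# delta = 150 keV; the recoil energy is an illustrative choice.
m_xe = 131 * 0.931494
v = v_min_kms(m_xe, 30.0e-6, 1000.0, 150.0e-6)
lam5 = lambda5_bound(1000.0, 150.0e-6)
```

For these inputs $v_{\rm min}\simeq 640$~km/s, near the upper end of the galactic dark matter velocity distribution, which is why a splitting $\delta\gtrsim 150$~keV suppresses the inelastic signal; the corresponding $|\lambda_5|$ reproduces the $5.0\times 10^{-6}$ of eq.~(\ref{direct}).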
Elastic $\eta_R$-nucleus scattering is induced through the exchange of $\tilde h$ and $\tilde\sigma$. The corresponding cross section for $\eta_R$-nucleon scattering at zero momentum transfer can be calculated to be \begin{equation} \sigma_n^0=\frac{f^{(n)2}m_n^4\lambda_+^2}{8\pi M_{\eta_R}^2m_{\tilde h}^4} \left(1 +\frac{\lambda_6^2}{4\kappa\lambda_1}\right)^2, \label{direct0} \end{equation} where $m_n$ is the nucleon mass and $f^{(n)}\simeq 0.3$. The second term in the parentheses comes from the $\tilde\sigma$ exchange. If we apply the present direct search constraint $\sigma_n^0<1\times 10^{-44}~{\rm cm}^2$ for $M_{\eta_R}=O(1)$~TeV \cite{lux}, we find that the scalar couplings $\lambda_{3,4}$ should satisfy \begin{equation} \lambda_+\left(1+\frac{\lambda_6^2}{4\kappa\lambda_1}\right) <1.5\left(\frac{M_{\eta_R}}{1~{\rm TeV}}\right), \label{direct01} \end{equation} where $\lambda_+\simeq\lambda_3+\lambda_4$. Since the potential stability requires $\lambda_6^2<4\kappa\lambda_1$ as seen before, the $\tilde\sigma$ exchange contribution to the $\eta_R$-nucleon scattering can generally be neglected except when $\lambda_6^2$ is of the same order as $4\kappa\lambda_1$. We now proceed to the estimation of the $\eta_R$ relic abundance taking into account the conditions discussed above. We use the notation $(\eta_1,\eta_2,\eta_3,\eta_4)=(\eta_R,\eta_I,\eta_+,\eta_-)$ for convenience here.
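Before proceeding, the bound (\ref{direct01}) can be recovered numerically from eq.~(\ref{direct0}); the nucleon parameters are standard, and $m_{\tilde h}=125$~GeV is assumed.

```python
import math

GEV2_TO_CM2 = 3.894e-28  # (hbar c)^2 in cm^2 GeV^2

def sigma_n0(lam_plus, M_dm, sigma_term=0.0, f_n=0.3, m_n=0.939, m_h=125.0):
    """eta_R-nucleon elastic cross section (cm^2), eq. (direct0).
    sigma_term stands for lambda_6^2 / (4 kappa lambda_1)."""
    s = (f_n ** 2 * m_n ** 4 * lam_plus ** 2
         / (8.0 * math.pi * M_dm ** 2 * m_h ** 4)
         * (1.0 + sigma_term) ** 2)
    return s * GEV2_TO_CM2

# For lambda_+ = 1.5 and M_{eta_R} = 1 TeV (no sigma~ contribution) the
# cross section lands right at the quoted 1e-44 cm^2 bound.
sigma_at_bound = sigma_n0(1.5, 1000.0)
```

The evaluation confirms that $\lambda_+\simeq 1.5$ saturates the direct search limit for a 1~TeV $\eta_R$, which is the content of eq.~(\ref{direct01}).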
The dominant parts of the effective (co)annihilation cross section including the new contributions are calculated to be \begin{eqnarray} \langle\sigma_{\rm eff}v\rangle&\simeq&\frac{1}{128\pi M_{\eta_1}^2} \left(\frac{g_2^4(1+2\cos^4\theta_W)}{\cos^4\theta_W} + \frac{2g_2^2g_X^2}{\cos^2\theta_W} + g_X^4\right) \left(N_{11}+N_{22}+ 2N_{34}\right) \nonumber\\ &+&\frac{1}{32\pi M_{\eta_1}^2} \left(\frac{g_2^4\sin^2\theta_W}{\cos^2\theta_W} + g_2^2g_X^2\right) \left(N_{13}+N_{14}+N_{23}+ N_{24}\right) \nonumber \\ &+&\frac{1}{64\pi M_{\eta_1}^2} \Big[\left\{\lambda_+^2+\lambda_-^2+2(\lambda_3^2+\lambda_7^2)\right\} (N_{11}+N_{22}) \nonumber \\ &+&(\lambda_+-\lambda_-)^2(N_{33}+N_{44}+ N_{12}) +\left\{(\lambda_+ +\lambda_-)^2 +4\lambda_3^2+ 2\lambda_7^2 \right\}N_{34} \nonumber\\ &+&\left\{(\lambda_+-\lambda_3)^2+ (\lambda_- -\lambda_3)^2\right\} (N_{13}+N_{14}+N_{23}+N_{24})\Big], \label{cross} \end{eqnarray} where $g_X$ is assumed to be much smaller than $g_Y$ and then $X_\mu$ is sufficiently lighter than the dark matter $\eta_R$. $N_{ij}$ is defined by using $g_{\rm eff}=\sum_i \frac{n^{\rm eq}_i}{n_1^{\rm eq}}$, \begin{equation} N_{ij}\equiv\frac{1}{g_{\rm eff}^2} \frac{n_i^{\rm eq}}{n_1^{\rm eq}}\frac{n_j^{\rm eq}} {n_1^{\rm eq}} =\frac{1}{g_{\rm eff}^2} \left(\frac{M_{\eta_i}M_{\eta_j}}{M_{\eta_1}^2}\right)^{\frac{3}{2}} \exp\left[-\frac{M_{\eta_i}+M_{\eta_j}-2M_{\eta_1}}{T}\right], \label{eqfactor} \end{equation} where $n_i$ is for the $\eta_i$ number density and $n_i^{\rm eq}$ is its equilibrium value. In order to estimate the relic abundance of $\eta_R$, we use the well-known analytic formula instead of solving the Boltzmann equation numerically. The formula is given by \cite{relic}, \begin{equation} \Omega_{\eta_1}h^2\simeq \frac{1.07\times 10^9~{\rm GeV}^{-1}} {J(x_F) g_\ast^{\frac{1}{2}}m_{\rm pl}}, \end{equation} where $g_\ast$ is the relativistic degrees of freedom. 
The freeze-out temperature $T_F(\equiv \frac{M_{\eta_1}}{x_F})$ and $J(x_F)$ are defined as \begin{equation} x_F=\ln\frac{0.038~ m_{\rm pl}~ g_{\rm eff}~ M_{\rm \eta_1}\langle \sigma_{\rm eff}v\rangle }{(g_\ast x_F)^{\frac{1}{2}}}, \qquad J(x_F)=\int^\infty_{x_F} \frac{\langle\sigma_{\rm eff}v\rangle}{x^2}dx. \end{equation} \begin{figure}[t] \begin{center} \epsfxsize=7.5cm \leavevmode \epsfbox{mrelic.eps} \hspace*{5mm} \epsfxsize=7.5cm \leavevmode \epsfbox{prelic.eps} \end{center} \vspace*{-3mm} {\footnotesize {\bf Fig.~3}~~Relic abundance of $\eta_R$ in the presence of the new interactions. It is plotted as a function of $\lambda_4$ for typical sets of $(|\lambda_7|,\lambda_3)$. In the left and right panels, $\lambda_3$ is assumed to be negative and positive, respectively. The horizontal dashed line stands for the observed value $\Omega_{\eta_R}h^2=0.12$ \cite{planck}. In this plot, $g_X=0.1g_Y$ and $\lambda_5=-10^{-4}$ are assumed.} \end{figure} In Fig.~3 we show the predicted relic abundance of $\eta_R$ when the new interactions are taken into account. It is plotted as a function of $\lambda_4$ for typical values of $(|\lambda_7|,\lambda_3)$. To plot this figure, we assume a small value for $g_X$, such as $0.1g_Y$, and we fix the value of $m_\eta^2+\lambda_7\langle S\rangle^2$ at 1~${\rm TeV}^2$ for $\langle S\rangle=2$~TeV. Thus, the mass of $X_\mu$ is comparable to that of the weak bosons, and $\lambda_7$ is confined to $|\lambda_7| < 0.25$. The figure shows that the above cross section can explain the required dark matter relic abundance for a wide range of values of $\lambda_{3,4}$. Since the additional (co)annihilation processes can give substantial contributions for a larger $|\lambda_7|$ in this extended model, $|\lambda_3|$ and $|\lambda_4|$ can take much smaller values in comparison with the values required in the original model \cite{ks}.
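The analytic freeze-out formula above can be sketched for a constant $\langle\sigma_{\rm eff}v\rangle$, for which $J(x_F)=\langle\sigma_{\rm eff}v\rangle/x_F$; the cross section value below is an illustrative weak-scale number, not the full expression of eq.~(\ref{cross}).

```python
import math

M_PL = 1.22e19   # Planck mass in GeV
G_STAR = 106.75  # relativistic degrees of freedom at freeze-out

def freeze_out_xf(M, sigma_v, g_eff, x0=20.0):
    """Solve the implicit x_F equation by fixed-point iteration."""
    x = x0
    for _ in range(50):
        x = math.log(0.038 * M_PL * g_eff * M * sigma_v
                     / math.sqrt(G_STAR * x))
    return x

def omega_h2(M, sigma_v, g_eff):
    """Omega h^2 for a constant <sigma_eff v>."""
    xf = freeze_out_xf(M, sigma_v, g_eff)
    J = sigma_v / xf  # J(x_F) for a constant cross section
    return 1.07e9 / (J * math.sqrt(G_STAR) * M_PL), xf

# 1 TeV dark matter, g_eff = 4 (eta_R, eta_I, eta_+, eta_-), and an
# illustrative cross section of 2e-9 GeV^-2.
oh2, xf = omega_h2(1000.0, 2.0e-9, 4.0)
```

This reproduces the familiar picture: $x_F\simeq 25$ and $\Omega h^2\simeq 0.1$ for a weak-scale annihilation cross section, consistent with the curves crossing the observed value in Fig.~3.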
From the viewpoint of dark matter searches, however, a small $|\lambda_7|$ may be promising, as suggested through eq.~(\ref{direct0}). Since larger values of $|\lambda_{3,4}|$ are required by the relic abundance in this case, the $\eta_R$ dark matter could be found in the Xenon1T direct search as discussed in \cite{ks}. On the other hand, it might be difficult to detect $\eta_R$ even in the Xenon1T experiment in the case of a large $|\lambda_7|$. \subsection{Cosmological signal} In this model, the main phenomenological difference from the original Ma model is the existence of the neutral scalar $\tilde\sigma$ and the neutral gauge boson $X_\mu$.\footnote{A U(1) extended model has been discussed in a different context \cite{mr}. } They have no direct interaction with the contents of the standard model except for the one caused by the $\lambda_6S^\dagger S\phi^\dagger\phi$ term. If $\tilde\sigma$ is light enough, it induces the Higgs invisible decay through this term, as discussed already. Even in that case, if $\lambda_6$ satisfies the required condition, the model is consistent with the present data obtained from collider experiments. Moreover, we find no substantial constraint on the masses of $\tilde\sigma$ and $X_\mu$ from the study of the baryon number asymmetry in the previous section, at least for the assumed value of $\langle S\rangle$. On the other hand, these new particles could have a crucial influence on the thermal history of the Universe depending on their masses. First of all, we consider the case where $X_\mu$ is heavier than $\tilde\sigma$, so that $g_X^2>2\kappa$ is satisfied. The new gauge boson $X_\mu$ couples only with $\tilde\sigma$, $\eta$, $N_i$, and $\tilde N_i$. Since the latter three are considered to be much heavier than $X_\mu$, $X_\mu$ can decay only to $\gamma\tilde\sigma$ and $\ell_\alpha\bar\ell_\beta$ through one-loop diagrams with $\eta$ or $N_i$ and $\tilde N_i$ in the internal lines.
If we take into account that the neutrino Yukawa couplings $h_{\alpha i}$ and $\tilde h_{\alpha i}$ should be of $O(10^{-4})$, we find that the dominant contribution to the $X_\mu$ decay comes from the $X_\mu\rightarrow\gamma\tilde\sigma$ process. Its decay width can be estimated as \begin{equation} \Gamma_X\simeq \frac{\alpha_{\rm em}{\cal F}^2}{288(4\pi)^4} \frac{m_X^5}{M_{\eta_c}^4} \left(1-\frac{M_{\tilde\sigma}^2}{m_X^2}\right)^3, \label{crxdecay} \end{equation} where ${\cal F}=\lambda_7-\frac{\lambda_3\lambda_6}{2\lambda_1}$. If we impose that $\Gamma_X ~{^>_\sim}~H$ is satisfied at the temperature where both the freeze-out of the neutron-to-proton ratio and the neutrino decoupling are completed, ${\cal F}$ is found to have a lower bound, \begin{equation} |{\cal F}|~{^>_\sim}~ 10^{-8} \left(\frac{M_{\eta_c}}{1~{\rm TeV}}\right)^2 \left(\frac{300~{\rm GeV}}{m_X}\right)^{\frac{5}{2}} \left(\frac{T}{1~{\rm MeV}}\right) \left(1-\frac{M_{\tilde\sigma}^2}{m_X^2}\right)^{-\frac{3}{2}}. \label{xdecay} \end{equation} Using the constraint on $\lambda_{1,6}$ obtained from the Higgs sector phenomenology and the constraint on $\lambda_{3,7}$ required by the dark matter abundance, $|{\cal F}|$ is found to take a large value of $O(0.1)$. This suggests that $\Gamma_X>H$ could be satisfied at the period where the photon temperature is about 1 MeV even for $m_X~{^>_\sim}~O(1)$~GeV. Although the decay product $\tilde\sigma$ does not have direct interactions with the standard model contents, it can decay to them through loop effects. Such decay products could affect the cosmological thermal history depending on the time when $\Gamma_{\tilde\sigma}\simeq H$ is realized. Since the neutrino Yukawa couplings should be of $O(10^{-4})$, the $\tilde\sigma$ decay is dominated by a two photon final state. 
It is induced through the one-loop diagram with a charged $\eta$ in the internal line, and the decay width can be estimated as \begin{equation} \Gamma_{\tilde\sigma}\simeq \frac{\alpha_{\rm em}^2{\cal F}^2} {9216 \pi^3g_X^2}\frac{M_{\tilde\sigma}^3m_X^2}{M_{\eta_c}^4}. \label{crsdecay} \end{equation} If $g_X<0.88\kappa^{\frac{3}{10}}$ is satisfied for $g_X^2 > 2\kappa$, $\Gamma_{\tilde\sigma}$ is larger than $\Gamma_X$. In such a case, $\tilde\sigma$ is expected to decay instantaneously after the $X_\mu$ decay yields it. Since eq.~(\ref{xdecay}) shows that this $\tilde\sigma$ decay occurs at $T>1$~MeV, no cosmological effect is expected. In the opposite case, $g_X>0.88\kappa^{\frac{3}{10}}$, the decay of $\tilde\sigma$ is delayed with respect to its production. If we use the condition $\Gamma_{\tilde\sigma} \sim H$ to make a rough estimate of the temperature at which the $\tilde\sigma$ decay comes into thermal equilibrium, we have \begin{equation} T\sim 54 g_\ast^{-1/4}\left(\frac{|{\cal F}|}{10^{-7}}\right) \left(\frac{1~{\rm TeV}}{M_{\eta_c}}\right)^2 \left(\frac{m_X}{300~{\rm GeV}}\right)^{\frac{3}{2}} \left(\frac{M_{\tilde\sigma}}{m_X}\right)^{\frac{3}{2}} \left(\frac{\langle S\rangle}{2~{\rm TeV}}\right) ~{\rm MeV}. \end{equation} From this result, we find that the $\tilde\sigma$ decay could occur before the neutrino decoupling as long as both $|{\cal F}|$ and $m_X$ take suitable values for a given $M_{\tilde\sigma}$. In this case, this decay process does not affect the effective neutrino number in the Universe. For example, a light $X_\mu$ with $m_X=O(1)$~GeV does not affect it for $|{\cal F}|>O(10^{-4})$ as long as $10^{-4}m_X<M_{\tilde\sigma}<m_X$ is satisfied. On the other hand, $\ell_\alpha\bar\ell_\beta$ could also be a dominant decay mode of $X_\mu$ for smaller values of $|{\cal F}|$ such as $|{\cal F}|~{^<_\sim}~ 10^{-7} \frac{g_X}{g_Y} \left(\frac{\bar h}{10^{-4}}\right)^2$.
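The hierarchy between the width in eq.~(\ref{crxdecay}) and the Hubble rate can be evaluated for the $(M_{\tilde\sigma},m_X)=(200,300)$~GeV benchmark of Fig.~2; ${\cal F}=0.1$ is the $O(0.1)$ value quoted above, and $g_\ast\simeq 10.75$ around $T\sim 1$~MeV.

```python
import math

ALPHA_EM = 1.0 / 137.036
M_PL = 1.22e19  # GeV

def gamma_x(F, m_x, M_sigma, M_eta):
    """X_mu -> gamma sigma~ width, eq. (crxdecay); inputs in GeV."""
    return (ALPHA_EM * F ** 2 / (288.0 * (4.0 * math.pi) ** 4)
            * m_x ** 5 / M_eta ** 4
            * (1.0 - M_sigma ** 2 / m_x ** 2) ** 3)

def hubble(T, g_star=10.75):
    """H(T) = 1.66 g_*^{1/2} T^2 / m_pl, with g_* appropriate near 1 MeV."""
    return 1.66 * math.sqrt(g_star) * T ** 2 / M_PL

# Benchmark: F = 0.1, m_X = 300 GeV, M_sigma = 200 GeV, M_eta_c = 1 TeV.
Gx = gamma_x(0.1, 300.0, 200.0, 1000.0)
H_1MeV = hubble(1.0e-3)
```

Numerically $\Gamma_X\sim 4\times 10^{-12}$~GeV while $H(1~{\rm MeV})\sim 4\times 10^{-25}$~GeV, so the bound (\ref{xdecay}) is satisfied with enormous margin for $|{\cal F}|=O(0.1)$, as stated in the text.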
Here, we recall that the averaged value $\bar h$ of the relevant neutrino Yukawa couplings $h_{\alpha i}$ is required to be of $O(10^{-4})$ to explain both the neutrino oscillation data and the baryon number asymmetry in the Universe. Such small values of $|{\cal F}|$ could also be consistent with the dark matter abundance as long as $\lambda_3$ or $\lambda_4$ is of $O(1)$ and both $|\lambda_6|$ and $|\lambda_7|$ are small enough. In such a case, this decay process could still be in thermal equilibrium after the neutrino decoupling. The neutrinos produced there could contribute to the effective neutrino number as non-thermal neutrino components. Although this possibility may be interesting from a cosmological viewpoint, a detailed analysis is beyond the scope of this paper. Finally, we study the case where $X_\mu$ is extremely light, so that $\tilde\sigma$ is heavier than $X_\mu$. In such a case, the $X_\mu$ decay could generally cause a cosmological problem since its decay modes are limited. The cosmological implications could change largely without affecting the other results of the model obtained in the previous part. As an interesting example, we address the situation $m_X<2m_e$, for which the gauge coupling $g_X$ becomes unnaturally small.\footnote{We note that leptogenesis could occur successfully in this case, as found in the third row of Table 2.} There, $X_\mu$ can decay only to neutrino-antineutrino pairs through one-loop diagrams. These non-thermally produced neutrinos affect the present effective neutrino number. Its deviation from the standard value $N_{\rm eff}=3.046$ may be estimated as done in \cite{dr}. The non-thermal neutrinos shift the effective neutrino number from the standard value by \begin{equation} \Delta N_{\rm eff}(T)=\frac{120}{7\pi^2}\left(\frac{11}{4}\right)^{\frac{4}{3}} \frac{\rho_\nu^{\rm nth}(T)}{T^4}, \end{equation} where $\rho_\nu^{\rm nth}(T)$ is the energy density of the non-thermally produced neutrinos at the photon temperature $T$.
This energy density in the co-moving volume $R^3$ evolves following the differential equation \begin{equation} \frac{d(\rho_\nu^{\rm nth}R^3)}{dt}=\Gamma_X(\rho_XR^3) -H(\rho_\nu^{\rm nth}R^3). \end{equation} Assuming radiation domination throughout this evolution, we find the solution \begin{equation} \rho_\nu^{\rm nth}R^3=m_XN_X^f\frac{1}{\sqrt{\Gamma_Xt}}\xi(t), \end{equation} where $\xi(t)$ is defined as $\xi(t)=\frac{\sqrt{\pi}}{2}{\rm erf}(\sqrt{\Gamma_Xt})-\sqrt{\Gamma_Xt}~e^{-\Gamma_Xt}$ and reduces to $\frac{\sqrt{\pi}}{2}$ in the limit $\Gamma_Xt\gg 1$. $N_X^f$ stands for the $X_\mu$ number in the co-moving volume $R^3$ at the freeze-out time of $X_\mu$. Since this time can be identified with the freeze-out time of $\eta_R$, $X_\mu$ is still relativistic there, so that $\frac{N_X^f}{R^3}=\frac{\zeta(3)}{\pi^2}{\rm g}_XT^3$ holds. Using these, we finally obtain the deviation of the effective neutrino number due to the non-thermally produced neutrinos: \begin{eqnarray} \Delta N_{\rm eff}&=&\frac{60\sqrt{2}\zeta(3)}{7\pi^{\frac{7}{2}}} \left(\frac{11}{4}\right)^{\frac{4}{3}}\left(\frac{8\pi^3}{90}\right)^{\frac{1}{4}} {\rm g}_R^{\frac{1}{4}}{\rm g}_Xm_X\sqrt{\frac{1}{\Gamma_Xm_{\rm pl}}} \nonumber \\ &\simeq& 0.39{\rm g}_X\left(\frac{m_X}{\rm MeV}\right) \left(\frac{10^{-20}~{\rm MeV}}{\Gamma_X}\right)^{\frac{1}{2}}, \end{eqnarray} where ${\rm g}_R$ stands for the present relativistic degrees of freedom of radiation and can be approximated by the standard model value. This result suggests that the decay width of $X_\mu$ should be $\Gamma_X~{^>_\sim}~10^{-20}$~MeV for $X_\mu\rightarrow\nu_\alpha\bar\nu_\beta$ or $X_\mu\rightarrow\nu_\alpha\nu_\beta$ in order to satisfy the present observational results \cite{planck}. However, since the dominant contribution comes from the latter mode, which is induced through a one-loop diagram with the small neutrino Yukawa couplings of $O(10^{-4})$ and also $\lambda_5$ of $O(10^{-4})$, the decay width is much smaller than the required value.
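The stated limit of $\xi(t)$ and the numerical form of $\Delta N_{\rm eff}$ can be checked directly; $\xi$ is the lower incomplete gamma function $\gamma(3/2,\Gamma_Xt)$.

```python
import math

def xi(x):
    """xi(Gamma_X t) = (sqrt(pi)/2) erf(sqrt(x)) - sqrt(x) exp(-x),
    i.e. the lower incomplete gamma function gamma(3/2, x);
    it tends to sqrt(pi)/2 for x >> 1, as used in the text."""
    return (0.5 * math.sqrt(math.pi) * math.erf(math.sqrt(x))
            - math.sqrt(x) * math.exp(-x))

def delta_neff(g_x, m_x_mev, gamma_x_mev):
    """Numerical form of the final Delta N_eff expression."""
    return 0.39 * g_x * m_x_mev * math.sqrt(1.0e-20 / gamma_x_mev)

xi_limit = xi(50.0)
dn = delta_neff(1.0, 1.0, 1.0e-20)
```

The second function makes the scaling explicit: for ${\rm g}_X m_X\sim 1$~MeV, any width below $10^{-20}$~MeV pushes $\Delta N_{\rm eff}$ above the $O(0.1)$ level probed by the observations.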
This means that the neutrinos produced non-thermally through the decay of $X_\mu$ give too large a contribution to $\Delta N_{\rm eff}$. Thus, the model with $m_X<2m_e$ seems to be ruled out by the observed effective neutrino number. If we introduce kinetic term mixing between $X_\mu$ and $B_\mu$, this problem might be evaded even in such a case. This point is briefly discussed in the appendix. In the present model, the new U(1)$_X$ symmetry is assumed to be local. Even if this symmetry is supposed to be global, the scenario works in the same way. However, the reasoning for the pairwise introduction of $N_i$ and $\tilde N_i$ is lost in the global U(1) case. The difference between them is whether or not a massless Nambu-Goldstone boson appears after the breaking of the U(1)$_X$ symmetry. This boson behaves as dark radiation and changes the effective neutrino number in the Universe in the same way as discussed in \cite{wein}. \section{Conclusion} We have considered an extension of the radiative neutrino mass model proposed by Ma with a low energy U(1) gauge symmetry. If we assume a cut-off scale of the model at $O(10^{4})$~TeV and the breaking of this U(1) at a rather low energy scale such as $O(1)$~TeV, several assumptions adopted in the original model to explain the neutrino masses, the dark matter abundance, and the baryon number asymmetry in the Universe could be closely related. We have shown that the breaking of this U(1) symmetry could give a common background for these assumptions. Both the mass degeneracy among the right-handed neutrinos required for the resonant decay of the lightest right-handed neutrino and its small neutrino Yukawa coupling required for the out-of-equilibrium decay could be explained by the same reasoning through this extension.
The $Z_2$ symmetry, which forbids the tree-level neutrino mass generation and guarantees the dark matter stability, has the same origin as the smallness of the quartic coupling constant between the Higgs doublet scalar and the inert doublet scalar, which is an important feature of the model to explain the small neutrino masses. It is useful to recall that these are independent assumptions in the original Ma model. We have also discussed some cosmological issues of the model which appear to be related to this extension. The effective neutrino number could be an interesting subject in this model. It is interesting that we can have an economical model which could explain the three big problems in the standard model through a simple extension of the Ma model with a low energy U(1) symmetry. A detailed study of the model might give us a clue to the construction of a complete framework beyond the standard model. We will present further results obtained from a quantitative analysis of the related problems in the model elsewhere. \newpage \section*{Appendix} In this appendix, we consider cosmological issues in the case with a very light $X_\mu$, where the resonant leptogenesis occurs successfully as discussed in the text. In order to avoid the late time decay of $X_\mu$, we might introduce kinetic term mixing between the gauge fields $\hat B_\mu$ and $\hat X_\mu$ for the gauge groups U(1)$_Y$ and U(1)$_X$.\footnote{Kinetic term mixing of Abelian gauge fields has been discussed in various phenomenological studies \cite{kinet}. Recent work related to dark matter can be found in \cite{dm-kinet}.} The kinetic term mixing between them may be given by \begin{equation} -\frac{1}{4}\hat F_{\mu\nu}\hat F^{\mu\nu}-\frac{1}{4}\hat G_{\mu\nu} \hat G^{\mu\nu}-\frac{\sin\chi}{2}\hat F_{\mu\nu}\hat G^{\mu\nu}, \end{equation} where $\hat F_{\mu\nu}$ and $\hat G_{\mu\nu}$ are the field strengths of $\hat B_\mu$ and $\hat X_\mu$, respectively. 
We can diagonalize these terms by taking the canonically normalized basis $B_\mu$ and $X_\mu$ as \begin{equation} \left(\begin{array}{c} \hat B_\mu \\ \hat X_\mu \end{array}\right)= \left(\begin{array}{cc} 1 & -\tan\chi \\ 0 & \displaystyle{\frac{1}{\cos\chi}}\\ \end{array}\right)\left(\begin{array}{c} B_\mu \\ X_\mu \\ \end{array}\right). \end{equation} The modified U(1)$_X$ charge in this new basis is given by \begin{equation} Q_X=\frac{\hat Q_X}{\cos\chi}+\frac{g_Y}{g_X}Y\tan\chi, \label{xcharge} \end{equation} where the U(1)$_Y$ charge and both coupling constants $g_Y$ and $g_X$ are defined as in the no-mixing case. This suggests that the standard model contents with $Y\not=0$ couple with $X_\mu$ as long as the kinetic term mixing exists. As a result, the analysis of the direct search and the relic abundance of dark matter should be modified. In this case, the following new interaction should be added to eq.~(\ref{v-int}): \begin{equation} \frac{g_X}{2}\left(\frac{1}{\cos\chi}+\frac{g_Y}{2g_X}\tan\chi\right) X^\mu\left(\eta_R\partial_\mu\eta_I - \eta_I\partial_\mu\eta_R\right). \label{v-int1} \end{equation} If the kinetic term mixing exists, inelastic scattering of $\eta_R$ can also be brought about by the $X_\mu$ exchange. Since the $\eta_R$-nucleon scattering cross sections $\sigma_n^0(X_\mu)$ and $\sigma_n^0(Z_\mu)$, which are mediated by the $X_\mu$ and $Z_\mu$ exchange at zero momentum transfer, are related to each other as \begin{equation} \sigma_n^0(X_\mu)\simeq \left(\frac{m_Z^2}{m_X^2}\tan\chi\right)^2\sigma_n^0(Z_\mu), \end{equation} the present experimental results require that the kinetic term mixing satisfy \begin{equation} \tan\chi~{^<_\sim}~\frac{m_X^2}{m_Z^2}. \label{acon} \end{equation} This shows that the kinetic term mixing should be sufficiently small for $m_X~{^<_\sim}~ O(1)$~GeV.
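The field redefinition above can be verified numerically: the transformation indeed reduces the mixed kinetic quadratic form to the identity, and the bound (\ref{acon}) can be evaluated for an illustrative $m_X=1$~GeV.

```python
import numpy as np

def mixing_matrix(chi):
    """Field redefinition (B_hat, X_hat) = A (B, X) that removes
    the sin(chi) F_hat G_hat kinetic mixing."""
    return np.array([[1.0, -np.tan(chi)],
                     [0.0, 1.0 / np.cos(chi)]])

def kinetic_matrix(chi):
    """Quadratic form of the kinetic terms in the hatted basis."""
    return np.array([[1.0, np.sin(chi)],
                     [np.sin(chi), 1.0]])

chi = 0.3  # illustrative mixing angle
A = mixing_matrix(chi)
K_diag = A.T @ kinetic_matrix(chi) @ A  # should be the 2x2 identity

def tan_chi_max(m_x, m_z=91.19):
    """Upper bound on the mixing from eq. (acon), masses in GeV."""
    return m_x ** 2 / m_z ** 2

bound_1gev = tan_chi_max(1.0)
```

For $m_X=1$~GeV the direct-search condition allows only $\tan\chi\lesssim 1.2\times 10^{-4}$, quantifying the statement that the mixing must be small for $m_X\lesssim O(1)$~GeV.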
New non-negligible (co)annihilation modes of $\eta_R$ to the standard model contents could also appear, depending on the magnitude of the kinetic term mixing $\sin\chi$. However, the constraint (\ref{acon}) suggests that the $\eta_R$ relic abundance cannot be affected by the process mediated through the $X_\mu$ exchange. Thus, in the study of the $\eta_R$ relic abundance, even if we introduce the kinetic term mixing, we can neglect its effect as long as the condition (\ref{acon}) is satisfied, and the results obtained in this paper do not change. As another interesting phenomenon caused by the kinetic term mixing, we consider the direct decay of $X_\mu$ to the lighter standard model fermions through tree diagrams. Its decay width can be estimated as \begin{equation} \Gamma_X(f\bar f)=\sum_f\frac{g_Y^2}{16\pi m_X} \left(\frac{Y_f}{2}\right)^2\tan^2\chi. \end{equation} If we impose that $\Gamma_X~{^>_\sim}~H$ is satisfied before the neutrino decoupling, we find that the kinetic term mixing should satisfy \begin{equation} \tan\chi~{^>_\sim}~10^{-11}\left(\frac{1~{\rm GeV}}{m_X}\right)^{\frac{1}{2}} \left(\frac{T}{1~{\rm MeV}}\right). \label{bcon} \end{equation} This shows that a sufficiently small kinetic term mixing is enough to bring about the $X_\mu$ decay to the standard model fermions before the neutrino decoupling. As long as such a very small kinetic term mixing exists, the model can overcome the cosmological difficulty with the effective neutrino number in both cases $m_X>M_{\tilde\sigma}$ and $m_X<M_{\tilde\sigma}$. In particular, if the kinetic term mixing takes a suitable value in the case $m_X< 1$~MeV, the deviation of the effective neutrino number $N_{\rm eff}=3.62\pm 0.25$, which is suggested by the combined analysis of the data from Planck and the $H_0$ measurement from the Hubble Space Telescope \cite{planck}, might be explained. \section*{Acknowledgements} S.~K. is supported by Grant-in-Aid for JSPS fellows (26$\cdot$5862). D.~S.
is supported by JSPS Grant-in-Aid for Scientific Research (C) (Grant No. 24540263) and MEXT Grant-in-Aid for Scientific Research on Innovative Areas (Grant No. 26104009). \newpage \bibliographystyle{unsrt}
\section{Introduction} The model of self-correcting point process was proposed in 1979 by Isham and Westcott \cite{IW} to describe a stationary sequence of events $\left\{t_1, t_2,\ldots \right\}$ which, unlike the Poisson process, does not have independent increments on disjoint intervals. To introduce these processes we denote by $X=\left\{X_t, \;t\geq 0\right\}$ the counting process, i.e., $X_t$ is equal to the number of events on the time interval $\left[0,t\right]$. Recall that for a stationary Poisson process with a constant intensity $S>0$ the increments of $X$ on disjoint intervals are independent and distributed according to the Poisson law $$ \ds \Pb\left\{X_t-X_s=k\right\}=\frac{S^k\left(t-s\right)^k}{k!}\; {\rm e}^{-S\left(t-s\right) },\quad 0 \leq s<t,\quad k=0,1,\ldots. $$ In particular, $$ \ds \Pb\left\{X_{t+{\rm d}t}-X_t>0\right\}=S\;{\rm d}t\;\left(1+o\left(1\right)\right). $$ For a self-correcting point process we have $$ \ds \Pb\left\{X_{t+{\rm d}t}-X_t>0\; |\; {\cal F}_t\right\}=S\left(t,X_t\right)\;{\rm d}t\;\left(1+o\left(1\right)\right) $$ where ${\cal F}_t $ is the $\sigma $-field generated by $\left\{X_s,0\leq s\leq t\right\}$ and the intensity function is $$ S\left(t,X_t\right)=a\,\psi\left(at -X_t\right),\quad t\geq 0. $$ Here $a >0$ and the function $\psi\left(\cdot \right)$ satisfies the following conditions: \begin{enumerate} \item $0\leq \psi\left(x\right)<\infty $ for any $x\in \RR$, \item there exists a positive constant $c$ such that $ \psi\left(x\right)\geq c $ for any $x>0$, \item $\Liminf_{x\rightarrow \infty } \psi\left(x\right)>1 $, and $\Limsup_{x\rightarrow -\infty } \psi\left(x\right)<1 $. \end{enumerate} Self-correcting processes are also called stress-release processes (see \cite{DaVer}, p. 239). This class of processes is widely used as a mathematical model for non-Poissonian sequences of events.
This model was found especially attractive in the description of earthquakes (see Ogata and Vere-Jones \cite{VJ-O1}, Lu {\sl et al.} \cite{LHB}). \bigskip \noindent {\bf Example 1.} Let \begin{equation*} S\left(t,X_t \right)=\exp \left\{\alpha +\beta \left(t-\varrho X_t\right)\right\} \end{equation*} where $\beta >0, \;\varrho >0$. It is easy to see that the conditions 1--3 are fulfilled and the point process with such an intensity function is self-correcting. \bigskip This model was studied by many authors (see the references in \cite{DaVer}). In particular, it was shown that under mild conditions there exists an invariant measure $\mu $ and the law of large numbers (LLN) \begin{equation} \label{1} \frac{1}{T}\int_{0}^{T}h\left(St-X_t\right)\; {\rm d}t\longrightarrow \int_{}^{}h\left(y\right)\;\mu \left({\rm d}y\right) \end{equation} is valid (see Vere-Jones and Ogata \cite{VJ-O2}, Hayashi \cite{Hay}, Zheng \cite{Z}). Here $h\left(\cdot \right)$ is a continuous, integrable (w.r.t. $\mu $) function and $S>0$ is the rate of the point process. For the model of Example 1 we have the LLN if $\varrho >0$ and $\beta >0$. \bigskip As the self-correcting model is an alternative to the stationary Poisson process, it is natural and important to test these two hypotheses by the observations $\left\{t_1,t_2, \ldots\right\}$ on the time interval $\left[0,T\right]$, i.e., to test $$ S\left(t,X_t\right)=S\;\quad {\rm versus}\;\quad S\left(t,X_t\right)=a\,\psi\left(at-X_t\right). $$ Recall that the likelihood ratio in this problem has the following form \begin{align*} L\left(X^T\right)=&\exp\left\{\int_{0}^{T}\ln\frac{a\,\psi\left(at-X_{t-}\right) }{S}\;\left[{\rm d}X_t-S\,{\rm d}t \right]\right.\\ &\left. -\int_{0}^{T}\left[\frac{a\,\psi\left(at-X_{t}\right)}{S}-1- \ln\frac{a\,\psi\left(at-X_{t}\right) }{S}\;\right]\;S\,{\rm d}t \right\}, \end{align*} where $X_{t-} $ is the limit from the left of $X_{t}$ at the point $t$ \cite{LS1}.
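The self-correcting behaviour in Example 1 is easy to observe numerically: between consecutive jumps the intensity is an explicit exponential in $t$, so the compensator can be inverted in closed form and the trajectory simulated exactly; the empirical rate $X_T/T$ then stabilizes near the value $S=1/\varrho$ forced by self-correction. A minimal Python sketch (the parameter values, seed and all names are ours, for illustration only):

```python
import math
import random

def simulate_stress_release(alpha, beta, rho, T, rng):
    """Exact simulation of a point process with intensity
    exp(alpha + beta * (t - rho * X_t)) (Example 1).

    Between jumps X_t is constant, so the compensator is an explicit
    exponential integral; each inter-jump waiting time is obtained by
    solving  int_t^{t+w} exp(alpha + beta*(s - rho*k)) ds = E,  E ~ Exp(1).
    """
    t, k = 0.0, 0                      # current time, current count X_t
    while True:
        e = rng.expovariate(1.0)
        w = math.log1p(beta * e * math.exp(-(alpha + beta * (t - rho * k)))) / beta
        t += w
        if t > T:
            return k                   # X_T
        k += 1

rng = random.Random(2)
T = 2000.0
X_T = simulate_stress_release(alpha=0.0, beta=1.0, rho=2.0, T=T, rng=rng)

# Self-correction keeps t - rho * X_t stochastically bounded,
# hence X_T / T stays close to 1 / rho = 0.5
print(X_T / T)
```

The same inversion scheme is reused below (with $\alpha=0$, $\beta=u/T$, $\varrho=1$) to simulate the local alternatives of the testing problem.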
Therefore, if the function $a\psi\left(\cdot \right)/S$ is separated from 1 then the second integral in this representation tends to infinity and there are many consistent tests. Hence it is more interesting to compare tests in the situations when the alternatives are {\sl contiguous}, i.e. the corresponding sequences of measures are contiguous. This corresponds well to Pitman's approach in hypotheses testing \cite{Pit}. We can have such situations if $a\psi\left(\cdot \right)=S+o\left(1\right) $ with special rates $o\left(1\right)$. In this work we consider one such model, defined by the intensity function $S\left(t,X_t\right)=S \psi\left(\vartheta \left(St-X_t\right)\right)$, where $\vartheta $ is a {\sl small parameter} and $\psi\left(0\right) =1$. We suppose that the function $\psi\left(\cdot \right)$ is smooth, so we can write \begin{align*} &\int_{0}^{T}\left[\psi\left(\vartheta \left(St-X_{t}\right)\right)-1- \ln\psi\left(\vartheta \left(St-X_{t}\right)\right) \;\right]\;S\,{\rm d}t=\\ &\qquad \qquad =\frac{\vartheta ^2\dot\psi\left(0\right)^2S}{2}\int_{0}^{T} \left(St-X_{t}\right)^2{\rm d}t \;\left(1+o\left(1\right)\right). \end{align*} It is easy to see that the rate $\vartheta =\vartheta _T\rightarrow 0$ under the hypothesis $S\left(t,X_t\right)=S $ is $\vartheta _T\sim T^{-1}$ because $$ \frac{1}{S\,T^2}\int_{0}^{T} \left(St-X_{t}\right)^2{\rm d}t =\int_{0}^{1}W_T\left(s\right)^2\,{\rm d}s\Longrightarrow \int_{0}^{1}W\left(s\right)^2\,{\rm d}s $$ where $W_T\left(s\right)=\left(ST\right)^{-1/2}\left(S\,Ts-X_{Ts}\right) \Rightarrow W\left(s\right)$, and $\left\{W\left(s\right),\; 0\leq s\leq 1\right\}$ is a Wiener process.
Note that we put $a=S$, since otherwise \begin{align*} &\frac{\dot\psi\left(0\right)^2 \vartheta _T^2}{2}\int_{0}^{T} \left(at-X_t\right)^2\;{\rm d}t=\\ &\quad =\frac{\dot\psi\left(0\right)^2 \;\vartheta _T^2}{2}\int_{0}^{T}\left(\left(a-S\right)t+\sqrt{ST}\;\frac{St-X_t}{\sqrt{ST}} \right)^2\;{\rm d}t\\ &\quad =\frac{\dot\psi\left(0\right)^2 \;\vartheta _T^2}{2}\,T\, \int_{0}^{1} \left(\left(a-S\right)vT+\sqrt{ST}\;W_T\left(v\right)\right)^2\;{\rm d}v\\ &\quad = \frac{\dot\psi\left(0\right)^2 }{6}\;\vartheta _T^2 \left(a-S \right)^2\;T^3\;\left(1+o\left(1\right)\right). \end{align*} Therefore, if $a\neq S$, then we have to take $\vartheta _T=uT^{-3/2}$ and to test the simple hypothesis ${\scr H}_0: u=0$ against ${\scr H}_1: u>0$. In this case the family of measures is LAN and the usual construction provides us with an {\sl asymptotically uniformly most powerful test} (see, e.g., Roussas \cite{Rou}). Note that according to \eqref{1}, for any fixed alternative $\vartheta >0$ we have the convergence $$ \frac{1}{T}\int_{0}^{T}\left(St-X_{t}\right)^2{\rm d}t \longrightarrow \int_{}^{}y^2\;\mu \left({\rm d}y\right), $$ which, of course, requires another normalization. \bigskip Therefore we consider the problem of hypotheses testing when under hypothesis ${\scr H}_0$ the intensity function is a known constant $S>0$ (Poisson process) and the alternative ${\scr H}_1$ is one-sided composite: a self-correcting process with intensity function $S\left(t,X_t\right)=S \psi\left(\vartheta_T \left(St-X_t\right)\right)$, where for convenience of notation we put $\vartheta _T=u/S\dot\psi\left(0\right)T$ (we suppose that $\dot\psi\left(0\right)>0 $). In this case the corresponding likelihood ratio $Z_T\left(u\right)$ converges to the limit process $$ Z\left(u\right)=\exp\left\{-u\int_{0}^{1}W\left(s\right)\,{\rm d}W\left(s\right)-\frac{u^2}{2}\int_{0}^{1}W\left(s\right)^2\,{\rm d}s \right\}, $$ i.e., the family of measures is {\sl locally asymptotically quadratic} \cite{LC-Y}.
We study three tests: the {\sl score function test}, the {\sl likelihood ratio test} and the {\sl Wald test}, and compare their power functions with the power function of the {\sl Neyman-Pearson test}. Note that we calculate all limits under the hypothesis (Poisson process) and we obtain the limit distributions of the underlying statistics under the alternative (self-correcting process) with the help of Le Cam's Third Lemma. Therefore we do not directly use the conditions 1--3 given above. A similar limit likelihood ratio process arises in the problem of hypotheses testing $u=0$ against $u>0$ for the time series $$ X_j=\left(1-\frac{u}{n}\right)\;X_{j-1} + \varepsilon _j,\qquad \quad j=1,\ldots,n\rightarrow \infty , $$ where $\varepsilon _j$ are i.i.d. random variables, $\Ex \varepsilon _j=0, \Ex \varepsilon _j^2=\sigma ^2$. The asymptotic properties of the tests are described under the hypothesis and alternatives by Chan and Wei \cite{C-W} and Phillips \cite{Phil}. In particular, the limits of the power functions are given with the help of the Ornstein-Uhlenbeck process $$ {\rm d}Y_s=-u\,Y_s\;{\rm d}s+{\rm d}W_s,\quad Y_0=0, \qquad 0\leq s\leq 1. $$ Then Swensen \cite{Swen97} compared these limit powers. For the model of Example 1 the power functions (for local alternatives) were studied by Ogata and Vere-Jones \cite{VJ-O1} and by Luschgy \cite{Lus1}, \cite{Lus2}. The limit likelihood ratio and tests are similar to those of the time series problem mentioned above. Recall as well that Feigin \cite{Fei} noted that the same limit likelihood ratio arises in the problem of testing the simple hypothesis $u=0$ against the one-sided alternative $u>0$ by observations $$ {\rm d}X_t=-\frac{u}{T}\,X_t\;{\rm d}t+{\rm d}W_t,\quad X_0=0, \qquad 0\leq t\leq T\rightarrow \infty . $$ In our case we obtain similar limit expressions for the likelihood ratio and power functions and compare the errors of the tests. The analytical considerations give us an asymptotic (for large values of $u$) ordering of the tests.
The numerical simulations of the tests show that for small values of $\varepsilon $ and moderate values of $u$ the power functions of the likelihood ratio and Wald tests are indistinguishable (from the point of view of numerical simulations) from the Neyman-Pearson envelope. This interesting property was noticed (for $\varepsilon =0.05$) by Elliott {\sl et al.} \cite{Eli} on the basis of $2\cdot 10^3$ simulations. In our work we obtain a similar result with $10^7 $ simulations, and we observe for larger values of $\varepsilon $ that the asymptotic ordering of the tests already holds for moderate values of $u$. A similar problem of hypotheses testing, in which the alternative process is self-exciting \cite{Haw}, was considered in \cite{DaK}. \section{Score Function Test} We observe a trajectory $X^T=\left\{X_t,0\leq t\leq T\right\}$ of a point process of intensity function $S\left(\cdot ,X_t \right) $ and consider the problem of testing a simple hypothesis against a close one-sided composite alternative \begin{eqnarray} \label{3} &&{\scr H}_0:\quad\qquad S\left(t,X_t \right)=S_*,\\ \label{4} &&{\scr H}_1:\quad\qquad S\left(t,X_t \right)=S_*\; \psi\left(\vartheta _T\left[S_*t-X_t\right]\right), \quad \vartheta _T>0, \end{eqnarray} where $\vartheta _T$ is a small parameter, and the value $S_*$ and the function $\psi \left(\cdot \right) $ are known. The problem is regular in the following sense. \bigskip {\bf Condition} ${ \cal A}.$ {\sl The function $\psi\left(x \right), x\in \RR$, is positive, continuously differentiable at the point $x=0$, $\psi\left(0\right)=1$ and $\dot \psi\left(0\right)>0$.} \bigskip The rate of convergence $\vartheta _T\rightarrow 0$ is chosen such that the likelihood ratio $L\left(\vartheta _T, X^T\right) $ is asymptotically non-degenerate. In the case $\dot \psi\left(0\right)<0$ we need to change just one sign in the test.
This leads us to the reparametrization $$ \vartheta _T=\frac{u}{S_*\;\dot \psi\left(0\right)\;T}, \;u\geq 0 $$ and to the corresponding hypotheses testing problem \begin{eqnarray} \label{5} &&{\scr H}_0:\quad\qquad u=0,\\ \label{6} &&{\scr H}_1:\quad\qquad u>0. \end{eqnarray} Therefore, we observe a Poisson process of intensity $S_*$ under hypothesis ${\scr H}_0 $, while under the alternative ${\scr H}_1 $ the point process has intensity function $$ S\left(t,X_t \right)=S_*+\frac{u}{T}\left(S_*t-X_{t}\right)+o\left(T^{-1/2}\right). $$ Let us fix $\varepsilon \in \left(0,1\right)$ and denote by ${\scr K}_\varepsilon $ the class of test functions $\phi_T\left(X^T\right)$ of asymptotic size $\varepsilon $, i.e., for $\phi_T\in {\scr K}_\varepsilon $ we have $$ \lim_{T\rightarrow \infty }\Ex_0\;\phi_T\left(X^T\right)=\varepsilon. $$ As usual, $ \phi_T\left(X^T\right)$ is the probability of accepting the hypothesis ${\scr H}_1 $ given the observations $X^T$. The corresponding power function is $$ \beta _{\vphantom{\widetilde T}T}\left(u,\phi_{\vphantom{\widetilde T}T}\right)=\Ex_u\;\phi_T\left(X^T\right),\quad u\geq 0. $$ Let us introduce the statistic \begin{align} \Delta _T\left(X^T\right)&=\frac{1}{S_*\;T} \int_{0}^{T}\left(S_*t-X_{t-}\right)\;\left[{\rm d}X_t-S_*\;{\rm d}t\right]\nonumber\\ &=\frac{X_T-(X_T-S_*T)^2}{2\,S_*T}. \label{7} \end{align} This equality follows from the elementary representation (see, e.g., \cite{Kut84}, Lemma 4.2.1) for the centered Poisson process $\pi _t=X_t-S_*\;t$ \begin{align*} \pi _T^2=2\int_{0}^{T}\pi _{t-}\;{\rm d}\pi _t+\pi _T+S_*T, \end{align*} which obviously is equivalent to $$ \frac{1}{T}\int_{0}^{T}\pi _{t-}\;{\rm d}\pi _t=\frac{\pi _T^2-X_T}{2T}. $$ Define as well the two random variables $$ \Delta(W) =\frac{1}{2}\left(1-W\left(1\right)^2\right)=-\int_{0}^{1}W\left(s\right){\rm d} W\left(s\right),\qquad {\rm J}(W) =\int_{0}^{1}W\left(s\right)^2{\rm d}s , $$ where $\left\{W\left(s\right), 0\leq s\leq 1 \right\}$ is a standard Wiener process.
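The closed form \eqref{7}, which reduces the stochastic integral to a function of the terminal value $X_T$ alone, can be checked numerically on a simulated Poisson trajectory; the two sides agree path by path, not only in distribution. A minimal sketch (the values $S=1$, $T=100$, the seed and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
S, T = 1.0, 100.0

# One trajectory of a Poisson process of rate S on [0, T]
n = int(rng.poisson(S * T))
jumps = np.sort(rng.uniform(0.0, T, size=n))

# Stochastic-integral form (1/(S T)) int_0^T (S t - X_{t-}) [dX_t - S dt]:
# the jump part uses X_{t_i-} = i - 1 at the i-th jump ...
jump_part = np.sum(S * jumps - np.arange(n))
# ... and the drift part integrates the piecewise-linear (in t) integrand
# over the intervals where X_t is constant.
grid = np.concatenate(([0.0], jumps, [T]))
levels = np.arange(n + 1)                      # X_t = k on (t_k, t_{k+1})
drift = np.sum(S * (grid[1:] ** 2 - grid[:-1] ** 2) / 2
               - levels * np.diff(grid))       # int_0^T (S t - X_t) dt
delta_integral = (jump_part - S * drift) / (S * T)

# Closed form (7)
delta_closed = (n - (n - S * T) ** 2) / (2 * S * T)

assert abs(delta_integral - delta_closed) < 1e-8
```

This pathwise identity is what makes the simulations of the next sections cheap: only the terminal count $X_T$ has to be generated.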
Recall that the likelihood ratio in this problem has the following form \cite{LS1} $\left(\gamma =S_*\dot\psi\left(0\right)\right)$ \begin{align} \nonumber &L\left(\frac{u}{\gamma T},X^T\right)=\exp\left\{\int_{0}^{T}\ln \psi\left(\frac{u}{\gamma T}\left(S_*\;t-X_{t-} \right)\right)\; \left[{\rm d}X_t-S_*\;{\rm d}t\right]\right.\\ &\quad \left.-\int_{0}^{T} \left[\psi\left(\frac{u}{\gamma T}\left(S_*\;t-X_t \right)\right)-1-\ln \psi\left(\frac{u}{\gamma T}\left(S_*\;t-X_t \right)\right) \right]S_*\;{\rm d}t\right\}. \label{8} \end{align} Therefore direct differentiation w.r.t. $u$ at the point $u=0$ gives us the statistic introduced above: $$ \left.\frac{\partial }{\partial\; u}\ln L\left(\frac{u}{\gamma T},X^T\right)\right|_{u=0}=\Delta _T\left(X^T\right) . $$ Below we denote $$ a_\varepsilon=\frac{1-z^2_{\frac{1-\varepsilon}{2}}}{2}\qquad {\rm and}\qquad h\left(u\right)=\sqrt{\frac{2u}{1-e^{-2u}}}, $$ where $z_a$ is the $1-a$ quantile of the standard Gaussian law, i.e., $\Pb\left(\zeta >z_a\right) =a$ for $\zeta \sim {\cal N}\left(0,1\right)$. We have the following result. \begin{theorem} \label{T1} Let Condition ${\cal A}$ be fulfilled. Then the score function test \begin{equation} \label{9} \phi_T^*\left(X^T\right)=\1\zs{\left\{\Delta _T\left(X^T\right)>a_\varepsilon \right\}} \end{equation} belongs to the class ${\scr K}_\varepsilon $ and for any $u_*>0$ its power function satisfies \begin{align} \label{10} \beta _T(u_*,\phi_T^*)\rightarrow \beta^*(u_*)=\Pb\left\{\left|\zeta \right|\leq h\left(u_*\right)\;z_{\frac{1-\varepsilon }{2}} \right\} . \end{align} \end{theorem} \bigskip \noindent {\bf Proof.} Under hypothesis ${\scr H}_0$ the value $X_T$ is a Poisson random variable with parameter $S_*T$. Therefore we have immediately $$ \frac{X_T}{S_*T}\longrightarrow 1,\qquad \qquad \frac{X_T-S_*T}{\sqrt{S_*T} }\Longrightarrow W\left(1\right)\sim {\cal N}\left(0,1\right) $$ and $ \Delta _T\left(X^T\right)\Longrightarrow \Delta \left(W\right) $ as $T\rightarrow \infty $.
Hence $$ \Pb_0\left\{\Delta _T\left(X^T\right)>a_\varepsilon \right\}\longrightarrow \Pb\left\{\Delta \left(W\right)> \frac{1-z^2_{\frac{1-\varepsilon}{2}}}{2} \right\} = \Pb\left\{\left|\zeta \right| < z_{\frac{1-\varepsilon}{2}}\right\}=\varepsilon . $$ This provides $ \phi_T^*\in {\scr K}_\varepsilon $. \bigskip To study the power $\beta _T(u_*,\phi_T^*)$ we would like to use the Third Le Cam Lemma \cite{LC-Y}, \cite{Str}. Therefore we need first to show the joint weak convergence \begin{align} \label{11} {\cal L}_0\left(\Delta _T , l _T\left(u\right) \right)\Longrightarrow {\cal L}\left(\Delta(W),u\Delta(W)-\frac{u^2}{2}{\rm J}(W)\right) \end{align} where $ l _T\left(u\right)=\ln L\left(\frac{u}{\gamma T},X^T\right)$. To verify \eqref{11} we denote $$ l _T^*\left(u\right)=u\;\Delta_T\left(X^T\right)-\frac{u^2}{2}\; J_T\left(X^T\right), $$ where $$ J_T\left(X^T\right)=\frac{1}{S_*\;T^2}\int_{0}^{T}\left(S_* t-X_t\right)^2\;{\rm d}t $$ and show that \begin{align} \label{12} {\cal L}_0\left( l _T^*\left(u\right) \right)\Longrightarrow {\cal L}\left(u\Delta(W)-\frac{u^2}{2}{\rm J}(W)\right). \end{align} Then \eqref{11} will follow from the convergence \begin{equation} \label{13} l _T^*\left(u_T\right)-l _T\left(u_T\right)\rightarrow 0 \end{equation} for any bounded sequence $u_T$. \bigskip \begin{lemma} \label{L1} \begin{align} \label{14} {\cal L}_0\left\{\Delta _T\left(X^T\right),\; J_T\left(X^T\right)\right\} \Longrightarrow \left(-\int_{0}^{1}W\left(s\right)\;{\rm d}W\left(s\right), \int_{0}^{1}W\left(s\right)^2\;{\rm d}s\right). \end{align} \end{lemma} \bigskip \noindent {\bf Proof.} Let us put $W _T\left(s\right)=\left(S_*T\right)^{-1/2}\pi _{sT},\;s\in \left[0,1\right]$. Then $$ \Ex_0W_T\left(s\right)=0,\quad \Ex_0\left[W _T\left(s_1\right)W _T\left(s_2\right)\right]= \min\left(s_1,s_2\right) $$ and we have $$ J_T\left(X^T\right)=\frac{1}{S_*\;T^2}\int_{0}^{T}\pi _t^2\;{\rm d}t =\int_{0}^{1}W _T\left(s\right)^2\;{\rm d}s. 
$$ Using standard arguments we verify (a well-known fact) that for any collection $\left\{s_1,\ldots, s_k\right\}$ we have the weak convergence (as $T\rightarrow \infty $) of the vectors $$ \Bigl(W _T\left(s_1\right),\ldots,W _T\left(s_k\right)\Bigr)\Longrightarrow \Bigl(W\left(s_1\right),\ldots,W\left(s_k\right)\Bigr) . $$ Moreover the following estimate holds \begin{align*} &\left(\Ex_0\left|W _T\left(s_1\right)^2-W _T\left(s_2\right)^2\right|\right)^2\leq \\ &\qquad \qquad \leq \Ex_0\left|W_T\left(s_1\right)-W _T\left(s_2\right)\right|^2 \Ex_0\left|W _T\left(s_1\right)+W _T\left(s_2\right)\right|^2\leq 4\,\left|s_2-s_1\right| . \end{align*} Hence (see Gikhman and Skorohod \cite{GS}, Section IX.7) we have the convergence (in distribution) of the integrals $$ \int_{0}^{1}W_T\left(s\right)^2\;{\rm d}s\Longrightarrow \int_{0}^{1}W\left(s\right)^2\;{\rm d}s $$ and $$ \Delta _T\left(X^T\right)=\frac{1-W_T\left(1\right)^2}{2}\; \left(1+o\left(1\right)\right) \Longrightarrow \frac{1-W\left(1\right)^2}{2}=-\int_{0}^{1}W\left(s\right){\rm d}W\left(s\right) . $$ It is easy to see that we have at the same time the joint convergence too, because from the proof given above it follows that for any $\lambda _1,\lambda _2$ $$ \lambda _1 W_T\left(1\right)^2+\lambda _2\int_{0}^{1}W_T\left(s\right)^2\;{\rm d}s\Longrightarrow \lambda _1 W\left(1\right)^2+\lambda _2\int_{0}^{1}W\left(s\right)^2\;{\rm d}s . $$ Therefore Lemma \ref{L1} is proved. \bigskip Our goal now is to establish the slightly stronger (than \eqref{13}) relation \begin{align} \label{15} l_T\left(u_T\right)=u_T\;\Delta _T\left(X^T\right)\;\left(1+o\left(1\right)\right) -\frac{u_T^2}{2}\int_{0}^{1}W_T\left(s\right)^2\;{\rm d} s\;\left(1+o\left(1\right)\right) \end{align} where $o\left(1\right)\rightarrow 0$ for any sequence $u_T\in \UU_T$ with $\UU_T=\{u:\; 0\leq u<\frac{\sqrt{S_*T}}{\ln T}\}$.
We can write \begin{align*} &l_T^*\left(u\right)-l_T\left(u\right)=\int_{0}^{T} \left[-\frac{u\;W_T\left(\frac{t}{T}\right)}{\sqrt{S_*T}}- \ln \psi\left(\frac{-uW_T\left(\frac{t}{T}\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}\right) \right]\;{\rm d}\pi _t\\ &\quad -\int_{0}^{T} \left[\frac{u^2\;W_T\left(\frac{t}{T}\right)^2}{2S_*\;T}-\psi\left( \frac{-uW_T\left(\frac{t}{T}\right)}{\dot\psi\left(0\right)\sqrt{S_*T}} \right)+1+\ln \psi\left(\frac{-uW_T\left(\frac{t}{T}\right)}{ \dot\psi\left(0\right)\sqrt{S_*T}}\right) \right]S_*\;{\rm d}t\\ &\quad\equiv u\; \delta _{1,T}-\frac{u^2}{2}\;\delta _{2,T} \end{align*} with obvious notation. Recall that $u>0$. Using the Lenglart inequality we obtain for the first term \begin{align*} &\Pb_0\left\{ \left|\delta _{1,T}\right|>a\right\}\leq \frac{b}{a}\\ & \qquad +\Pb_0\left\{\int_{0}^{1} \left[W_T\left(s\right)+\frac{\sqrt{S_*T}}{u}\ln \psi\left(\frac{-uW_T\left(s\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}\right) \right]^2\;{\rm d}s>b\right\} \end{align*} for any $a>0$ and $b>0$. Now expanding the function $\psi\left(\cdot \right)$ we obtain $$ \psi\left(\frac{-uW_T\left(s\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}\right)= 1-\frac{uW_T\left(s\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}\;\dot\psi\left(\frac{-\tilde uW_T\left(s\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}\right) $$ where $\tilde u\leq u$.
Introduce the set $$ \CC_T=\left\{\omega :\qquad \sup_{0\leq s\leq 1}\left|W_T\left(s\right) \right|\leq \dot\psi\left(0\right)\sqrt{\ln T}\right\} $$ and note that for $\omega \in \CC_T$ we have the estimate $$ \sup_{u\in \UU_T}\sup_{0\leq s\leq 1} \frac{ u\left|W_T\left(s\right) \right|}{ \dot\psi\left(0\right)\sqrt{S_*T} }\leq \;\frac{1}{\sqrt{\ln T}}. $$ Hence for all $u\in \UU_T$ on this set we can write $$ \sup_{0\leq s\leq 1 }\left|\dot\psi\left(0\right)-\dot\psi\left(\frac{-\tilde uW_T\left(s\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}\right) \right|\leq\sup_{\left|v\right|\leq \left(\ln T\right)^{-1/2} }\left|\dot\psi\left(0\right)-\dot\psi\left(v\right) \right| = h_T\rightarrow 0 $$ as $T\rightarrow \infty $ because the derivative is continuous at the point $v=0$. Let us denote $u_s=\frac{uW_T\left(s\right)}{\dot\psi\left(0\right)\sqrt{S_*T}}$. Using the expansion of the logarithm $$ \ln \left(\psi\left(-u_s\right)\right)=\ln \left(1-u_s\dot\psi\left(-\tilde u_s\right)\right)=-\frac{u_s\dot\psi\left(-\tilde u_s\right)}{1-\tilde{\tilde {u}}_s\dot\psi\left(-\tilde u_s\right)} $$ we obtain the following estimate \begin{align*} &\Pb_0\left\{\int_{0}^{1} \left[W_T\left(s\right)+\frac{\sqrt{S_*T}}{u}\ln \psi\left(-u_s \right) \right]^2\;{\rm d}s>b\right\} \leq \Pb_0\left\{\CC_T^c\right\}+\\ &\quad + \Pb_0\left\{\int_{0}^{1}W_T\left(s\right)^2\left(1-\frac{\dot\psi\left(-\tilde u_s\right) }{\dot\psi\left(0\right)\;\left(1-\tilde{\tilde {u}}_s\dot\psi\left(-\tilde u_s\right)\right)}\right)^2{\rm d}s>b, \CC_T\right\}. \end{align*} Recall that $W_T\left(s\right)$ is a martingale; hence by the Doob inequality we have $$ \Pb_0\left\{\CC_T^c\right\}\leq \Pb_0\left\{ \left|W_T\left(1\right)\right|> \dot\psi\left(0\right)\sqrt{\ln T} \right\}\leq \frac{1 }{\dot\psi\left(0\right)^2\ln T}.
$$ For the second probability, after elementary estimates we obtain \begin{align*} &\Pb_0\left\{\int_{0}^{1}W_T\left(s\right)^2\left(1-\frac{\dot\psi\left(-\tilde u_s\right) }{\dot\psi\left(0\right)\;\left(1-\tilde{\tilde {u}}_s\dot\psi\left(-\tilde u_s\right)\right)}\right)^2{\rm d}s>b, \CC_T\right\}\leq \\ &\quad\quad \leq \Pb_0\left\{C\,\int_{0}^{1}W_T\left(s\right)^2{\rm d}s\;\left(h_T^2+\frac{1}{\ln T}\right)>b\right\}\leq \frac{C}{2b}\,\left(h_T^2+\frac{1}{\ln T}\right) \end{align*} with some constant $C>0$. Recall that by the Tchebyshev inequality $$ \Pb_0\left\{\int_{0}^{1}W_T\left(s\right)^2{\rm d}s>A\right\}\leq \; \frac{1}{2A}. $$ Therefore, if we take $b=a^2$ then for any $a>0$ $$ \Pb_0\left\{ \left|\delta _{1,T}\right|>a\right\}\longrightarrow 0 $$ as $T\rightarrow \infty $. Similar arguments allow us to prove the convergence $$ \Pb_0\left\{ \left|\delta _{2,T}\right|>a\right\}\longrightarrow 0 $$ too. \bigskip Therefore, the likelihood ratio $Z_T\left(u\right)=L\left(\frac{u}{\gamma T},X^T\right), u\geq 0$, is (under hypothesis ${\scr H}_0$) {\sl locally asymptotically quadratic} (LAQ) \cite{LC-Y}, because \begin{align} \label{16} Z_T\left(u\right)\Longrightarrow Z\left(u\right)=\exp\left\{-u \int_{0}^{1}W\left(s\right)\;{\rm d}W\left(s\right) -\frac{u^2}{2}\int_{0}^{1}W\left(s\right)^2\;{\rm d}s\right\}. \end{align} Moreover, we have the convergence $l_T^*\left(u_T\right)-l_T\left(u_T\right)\rightarrow 0$ for any bounded sequence $u_T\in \UU_T$. Note that the random function $Z\left(u\right) $ is the likelihood ratio in the hypotheses testing problem \begin{eqnarray*} &&{\scr H}_0:\quad\qquad u=0,\\ &&{\scr H}_1:\quad\qquad u>0, \end{eqnarray*} by observations of the Ornstein-Uhlenbeck process \begin{equation} \label{17} {\rm d}Y\left(s\right)=-uY\left(s\right)\;{\rm d}s+{\rm d}W\left(s\right),\quad Y\left(0\right)=0,\qquad 0\leq s\leq 1 \end{equation} under hypothesis $u=0$.
This limit for the likelihood ratio under the alternative can be obtained directly as follows. Let us denote $$ Y_T\left(s\right)= \frac{X_{sT}-sS_*T}{\sqrt{S_*T} },\qquad 0\leq s\leq 1. $$ Then using the representation $$ X_t=S_*\,\int_{0}^{t}\psi\left(\vartheta _T\left[S_* r-X_r\right]\right)\;{\rm d}r+M_t $$ where $M_t$ is a local martingale, and the expansion of the function $\psi\left(\cdot \right)$ in the vicinity of $0$, we obtain the equation $$ Y_T\left(s\right)=-u\int_{0}^{s}\frac{\dot\psi\left(g_v\right)}{\dot \psi\left(0\right)}Y_T\left(v\right)\;{\rm d}v + V_T\left(s\right),\quad Y_T\left(0\right)=0,\qquad 0\leq s\leq 1 $$ where $V_T\left(s\right)$ is a local martingale and $g_v= \frac{-\tilde u}{\dot\psi\left(0\right)\sqrt{S_*T}} Y_T\left(v \right)\rightarrow 0$. The central limit theorem for local martingales provides the convergence $V_T\left(s\right)\Longrightarrow W\left(s\right)$. Hence the process \eqref{17} is the limit (in distribution) of $Y_T\left(s\right)$. Moreover from \eqref{8} we have $$ \Delta _T\left(X^T\right)=\frac{Y_T\left(1\right)}{2\sqrt{S_*T}}+ \frac{1-Y_T\left(1\right)^2}{2}\Longrightarrow \frac{1-Y\left(1\right)^2}{2}. $$ This limit of the statistic $\Delta _T\left(X^T\right)$ follows from the Third Le Cam Lemma as well. In particular, for any continuous bounded function $H\left(\cdot \right)$ \begin{align*} &\Ex_uH\left(\Delta _T\left(X^T\right)\right)=\Ex_0 \left[Z_T\left(u\right)H\left(\Delta _T\left(X^T\right)\right)\right] \longrightarrow \\ &\qquad \longrightarrow \Ex_0 \left[Z\left(u\right)H\left(\Delta \left(W\right)\right)\right] =\Ex_uH\left(\Delta \left(Y\right)\right), \end{align*} where $$ \Delta \left(Y\right)=-\int_{0}^{1}Y\left(s\right)\;{\rm d}Y\left(s\right)=\frac{1-Y\left(1\right)^2}{2}.
$$ Hence under the alternative $\left(\vartheta _T=u_*/\gamma T\right) $ we have the convergence $$ \beta_T \left(u_*,\phi_T^*\right)\longrightarrow \Pb_{u_*}\left\{\left|Y\left(1\right)\right|\leq z_{\frac{1-\varepsilon }{2}} \right\}= \Pb\left\{\left|W\left(1\right)\right|\leq z_{\frac{1-\varepsilon }{2}}\sqrt{\frac{2u_*}{1-e^{-2u_*}}} \right\} $$ because $$ Y\left(1\right)=\int_{0}^{1}e^{-u_*\left(1-s\right)}\;{\rm d}W\left(s\right)\sim {\cal N}\left(0,\frac{1-e^{-2u_*}}{2u_*} \right). $$ This proves \eqref{10}. \bigskip Theorem~\ref{T1} is {\sl asymptotic in nature}, and it is interesting to see the powers of the score function test for moderate values of $T$ and especially to compare them with the limit power functions. This can be done using numerical simulations. We consider the model of Example 1 with $S_*=1$ and $\psi(t)=e^t$. This yields the intensity function $$ S\left(u,t,X_t \right)=\exp\left(\frac uT\left[t-X_t\right]\right), \qquad u\geq 0, \quad 0\leq t\leq T. $$ In Figure~1 we represent the power function of the score function test $\phi^*_{\vphantom{\widetilde T}T}$ of asymptotic size $0.05$ given by $$ \beta_{\vphantom{\widetilde T}T}\left(u,\phi^*_{\vphantom{\widetilde T}T}\right)=\Pb_u\left\{\Delta_T \left(X^T\right)>a_{0.05}\right\},\qquad 0\leq u\leq 20, $$ for $T=100$, $300$ and $1000$, as well as the limiting power function $\beta^*(\cdot)$ given by the formula~\eqref{10}. \onepic{SCpowNB.eps}{Fig.~1: Power of the score function test} The function $\beta_{\vphantom{\widetilde T}T}(\cdot,\phi^*_{\vphantom{\widetilde T}T})$ is estimated in the following way. We simulate (for each value of $u$) $M=10^6$ trajectories $X^T_j$, $j=1,\ldots,M$ of the self-correcting process of intensity $S\left(u,t,X_t\right)$ and calculate $\Delta_j=\Delta_T(X^T_j)$.
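This Monte Carlo procedure can be sketched in a scaled-down form: by the closed form \eqref{7} the trajectory enters $\Delta_T$ only through $X_T$, the process of intensity $\exp\left(\frac uT\left[t-X_t\right]\right)$ is simulated exactly by inverting its compensator between jumps, and the threshold $a_{0.05}$ is computed from the standard normal quantile. All names, the seed and the reduced values of $M$ and $T$ below are ours:

```python
import math
import random
from statistics import NormalDist

def simulate_sc(u, T, rng):
    """Exact simulation of the process with intensity exp((u/T)(t - X_t))
    (Example 1 with S_* = 1, psi(t) = e^t): between jumps the compensator
    is an explicit exponential integral, inverted against E ~ Exp(1)."""
    t, k = 0.0, 0
    while True:
        e = rng.expovariate(1.0)
        if u == 0.0:
            w = e                                  # plain Poisson process
        else:
            a = u / T
            w = math.log1p(a * e * math.exp(-a * (t - k))) / a
        t += w
        if t > T:
            return k                               # X_T
        k += 1

eps = 0.05
z = NormalDist().inv_cdf(1 - (1 - eps) / 2)        # z_{(1-eps)/2}
a_eps = (1 - z ** 2) / 2                           # threshold a_{0.05}

def empirical_power(u, T, M, rng):
    """Empirical frequency of {Delta_T > a_eps} over M trajectories."""
    hits = 0
    for _ in range(M):
        n = simulate_sc(u, T, rng)
        delta = (n - (n - T) ** 2) / (2 * T)       # closed form (7), S_* = 1
        hits += delta > a_eps
    return hits / M

def beta_star(u):
    """Limiting power (10) for u > 0: P(|zeta| <= h(u) z_{(1-eps)/2})."""
    h = math.sqrt(2 * u / (1 - math.exp(-2 * u)))
    return 2 * NormalDist().cdf(h * z) - 1
```

For $T=1000$ the empirical power at $u=0$ reproduces the size $0.05$, and at $u=10$ it is close to the limiting value $\beta^*(10)$, which is about $0.22$.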
Then we calculate the empirical frequency of accepting the alternative hypothesis $$ \frac1M\sum_{j=1}^{M}\1\zs{\left\{\Delta_j>a_{0.05} \right\}}\approx\beta_{\vphantom{\widetilde T}T}(u,\phi^*_{\vphantom{\widetilde T}T}). $$ Note that for $T=1000$ the limiting power function is practically attained. Note also that for $T=100$ the size of the test is $0.079$, which explains the position of the corresponding curve. Recall that the score function test is locally optimal \cite{Cap}. \section{The Likelihood Ratio Test and the Wald Test} Let us study two other well-known tests: the {\sl likelihood ratio test} $\bar\phi_T$ based on the maximum of the likelihood ratio function, and the {\sl Wald test} $\hat\phi_T$ based on the MLE $\hat\vartheta _{\vphantom{\widetilde T}T}$. Recall that the log-likelihood ratio formula is \begin{align*} \ln L\left(\vartheta ,X^T\right)&=\int_{0}^{T}\ln \psi \left(\vartheta \left(S_*t-X_{t-}\right)\right) \;\left[{\rm d}X_t-S_*\,{\rm d}t\right]\\ &\quad -\int_{0}^{T}\left[\psi \left(\vartheta \left(S_*t-X_{t-}\right)\right)-1- \ln \psi \left(\vartheta \left(S_*t-X_{t-}\right)\right)\right]\;S_*\,{\rm d}t \end{align*} and the likelihood ratio test is based on the statistic $$ \delta _T\left(X^T\right)=\sup_{\vartheta\in \Theta }L\left(\vartheta ,X^T\right), $$ where $\Theta $ is the set of values of $\vartheta $ under the alternative. The test is given by the decision function $$ \bar\phi_T\left(X^T\right)=\1\zs{\left\{\delta _T\left(X^T\right)>\tilde b_\varepsilon \right\}} $$ where the threshold $\tilde b_\varepsilon $ is chosen from the condition $\bar\phi_T\in {\scr K}_\varepsilon $. Note that $\delta _T\left(X^T\right)= L\left(\hat\vartheta_T ,X^T\right)$ as well, where $\hat\vartheta_T $ is the maximum likelihood estimator of the parameter $\vartheta $. The reparametrization $\vartheta =\vartheta _T=u/\gamma T$ reduces the problem \eqref{3}-\eqref{4} to \eqref{5}-\eqref{6}, and we have to specify the region of {\sl local alternatives}.
In the traditional approach of {\sl locally asymptotically uniformly most powerful tests} \cite{Rou} (regular case), to check the optimality of a test $\phi_{\vphantom{\widetilde T}T}$ we compare the power function $\beta _T\left(u,\phi_{\vphantom{\widetilde T}T}\right)$ with the power function of the Neyman-Pearson test on the compacts $0\leq u\leq K$ for any $K>0$. For these values of $u$ the alternatives are always {\sl contiguous}. Considering a similar class of alternatives in our case is not reasonable because the constant $\tilde b_\varepsilon $ would become dependent on $K$. Indeed, if we take the test function $$ \bar\phi_{\vphantom{\widetilde T}T}\left(X^T\right)=\1\zs{\Bigl\{\sup\limits_{0<u\leq K} Z_T\left(u\right)>\tilde b_\varepsilon\Bigr\}} ,\qquad \qquad Z_T\left(u\right)=L\left(\frac{u}{\gamma T} ,X^T\right), $$ then the condition $\bar\phi_T\in {\scr K}_\varepsilon $ implies $\tilde b_\varepsilon=\tilde b_\varepsilon\left(K\right)$. Therefore we suppose that $K=K_T=\frac{\sqrt{S_*T}}{\ln T}\rightarrow \infty $. Finally, we have the following hypotheses testing problem \begin{eqnarray} \label{18} &&{\scr H}_0:\quad\qquad u=0,\\ \label{19} &&{\scr H}_1:\quad\qquad u=u_*\in \UU_T. \end{eqnarray} Therefore, to study $$ \bar\phi_T\left(X^T\right)=\1\zs{\biggl\{\sup\limits_{u\in \UU_T }Z_T\left(u\right)>\tilde b_\varepsilon \biggr\}} $$ we need to describe the asymptotics of its errors under hypothesis ${\scr H}_0$ and alternatives ${\scr H}_1$ with $\vartheta =\frac{u_*}{\gamma T},\; u_*\in \UU_T$. Below $$ \Lambda(W)= \frac{\Delta(W)}{\sqrt{2{\rm J}(W) } }. $$ \begin{theorem} \label{T2} Let us suppose that condition ${\cal A}$ is fulfilled and the value $b_\varepsilon $ is a solution of the equation \begin{equation} \label{thrB} \Pb\left(\Lambda(W) > b_\varepsilon\right)=\varepsilon .
\end{equation} Then the test $\bar\phi_T $ with $\tilde b_\varepsilon=e^{b_\varepsilon^2} $ belongs to $ {\scr K}_\varepsilon $ and its power function converges to the following limit \begin{align*} \beta \left(u_*,\bar\phi_T\right)\longrightarrow \bar\beta\left(u_*\right) =\Pb\left\{ \Lambda(Y_{u_*} ) > b_\varepsilon \right\}, \end{align*} where $$ \Lambda(Y_{u_*})= \frac{\Delta(Y_{u_*})}{\sqrt{2{\rm J}(Y_{u_*}) } }= \frac{1-Y_{u_*}\left(1\right)^2}{\sqrt{8\;{\rm J}(Y_{u_*}) } } $$ and $Y_{u_*}=\left\{Y_{u_*}\left(s\right),0\leq s\leq 1\right\}$ is the Ornstein-Uhlenbeck process \eqref{17} with $u=u_*$. \end{theorem} \noindent {\bf Proof.} The log-likelihood process $l_T\left(u\right)=\ln Z_T\left(u\right)$ admits (under hypothesis ${\scr H}_0 $) the representation \eqref{15} \begin{align} \label{20} l_T\left(u\right)=u\;\Delta _T\left(X^T\right)\;\left(1+\delta _{1,T} \right)-\frac{u^2}{2}\;{\rm J}_T\left(X^T\right)\;\left(1+\delta_{2,T} \right) \end{align} where $\delta_{i,T}\rightarrow 0$ uniformly in $u\in \UU_T$. Hence \begin{align*} \Lambda _T\left(X^T\right)^2\equiv\sup_{u\in \UU_T}l_T\left(u\right)\Longrightarrow \frac{\Delta \left(W\right)^2}{2{\rm J}\left(W\right)} \end{align*} and we have \begin{align*} \Ex_0\bar\phi_T\left(X^T\right)=\Pb_0\left\{ \sup_{u\in \UU_T}l_T\left(u\right)>b_\varepsilon^2 \right\}\longrightarrow \Pb\left(\Lambda(W) >b_\varepsilon \right)=\varepsilon . \end{align*} Let us fix an alternative $u=u_*$. We have the convergence \begin{align} \label{21} {\cal L}_0\left\{\Lambda _T\left(X^T\right),l_T\left(u_*\right) \right\}\Longrightarrow {\cal L}\left\{\Lambda(W) , u_*\;\Delta(W)-\frac{u_*^2}{2}\;{\rm J}(W)\right\} .
\end{align} The convergence \eqref{21} allows us to apply Le Cam's third lemma as follows: for any bounded continuous function $H\left(\cdot \right)$ \begin{align*} &\Ex_{u_*}H\left(\Lambda _T\left(X^T\right)\right)=\Ex_0 \left[Z_T\left({u_*}\right)H\left(\Lambda _T\left(X^T\right)\right)\right] \longrightarrow \\ &\qquad \longrightarrow \Ex_0 \left[Z\left({u_*}\right)H\left(\Lambda \left(W\right)\right)\right] =\Ex_{u_*}H\left(\Lambda \left(Y_{u_*}\right)\right). \end{align*} Hence \begin{align*} &\beta\left(u_*,\bar\phi_T\right)=\Pb_{u_*}\left\{\sup_{u\in \UU_T}l_T\left(u\right)>b_\varepsilon^2 \right\}\longrightarrow \Pb_{u_*}\left\{\Lambda \left(Y_{u_*}\right) >b_\varepsilon \right\}. \end{align*} This completes the proof of Theorem \ref{T2}. \bigskip Let us note that the threshold $b_\varepsilon$ is given implicitly as the solution of the equation~\eqref{thrB}. In the following table we give some values of $b_\varepsilon$ obtained using numerical simulations. \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|} \hline $\varepsilon$&0.01&0.02&0.03&0.04&0.05&0.1 \\\hline $b_\varepsilon$&1.814&1.636&1.524&1.440&1.373&1.144 \\\hline \end{tabular} \end{center} These thresholds are obtained by simulating $M=10^7$ trajectories on $[0,1]$ of a standard Wiener process, calculating for each of them the quantity $\Lambda(W)$ and taking the $(1-\varepsilon)M$-th greatest among them. \bigskip The next test usually studied in such hypotheses testing problems is the Wald test $$ \hat\phi_{\vphantom{\widetilde T}T}\left(X^T\right)=\1\zs{\left\{\gamma T \hat\vartheta _T\geq c_\varepsilon \right\}} $$ where $\hat\vartheta_T$ is the maximum likelihood estimator of $\vartheta$. Below $$ \Gamma(W)= \frac{\Delta(W)}{{\rm J}(W)}. $$ \begin{theorem} \label{T3} Let us suppose that condition ${\cal A}$ is fulfilled and the value $c_\varepsilon $ is a solution of the equation \begin{equation} \label{thrC} \Pb\left(\Gamma (W) >c_\varepsilon \right)=\varepsilon .
\end{equation} Then the test $\hat\phi_T $ belongs to $ {\scr K}_\varepsilon $ and its power function for any alternative $u_*$ converges to the following limit \begin{align*} \beta \left(u_*,\hat\phi_T\right)\longrightarrow \hat\beta\left(u_*\right) =\Pb\left\{ \Gamma(Y_{u_*}) >c_\varepsilon \right\}, \end{align*} where $$ \Gamma(Y_{u_*})= \frac{\Delta(Y_{u_*})}{{\rm J}(Y_{u_*})}=u_*- \frac{\int_{0}^{1} Y_{u_*}(s) \,{\rm d}W(s)}{{\rm J}(Y_{u_*})} $$ and $Y_{u_*}$ is the same as in Theorem \ref{T2}. \end{theorem} \noindent {\bf Proof.} The proof follows immediately from the representation \eqref{20}, because \begin{align*} &\Pb_0^{\left(T\right)}\left\{\gamma T \hat\vartheta _T\geq c_\varepsilon \right\}=\Pb_0^{\left(T\right)}\left\{\sup_{0\leq u\leq c_\varepsilon } Z_{\vphantom{\widetilde T}T}\left(u\right)<\sup_{ u> c_\varepsilon,u\in \UU_T }Z_{\vphantom{\widetilde T}T}\left(u\right) \right\}\longrightarrow \\ &\qquad \longrightarrow \Pb_0\left\{\sup_{0\leq u\leq c_\varepsilon } Z\left(u\right)<\sup_{ u> c_\varepsilon }Z\left(u\right)\right\}= \Pb\left\{\Gamma (W) >c_\varepsilon \right\}=\varepsilon \end{align*} and (under alternative $u=u_*$) \begin{align*} &\Pb_{u_*}^{\left(T\right)}\left\{\gamma T \hat\vartheta _T\geq c_\varepsilon \right\}=\Pb_{u_*}^{\left(T\right)}\left\{\sup_{0\leq u\leq c_\varepsilon } Z_{\vphantom{\widetilde T}T}\left(u\right)<\sup_{ u> c_\varepsilon,u\in \UU_T }Z_{\vphantom{\widetilde T}T}\left(u\right) \right\}\longrightarrow \\ &\qquad \longrightarrow \Pb_{u_*}\left\{\sup_{0\leq u\leq c_\varepsilon } Z\left(u\right)<\sup_{ u> c_\varepsilon }Z\left(u\right)\right\}= \Pb\left\{\Gamma(Y_{u_*})>c_\varepsilon \right\}= \hat\beta\left(u_*\right). \end{align*} \bigskip As above, the threshold $c_\varepsilon$ is given implicitly as the solution of the equation~\eqref{thrC}. In the following table we give some values of $c_\varepsilon$ obtained using numerical simulations.
\begin{center} \begin{tabular}{| l | c | c | c | c | c | c |} \hline $\varepsilon$&0.01&0.02&0.03&0.04&0.05&0.1 \\\hline $c_\varepsilon$&13.692&11.224&9.803&8.806&8.042&5.719 \\\hline \end{tabular} \end{center} These thresholds are obtained by simulating $M=10^7$ trajectories on $[0,1]$ of a standard Wiener process, calculating for each of them the quantity $\Gamma(W)$ and taking the $(1-\varepsilon)M$-th greatest among them. \section{Comparison of the Tests} Recall that in the regular (LAN) case all three tests $\phi_T^*,\bar\phi_T $ and $\hat\phi_T$ are asymptotically equivalent to the Neyman-Pearson test $\phi_{u,T}^\circ $ (with known alternative $u$) and hence are asymptotically uniformly most powerful. In our singular situation all of them have different asymptotic behavior and therefore it is interesting to compare their limit power functions \begin{align*} \beta^*\left(u\right)&=\Pb_u\left\{\Delta \left(Y_u\right)>a_\varepsilon \right\},\qquad \bar \beta\left(u\right)=\Pb_u\left\{\frac{\Delta \left(Y_u\right)}{\sqrt{2{\rm J}\left(Y_u\right)}}>b_\varepsilon \right\},\\ \hat\beta\left(u\right)&=\Pb_u\left\{\frac{\Delta \left(Y_u\right)}{{\rm J}\left(Y_u\right)}>c_\varepsilon \right\},\qquad \beta^\circ\left(u\right)=\Pb_u\left\{u\Delta \left(Y_u\right)-\frac{u^2}{2} {\rm J}\left(Y_u\right)>d_\varepsilon \right\} \end{align*} of course, under the condition that all of them belong to $ {\scr K}_\varepsilon $. Our goal is to compare these quantities for large values of $u$. We have to study the distribution of the vector $\left(\Delta \left(Y_u\right),{\rm J}\left(Y_u\right) \right)$, where $$ \Delta \left(Y_u\right)=-\int_{0}^{1}Y_u\left(s\right)\;{\rm d}Y_u\left(s\right),\qquad {\rm J}\left(Y_u\right)=\int_{0}^{1}Y_u\left(s\right)^2\;{\rm d}s, $$ where $Y_u $ is the solution of the equation $$ {\rm d}Y_u\left(s\right)=-u\;Y_u\left(s\right)\;{\rm d}s+{\rm d}W\left(s\right),\qquad Y_u\left(0\right)=0 ,\qquad 0\leq s\leq 1.
$$ Let us introduce the stochastic process $y_v=\sqrt{u}\;Y_u\left( \frac{v}{u}\right),0\leq v\leq u$ (this transformation was introduced by Luschgy \cite{Lus2}). Then we can write $$ {\rm d}y_v=-y_v\;{\rm d}v+{\rm d}w_v,\qquad y_0=0,\qquad 0\leq v\leq u, $$ where $w_v =\sqrt{u}\;W\left(\frac{v}{u}\right)$ is a Wiener process and $$ \Delta \left(Y_u\right)=-u^{-1}\int_{0}^{u}y_v\;{\rm d}y_v\equiv\frac{\Delta_u }{u} , \qquad {\rm J}\left(Y_u\right)= u^{-2}\int_{0}^{u}y_v^2\;{\rm d}v\equiv\frac{{\rm J}_u }{u^2}, $$ in obvious notation. Further, the process $y_v$ is ergodic with the density of the invariant law $f\left(y\right)= e^{-y^2}/\sqrt{\pi} $. Hence ${\rm J}_u\rightarrow \infty $ and $$ \frac{1}{u}\int_{0}^{u}y_v^2\;{\rm d}v\longrightarrow \frac{1}{2}. $$ Note that the distribution of the process $y_v$ does not depend on $u$. The constant $d_\varepsilon =d_\varepsilon \left(u\right)$ depends on $u$ because it is defined by the equation $$ \Pb_0\left\{u\Delta \left(W\right)-\frac{u^2}{2} {\rm J}\left(W\right)>d_\varepsilon \right\} =\varepsilon . $$ For large values of $u$ this constant can be approximated as follows. We have (under hypothesis ${\scr H}_0$) as $u\rightarrow \infty $ \begin{align*} &\Pb_0\left\{u\Delta \left(W\right)-\frac{u^2}{2}{\rm J}\left(W\right)>d_\varepsilon \left(u\right) \right\}=\\ &\qquad =\Pb_0\left\{\int_{0}^{1}W\left(s\right)^2{\rm d}s<-\frac{2d_\varepsilon \left(u\right)}{u^2} + \frac{2\Delta\left(W\right)}{u}\right\} \longrightarrow\\ &\qquad \longrightarrow \Pb_0\left\{\int_{0}^{1}W\left(s\right)^2{\rm d}s<e_\varepsilon \right\} =\varepsilon, \end{align*} where the constant $e_\varepsilon $ is defined by the last equality. For example, if we take $\varepsilon =0.05$ then the numerical simulation gives us the value $e_{0.05}=0.056$. Therefore $d_\varepsilon\left(u\right)=-0.5\,e_\varepsilon \,u^2\left(1+o\left(1\right)\right) $.
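As an illustration, the constant $e_\varepsilon$ (the $\varepsilon$-quantile of ${\rm J}(W)=\int_0^1W(s)^2\,{\rm d}s$) is easy to approximate by a short simulation; the following sketch uses far fewer trajectories than the simulations reported here, but should already reproduce the order of $e_{0.05}\approx 0.056$.

```python
import numpy as np

def estimate_e_eps(eps=0.05, M=10000, n=500, seed=0):
    """eps-quantile of J(W) = int_0^1 W(s)^2 ds, with W simulated on a
    grid of n steps and the integral approximated by a Riemann sum."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(M, n))
    W = np.cumsum(dW, axis=1)          # W(k/n), k = 1, ..., n
    J = (W ** 2).mean(axis=1)          # Riemann sum for int_0^1 W^2 ds
    return float(np.quantile(J, eps))
```

The same simulated trajectories can of course be reused to tabulate the thresholds $b_\varepsilon$ and $c_\varepsilon$ of the previous section.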
If we suppose that $\varepsilon $ is small and try to solve the equation $$ \int_{0}^{e_\varepsilon }f_{{\rm J} }\left(x\right)\,{\rm d}x=\varepsilon $$ where $f_{{\rm J} }\left(x\right) $ is the density function of the integral ${\rm J}\left(W\right) $, then we can easily see that $f_{{\rm J} }\left(0\right)=0 $ and all its derivatives $f_{{\rm J} }^{\left(k\right)}\left(0\right)=0, k=1,2, \ldots$. Hence to find an approximate solution we need to calculate a large deviation probability of the following form (below $r=s/\sqrt{e_\varepsilon }, E=e_\varepsilon ^{-1/2}\rightarrow \infty $): \begin{align*} \Pb_0\left\{e_\varepsilon ^{-1} \int_{0}^{1}W\left(s\right)^2{\rm d}s<1\right\}= \Pb_0\left\{\int_{0}^{E}W\left(r\right)^2{\rm d}r<1\right\} . \end{align*} Below we put $d_\varepsilon\left(u\right)=-0.5\,e_\varepsilon \,u^2 $. We have the relations \begin{align*} \beta^*\left(u\right)&=\Pb\left\{\Delta_u>u\,a_\varepsilon \right\}=\Pb\left\{\int_{0}^{u}y_v\,{\rm d}w_v<{\rm J}_u-a_\varepsilon\,u\right\},\\ \bar \beta\left(u\right)&=\Pb\left\{\frac{\Delta_u }{\sqrt{2{\rm J}_u}}>b_\varepsilon \right\}=\Pb\left\{\int_{0}^{u}y_v\,{\rm d}w_v<{\rm J}_u-b_\varepsilon\,\sqrt{2{\rm J}_u }\right\},\\ \hat\beta\left(u\right)&=\Pb\left\{\frac{\Delta_u }{{\rm J}_u}>\frac{c_\varepsilon }{u}\right\}=\Pb\left\{\int_{0}^{u}y_v\,{\rm d}w_v<{\rm J}_u-\frac{c_\varepsilon}{u}\,{\rm J}_u \right\},\\ \beta^\circ\left(u\right)&=\Pb\left\{\Delta_u-\frac{{\rm J}_u}{2} >d_\varepsilon \right\}=\Pb\left\{\int_{0}^{u}y_v\,{\rm d}w_v<\frac{1}{2}{\rm J}_u+\frac{e_\varepsilon}{2}\,u^2 \right\} . \end{align*} Therefore for large values of $u$ (${\rm J}_u\sim u/2 $) \begin{align*} &\frac{1}{2}{\rm J}_u+\frac{e_\varepsilon}{2}\,u^2 > {\rm J}_u-\frac{c_\varepsilon}{u}\,{\rm J}_u>{\rm J}_u-b_\varepsilon\,\sqrt{2{\rm J}_u }>{\rm J}_u-a_\varepsilon\,u, \end{align*} and finally $$ \beta^*\left(u\right)<\bar \beta\left(u\right)<\hat\beta\left(u\right)<\beta^\circ\left(u\right).
$$ These inequalities are in accord with \cite{Swen97}. Note that for small values of $\varepsilon$ the constant $a_\varepsilon $ is close to $0.5$ (e.g. $a_{0.05}=0.498$, $a_{0.01}=0.49992$) and in this regime the power of the score-function test is $$ \beta^*\left(u\right) =\Pb\left\{\int_{0}^{u}y_v\,{\rm d}w_v< \left(0.5-a_{\varepsilon }\right)\,u\left(1+o\left(1\right)\right)\right\}. $$ Hence one can expect that in this case the score-function test has essentially smaller power than the others. \bigskip Now let us turn to numerical simulations of the limiting power functions. We aim to obtain the limiting power functions of all the three tests, as well as the Neyman-Pearson envelope, for moderate values of $u$ ($u\leq15$). Note that for the score function test $\beta^*(u)$ can be computed directly using~\eqref{10}. However, the limiting power functions of the likelihood ratio and of the Wald tests are written as probabilities of some events related to the Ornstein-Uhlenbeck process and can be obtained using numerical simulations. For the likelihood ratio test we have $$ \bar\beta\left(u\right)= \Ex_u \1_{\left\{\Lambda(Y_u)>b_\varepsilon\right\}}= \Ex_0 Z\left(u\right)\1_{\left\{\Lambda(W)>b_\varepsilon\right\}} $$ where $$ Z\left(u\right)=\exp\left\{u\Delta(W)-\frac{u^2}{2}\,{\rm J}(W)\right\}. $$ So we simulate $M=10^7$ trajectories $W_j=\left\{W_j(s),\ 0\leq s\leq 1\right\}$, $j=1,\ldots,M$ of a standard Wiener process and calculate for each of them the quantities $\Delta_j=\Delta(W_j)$, ${\rm J}_j={\rm J}(W_j)$, $\Lambda_j=\Delta_j/\sqrt{2\,{\rm J}_j}$ and (for each value of $u$) $Z_j\left(u\right)=\exp\left\{u\Delta_j-\frac{u^2}{2}\,{\rm J}_j\right\}$. Then we calculate the empirical mean $$ \frac1M\sum_{j=1}^{M} Z_j\left(u\right)\1_{\left\{\Lambda_j>b_\varepsilon\right\}} \approx\bar\beta\left(u\right).
$$ For the Wald test we have similarly $$ \frac1M\sum_{j=1}^{M} Z_j\left(u\right)\1_{\left\{\Gamma_j>c_\varepsilon\right\}} \approx\hat\beta\left(u\right) $$ where $\Gamma_j=\Delta_j/{\rm J}_j$. Finally, in order to compute the Neyman-Pearson envelope, we first approximate (for each value of $u$) the quantity $d_\varepsilon=d_\varepsilon(u)$ by the $(1-\varepsilon)M$-th greatest among the quantities $\ln Z_j(u)$, and then calculate $$ \frac1M\sum_{j=1}^{M} Z_j\left(u\right)\1_{\left\{\ln Z_j(u)>d_\varepsilon(u)\right\}}\approx\beta^\circ\left(u\right). $$ The results of these simulations for $\varepsilon=0.05$ are presented in Figure~2. \onepic{SCpow1.eps}{Fig.~2: Limiting powers for $\varepsilon=0.05$} Let us note here that in this case the power functions of the likelihood ratio test and of the Wald test are indistinguishable (from the point of view of numerical simulations) from the Neyman-Pearson envelope. This quite surprising fact was already mentioned by Elliott {\sl et al.} \cite{Eli}, who showed similar pictures based on $2\cdot 10^3$ simulations. As we see from Figure 2, with $10^7$ simulations the curves are still indistinguishable. The situation is however different for bigger values of $\varepsilon$. The results of simulations for $\varepsilon=0.01$, $0.05$, $0.25$ and $0.5$ are presented in Figure~3. \onepic{SCpow3.eps}{Fig.~3: Limiting powers for different values of $\varepsilon $} One can note that for large values of $\varepsilon$ (e.g. $\varepsilon=0.5$) the powers become more distinguishable, and that the asymptotically established ordering of the tests holds already for these moderate values of $u$. Note also that for small values of $\varepsilon$ (e.g. $\varepsilon=0.01$ and $0.05$) the curve of the score-function test is essentially lower, as expected. \section{Discussion} \noindent {\bf Remark 1.} Note that alternatives $u=u_T\rightarrow \infty $ with $\vartheta _{u_T}\rightarrow 0$ are local but not contiguous.
That means that the corresponding sequences of measures $\left(\Pb^{\left(T\right)}_{\vartheta _{u_T}},\Pb^{\left(T\right)}_{0}\right), T\rightarrow \infty $ are not contiguous. In particular, the second integral in the likelihood ratio formula tends to infinity: $$ \int_{0}^{T}\left[\psi \left(\vartheta _{u_T} \left(S_*t-X_{t-}\right)\right)-1- \ln \psi \left(\vartheta _{u_T} \left(S_*t-X_{t-}\right)\right)\right]\;S_*\,{\rm d}t\longrightarrow \infty . $$ In such a situation the power function of any reasonable test tends to 1 and to compare tests we have to use, say, the large deviation principle. For example, the likelihood ratio test $\bar\phi_{\vphantom{\widetilde T}T} $ is consistent for the {\sl local far alternatives} $\vartheta =\frac{v}{\sqrt{S_*T}}, v\in \left[\nu ,V\right]$ where $0<\nu<V< \infty $. Indeed, under mild regularity conditions we can write \begin{align*} &\Ex_v\bar\phi_{\vphantom{\widetilde T}T}\left(X^T\right)= \Pb_v\left\{\sup_{\nu <v<V} L\left(\frac{v}{ \sqrt{S_*T}},X^T\right)>c_\varepsilon \right\}=\\ &\qquad = \Pb_v\left\{\sup_{\nu <v<V}\left[\sqrt{S_*T}\int_{0}^{1}\ln\psi\left(vW_T\left(s\right)\right){\rm d}W_T\left(s\right)-\right.\right.\\ &\qquad \quad \left.\left. -S_*T\int_{0}^{1}\left[\psi\left(vW_T\left(s\right)\right)-1- \ln\psi\left(vW_T\left(s\right)\right)\right] {\rm d}s\right]>\ln c_\varepsilon \right\}=\\ &=\Pb_v\left\{\sup_{\nu <v<V}\left[\frac{1}{\sqrt{S_*T}}\int_{0}^{1}\ln\psi\left(vW_T\left(s\right)\right){\rm d}W_T\left(s\right)-\right.\right.\\ &\qquad \quad \left.\left. -\int_{0}^{1}\left[\psi\left(vW_T\left(s\right)\right)-1- \ln\psi\left(vW_T\left(s\right)\right)\right] {\rm d}s\right]>\frac{\ln c_\varepsilon}{S_*T} \right\}\longrightarrow \\ &\longrightarrow \Pb\left\{\inf_{\nu <v<V} \int_{0}^{1}\left[\psi\left(vW\left(s\right)\right)-1- \ln\psi\left(vW\left(s\right)\right)\right]{\rm d}s>0\right\}=1 \end{align*} because the function $g\left(y\right)=y-1-\ln y> 0$ for $y\neq 1$ and $g\left(y\right)=0$ iff $y=1$.
\bigskip \noindent {\bf Remark 2.} Note that we can construct an asymptotically uniformly most powerful test if we change the statement of the problem in the following way. Let us fix some $D>0$ and introduce the stopping time $$ \tau _D=\inf \left\{\tau :\;\int_{0}^{\tau } \left(S_*t-X_t\right)^2\;S_* \;{\rm d}t\geq D^2\right\}. $$ Then we consider the problem of testing the hypotheses \begin{align*} {\scr H}_0\quad :\quad\qquad S\left(t,X_t \right)&=S_*,\\ {\scr H}_1\quad :\quad\qquad S\left(t,X_t \right)&=S_*\; \psi\left(\vartheta _D\left[S_*t-X_t\right]\right), \quad \vartheta _D=\frac{u}{\dot\psi\left(0\right)D}>0 \end{align*} based on the observations $X^{\tau _D}=\left\{X_t,0\leq t\leq \tau _D\right\}$ in the asymptotics $D\rightarrow \infty $. Now the likelihood ratio $Z_{\tau_{D}}\left(u\right)=L\left(\frac{u}{\dot\psi\left(0\right)D},X^{\tau _D}\right)$ will be LAN: $$ Z_{\tau_{D}}\left(u\right)\Longrightarrow \exp\left\{u\;\zeta -\frac{u^2}{2} \right\},\qquad \zeta \sim {\cal N}\left(0,1\right) $$ and the test $\hat\phi_{\tau_{D}}=\1\zs{\left\{\Delta _{\tau_{D}}\left(X^{\tau_{D}}\right)>z_\varepsilon \right\}}$ where $$ \Delta_{\tau_{D}}\left(X^{\tau_{D}}\right)=\frac{1}{D}\int_{0}^{\tau _D}\left(S_*t-X_{t-}\right)\;\left[{\rm d}X_t-S_*{\rm d}t\right] $$ is locally asymptotically uniformly most powerful. The proof follows from the central limit theorem for stochastic integrals and the standard arguments (for LAN families). \bigskip \noindent {\bf Remark 3.} Note that these problems of hypotheses testing are similar to the corresponding problems of hypotheses testing for diffusion processes. In particular, let the observed process $X^T=\left\{X_t,0\leq t\leq T \right\}$ be the diffusion $$ {\rm d}X_t= \psi \left(-\vartheta_T\,X_t\right)\;{\rm d}t+\sigma \,{\rm d}W_t,\quad X_0=0, \quad 0\leq t\leq T , $$ where the function $\psi$ satisfies $\psi\left(0 \right)=0$, is continuously differentiable at the point $0$ and $\dot{\psi}\left(0 \right)>0$.
If we consider the two hypotheses $\vartheta =0 $ and $\vartheta >0 $, then the reparametrization $$ \vartheta _T=\frac{u\,\sigma}{\dot{\psi}\left(0 \right)\;T } $$ provides local contiguous alternatives, i.e., the log-likelihood ratio in the problem \begin{align*} {\scr H}_0&:\quad\qquad \quad\qquad u=0,\\ {\scr H}_1&:\quad\qquad \quad\qquad u>0. \end{align*} has the limit: $$ \ln L\left(\frac{u\,\sigma}{\dot{\psi}\left(0 \right)\;T },X^T\right)\Longrightarrow -u\int_{0}^{1}W\left(s\right) {\rm d}W\left(s\right)-\frac{u^2}{2}\int_{0}^{1}W\left(s\right)^2 {\rm d}s. $$ The score function test based on the statistic $$ \Delta _T^*\left(X^T\right)= -\frac{1}{T}\int_{0}^{T}X_{t}\;{\rm d}X_t, $$ the likelihood ratio test and the Wald test have the same asymptotic properties as those described in Theorems 1, 2 and 3 above. For example, if $\psi\left(x\right)=x$, then we have the Wiener process (under hypothesis ${\scr H}_0$) against the ergodic Ornstein-Uhlenbeck process under the alternative ${\scr H}_1$. \bigskip \noindent {\bf Remark 4.} We supposed above that the derivative of the function $\psi\left(x \right)$ at the point $x=0$ is not equal to 0, but sometimes it can be interesting to study the score function test and the likelihood ratio test in situations where the first $k-1$ derivatives, $k\geq 2$, vanish. Let us consider a self-correcting process $X^T=\left\{X_t,0\leq t\leq T\right\}$ with intensity function $S_*\psi\left(\vartheta \left(S_*t-X_t\right) \right)$ such that $\psi\left(0 \right)=1$, $\dot\psi\left(0 \right)=0$ and $\ddot\psi\left(0 \right)\neq 0$ $\left(k=2\right)$. In this case the modifications have to be the following. Suppose that $\ddot\psi\left(0 \right)> 0$.
To have an LAQ family at the point $\vartheta =0$ we choose the reparametrization $\vartheta =\vartheta _u$, $$ \vartheta_u =\sqrt{\frac{2\,u}{\ddot\psi\left(0\right)}}\;\left(S_*T\right)^{-3/4}, $$ which provides the limit $$ \ln L\left(\vartheta_u,X^T\right)\Longrightarrow u\;\int_{0}^{1}W\left(s\right)^2\;{\rm d}W\left(s\right)-\frac{u^2}{2}\;\int_{0}^{1}W\left(s\right)^4\;{\rm d}s. $$ Then in the hypotheses testing problem \begin{align*} {\scr H}_0&:\quad\qquad \quad\qquad u=0,\\ {\scr H}_1&:\quad\qquad \quad\qquad u>0 \end{align*} the score function test $\phi^*\left(X^T\right)=\1_{\left\{\Delta _{\vphantom{\widetilde T}T}\left(X^T\right)>c_\varepsilon \right\}}$ is based on the statistic $$ \Delta _{\vphantom{\widetilde T}T}\left(X^T\right)=\frac{1}{\left(S_*T\right)^{3/2}}\int_{0}^{T} \left(S_*t-X_t\right)^2\;\left[{\rm d}X_t-S_*{\rm d}t\right]. $$ It is easy to see that under ${\scr H}_0$ $$ \Delta _{\vphantom{\widetilde T}T}\left(X^T\right)\Longrightarrow \frac{W\left(1\right)^3}{3}-\int_{0}^{1}W\left(s\right)\;{\rm d}s . $$ Hence to choose the threshold $c_\varepsilon $ we have to solve the following equation $$ \frac{1}{\pi\sqrt{3}}\iint_{x^3-y>3c_\varepsilon}\exp\left\{-2x^2+2xy-\frac{2}{3}y^2\right\}\;\;{\rm d}x\,{\rm d}y=\varepsilon $$ because $\left(W\left(1\right),3\int_{0}^{1}W\left(s\right)\;{\rm d}s\right) $ is a Gaussian vector with $\Ex W\left(1\right)^2=1$, $\Ex \left[W\left(1\right)\cdot 3\int_{0}^{1}W\left(s\right){\rm d}s\right]=\frac{3}{2}$ and $\Ex \left[3\int_{0}^{1}W\left(s\right)\;{\rm d}s\right]^2=3$. The cases $k>2$ can be treated in a similar way.
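The covariance structure used in the last equation follows from the standard Wiener computations ${\rm Var}\,W(1)=1$, ${\rm Cov}\bigl(W(1),\int_0^1W(s)\,{\rm d}s\bigr)=1/2$ and ${\rm Var}\int_0^1W(s)\,{\rm d}s=1/3$. The following sketch checks it by Monte Carlo (an illustration only; the grid and sample sizes are arbitrary):

```python
import numpy as np

def wiener_pair_cov(M=40000, n=200, seed=0):
    """Sample covariance of the vector (W(1), 3 * int_0^1 W(s) ds)
    over M discretized Wiener trajectories."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(M, n))
    W = np.cumsum(dW, axis=1)
    x = W[:, -1]                # W(1)
    y = 3.0 * W.mean(axis=1)    # 3 * int_0^1 W(s) ds (Riemann sum)
    return np.cov(x, y)
```

The sample covariance should be close to the matrix with entries $1$, $3/2$, $3/2$, $3$, whose inverse gives the quadratic form $-2x^2+2xy-\frac{2}{3}y^2$ appearing in the density above.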
\section{Introduction} \pagenumbering{arabic} In this paper we are concerned with ``breather'' and ``soliton'' solutions to the Ricci flow. Recall that, given a closed Riemannian manifold $(M^{n},g_{0})$, the Ricci flow is the evolution equation \begin{align}\label{Ricci equation} &\frac{\partial g}{\partial t}=-2Ric(g) \end{align} Closely related to the Ricci flow is the evolution equation \begin{align}\label{Ricci equation normalized} &\frac{\partial g}{\partial t}=\frac{2}{n}rg-2Ric(g) \end{align} where $r=\frac{\int_{M}Rd\mu}{\int_{M}d\mu}$, which is called the normalized Ricci flow since it preserves the volume of the initial Riemannian manifold. Both these evolution equations were introduced by R. Hamilton in \cite{Ha1}, where it is also proved that the evolution equations \ref{Ricci equation} and \ref{Ricci equation normalized} differ only by a change of scale in space and a change of parametrization in time.\\ We now recall the definitions of Ricci breather and Ricci soliton. Let $(M^{n},g_{0})$ be a closed manifold and let $(M^{n},g(t))$ be a solution to the Ricci flow with initial condition $g(0)=g_{0}$. The solution $g(t)$ is called a Ricci breather if for some $t>0$ and $\alpha>0$ the metrics $\alpha g(t)$ and $g(0)$ differ only by a diffeomorphism; the cases $\alpha=1$, $\alpha<1$, $\alpha>1$ are called steady, shrinking and expanding breathers respectively. Breathers for which the metrics $g(t)$ and $g(0)$ differ only by diffeomorphism and scaling for each time $t>0$ are called solitons. More precisely, $g(t)$ is called a Ricci soliton if there exist a smooth function $\sigma(t)$ and a $1$-parameter family of diffeomorphisms $\left\{\psi_{t}\right\}$ of $M^{n}$ such that \begin{align}\label{ricci soliton} g(t)=\sigma(t)\psi^{*}_{t}(g_{0}) \end{align} with $\sigma(0)=1$ and $\psi_{0}=id_{M^{n}}$.
Now, taking the time derivative in \ref{ricci soliton} and evaluating the result at $t=0$, we get that the metric $g_{0}$ satisfies the identity \begin{align}\label{lie soliton} -2Ric(g_{0})=2\epsilon g_{0}+L_{X}g_{0}, \end{align} where $\epsilon=\frac{\sigma^{'}(0)}{2}$ and $X$ is the vector field on $M^{n}$ generated by $\left\{\psi_{t}\right\}$ at $t=0$. Conversely, given a metric $g_{0}$ which satisfies \ref{lie soliton}, there exist $\sigma(t)$ and $\left\{\psi_{t}\right\}$ such that $g(t)=\sigma(t)\psi^{*}_{t}(g_{0})$ is a solution to the Ricci flow with initial condition $g(0)=g_{0}$. In particular we can choose $\sigma(t)=1+2\epsilon t$ and $\left\{\psi_{t}\right\}$ as the $1$-parameter family of diffeomorphisms generated by the vector fields \begin{displaymath} Y_{t}=\frac{1}{\sigma(t)}X, \end{displaymath} see \cite{Chow1}. In summary, each self-similar solution to the Ricci flow can be written in the canonical form \begin{displaymath} g(t)=(1+2\epsilon t)\psi^{*}_{t}(g_{0}). \end{displaymath} We then say that the soliton is expanding, shrinking, or steady, if $\epsilon>0$, $\epsilon<0$, or $\epsilon=0$ respectively. Finally, if the vector field $X$ is the gradient field of a function $f$, one says that the soliton is a gradient Ricci soliton. \section{Ricci Breathers and Solitons} In this section we give a brief overview of the present theory of Ricci breathers and solitons, and then we prove a no-breather and no-soliton theorem for compact homogeneous solutions to the Ricci flow. As pointed out in \cite{Ha1}, because of the diffeomorphism invariance of the Ricci tensor, the Ricci flow preserves the isometries of the initial Riemannian manifold. We conclude that an initial homogeneous Riemannian metric remains homogeneous during the flow; it is then meaningful to speak about homogeneous solutions of the Ricci flow.
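The preservation of isometries can be made explicit by the standard argument combining the naturality of the Ricci tensor with the uniqueness of solutions of \ref{Ricci equation} on closed manifolds: if $\varphi$ is an isometry of $g_{0}$, then \begin{displaymath} \frac{\partial}{\partial t}\,\varphi^{*}g(t)=\varphi^{*}\left(\frac{\partial g}{\partial t}\right)=-2\varphi^{*}Ric(g(t))=-2Ric(\varphi^{*}g(t)), \end{displaymath} so $\varphi^{*}g(t)$ is again a solution with initial condition $\varphi^{*}g_{0}=g_{0}$, and uniqueness gives $\varphi^{*}g(t)=g(t)$ for every $t$.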
For an interesting study of homogeneous solutions in dimension three we refer to \cite{Ise}.\\ We now start with a small example in order to illustrate some ideas and difficulties that arise in the study of breather and soliton solutions of the flow. As explained in the introduction, on a soliton solution the initial metric changes only by diffeomorphisms and scale. We then have that any geometric quantity must vary in a simple way during the flow. For example, let $\Delta_{g}=Tr_{g}\nabla^{2}$ be the Laplacian operator on $C^{\infty}(M)$ and \begin{displaymath} Spec(g)=\left\{0=\lambda_{0}(g)<\lambda_{1}(g)\leq\lambda_{2}(g)\leq...\leq\lambda_{k}(g)\leq...\right\} \end{displaymath} the spectrum of $\Delta_{g}$.\\ We have thus that if $g(t)$ is a Ricci soliton on $(M^{n},g_{0})$ then \begin{align}\label{specsoliton} Spec(g(t))=\frac{1}{\sigma(t)}Spec(g_{0}), \end{align} that is, $Spec(g(t))$ is ``proportional'' to the initial spectrum $Spec(g_{0})$ and shrinks, is stationary (isospectral deformation), or expands depending on whether $\epsilon$ is positive, zero, or negative. On the other hand it is clear that on a soliton any eigenvalue $\lambda_{k}(t)$ and eigenfunction $f_{k}(t)$ vary smoothly in $t$; we can then compute the variation of any eigenvalue by taking the variation of the Rayleigh-Ritz quotient which defines it. It turns out that if we assume $g_{0}$ to be homogeneous then the first variation formula for an eigenvalue of the Laplacian is given by the simple formula \begin{align} \frac{d\lambda}{dt}=2\int_{M}Ric(\nabla f,\nabla f)d\mu \label{homogeneous}\end{align} we refer to \cite{Eig} for the details of the computations. As shown in \cite{Ha2}, in any dimension the nonnegativity of the curvature operator is preserved along the Ricci flow; we then conclude, combining \ref{specsoliton} and \ref{homogeneous}, that there are no expanding or steady homogeneous Ricci solitons with positive curvature operator.
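To make the last conclusion explicit: on a soliton, \ref{specsoliton} gives $\lambda_{k}(t)=\sigma(t)^{-1}\lambda_{k}(0)$, hence \begin{displaymath} \frac{d\lambda_{k}}{dt}\Big|_{t=0}=-\sigma^{'}(0)\lambda_{k}(0)=-2\epsilon\lambda_{k}(0), \end{displaymath} while by \ref{homogeneous} this derivative is positive for $k\geq 1$ when the curvature operator (and hence the Ricci tensor) is positive; since $\lambda_{k}(0)>0$, this forces $\epsilon<0$, i.e. the soliton must be shrinking.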
Unfortunately, even in the homogeneous case, this approach presents two main problems. First, the monotonicity result is proved for the whole spectrum of the Laplacian and is therefore expected to rely on some particular curvature assumption. Second, the eigenvalues of the Laplacian are not scale invariant, so it is hard to treat the shrinking case.\\ A more convenient approach is based on the study of the scalar curvature along the flow. This approach was first suggested by Ivey in \cite{Ivey}, where he proved the nonexistence of three dimensional breathers and solitons, and it was later extended by Hamilton in \cite{Ha3}. In \cite{Ha3}, Hamilton checked that the minimum of the scalar curvature $\min_{x\in M^{n}} R(x,t)$ is nondecreasing along the normalized Ricci flow whenever it is nonpositive, and the monotonicity is strict unless the metric is Einstein. This important observation implies that the scale invariant quantity \begin{align} \label{minimo1} \min_{x\in M^{n}} R(x,t)V(t)^{\frac{2}{n}} \end{align} is nondecreasing along the Ricci flow whenever it is nonpositive, as follows easily using the fact that the two flows differ only by scale and reparametrization in time. On the other hand, on a steady or expanding Ricci breather or soliton we must have \begin{align} \label{minimo2} \min_{x\in M^{n}} R(x,0)<0 \end{align} as follows using the formula \begin{align} \label{volume} \frac{dV}{dt}=-\int_{M}Rd\mu \end{align} and the fact that the nonnegativity of the scalar curvature is preserved along the Ricci flow; for a proof of these facts see \cite{Ha1}. We can then use \ref{minimo2} and the monotonicity result for the scale invariant quantity \ref{minimo1} to rule out the existence of steady and expanding breathers and solitons.\\ Recently, in the celebrated paper \cite{Per}, the above result has been proved in a different way and extended by Perelman.
In \cite{Per}, Perelman showed that the least eigenvalue $\lambda_{1}$ of the elliptic operator $-4\Delta+R$ is nondecreasing along the Ricci flow. Using this remarkable observation he was able to prove the nonexistence of nontrivial steady solitons and breathers. Analogously, studying the monotonicity of the scale invariant quantity \begin{align} \label{pere1} \lambda_{1}(g(t))V(t)^{\frac{2}{n}} \end{align} he ruled out the existence of nontrivial expanding solitons and breathers. Finally, using a more involved functional approach he proved the following \begin{theorem}\label{perelma} There are no shrinking breathers other than gradient solitons. \end{theorem} It is interesting to note that nontrivial examples of compact gradient Ricci solitons were constructed by Koiso in \cite{Koi}. These examples start in dimension four, in agreement with the result of Ivey \cite{Ivey}. We conclude that in dimension greater than or equal to four Theorem \ref{perelma} cannot be improved without some particular curvature assumption. We then focus on the homogeneous case, proving the following \begin{theorem}\label{luca} There are no homogeneous compact breathers or solitons other than trivial, i.e.\ Einstein, Ricci solitons. \end{theorem} In contrast with the general case, if we restrict our attention to homogeneous solutions of the Ricci flow, we can treat the expanding, steady and shrinking cases in a unified way. This unified approach is based on the following lemma \begin{lemma} Let $(M^{n},g(t))$ be a homogeneous solution to the normalized Ricci flow, then $R_{g(t)}$ is monotonically increasing unless the initial manifold is Einstein.
\end{lemma} \begin{proof} As proved in \cite{Ha1}, the scalar curvature function satisfies the following heat type evolution equation \begin{displaymath} \frac{\partial R}{\partial t}=\Delta R+2\left|Ric\right|^{2}-\frac{2}{n}rR, \end{displaymath} then for homogeneous solutions, for which $r=R$, we have \begin{displaymath} \frac{dR}{dt}=2\left|Ric\right|^{2}-\frac{2}{n}R^{2}=2\left|Ric-\frac{R}{n}g\right|^{2}. \end{displaymath} \end{proof} The above lemma also implies that the scale invariant quantity $R(t)V(t)^{\frac{2}{n}}$ is monotonically increasing along homogeneous solutions to the Ricci flow. However, we derive this result directly. \begin{lemma} Let $(M^{n},g(t))$ be a homogeneous solution to the Ricci flow, then $R(t)V(t)^{\frac{2}{n}}$ is monotonically increasing unless the initial manifold is Einstein. \end{lemma} \begin{proof} As proved in \cite{Ha1}, the scalar curvature function satisfies the following evolution equation \begin{displaymath} \frac{\partial R}{\partial t}=\Delta R+2\left|Ric\right|^{2}, \end{displaymath} then using \ref{volume}, which for homogeneous solutions gives $V^{'}(t)=-R(t)V(t)$, we have \begin{align}\notag \frac{d}{dt}R(t)V(t)^{\frac{2}{n}}&=R^{'}(t)V(t)^{\frac{2}{n}}+\frac{2}{n}V(t)^{\frac{2-n}{n}}V^{'}(t)R(t)\\ \notag &=2\left|Ric\right|^{2}V(t)^{\frac{2}{n}}-\frac{2}{n}R^{2}V(t)^{\frac{2}{n}}\\ \notag &=2\left|Ric-\frac{R}{n}g\right|^{2}V(t)^{\frac{2}{n}}. \end{align} \end{proof} Now, the monotonicity of the scale invariant quantity $R(t)V(t)^{\frac{2}{n}}$ can be used to rule out the existence of nontrivial breathers and solitons, therefore proving theorem \ref{luca}.\\ In summary, nontrivial compact solitons can only be of gradient shrinking type, must have dimension greater than or equal to four and cannot be homogeneous. Finally, we remark that the Hamilton-Ivey and Perelman approaches, based on the study of the monotonicity of \ref{minimo1} and \ref{pere1} respectively, coincide on homogeneous solutions since in this case we have $\lambda_{1}(g(t))=R_{g(t)}$.
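For the reader's convenience, the algebraic identity used in both proofs above is a direct expansion: since $\langle Ric,g\rangle=\operatorname{tr}Ric=R$ and $\left|g\right|^{2}=n$, \begin{align*} \left|Ric-\frac{R}{n}g\right|^{2} =\left|Ric\right|^{2}-\frac{2R}{n}\langle Ric,g\rangle+\frac{R^{2}}{n^{2}}\left|g\right|^{2} =\left|Ric\right|^{2}-\frac{R^{2}}{n}, \end{align*} so that $2\left|Ric\right|^{2}-\frac{2}{n}R^{2}=2\left|Ric-\frac{R}{n}g\right|^{2}$; in particular the right hand side vanishes exactly when the metric is Einstein, which explains the rigidity statements in both lemmas.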
\section{Acknowledgements} The author would like to thank Professor Michael Anderson for reading the present work and for his useful comments on it.
\section{\label{section:Intro}Introduction} The prime number theorem and the Dirichlet theorem on primes in arithmetic progressions (and their generalizations to number fields) rely respectively on the fact that \begin{equation}\label{stm::ZetaNotZero} \zeta(1+it)\neq0 \end{equation} for each real $t\neq0$ and that \begin{equation}\label{stm::ElleNotZero} L(\chi,1)\neq0 \end{equation} for each non principal character $\chi:\NN^+\to\CC$. Further results on the density of the primes in arithmetic progressions follow from the inequality \begin{equation}\label{stm::ElleNotZeroBis} L(\chi,1+it)\neq0 \end{equation} for each real $t$. In the above statements $\zeta(s)$ is the famous Riemann zeta function, which is meromorphic on the whole complex plane and coincides with the Dirichlet series \begin{equation*} \sum_{n=1}^{+\infty}\dfrac{1}{n^s} \end{equation*} when $\Re s>1$, while $L(\chi,s)$ is the \emph{Dirichlet $L$-function} associated to the character $\chi$, which (when $\chi$ is not principal) is a holomorphic entire function and when $\Re s>1$ coincides with the series \begin{equation*} \sum_{n=1}^{+\infty}\dfrac{\chi(n)}{n^s}. \end{equation*} Let us recall that the Riemann zeta function has a unique simple pole, at $s=1$, with residue $1$. Among the zeroes of $\zeta(s)$ there are the so called \emph{trivial zeroes} \begin{equation*} \zeta(-2m)=0,\ m=1,2,\ldots \end{equation*} and the remaining ones $\rho$ satisfy \begin{equation*} 0<\Re\rho<1. \end{equation*} The famous \emph{Riemann hypothesis} is that all of them lie on the \emph{critical line} \begin{equation*} \Re s=\dfrac{1}{2}. \end{equation*} Nevertheless it is known that infinitely many of them lie on the critical line.
Here, and in the rest of the paper, $\NN^+$ is the set of the positive integers and if \begin{equation*} a:\NN^+\to\CC \end{equation*} is an arbitrary bounded arithmetic function then \begin{equation*} \invZ(a, s)=\sum_{n=1}^{+\infty}\dfrac{a(n)}{n^s} \end{equation*} is the corresponding associated Dirichlet series. We recall that a \emph{Dirichlet character} is an arithmetic function, that is a function \begin{equation*} \chi:\NN^+\to\CC \end{equation*} which is \emph{completely multiplicative}, that is $\chi(1)=1$ and \begin{equation*} \chi(mn)=\chi(m)\chi(n) \end{equation*} for each pair of $m,n\in\NN^+$, and which is also periodic of period $q>1$, that is $\chi(n+q)=\chi(n)$ for each $n\in\NN^+$ and $\chi(k)=0$ if $k$ and $q$ are not relatively prime. Clearly any Dirichlet character $\chi$ is a bounded function, and it is easy to show that the values of a bounded completely multiplicative function are complex numbers $z$ which satisfy $\abs{z}\leq1$. There are several methods in the literature to achieve \eqref{stm::ElleNotZeroBis}. One of them is an easy consequence of the following remarkable result of Ingham \cite{article:InghamNoteOnRiemannZeta}. \begin{theorem}\label{stm::InghamBase} Let \begin{equation*} \invZ(a, s)=\sum_{n=1}^{+\infty}\dfrac{a(n)}{n^s} \end{equation*} be a Dirichlet series where $a:\NN^+\to\CC$ is an arbitrary bounded completely multiplicative arithmetic function. Assume that $\invZ(a, s)$ extends to a holomorphic function on the open half space \begin{equation*} \Re s>\dfrac{1}{2}-\delta \end{equation*} with $\delta>0$. Then \begin{equation*} \invZ(a, 1+it)\neq0 \end{equation*} for each $t\in\RR$. \end{theorem} When $a=\chi$, a non principal Dirichlet character, one obtains \eqref{stm::ElleNotZeroBis}. Ingham's proof of theorem \ref{stm::InghamBase} is quite involved, but a very simple proof was given by Bateman in \cite{article:BatemanOnInghamNoZeroes}. The first result of this paper is the following.
\begin{theorem}\label{stm::ZetaRiemannZeroFreeRegion} Let $\APrev{\sigma}<1$ be given. If there exists a completely multiplicative bounded arithmetic function \begin{equation*} a:\NN^+\to\CC \end{equation*} such that the Dirichlet series \begin{equation*} \invZ(a,s)=\sum_{n=1}^{\infty}\dfrac{a(n)}{n^s} \end{equation*} extends holomorphically on the open half space \begin{equation*} \Re s>\APrev{\sigma} \end{equation*} and \begin{equation*} \invZ(a,1)=0 \end{equation*} then \begin{equation*} \zeta(s)\neq0 \end{equation*} for each $s$ satisfying $\Re s>\APrev{\sigma}$. \end{theorem} The theorem above, together with the fact that the Riemann zeta function $\zeta(s)$ has (infinitely many) zeroes on the line $\Re s=1/2$, easily implies the Ingham result; see the end of section \ref{section:MainA} for details. When $a=\chi$, where $\chi$ is a non principal Dirichlet character, it is easy to see that the corresponding Dirichlet series $L(\chi, s)$ extends holomorphically on the half space $\Re s>0$ (actually $L(\chi, s)$ extends holomorphically as an entire function). If \begin{equation*} L(\chi,1)=0 \end{equation*} for some non principal Dirichlet character $\chi$ then Theorem \ref{stm::ZetaRiemannZeroFreeRegion} would imply that the Riemann zeta function $\zeta(s)$ has no zeroes on the half space $\Re s>0$, which is clearly absurd. The observation above explains the title of this paper. The author is not able to give any example of a Dirichlet series $\invZ(a,s)$ as in Theorem \ref{stm::ZetaRiemannZeroFreeRegion} which satisfies $\invZ(a,1)=0$ and is holomorphic on a half space \begin{equation*} \Re s>\sigma_0 \end{equation*} with \begin{equation*} \dfrac{1}{2}\leq\sigma_0<1.
\end{equation*} Indeed the existence of such a series for $\sigma_0=1/2$, combined with our Theorem \ref{stm::ZetaRiemannZeroFreeRegion}, would imply the Riemann hypothesis, and as far as I know the existence of a half space $\Re s >\sigma_0$ with $1/2<\sigma_0<1$ which is a zero free region for the Riemann zeta function is also an open problem. Nevertheless we observe that the converse of Theorem \ref{stm::ZetaRiemannZeroFreeRegion} also holds. Indeed if $\lambda(n)$ denotes the \emph{Liouville function} (see the next section for details) then the meromorphic function \begin{equation*} \invZ(\lambda,s) =\sum _{n=1}^{\infty }\dfrac{\lambda(n)}{n^s} =\dfrac{\zeta (2s)}{\zeta (s)} \end{equation*} satisfies $ \invZ(\lambda,1)=0 $ and obviously $\invZ(\lambda,s)$ is holomorphic on the half space $\Re s>\sigma, 1/2\leq\sigma<1$ if, and only if, such a half space is a zero free region for $\zeta(s)$. The proof of Theorem \ref{stm::ZetaRiemannZeroFreeRegion} is obtained as an elementary consequence of a general non vanishing principle, which we think is of independent interest, for holomorphic functions which are analytic continuations of exponentials of completely monotone functions. Let us recall that a $C^{\infty}$ function \begin{equation*} \cmFunc:\cmDomain\to\RR \end{equation*} where $\cmDomain$ is an interval of $\RR$ is \emph{completely monotone} if for $k=0,1,\ldots$ \begin{equation*} (-1)^k\cmFunc^{(k)}(x)\geq0 \end{equation*} for each $x\in\cmDomain$. Then our result is the following. \begin{theorem}\label{stm::cmHoloExtension} Let $\AFirst{\sigma},\ASecond{\sigma}\in\RR$ with $\AFirst{\sigma}<\ASecond{\sigma}$ and let $ f(s)\ $ be a holomorphic function defined on the open half space \begin{equation*} \Re s >\ASecond{\sigma}.
\end{equation*} Assume that the restriction of $f(s)$ to the half line $ ]\ASecond{\sigma},+\infty[ $ is a real completely monotone function and that \begin{equation*} F(s)\definedby\exp f(s) \end{equation*} extends holomorphically on the open half space \begin{equation*} \Re s >\AFirst{\sigma}. \end{equation*} Then the function $f(s)$ also extends holomorphically on the open half space \begin{equation*} \Re s >\AFirst{\sigma} \end{equation*} and hence \begin{equation*} F(s)=\exp f(s)\neq0 \end{equation*} when $\Re s>\AFirst{\sigma}$. \end{theorem} It is quite surprising that our approach makes it unnecessary to use the standard Euler product expansion of such Dirichlet series anywhere. Actually the Euler product expansion is necessary to prove the non vanishing, in the half space of absolute convergence, of the Dirichlet series associated to \emph{multiplicative} but not completely multiplicative functions, that is to functions \begin{equation*} a:\NN^+\to\CC \end{equation*} such that \begin{equation*} a(mn)=a(m)a(n) \end{equation*} when $m$ and $n$ are relatively prime. But in this paper we do not consider multiplicative arithmetic functions. For \emph{completely multiplicative} arithmetic functions we obtain directly their representation as exponentials of Dirichlet series without using the product expansion; see (the proofs of) Lemma \ref{stm::DiriInftyLim} and Proposition \ref{stm::BeurExp} for details. Let us now describe the content of the paper. In section \ref{section:Prereq} we recall basic facts on arithmetic functions and the associated Dirichlet series that we need in the rest of the paper. The proofs of Theorem \ref{stm::cmHoloExtension} and Theorem \ref{stm::InghamBase} are given respectively in sections \ref{section:Pringsheim} and \ref{section:MainA}.
In section \ref{section:PringsheimEx} we give a refined version of theorem \ref{stm::cmHoloExtension} which is used in section \ref{section:MainEx} to prove theorem \ref{stm::ZetaBeurZeroFreeRegion}, the main result of this paper, which extends the above theorem \ref{stm::InghamBase} to a class of generalized Dirichlet series, including, among others, the Dirichlet series associated to completely multiplicative functions defined on the ideals of the ring of the integers of a number field. Non vanishing theorems for general $L$-type functions on the boundary of the half space of absolute convergence are then obtained in section \ref{section:Corollaries}. We end this introduction with a ``toy application'' of Theorem \ref{stm::cmHoloExtension} giving three proofs of $ \zeta(1+it)\neq0. $ Each of them contains ``themes'' which will be expanded in the rest of the paper. All of them start by observing that when $\Re s>1$ we have \begin{equation*} \zeta(s)=\exp\left(\sum_{n=1}^{\infty}\dfrac{\Lambda (n)}{n^s\log n}\right) \end{equation*} where $\Lambda(n)$ is the \emph{von Mangoldt function} (see, e.g., the next section). Assume that \begin{equation*} \zeta(1+i\tzero)=0 \end{equation*} for some $\tzero>0$. Following \cite[pag. 199]{article:OggOnSatoTateConjecture} consider the function \begin{equation*} F(s)=\zeta(s)^2\zeta(s+i\tzero)\zeta(s-i\tzero). \end{equation*} Then $F(s)$ has removable singularities at $s=1$, $s=1+i\tzero$ and $s=1-i\tzero$ and hence is a holomorphic entire function. When $\Re s>1$ we then have \begin{equation*} F(s) =\exp f(s) \end{equation*} where \begin{equation*} f(s) =\sum_{n=1}^{\infty}\dfrac{2\bigl(1+\Re(n^{-i\tzero})\bigr)\Lambda (n)}{n^s\log n}. \end{equation*} Since $\Re n^{-i\tzero}=\cos\bigl(\tzero\log(n)\bigr)\geq-1$ the Dirichlet series $f(s)$ has non negative coefficients and hence the function \begin{equation*} ]1,+\infty[\ni\sigma\mapsto f(\sigma) \in\RR \end{equation*} is completely monotone.
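For the reader's convenience, the formula for $f(s)$ follows by adding the exponents of the four factors of $F(s)$: when $\Re s>1$, \begin{align*} \log F(s) =\sum_{n=1}^{\infty}\dfrac{\Lambda(n)}{n^s\log n}\bigl(2+n^{-i\tzero}+n^{i\tzero}\bigr) =\sum_{n=1}^{\infty}\dfrac{2\bigl(1+\Re(n^{-i\tzero})\bigr)\Lambda(n)}{n^s\log n}, \end{align*} since the shift $s\pm i\tzero$ in the exponent multiplies each term by $n^{\mp i\tzero}$ and $n^{-i\tzero}+n^{i\tzero}=2\Re(n^{-i\tzero})$.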
Theorem \ref{stm::cmHoloExtension} implies that $f(s)$ extends to an entire holomorphic function and $F(s)=\exp f(s)$ is a non vanishing entire holomorphic function; a classical theorem of Landau (see theorem \ref{stm::LandauStandard}) implies that the series $f(s)$ then converges for all $s\in\CC$. Now we have three ways to conclude the proof. The first one begins by observing that at $s=1+i\tzero$ the factor $\zeta(s)^2$ of the function $F(s)$ has a zero of the second order while the factor $\zeta(s-i\tzero)$ has a simple pole. Since $F(1+i\tzero)\neq0$, the remaining factor $\zeta(s+i\tzero)$ must necessarily have a simple pole at $s=1+i\tzero$, that is the Riemann zeta function $\zeta(s)$ would also have another pole at $s=1+2i\tzero$, which is absurd. The second one follows from the fact that if $F(s)$ never vanishes then $\zeta(s)$ also never vanishes when $s\neq1\pm i\tzero$. In particular it follows that $\zeta(s)$ vanishes neither when $s=-2m$, $m=1,2,\ldots$, nor when $\Re s=1/2$, and this is not the case. For the third, denote by $\Primes=\{2,3,5,\ldots\}$ the set of the positive prime numbers and set $a(n)=n^{-i\tzero}$. If $\sigma\in\RR$ then \begin{eqnarray*} f(\sigma)&=&\sum_{n=1}^{\infty} \dfrac{2\bigl(1+\Re a(n)\bigr)\Lambda(n)}{n^\sigma\log(n)} =\sum_{p\in\Primes}\sum_{m=1}^\infty\dfrac{2\bigl(1+\Re a(p)^m\bigr)}{m p^{m\sigma}}\\ &\geq&\sum_{p\in\Primes}\sum_{m=1}^2\dfrac{2\bigl(1+\Re a(p)^m\bigr)}{m p^{m\sigma}}\\ &\geq&\sum_{p\in\Primes}\dfrac{\bigl(2+\Re a(p)+\Re a(p)^2\bigr)}{p^{2\sigma}}. \end{eqnarray*} Observing that \begin{equation*} \boxed{ \abs{w}\leq1\implies\Re w+\Re w^2\geq-\dfrac{9}{8} } \end{equation*} with equality at \begin{equation*} w=-\dfrac{1}{4}\pm i\dfrac{\sqrt{15}}{4} \end{equation*} we then obtain \begin{equation*} f(\sigma)\geq\dfrac{7}{8}\sum_{p\in\Primes}\dfrac{1}{p^{2\sigma}}. \end{equation*} Since the series of the reciprocals of the prime numbers diverges, the series $f(s)$ diverges at $s=1/2$, contradicting the convergence for all $s\in\CC$ established above.
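The boxed inequality can be verified directly: write $w=x+iy$ with $x^2+y^2\leq1$, so that \begin{equation*} \Re w+\Re w^2=x+x^2-y^2. \end{equation*} For fixed $x$ this quantity decreases as $y^2$ grows, hence its minimum is attained on the circle $y^2=1-x^2$, where it equals \begin{equation*} g(x)=x+x^2-(1-x^2)=2x^2+x-1,\qquad -1\leq x\leq1. \end{equation*} Since $g'(x)=4x+1$ vanishes at $x=-1/4$, we get $\min g=g(-1/4)=-9/8$, attained precisely at $w=-\frac{1}{4}\pm i\frac{\sqrt{15}}{4}$.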
For a fourth (easy) proof see corollary \ref{stm::ElleZetaPNT}. \section{\label{section:Prereq}Prerequisites} In this paper we need only the basic results on (multiplicative) arithmetic functions and their associated Dirichlet series. Basic references include the first chapters of \cite{book:ApostolANT}, \cite{book:NarkiewiczANT3}, \cite{book:NeukirchANT}, \cite{book:MontgomeryVaughan2006}, \cite{book:OverholtCourseAnalyticNT} and \cite{book:TenenbaumAnalyticNumberTheory}. Here we review the basic material that we need in this paper. The already mentioned \emph{von Mangoldt function} is the arithmetic function \begin{equation*} \Lambda (n)= \begin{cases} \log p&\text{if }n=p^k\text{ for some prime }p\text{ and integer }k\geq 1,\\ 0 &\text{otherwise.} \end{cases} \end{equation*} It satisfies the identity \begin{equation}\label{baseMangolt} \sum_{d|n}\Lambda(d)=\log n. \end{equation} The \emph{Liouville function} is the completely multiplicative function \begin{equation*} \lambda (n)=(-1)^{\Omega (n)}, \end{equation*} where $\Omega(n)$ is the number of prime factors of $n$, counted with multiplicity. The Liouville function and the Riemann zeta function are related by the identity \begin{equation*} \dfrac{\zeta (2s)}{\zeta (s)}=\sum _{n=1}^{\infty }\dfrac{\lambda(n)}{n^s}. \end{equation*} We will need the following elementary result on Dirichlet series. \begin{lemma}\label{stm::beur::FirstCoeffBase} Let \begin{equation*} F(s)=\sum_{n=1}^{\infty}\dfrac{a(n)}{n^s} \end{equation*} where $a:\NN^+\to\CC$ is a bounded arithmetic function. Then \begin{equation*} \lim_{\RR\ni\sigma\to\infty}F(\sigma)=a(1). \end{equation*} \end{lemma} The following statement is a classical result of Landau (see e.g. \cite[Theorem 1.7, pag. 16]{book:MontgomeryVaughan2006} or \cite[Lemma 1, pag.
314]{book:LangAlgebraicNumerTheory}) \begin{theorem}\label{stm::LandauStandard} Let $\APrev{\sigma},\AFirst{\sigma}\in\RR$ with $\APrev{\sigma}<\AFirst{\sigma}$ and let \begin{equation*} f(s)=\sum_{n=1}^{\infty}\dfrac{a_n}{n^s} \end{equation*} be a Dirichlet series with non negative coefficients $a_n\geq0$. Assume that $f(s)$ converges when $\Re s>\AFirst{\sigma}$ and extends to a holomorphic function on $\Re s>\APrev{\sigma}$. Then the series $f(s)$ also converges when $\Re s>\APrev{\sigma}$. \end{theorem} \section{\label{section:Pringsheim}A non vanishing principle} In this section we prove theorem \ref{stm::cmHoloExtension}. We begin by recalling a theorem due to Pringsheim, which appeared in \cite{article:Pringsheim1894AnCont} and is also known as the Pringsheim-Vivanti theorem. See also \cite[Theorem 5.7.1]{book:HilleAFTVol1} and \cite[Theorem 8.2.2]{book:SansoneGerretsen}. \begin{theorem}\label{stm::Pringsheim} Let \begin{equation*} f(z)=\sum_{n=0}^{\infty}a_nz^n \end{equation*} be a convergent power series with radius of convergence $R$, with $0<R<+\infty$. If $a_n\geq0$ for each $n$ then it is not possible to extend $f(z)$ holomorphically in a neighbourhood of $z=R$. \end{theorem} The following proposition is the basic technical tool of the paper. \begin{proposition}\label{stm::ExpRadiusIsEqualEx} Let \begin{equation*} \cmE:\CC\to\CC \end{equation*} be a holomorphic entire function such that $\cmE(\RR)\sset\RR$, $\cmE'(t)>0$ for each $t>0$ and \begin{equation*} \lim_{t\to+\infty}\cmE(t)=+\infty. \end{equation*} Let \begin{equation*} f(z)=\sum_{n=0}^{+\infty}\SCoeff_nz^n \end{equation*} and \begin{equation*} F(z)=\sum_{n=0}^{+\infty}\ECoeff_nz^n \end{equation*} be two convergent power series with real coefficients such that \begin{equation*} F(z)=\cmE\bigl(f(z)\bigr) \end{equation*} in a neighbourhood of $z=0$. If the coefficients $\SCoeff_n$ of $f(z)$ are non negative then the series $f(z)$ and $F(z)$ have the same radius of convergence.
\end{proposition} \begin{proof} If the function $f$ is constant then $F$ is also constant and in this case the assertion is obvious. We hence assume that $f$ is not constant. Let $r$ and $R$ denote the radius of convergence respectively of the series $f(z)$ and $F(z)$. Then $F(z)=\cmE\bigl(f(z)\bigr)$ is holomorphic on the disc $\abs{z}<r$. Standard theorems of one complex variable imply that $R\geq r$. We now assume that $R>r$ and derive a contradiction. Since $f$ is not constant, $a_{n_0}>0$ for some $n_0>0$ and hence \begin{eqnarray*} f(x)&=&\sum_{n=0}^{\infty}a_nx^{n} \geq a_{n_0}x^{n_0}>0,\\ f'(x)&=&\sum_{n=1}^{\infty}na_nx^{n-1} \geq n_0a_{n_0}x^{n_0-1}>0 \end{eqnarray*} when $0<x<r$. Then if $0<x<r$ we have \begin{equation*} F'(x)=\cmE'\bigl(f(x)\bigr)f'(x)>0 \end{equation*} and hence, by the Lagrange theorem, the function $F(x)$ is strictly increasing on the closed interval $[0,r]$. Since $F(r)>F(0)$ then \begin{equation*} F(x)>F(0),\ 0<x< r' \end{equation*} for some $r'$ with \begin{equation*} r<r'<R. \end{equation*} The hypotheses on $\cmE$ imply that the inverse function \begin{equation*} \cmE^{-1}:]\cmE(0),+\infty[\to]0,+\infty[ \end{equation*} is well defined and real analytic. Since \begin{equation*} F(x)>F(0)=\cmE\bigl(f(0)\bigr)\geq\cmE(0) \end{equation*} when $0<x<r'$, it follows that the function \begin{equation*} ]0,r'[\ni x\mapsto u(x)\definedby\cmE^{-1}\bigl(F(x)\bigr)\in\RR \end{equation*} is a well defined real analytic function which coincides with $f(x)$ when $0<x<r$. The power series expansion of $u(x)$ at $x=r$ then defines a holomorphic extension of the function $f(z)$ in a neighbourhood of $z=r$ and this contradicts theorem \ref{stm::Pringsheim}. \end{proof} We are now ready to prove theorem \ref{stm::cmHoloExtension}. So, let $\AFirst{\sigma},\ \ASecond{\sigma}\in\RR$ and $f(s)$, $F(s)$ be given as in theorem \ref{stm::cmHoloExtension}. Let us fix $\ALarge>\ASecond{\sigma}$.
Then the functions \begin{equation*} f_\ALarge(z)=f(a-z) \end{equation*} and \begin{equation*} F_\ALarge(z)=F(a-z) \end{equation*} are holomorphic respectively on the disks $\abs{z}<\ALarge-\ASecond{\sigma}$ and $\abs{z}<\ALarge-\AFirst{\sigma}$. The relation $F_\ALarge(z)=\exp\bigl(f_\ALarge(z)\bigr)$ holds on the smaller disk $\abs{z}<\ALarge-\ASecond{\sigma}$ and for each integer $n\geq0$ \begin{equation*} f_\ALarge^{(n)}(0)=(-1)^n f^{(n)}(a)\geq0. \end{equation*} Proposition \ref{stm::ExpRadiusIsEqualEx} (with $\cmE=\exp$) then implies that the power series expansions of $f_\ALarge(z)$ and $F_\ALarge(z)$ at $z=0$ have the same radius of convergence. Since $F_\ALarge(z)$ is holomorphic on the disk $\abs{z}<\ALarge-\AFirst{\sigma}$ such a common radius of convergence is at least $\ALarge-\AFirst{\sigma}$. Hence the function $f_\ALarge(z)$, defined in the smaller disk $\abs{z}<\ALarge-\ASecond{\sigma}$, extends holomorphically on the bigger disk $\abs{z}<\ALarge-\AFirst{\sigma}$. The formula \begin{equation*} f(s)=f_\ALarge(a-s) \end{equation*} then defines a holomorphic extension of the function $f(s)$ on the disk $\abs{s-\ALarge}<\ALarge-\AFirst{\sigma}$. The observation that the union of all the disks \begin{equation*} \abs{s-\ALarge}<\ALarge-\AFirst{\sigma},\ \ALarge>\ASecond{\sigma} \end{equation*} is the open half space \begin{equation*} \Re s>\AFirst{\sigma} \end{equation*} completes the proof. \section{\label{section:MainA}Proof of theorem \ref{stm::ZetaRiemannZeroFreeRegion}} We now give the proof of theorem \ref{stm::ZetaRiemannZeroFreeRegion}. So, let $\APrev{\sigma}<1$ and let $a:\NN^+\to\CC$ be a bounded completely multiplicative function. Assume that the Dirichlet series \begin{equation*} \invZ(a, s)=\sum_{n=1}^{+\infty}\dfrac{a(n)}{n^s} \end{equation*} extends holomorphically in the half space $\Re s>\APrev{\sigma}$ and \begin{equation*} \invZ(a,1)=0. \end{equation*} We denote the function $\invZ(a,s)$ simply by $\invZ(s)$.
Since $a(n)$ is completely multiplicative and bounded then necessarily \begin{equation*} \abs{a(n)}\leq1 \end{equation*} for each $n\in\NN^+$ and hence the series $\invZ(s)$ converges absolutely when $\Re s>1$. The following lemma is well known, but we include a very elementary proof. \begin{lemma}\label{stm::DiriInftyLim} The series $\invZ(s)$ satisfies \begin{equation*} \invZ(s)=\exp\left( \sum_{n=2}^{\infty}\dfrac{a(n)\Lambda(n)}{n^s\log n} \right) \end{equation*} when $\Re s>1$. \end{lemma} \begin{proof} Let \begin{equation*} f(s) =\sum_{n=2}^{\infty}\dfrac{a(n)\Lambda(n)}{n^s\log n}. \end{equation*} Then \begin{equation*} f'(s) =-\sum_{n=2}^{\infty}\dfrac{a(n)\Lambda(n)}{n^s}. \end{equation*} Since the arithmetic function $a(n)$ is completely multiplicative, the identity \eqref{baseMangolt} easily implies that \begin{equation*} f'(s)\invZ(s)=\invZ'(s). \end{equation*} Then the derivative of the function \begin{equation*} s\mapsto\exp\bigl(-f(s)\bigr)\invZ(s) \end{equation*} vanishes, hence this function is constant; that is \begin{equation*} \invZ(s)=c\exp\bigl(f(s)\bigr) \end{equation*} for some constant $c$. Now put $s=\sigma\in\RR$ and take the limit as $\sigma\to+\infty$; lemma \ref{stm::beur::FirstCoeffBase} then implies \begin{equation*} 1=a(1)=c\exp(0)=c. \end{equation*} \end{proof} When $a(n)=1$ we have $\invZ(s)=\zeta(s)$ and hence we obtain \begin{equation*} \zeta(s)=\exp\left( \sum_{n=1}^{\infty}\dfrac{\Lambda(n)}{n^s\log n} \right) . \end{equation*} Consider the function \begin{equation*} F(s)=\zeta(s)^2\invZ(s)\overline{\invZ(\bar{s})}. \end{equation*} Since $\invZ(1)=0$, the function $\overline{\invZ(\bar{s})}$ also vanishes at $s=1$, so the double pole of $\zeta(s)^2$ at $s=1$ is cancelled: the function $F(s)$ is holomorphic on the open half space $\Re s >\APrev{\sigma}$ and when $\Re s>1$ it also satisfies \begin{equation*} F(s)=\exp f(s) \end{equation*} where \begin{equation*} f(s)=\sum_{n=1}^{\infty}\dfrac{2\bigl(1+\Re a(n)\bigr)\Lambda(n)}{n^s\log n}.
\end{equation*} Clearly $f(s)$ is a Dirichlet series, (absolutely) convergent when $\Re s>1$, with non negative coefficients and hence the restriction of $f(s)$ to the real half line $]1,+\infty[$ is a completely monotone function. Theorem \ref{stm::cmHoloExtension} implies that $F(s)$ never vanishes on the half space $\Re s>\APrev{\sigma}$. This forces $\zeta(s)\neq0$ when $\Re s>\APrev{\sigma}$ and the proof of Theorem \ref{stm::ZetaRiemannZeroFreeRegion} is completed. We observe that the Ingham Theorem \ref{stm::InghamBase} easily follows from theorem \ref{stm::ZetaRiemannZeroFreeRegion}. Indeed let \begin{equation*} F(s)=\sum_{n=1}^{+\infty}\dfrac{a(n)}{n^s} \end{equation*} be as in theorem \ref{stm::InghamBase} and assume that $F(1+it)=0$ for some $t\in\RR$. Then the function $\invZ(s)=F(s+it)$ satisfies the hypotheses of theorem \ref{stm::ZetaRiemannZeroFreeRegion}. Indeed $\invZ(1)=0$ and when $\Re s>1$ we have \begin{equation*} \invZ(s)=F(s+it)=\sum_{n=1}^{\infty}\dfrac{a(n)n^{-it}}{n^s}. \end{equation*} The map $n\mapsto a(n)n^{-it}$ is a bounded completely multiplicative arithmetic function. Theorem \ref{stm::ZetaRiemannZeroFreeRegion} then implies that the open half space $\Re s>1/2-\delta$ is a zero free region for the Riemann zeta function, and this is not the case. \section{\label{section:PringsheimEx}A refined non vanishing principle} In order to extend theorem \ref{stm::cmHoloExtension} we need a refined version of theorem \ref{stm::Pringsheim}. \begin{theorem}\label{stm::RealAnalyticAMPringsheim} Let $R,\delta>0$ be positive real numbers. Let \begin{equation*} \amFunc:]-\delta,R[\to\RR \end{equation*} be a real analytic function. If \begin{equation*} a_n\definedby\dfrac{\amFunc^{(n)}(0)}{n!}\geq0,\ n=0,1,\ldots, \end{equation*} then the radius of convergence of the series \begin{equation*} \sum_{n=0}^{\infty}a_nz^n \end{equation*} is at least $R$.
\end{theorem} \begin{proof} Denote by $r>0$ the radius of convergence of the series \begin{equation*} f(z)=\sum_{n=0}^{\infty}a_nz^n. \end{equation*} Then $f(z)$ is holomorphic on the disc $\abs{z}<r$ and $f(x)=\amFunc(x)$ when $0\leq x< r$. Assume that $r<R$. As $\amFunc(x)$ is real analytic it follows that the power series expansion of $\amFunc(x)$ at $x=r$ defines a holomorphic extension of the function $f(z)$ in a neighbourhood of $z=r$. But this is not allowed by theorem \ref{stm::Pringsheim}. It necessarily follows that $r\geq R$ and we are done. \end{proof} Let us recall that a $C^{\infty}$ function \begin{equation*} \cmFunc:\cmDomain\to\RR \end{equation*} where $\cmDomain$ is an interval of $\RR$ is \emph{absolutely monotone} if for $k=0,1,\ldots$ \begin{equation*} \cmFunc^{(k)}(x)\geq0 \end{equation*} for each $x\in\cmDomain$. The absolutely monotone functions were introduced by S. Bernstein in \cite{article:Bernstein1914AM}. For a comprehensive treatment of the theory of such functions (and their cousins, the completely monotone ones) see e.g., \cite[Chapter IV]{book:WidderLaplace}; see also \cite[Chapter 1]{book:SchillingEtcBernsteinFunctions}. The following is an extension of theorem \ref{stm::cmHoloExtension}. \begin{theorem}\label{stm::cmHoloExtensionEx} Let $\AFirst{\sigma},\ASecond{\sigma}\in\RR$ with $\AFirst{\sigma}<\ASecond{\sigma}$ and let $ f(s)\ $ be a holomorphic function defined on the open half space \begin{equation*} \Re s >\ASecond{\sigma}. \end{equation*} Assume that the restriction of $f(s)$ to the half line $ ]\ASecond{\sigma},+\infty[ $ is a real completely monotone function and \begin{equation*} F(s)=\exp f(s) \end{equation*} extends holomorphically in an open neighbourhood of the real interval $ ]\AFirst{\sigma},\ASecond{\sigma}].
$ Then both the functions $f(s)$ and $F(s)$ extend holomorphically on the open half space \begin{equation*} \Re s >\AFirst{\sigma}, \end{equation*} the relation \begin{equation*} F(s)=\exp f(s) \end{equation*} still holds on such a half space and hence \begin{equation*} F(s)\neq0,\ \Re s>\AFirst{\sigma}. \end{equation*} \end{theorem} \begin{proof} As in the proof of theorem \ref{stm::cmHoloExtension} we choose $\ALarge>\ASecond{\sigma}$ and consider the functions \begin{equation*} f_\ALarge(z)=f(a-z) \end{equation*} and \begin{equation*} F_\ALarge(z)=F(a-z). \end{equation*} They are both holomorphic functions on the disk $\abs{z}<\ALarge-\ASecond{\sigma}$ and the function $F_\ALarge(z)$ is also holomorphic in a neighbourhood of the segment $[0, \ALarge-\AFirst{\sigma}[$. Since $f$ is completely monotone on the interval $]\ASecond{\sigma}, 2\ALarge-\ASecond{\sigma}[$ then $f_\ALarge$ is absolutely monotone on the interval $]\ASecond{\sigma}-\ALarge,\ALarge-\ASecond{\sigma}[$. Since the composition of absolutely monotone functions is absolutely monotone it follows that the function $F_\ALarge(z)=\exp\bigl(f_\ALarge(z)\bigr)$ is also absolutely monotone on the interval $]\ASecond{\sigma}-\ALarge,\ALarge-\ASecond{\sigma}[$. But the function $F_\ALarge(z)$ is real analytic on the interval $]\ASecond{\sigma}-\ALarge,\ALarge-\AFirst{\sigma}[$. Then theorem \ref{stm::RealAnalyticAMPringsheim} implies that the radius of convergence of the power series expansion of $F_\ALarge(z)$ at $z=0$ is at least $\ALarge-\AFirst{\sigma}$. Proposition \ref{stm::ExpRadiusIsEqualEx} implies that the power series expansions of $f_\ALarge(z)$ and $F_\ALarge(z)$ at $z=0$ have the same radius of convergence, which is therefore at least $\ALarge-\AFirst{\sigma}$, and hence they provide holomorphic extensions respectively of $f(s)$ and $F(s)$ on the disc $\abs{s-\ALarge}<\ALarge-\AFirst{\sigma}$.
As in the proof of theorem \ref{stm::cmHoloExtension} we conclude the proof by observing that the union of all the disks \begin{equation*} \abs{s-\ALarge}<\ALarge-\AFirst{\sigma},\ \ALarge>\ASecond{\sigma} \end{equation*} is the open half space $ \Re s>\AFirst{\sigma}. $ \end{proof} \section{\label{section:MainEx}The main theorem} In this section we consider a fixed real arithmetic function \begin{equation*} \beur:\NN^+\to\RR \end{equation*} satisfying the following properties: \begin{enumerate} \item\label{beur::Multiplicative} $\beur$ is completely multiplicative, that is \begin{equation*} \beur(mn)=\beur(n)\beur(m) \end{equation*} for each $m,n\in\NN^+$; \item\label{beur::Positive} $ \beur(n)>1\ $ for each $n>1$; \item\label{beur::Zeta} For a certain $\AFirst{\sigma}>0$ the (generalized) Dirichlet series \begin{equation*} Z(s)=\sum_{n=1}^{\infty}\dfrac{1}{\beur(n)^s} \end{equation*} converges for $\Re s>\AFirst{\sigma}$ and \begin{equation*} \lim_{\sigma\to\AFirst{\sigma}^+} Z(\sigma)=+\infty. \end{equation*} \end{enumerate} Observe that condition \eqref{beur::Zeta} forces \begin{equation}\label{beu::Infty} \lim_{n\to+\infty}\beur(n)=+\infty \end{equation} and hence: \begin{proposition}\label{beur::StriclyPositive} There exists $\beur_0>1$ such that \begin{equation*} \beur(n)\geq\beur_0 \end{equation*} for each $n>1$. \end{proposition} Observe that we do not require the sequence $n\mapsto\beur(n)$ to be nondecreasing. The basic example is given by the Dedekind zeta function of a number field (see, e.g., \cite{book:LangAlgebraicNumerTheory}, \cite{book:NeukirchANT} ). Let $\NumField$ be a number field, that is a finite extension of the rational field $\Q$, and let $\NumInt$ be the ring of the algebraic integers in $\NumField$. Consider a bijection \begin{equation}\label{eq::PrimeBijection} p\mapsto\PrimeIdeal_p \end{equation} between the positive prime numbers $P=\{2,3,5,\ldots\}$ and the non zero prime (i.e. maximal) ideals of $\NumInt$.
Since the non zero ideals of $\NumInt$ factor uniquely as product of non zero prime ideals, we can extend \eqref{eq::PrimeBijection} to a bijection \begin{equation*} n\mapsto\NumIdeal_n \end{equation*} between the positive integers $n\in\NN^+$ and the non zero ideals of $\NumInt$ in such a way that if \begin{equation*} n=p_1^{\PrimeExponent_1}\cdots p_k^{\PrimeExponent_k} \end{equation*} then \begin{equation*} \NumIdeal_n= \PrimeIdeal_{p_1}^{\PrimeExponent_1} \cdots\PrimeIdeal_{p_k}^{\PrimeExponent_k}. \end{equation*} Then the arithmetic function \begin{equation*} \beur(n)=\mathfrak{N}(\NumIdeal_n) \end{equation*} where $\mathfrak{N}(\NumIdeal_n)$ is the \emph{norm} of the ideal $\NumIdeal_n$ satisfies the properties $(1)$, $(2)$ and $(3)$ above. Then \begin{equation*} Z(s)=\sum_{n=1}^{\infty}\dfrac{1}{\beur(n)^s}=\zeta_\NumField(s), \end{equation*} where \begin{equation*} \zeta_\NumField(s)=\sum_{\NumIdeal}\dfrac{1}{\mathfrak{N}(\NumIdeal)^s} \end{equation*} is the Dedekind zeta function of the number field $\NumField$, the sum ranging over the non zero ideals $\NumIdeal$ of $\NumInt$. Other examples arise from abstract arithmetic semigroups (see \cite{book:KnopfmacherAbstractANT}), Beurling's generalized prime numbers (see \cite[Section 8.4, pag. 266]{book:MontgomeryVaughan2006}, \cite{article:BeurlingPrimes}), and the zeta function associated to an arithmetical scheme (see \cite{article:SerreZandL1965}). The purpose of this section is to extend theorem \ref{stm::ZetaRiemannZeroFreeRegion} to (generalized) Dirichlet series of the form \begin{equation*} \sum_{n=1}^{+\infty}\dfrac{a(n)}{\beur(n)^s}. \end{equation*} Clearly, if the arithmetic function $a(n)$ is bounded then such a series defines a holomorphic function on the open half space $\Re s>\AFirst{\sigma}$. We need to prove some elementary properties of such series. Let us begin by extending lemma \ref{stm::beur::FirstCoeffBase} and lemma \ref{stm::DiriInftyLim} to such series. \begin{lemma}\label{stm::beur::FirstCoeff} Let \begin{equation*} F(s)=\sum_{n=1}^{\infty}\dfrac{a(n)}{\beur(n)^s}
\end{equation*} where $a:\NN^+\to\CC$ is a bounded arithmetic function. Then \begin{equation*} \lim_{\sigma\to\infty}F(\sigma)=a(1). \end{equation*} \end{lemma} \begin{proof} Let $\beur_0>1$ be such that $\beur(n)\geq\beur_0$ when $n>1$. Fix $\ASecond{\sigma}>\AFirst{\sigma}$. Then for each $\sigma>\ASecond{\sigma}$ we have \begin{equation*} \abs{F(\sigma)-a(1)} \leq\sum_{n=2}^{\infty}\dfrac{\abs{a(n)}}{\beur(n)^\sigma} \leq\dfrac{1}{\beur_0^{\sigma-\ASecond{\sigma}}} \sum_{n=2}^{\infty}\dfrac{\abs{a(n)}}{\beur(n)^{\ASecond{\sigma}}}. \end{equation*} Since \begin{equation*} \lim_{\sigma\to+\infty}\dfrac{1}{\beur_0^{\sigma-\ASecond{\sigma}}}=0 \end{equation*} the assertion follows. \end{proof} We define the $\beur$-von Mangoldt function \begin{equation*} \Lambda_\beur (n)={\begin{cases}\log\beur( p)&{\text{if }}n=p^{k}{\text{ for some prime }}p{\text{ and integer }}k\geq 1,\\0&{\text{otherwise.}}\end{cases}} \end{equation*} As in the classical case it is easy to prove that for each $n>0$ \begin{equation}\label{beurMangolt} \sum_{d|n}\Lambda_\beur(d)=\log\beur(n). \end{equation} \begin{proposition}\label{stm::BeurExp} Let \begin{equation*} F(s)=\sum_{n=1}^{\infty}\dfrac{a(n)}{\beur(n)^s} \end{equation*} where $a:\NN^+\to\CC$ is a bounded completely multiplicative arithmetic function. Then, when $\Re s >\AFirst{\sigma}$, \begin{equation*} F(s) =\exp\left( \sum_{n=2}^{+\infty}\dfrac{a(n)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)} \right). \end{equation*} \end{proposition} \begin{proof} Let \begin{equation*} f(s) =\sum_{n=2}^{+\infty}\dfrac{a(n)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)}. \end{equation*} Then \begin{equation*} f'(s) =-\sum_{n=2}^{+\infty}\dfrac{a(n)\Lambda_\beur(n)}{\beur(n)^s}. \end{equation*} Since the arithmetic functions $a(n)$ and $\beur(n)$ are completely multiplicative, \eqref{beurMangolt} easily implies that \begin{equation*} f'(s)F(s)=F'(s).
\end{equation*} Then the derivative of the function \begin{equation*} \exp\bigl(-f(s)\bigr)F(s) \end{equation*} vanishes, that is \begin{equation*} F(s)=c\exp\bigl(f(s)\bigr) \end{equation*} for some constant $c$. Then put $s=\sigma\in\RR$ and take the limit as $\sigma\to+\infty$; lemma \ref{stm::beur::FirstCoeff} and the complete multiplicativity of $a$ (which forces $a(1)=1$, unless $a$, and hence $F$, vanishes identically) then imply \begin{equation*} 1=a(1)=c\exp(0)=c. \end{equation*} \end{proof} Let us denote by $\Primes$ the set of positive prime numbers. \begin{proposition}\label{stm::beuPrimesDiverges} We have \begin{equation*} \lim_{\sigma\to\AFirst{\sigma}^+}\sum_{p\in\Primes}\dfrac{1}{\beur(p)^\sigma}=+\infty \end{equation*} \end{proposition} \begin{proof} If $\sigma>\AFirst{\sigma}$ proposition \ref{stm::BeurExp} implies \begin{eqnarray*} Z(\sigma)&=&\exp\left( \sum_{n=2}^{+\infty}\dfrac{\Lambda_\beur(n)}{\beur(n)^\sigma\log\beur(n)} \right) =\exp\left( \sum_{p\in\Primes}\sum_{m=1}^\infty\dfrac{1}{m\beur(p)^{m\sigma}} \right)\\ &=&\exp\left( \sum_{p\in\Primes}\dfrac{1}{\beur(p)^{\sigma}} \right)\exp\bigl(h(\sigma)\bigr) \end{eqnarray*} where \begin{equation*} h(\sigma)= \sum_{p\in\Primes}\sum_{m=2}^\infty\dfrac{1}{m\beur(p)^{m\sigma}}. \end{equation*} As in the classical case, using the fact that $\beur(n)\geq\beur_0>1$ when $n>1$, it is easy to show that the series $h(\sigma)$ converges for $\sigma>\AFirst{\sigma}/2$ and hence \begin{equation*} \lim_{\sigma\to\AFirst{\sigma}^+}\sum_{p\in\Primes}\dfrac{1}{\beur(p)^\sigma}= \lim_{\sigma\to\AFirst{\sigma}^+}\bigl(\log Z(\sigma) -h(\sigma)\bigr)=+\infty. \end{equation*} \end{proof} We are now ready to state and prove the main theorem of the paper.
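The convergence of $h(\sigma)$ for $\sigma>\AFirst{\sigma}/2$, asserted in the proof above, can be checked by a direct estimate; the following is a sketch (no attempt is made at sharp constants):

```latex
% Sketch: convergence of h(sigma) for sigma > sigma_1/2.
% For m >= 2, since beta(p) >= beta_0 > 1,
%   beta(p)^{m sigma} >= beta(p)^{2 sigma} beta_0^{(m-2) sigma},
% hence, dropping the factor 1/m,
\begin{equation*}
h(\sigma)
=\sum_{p\in\Primes}\sum_{m=2}^{\infty}\dfrac{1}{m\beur(p)^{m\sigma}}
\leq\sum_{p\in\Primes}\dfrac{1}{\beur(p)^{2\sigma}}
\sum_{k=0}^{\infty}\dfrac{1}{\beur_0^{k\sigma}}
=\dfrac{1}{1-\beur_0^{-\sigma}}
\sum_{p\in\Primes}\dfrac{1}{\beur(p)^{2\sigma}}
\leq\dfrac{Z(2\sigma)}{1-\beur_0^{-\sigma}},
\end{equation*}
% which is finite as soon as 2*sigma > sigma_1, that is sigma > sigma_1/2.
```

The final bound uses only that the terms $\beur(p)^{-2\sigma}$ form a subseries of $Z(2\sigma)$.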
\begin{theorem}\label{stm::ZetaBeurZeroFreeRegion} Let $\APrev{\sigma}<\AFirst{\sigma}$ and let $\Domain\sset\CC$ be an open connected subset containing the real half line $]\APrev{\sigma},+\infty[$ and contained in the open half space $\Re s>\APrev{\sigma}$. Assume also that $\Domain$ is symmetric with respect to the real axis, that is $s\in\Domain\implies\bar{s}\in\Domain$, and contains the open half space $\Re s>\AFirst{\sigma}$. Let \begin{equation*} \invZ:\Domain\to\CC \end{equation*} be a meromorphic function on $\Domain$ which is holomorphic on $\Domain\cap\RR\setminus\{\AFirst{\sigma}\}$ and \begin{equation*} \invZ(s)=\sum_{n=1}^{\infty}\dfrac{a(n)}{\beur(n)^s} \end{equation*} when $\Re s>\AFirst{\sigma}$, where $a(n)$ is a bounded completely multiplicative arithmetic function. Assume also that \begin{equation*} Z(s)=\sum_{n=1}^{\infty}\dfrac{1}{\beur(n)^s} \end{equation*} extends to a meromorphic function on $\Domain$ with a unique simple pole at $s=\AFirst{\sigma}$. Let $s_0\in\Domain\setminus\{\AFirst{\sigma}\}$ be given. Then: \begin{enumerate} \item\label{stm::ZetaBeurMain::PingPong} if $\AFirst{\sigma}$ is a zero or a pole of $L(s)$ and $Z(s_0)\neq0$ then $s_0$ is a zero (resp. a pole) of $L(s)$ if, and only if, $\bar{s_0}$ is a pole (resp. a zero) of $L(s)$; \item\label{stm::ZetaBeur::CaseElleZeroCase} if $L(\AFirst{\sigma})=0$ and $L(s)$ is holomorphic at $s=s_0$ and $s=\bar{s_0}$ then $Z(s_0)\neq0$; in particular, if $L(s)$ is holomorphic on $\Domain$ then $\Domain$ is a zero free region for $Z(s)$; \item\label{stm::ZetaBeur::ElleHalf} if $L(\AFirst{\sigma})=0$ then the boundary point $\APrev{\sigma}$ of $\Domain$ satisfies the constraint \begin{equation*} \APrev{\sigma}\geq\dfrac{\AFirst{\sigma}}{2}.
\end{equation*} \end{enumerate} \end{theorem} \begin{proof} Since $a(n)$ is completely multiplicative and bounded then necessarily \begin{equation*} \abs{a(n)}\leq1 \end{equation*} for each $n\in\NN^+$ and by proposition \ref{stm::BeurExp} we also have \begin{equation*} \invZ(s)=\exp\left( \sum_{n=2}^{\infty}\dfrac{a(n)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)} \right) \end{equation*} and analogously \begin{equation*} Z(s)=\exp\left( \sum_{n=2}^{\infty}\dfrac{\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)} \right). \end{equation*} Consider the functions \begin{equation*} F(s)=Z(s)^2\invZ(s)\bar{\invZ(\bar{s})} \end{equation*} and \begin{equation*} G(s)=\dfrac{Z(s)^2}{\invZ(s)\bar{\invZ(\bar{s})}}. \end{equation*} When $\Re s>\AFirst{\sigma}$ \begin{equation*} F(s)=\exp f(s),\ G(s)=\exp g(s) \end{equation*} where \begin{equation*} f(s)=\sum_{n=2}^{\infty}\dfrac{2\bigl(1+\Re a(n)\bigr)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)} \end{equation*} and \begin{equation*} g(s)=\sum_{n=2}^{\infty}\dfrac{2\bigl(1-\Re a(n)\bigr)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)}. \end{equation*} Clearly $f(s)$ and $g(s)$ are Dirichlet series (absolutely) convergent when $\Re s>\AFirst{\sigma}$ with non negative coefficients and hence their restrictions to the real half line $]\AFirst{\sigma},+\infty[$ are completely monotone functions. Assume now that $L(\AFirst{\sigma})=0$. Theorem \ref{stm::cmHoloExtensionEx} then implies that $F(s)$ never vanishes on $\Domain$ and hence \begin{equation*} \invZ(s)\bar{\invZ(\bar{s})}=\dfrac{F(s)}{Z(s)^2}. \end{equation*} Similarly, if $L(s)$ has a pole at $s=\AFirst{\sigma}$ we obtain \begin{equation*} \invZ(s)\bar{\invZ(\bar{s})}=\dfrac{Z(s)^2}{G(s)} \end{equation*} with $G(s)$ never vanishing on $\Domain$. In both cases \eqref{stm::ZetaBeurMain::PingPong} easily follows. If $L(\AFirst{\sigma})=0$ then $F(s)=Z(s)^2\invZ(s)\bar{\invZ(\bar{s})}$ never vanishes on $\Domain$ and \eqref{stm::ZetaBeur::CaseElleZeroCase} also follows.
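The coefficients of $f(s)$ and $g(s)$ used in the proof follow from a direct computation; a sketch, using the exponential representations of proposition \ref{stm::BeurExp}:

```latex
% Sketch: coefficients of f(s). Since beta(n) is real, for Re s > sigma_1
\begin{equation*}
\bar{\invZ(\bar{s})}
=\exp\left(\sum_{n=2}^{+\infty}
\dfrac{\bar{a(n)}\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)}\right),
\end{equation*}
% so, multiplying the exponential representations of Z(s)^2, L(s) and the above,
\begin{equation*}
F(s)=Z(s)^2\invZ(s)\bar{\invZ(\bar{s})}
=\exp\left(\sum_{n=2}^{+\infty}
\dfrac{\bigl(2+a(n)+\bar{a(n)}\bigr)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)}\right),
\end{equation*}
% and 2 + a(n) + conj(a(n)) = 2(1 + Re a(n)); the computation for g(s),
% with 2 - a(n) - conj(a(n)) = 2(1 - Re a(n)), is identical.
```

The same identity $w+\bar{w}=2\Re w$ yields both coefficient formulas.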
It remains to prove \eqref{stm::ZetaBeur::ElleHalf}, that is the inequality $\APrev{\sigma}\geq\AFirst{\sigma}/2$ assuming $L(\AFirst{\sigma})=0$. For this purpose observe that theorem \ref{stm::cmHoloExtensionEx} also implies that the function defined above \begin{equation*} f(s)=\sum_{n=2}^{\infty}\dfrac{2\bigl(1+\Re a(n)\bigr)\Lambda_\beur(n)}{\beur(n)^s\log\beur(n)} \end{equation*} extends holomorphically on the open half space $\Re s>\APrev{\sigma}$. Landau's theorem \ref{stm::LandauStandard} also holds for these generalized series (see, e.g., \cite[Lemma 15.1, pag. 463]{book:MontgomeryVaughan2006}) and hence the series $f(s)$ also converges when $\Re s>\APrev{\sigma}$. We will obtain the desired inequality $\APrev{\sigma}\geq\AFirst{\sigma}/2$ by showing that the series $f(s)$ diverges at $s=\AFirst{\sigma}/2$. An argument similar to the one given at the end of the introduction of the paper yields \begin{equation*} f(\sigma)\geq\dfrac{7}{8}\sum_{p\in\Primes}\dfrac{1}{\beur(p)^{2\sigma}}. \end{equation*} Proposition \ref{stm::beuPrimesDiverges} then implies $f(\sigma)\to+\infty$ as $\sigma\to\AFirst{\sigma}/2$ from the right, as desired. \end{proof} \section{\label{section:Corollaries}Corollaries} In this section we prove several immediate corollaries of theorem \ref{stm::ZetaBeurZeroFreeRegion} which give, in a simple and unified manner, various non vanishing results for zeta and $L$-like functions on the boundary of the half plane of absolute convergence. Let \begin{equation*} \beur:\NN^+\to\RR \end{equation*} be as in the previous section with the associated ``zeta function'' \begin{equation*} Z(s)=\sum_{n=1}^{\infty}\dfrac{1}{\beur(n)^s} \end{equation*} having $\AFirst{\sigma}>0$ as abscissa of absolute convergence. Let us begin with the ``prime number theorem''. Such a theorem is already proved in the literature (see \cite{article:MurtyOnSatoConj} and \cite[Theorem 1.2, pag. 10]{book:MurtyMyrtyNonVanishingElleF}) but our proof is very simple.
\begin{corollary}\label{stm::ElleZetaPNT} If $Z(s)$ extends in an open neighbourhood of the closed half space \begin{equation*} \Re s\geq\AFirst{\sigma} \end{equation*} to a meromorphic function with a unique simple pole at $s=\AFirst{\sigma}$ then \begin{equation*} Z(\AFirst{\sigma}+it)\neq0 \end{equation*} for each $t\in\RR\setminus\{0\}$. \end{corollary} \begin{proof} Let $t\in\RR\setminus\{0\}$ and suppose that $Z(\AFirst{\sigma}+it)=0$. Since $\bar{Z(\bar{s})}=Z(s)$ then also $Z(\AFirst{\sigma}-it)=0$. Set $L(s)=Z(s+it)$. When $\Re s>\AFirst{\sigma}$ \begin{equation*} L(s)=\sum_{n=1}^{\infty}\dfrac{\beur(n)^{-it}}{\beur(n)^s}, \end{equation*} and the function $n\mapsto\beur(n)^{-it}$ is bounded and completely multiplicative. We also have $L(\AFirst{\sigma})=0$ and $L(\AFirst{\sigma}-2it)=0$. Then assertion \eqref{stm::ZetaBeurMain::PingPong} of theorem \ref{stm::ZetaBeurZeroFreeRegion} implies that $L(s)$ has a pole at $s=\AFirst{\sigma}+2it$, that is $Z(s)$ has (another) pole at $s=\AFirst{\sigma}+3it$, and this contradicts the hypotheses made on $Z(s)$. \end{proof} Let also \begin{equation*} a:\NN^+\to\RR \end{equation*} be a bounded completely multiplicative function with the associated ``$L$-function'' \begin{equation*} \invZ(s)=\sum_{n=1}^{\infty}\dfrac{a(n)}{\beur(n)^s}. \end{equation*} Most non vanishing theorems for $L$-functions associated to various Dirichlet/Hecke characters follow from the following statement. \begin{corollary}\label{stm::ElleZetaZPole} Let $Z(s)$ and $L(s)$ be given. Assume that $L(s)$ is meromorphic on an open neighbourhood of the closed half space \begin{equation*} \Re s\geq\dfrac{\AFirst{\sigma}}{2} \end{equation*} and also $Z(s)$ is holomorphic there with the exception of a simple pole at $s=\AFirst{\sigma}$. If $L(\AFirst{\sigma}+it)=0$ for some $t\in\RR$ then the function $L(s)$ admits at least one pole at $s=\sigma+it$ for some $\sigma$ satisfying \begin{equation*} \dfrac{\AFirst{\sigma}}{2}\leq\sigma<\AFirst{\sigma}.
\end{equation*} In particular, if $L(s)$ is holomorphic on such a neighbourhood then \begin{equation*} L(\AFirst{\sigma}+it)\neq0 \end{equation*} for each $t\in\RR$. \end{corollary} \begin{proof} Let $t\in\RR$ and assume that $L(\AFirst{\sigma}+it)=0$. Then the function \begin{equation*} L_t(s)\definedby L(s+it) \end{equation*} satisfies $L_t(\AFirst{\sigma})=0$. If $L_t(s)$ has no poles at $s=\sigma$ with $\sigma\geq\AFirst{\sigma}/2$ then $L_t(s)$ is holomorphic on a suitable domain extending to the left of the line $\Re s=\AFirst{\sigma}/2$, contradicting \eqref{stm::ZetaBeur::ElleHalf} of theorem \ref{stm::ZetaBeurZeroFreeRegion}. \end{proof} Observe that if the Riemann hypothesis holds, the constraint $\Re s\geq\AFirst{\sigma}/2$ in the corollary above is optimal. Indeed, when $\AFirst{\sigma}=1$ and $\beur(n)=n$, that is, $Z(s)=\zeta(s)$, the Riemann zeta function, then the function \begin{equation*} L(s)=\dfrac{\zeta (2s)}{\zeta (s)}=\sum _{n=1}^{\infty }\dfrac{\lambda(n)}{n^s}, \end{equation*} where $\lambda(n)$ is the Liouville function, is holomorphic on the open half space $\Re s>1/2$ and $L(1)=0$. We end with a curiosity. \begin{lemma}\label{stm::PingPongLemma} Let $\Ping$ and $\Pong$ be two subsets of $\RR$. Assume that $0\in\Ping$ and that for each pair of distinct reals $x,y$ whenever \begin{equation*} \dfrac{x+y}{2}\in\Ping \end{equation*} then \begin{equation*} x\in\Ping\Longleftrightarrow y\in\Pong. \end{equation*} If $\pong\in\Pong$ then $3\pong\in\Ping\cap\Pong$ and if $\ping\in\Ping$ with $\ping\neq0$ then $-3\ping\in\Ping\cap\Pong$. \end{lemma} \begin{proof} Since $0\in\Ping$ then for each $x\neq0$ \begin{equation*} x\in\Ping\Longleftrightarrow-x\in\Pong. \end{equation*} Let $\pong\in\Pong$ be given. If $\pong=0$ then $3\pong=0\in\Ping\cap\Pong$, since $0\in\Ping$ by hypothesis and $0=\pong\in\Pong$, and the assertion is trivially verified. Assume hence that $\pong\neq0$.
Then we have $-\pong\in\Ping$ and \begin{eqnarray*} \dfrac{(-2\pong)+0}{2}=-\pong\in\Ping,\ 0\in\Ping\ &\implies& -2\pong\in\Pong,\ 2\pong\in\Ping,\\ \dfrac{(-3\pong)+\pong}{2}=-\pong\in\Ping,\ \pong\in\Pong\ &\implies& -3\pong\in\Ping,\ \boxed{3\pong\in\Pong},\\ \dfrac{\pong+3\pong}{2}=2\pong\in\Ping,\ \pong\in\Pong\ & \implies&\boxed{3\pong\in\Ping}. \end{eqnarray*} Thus we see that $3\pong\in\Ping\cap\Pong$, as required. Let now $\ping\in\Ping$ with $\ping\neq0$. Then $-\ping\in\Pong$ and, applying the first part of the statement to $-\ping$, we obtain $-3\ping=3(-\ping)\in\Ping\cap\Pong$, as desired. \end{proof} \begin{proposition}\label{stm::ElleZetaLNoPole} Let $Z(s)$ and $L(s)$ be meromorphic in an open neighbourhood of the closed half space \begin{equation*} \Re s\geq\AFirst{\sigma}. \end{equation*} Assume that $Z(s)$ has a unique simple pole at $s=\AFirst{\sigma}$. If $\AFirst{\sigma}$ is a pole or a zero of $L(s)$ then $L(s)$ has no poles on the line $\Re s=\AFirst{\sigma}$ and $L(\AFirst{\sigma}+it)\neq0$ for each real $t\neq0$. \end{proposition} \begin{proof} Assume first that $L(\AFirst{\sigma})=0$. Let $\Ping, \Pong\sset\RR$ be respectively the sets of $t\in\RR$ such that $\AFirst{\sigma}+it$ is a zero or a pole of $L(s)$. We now show that $\Ping$ and $\Pong$ satisfy the hypotheses of lemma \ref{stm::PingPongLemma}. Of course $0\in\Ping$, since $L(\AFirst{\sigma})=0$. Let $u,v\in\RR$ with $u\neq v$ and assume that \begin{equation*} t\definedby\dfrac{u+v}{2}\in\Ping, \end{equation*} that is $L(\AFirst{\sigma}+it)=0$. Consider then the function \begin{equation*} L_t(s)\definedby L(s+it) \end{equation*} and set $s_0=\AFirst{\sigma}+i(u-t)$. Since $L_t(\AFirst{\sigma})=0$ and $Z(s_0)\neq0$ (by corollary \ref{stm::ElleZetaPNT}, being $u-t=(u-v)/2\neq0$), assertion \eqref{stm::ZetaBeurMain::PingPong} of theorem \ref{stm::ZetaBeurZeroFreeRegion} implies that $s_0$ is a zero of $L_t(s)$ if, and only if, $\bar{s_0}$ is a pole of $L_t(s)$.
Observing that $s_0$ is a zero of $L_t(s)$ if, and only if, $u\in\Ping$, and that $\bar{s_0}=\AFirst{\sigma}+i(v-t)$ is a pole of $L_t(s)$ if, and only if, $v\in\Pong$, we obtain that $u\in\Ping$ if, and only if, $v\in\Pong$, as desired. Since obviously $\Ping\cap\Pong=\void$, lemma \ref{stm::PingPongLemma} then forces $\Pong=\void$ and $\Ping=\{0\}$, and this completes the proof of the proposition in the case $L(\AFirst{\sigma})=0$. Assume now that $\AFirst{\sigma}$ is a pole of $L(s)$. Then it suffices to repeat the aforementioned argument interchanging $\Ping$ with $\Pong$ and observing that the function $L_t(s)$ has a pole at $s=\AFirst{\sigma}$ instead of a zero. This completes the proof of the proposition. \end{proof} \bibliographystyle{amsalpha} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} This paper considers the representation of energy storage technologies in electricity sector planning models. These models are typically formulated as optimization problems to find least-cost portfolios for power sector investments and dispatch and have been used in the power industry and for associated energy sector policy analysis since the 1950s. They have been extensively discussed in the operational research and management science literatures \citep{Masse1957,Murphy2005,Singh2009}. Model formulations have been updated in recent years to incorporate technological developments in, and policy supports for, variable renewables (especially wind and solar power) and energy storage technologies. In particular, higher temporal and spatial resolutions are needed to adequately capture variability, a key economic characteristic for renewables and storage \citep{Cole_2017,Collins_2017}. Strategies to capture these features often focus on renewables and not energy storage \citep{Blanford2018,Merrick2016e}, which entails related but distinct modeling considerations \citep{Bistline_2020b}. Specifically, temporal aggregation strategies do not typically include chronology, which is necessary to represent state-of-charge (energy balance) constraints for energy storage systems, raising issues of potential interest to the operational research community. Other decisions that depend on chronology include unit commitment decisions for individual power plants (e.g., considering how many hours a plant might be used) and time-shifting demand like electric vehicle charging. Reduced complexity of electricity models has large impacts on feasibility, cost, and emissions outcomes \citep{Bistline_2020c,Bistline_2017b}, which makes it important to think critically about formulations (for example, \cite{Bistline_2020} demonstrate how electricity model formulations can alter the sign of emissions impacts from energy storage deployment).
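To illustrate why chronology matters, a storage state-of-charge (energy balance) constraint couples consecutive hours; in a stylized form (the notation below is illustrative and is not that of any particular model):

```latex
% Stylized storage state-of-charge constraints (illustrative notation):
%   s_h      -- energy in storage at the end of hour h,
%   c_h, d_h -- charging and discharging during hour h,
%   \eta     -- charging efficiency, \bar{S} -- energy capacity.
\begin{equation*}
s_h = s_{h-1} + \eta c_h - d_h,
\qquad 0 \leq s_h \leq \bar{S},
\qquad h = 1,\ldots,n.
\end{equation*}
% The balance links hour h to hour h-1, so an aggregation that discards
% the ordering of hours cannot evaluate these constraints directly.
```

Any temporal aggregation that retains only a set of unordered representative hours loses the ability to evaluate such coupling constraints.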
This paper evaluates approaches to address this problem of temporal aggregation in electric sector models with energy storage. Storage technologies have become increasingly important in modeling decarbonization and high renewables scenarios, especially as costs decline and deployments increase \citep{Gorman_2020}. However, storage technologies have complex and diverse cost, value, and performance characteristics that make them challenging to model \citep{Bistline_2020b}. One approach is to use merchant price-taker models with historical market data for hypothetical energy storage systems, but these frameworks omit power system feedbacks and have a limited ability to estimate market depth \citep{Evans_2019,Braff_2016}. Production cost models can assess short-run system operations using detailed simulations of unit commitment and dispatch typically over a year, but only capture static systems without changes in investment and thus cannot estimate market depth \citep{EPRI_2020}. Capacity planning and dispatch models -- the focus of this paper -- can help to assess the long-run value of energy storage by accounting for both investment and dispatch effects over a multi-decadal horizon \citep{Santen_2017}. Temporal aggregation is the focus of this work due to both its importance in driving model results and its operational research dimension; however, it is important to keep in mind that it is only one challenge among many in modeling storage \citep{Bistline_2020b}. Additional modeling challenges associated with energy storage are representations of technologies (including hybrid systems and cross-sector interactions), market participation, policies/incentives, and spatial aggregation. 
The novel contributions of this paper comprise a conceptual framing of the modeling problem in Section \ref{sec:model}, an illustration of the numerical challenge at the scale of large interconnected power systems, such as those in North America, Europe, and China, in Section \ref{sec:thechallenge}, and an investigation of solutions in Section \ref{sec:solutions}. A key insight from Section \ref{sec:model} is that aggregation methods in the literature can be expressed as instances of a general representation and that conditions exist for a general lossless aggregation. Finding the minimal such aggregation for a given model input dataset remains an open problem however. Furthermore, the numerical analysis of Section \ref{sec:thechallenge} indicates the limited scope for substantial aggregation in practice across all questions that may be asked of a model. The solutions investigated in Section \ref{sec:solutions} range from alternative modeling paradigms to a decomposition approach that enables the solution of large problems, avoiding the need for aggregation. \section{Background} \label{sec:background} This work investigates the representation of energy storage technologies in capacity planning models, which consider system-level interactions for investment decisions (including storage, generation, and transmission assets) and operational dynamics (which influence and are influenced by investment decisions). Prior work, such as that of \cite{Cruise2019,Zhou2016}, applies operational research approaches to the efficient operation of an individual storage unit. The models underlying our paper consider different questions, for example, how much storage is economical to deploy in a given system, how does storage contribute to public decarbonization policies, and how does it interact with other technologies across interconnected power systems.
More broadly, many interconnected decisions, like choosing the level of storage deployment, are treated endogenously in such capacity planning models, necessitating careful formulation to enable efficient computations while capturing salient system features. We particularly explore in this paper the aggregation of model variables and constraints that are defined over temporal operating periods, and how this aggregation affects the representation of storage technologies. An example of such a variable is the energy discharged from a battery at each hour. Temporal aggregation has significant computational benefits and, in the absence of storage technologies, the temporal dimension was a good candidate for aggregation due to the high degree of redundancy in the associated temporal data. In addition to the computational benefits, \cite{Merrick2019} discuss how aggregation and reduced-form representations are conceptually desirable provided the aggregation does not materially distort relevant model outputs. In the absence of storage, as \cite{Merrick2016e} discusses, and in accordance with the analysis of \cite{Rogers1991} and the aggregation bounds of \cite{Zipkin1980a,Zipkin1980}, a good strategy for temporal aggregation is the gathering together of similar hours. With the presence of energy storage, however, aggregation methods must also maintain a representation of the chronology between periods. \subsection{Value of energy storage} When we aggregate the temporal representation, what representation of energy storage do we wish to maintain? In addition to not distorting the valuation of other technologies, we wish to appropriately value energy storage options, the fundamentals of which we now discuss.
The potential value of energy storage systems is more complex than that of other technologies due to the many services that storage can provide and the difficulty of capturing all value streams in a single framework, since these systems operate over wide spatial and temporal scales and exhibit location-specific variation based on the grid mix, benefiting parties, and market rules \citep{Balducci_2018}. We focus on the value streams that are most relevant in long-run planning models, reflecting projections for potential market depth and services already captured for other technologies; namely, energy and capacity value \citep{Bistline_2020b}. Representation of energy storage value streams not considered in this paper, such as ancillary service provision and transmission deferral, will benefit from the same advances in modeling chronology that enable representation of the energy and capacity value of storage. We define energy value as the ability to take advantage of daily, weekly, and even seasonal arbitrage opportunities; different technologies are more or less suited to capture different arbitrage opportunities \citep{Mongrid_2019}. We define capacity value as the ability to provide electricity during scarcity events, when the price of power is higher than the marginal cost of production.\footnote{Energy and capacity value are defined mathematically in Section \ref{sec:evcv}.} As we next discuss, many temporal aggregation strategies in the literature make \emph{a priori} assumptions about which value streams matter. \subsection{Aggregation approaches in the literature} \label{sec:survey} This section introduces and discusses numerous aggregation approaches found in the literature, but does not provide an exhaustive list. For instance, we are not including the so-called ``infinite reservoir approach,'' which omits the storage balance/state-of-charge constraint.
We also do not discuss offline methods, where storage-related assessments use pre- or post-process calculations that can iterate with the main optimization model \citep{Cole_2017}. \subsubsection{Representative sequences} Representative sequence approaches, and representative day methods in particular, are the most common temporal aggregation strategies in the literature. Representative day approaches harness an intuitive, interpretable structure in the data: daily cycles. Note that while the daily cycle structure is readily apparent in load and solar data, in many regions it is not necessarily present in wind data. The approach no longer involves the aggregation of hours with similar characteristics, but the aggregation of days (or other sequences) with similar characteristics, allowing chronology to be maintained within days, but not necessarily across them (representative weeks are occasionally chosen instead; see, for example, \cite{DeSisternes2013}). The model may choose to deploy storage units to operate within these representative days. Papers on how to choose representative days include \cite{Johnston_2019,Teichgraeber2019,Garcia-Cerezo2019,Liu2017,Nahmmacher2016,Poncelet2016a}. These papers are largely based on clustering methods, with various adaptations adopted to improve the choice of representative days. Implementations typically select from 4 to on the order of 10 days, with hourly resolution or multi-hour blocks; a sample is shown in Table \ref{Table_rep}. \begin{table} \caption{Number of representative days for selected models.} \vskip 6pt \centering \begin{tabular}{l c c} \hline\hline \multicolumn{1}{c}{\textbf{Paper}} & \multicolumn{1}{c}{\textbf{Model}} & \multicolumn{1}{c}{\textbf{Rep.
Days}} \\ \hline \cite{Jayadev2020} & OSeMOSYS & 4 \\ \cite{Nahmmacher2016} & LIMES-EU & 6 \\ \cite{Despres2017} & POLES/EUCAD & 12 \\ \cite{Nelson2012} & SWITCH 1.0 & 24 \\ \hline \end{tabular} \label{Table_rep} \end{table} These aggregation strategies often do not allow linkages across representative sequences (e.g., specifying that energy storage balances must return to zero by the end of each sequence). This focus on intraday linkages suggests that representative day methods are more suitable for some energy storage technologies, such as lithium-ion batteries, than others. The key assumption, however, is that days can be chosen that are representative of the distribution of possible days. As shown in Section \ref{sec:model}, extreme peak pricing periods matter for energy storage valuation, and clustering methods that choose days that capture the center of the distribution may miss crucial extreme days. In turn, knowing \emph{a priori} on which day the peak pricing may fall is challenging, as it is endogenous to the model outcomes and may shift based on the grid mix across different scenarios and regions. \cite{Merrick2016e} shows, for a sample dataset, that when wind and solar profiles are included in addition to load profiles, the number of unique days increases dramatically. In Section \ref{sec:hard}, we illustrate this challenge further at the scale of the contiguous United States and its associated spatial diversity in temporal profiles. \subsubsection{System states} System states approaches identify unique states that comprise time periods with similar characteristics and estimate a probability transition matrix between states \citep{Wogrin2016}. The transition matrix enables the representation of chronology. To keep track of the hourly energy balance and ensure it is within energy storage capacity, the approach includes a structure to infer the hourly balance from the state transitions.
Computationally expensive aspects of this structure are avoided by assuming \emph{a priori} which hours are `charge hours' and which are `discharge hours'. \cite{Tejada-Arango2018} adapt \cite{Wogrin2016} by combining system states and representative days, modeling short-term storage dynamics within the representative period (day) and applying the system state approach to model storage across these periods. In their empirical results, they show their approach performs better than the system states approach alone. \cite{Kotzur2018a} present a similar approach where representative days are designed to represent short-term storage dynamics, while a superposition concept represents seasonal storage dynamics. The assumption is made that a storage unit exhibits identical behaviour within each day, just at different absolute levels. For example, in a world with two days, instead of modeling 48 hours, 24 hours are modeled to represent the relative pattern, and 2 additional variables model the absolute level in each day. Empirical work indicates (for example, calculations in \cite{Merrick2016e}) that there are simply a large number of unique system states to represent, blunting the effectiveness of the system states aggregation. However, as discussed in the context of a more general representation introduced in Section \ref{sec:general}, the method can theoretically lead to a lossless aggregation. \subsubsection{Adapted aggregation methods} \cite{Pineda2018} consider clustering with the constraint that only adjacent periods may be clustered, allowing the aggregated periods to then be treated as chronological within a model. This method shows improved performance relative to choosing representative days and weeks, for the same number of periods.
However, this improved performance is achieved at a number of periods far greater than most models in the field employ, challenging their computational limits (for example, \cite{Pineda2018} compare the performance of adjacent clustering relative to other methods at 672 aggregated periods (28 days/4 weeks), a greater resolution than many models). In Appendix \ref{sec:adj}, we see the challenge of the approach at the scale of the contiguous United States, with diverse load and renewables profiles across regions. If this scale is manageable, it is a promising method to achieve some level of aggregation. \cite{Zhang2018a} and \cite{Duan2019} discuss state compression of Markov processes, which appears to be a promising framework for reasoning about general aggregation strategies that maintain chronology. At the core of the method is singular value decomposition applied to the transition matrix between states. How many states are required, and how similar the members of each state must be, for the representation of storage remains an input to these methods, and depends on the structure of the optimization model where the aggregated data are applied. \section{Framing the Modeling Problem} \label{sec:model} We next introduce the core model structure that underlies this paper and allows us to frame the modeling problem of representing storage, and more generally, chronology. To keep the notation compact, the greenfield long-run minimum-cost model does not include model variables and constraints tracking intertemporal dynamics that reflect investments and retirements in capacity over time. We also do not include here the spatial dimension representing multiple model regions and transmission expansion decisions as would be present in many interconnected power systems models.
Furthermore, the mathematical model structure in this section implicitly assumes perfectly competitive markets with foresight and has no representation of stochasticity, assumptions appropriate for certain questions (for example, see \cite{Murphy2005}). These features and dynamics are important to represent in many applications and have the sizable secondary effect of increasing model computational requirements. That we do not include these features and dynamics in this mathematical representation does not preclude the associated discussion and insights from models with these features, as the structure represented here is at the core of all such models. For example, the model we use for numerical calculations later in this paper, the US-REGEN model \citep{REGEN_2020}, employs the core model structure outlined here in addition to other features and dynamics. \subsection{Notation} \noindent Sets: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item $g=1,..,m$ generator types \item $h=1,..,n$ time periods (e.g.
hours) \end{itemize} Parameters: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item $c^x_g\in\mbox{\rm\bf R}^n$, variable cost of generation from generator type $g$ (typically a vector of constants) \item $\c^z\in\mbox{\rm\bf R}^m$, vector of generator capacity costs \item $c^t\in\mbox{\rm\bf R}$, cost of storage power capacity (``door'' cost in \$/kW) \item $c^u\in\mbox{\rm\bf R}$, cost of storage energy capacity (``room'' cost in \$/kWh) \item $\d\in\mbox{\rm\bf R}^n$, vector of electricity demands across time \item $\a_g\in\mbox{\rm\bf R}^n$, vector of generator availability \end{itemize} Variables: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item $\mbox{\boldmath $x$}_g\in\R_+^n$, vector of generation by generator $g$ \item $\mbox{\boldmath $z$}\in\R_+^m$, vector of generator capacity investment \item $t\in\R_+$, storage power capacity (``door'') investment \item $u\in\R_+$, storage energy capacity (``room'') investment \item $\mbox{\boldmath $r$}\in\mbox{\rm\bf R}^n$, vector of net discharge from storage (negative when charging) \item $\mbox{\boldmath $s$}\in\R_+^n$, vector of storage levels/balance/state-of-charge \end{itemize} \subsection{Model} \[ \begin{array}{rclll} \mbox{\rm minimize }_{\mbox{\boldmath $x$},\mbox{\boldmath $z$},t,u,\mbox{\boldmath $r$},\mbox{\boldmath $s$}} & \sum_{g}c_g^{x}\mbox{\boldmath $x$}_g+\c^{z}\mbox{\boldmath $z$}+c^{t}t+c^{u}u&&&\\ \mbox{\rm subject to } &\sum_g \mbox{\boldmath $x$}_g+\mbox{\boldmath $r$}&= \d&:\mbox{\boldmath $\lambda$}&\\ &\mbox{\boldmath $x$}_g &\leq \a_{g}z_g&:\mbox{\boldmath $\gamma$}_g&\forall g=1,..,m\\ &s_h&=s_{h-1}-r_h&:\Omega_h&\forall h=1,..,n\\ &|\mbox{\boldmath $r$}|&\leq t&:\mbox{\boldmath $\delta$}&\\ &\mbox{\boldmath $s$}&\leq u&:\mbox{\boldmath $\tau$}&\\ \end{array} \addtag \label{mdl:core} \] The objective function minimizes the cost of supplying electricity, including the provision of energy storage technologies.
The first constraint, with dual variable $\mbox{\boldmath $\lambda$}\in\mbox{\rm\bf R}^n$, requires supply in each dispatch period to meet demand. The second constraint, with dual variable $\mbox{\boldmath $\gamma$}\in\mbox{\rm\bf R}^{m\times n}$, states that generation of each technology cannot exceed available capacity in each period. The third constraint, with dual variable $\Omega\in\mbox{\rm\bf R}^n$, tracks the storage balance as the storage unit charges and discharges. Since the current state of the energy storage system is impacted by all previous states (which is uncommon for power sector resources), this state-of-charge constraint requires chronology (i.e., linkages across time) to be represented, which makes storage computationally challenging. The fourth and fifth constraints, with dual variable vectors $\mbox{\boldmath $\delta$}\in\mbox{\rm\bf R}^n$ and $\mbox{\boldmath $\tau$}\in\mbox{\rm\bf R}^n$ respectively, ensure that a) charge/discharge does not exceed the charge/discharge capacity of the unit (i.e., the power capacity or ``door''), and b) the stored energy does not exceed the storage energy capacity (``room''). Finally, note that a number of our variables are defined over the nonnegative real numbers. Note that this general formulation can encompass a range of energy storage technologies such as batteries, pumped hydro, and compressed air energy storage \citep{Mongrid_2019}. The cost structure of energy storage is taken as an input, including the power capacity cost ($c^t$ in \$/kW) and energy capacity cost ($c^u$ in \$/kWh). Outputs include the energy storage power capacity ($t$ in kW), which governs the maximum rated charge and discharge rates, and energy capacity ($u$ in kWh), which bounds the total electricity stored. The ratio of energy to power capacity determines the duration that the storage device can provide rated power (or the length of time needed to charge) and can also be specified exogenously.
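To make the structure concrete, the sketch below solves a deliberately tiny instance of (\ref{mdl:core}) with \texttt{scipy.optimize.linprog}. All costs and profiles are invented for illustration; we take positive $r$ as discharge, assume a cyclic storage balance ($s_0=s_n$), and linearise $|r_h|\leq t$ as two inequalities.

```python
# Tiny numerical instance of the core greenfield planning LP. Two generator
# types (zero-cost "solar" available in the first two hours only, and "gas"),
# four hours, made-up costs; positive r is discharge, cyclic storage balance.
import numpy as np
from scipy.optimize import linprog

m, n = 2, 4                                  # generator types, hours
cx = [0.0, 50.0]                             # variable cost ($/MWh)
cz = [30.0, 20.0]                            # capacity cost ($/MW)
ct, cu = 5.0, 1.0                            # "door" ($/MW) and "room" ($/MWh)
d = [10.0] * n                               # hourly demand (MW)
a = [[1.0, 1.0, 0.0, 0.0],                   # solar availability profile
     [1.0, 1.0, 1.0, 1.0]]                   # gas always available

# Variable layout: [x (m*n), z (m), t, u, r (n), s (n)]
def ix(g, h): return g * n + h
iz, it_, iu = m * n, m * n + m, m * n + m + 1
def ir(h): return m * n + m + 2 + h
def is_(h): return m * n + m + 2 + n + h
nv = m * n + m + 2 + 2 * n

c = np.zeros(nv)
for g in range(m):
    for h in range(n):
        c[ix(g, h)] = cx[g]
c[iz:iz + m] = cz
c[it_], c[iu] = ct, cu

A_eq, b_eq = [], []
for h in range(n):                           # supply: sum_g x_gh + r_h = d_h
    row = np.zeros(nv)
    for g in range(m):
        row[ix(g, h)] = 1.0
    row[ir(h)] = 1.0
    A_eq.append(row); b_eq.append(d[h])
for h in range(n):                           # balance: s_h - s_{h-1} + r_h = 0
    row = np.zeros(nv)
    row[is_(h)] = 1.0
    row[is_((h - 1) % n)] -= 1.0
    row[ir(h)] = 1.0
    A_eq.append(row); b_eq.append(0.0)

A_ub, b_ub = [], []
for g in range(m):                           # x_gh <= a_gh * z_g
    for h in range(n):
        row = np.zeros(nv)
        row[ix(g, h)], row[iz + g] = 1.0, -a[g][h]
        A_ub.append(row); b_ub.append(0.0)
for h in range(n):                           # r_h <= t, -r_h <= t, s_h <= u
    for sgn in (1.0, -1.0):
        row = np.zeros(nv)
        row[ir(h)], row[it_] = sgn, -1.0
        A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(nv)
    row[is_(h)], row[iu] = 1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)

bounds = [(0, None)] * nv
for h in range(n):
    bounds[ir(h)] = (None, None)             # net discharge r is free

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=b_ub,
              A_eq=np.vstack(A_eq), b_eq=b_eq, bounds=bounds)
```

In this toy instance it is cheapest to build only solar plus storage, charging in the two solar hours and discharging in the remaining two, which fixes the optimal ``door'' and ``room'' investments.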
\subsection{Derivations} \label{sec:value} \subsubsection{Marginal value from optimality conditions} For a similar model structure, \cite{Lamont2013} derives optimality conditions relating to the marginal value of investment in power charge/discharge capacity (``door,'' kW) and energy capacity (``room,'' kWh). As noted by \cite{Lamont2013}, the marginal value of these capacities when they are deployed is the sum of rents on their associated constraints: \[ c^u=\sum_h\tau_h \label{eq:room} \addtag \] \[ c^t=\sum_h\delta_h \label{eq:door} \addtag \] These conditions show that the model will invest in storage capacities until the point at which the sum of rents, the marginal value, equals the marginal cost of capacity deployment. All of these values are a function of the grid mix, including the level of energy storage deployment. In an intertemporal model setting, the model will invest such that the net present value of future rents over the time horizon equals the upfront cost of installation. We next derive some further identities from the model structure with the goal of illustrating characteristics of energy storage representation we wish an aggregation strategy to capture. Exploring (\ref{eq:room}) further, and for cases when there is some positive investment in energy storage capacity, we can derive the following (see Appendix \ref{sec:proofs} for derivations associated with this section): \[ c^u=\sum_h(\Omega_{h+1}-\Omega_h)^+ \addtag \label{eq:mvroom} \] That is, the marginal value is the sum of the positive differences in the dual variable of the storage balance constraint. \cite{Lamont2013} points out the relevance of cycles in the structure of the optimal solution, where a cycle comprises the set of periods between successive points at which the storage unit has zero energy stored.
Noting that the point where a monotonic increase in $\Omega$ will reverse is when the quantity of stored energy is at zero, we can derive the following: \[ c^u=\sum_k(\max_{h\in k}\Omega_{h}-\min_{h\in k}\Omega_h)=\sum_k(\Omega_{b(k)}-\Omega_{a(k)}) \addtag \label{eq:mvroom2} \] where the index $k$ runs across the set of charge/discharge cycles of a storage unit, $b(k)$ is the last period of the discharge portion of a cycle $k$, and $a(k)$ is the first period of the charge portion of a cycle $k$. While the location and duration of cycles are endogenous to the model, the cycle is a useful device for considering what comprises the marginal value of a storage unit. Furthermore, (\ref{eq:mvroom2}) has the interesting implication that we can collapse the marginal value obtained from each charge/discharge cycle into two boundary prices. The associated buying and selling price, $\Omega$, can be derived as follows, when the unit is charging or discharging, i.e., when $|r_h|>0$: \[ \Omega_h = \lambda_h+\delta_h^c-\delta_h^d \addtag \] Recalling that $\delta$ is the dual variable (rent) of the door capacity constraint, with $\delta_h^c$ and $\delta_h^d$ its charging and discharging components, we can see that, when that constraint is not binding, $\Omega$ equals $\lambda$, the price of electricity. $\Omega$ may be interpreted as the local price of electricity facing the charging/discharging storage unit, differing from the system price by a congestion charge for entering or exiting the unit. When buying energy, the congestion charge increases the price the unit pays, while when selling, the congestion charge decreases the price received. The extent to which a unit of storage can claim the full price of electricity when arbitraging across periods is thus limited by the price associated with charge/discharge capacity. If the charge/discharge capacity of a storage unit were free, the marginal value of storage would be purely based on electricity price arbitrage.
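As a quick numerical check of (\ref{eq:mvroom}) and (\ref{eq:mvroom2}), take a made-up dual-price path that rises monotonically within each cycle and drops only at cycle boundaries, as the optimality conditions imply:

```python
# Check that the sum of positive hour-to-hour increments of Omega equals the
# sum over cycles of (max - min), for an illustrative dual-price path that is
# nondecreasing within each cycle (storage empty at each cycle boundary).
omega = [3, 5, 9, 9, 2, 6, 8, 1, 4, 4]
cycles = [[3, 5, 9, 9], [2, 6, 8], [1, 4, 4]]   # boundaries at the drops

positive_increments = sum(max(b - a, 0) for a, b in zip(omega, omega[1:]))
cycle_spreads = sum(max(c) - min(c) for c in cycles)
```

Both expressions evaluate to the same number, since each cycle contributes exactly its trough-to-peak rise.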
Since electricity arbitrage value is dependent on peak and off-peak price differentials, models with limited temporal resolution could dampen price variability and lower the value of energy storage \citep{Diaz_2019}. As (\ref{eq:mvroom2}) shows, the greater the dispersion in prices, the greater the marginal value of a storage unit (wind and solar deployment have been shown in modeling studies and market data to increase price variability \citep{Mills_2020}, \emph{ceteris paribus} increasing the marginal value of energy storage). Also, as storage costs decline, the marginal value of the technology can decline while the technology still remains a viable investment. Note that the technology itself decreases its own marginal value as it is increasingly deployed, linking periods and reducing disparities in prices \citep{Denholm_2019,Bistline_2017,deSisternes_2016,Blanford_2015}. Decreasing returns (``value deflation'') also occur for wind and solar, as their economic value declines as their penetration increases. These declines have been observed in a range of actual market settings and prospective modeling studies \citep{Wiser_2017,Bistline_2017,Gowrisankaran_2016,Hirth_2013}. These identities are displayed to provide intuition into the drivers of storage value, and the relationship between energy and power charge/discharge capacity, all at the margin. These are drivers that aggregated model representations of energy storage should maintain. To analyze these countervailing effects and determine which dominate under different conditions, we need a numerical model like the one applied in Section \ref{sec:general} and described in Appendix \ref{sec:regen}. At this point, while we have not explicitly modeled features like round-trip storage efficiency, we note that both the role of cycles and the role of dispersion in electricity prices are fundamental to the marginal value of a storage unit, indicating these are concepts that may be harnessed by aggregation strategies.
\subsubsection{Energy and capacity value} \label{sec:evcv} We next consider the total value realized by a storage unit deployed in the model. Energy value and capacity value are a useful distinction for considering this realized value, and are relevant for assessing the strengths and weaknesses of various aggregation methods. Defining $\mbox{\boldmath $\gamma$}^*$ as the difference between the electricity price, $\mbox{\boldmath $\lambda$}$, and the maximum marginal cost for any dispatched technology (noting from the optimality conditions that $\mbox{\boldmath $\lambda$}=c_g^x+\mbox{\boldmath $\gamma$}_g$ for all dispatched $g$, implying $\gamma^*_h=\min_{g:\,x_{h,g}>0}(\lambda_h-c_g^x)$), we define energy and capacity value as follows: \begin{itemize} \item Energy value: profits from energy arbitrage by a storage unit at prices excluding the scarcity premium of peak pricing periods. We define this as $\mbox{\boldmath $r$}\mbox{\boldmath $\lambda$}^*$, where $\mbox{\boldmath $\lambda$}^*$ is the electricity price, $\mbox{\boldmath $\lambda$}$, adjusted to remove the scarcity premium, $\mbox{\boldmath $\gamma$}^*$. \item Capacity value: value of dispatch in scarce periods, i.e., the value of the scarcity premium captured by the storage unit, $\mbox{\boldmath $r$}\mbox{\boldmath $\gamma$}^*$. Note that the model formulation (\ref{mdl:core}) omits a reserve margin constraint for concision, so the market-clearing constraint with dual variable $\mbox{\boldmath $\lambda$}\in\mbox{\rm\bf R}^n$ embodies both the energy and capacity prices, while in a model with an explicit reserve margin constraint, the dual of that constraint will carry the capacity price. \end{itemize} We would like any representation of energy storage in a model to allow both these value streams to be realized, as missing either can lead to misvaluation.
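The decomposition can be illustrated with a few invented numbers; by construction the two streams sum to the unit's total arbitrage profit (with positive $r$ denoting discharge here):

```python
# Invented prices and dispatch to illustrate the energy/capacity value split;
# the third hour carries a scarcity premium gamma* embedded in lambda.
lam   = [20.0, 25.0, 200.0, 30.0]    # electricity price lambda
gstar = [0.0, 0.0, 150.0, 0.0]       # scarcity premium gamma*
r     = [-5.0, -5.0, 10.0, 0.0]      # storage net discharge (negative = charge)

lam_star = [l - g for l, g in zip(lam, gstar)]   # lambda* = lambda - gamma*
energy_value   = sum(ri * li for ri, li in zip(r, lam_star))
capacity_value = sum(ri * gi for ri, gi in zip(r, gstar))
total_profit   = sum(ri * li for ri, li in zip(r, lam))
```

A representation that suppresses the scarcity spike would recover the energy value term but miss the (here much larger) capacity value term.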
For these streams to be realized, electricity prices, including scarcity premiums, need to be represented and adjusted appropriately as the model endogenously chooses storage deployment levels and the capacity mix of other technologies. Additionally, restricting storage deployment to fixed ratios of solar deployment, for example by considering only solar plus storage as a joint technology, could miss some of the associated value of either individual technology as prices endogenously adjust to different model outcomes. As discussed in Section \ref{sec:hard}, capacity value in particular requires a representation of peak pricing periods, which are difficult to identify \emph{a priori} for large interconnected multi-region systems with many generation technology options. \subsection{General representation of an aggregated model} \label{sec:general} This subsection develops a generalized formulation of a capacity planning model with energy storage that encapsulates both the non-aggregated formulation and the aggregated approaches discussed in Section \ref{sec:survey}. This formulation illustrates common features, and common strengths and weaknesses, across aggregation methods, with a view to aiding the design of future improvements.
With the introduction of the following parameters, we can develop a more general version of (\ref{mdl:core}) from Section \ref{sec:model}: \begin{itemize} \item $\mbox{\boldmath $w$}\in\mbox{\rm\bf R}^n$ weight associated with each period \item $P\in\mbox{\rm\bf R}^{n\times n}$ state transition matrix, where $P_{ij}$ is the probability of jumping from period $i$ to period $j$ \item $\mbox{\boldmath $q$}\in\mbox{\rm\bf R}^n$ duration in a state before transitioning to another state \end{itemize} Introducing $\mbox{\boldmath $w$}$ into the objective function, and $P$ and $\mbox{\boldmath $q$}$ into the storage balance constraint, allows us to present our problem's model structure as in (\ref{mdl:agg}) below: \[ \begin{array}{rcll} \mbox{\rm minimize }_{\mbox{\boldmath $x$}_g,\mbox{\boldmath $z$},t,u,\mbox{\boldmath $r$},\mbox{\boldmath $s$}} & \mbox{\boldmath $w$}\sum_{g}c_g^{x}\mbox{\boldmath $x$}_g+\c^{z}\mbox{\boldmath $z$}+c^{t}t+c^{u}u&&\\ \mbox{\rm subject to } &\sum_g \mbox{\boldmath $x$}_g+\mbox{\boldmath $r$}&= \d&\\ &\mbox{\boldmath $x$}_g &\leq \a_{g}z_g&\forall g=1,..,m\\ &\mbox{\boldmath $s$}&=P\mbox{\boldmath $s$}+\mbox{\boldmath $q$}\mbox{\boldmath $r$}&\\ &|\mbox{\boldmath $r$}|&\leq t&\\ &\mbox{\boldmath $s$}&\leq u&\\ \end{array} \addtag \label{mdl:agg} \] This formulation is equivalent to our earlier formulation (\ref{mdl:core}) when (i) $\mbox{\boldmath $w$}$ is the vector of ones, (ii) $\mbox{\boldmath $q$}$ is the vector of ones, and (iii) $P$ is the identity matrix with all ones shifted one place to the right, i.e., $P_{h-1,h}=1$ for all $h$. Furthermore, the aggregation methods mentioned above, with a compressed temporal dimension, may all be considered special cases of the formulation (\ref{mdl:agg}).
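As a quick structural check, the sketch below verifies that the shifted-identity $P$ recovers the hourly recursion $s_h=s_{h-1}+r_h$. Two assumptions are made for the sketch: a cyclic wrap (the last period links back to the first), and the balance product taken with the transpose of $P$ as defined above, so that the row for period $h$ picks up $s_{h-1}$:

```python
# With q = w = 1 and the shifted-identity transition matrix (plus an assumed
# cyclic wrap), the aggregated balance reproduces the hourly recursion.
n = 6
P = [[1.0 if j == (i + 1) % n else 0.0 for j in range(n)] for i in range(n)]
r = [2.0, -1.0, 3.0, 0.0, -2.0, -2.0]        # sums to zero, as cyclicity requires

s = [0.0] * n
for h in range(n):                           # hourly recursion s_h = s_{h-1} + r_h
    s[h] = s[h - 1] + r[h]

# Residual of the aggregated balance, with P transposed in the product
resid = [s[h] - (sum(P[i][h] * s[i] for i in range(n)) + r[h])
         for h in range(n)]
```

All residuals vanish, confirming that the non-aggregated model is the special case claimed.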
For example, in a representative day scenario, $\mbox{\boldmath $w$}$ encodes the weight of each day, $\mbox{\boldmath $q$}=1$, and $P$ encodes the representative day structure with cycles of 24 states each, connected or unconnected depending on how intraday storage is treated. For the adjacent hierarchical clustering proposal of \cite{Pineda2018}, $\mbox{\boldmath $w$}=\mbox{\boldmath $q$}$ and $P$ again comprises the identity matrix shifted one unit to the right. \begin{figure} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.97\textwidth]{states_h.pdf} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.97\textwidth]{states_a.pdf} \end{minipage} \caption{Aggregation of hourly data into system states. For example, let us say hours 1, 25, and 39 are similar to each other and could then be aggregated into a common state, State $A$.} \label{fig:state_transition} \end{figure} The system states approach in its raw form involves aggregating hours and then, given the aggregation, calculating $P, \mbox{\boldmath $q$}, \mbox{\boldmath $w$}$ empirically for that aggregation. $P, \mbox{\boldmath $q$}, \mbox{\boldmath $w$}$ provide a synthetic mapping to represent reduced-form chronology. Figure \ref{fig:state_transition} illustrates how each of the hours can be grouped with other similar hours into aggregate States $A,B,C,\dots$, with the probabilities of moving from one state to another, or remaining in a particular state, calculated empirically. Let us say that the resulting $P$ matrix implies the stored energy incoming to State $C$ is weighted 0.25 from State $B$ and 0.75 from State $D$. Interpreting this is challenging; one plausible reading is that each state has an ``expected value'' of storage available when making dispatch decisions in the context of a planning model.
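Estimating the transition structure empirically is straightforward once a mapping from hours to states is fixed; a minimal sketch, with an invented label sequence, is:

```python
from collections import Counter

# Empirical transition probabilities from a (made-up) hour-to-state mapping,
# in the spirit of the system states figure above: count consecutive state
# pairs and normalise each row by its outgoing total.
labels = list("ABBCABBCDBBCAD")

states = sorted(set(labels))
pair_counts = Counter(zip(labels, labels[1:]))
P = {i: {j: 0.0 for j in states} for i in states}
for i in states:
    outgoing = sum(pair_counts[(i, j)] for j in states)
    for j in states:
        P[i][j] = pair_counts[(i, j)] / outgoing if outgoing else 0.0
```

Each row of the resulting $P$ sums to one (for states with any outgoing transition), and mixed rows such as that of state $A$ are exactly the probabilistic linkages whose interpretation is discussed above.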
Similarly, the optimality condition on the marginal value of energy storage capacity (\ref{eq:mvroom}) shifts, under the more general formulation, as follows: \[ \quad c^u=\sum_h(\Omega_{h+1}-\Omega_h)^+ \quad \rightarrow \quad c^u=\sum_{s}(\mbox{\boldmath $p$}_{s}\mbox{\boldmath $\Omega$}-\Omega_{s})^+ \] where $\mbox{\boldmath $p$}_{s}$ denotes the row of transition probabilities out of state $s$. This states that the value the energy storage unit receives for transferring energy to the future shifts from the price of the next period to the \emph{expected price} across all periods to which the current period is linked. Mapping this identity to cycles is more abstract in this setting, and can adversely affect the performance of a straightforward implementation of this method, which is computationally explored in Section \ref{sec:mv}. Reflecting on Figure \ref{fig:state_transition}, it might appear that a `raw' system states approach could never do well, allowing unrealistic energy transfers. Figures \ref{fig:qw1} and \ref{fig:qw2} illustrate a thought experiment where, by inspection, we can see that an aggregated model is equivalent to a non-aggregated model. In this example, the number of states is compressed from six to two while maintaining a representation of chronology. More particularly, in this case, our six states comprised two unique states repeated in an order that made it possible to capture the chronology. Crucial also is the distinction between the weight of a state in the objective function, $\mbox{\boldmath $w$}$, and how long one remains in a state upon entering, $\mbox{\boldmath $q$}$. In our thought experiment, due to structure identified in the input data, it is possible to parameterize the general formulation (\ref{mdl:agg}) in such a way that aggregation does not distort model outcomes.
We can generalize from this experiment to show (see Appendix \ref{sec:proof-ident}) that an aggregation scheme comprising a mapping between hours $h$ and states $s$ that retains the following properties will allow an aggregated model to be equivalent to a non-aggregated model: \begin{itemize} \item Hours that are members of a given state have equivalent temporal characteristics: $a_{i}=a_{j}$, $d_{i}=d_{j}$ for all hours $(i,j)$ mapped to a state $s$ \item Each state has only one incoming connection from another state, and only one outgoing connection: each row of the $P$ matrix contains one $1$ entry, with the remainder all zeros \item For each $s$, each subsequence of hours that maps to it is of equal length, $q_s$ \end{itemize} As an example, a year comprising 365 identical days would fit these conditions. Similarly, a year with two unique days, one always following the other, would also meet these conditions. Additionally, the previously mentioned adjacent clustering idea would also meet these conditions. Note that while this would be an aggregation with a guarantee, it is not proven to be a minimal representation. However, we may ask whether there is a systematic way of aggregating model input data so as to meet these conditions, particularly when structure is not obvious in real data. Also, an aggregation that is simply close to meeting these conditions might be sufficient for real-world applications. We pose these questions for future research.
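The conditions are mechanical to verify for a candidate hour-to-state mapping. The sketch below checks them on the two-unique-day example, with all data invented for illustration:

```python
from itertools import groupby

# Toy check of the three lossless-aggregation conditions on a "year" with two
# unique day types, one always following the other.
day1, day2 = [5.0] * 24, [9.0] * 24
demand = (day1 + day2) * 4                           # 8 alternating days
state = ['S1' if (h // 24) % 2 == 0 else 'S2' for h in range(len(demand))]

# Condition 1: hours mapped to the same state share identical characteristics
cond1 = all(len({d for d, s in zip(demand, state) if s == k}) == 1
            for k in ('S1', 'S2'))

runs = [(k, len(list(g))) for k, g in groupby(state)]
# Condition 2: each state has a single successor state
succ = {}
for (k1, _), (k2, _) in zip(runs, runs[1:]):
    succ.setdefault(k1, set()).add(k2)
cond2 = all(len(v) == 1 for v in succ.values())

# Condition 3: every subsequence mapping to a state has the same length q_s
lengths = {}
for k, length in runs:
    lengths.setdefault(k, set()).add(length)
cond3 = all(len(v) == 1 for v in lengths.values())
```

All three conditions hold for this constructed pattern, so the 192 hours could be collapsed to two states with $q_s=24$ without distorting model outcomes.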
\begin{figure} \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.5\textwidth]{te.pdf} \caption{Illustrative example: periods classified into like states} \label{fig:qw1} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.6\textwidth]{wq.pdf} \caption{Illustrative example: aggregation of Figure \ref{fig:qw1}} \label{fig:qw2} \end{subfigure} \end{figure} \section{The Numerical Challenge} \label{sec:thechallenge} Following our conceptual framing of the modeling problem, this section illustrates the numerical challenge to represent energy storage technologies at the spatial and temporal scale of models used for pressing policy and strategic questions. \subsection{Challenge of identifying critical days} \label{sec:hard} As discussed in earlier sections, one feature of the capacity planning problem with energy storage that makes temporal aggregation challenging is that we do not know \emph{a priori} which extreme periods are the important ones in driving energy storage's value. In part, this difficulty arises from the dependence of prices on the grid mix and level of storage deployment. This endogeneity suggests that a temporal aggregation method should work for arbitrary levels of energy storage deployment, variable renewable shares, and other system conditions (i.e., capture all relevant extremes of time-series variables such as hourly load, potential wind output, and potential solar output). To propose an approach to capture these extremes and to illustrate the challenge of representative day selection, this subsection conducts an empirical investigation with time-series data from the United States that populate the Regional Economy, Greenhouse Gas, and Energy (REGEN) model \citep{REGEN_2020,Bistline_2019,Blanford2014}.
The model comprises an intertemporal planning model for the contiguous United States that chooses electricity capacity investments and retirements over time to meet electricity demand in line with modeled policies and technical constraints. Its core logic is consistent with the model structure introduced in Section \ref{sec:model}. \cite{Merrick2016e} showed that the number of unique days in a sample dataset was in the hundreds, indicating the magnitude of the challenge of finding an appropriate aggregation based around representative days, given the aforementioned difficulty of knowing \emph{ex ante} where the peak pricing periods may lie. Referring to the previously introduced concepts of energy value and capacity value, both are endogenous to the dispatch \emph{and} capacity choices made by the model. We do know that the peak price that drives capacity value will occur when the system is under strain, for example when demand is high and renewables availability is low. Even more straining on the system is a sequence of consecutive hours with such properties, since storage systems are energy-limited resources. The other extreme of low prices also matters for the valuation of variable renewables and storage. These occur, for example, when demand is low and renewables availability is high. The idea behind this experiment is to compress an annual dataset of temporal availability across the United States into `cumulative days', summing load and variable renewables within each day, before normalizing. Over the resulting 365 days in the dataset, we apply the strategy of \cite{Blanford2018} to select `extreme days'. This procedure involves finding the day closest to each load-wind-solar extreme vertex and choosing the minimum number of days such that, for each region, at least one day is chosen that is within some radius of each vertex point. Figure \ref{fig:cumulative_day} displays the results for one region (Texas) in the 15-region dataset.
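A stylised, single-region version of this vertex search can be sketched as follows; the data are random stand-ins, and the multi-region radius and covering logic of the actual procedure is omitted:

```python
import random

# Stylised extreme-day search: each day is a normalised (load, wind, solar)
# triple, and we keep the day nearest to each extreme vertex of the unit cube.
random.seed(1)
days = [(random.random(), random.random(), random.random())
        for _ in range(365)]
vertices = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]

def dist2(p, q):
    """Squared Euclidean distance between two triples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

extreme_days = {min(range(len(days)), key=lambda d: dist2(days[d], v))
                for v in vertices}
```

For a single region at most eight days are selected (one per vertex, fewer if a day covers several vertices); the covering requirement across many regions with diverse profiles is what drives the count into the hundreds.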
The extreme day finding algorithm in this experiment returns circa 100 extreme days to meet these requirements across the contiguous United States, not the 4 to 10 days typically used in ``representative day'' implementations for electric sector models (Table \ref{Table_rep}). This result highlights the challenge of finding a small set of representative days. Note that this is a \emph{lower bound} on the number of days required, as it ignores diverging patterns within the days, along with the days necessary to cover interior points of the distribution, which could be important in capturing storage value during normal periods. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{texas.png} \caption{Extreme cumulative days example for Texas using normalized load, wind, and solar data from the US-REGEN model \citep{REGEN_2020}. Red points are cumulative days (365 in the annual dataset), the bubbles are the local vertex requirements, and blue points are the chosen days. Blue points inside a bubble are the days that meet the local requirements, and blue points outside the bubble meet the requirements in other regions.} \label{fig:cumulative_day} \end{figure} Also note that this experiment only considers three time-series variables: load, potential wind output, and potential solar output. In a region where one renewable resource is known \emph{a priori} to not be in-the-money, the problem may be easier. However, there could be additional time-series variables that may correlate with energy storage value (e.g., hydro output) or additional series to represent wind/solar technologies with different resource profiles, which would increase the number of representative days.
It is also important to note that, although the number of required days could be lower if only a single region is considered, the interconnectedness of markets is important for capturing system operations and economics, which increases the dimensionality of the optimization problem. It could also be desirable for a single-region application to have many subregions to characterize differences in resources, which would increase the number of representative days. \subsection{Performance of selected methods} \label{sec:mv} To learn about the in-practice performance of a number of selected aggregation methods, we conduct a series of numerical experiments with a `static' version of the aforementioned US-REGEN model. In the static version, we only solve for one model year, not the full intertemporal problem. The reduced computational burden allows us to run the non-aggregated 8760 resolution case, and compare associated model outputs to model outputs at aggregated resolutions. A challenge in conducting meaningful experiments in this fashion is that while a given aggregation method may perform well under the particular circumstances of a numerical run, it may not do so well under different model parameterizations. Table \ref{tbl:runsA} compares outputs from a) the 8760 non-aggregated case, b) what we term the `Expected Value Method', a direct implementation of the general formulation outlined in Section \ref{sec:general}, and c) a representative day formulation. For the latter, we choose the peak and median load days from each season of the year (8 days, 192 periods), while for the former we choose 192 periods in the fashion described by \cite{Blanford2018} that preserves load, wind, and solar distributions, and directly calculate a probability transition matrix between periods. The scenario is a carbon price scenario designed to test the performance of the methods in a case where variable renewables are incentivized, and the static model year is 2030.
To focus on the impact of renewables and storage, there is a constraint on new nuclear construction, but this also leads to a narrowing of model options under the carbon price scenario, which in turn reduces the scope for error from an aggregation method, highlighting the challenge of numeric assessment of methods that we wish to perform generally. Perhaps the most striking takeaway from Table \ref{tbl:runsA} is that the evaluation of aggregation methods depends on the question being asked of the model. We see that the aggregation instigates only a small change in the objective value of the model, while creating a dramatic speedup in runtime (RAM constraints lengthen the 8760 runtime in this example). By many criteria, the change in objective function may be considered a very small price for the computational gains. However, if the question is the size of the market for storage capacity under this carbon price scenario, we see a greater deviation (the `net capacity value' is higher for the Expected Value method in this case as it deploys less capacity, keeping prices higher, allowing the capacity it does deploy to achieve greater returns). For the sort of applications these models are used for, and the questions asked, we often want whole subcomponents of the aggregated model to match corresponding components of the original model. Similarly, Table \ref{tbl:runsA} contains results spatially aggregated across the USA, masking more divergence from the 8760 case at the regional level. Another question raised by Table \ref{tbl:runsA} is how the representative day aggregation does so well, given that, by the nature of representative day choice, many potentially critical system days are omitted. We know that days have an intrinsic structure and that if load is still the dominant effect, a representative day choice based on load, as this one is, could perform well. Similarly, we consider lithium-ion battery storage, and not longer-cycle storage forms.
As regards renewables representation, perhaps errors in one region are cancelling out errors in others. However, there may also simply be an element of luck in this case. Table \ref{tbl:runsB} shows a repeat of the experiment with a slight change in how the representative days are chosen. Selection B and C respectively replace the median load day in each season with the day either side of the median in the ranking of load days. With this slight change, we can see divergence in outcomes, significant in some cases, and this could be amplified in different scenarios. \begin{table} \caption{{Comparison of aggregated model outputs relative to non-aggregated 8760 case. The scenario is a carbon price scenario (\$81/tonne) for the 2030 model year with assumptions about limited CCS availability}} \vskip 6pt \centering \begin{tabular}{l | c | c c} \hline\hline \multicolumn{1}{c}{\textbf{Model Output}} & \multicolumn{1}{c}{\textbf{8760}} & \multicolumn{1}{c}{\textbf{Expected Value}} & \multicolumn{1}{c}{\textbf{Rep. Days}} \\ & & \multicolumn{1}{c}{\textbf{(relative)}} & \multicolumn{1}{c}{\textbf{(relative)}} \\ \hline Storage Room (GWh) & 98.33 & 0.96 & 0.23 \\ Storage Door (GW) & 37.42 & 0.6 & 0.37\\ Objective Value (Billion \$) & 176.386 & 1.016 & 1.004 \\ Net Capacity Value {(\$/kW)} & 15.65 & 2.62 & 0.98\\ Net Energy Value {(\$/kW)}& 20.26 & 0.75 & 0.55\\ CO$_2$ Emissions (MtCO$_2$) & 370 & 1.06 & 1.13\\ Variable renewable energy [VRE] (TWh) & 1627 & 0.99 & 0.98\\ Natural Gas Capacity (GW) & 434.05 & 1.02 & 0.94 \\ Speed (s) & 6504 & 0.004 & 0.003\\ \hline \end{tabular} \label{tbl:runsA} \end{table} \begin{table} \caption{{Comparison of varying representative day choices}} \vskip 6pt \centering \begin{tabular}{l | c c c} \hline\hline \multicolumn{1}{c}{\textbf{Model Output}} & \multicolumn{1}{c}{\textbf{Rep. Days A}} & \multicolumn{1}{c}{\textbf{Rep. Days B}} & \multicolumn{1}{c}{\textbf{Rep. 
Days C}} \\ & \multicolumn{1}{c}{\textbf{(relative)}} & \multicolumn{1}{c}{\textbf{(relative)}} & \multicolumn{1}{c}{\textbf{(relative)}} \\ \hline Storage Room & 0.23 & 0.59&0.7\\ Storage Door & 0.37&0.81&0.74\\ Objective Value & 1.004 &0.94&0.9 \\ Net Capacity Value & 0.98 &1.17&0.91\\ Net Energy Value & 0.55 &0.39&1.21\\ CO$_2$ Emissions & 1.13 &0.72&0.8\\ VRE & 0.98 &1.24&1.12\\ Natural Gas GT Capacity & 0.99 &0.99&0.99\\ Natural Gas CC Capacity & 0.91 &0.74&0.74\\ \hline \end{tabular} \label{tbl:runsB} \end{table} Figure \ref{fig:mv} plots deployment of storage `room' capacity across different costs, tracing a marginal value curve. Note again that this is for only one instance of the parameterization. Table \ref{tbl:runsA} referred to the \$175/kW point on the y-axis. We see the `expected value curve' matching the `8760 curve' along a portion of the curve and the `representative day curve' doing better on another. At the lowest storage cost, we see the representative day method undervaluing storage, likely missing opportunities due to days omitted, while the expected value method overvalues storage, seeing unrealistic opportunities in its probabilistic connections between states. As discussed, how much these deviations matter depends upon the question and applications at hand. \begin{figure} \centering \includegraphics[width=0.99\textwidth,trim={1cm 4cm 1cm 4cm},clip]{mvcurve_paper.pdf} \caption{{Marginal value curve of storage across different temporal representations}} \label{fig:mv} \end{figure} \section{Solutions} \label{sec:solutions} Based on our exploration of the modeling problem of representing energy storage, this section outlines solutions, and pathways to solutions, to the representation problem: namely, a) improvement of aggregation methods, b) alternative modeling paradigms, and c) avoiding aggregation through the harnessing of computational and algorithmic advances.
\subsection{Improvement of aggregation methods} In the improvement of aggregation methods, the goal is to find temporal aggregation strategies that are better than excluding chronology and/or energy storage technologies from long-term models, since omitting energy storage is likely not an acceptable approach given current costs and expectations for future competitiveness. In addition to speed/feasibility and the capacity to address intertemporal questions, other desirable properties in a temporal aggregation strategy relate to the accuracy of the approximation, including the ability to: \begin{itemize} \item Characterize both energy and capacity value accurately \item Reflect changes in the marginal value of energy storage at different levels of deployment (namely, decreasing marginal returns at increasing penetration levels) \item Retain chronological information across diurnal, weekly, and seasonal timescales \item Preserve joint variation of all hourly time-series data across all model regions (e.g., so that variable renewables are simultaneously valued accurately) \end{itemize} Thus, a reduced-form representation should work for arbitrary levels of energy storage deployment, variable renewable shares, and other system conditions. As an example, Tables \ref{tbl:runsA} and \ref{tbl:runsB} showed aggregation strategies that are not robust to different model parameterizations. Section \ref{sec:general} showed that the aggregation approaches from the literature introduced in Section \ref{sec:survey} could all be considered instances of a general aggregate representation. Furthermore, conditions were shown under which this general aggregation would produce the same model outputs as a non-aggregated model. Of the methods discussed, adjacent clustering met these conditions; however, further compression was possible through identification of cyclic structure in model data.
A systematic approach that could find the most compressed representation meeting these conditions would improve all the methods discussed, and essentially collapse them into one. \subsection{Alternate paradigms} Underlying our challenge of aggregating electricity planning models in the presence of storage is that we are, for a particular set of assumptions, trying to find the minimum-cost solution across time and all possible capacity mixes that conforms to policy and technical constraints (while evaluating investment and dispatch simultaneously). An alternative to finding aggregation strategies or greatly increasing computation time is to consider two inter-related options of (a) changing the question, or (b) changing the modeling paradigm. As regards the question, strategies that are employed include a strong decoupling of planning and operation models, where storage is only thoroughly represented in an operation model that considers one year as a snapshot, or alternately, a planning model considers one future year only \citep{Bistline_2020,Brown_2018} with all 8,760 hours and annualized costs. However, the challenge with both these approaches is that, with technological developments like energy storage, the planning and operations models have become increasingly coupled, with outcomes in one strongly affecting the other. Although a ``static analysis'' approach allows many insights and is well-suited for certain questions, it misses intertemporal dynamics that are important for evaluating the time path of investments and retirements, including meeting intermediate goals with long-lived capital investments. A related method is to use a ``sequential myopic'' (sometimes called ``recursive dynamic'') approach, where each year is solved individually and the capital stock is carried over. However, this strategy also misses the intertemporal, forward-looking nature of the capacity planning and dispatch problem.
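To make this contrast concrete, the following toy sketch (our own construction, with arbitrary two-technology, two-period cost numbers) shows a sequential myopic solve selecting a cheap-to-build but costly-to-run technology in the first period, and ending with a higher two-period total cost than a forward-looking joint solve:

```python
from itertools import product

# Toy two-period illustration (not from the model in this paper) of how a
# sequential myopic solve can miss the forward-looking optimum. One unit of
# capacity is needed in each period; capital built in period 1 persists.
techs = {"gas": {"capex": 1.0, "opex": 1.2},   # cheap to build, costly to run
         "wind": {"capex": 3.0, "opex": 0.0}}  # costly to build, free to run

def total_cost(build1, build2):
    c = techs[build1]["capex"] + techs[build1]["opex"]   # period 1
    if build2 == build1:            # reuse existing capital in period 2
        c += techs[build1]["opex"]
    else:                           # build anew in period 2 and operate it
        c += techs[build2]["capex"] + techs[build2]["opex"]
    return c

# myopic: pick the cheapest period-1 option, then optimize period 2 given it
b1 = min(techs, key=lambda t: techs[t]["capex"] + techs[t]["opex"])
myopic = min(total_cost(b1, b2) for b2 in techs)
# forward-looking: optimize both periods jointly
forward = min(total_cost(c1, c2) for c1, c2 in product(techs, techs))
assert forward < myopic   # here 3.0 < 3.4: myopia locks in the wrong capital
```

The myopic solve commits to gas (2.2 versus 3.0 in period 1) and then pays its operating cost again, while the joint solve builds wind once and runs it for free.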
As regards the modeling paradigm, we can note that we have seen the capacity value rest on a small subset of periods, which we do not necessarily know \emph{a priori}. This could perhaps be harnessed by solving the model multiple times, starting with a coarse resolution, learning from the solution, and adding additional periods as warranted, an approach similar in philosophy to column generation. \cite{Munoz2016} and \cite{Teichgraeber2020} present approaches of this type. This broad idea, along with its interactions with model treatment of uncertainty, is a promising area for future research.\footnote{For example, does uncertainty in model input data warrant simplified model structures?} \subsection{Avoiding temporal aggregation} \label{sec:decomp} Decomposition approaches may allow the harnessing of computational and algorithmic advances to avoid temporal aggregation altogether. This section outlines a scheme for applying the Alternating Direction Method of Multipliers (ADMM) approach to capacity planning problems at large scale for intertemporally consistent decisions. \cite{Hoschle2018} propose an ADMM-based method for computing risk-averse equilibrium in electricity markets, whereas, with the exception of \cite{Frew2016}, there have been, to our knowledge, limited applications of ADMM to capacity planning problems.
The ADMM approach, discussed further in \cite{Merrickdiss} and \cite{Merrick2017}, and drawing upon \cite{Bertsekas2015,Boyd2011,Bertsekas1989,Ye2020}, decomposes the general problem of minimizing $f(\mbox{\boldmath $x$})$ subject to $A\mbox{\boldmath $x$}=\b$ into $n$ blocks as follows: \[ \min f_1(\mbox{\boldmath $x$}_1)+\dots+f_n(\mbox{\boldmath $x$}_n) \] such that \[ A_1\mbox{\boldmath $x$}_1+\dots+A_n\mbox{\boldmath $x$}_n=\b\quad\text{or}\quad A_i\mbox{\boldmath $x$}_i-\mbox{\boldmath $y$}_i=0\text{ }\forall i, \sum_{i=1}^n\mbox{\boldmath $y$}_i=\b \] The $\mbox{\boldmath $x$}_i$ blocks are then solved separately, possibly in parallel: \[ \mbox{\boldmath $x$}_i^{k+1}=\text{argmin}_{\mbox{\boldmath $x$}_i} f_i(\mbox{\boldmath $x$}_i)-\alpha_i^k(A_i\mbox{\boldmath $x$}_i-\mbox{\boldmath $y$}_i^k)+\frac{\beta}{2}||A_i\mbox{\boldmath $x$}_i-\mbox{\boldmath $y$}_i^k||^2 \] After solving for each block of $\mbox{\boldmath $x$}_i$ variables, the dual variable vector $\mbox{\boldmath $\alpha$}$ and `target' vector $\mbox{\boldmath $y$}$ are updated, with the magnitude of the update depending on the global $A\mbox{\boldmath $x$}-\b$ residual. The process is then repeated until the residuals converge to $0$. The bulk of the computational work is done in solving for the primal $\mbox{\boldmath $x$}$ values at each iteration, and the ability to solve these blocks in parallel allows harnessing of distributed computing to solve a problem at scale. The updating of the dual variables $\mbox{\boldmath $\alpha$}$ and $\mbox{\boldmath $y$}$ is performed via closed-form updates on a central computer. In the context of our electricity sector planning model application, an important consideration is how to choose the allocation of variables to different blocks. A major consideration is how many constraints are `split' by the decomposition. Each constraint that is split adds to the dimensionality of the convergence criteria, and to the number of dual variables and `target' variables to be updated at each iteration.
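To make the iteration concrete, the following toy sketch (ours; it uses the standard scaled-dual form of the updates rather than the $y$-splitting above, with arbitrary penalty and iteration settings) applies two-block ADMM to a small equality-constrained quadratic program:

```python
# Two-block ADMM sketch (illustrative only; problem and settings are ours).
# Solve: min (x1 - 1)^2 + (x2 - 3)^2  s.t.  x1 + x2 = 2,
# whose optimum is x1 = 0, x2 = 2.
beta = 1.0                   # quadratic penalty weight (the paper's beta)
x1, x2, u = 0.0, 0.0, 0.0    # u is the scaled dual variable
for _ in range(500):
    # block updates: closed-form argmin of f_i plus the augmented terms
    x1 = (2.0 + beta * (2.0 - x2 - u)) / (2.0 + beta)
    x2 = (6.0 + beta * (2.0 - x1 - u)) / (2.0 + beta)
    u += x1 + x2 - 2.0       # dual update driven by the global residual
assert abs(x1 - 0.0) < 1e-6 and abs(x2 - 2.0) < 1e-6
```

In the full planning model, each block update is itself a dispatch or investment subproblem solved in parallel, with the dual and `target' updates performed centrally.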
However, not splitting enough constraints can imply, depending on the structure, too many blocks for efficient convergence \citep{Zhu_2019}. There are also useful economic interpretations of the decomposition. We have implemented this scheme in the aforementioned US-REGEN model by separating operation decisions and capacity decisions, with system-wide consequences of decisions priced in through the augmented terms in the objective function. Furthermore, operation decisions are decomposed into separate, fast-solving hourly dispatch blocks. Price and quantity information is exchanged between the individual dispatch blocks and the investment/retirement capacity decisions at each iteration. While convergence is guaranteed, the number of iterations to convergence and solution quality can be a challenge in practice, as has been noted in other applications of ADMM \citep{Boyd2011}, so the appropriateness of this method depends upon the question being asked. A further challenge is that manual implementation of the algorithm may be required (as was the case in the US-REGEN implementation), rather than simply handing off a specified model to a solver. It is not yet a catch-all solution, but it does provide solutions to previously unsolvable problems, including the ability to compute a benchmark against which to check the accuracy of temporal aggregation methods for intertemporal capacity planning and dispatch models with energy storage. \section{To Conclude} \label{sec:conclusion} This paper has provided context to improve the representation of energy storage in electricity planning models, and thus indirectly improve the modeling that informs important societal decisions about the power sector, the energy sector more broadly, and decarbonization strategies. The goal of this paper has been to consolidate work on aggregation for electricity sector planning models with energy storage, and to build a rigorous foundation for future modeling progress.
Specifically, we contribute to this literature by a) appraising approaches to address temporal aggregation in electricity planning models, b) framing the modeling problem of developing a general representation of an aggregation that values storage technologies appropriately, c) illustrating from a numerical perspective the challenge of finding such an aggregation at relevant geographic scales, and d) investigating solutions to the problem. The core challenge is identifying methods that can represent chronology while being robust to the wide variety of technology and policy scenarios a planning model of the electricity sector must consider. We also note that we frequently want an aggregated model to be robust in representing all model outputs, not only the aggregate objective function value. A substantial increase in periods appears unavoidable within the realm of existing modeling methods, and ideally this work will provide clarity in proceeding so that modeling support for large-scale public and private electricity-related decisions is not distorted simply by model structure. In this way, our analysis extends the literature demonstrating how the reduced complexity of electricity models has a large influence on feasibility, cost, and emissions outcomes \citep{Bistline_2020c,Diaz_2019,Blanford2018,Mallapragada_2018,Brown_2018,Bistline_2017b,Cole_2017}. Section \ref{sec:solutions} identified a number of questions and directions for future research, including several particularly ripe for contributions from the operational research community. Furthermore, while the focus in this paper is detailed power systems models, there are research implications for related models. For example, reduced-form representations of electric sector planning are embedded in integrated assessment models with broader geographical and sectoral coverage, which comes at the expense of spatial, temporal, and technological detail \citep{Santen_2017}.
Integrated assessment models do not capture storage operations or investment explicitly but instead assume deployment based on solar/wind penetration. Given that we show here that such \emph{ex ante} assumptions about storage resources are likely an overly restrictive framing, how can detailed power system models be used to inform higher-level models that attempt to capture investment and dispatch dynamics? Conversely, how can more detailed energy storage system models and production cost models be used or linked to inform capacity planning models \citep{Bistline_2020b}? \newpage \section*{Acknowledgements} The authors would like to thank those who have read previous versions of this manuscript, as well as participants in two workshops, for their many helpful suggestions. The views expressed in this paper are those of the authors alone and do not necessarily reflect those of their institutions.
\section{Related Work} The effectiveness of any BO approach over hybrid spaces depends critically on the choice of surrogate model. Prior work explored a variety of surrogate models. SMAC \cite{SMAC} employs random forest, which may suffer from inaccurate uncertainty quantification due to its frequentist estimation. TPE \cite{TPE} models each input dimension {\em independently} by a kernel density estimator, which can be restrictive due to the large number of input dimensions and the lack of inter-dependency among models of different input dimensions. MiVaBO \cite{MiVaBO-IJCAI2020} employs a Bayesian linear regressor, defining features that capture the discrete part using the BOCS model \cite{BOCS,PSR}, the continuous part using random Fourier features \cite{RFF}, and pairwise interactions between continuous and discrete features. As the number of parameters increases, it needs many training examples to learn an accurate statistical model. GP-based models overcome the drawbacks of all the above methods. \cite{lobato} provided a solution for BO over discrete spaces using an input-transformed kernel. A recent work referred to as CoCaBO \cite{cocabo} employs a sum kernel (summing a Hamming kernel over the discrete subspace and a RBF kernel over the continuous subspace) to learn GP models and showed good results over SMAC and TPE. Unfortunately, the sum kernel captures limited interactions between discrete and continuous variables. In contrast, our additive hybrid diffusion kernel can capture higher-order interactions among hybrid variables, and our data-driven approach can automatically learn the strengths of these interactions from training data. HyperBand (HB) \cite{HB} and its model-based variant BOHB \cite{BOHB} are efficient {\em multi-fidelity methods} for hyper-parameter optimization that build on existing methods to optimize hybrid spaces. Our HyBO approach is complementary to this line of work.
Prior methods perform search over discrete and continuous subspaces (e.g., gradient descent) to solve the acquisition function optimization problem. SMAC employs a {\em hand-designed} local search procedure. MiVaBO uses integer program solvers to search the discrete subspace. Learning methods to improve the accuracy of search \cite{L2S-DISCO} are complementary to SMAC, MiVaBO, and HyBO. CoCaBO maintains a separate multi-armed bandit for each discrete variable and employs the EXP3 algorithm \cite{EXP3} to select their values {\em independently}. This method does not exploit dependencies among variables, which can be detrimental to accuracy. TPE samples from the learned density estimator to pick the best input for evaluation. \section{Diffusion Kernels over Hybrid Structures} \label{sec:hybrid-kernels} We first provide the details of the key mathematical and computational tools needed to construct hybrid diffusion kernels. Next, we describe the algorithm to automatically construct additive diffusion kernels over hybrid structures. Finally, we present theoretical analysis to show that hybrid diffusion kernels satisfy the universal approximation property. \subsection{Key Mathematical and Computational Tools}\label{sec:key_math} Diffusion kernels \cite{diffusion_kernel,diffusion_statistical_manifold} are inspired by the diffusion processes occurring in physical systems like heat and gases. The mathematical formulation of these processes naturally lends itself to kernels over both continuous and discrete spaces (e.g., sequences, trees, and graphs). \noindent {\bf Diffusion kernel over continuous spaces.} The popular radial basis function (RBF) kernel (also known as the Gaussian kernel) \cite{diffusion_kernel} is defined as follows: \begin{align} k(x, x') = \frac{1}{2 \pi \sigma^2} e^{-\|x-x'\|^2/2\sigma^2} \label{eqn:rbf_kernel} \end{align} where $\sigma$ is the length scale hyper-parameter.
This is the solution of the continuous diffusion (heat) equation below: \begin{align} \frac{\partial}{\partial t}k_{x_0} (x, t) = \Delta k_{x_0} (x, t) \label{eqn:continuous_heat_equation} \end{align} where $\Delta$ = $\frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \cdots + \frac{\partial^2}{\partial x_D^2}$ is the second-order differential operator known as the {\em Laplacian operator}, and $k_{x_0} (x, t)$ = $k(x, x')$ with $x'$ = $x_0$ and $t$ = $\sigma^2/2$. \subsection{Diffusion Kernel over discrete spaces} The idea of diffusion kernels for continuous spaces is extended to discrete structures (e.g., sequences, graphs) \cite{diffusion_kernel_original} by utilizing the spectral properties of a graph representation of the discrete space. A discrete analogue of Equation \ref{eqn:continuous_heat_equation} can be constructed by employing the matrix exponential of a graph and the {\em graph Laplacian operator} $L$ as given below: \begin{align} \frac{\partial}{\partial \beta} e^{\beta L} = L e^{\beta L} \label{eqn:discrete_diffusion_kernel} \end{align} where $L$ is the graph Laplacian of a suitable graph representation of the discrete input space and $\beta$ is a hyper-parameter of the resulting diffusion kernel, analogous to the length scale parameter $\sigma$ of the RBF kernel. The solution of Equation \ref{eqn:discrete_diffusion_kernel} defines a positive-definite kernel for discrete spaces known as the discrete diffusion kernel. According to Equation \ref{eqn:discrete_diffusion_kernel}, one important ingredient required for defining diffusion kernels on discrete spaces is a suitable graph representation of the discrete space. One such representation was proposed in a recent work \cite{combo}. In this case, the entire discrete space is represented by a combinatorial graph $G$. Each node in the vertex set $V$ of the graph corresponds to one candidate assignment of all the discrete variables.
Two nodes are connected by an edge if the Hamming distance between the corresponding assignments for all discrete variables is exactly one. The diffusion kernel over this representation is defined as follows: \begin{align} k(V, V) &= \exp(-\beta L(G)) \label{eqn:main_discrete_kernel}\\ k(V, V) &= \Phi \exp(-\beta \Pi) \Phi^T \end{align} where $\Phi$ = $[\phi_1, \cdots, \phi_{|V|}]$ is the eigenvector matrix and $\Pi$ = $[\pi_1, \cdots, \pi_{|V|}]$ is the eigenvalue matrix, where the $\phi_i$'s and $\pi_i$'s are the eigenvectors and eigenvalues of the graph Laplacian $L(G)$ respectively. Although this graph representation contains an exponential number of nodes, \cite{combo} computes the graph Laplacian $L(G)$ by decomposing it over the Cartesian product ($\diamond$) of $m$ (number of discrete variables) sub-graphs ($G_1, G_2 \cdots, G_m$), with each sub-graph $G_i$ representing one variable individually. This algorithmic approach has time-complexity $O(\sum_{i=1}^m (C(v_i))^3)$, where $C(v_i)$ is the number of candidate values (arity) for the $i$th discrete variable. However, this method is computationally expensive, especially for problems with large arity. To avoid this computational challenge, we leverage a prior observation in \cite{diffusion_kernel_original} which provides a {\em closed form} of the discrete diffusion kernel by exploiting the structure of the above combinatorial graph representation. We explain this observation for binary variables $\{0, 1\}$. From its definition in Equation \ref{eqn:main_discrete_kernel}, the discrete diffusion kernel for a single-dimensional input is: \begin{align}\label{eqn:binary_dd_closed_form} k(x_d, x_d') = \left\{ \begin{array}{ll} (1 - e^{-2\beta}) & \mbox{if } x_d \neq x_d' \\ (1 + e^{-2\beta}) & \mbox{if } x_d = x_d' \end{array} \right.
\end{align} Since the kernel over $m > 1$ dimensions is defined using the Kronecker product over the $m$ dimensions, the above expression extends easily to the multi-dimensional setting, giving: \begin{align} k(x_d, x_d') = \prod_{i=1}^m \left(\frac{1 - e^{-2\beta_i}}{1 + e^{-2\beta_i}}\right)^{\delta(x_d^i, x_d'^i)} \end{align} where $\delta(x_d^i, x_d'^i)$ = $0$ if $x_d^i$ is equal to $x_d'^i$ and $1$ otherwise. The subscript $d$ denotes that the variables are discrete and the superscript refers to the $i$th dimension of the discrete subspace. For general discrete spaces (with arbitrary categories), we follow the same observation \cite{diffusion_kernel_original} and use the following constant-time expression of the discrete diffusion kernel in our method: \begin{align} k(x_d, x_d') = \prod_{i=1}^m \left( \frac{1-e^{-C(v_i)\beta_i}}{1 + (C(v_i)-1) e^{-C(v_i)\beta_i}} \right)^{\delta(x_d^i, x_d'^i)} \label{eqn:final_discrete_diffusion} \end{align} \subsection{Diffusion Kernels over Hybrid Spaces} \noindent {\bf Unifying view of diffusion kernels.} Our choice of diffusion kernels is motivated by the fact that they can be naturally defined for both discrete and continuous spaces. In fact, there is a natural transition of the diffusion kernel from discrete to continuous space achieved by a continuous-space limit operation. More generally, both discrete and continuous diffusion kernels can be seen as continuous limit operations on two parameters of random walks: {\em time} and {\em space}. For illustration, consider a random walk on an evenly spaced grid where the mean time between jumps is $t$ and the mean gap between two points is $s$. If $t \to 0$, the resulting continuous-time, discrete-space random walk generates the diffusion kernel on discrete spaces. Additionally, in the limit of the grid spacing $s$ going to zero, the kernel approaches the continuous diffusion kernel.
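The constant-time expression in Equation \ref{eqn:final_discrete_diffusion} is straightforward to implement. The sketch below (function and variable names are ours) evaluates it and checks that, for binary variables, each mismatched dimension contributes the ratio of the two cases in the binary closed form, $(1-e^{-2\beta_i})/(1+e^{-2\beta_i}) = \tanh(\beta_i)$:

```python
import math

# Closed-form discrete diffusion kernel over m discrete variables, one
# factor per dimension, raised to delta = 1 only where x and x' disagree.
def discrete_diffusion_kernel(x, x_prime, betas, arities):
    val = 1.0
    for xi, xj, beta, c in zip(x, x_prime, betas, arities):
        if xi != xj:
            val *= (1.0 - math.exp(-c * beta)) / \
                   (1.0 + (c - 1.0) * math.exp(-c * beta))
    return val

x, xp = (0, 1, 1), (0, 0, 1)          # mismatch only in dimension 2
betas, arities = (0.5, 0.7, 0.3), (2, 2, 2)
assert discrete_diffusion_kernel(x, x, betas, arities) == 1.0  # k(x, x) = 1
# binary case (arity 2): the mismatched factor reduces to tanh(beta)
assert abs(discrete_diffusion_kernel(x, xp, betas, arities)
           - math.tanh(0.7)) < 1e-12
```

Symmetry and $k(x,x)=1$ follow directly from the product form.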
\noindent {\bf Algorithm to construct hybrid diffusion kernels.} We exploit the general formulation of additive Gaussian process kernels \cite{additive_gp_kernels} to define an {\em additive hybrid diffusion} kernel over hybrid spaces. The key idea is to assign a base kernel {\em for each input dimension $i \in \{1, 2, \cdots, m+n\}$}, where $m$ and $n$ stand for the number of discrete and continuous variables in the hybrid space $\mathcal{X}$, and construct an overall kernel by summing all possible orders of interaction (up to $m+n$) between these base kernels. In our case, the RBF kernel and the discrete diffusion kernel act as the base kernels for continuous and discrete input dimensions respectively. The $p^{th}$ order of interaction (called the {\em $p^{th}$ additive kernel}) is defined as given below: \begin{align} \mathcal{K}_{p} = \theta_p^2 \sum_{1\leq i_1 < i_2 < \cdots < i_p\leq m+n} \left(\prod_{d=1}^p k_{i_d}(x_{i_d}, x_{i_d}')\right) \label{eqn:additive_kernel_def} \end{align} where $\theta_p$ is a hyper-parameter associated with each additive kernel and $k_{i_d}$ is the base kernel for the input dimension $i_d$. In words, the $p$th additive kernel is a sum of ${m+n \choose p}$ terms, where each term is a product of $p$ distinct base kernels. Estimation of the $\theta_p$ hyper-parameters from data allows automatic identification of important orders of interaction for a given application. The overall {\em additive hybrid diffusion kernel} $\mathcal{K}_{HYB}(x, x')$ over hybrid spaces is defined as the sum of all orders of interaction as given below: \begin{align} \mathcal{K}_{HYB} &= \sum_{p=1}^{m+n} \mathcal{K}_{p} \\ \mathcal{K}_{HYB} &= \sum_{p=1}^{m+n} \left(\theta_p^2 \sum_{i_1 < \cdots < i_p} \prod_{d=1}^p k_{i_d}(x_{i_d}, x_{i_d}')\right) \label{eqn:additive_diff_kernel} \end{align} It should be noted that the RHS in Equation \ref{eqn:additive_diff_kernel} requires computing a sum over an exponential number of terms.
However, this sum can be computed in polynomial time using the Newton-Girard formula for elementary symmetric polynomials \cite{additive_gp_kernels}. It is an efficient formula that computes the $p^{th}$ additive kernel recursively as given below: \begin{align} \mathcal{K}_{p} = \theta_p^2 \cdot \left(\frac{1}{p} \sum_{j=1}^p (-1)^{(j-1)} \mathcal{K}_{p-j} S_j\right) \end{align} where $S_j = \sum_{i=1}^{m+n} k_i^{j}$ is the $j$th power sum of all base kernels $k_i$, and the base case for the recursion is $\mathcal{K}_0 = 1$. This recursive algorithm for computing the additive hybrid diffusion kernel has time complexity $\mathcal{O}((m+n)^2)$. \noindent {\bf Data-driven specialization of kernel for a given application.} In real-world applications, the importance of different orders of interaction can vary for optimizing the overall performance of the BO approach (i.e., minimizing the number of expensive function evaluations to uncover high-quality hybrid structures). For example, in some applications we may not require all orders of interaction and only a few will suffice. The $\theta_p$ hyper-parameters in the additive hybrid diffusion kernel formulation allow us to identify the strength/contribution of the $p$th order of interaction for a given application in a {\em data-driven} manner. We could compute these parameters (along with the hyper-parameters of each base kernel) by maximizing the marginal log-likelihood, but we instead consider a fully Bayesian treatment, defining a prior distribution for each of them. This is important to account for the uncertainty of the hyper-parameters across BO iterations.
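The Newton-Girard recursion above can be checked directly against the brute-force subset sum. The sketch below (ours) computes the elementary symmetric terms of toy base-kernel values, i.e., the sums in the definition of $\mathcal{K}_p$ before the $\theta_p^2$ weights are applied, and compares them with explicit enumeration:

```python
import math
from itertools import combinations

# Newton-Girard recursion for elementary symmetric polynomials e_p of the
# base-kernel values (the p-th additive term before the theta_p^2 weight).
def additive_terms(base_vals, p_max):
    s = [sum(k ** j for k in base_vals) for j in range(p_max + 1)]  # power sums
    e = [1.0]                                   # base case e_0 = 1
    for p in range(1, p_max + 1):
        e.append(sum((-1) ** (j - 1) * e[p - j] * s[j]
                     for j in range(1, p + 1)) / p)
    return e

base_vals = [0.9, 0.5, 0.7, 0.3]   # toy base-kernel evaluations k_i(x_i, x_i')
e = additive_terms(base_vals, len(base_vals))
for p in range(1, len(base_vals) + 1):
    # brute force: sum of products over all p-subsets of base kernels
    brute = sum(math.prod(c) for c in combinations(base_vals, p))
    assert math.isclose(e[p], brute)
```

The recursion touches each $(p, j)$ pair once, matching the quadratic complexity noted above, versus the exponential number of subsets in the direct sum.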
The acquisition function $\mathcal{AF}(x)$ is computed by marginalizing over the hyper-parameters as given below: \begin{align} \mathcal{AF}(x; \mathcal{D}) = \int \mathcal{AF}(x; \mathcal{D}, \Theta) p(\Theta|\mathcal{D}) d\Theta \label{acquisition} \end{align} where $\Theta$ is a variable representing all the hyper-parameters ($\sigma$ for the continuous diffusion kernel, $\beta$ for the discrete diffusion kernel, and $\theta$ for the strengths of different orders of interaction in the hybrid diffusion kernel) and $\mathcal{D}$ represents the aggregate dataset containing the hybrid structure and function evaluation pairs. The posterior distribution over the hyper-parameters is computed using slice sampling \cite{slice_sampling}. \vspace{-1.0ex} \subsection{Theoretical Analysis}\label{sec:theoretical} Intuitively, a natural question to ask about the modeling power of a kernel is whether (given enough data) it can approximate (with respect to a suitable metric) any black-box function defined over hybrid spaces. This is a minimum requirement that should guide our choice of kernel in the given problem setting. This question has been studied widely in the form of a key property called the {\em universality} of a kernel \cite{steinwart_univeral,micchelli_universal,sri_univeral,mania_univeral}. In this section, we prove the universality of the {\em additive hybrid diffusion kernel} by combining the existing result on the universality of the RBF (Gaussian) kernel with a novel result proving the universality of discrete diffusion kernels. \vspace{-1.0ex} \begin{restatable}{proposition}{cl} \cite{steinwart_univeral,micchelli_universal} Let $\mathcal{X}_c$ be a compact and non-empty subset of $\mathbb{R}^n$.
The RBF kernel in Equation \ref{eqn:rbf_kernel} is a universal kernel on $\mathcal{X}_c$.\label{theorem:universal_cont} \end{restatable} \noindent A kernel $k$ defined on an input space $\mathcal{X}_c$ has a unique correspondence with an associated Reproducing Kernel Hilbert Space (RKHS) of functions $\mathcal{H}_k$ defined on $\mathcal{X}_c$ \cite{svm_book}. For compact metric input spaces $\mathcal{X}_c$, a kernel $k$ is called universal if the RKHS $\mathcal{H}_k$ defined by it is dense in the space of continuous functions $C(\mathcal{X}_c)$. \cite{steinwart_univeral} proved the universality of the RBF (Gaussian) kernel with respect to the uniform norm. \cite{micchelli_universal} established universality for a larger class of translation-invariant kernels. \cite{sri_univeral} discussed various notions of universality and connected them to the concept of {\em characteristic kernels}. \vspace{-1.5ex} \begin{restatable}{proposition}{dl} Let $\mathcal{X}_d$ be the discrete space $\{0, 1\}^m$ and let a pseudo-Boolean function on $\mathcal{X}_d$ be defined as $f : \mathcal{X}_d \mapsto \mathbb{R}$. The discrete diffusion kernel is a universal kernel on $\mathcal{X}_d$. \label{theorem:universal_discrete} \end{restatable} {\noindent \bf Proof.} A Reproducing Kernel Hilbert Space $\mathcal{H}_k$ associated with a kernel $k: \mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}$ is defined as: \begin{align} \mathcal{H}_k = cl(span\{k(x, \cdot), \forall x \in \mathcal{X}\}) \end{align} where $cl$ represents the closure and $k(x, \cdot)$ is called the feature map of $x$ \cite{svm_book}. In our setting, a kernel $k$ defined on the discrete input space $\mathcal{X}_d$ is universal if and only if any pseudo-Boolean function $f$ can be written as a linear combination of functions ($k(x_{i_d}, \cdot), \forall x_{i_d} \in \mathcal{X}_d$) in the RKHS $\mathcal{H}_k$ \cite{mania_univeral,gretton_universal}, i.e.
\begin{align} \forall f : \mathcal{X}_d \mapsto \mathbb{R}; \quad \exists a_i \in \mathbb{R}; f = \sum_i a_i k(x_{i_d}, \cdot); \end{align} We prove that this is true by computing the explicit form of the functions ($k(x_{i_d}, \cdot), \forall x_{i_d} \in \mathcal{X}_d$) in the RKHS $\mathcal{H}_k$ of the discrete diffusion kernel. To see this, we exploit the structure of the combinatorial graph representation of the discrete space discussed in Section \ref{sec:key_math}. The discrete diffusion kernel is defined in terms of the eigenvectors $\phi_i$ and eigenvalues $\pi_i$ of the graph Laplacian $L(G)$ as follows: \begin{align} k(x_d, x_d') &= \sum_{i=1}^{2^m} \phi_i[x_d] \exp(-\beta \pi_i) \phi_i[x_d'] \label{eqn:kernel} \end{align} Since the combinatorial graph $G$ is generated by the Cartesian product over sub-graphs $G_i$ (one for each discrete variable), the eigenvector term $\phi_i[x_d]$ can be calculated via an explicit formula, i.e., $\phi_i[x_d] = (-1)^{w_i^T x_d}$, where $w_i$ is a binary vector of size $m$ (the number of discrete variables) \cite{chung1997spectral}. \begin{align} k(x_d, x_d') &= \sum_{i=1}^{2^m} (-1)^{w_i^T x_d} \exp(-\beta \pi_i) (-1)^{w_i^T x_d'} \\ <k(x_d, \cdot), & k(x_d', \cdot)> = \sum_{i=1}^{2^m} (-1)^{w_i^T x_d} \exp(-\beta \pi_i) (-1)^{w_i^T x_d'} \end{align} where the inner product in the LHS follows from the reproducing property \cite{svm_book} of a kernel $k$. Therefore, the functions $k(x_d, \cdot)$ in the RKHS $\mathcal{H}_k$ of the discrete diffusion kernel are of the form $\{(-1)^{w^T x_d} : w \in \{0,1\}^m\}$, which is the well-known {\em Walsh basis} \cite{walsh_basis} for pseudo-Boolean functions. Therefore, any pseudo-Boolean function $f$ can be represented by a linear combination of functions in $\mathcal{H}_k$ since they form a basis. \vspace{-1.5ex} \begin{restatable}{theorem}{mt} Let $\mathcal{X}_c$ be a compact and non-empty subset of $\mathbb{R}^n$ and $\kappa_c$ be the RBF kernel on $\mathcal{X}_c$.
Let $\mathcal{X}_d$ be the discrete space $\{0, 1\}^m$ and let $\kappa_d$ be the discrete diffusion kernel on $\mathcal{X}_d$. The additive hybrid diffusion kernel defined in Equation \ref{eqn:additive_diff_kernel}, instantiated with $\kappa_c$ and $\kappa_d$ for the continuous and discrete spaces respectively, is a universal kernel for the hybrid space $\mathcal{X}_c \times \mathcal{X}_d$. \label{theorem:universal_hybrid} \end{restatable} \noindent According to Equation \ref{eqn:additive_kernel_def}, any $p$th-order interaction term in the additive hybrid diffusion kernel is defined as $\left(\prod_{d=1}^p k_{i_d}(x_{i_d}, x_{i_d}')\right)$. Therefore, since each $k_{i_d}$ is universal over its corresponding dimension $\mathcal{X}_{i_d}$ (which is true from Propositions 1 and 2), we need to show that the product $\left(\prod_{d=1}^p k_{i_d}(x_{i_d}, x_{i_d}')\right)$ is universal over the product of dimensions $\mathcal{X}_{i_1} \times \mathcal{X}_{i_2} \cdots \times \mathcal{X}_{i_p}$. This was proven by Lemma A.5 in \cite{heirarchical_steinwart}. We provide the lemma here for completeness. \begin{lemma}{From \cite{heirarchical_steinwart}}\label{tensor-universal} Let $\mathcal{X}\subset \mathbb{R}^m$ be a compact and non-empty subset, $I,J\subset \{1,\dots,m\}$ be non-empty, and $k_I$ and $k_J$ be universal kernels on $\mathcal{X}_I$ and $\mathcal{X}_J$, respectively. Then $k_I\otimes k_J$ defined by \begin{displaymath} k_I\otimes k_J(x , x' ) := k_I(x_I, x'_I) \cdot k_J(x_J, x'_J) \end{displaymath} for all $x , x' \in \mathcal{X}_{I} \times \mathcal{X}_{J}$ is a universal kernel on $\mathcal{X}_{I} \times \mathcal{X}_{J}$. \end{lemma} Since both the continuous and discrete spaces are compact and Lemma \ref{tensor-universal} holds for arbitrary compact spaces, each order of interaction is universal with respect to its corresponding ambient dimension $\mathcal{X}_{i_1} \times \mathcal{X}_{i_2} \cdots \times \mathcal{X}_{i_p}$.
In particular, it holds for the $(m+n)$th order of interaction, which is defined over the entire hybrid space $\mathcal{X}_c \times \mathcal{X}_d$, thereby proving the theorem. \section{Problem Setup and Hybrid Bayesian Optimization Approach} \noindent {\bf Problem Setup.} Let $\mathcal{X}$ be a hybrid space to be optimized over, where each element $x \in \mathcal{X}$ is a hybrid structure. Without loss of generality, let each hybrid structure $x$ = $(x_d \in \mathcal{X}_d, x_c \in \mathcal{X}_c) \in \mathcal{X}$ be represented using $m$ discrete variables and $n$ continuous variables, where $x_d$ and $x_c$ stand for the discrete and continuous sub-spaces of $\mathcal{X}$. Let each discrete variable $v_d$ from $x_d$ take candidate values from a set $C(v_d)$ and each continuous variable $v_c$ from $x_c$ take values from a compact subset of $\mathbb{R}$. In parts of the ML literature, a distinction is made between categorical and discrete variables based on their values: {\em categorical} refers to an unordered set (e.g., different types of optimizers for neural network training) and {\em discrete} refers to an ordered set (e.g., number of layers in a neural network). We do not make such a distinction because our HyBO approach works for both cases. Concretely, by our definition, a categorical variable is also a discrete variable, i.e., $C(v_d)$ is simply the set of candidate values for the categorical variable $v_d$. We are given a space of hybrid structures $\mathcal{X}$. We assume an unknown, expensive real-valued objective function $\mathcal{F}: \mathcal{X} \mapsto \mathbb{R}$, which can evaluate each hybrid structure $x$ (also called an experiment) and produces an output $y$ = $\mathcal{F}(x)$. For example, in the high-entropy alloy optimization application, $x_d$ corresponds to the presence/absence of metals, $x_c$ corresponds to their relative concentrations, and $\mathcal{F}(x)$ corresponds to running a physical lab experiment using additive manufacturing techniques.
The main goal is to find a hybrid structure $x \in \mathcal{X}$ that approximately optimizes $\mathcal{F}$ by conducting a limited number of evaluations and observing their outcomes. \noindent {\bf Bayesian Optimization Framework.} BO is a very efficient framework to solve global optimization problems using {\em black-box evaluations of expensive objective functions} \cite{BO-Survey:2016}. BO algorithms intelligently select the next input for evaluation guided by a learned statistical model to quickly direct the search towards optimal inputs. The three key elements of the BO framework are: {\em 1) Statistical model} of the true function $\mathcal{F}(x)$. A {\em Gaussian process (GP)} \cite{GP-Book} is the most popular choice of statistical model. GPs allow us to incorporate domain knowledge by defining an appropriate kernel over the input space and provide good uncertainty quantification. A GP over a space $\mathcal{X}$ is a random process from $\mathcal{X}$ to $\mathbb{R}$. It is characterized by a mean function $\mu : \mathcal{X} \mapsto \mathbb{R}$ and a covariance or kernel function $k : \mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}$. {\em 2) Acquisition function} ($\mathcal{AF}$) to score the utility of evaluating a candidate input $x \in \mathcal{X}$ based on the statistical model $\mathcal{M}$. Expected improvement (EI) \cite{EI} is a prototypical acquisition function. {\em 3) Optimization procedure} to select the best-scoring candidate input for evaluation according to $\mathcal{AF}$.
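These three elements can be illustrated with a minimal sketch: a zero-mean GP posterior with a plain RBF kernel over a toy 2-D continuous space and the EI acquisition scored on a random candidate set. This is purely illustrative (it is not the paper's hybrid diffusion kernel, and all names, length scales, and the toy objective are our own assumptions).

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel matrix between row-wise inputs A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    # Element 1: standard zero-mean GP regression posterior.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best_y):
    # Element 2: EI for maximization, E[max(f - best_y, 0)] under the posterior.
    sigma = np.sqrt(var)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
Xtr = rng.uniform(-1, 1, size=(5, 2))
ytr = np.sin(3 * Xtr[:, 0]) + Xtr[:, 1]      # toy objective evaluations
Xcand = rng.uniform(-1, 1, size=(200, 2))    # candidate inputs
mu, var = gp_posterior(Xtr, ytr, Xcand)
ei = expected_improvement(mu, var, ytr.max())
x_next = Xcand[np.argmax(ei)]                # element 3: pick best-scoring input
```

HyBO replaces the RBF kernel here with the additive hybrid diffusion kernel and the candidate-set argmax with an alternating continuous/discrete search.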
\vspace{-1.0ex} \begin{algorithm}[H] \caption{HyBO Approach} \footnotesize \textbf{Input}: $\mathcal{X}$ = Hybrid input space, $\mathcal{K}(x, x')$ = Kernel over hybrid structures, $\mathcal{AF}(\mathcal{M},x)$ = Acquisition function parametrized by model $\mathcal{M}$ and input $x$, $\mathcal{F}(x)$ = expensive objective function \\ \textbf{Output}: $\hat{x}_{best}$, the best structure \begin{algorithmic}[1] \STATE Initialize $\mathcal{D}_0 \leftarrow$ initial training data; and $t \leftarrow$ 0 \REPEAT \STATE Learn statistical model: $\mathcal{M}_t \leftarrow$ \textsc{GP-Learn}($\mathcal{D}_t$, $\mathcal{K}$) \STATE Compute the next structure to evaluate: \\ $x_{t+1} \leftarrow \mbox{arg}\,max_{x \in \mathcal{X}} \, \mathcal{AF}(\mathcal{M}_t,x)$ \\ \begin{ALC@g} \STATE $x_c \leftarrow$ Optimize continuous subspace conditioned on assignment to discrete variables $x_d$ \STATE $x_d \leftarrow$ Optimize discrete subspace conditioned on assignment to continuous variables $x_c$ \end{ALC@g} \STATE Evaluate objective function $\mathcal{F}(x)$ at $x_{t+1}$ to get $y_{t+1}$ \STATE Aggregate the data: $\mathcal{D}_{t+1} \leftarrow \mathcal{D}_{t} \cup \left\{(x_{t+1}, y_{t+1})\right\}$ \STATE $t \leftarrow t+1$ \UNTIL{convergence or maximum iterations} \STATE $\hat{x}_{best} \leftarrow \mbox{arg}\,max_{x_t \in \mathcal{D}} \, y_t$ \STATE \textbf{return} the best uncovered hybrid structure $\hat{x}_{best}$ \end{algorithmic} \label{alg:HyBO} \end{algorithm} \vspace{-3.0ex} \noindent {\bf Hybrid Bayesian Optimization Approach.} Our {\em HyBO} approach instantiates the generic BO framework with a statistical model and an acquisition function optimization procedure tailored to hybrid spaces (see Algorithm~\ref{alg:HyBO}). {\em Statistical model over hybrid structures.} We employ GPs to build statistical models.
To accurately model the complex interactions between discrete and continuous variables, we invoke a principled approach to {\em automatically} construct additive diffusion kernels over hybrid structures by leveraging diffusion kernels over continuous and discrete spaces. {\em Acquisition function optimization.} Suppose $\mathcal{M}_t$ is the statistical model at iteration $t$. Let us assume that $\mathcal{AF}(\mathcal{M}_t, x)$ is the acquisition function that needs to be optimized to select the next hybrid structure $x_{t+1}$ for function evaluation. We solve this problem using an iterative procedure that alternates between searching the continuous sub-space ($x_c$) and the discrete sub-space ($x_d$). For searching the continuous and discrete sub-spaces, we employ CMA-ES \cite{CMA-ES} and hill-climbing with restarts, respectively. We observed that {\em one} iteration of optimizing the continuous and discrete subspaces gave good results and that the results were not sensitive to additional iterations. All results of HyBO are with one iteration. \section{Introduction} A large number of science and engineering applications involve optimizing hybrid spaces (mixtures of discrete and continuous input variables) guided by expensive black-box function evaluations. For example, in materials design optimization, discrete variables correspond to the presence/absence of primitive elements and continuous variables correspond to their relative concentrations, and evaluation of each design involves performing an expensive physical lab experiment. A popular and effective framework for optimizing expensive black-box functions is Bayesian optimization (BO) \cite{BO-Survey:2016,bo_tutorial,greenhill2020bayesian,MESMO,ACDesign,USEMO,USEMOC,MESMOC,belakaria2020PSD}. The key idea behind BO is to learn a surrogate statistical model and intelligently select the sequence of inputs for evaluation to approximately optimize the unknown objective.
Gaussian process (GP) \cite{GP-Book} is the most popular choice for learning statistical models. GPs allow us to incorporate domain knowledge about the problem in the form of a kernel over the input space and provide good uncertainty quantification. GPs have been successfully applied for both continuous \cite{BO-Survey:2016,MF-MESMO,iMOCA} and discrete spaces \cite{combo,MerCBO,reviewer_ref_3}. However, as we discuss in the related work section, there is very limited work on BO methods to optimize hybrid spaces \cite{SMAC,SMAC:TR2010,TPE,MiVaBO-IJCAI2020,cocabo}. Most of them employ non-GP based surrogate models, as it is challenging to define a generic kernel over hybrid spaces that can account for complex interactions between variables. To precisely fill this gap in our knowledge, we propose a novel approach referred to as {\em {\bf Hy}brid {\bf B}ayesian {\bf O}ptimization (HyBO)}. HyBO builds GP based surrogate models using diffusion kernels, which are naturally defined over continuous \cite{diffusion_kernel} and discrete spaces \cite{diffusion_kernel_original}. We develop a principled approach to construct diffusion kernels over hybrid spaces. This approach employs the general formulation of additive Gaussian process kernels \cite{additive_gp_kernels} to define {\em additive hybrid diffusion} kernels. The key idea is to assign a base kernel for each discrete/continuous variable and construct an overall kernel by summing over all possible orders of interaction between these kernels. This construction procedure has two advantages: 1) It allows us to leverage existing kernels for continuous and discrete spaces; and 2) It can automatically identify the strength of different orders of interaction in a data-driven manner for a given application. A key question about the modeling strength of this hybrid diffusion kernel is whether, given sufficient data, it can approximate any black-box function defined over hybrid spaces.
This question has been studied in the past in terms of a property called {\em universality} of a kernel \cite{steinwart_univeral,micchelli_universal,sri_univeral,mania_univeral}. We prove that the proposed hybrid diffusion kernel has the universal approximation property by composing a known result for continuous diffusion kernels with a novel result for discrete diffusion kernels. Our theoretical results have broader significance going beyond the BO literature. Our experiments on diverse synthetic benchmarks and real-world applications show that HyBO performs significantly better than state-of-the-art methods. We also empirically demonstrate that the superiority of HyBO's performance is due to the better surrogate model resulting from the proposed additive hybrid diffusion kernel. \noindent {\bf Contributions.} The key contribution of this paper is the development and evaluation of the HyBO approach to perform BO over hybrid spaces. Specific contributions include: \vspace{-1.5ex} \begin{itemize} \setlength\itemsep{0em} \item Development of a principled approach to construct additive diffusion kernels over hybrid spaces for building GP based surrogate statistical models. \item Theoretical analysis to prove that the additive hybrid diffusion kernel has the universal approximation property. \item Experiments on synthetic and real-world benchmarks to show that HyBO significantly improves over state-of-the-art methods. The code and data are available on the GitHub repository \url{https://github.com/aryandeshwal/HyBO}. \end{itemize} \section{Experiments and Results} We first describe our experimental setup. Next, we discuss experimental results along different dimensions. \vspace{-1.0ex} \subsection{Benchmark Domains} \vspace{0.5ex} \noindent {\bf Synthetic benchmark suite.} \texttt{bbob-mixint} is a challenging mixed-integer black-box optimization benchmark suite \cite{bbox_mixint} that contains problems of varying difficulty.
This benchmark suite is available via the COCO platform\footnote{\url{https://github.com/numbbo/coco}}. We ran experiments with multiple problems from this benchmark suite, but for brevity, we present canonical results on four benchmarks (shown in Table \ref{tab:bbox}), noting that all the results show similar trends. \begin{table}[t!] \centering \begin{tabular}{|l | c | c |} \hline Name & Name in the suite & Dimension \\ \hline \hline Function 1 & f001\_i01\_d10 & 10 (8d, 2c) \\ Function 2 & f001\_i02\_d10 & 10 (8d, 2c) \\ Function 3 & f001\_i01\_d20 & 20 (16d, 4c) \\ Function 4 & f001\_i02\_d20 & 20 (16d, 4c) \\ \hline \end{tabular} \vspace{-1ex} \caption{Benchmark problems from the bbob-mixint suite.} \label{tab:bbox} \vspace{-1ex} \end{table} {\bf \noindent Real world benchmarks.} We employ six diverse real-world domains. The complete details (function definition, bounds for input variables, etc.) are in the Appendix. \vspace{-0.5ex} {\bf 1) Pressure vessel design optimization.} This mechanical design problem \cite{pressure_vessel_1,pressure_vessel_2} involves minimizing the total cost of a cylindrical pressure vessel. There are two discrete (thickness of shell and head of pressure vessel) and two continuous (inner radius and length of cylindrical section) variables. \vspace{-0.5ex} {\bf 2) Welded beam design optimization.} The goal in this material engineering domain \cite{weld_design_1,weld_design_2} is to design a welded beam while minimizing the overall cost of the fabrication. There are six variables: two discrete (type of welding configuration and bulk material of the beam) and four continuous (weld thickness, welded joint length, beam width, and thickness).
\vspace{-0.5ex} {\bf 3) Speed reducer design optimization.} In this domain from NASA \cite{speed_reducer}, the goal is to minimize the weight of a speed reducer defined over seven input variables: one discrete (number of teeth on pinion) and six continuous (face width, teeth module, lengths of shafts between bearings, and diameters of the shafts). \vspace{-0.5ex} {\bf 4) Optimizing control for robot pushing.} This is a 14-dimensional control parameter tuning problem, where a robot is trying to push objects toward a goal location \cite{EBO}. We consider a hybrid version of this problem by discretizing ten input variables corresponding to the location of the robot and the number of simulation steps. The remaining four parameters corresponding to rotation are kept continuous. \vspace{-0.5ex} {\bf 5) Calibration of environmental model.} The problem of calibration and uncertainty analysis of expensive environmental models is very important in scientific domains \cite{em_func,astudillo2019bayesian}. There are four input variables (one discrete and three continuous). \begin{figure*}[h!]
\centering \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/function_1.pdf} \label{fig:function_1}} \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/function_2.pdf} \label{fig:function_2}} \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/function_3.pdf} \label{fig:function_3}} \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/function_4.pdf} \label{fig:function_4}} \caption{Results of HyBO and state-of-the-art baselines on the bbob-mixint benchmark suite for the functions shown in Table \ref{tab:bbox}.} \label{fig:synthetic} \end{figure*} \vspace{-0.5ex} {\bf 6) Hyper-parameter optimization.} We consider hyper-parameter tuning of a neural network model on a diverse set of benchmarks \cite{nn_hpo}: five discrete (hidden layer size, activation type, batch size, type of learning rate, and whether to use early stopping or not) and three continuous (learning rate initialization, momentum parameter, and regularization coefficient) hyper-parameters. \vspace{-1.0ex} \subsection{Experimental Setup} \noindent {\bf Baseline methods.} We compare HyBO with five baselines: 1) \texttt{CoCaBO}, a state-of-the-art method \cite{cocabo}; 2) \texttt{SMAC} \cite{SMAC}; 3) \texttt{TPE} \cite{TPE}; 4) \texttt{HyBO w/o Marg}, a special case of HyBO where we do not perform marginalization over the hyper-parameters of the hybrid diffusion kernel; and 5) \texttt{Cont-BO}, which treats discrete variables as continuous and performs standard BO over continuous spaces (both modeling and acquisition function optimization). We did not include \texttt{MiVaBO} \cite{MiVaBO-IJCAI2020} as there was no publicly available implementation \cite{PC:2020}\footnote{Personal communication with the lead author.}. \noindent {\bf Configuration of algorithms and baselines.} We configure \texttt{HyBO} as follows.
We employ a uniform prior for the length-scale hyper-parameter ($\sigma$) of the RBF kernel. A horseshoe prior is used for the $\beta$ hyper-parameter of the discrete diffusion kernel (Equation \ref{eqn:final_discrete_diffusion}) and the hyper-parameters $\theta$ of the additive diffusion kernel (Equation \ref{eqn:additive_kernel_def}). We employ expected improvement \cite{EI} as the acquisition function. For acquisition function optimization, we perform iterative search over continuous and discrete sub-spaces as shown in Algorithm~\ref{alg:HyBO}. For optimizing the discrete subspace, we run local search with 20 restarts. We normalize each continuous variable to be in the range $[-1, 1]$ and employ the CMA-ES algorithm\footnote{\url{https://github.com/CMA-ES/pycma}} for optimizing the continuous subspace. We found that the results obtained by CMA-ES were not sensitive to its hyper-parameters. Specifically, we fixed the population size to 50 and the initial standard deviation to 0.1 in all our experiments. We employed the open-source Python implementations of CoCaBO\footnote{\url{https://github.com/rubinxin/CoCaBO_code}}, SMAC\footnote{\url{https://github.com/automl/SMAC3}}, and TPE\footnote{\url{https://github.com/hyperopt/hyperopt}}. All the methods are initialized with the same random hybrid structures. We replicated all experiments for 25 different random seeds and report the mean and two times the standard error in all our figures. \noindent {\bf Evaluation metric.} We use the best function value achieved after a given number of iterations (function evaluations) as the metric to evaluate all methods. The method that uncovers high-performing hybrid structures with fewer function evaluations is considered better. \vspace{-1.0ex} \subsection{Results and Discussion} \vspace{1.5ex} \begin{figure*}[h!]
\centering \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/mae_func_1.pdf} \label{fig:mae_function_1}} \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/mae_func_2.pdf} \label{fig:mae_function_2}} \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/mae_func_3.pdf} \label{fig:mae_function_3}} \subfloat[]{ \includegraphics[width=0.25\textwidth]{figures/mae_func_4.pdf} \label{fig:mae_function_4}} \caption{Results showing mean absolute test error with increasing size of the training set on the bbob-mixint synthetic benchmarks.} \label{fig:mae} \end{figure*} \begin{figure*}[h!] \centering \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/pressure_vessel.pdf} \label{fig:pressure_vessel}} \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/weld_design.pdf} \label{fig:weld_design}} \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/em_func.pdf} \label{fig:em_func}} \quad \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/speed_reducer.pdf} \label{fig:speed_reducer}} \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/push_robot.pdf} \label{fig:push_robot}} \caption{Results comparing the proposed HyBO approach with state-of-the-art baselines on multiple real-world benchmarks.} \label{fig:real_world} \end{figure*} \begin{table*}[h!]
\centering \begin{tabular}{|l | c | c | c | c | c |} \hline {\bf Dataset} & {\bf Cont-BO} & {\bf TPE} & {\bf SMAC} & {\bf CoCaBO} & {\bf HyBO} \\ \hline \hline blood\_transfusion & 76.089 (0.325) & 76.711 (0.432) & 76.658 (0.418) & 76.978 (0.455) & {\bf 77.819 (0.463)} \\ kc1 & 85.185 (0.129) & 85.637 (0.069) & 85.453 (0.087) & 85.415 (0.099) & 85.466 (0.116) \\ vehicle & 80.501 (1.120) & 80.913 (1.051) & 83.669 (1.013) & 82.882 (1.222) & {\bf 86.104 (0.894)} \\ segment & 87.253 (0.995) & 87.792 (0.537) & 89.986 (0.692) & 89.639 (0.727) & {\bf 91.433 (0.277)} \\ cnae & 95.370 (0.103) & 95.691 (0.082) & 95.605 (0.063) & 95.679 (0.108) & 95.644 (0.135) \\ jasmine & 77.317 (0.216) & 77.893 (0.071) & 77.460 (0.189) & 77.513 (0.202) & 77.121 (0.172) \\ \hline \end{tabular} \caption{Results on the task of hyper-parameter tuning of neural network models. Bold numbers signify statistical significance.} \label{tab:nn_hpo_results} \end{table*} \vspace{-1.5ex} \noindent {\bf Results on mixed-integer benchmark suite.} Figure \ref{fig:synthetic} shows the canonical results on the four benchmarks from \texttt{bbob-mixint} listed in Table \ref{tab:bbox}, noting that all results show similar trends. HyBO and its variant HyBO w/o Marg perform significantly better and converge much faster than all the other baselines. One key reason for this behavior is that the hybrid diffusion kernel accounts for higher-order interactions between variables. Cont-BO performs the worst among all the methods. This shows that simply treating discrete variables as continuous is sub-optimal and emphasizes the importance of modeling the structure in discrete variables. \begin{table*}[h!]
\centering \begin{tabular}{|l | c | c | c | c |} \hline {\bf Benchmark} & {\bf TPE} & {\bf SMAC} & {\bf CoCaBO} & {\bf HyBO} \\ \hline Synthetic Function 1 & 0.012 & 2.34 & 2.30 & 50 \\ Synthetic Function 2 & 0.012 & 0.98 & 1.31 & 50 \\ Synthetic Function 3 & 0.026 & 2.99 & 3.18 & 180 \\ Synthetic Function 4 & 0.026 & 1.98 & 2.96 & 180 \\ Pressure Vessel Design & 0.003 & 0.34 & 0.85 & 20 \\ Welded Beam Design & 0.004 & 0.64 & 1.02 & 40 \\ Speed Reducer Design & 0.006 & 1.38 & 0.94 & 40 \\ Push Robot & 0.017 & 1.94 & 1.70 & 90 \\ Environment model & 0.005 & 0.31 & 0.50 & 40 \\ \hline \end{tabular} \caption{Computational cost in average wall-clock time (seconds) per BO iteration.} \label{tab:computational_cost} \end{table*} \vspace{-1ex} \noindent {\bf Ablation results for statistical models.} To understand the reasons for the better performance of HyBO, we compare the performance of its surrogate model based on hybrid diffusion kernels with those of CoCaBO and SMAC. We perform the following experiment. We constructed a testing dataset (pairs of hybrid structures and their function evaluations) of size 200 via uniform random sampling. We compute the mean absolute error (MAE) of the three surrogate models as a function of training set size. The results are shown in Figure \ref{fig:mae}, which depicts the mean and two times the standard error of the MAE over 25 random testing datasets. HyBO clearly has much lower error than CoCaBO and SMAC on Functions 1 and 2. Although HyBO has MAE similar to CoCaBO in the beginning on Functions 3 and 4, its error rapidly decreases as the training set size increases, which is not the case for the other two methods. This experiment provides strong empirical evidence for the fact that the proposed surrogate model in HyBO can model hybrid spaces more accurately than CoCaBO and SMAC.
\noindent {\bf Ablation results for marginalization in HyBO.} Bayesian treatment of hyper-parameters (marginalization) is one key component of our proposed HyBO method. However, to decouple the efficacy of the additive diffusion kernel from the usage of marginalization, we performed experiments using HyBO without marginalization (HyBO w/o Marg in the figures). As evident from Figure \ref{fig:synthetic}, HyBO w/o Marg finds better solutions than all the baselines, albeit with slower convergence, which is improved by adding marginalization. \noindent {\bf Results for real-world domains.} Figure \ref{fig:real_world} shows the comparison of the HyBO approach with baseline methods on all real-world domains except hyper-parameter optimization. We make the following observations. 1) HyBO consistently performs better than all the baselines on all these benchmarks. 2) Even on benchmarks such as speed reducer design and welded beam design, where HyBO finds a similar solution as CoCaBO, it does so with much faster convergence. 3) CoCaBO performs reasonably well on these benchmarks, but its performance is worse than HyBO, demonstrating that its sum kernel (along with the Hamming kernel for discrete spaces) is less powerful than the hybrid diffusion kernel of HyBO. 4) TPE has the worst performance on most benchmarks, likely a direct result of its inability to model the interactions between input dimensions. 5) SMAC performs poorly on all the benchmarks, potentially due to poor uncertainty estimates from its random forest surrogate model. Table \ref{tab:nn_hpo_results} shows the final accuracy (mean and standard error) obtained by all methods including HyBO on the task of tuning neural network models for six different datasets (BO curves are similar for all methods). HyBO produces comparable or better results than the baseline methods. \noindent {\bf Computational cost analysis.} We compare the runtime of different algorithms including HyBO. All experiments were run on an AMD EPYC 7451 24-core machine.
Table \ref{tab:computational_cost} shows the average wall-clock time (in seconds) per BO iteration. We can see that HyBO is relatively expensive when compared to the baseline methods. However, for real-world science and engineering applications, minimizing the cost of physical resources to perform evaluations (e.g., conducting an additive manufacturing experiment for designing materials such as alloys) is the most important metric. The computational cost for selecting inputs for evaluation is a secondary concern. HyBO uses more time to select inputs for evaluation in order to minimize the number of function evaluations needed to uncover better structures. We provide a finer analysis of the HyBO runtime in Table \ref{tab:orders_of_interaction_time}. The kernel evaluation time, even with all orders of interaction, is very small. The overall runtime is dominated by two components: a) sampling from the posterior distributions of hyper-parameters using slice sampling; and b) acquisition function optimization (AFO) using CMA-ES + local search. We can reduce the sampling time by considering HyBO without marginalization, which shows slightly worse performance but takes only 10 percent of the sampling time of HyBO. \begin{table}[h!] \centering \begin{tabular}{|l | c | c | c| c|} \hline {\bf \specialcell{Orders of\\ interaction}} & {\bf \specialcell{ HyBO \\ iteration}} & {\bf \specialcell{AFO}} & {\bf \specialcell{Sampling}} & {\bf \specialcell{Kernel \\ eval.}} \\ \hline 2 & 62 & 46 & 16 & 0.005 \\ 5 & 68 & 50 & 18 & 0.006 \\ 10 & 102 & 68 & 34 & 0.010 \\ 20 (HyBO) & 180 & 114 & 66 & 0.020 \\ \hline \end{tabular} \vspace{-2ex} \caption{Average runtime (seconds) for different orders of interaction within the hybrid kernel for synthetic Function 3.} \label{tab:orders_of_interaction_time} \end{table} \section{Appendix} In this section, we illustrate the additive hybrid diffusion kernel (Equation \ref{eqn:additive_diff_kernel}) by providing a running example.
\subsection{Running example for additive hybrid diffusion kernel} We illustrate the additive hybrid diffusion kernel and its recursive computation using a 3-dimensional hybrid space, where the first two dimensions correspond to the discrete subspace and the last dimension corresponds to the continuous subspace. Let $k_1, k_2, k_3$ be the base kernels for the first, second, and third dimensions, respectively. Each order of interaction $\mathcal{K}_p$ is the $p$th elementary symmetric polynomial $e_p$ of the base kernels weighted by $\theta_p^2$, and the $e_p$ terms can be computed step-wise from the power sums $\mathcal{S}_p$ via the Newton-Girard recursion as shown below: \begin{align*} \mathcal{K}_1 &= \theta_1^2 \cdot (k_1 + k_2 + k_3), \hspace{13mm} \textcolor{red}{\mathcal{S}_1 = (k_1 + k_2 + k_3)}\\ \mathcal{K}_2 &= \theta_2^2 \cdot (k_1 k_2 + k_1 k_3 + k_2 k_3), \hspace{2.5mm} \textcolor{red}{\mathcal{S}_2 =(k_1^2 + k_2^2 + k_3^2)}\\ \mathcal{K}_3 &= \theta_3^2 \cdot (k_1 k_2 k_3), \hspace{21mm} \textcolor{red}{\mathcal{S}_3 = (k_1^3 + k_2^3 + k_3^3)}\\ e_0 &= 1; \\ e_1 &= \textcolor{red}{\mathcal{S}_1}; \\ e_2 &= \frac{1}{2}\left(e_1 \cdot \textcolor{red}{\mathcal{S}_1} - e_0 \cdot \textcolor{red}{\mathcal{S}_2}\right); \\ e_3 &= \frac{1}{3}\left(e_2 \cdot \textcolor{red}{\mathcal{S}_1} - e_1 \cdot \textcolor{red}{\mathcal{S}_2} + e_0 \cdot \textcolor{red}{\mathcal{S}_3}\right); \\ \mathcal{K}_{HYB} &= \theta_1^2 \, e_1 + \theta_2^2 \, e_2 + \theta_3^2 \, e_3 = \mathcal{K}_1 + \mathcal{K}_2 + \mathcal{K}_3 \end{align*} \section{Additional Experimental Details} \subsection{Real world benchmarks} {\bf 1) Pressure vessel design optimization.} The objective function (cost of cylindrical pressure vessel design) $\mathcal{F}(x)$ for this domain is given below: \begin{align} \begin{split} \min_{\{x_1, x_2, x_3, x_4\}}& 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + \\ & 3.1661 x_1^2x_4 + 19.84 x_1^2 x_3 \end{split} \end{align} where $x_1, x_2$ are discrete variables (thickness of shell and head of pressure vessel) lying in $\{1, \cdots, 100\}$ and $x_3 \in [10, 200], x_4 \in [10, 240]$ are continuous variables (inner radius and length of cylindrical section).
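For concreteness, the pressure vessel cost function above can be written directly in code; the sketch below (the function name is ours) evaluates it at one hybrid candidate.

```python
def pressure_vessel_cost(x1, x2, x3, x4):
    # x1, x2: discrete thicknesses of shell and head, in {1, ..., 100}
    # x3 in [10, 200], x4 in [10, 240]: inner radius and cylinder length
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

# one hybrid candidate: (x1, x2) discrete, (x3, x4) continuous
cost = pressure_vessel_cost(1, 1, 10.0, 10.0)   # 62.24 + 177.81 + 31.661 + 198.4
```

In the BO loop, $\mathcal{F}$ would be this function (negated for maximization) queried at candidates proposed by the acquisition optimizer.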
{\bf 2) Welded beam design optimization.} The objective function (cost of fabricating the welded beam) $\mathcal{F}(x)$ for this domain is: \begin{align} \min_{\{x_1, x_2, x_3, x_4, x_5, x_6\}} (1+G_1) (x_1 x_5 + x_4) x_3^2 + G_2 x_5 x_6 (L + x_4) \end{align} where $x_1 \in \{0, 1\}, x_2 \in \{0, 1, 2, 3\}$ are discrete variables, $x_3 \in [0.0625, 2], x_4 \in [0, 20], x_5 \in [2, 20], x_6 \in [0.0625, 2]$ are continuous variables, $G_1$ is the cost per volume of the welded material, and $G_2$ is the cost per volume of the bar stock. The constants ($G_1, G_2, L$), which depend on the second discrete variable $x_2$, are given in \cite{weld_design_1,weld_design_2}. {\bf 3) Speed reducer design optimization.} The objective function (weight of the speed reducer) $\mathcal{F}(x)$ for this domain is: \begin{align} \begin{split} &\min_{\{x_1, x_2, x_3, x_4, x_5, x_6, x_7\}} 0.79 x_2 x_3^2(3.33 x_1^3 + 14.93 x_1 - 43.09) \\ & -1.51 x_2 (x_6^2 + x_7^2) + 7.48 (x_6^3 + x_7^3) + 0.79 (x_4 x_6^2 + x_5 x_7^2) \end{split} \end{align} where $x_1 \in \{17, 18, \cdots, 28\}$ represents the discrete variable (number of teeth on pinion), and $x_2 \in [2.6, 3.6], x_3 \in [0.7, 0.8], x_4 \in [7.3, 8.3], x_5 \in [0.7, 0.8], x_6 \in [2.9, 3.9], x_7 \in [5, 5.5]$ represent the continuous variables (face width, teeth module, lengths of shafts between bearings, and diameters of the shafts, respectively). The above three benchmarks are usually described with {\em known} constraints in a declarative manner. However, for simplicity, we consider their unconstrained versions for evaluation in this paper. If required, since the constraints are known, we can easily avoid searching for invalid solutions by using an appropriate acquisition function optimizer within HyBO. {\bf 4) Optimizing control for robot pushing.} The implementation of this domain was taken from the Ensemble Bayesian Optimization repository\footnote{\url{https://github.com/zi-w/Ensemble-Bayesian-Optimization/tree/master/test_functions}}.
We consider a hybrid version of this problem by discretizing the location parameters ($x_1, x_2, x_3, x_4 \in \{-5, -4, \cdots, 5\}$ and $x_5, x_6, x_7, x_8 \in \{-10, -9, \cdots, 10\}$). There are two other discrete variables corresponding to the number of simulation steps, $x_9, x_{10} \in \{2, 3, 4, \cdots, 30\}$, and two continuous variables $x_{11}, x_{12}$ lying in $[0, 2\pi]$. {\bf 5) Calibration of environmental model.} The details of the objective function for this domain are available in \cite{em_func, astudillo2019bayesian}. The single discrete variable has 284 candidate values lying in the set $\{30.01, 30.02, \cdots, 30.285\}$. There are three continuous variables lying in the ranges $x_2 \in [7, 13], x_3 \in [0.02, 0.12], x_4 \in [0.01, 3]$. {\bf 6) Hyper-parameter optimization.} The type and range of the hyper-parameters considered in this domain are given in Table \ref{tab:hyper_ranges}. We employed the scikit-learn \cite{scikit-learn} neural network implementation for this benchmark. \begin{table*}[t!] \centering \begin{tabular}{|c|c|c|} \hline Hyperparameter & Type & Range \\ \hline \hline Hidden layer size & Discrete & $\{40, 60, \cdots, 300\}$\\ Type of activation & Discrete & $\{\text{'identity'}, \text{'logistic'}, \text{'tanh'}, \text{'relu'}\}$\\ Batch size & Discrete & $\{40, 60, \cdots, 200\}$ \\ Type of learning rate & Discrete & $\{\text{'constant'}, \text{'invscaling'}, \text{'adaptive'}\}$ \\ Early stopping & Discrete & $\{\text{True}, \text{False}\}$ \\ Learning rate initialization & Continuous & $[0.001, 1]$ \\ Momentum & Continuous & $[0.5, 1]$ \\ Alpha parameter & Continuous & $[0.0001, 1]$ \\ \hline \end{tabular} \caption{Type and range of hyper-parameters considered for the HPO benchmark.} \label{tab:hyper_ranges} \end{table*} \begin{figure*}[h!]
\centering \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/pressure_vessel_cont.pdf} \label{fig:pressure_vessel}} \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/weld_design_cont.pdf} \label{fig:weld_design}} \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/em_func_cont.pdf} \label{fig:em_func}} \quad \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/speed_reducer_cont.pdf} \label{fig:speed_reducer}} \subfloat[]{ \includegraphics[width=0.30\textwidth]{figures/push_robot_cont.pdf} \label{fig:push_robot}} \includegraphics[width=0.30\textwidth]{figures/hybo_legend.png} \caption{Results comparing the proposed HyBO approach with state-of-the-art baselines on multiple real-world benchmarks. These figures additionally include results for HyBO without marginalization and Cont-BO.} \label{fig:real_world_appendix} \end{figure*} \section{Additional Results} \noindent {\bf Results for real-world benchmarks.} Figure \ref{fig:real_world_appendix} extends the plots of Figure \ref{fig:real_world} by including the performance of Cont-BO and HyBO w/o Marg on the real-world benchmarks. The results show a similar trend: Cont-BO performs worse than all other methods, highlighting the need to account for the hybrid input structure. Moreover, the performance of HyBO w/o Marg remains similar to that of HyBO (except on calibration of the environmental model), demonstrating the modeling strength of the additive hybrid diffusion kernel. \begin{table}[h!]
\centering \begin{tabular}{|l | c | c | c|} \hline {\bf Benchmark} & {\bf HyBO} & {\bf G-M et al.} & {\bf Vanilla BO} \\ \hline Synthetic Function 1 & 79.7 & 99.4 & 86.2 \\ Synthetic Function 2 & 394.6 & 420 & 407 \\ Synthetic Function 3 & 81.1 & 143 & 135 \\ Synthetic Function 4 & 395.2 & 458 & 456.8 \\ \hline \end{tabular} \caption{Results for additional baseline experiments.} \label{tab:more_baselines} \end{table} \noindent{\bf Comparison with \cite{lobato}.} As mentioned in our related work, this is an interesting approach for BO over discrete spaces, but it is restricted to discrete spaces alone. Since our problem setting considers hybrid input spaces, we performed experiments using this method for the discrete part and the standard BO approach for the continuous part, combined with HyBO's AFO procedure. Results of this approach (referred to as G-M et al.) on the 4 synthetic benchmarks are shown in Table \ref{tab:more_baselines}: the best function value achieved after 200 iterations, averaged over 25 different runs (same configuration as described in the main paper). We also add another baseline named Vanilla BO (GP with RBF kernel to model the hybrid space + HyBO's AFO procedure) in Table \ref{tab:more_baselines}. It is evident from the results that HyBO performs significantly better. \section{Conclusions} We studied a novel Bayesian optimization approach, referred to as HyBO, for optimizing hybrid spaces using Gaussian process based surrogate models. We presented a principled approach to construct hybrid diffusion kernels by combining diffusion kernels defined over continuous and discrete sub-spaces in a tractable and flexible manner to capture the interactions between discrete and continuous variables. We proved that additive hybrid kernels have the universal approximation property. Our experimental results on diverse synthetic and real-world benchmarks show that HyBO performs significantly better than state-of-the-art methods.
\noindent {\bf Acknowledgements.} This research is supported by NSF grants IIS-1845922, OAC-1910213, and CNS-1955353.